Dataset schema:
id: string (length 9 to 16)
title: string (length 4 to 278)
abstract: string (length 3 to 4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each), one flag per category label
__index_level_0__: int64 (range 0 to 541k)
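To make the row layout above concrete, here is a minimal Python sketch of how a record can be decoded under this schema; the helper name `active_categories` and the example record (mirroring the first row below) are illustrative assumptions, not part of the dataset release.

```python
# Category columns, in the order they appear in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_categories(row: dict) -> list[str]:
    """Return the category labels whose boolean flag is set for this row."""
    return [col for col in CATEGORY_COLUMNS if row.get(col)]

# Hypothetical record mirroring the first row shown below; flags omitted here
# are treated as False by row.get().
example_row = {
    "id": "2005.01862",
    "title": "Complex Amplitude-Phase Boltzmann Machines",
    "cs.LG": True,
    "cs.NE": True,
    "__index_level_0__": 175693,
}

print(active_categories(example_row))  # ['cs.LG', 'cs.NE']
```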
2005.01862
Complex Amplitude-Phase Boltzmann Machines
We extend the framework of Boltzmann machines to a network of complex-valued neurons with variable amplitudes, referred to as Complex Amplitude-Phase Boltzmann machine (CAP-BM). The model is capable of performing unsupervised learning on the amplitude and relative phase distribution in complex data. The sampling rule of the Gibbs distribution and the learning rules of the model are presented. Learning in a Complex Amplitude-Phase restricted Boltzmann machine (CAP-RBM) is demonstrated on synthetic complex-valued images, and handwritten MNIST digits transformed by a complex wavelet transform. Specifically, we show the necessity of a new amplitude-amplitude coupling term in our model. The proposed model is potentially valuable for machine learning tasks involving complex-valued data with amplitude variation, and for developing algorithms for novel computation hardware, such as coupled oscillators and neuromorphic hardware, on which Boltzmann sampling can be executed in the complex domain.
Categories: cs.LG, cs.NE | __index_level_0__: 175,693
2409.16465
Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion
We propose a standalone monocular visual Simultaneous Localization and Mapping (vSLAM) initialization pipeline for autonomous space robots. Our method, a state-of-the-art factor graph optimization pipeline, extends Structure from Small Motion (SfSM) to robustly initialize a monocular agent in spacecraft inspection trajectories, addressing visual estimation challenges such as weak-perspective projection and center-pointing motion, which exacerbate the bas-relief ambiguity; dominant planar geometry, which causes motion estimation degeneracies in classical Structure from Motion; and dynamic illumination conditions, which reduce the survivability of visual information. We validate our approach on realistic, simulated satellite inspection image sequences with a tumbling spacecraft and demonstrate the method's effectiveness over existing monocular initialization procedures.
Categories: cs.RO, cs.CV | __index_level_0__: 491,352
2405.04086
Optimizing Language Model's Reasoning Abilities with Weak Supervision
While Large Language Models (LLMs) have demonstrated proficiency in handling complex queries, much of the past work has depended on datasets extensively annotated by human experts. However, this reliance on fully-supervised annotations poses scalability challenges, particularly as models and data requirements grow. To mitigate this, we explore the potential of enhancing LLMs' reasoning abilities with minimal human supervision. In this work, we introduce self-reinforcement, which begins with Supervised Fine-Tuning (SFT) of the model using a small collection of annotated questions. It then iteratively improves LLMs by learning from the differences in responses between the SFT and unfinetuned models on unlabeled questions. This provides an efficient approach without relying heavily on extensive human-annotated explanations. However, current reasoning benchmarks typically include only golden-reference answers or rationales. Therefore, we present \textsc{PuzzleBen}, a weakly supervised benchmark that comprises 25,147 complex questions, answers, and human-generated rationales across various domains, such as brainteasers, puzzles, riddles, parajumbles, and critical reasoning tasks. A unique aspect of our dataset is the inclusion of 10,000 unannotated questions, enabling us to explore utilizing less supervised data to boost LLMs' inference capabilities. Our experiments underscore the significance of \textsc{PuzzleBen}, as well as the effectiveness of our methodology as a promising direction for future work. Our dataset and code will be published soon on \texttt{Anonymity Link}.
Categories: cs.CL | __index_level_0__: 452,440
1402.0570
A Feature Subset Selection Algorithm Automatic Recommendation Method
Many feature subset selection (FSS) algorithms have been proposed, but not all of them are appropriate for a given feature selection problem. At the same time, so far there is rarely a good way to choose appropriate FSS algorithms for the problem at hand. Thus, FSS algorithm automatic recommendation is very important and practically useful. In this paper, a meta learning based FSS algorithm automatic recommendation method is presented. The proposed method first identifies the data sets that are most similar to the one at hand by the k-nearest neighbor classification algorithm, and the distances among these data sets are calculated based on the commonly-used data set characteristics. Then, it ranks all the candidate FSS algorithms according to their performance on these similar data sets, and chooses the algorithms with best performance as the appropriate ones. The performance of the candidate FSS algorithms is evaluated by a multi-criteria metric that takes into account not only the classification accuracy over the selected features, but also the runtime of feature selection and the number of selected features. The proposed recommendation method is extensively tested on 115 real world data sets with 22 well-known and frequently-used different FSS algorithms for five representative classifiers. The results show the effectiveness of our proposed FSS algorithm recommendation method.
Categories: cs.LG | __index_level_0__: 30,585
2405.03188
Hyperbolic Geometric Latent Diffusion Model for Graph Generation
Diffusion models have made significant contributions to computer vision, sparking growing interest in the community in applying them to graph generation. Existing discrete graph diffusion models exhibit heightened computational complexity and diminished training efficiency. A preferable and natural way is to diffuse the graph directly within the latent space. However, because the non-Euclidean structure of graphs is not isotropic in the latent space, existing latent diffusion models struggle to capture and preserve the topological information of graphs. To address these challenges, we propose a novel geometric latent diffusion framework, HypDiff. Specifically, we first establish a geometric latent space with interpretability measures based on hyperbolic geometry, to define anisotropic latent diffusion processes for graphs. Then, we propose a geometric latent diffusion process that is constrained by both radial and angular geometric properties, thereby ensuring the preservation of the original topological properties in the generated graphs. Extensive experimental results demonstrate the superior effectiveness of HypDiff for graph generation with various topologies.
Categories: cs.LG | __index_level_0__: 452,085
2312.16750
Bayesian Sensor Placement for Multi-source Localization of Pathogens in Wastewater Networks
Wastewater monitoring is an effective approach for the early detection of viral and bacterial disease outbreaks. It has recently been used to identify the presence of individuals infected with COVID-19. To monitor large communities and accurately localize buildings with infected individuals with a limited number of sensors, one must carefully choose the sampling locations in wastewater networks. We also have to account for concentration requirements on the collected wastewater samples to ensure reliable virus presence test results. We model this as a sensor placement problem. Although sensor placement for source localization arises in numerous problems, most approaches use application-specific heuristics and fail to consider multiple source scenarios. To address these limitations, we develop a novel approach that combines Bayesian networks and discrete optimization to efficiently identify informative sensor placements and accurately localize virus sources. Our approach also takes into account concentration requirements on wastewater samples during optimization. Our simulation experiments demonstrate the quality of our sensor placements and the accuracy of our source localization approach. Furthermore, we show the robustness of our approach to discrepancies between the virus outbreak model and the actual outbreak rates.
Categories: cs.CE, cs.SI | __index_level_0__: 418,493
2410.18583
Benchmarking Graph Learning for Drug-Drug Interaction Prediction
Predicting drug-drug interaction (DDI) plays an important role in pharmacology and healthcare for identifying potential adverse interactions and beneficial combination therapies between drug pairs. Recently, a flurry of graph learning methods have been introduced to predict drug-drug interactions. However, evaluating existing methods has several limitations, such as the absence of a unified comparison framework for DDI prediction methods, lack of assessments in meaningful real-world scenarios, and insufficient exploration of side information usage. In order to address these unresolved limitations in the literature, we propose a DDI prediction benchmark on graph learning. We first conduct a unified evaluation comparison among existing methods. To meet realistic scenarios, we further evaluate the performance of different methods in settings with new drugs involved and examine the performance across different DDI types. Component analysis is conducted on the biomedical network to better utilize side information. Through this work, we hope to provide more insights for the problem of DDI prediction. Our implementation and data are open-sourced at https://anonymous.4open.science/r/DDI-Benchmark-ACD9/.
Categories: cs.LG | __index_level_0__: 501,948
2208.09618
Fully Automated End-to-End Fake Audio Detection
The existing fake audio detection systems often rely on expert experience to design the acoustic features or manually design the hyperparameters of the network structure. However, artificial adjustment of the parameters can have a relatively obvious influence on the results. It is almost impossible to manually set the best set of parameters. Therefore, this paper proposes a fully automated end-to-end fake audio detection method. We first use the wav2vec pre-trained model to obtain a high-level representation of the speech. Furthermore, for the network structure, we use a modified version of the differentiable architecture search (DARTS) named light-DARTS. It learns deep speech representations while automatically learning and optimizing complex neural structures consisting of convolutional operations and residual blocks. The experimental results on the ASVspoof 2019 LA dataset show that our proposed system achieves an equal error rate (EER) of 1.08%, which outperforms the state-of-the-art single system.
Categories: cs.SD, cs.AI | __index_level_0__: 313,765
2501.07487
Data and System Perspectives of Sustainable Artificial Intelligence
Sustainable AI is a subfield of AI concerned with developing and using AI systems in ways that aim to reduce environmental impact and achieve sustainability. Sustainable AI is increasingly important given that training of and inference with AI models such as large language models consume a large amount of computing power. In this article, we discuss current issues, opportunities and example solutions for addressing these issues, as well as future challenges to tackle, from the data and system perspectives, related to data acquisition, data processing, and AI model training and inference.
Categories: cs.AI | __index_level_0__: 524,413
2105.10882
Weakly-supervised 3D Human Pose Estimation with Cross-view U-shaped Graph Convolutional Network
Although monocular 3D human pose estimation methods have made significant progress, it is far from being solved due to the inherent depth ambiguity. Instead, exploiting multi-view information is a practical way to achieve absolute 3D human pose estimation. In this paper, we propose a simple yet effective pipeline for weakly-supervised cross-view 3D human pose estimation. By only using two camera views, our method can achieve state-of-the-art performance in a weakly-supervised manner, requiring no 3D ground truth but only 2D annotations. Specifically, our method contains two steps: triangulation and refinement. First, given the 2D keypoints that can be obtained through any classic 2D detection methods, triangulation is performed across two views to lift the 2D keypoints into coarse 3D poses. Then, a novel cross-view U-shaped graph convolutional network (CV-UGCN), which can explore the spatial configurations and cross-view correlations, is designed to refine the coarse 3D poses. In particular, the refinement progress is achieved through weakly-supervised learning, in which geometric and structure-aware consistency checks are performed. We evaluate our method on the standard benchmark dataset, Human3.6M. The Mean Per Joint Position Error on the benchmark dataset is 27.4 mm, which outperforms existing state-of-the-art methods remarkably (27.4 mm vs 30.2 mm).
Categories: cs.CV | __index_level_0__: 236,528
2002.03399
Two-Stream Aural-Visual Affect Analysis in the Wild
Human affect recognition is an essential part of natural human-computer interaction. However, current methods are still in their infancy, especially for in-the-wild data. In this work, we introduce our submission to the Affective Behavior Analysis in-the-wild (ABAW) 2020 competition. We propose a two-stream aural-visual analysis model to recognize affective behavior from videos. Audio and image streams are first processed separately and fed into a convolutional neural network. Instead of applying recurrent architectures for temporal analysis we only use temporal convolutions. Furthermore, the model is given access to additional features extracted during face-alignment. At training time, we exploit correlations between different emotion representations to improve performance. Our model achieves promising results on the challenging Aff-Wild2 database.
Categories: cs.LG, cs.CV | __index_level_0__: 163,252
2101.01444
CycleGAN for Interpretable Online EMT Compensation
Purpose: Electromagnetic Tracking (EMT) can partially replace X-ray guidance in minimally invasive procedures, reducing radiation in the OR. However, in this hybrid setting, EMT is disturbed by metallic distortion caused by the X-ray device. We plan to make hybrid navigation clinical reality to reduce radiation exposure for patients and surgeons, by compensating EMT error. Methods: Our online compensation strategy exploits cycle-consistent generative adversarial neural networks (CycleGAN). 3D positions are translated from various bedside environments to their bench equivalents. Domain-translated points are fine-tuned to reduce error in the bench domain. We evaluate our compensation approach in a phantom experiment. Results: Since the domain-translation approach maps distorted points to their lab equivalents, predictions are consistent among different C-arm environments. Error is successfully reduced in all evaluation environments. Our qualitative phantom experiment demonstrates that our approach generalizes well to an unseen C-arm environment. Conclusion: Adversarial, cycle-consistent training is an explicable, consistent and thus interpretable approach for online error compensation. Qualitative assessment of EMT error compensation gives a glimpse to the potential of our method for rotational error compensation.
Categories: cs.CV | __index_level_0__: 214,373
2402.01985
A Discrete-time Dynamical Model for Optimal Dispatching and Rebalancing of Autonomous Mobility-on-Demand Systems
Autonomous vehicles are rapidly evolving and will soon enable the application of large-scale mobility-on-demand (MoD) systems. Managing the fleets of available vehicles, commonly known as "rebalancing," is crucial to ensure that vehicles are distributed properly to meet customer demands. This paper presents an optimal control approach to optimize vehicle scheduling and rebalancing in an autonomous mobility-on-demand (AMoD) system. We use graph theory to model a city partitioned into virtual zones. Zones represent small areas of the city where vehicles can stop and pick up/drop off customers, whereas links denote corridors of the city along which autonomous vehicles can move. They are considered vertices and edges in the graph. Vehicles employed in the AMoD scheme are autonomous, and rebalancing can be executed by dispatching available empty vehicles to areas undersupplied. Rebalancing is performed on the graph's vertices, i.e., between city areas. We propose a linear, discrete-time model of an AMoD system using a transformed network. After acquiring the model, the desired number of rebalancing vehicles for the AMoD model is derived through an optimization problem. Moreover, the well-posedness of the model is illustrated. To leverage the proposed model, we implemented the model predictive control (MPC) framework to find the optimal rebalancing and scheduling policy. We show the MPC's effectiveness and how the MPC framework can be implemented in real-time for a real-world case study. The numerical results show that the MPC with a linear cost function and linear reference, which it tracks, is effective, outperforming other MPC-based and state-of-the-art algorithms across all evaluation criteria.
Categories: cs.SY | __index_level_0__: 426,317
1203.0617
Bayesian inference under differential privacy
Bayesian inference is an important technique throughout statistics. The essence of Bayesian inference is to derive the posterior belief updated from the prior belief by the learned information, which is a set of differentially private answers under differential privacy. Although Bayesian inference can be used in a variety of applications, it becomes theoretically hard to solve when the number of differentially private answers is large. To facilitate Bayesian inference under differential privacy, this paper proposes a systematic mechanism. The key step of the mechanism is the implementation of Bayesian updating with the best linear unbiased estimator derived from the Gauss-Markov theorem. In addition, we also apply the proposed inference mechanism to an online query-answering system, the novelty of which is that the utility for users is guaranteed by Bayesian inference in the form of a credible interval and confidence level. Theoretical and experimental analyses are presented to demonstrate the efficiency and effectiveness of both the inference mechanism and the online query-answering system.
Categories: cs.DB | __index_level_0__: 14,701
1408.0592
Decoy state measurement-device-independent quantum key distribution based on the Clauser-Horne-Shimony-Holt inequality
The measurement-device-independent quantum key distribution (MDI-QKD) protocol is proposed to remove the detector side channel attacks, while its security relies on the assumption that the encoding systems are perfectly characterized. In contrast, the MDI-QKD protocol based on the Clauser-Horne-Shimony-Holt inequality (CHSH-MDI-QKD) weakens this assumption, which only requires the quantum state to be prepared in the two-dimensional Hilbert space and the devices are independent. In experimental realizations, the weak coherent state, which is always used in QKD systems due to the lack of an ideal single photon source, may be prepared in the high-dimensional space. In this paper, we investigate the decoy-state CHSH-MDI-QKD protocol with $s(3 \le s \le 5)$ intensities, including one signal state and $s-1$ decoy states, and we also consider the finite-size effect on the decoy-state CHSH-MDI-QKD protocol with five intensities. Simulation results show that this scheme is very practical.
Categories: cs.IT | __index_level_0__: 35,099
2108.02982
Improving Contrastive Learning by Visualizing Feature Transformation
Contrastive learning, which aims at minimizing the distance between positive pairs while maximizing that of negative ones, has been widely and successfully applied in unsupervised feature learning, where the design of positive and negative (pos/neg) pairs is one of its keys. In this paper, we attempt to devise a feature-level data manipulation, differing from data augmentation, to enhance the generic contrastive self-supervised learning. To this end, we first design a visualization scheme for pos/neg score (Pos/neg score indicates cosine similarity of pos/neg pair.) distribution, which enables us to analyze, interpret and understand the learning process. To our knowledge, this is the first attempt of its kind. More importantly, leveraging this tool, we gain some significant observations, which inspire our novel Feature Transformation proposals including the extrapolation of positives. This operation creates harder positives to boost the learning because hard positives enable the model to be more view-invariant. Besides, we propose the interpolation among negatives, which provides diversified negatives and makes the model more discriminative. It is the first attempt to deal with both challenges simultaneously. Experiment results show that our proposed Feature Transformation can improve at least 6.0% accuracy on ImageNet-100 over MoCo baseline, and about 2.0% accuracy on ImageNet-1K over the MoCoV2 baseline. Transferring to the downstream tasks successfully demonstrate our model is less task-bias. Visualization tools and codes https://github.com/DTennant/CL-Visualizing-Feature-Transformation .
Categories: cs.CV | __index_level_0__: 249,518
2104.00824
Tusom2021: A Phonetically Transcribed Speech Dataset from an Endangered Language for Universal Phone Recognition Experiments
There is growing interest in ASR systems that can recognize phones in a language-independent fashion. There is additionally interest in building language technologies for low-resource and endangered languages. However, there is a paucity of realistic data that can be used to test such systems and technologies. This paper presents a publicly available, phonetically transcribed corpus of 2255 utterances (words and short phrases) in the endangered Tangkhulic language East Tusom (no ISO 639-3 code), a Tibeto-Burman language variety spoken mostly in India. Because the dataset is transcribed in terms of phones, rather than phonemes, it is a better match for universal phone recognition systems than many larger (phonemically transcribed) datasets. This paper describes the dataset and the methodology used to produce it. It further presents basic benchmarks of state-of-the-art universal phone recognition systems on the dataset as baselines for future experiments.
Categories: cs.SD, cs.CL | __index_level_0__: 228,133
1306.2290
Asymptotically Optimal Sequential Estimation of the Mean Based on Inclusion Principle
A large class of problems in sciences and engineering can be formulated as the general problem of constructing random intervals with pre-specified coverage probabilities for the mean. We propose a general approach for statistical inference of mean values based on accumulated observational data. We show that the construction of such random intervals can be accomplished by comparing the endpoints of random intervals with confidence sequences for the mean. Asymptotic results are obtained for such sequential methods.
Categories: cs.LG | __index_level_0__: 25,113
2307.15020
SuperCLUE: A Comprehensive Chinese Large Language Model Benchmark
Large language models (LLMs) have shown the potential to be integrated into human daily lives. Therefore, user preference is the most critical criterion for assessing LLMs' performance in real-world scenarios. However, existing benchmarks mainly focus on measuring models' accuracy using multi-choice questions, which limits the understanding of their capabilities in real applications. We fill this gap by proposing a comprehensive Chinese benchmark SuperCLUE, named after another popular Chinese LLM benchmark CLUE. SuperCLUE encompasses three sub-tasks: actual users' queries and ratings derived from an LLM battle platform (CArena), open-ended questions with single and multiple-turn dialogues (OPEN), and closed-ended questions with the same stems as open-ended single-turn ones (CLOSE). Our study shows that accuracy on closed-ended questions is insufficient to reflect human preferences achieved on open-ended ones. At the same time, they can complement each other to predict actual user preferences. We also demonstrate that GPT-4 is a reliable judge to automatically evaluate human preferences on open-ended questions in a Chinese context. Our benchmark will be released at https://www.CLUEbenchmarks.com
Categories: cs.AI, cs.CL | __index_level_0__: 382,126
2404.00149
VSRD: Instance-Aware Volumetric Silhouette Rendering for Weakly Supervised 3D Object Detection
Monocular 3D object detection poses a significant challenge in 3D scene understanding due to its inherently ill-posed nature in monocular depth estimation. Existing methods heavily rely on supervised learning using abundant 3D labels, typically obtained through expensive and labor-intensive annotation on LiDAR point clouds. To tackle this problem, we propose a novel weakly supervised 3D object detection framework named VSRD (Volumetric Silhouette Rendering for Detection) to train 3D object detectors without any 3D supervision but only weak 2D supervision. VSRD consists of multi-view 3D auto-labeling and subsequent training of monocular 3D object detectors using the pseudo labels generated in the auto-labeling stage. In the auto-labeling stage, we represent the surface of each instance as a signed distance field (SDF) and render its silhouette as an instance mask through our proposed instance-aware volumetric silhouette rendering. To directly optimize the 3D bounding boxes through rendering, we decompose the SDF of each instance into the SDF of a cuboid and the residual distance field (RDF) that represents the residual from the cuboid. This mechanism enables us to optimize the 3D bounding boxes in an end-to-end manner by comparing the rendered instance masks with the ground truth instance masks. The optimized 3D bounding boxes serve as effective training data for 3D object detection. We conduct extensive experiments on the KITTI-360 dataset, demonstrating that our method outperforms the existing weakly supervised 3D object detection methods. The code is available at https://github.com/skmhrk1209/VSRD.
Categories: cs.CV | __index_level_0__: 442,781
2310.16144
ROM-Based Stochastic Optimization for a Continuous Manufacturing Process
This paper proposes a model-based optimization method for the production of automotive seals in an extrusion process. The high production throughput, coupled with quality constraints and the inherent uncertainty of the process, encourages the search for operating conditions that minimize nonconformities. The main uncertainties arise from the process variability and from the raw material itself. The proposed method, which is based on Bayesian optimization, takes these factors into account and obtains a robust set of process parameters. Due to the high computational cost and complexity of performing detailed simulations, a reduced order model is used to address the optimization. The proposal has been evaluated in a virtual environment, where it has been verified that it is able to minimize the impact of process uncertainties. In particular, it would significantly improve the quality of the product without incurring additional costs, achieving a 50% tighter dimensional tolerance compared to a solution obtained by a deterministic optimization algorithm.
Categories: cs.SY | __index_level_0__: 402,596
2102.09376
NFCNN: Toward a Noise Fusion Convolutional Neural Network for Image Denoising
Deep learning based methods have achieved the state-of-the-art performance in image denoising. In this paper, a deep learning based denoising method is proposed and a module called fusion block is introduced in the convolutional neural network. For this so-called Noise Fusion Convolutional Neural Network (NFCNN), there are two branches in its multi-stage architecture. One branch aims to predict the latent clean image, while the other one predicts the residual image. A fusion block is contained between every two stages by taking the predicted clean image and the predicted residual image as a part of inputs, and it outputs a fused result to the next stage. NFCNN has an attractive texture preserving ability because of the fusion block. To train NFCNN, a stage-wise supervised training strategy is adopted to avoid the vanishing gradient and exploding gradient problems. Experimental results show that NFCNN is able to perform competitive denoising results when compared with some state-of-the-art algorithms.
Categories: cs.LG, cs.CV | __index_level_0__: 220,754
2204.05492
The performance of the amplitude-based model for complex phase retrieval
The paper aims to study the performance of the amplitude-based model $\widehat{\mathbf x} \in {\rm argmin}_{{\mathbf x}\in \mathbb{C}^d}\sum_{j=1}^m\left(|\langle {\mathbf a}_j,{\mathbf x}\rangle|-b_j\right)^2$, where $b_j:=|\langle {\mathbf a}_j,{\mathbf x}_0\rangle|+\eta_j$ and ${\mathbf x}_0\in \mathbb{C}^d$ is a target signal. The model is raised in phase retrieval as well as in absolute value rectification neural networks. Many efficient algorithms have been developed to solve it in the past decades. However, there are very few results available regarding the estimation performance in the complex case under noisy conditions. In this paper, we present a theoretical guarantee on the amplitude-based model for the noisy complex phase retrieval problem. Specifically, we show that $\min_{\theta\in[0,2\pi)}\|\widehat{\mathbf x}-\exp(\mathrm{i}\theta)\cdot{\mathbf x}_0\|_2 \lesssim \frac{\|{\mathbf \eta}\|_2}{\sqrt{m}}$ holds with high probability provided the measurement vectors ${\mathbf a}_j\in \mathbb{C}^d,$ $j=1,\ldots,m,$ are i.i.d. complex sub-Gaussian random vectors and $m\gtrsim d$. Here ${\mathbf \eta}=(\eta_1,\ldots,\eta_m)\in \mathbb{R}^m$ is the noise vector without any assumption on the distribution. Furthermore, we prove that the reconstruction error is sharp. For the case where the target signal ${\mathbf x}_0\in \mathbb{C}^{d}$ is sparse, we establish a similar result for the nonlinear constrained $\ell_1$ minimization model. To accomplish this, we leverage a strong version of the restricted isometry property for an operator on the space of simultaneously low-rank and sparse matrices.
Categories: cs.IT, Other | __index_level_0__: 291,041
2105.14337
Optimal transport with $f$-divergence regularization and generalized Sinkhorn algorithm
Entropic regularization provides a generalization of the original optimal transport problem. It introduces a penalty term defined by the Kullback-Leibler divergence, making the problem more tractable via the celebrated Sinkhorn algorithm. Replacing the Kullback-Leibler divergence with a general $f$-divergence leads to a natural generalization. The case of divergences defined by superlinear functions was recently studied by Di Marino and Gerolin. Using convex analysis, we extend the theory developed so far to include all $f$-divergences defined by functions of Legendre type, and prove that under some mild conditions, strong duality holds, optimums in both the primal and dual problems are attained, the generalization of the $c$-transform is well-defined, and we give sufficient conditions for the generalized Sinkhorn algorithm to converge to an optimal solution. We propose a practical algorithm for computing an approximate solution of the optimal transport problem with $f$-divergence regularization via the generalized Sinkhorn algorithm. Finally, we present experimental results on synthetic 2-dimensional data, demonstrating the effects of using different $f$-divergences for regularization, which influences convergence speed, numerical stability and sparsity of the optimal coupling.
Categories: cs.LG, cs.IT | __index_level_0__: 237,633
2205.12539
Contribution of Ontologies to the Computation of Semantic Similarity in a Recommender System
Measurement of the semantic relatedness or likeness between terms, words, or text data plays an important role in different applications dealing with textual data, such as knowledge acquisition, recommender systems, and natural language processing. Over the past few years, many ontologies have been developed and used as a form of structured representation of knowledge bases for information systems. The calculation of semantic similarity from ontologies has developed and, depending on the context, is complemented by other similarity calculation methods. In this paper, we propose and carry out an approach for the calculation of ontology-based semantic similarity in the context of a recommender system.
Categories: cs.AI, cs.IR | __index_level_0__: 298,595
2311.14295
Exploiting Active RIS in NOMA Networks with Hardware Impairments
Active reconfigurable intelligent surface (ARIS) is a promising way to compensate for multiplicative fading attenuation by amplifying and reflecting incident signals to selected users. This paper investigates the performance of ARIS assisted non-orthogonal multiple access (NOMA) networks over cascaded Nakagami-m fading channels. The effects of hardware impairments (HIS) and reflection coefficients on ARIS-NOMA networks with imperfect successive interference cancellation (ipSIC) and perfect successive interference cancellation (pSIC) are considered. More specifically, we develop new precise and asymptotic expressions of outage probability and ergodic data rate with ipSIC/pSIC for ARIS-NOMA-HIS networks. According to the approximated analyses, the diversity orders and multiplexing gains for a couple of non-orthogonal users are attained in detail. Additionally, the energy efficiency of ARIS-NOMA-HIS networks is surveyed in delay-limited and delay-tolerant transmission schemes. The simulation findings are presented to demonstrate that: i) The outage behaviors and ergodic data rates of ARIS-NOMA-HIS networks surpass those of ARIS aided orthogonal multiple access (OMA) and passive reconfigurable intelligent surface (PRIS) aided OMA; ii) As the reflection coefficient of ARIS increases, ARIS-NOMA-HIS networks are able to provide strengthened outage performance; and iii) ARIS-NOMA-HIS networks are more energy efficient than ARIS/PRIS-OMA networks and conventional cooperative schemes.
Categories: cs.IT | __index_level_0__: 410,058
cs/0610139
How to beat the sphere-packing bound with feedback
The sphere-packing bound $E_{sp}(R)$ bounds the reliability function for fixed-length block-codes. For symmetric channels, it remains a valid bound even when strictly causal noiseless feedback is allowed from the decoder to the encoder. To beat the bound, the problem must be changed. While it has long been known that variable-length block codes can do better when trading-off error probability with expected block-length, this correspondence shows that the {\em fixed-delay} setting also presents such an opportunity for generic channels. While $E_{sp}(R)$ continues to bound the tradeoff between bit error and fixed end-to-end latency for symmetric channels used {\em without} feedback, a new bound called the ``focusing bound'' gives the limits on what can be done with feedback. If low-rate reliable flow-control is free (ie. the noisy channel has strictly positive zero-error capacity), then the focusing bound can be asymptotically achieved. Even when the channel has no zero-error capacity, it is possible to substantially beat the sphere-packing bound by synthesizing an appropriately reliable channel to carry the flow-control information.
Categories: cs.IT | __index_level_0__: 539,817
0909.4601
Rank Metric Decoder Architectures for Random Linear Network Coding with Error Control
While random linear network coding is a powerful tool for disseminating information in communication networks, it is highly susceptible to errors caused by various sources. Due to error propagation, errors greatly deteriorate the throughput of network coding and seriously undermine both reliability and security of data. Hence error control for network coding is vital. Recently, constant-dimension codes (CDCs), especially K\"otter-Kschischang (KK) codes, have been proposed for error control in random linear network coding. KK codes can also be constructed from Gabidulin codes, an important class of rank metric codes. Rank metric decoders have been recently proposed for both Gabidulin and KK codes, but they have high computational complexities. Furthermore, it is not clear whether such decoders are feasible and suitable for hardware implementations. In this paper, we reduce the complexities of rank metric decoders and propose novel decoder architectures for both codes. The synthesis results of our decoder architectures for Gabidulin and KK codes with limited error-correcting capabilities over small fields show that our architectures not only are affordable, but also achieve high throughput.
Categories: cs.IT | __index_level_0__: 4,569
1802.04633
Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
Deep Neural Networks have recently gained lots of success after enabling several breakthroughs in notoriously challenging problems. Training these networks is computationally expensive and requires vast amounts of training data. Selling such pre-trained models can, therefore, be a lucrative business model. Unfortunately, once the models are sold they can be easily copied and redistributed. To avoid this, a tracking mechanism to identify models as the intellectual property of a particular vendor is necessary. In this work, we present an approach for watermarking Deep Neural Networks in a black-box way. Our scheme works for general classification tasks and can easily be combined with current learning algorithms. We show experimentally that such a watermark has no noticeable impact on the primary task that the model is designed for and evaluate the robustness of our proposal against a multitude of practical attacks. Moreover, we provide a theoretical analysis, relating our approach to previous work on backdooring.
Categories: cs.LG | __index_level_0__: 90,264
2010.06083
Trace Reconstruction Problems in Computational Biology
The problem of reconstructing a string from its error-prone copies, the trace reconstruction problem, was introduced by Vladimir Levenshtein two decades ago. While there has been considerable theoretical work on trace reconstruction, practical solutions have only recently started to emerge in the context of two rapidly developing research areas: immunogenomics and DNA data storage. In immunogenomics, traces correspond to mutated copies of genes, with mutations generated naturally by the adaptive immune system. In DNA data storage, traces correspond to noisy copies of DNA molecules that encode digital data, with errors being artifacts of the data retrieval process. In this paper, we introduce several new trace generation models and open questions relevant to trace reconstruction for immunogenomics and DNA data storage, survey theoretical results on trace reconstruction, and highlight their connections to computational biology. Throughout, we discuss the applicability and shortcomings of known solutions and suggest future research directions.
Categories: cs.LG, cs.IT, Other | __index_level_0__: 200,358
1806.02984
DeepFirearm: Learning Discriminative Feature Representation for Fine-grained Firearm Retrieval
There are great demands for automatically regulating inappropriate appearance of shocking firearm images in social media or identifying firearm types in forensics. Image retrieval techniques have great potential to solve these problems. To facilitate research in this area, we introduce Firearm 14k, a large dataset consisting of over 14,000 images in 167 categories. It can be used for both fine-grained recognition and retrieval of firearm images. Recent advances in image retrieval are mainly driven by fine-tuning state-of-the-art convolutional neural networks for retrieval task. The conventional single margin contrastive loss, known for its simplicity and good performance, has been widely used. We find that it performs poorly on the Firearm 14k dataset due to: (1) Loss contributed by positive and negative image pairs is unbalanced during training process. (2) A huge domain gap exists between this dataset and ImageNet. We propose to deal with the unbalanced loss by employing a double margin contrastive loss. We tackle the domain gap issue with a two-stage training strategy, where we first fine-tune the network for classification, and then fine-tune it for retrieval. Experimental results show that our approach outperforms the conventional single margin approach by a large margin (up to 88.5% relative improvement) and even surpasses the strong triplet-loss-based approach.
Categories: cs.CV | __index_level_0__: 99,903
cs/0601059
A Descriptive Model of Robot Team and the Dynamic Evolution of Robot Team Cooperation
At present, research on robot team cooperation is still in a qualitative analysis phase and lacks a descriptive model that can quantitatively capture the dynamic evolution of team cooperative relationships under constantly changing task demands in the multi-robot field. This paper first gives a holistic, static description of the robot-team organization model HWROM, and then, drawing on Markov processes and Bayes' theorem, dynamically describes the building of team cooperative relationships. Finally, from the cooperative entity layer, ability layer and relative layer, we study team formation and the cooperation mechanism, and discuss how to optimize the relative action sets during the evolution. The dynamic evolution model of robot teams and of cooperative relationships between robot teams proposed and described in this paper can not only characterize the robot team as a whole, but also depict the dynamic evolution process quantitatively. Users can also use this model to predict the cooperative relationships and actions of a robot team encountering new demands.
Categories: cs.RO | __index_level_0__: 539,199
1611.05384
A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging
Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering. However, the previous neural models cannot extract the complicated feature compositions as the traditional methods with discrete features. In this work, we propose a feature-enriched neural model for joint Chinese word segmentation and part-of-speech tagging task. Specifically, to simulate the feature templates of traditional discrete feature based models, we use different filters to model the complex compositional features with convolutional and pooling layer, and then utilize long distance dependency information with recurrent layer. Experimental results on five different datasets show the effectiveness of our proposed model.
Categories: cs.CL | __index_level_0__: 64,010
2202.07832
Heterogeneous Graph Learning for Explainable Recommendation over Academic Networks
With the explosive growth of new graduates with research degrees every year, unprecedented challenges arise for early-career researchers to find a job at a suitable institution. This study aims to understand the behavior of academic job transition and hence recommend suitable institutions for PhD graduates. Specifically, we design a deep learning model to predict the career move of early-career researchers and provide suggestions. The design is built on top of scholarly/academic networks, which contains abundant information about scientific collaboration among scholars and institutions. We construct a heterogeneous scholarly network to facilitate the exploring of the behavior of career moves and the recommendation of institutions for scholars. We devise an unsupervised learning model called HAI (Heterogeneous graph Attention InfoMax) which aggregates attention mechanism and mutual information for institution recommendation. Moreover, we propose scholar attention and meta-path attention to discover the hidden relationships between several meta-paths. With these mechanisms, HAI provides ordered recommendations with explainability. We evaluate HAI upon a real-world dataset against baseline methods. Experimental results verify the effectiveness and efficiency of our approach.
Categories: cs.SI, cs.AI, cs.IR, cs.LG | __index_level_0__: 280,674
2305.20015
AI for Low-Code for AI
Low-code programming allows citizen developers to create programs with minimal coding effort, typically via visual (e.g. drag-and-drop) interfaces. In parallel, recent AI-powered tools such as Copilot and ChatGPT generate programs from natural language instructions. We argue that these modalities are complementary: tools like ChatGPT greatly reduce the need to memorize large APIs but still require their users to read (and modify) programs, whereas visual tools abstract away most or all programming but struggle to provide easy access to large APIs. At their intersection, we propose LowCoder, the first low-code tool for developing AI pipelines that supports both a visual programming interface (LowCoder_VP) and an AI-powered natural language interface (LowCoder_NL). We leverage this tool to provide some of the first insights into whether and how these two modalities help programmers by conducting a user study. We task 20 developers with varying levels of AI expertise with implementing four ML pipelines using LowCoder, replacing the LowCoder_NL component with a simple keyword search in half the tasks. Overall, we find that LowCoder is especially useful for (i) Discoverability: using LowCoder_NL, participants discovered new operators in 75% of the tasks, compared to just 32.5% and 27.5% using web search or scrolling through options respectively in the keyword-search condition, and (ii) Iterative Composition: 82.5% of tasks were successfully completed and many initial pipelines were further successfully improved. Qualitative analysis shows that AI helps users discover how to implement constructs when they know what to do, but still fails to support novices when they lack clarity on what they want to accomplish. Overall, our work highlights the benefits of combining the power of AI with low-code programming.
Categories: cs.AI, Other | __index_level_0__: 369,774
1807.09418
Video Storytelling: Textual Summaries for Events
Bridging vision and natural language is a longstanding goal in computer vision and multimedia research. While earlier works focus on generating a single-sentence description for visual content, recent works have studied paragraph generation. In this work, we introduce the problem of video storytelling, which aims at generating coherent and succinct stories for long videos. Video storytelling introduces new challenges, mainly due to the diversity of the story and the length and complexity of the video. We propose novel methods to address the challenges. First, we propose a context-aware framework for multimodal embedding learning, where we design a Residual Bidirectional Recurrent Neural Network to leverage contextual information from past and future. Second, we propose a Narrator model to discover the underlying storyline. The Narrator is formulated as a reinforcement learning agent which is trained by directly optimizing the textual metric of the generated story. We evaluate our method on the Video Story dataset, a new dataset that we have collected to enable the study. We compare our method with multiple state-of-the-art baselines, and show that our method achieves better performance, in terms of quantitative measures and user study.
Categories: cs.CV, Other | __index_level_0__: 103,717
2403.04050
Belief-Enriched Pessimistic Q-Learning against Adversarial State Perturbations
Reinforcement learning (RL) has achieved phenomenal success in various domains. However, its data-driven nature also introduces new vulnerabilities that can be exploited by malicious opponents. Recent work shows that a well-trained RL agent can be easily manipulated by strategically perturbing its state observations at the test stage. Existing solutions either introduce a regularization term to improve the smoothness of the trained policy against perturbations or alternatively train the agent's policy and the attacker's policy. However, the former does not provide sufficient protection against strong attacks, while the latter is computationally prohibitive for large environments. In this work, we propose a new robust RL algorithm for deriving a pessimistic policy to safeguard against an agent's uncertainty about true states. This approach is further enhanced with belief state inference and diffusion-based state purification to reduce uncertainty. Empirical results show that our approach obtains superb performance under strong attacks and has a comparable training overhead with regularization-based methods. Our code is available at https://github.com/SliencerX/Belief-enriched-robust-Q-learning.
Categories: cs.LG | __index_level_0__: 435,433
2312.10191
Tell Me What You See: Text-Guided Real-World Image Denoising
Image reconstruction from noisy sensor measurements is a challenging problem. Many solutions have been proposed for it, where the main approach is to learn a good natural image prior along with modeling the true statistics of the noise in the scene. In the presence of very low lighting conditions, such approaches are usually not enough, and additional information is required, e.g., in the form of using multiple captures. We suggest as an alternative to add a description of the scene as a prior, which can be easily done by the photographer capturing the scene. Inspired by the remarkable success of diffusion models for image generation, using a text-guided diffusion model we show that adding image caption information significantly improves image denoising and reconstruction on both synthetic and real-world images.
Categories: cs.CV | __index_level_0__: 416,052
2303.01141
DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction
As machine learning models, specifically neural networks, are becoming increasingly popular, there are concerns regarding their trustworthiness, especially in safety-critical applications, e.g. actions of an autonomous vehicle must be safe. There are approaches that can train neural networks where such domain requirements are enforced as constraints, but they either cannot guarantee that the constraint will be satisfied by all possible predictions (even on unseen data) or they are limited in the type of constraints that can be enforced. In this paper, we present an approach to train neural networks which can enforce a wide variety of constraints and guarantee that the constraint is satisfied by all possible predictions. The approach builds on earlier work where learning linear models is formulated as a constraint satisfaction problem (CSP). To make this idea applicable to neural networks, two crucial new elements are added: constraint propagation over the network layers, and weight updates based on a mix of gradient descent and CSP solving. Evaluation on various machine learning tasks demonstrates that our approach is flexible enough to enforce a wide variety of domain constraints and is able to guarantee them in neural networks.
Categories: cs.AI, cs.LG | __index_level_0__: 348,846
1206.6412
A Simple Algorithm for Semi-supervised Learning with Improved Generalization Error Bound
In this work, we develop a simple algorithm for semi-supervised regression. The key idea is to use the top eigenfunctions of integral operator derived from both labeled and unlabeled examples as the basis functions and learn the prediction function by a simple linear regression. We show that under appropriate assumptions about the integral operator, this approach is able to achieve an improved regression error bound better than existing bounds of supervised learning. We also verify the effectiveness of the proposed algorithm by an empirical study.
Categories: cs.LG | __index_level_0__: 16,947
2003.13659
Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation
Learning a good image prior is a long-term goal for image restoration and manipulation. While existing methods like deep image prior (DIP) capture low-level image statistics, there are still gaps toward an image prior that captures rich image semantics including color, spatial coherence, textures, and high-level concepts. This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images. As shown in Fig.1, the deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images. It also enables diverse image manipulation including random jittering, image morphing, and category transfer. Such highly flexible restoration and manipulation are made possible through relaxing the assumption of existing GAN-inversion methods, which tend to fix the generator. Notably, we allow the generator to be fine-tuned on-the-fly in a progressive manner regularized by feature distance obtained by the discriminator in GAN. We show that these easy-to-implement and practical changes help keep the reconstruction within the manifold of natural images, and thus lead to more precise and faithful reconstruction for real images. Code is available at https://github.com/XingangPan/deep-generative-prior.
Categories: cs.CV | __index_level_0__: 170,270
1807.03477
Shape analysis of framed space curves
In the elastic shape analysis approach to shape matching and object classification, plane curves are represented as points in an infinite-dimensional Riemannian manifold, wherein shape dissimilarity is measured by geodesic distance. A remarkable result of Younes, Michor, Shah and Mumford says that the space of closed planar shapes, endowed with a natural metric, is isometric to an infinite-dimensional Grassmann manifold via the so-called square root transform. This result facilitates efficient shape comparison by virtue of explicit descriptions of Grassmannian geodesics. In this paper, we extend this shape analysis framework to treat shapes of framed space curves. By considering framed curves, we are able to generalize the square root transform by using quaternionic arithmetic and properties of the Hopf fibration. Under our coordinate transformation, the space of closed framed curves corresponds to an infinite-dimensional complex Grassmannian. This allows us to describe geodesics in framed curve space explicitly. We are also able to produce explicit geodesics between closed, unframed space curves by studying the action of the loop group of the circle on the Grassmann manifold. Averages of collections of plane and space curves are computed via a novel algorithm utilizing flag means.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
102,530
2104.01568
Information-theoretic regularization for Multi-source Domain Adaptation
Adversarial learning strategy has demonstrated remarkable performance in dealing with single-source Domain Adaptation (DA) problems, and it has recently been applied to Multi-source DA (MDA) problems. Although most existing MDA strategies rely on a multiple domain discriminator setting, its effect on the latent space representations has been poorly understood. Here we adopt an information-theoretic approach to identify and resolve the potential adverse effect of the multiple domain discriminators on MDA: disintegration of domain-discriminative information, limited computational scalability, and a large variance in the gradient of the loss during training. We examine the above issues by situating adversarial DA in the context of information regularization. This also provides a theoretical justification for using a single and unified domain discriminator. Based on this idea, we implement a novel neural architecture called a Multi-source Information-regularized Adaptation Networks (MIAN). Large-scale experiments demonstrate that MIAN, despite its structural simplicity, reliably and significantly outperforms other state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
228,399
1808.01625
Towards Closing the Gap in Weakly Supervised Semantic Segmentation with DCNNs: Combining Local and Global Models
Generating training sets for deep convolutional neural networks (DCNNs) is a bottleneck for modern real-world applications. This is a demanding task for applications where annotating training data is costly, such as in semantic segmentation. In the literature, there is still a gap between the performance achieved by a network trained on full and on weak annotations. In this paper, we establish a strategy to measure this gap and to identify the ingredients necessary to reduce it. On scribbles, we establish new state-of-the-art results: we obtain a mIoU of 75.6% without, and 75.7% with CRF post-processing. We reduce the gap by 64.2% whereas the current state-of-the-art reduces it only by 57.5%. Thanks to a systematic study of the different ingredients involved in the weakly supervised scenario and an original experimental strategy, we unravel a counter-intuitive mechanism that is simple and amenable to generalisations to other weakly-supervised scenarios: averaging poor local predicted annotations with the baseline ones and reuse them for training a DCNN yields new state-of-the-art results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
104,610
2310.19225
Stochastic Configuration Machines: FPGA Implementation
Neural networks for industrial applications generally have additional constraints such as response speed, memory size and power usage. Randomized learners can address some of these issues. However, hardware solutions can provide better resource reduction whilst maintaining the model's performance. Stochastic configuration networks (SCNs) are a prime choice in industrial applications due to their merits and feasibility for data modelling. Stochastic Configuration Machines (SCMs) extend this to focus on reducing the memory constraints by limiting the randomized weights to a binary value with a scalar for each node and using a mechanism model to improve the learning performance and result interpretability. This paper aims to implement SCM models on a field programmable gate array (FPGA) and introduce binary-coded inputs to the algorithm. Results are reported for two benchmark and two industrial datasets, including SCM with single-layer and deep architectures.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
403,906
2403.00465
Polyamorous Scheduling
Finding schedules for pairwise meetings between the members of a complex social group without creating interpersonal conflict is challenging, especially when different relationships have different needs. We formally define and study the underlying optimisation problem: Polyamorous Scheduling. In Polyamorous Scheduling, we are given an edge-weighted graph and try to find a periodic schedule of matchings in this graph such that the maximal weighted waiting time between consecutive occurrences of the same edge is minimised. We show that the problem is NP-hard and that there is no efficient approximation algorithm with a better ratio than 4/3 unless P = NP. On the positive side, we obtain an $O(\log n)$-approximation algorithm; indeed, a $O(\log \Delta)$-approximation for $\Delta$ the maximum degree, i.e., the largest number of relationships of any individual. We also define a generalisation of density from the Pinwheel Scheduling Problem, "poly density", and ask whether there exists a poly-density threshold similar to the 5/6-density threshold for Pinwheel Scheduling [Kawamura, STOC 2024]. Polyamorous Scheduling is a natural generalisation of Pinwheel Scheduling with respect to its optimisation variant, Bamboo Garden Trimming. Our work contributes the first nontrivial hardness-of-approximation reduction for any periodic scheduling problem, and opens up numerous avenues for further study of Polyamorous Scheduling.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
433,992
2006.05601
Robust Estimation of Tree Structured Ising Models
We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities. In this paper, we focus on the problem of robust estimation of tree-structured Ising models. Without any additional assumption of side information, this is an open problem. We first prove that this problem is unidentifiable, however, this unidentifiability is limited to a small equivalence class of trees formed by leaf nodes exchanging positions with their neighbors. Next, we propose an algorithm to solve the above problem with logarithmic sample complexity in the number of nodes and polynomial run-time complexity. Lastly, we empirically demonstrate that, as expected, existing algorithms are not inherently robust in the proposed setting whereas our algorithm correctly recovers the underlying equivalence class.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
181,129
2405.21042
Comparing the information content of probabilistic representation spaces
Probabilistic representation spaces convey information about a dataset and are shaped by factors such as the training data, network architecture, and loss function. Comparing the information content of such spaces is crucial for understanding the learning process, yet most existing methods assume point-based representations, neglecting the distributional nature of probabilistic spaces. To address this gap, we propose two information-theoretic measures to compare general probabilistic representation spaces by extending classic methods to compare the information content of hard clustering assignments. Additionally, we introduce a lightweight method of estimation that is based on fingerprinting a representation space with a sample of the dataset, designed for scenarios where the communicated information is limited to a few bits. We demonstrate the utility of these measures in three case studies. First, in the context of unsupervised disentanglement, we identify recurring information fragments within individual latent dimensions of VAE and InfoGAN ensembles. Second, we compare the full latent spaces of models and reveal consistent information content across datasets and methods, despite variability during training. Finally, we leverage the differentiability of our measures to perform model fusion, synthesizing the information content of weak learners into a single, coherent representation. Across these applications, the direct comparison of information content offers a natural basis for characterizing the processing of information.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
459,621
1104.3162
Ubiquitousness of link-density and link-pattern communities in real-world networks
Community structure appears to be an intrinsic property of many complex real-world networks. However, recent work shows that real-world networks reveal even more sophisticated modules than classical cohesive (link-density) communities. In particular, networks can also be naturally partitioned according to similar patterns of connectedness among the nodes, revealing link-pattern communities. We here propose a propagation based algorithm that can extract both link-density and link-pattern communities, without any prior knowledge of the true structure. The algorithm was first validated on different classes of synthetic benchmark networks with community structure, and also on random networks. We have further applied the algorithm to different social, information, technological and biological networks, where it indeed reveals meaningful (composites of) link-density and link-pattern communities. The results thus seem to imply that, similarly as link-density counterparts, link-pattern communities appear ubiquitous in nature and design.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
10,005
2001.05848
Translating multispectral imagery to nighttime imagery via conditional generative adversarial networks
Nighttime satellite imagery has been applied in a wide range of fields. However, our limited understanding of how observed light intensity is formed and whether it can be simulated greatly hinders its further application. This study explores the potential of conditional Generative Adversarial Networks (cGAN) in translating multispectral imagery to nighttime imagery. A popular cGAN framework, pix2pix, was adopted and modified to facilitate this translation using gridded training image pairs derived from Landsat 8 and Visible Infrared Imaging Radiometer Suite (VIIRS). The results of this study prove the possibility of multispectral-to-nighttime translation and further indicate that, with the additional social media data, the generated nighttime imagery can be very similar to the ground-truth imagery. This study fills the gap in understanding the composition of satellite observed nighttime light and provides new paradigms to solve the emerging problems in nighttime remote sensing fields, including nighttime series construction, light desaturation, and multi-sensor calibration.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
160,657
2412.03387
Adaptive Model Predictive Control for Differential-Algebraic Systems towards a Higher Path Accuracy for Physically Coupled Robots
The physical coupling between robots has the potential to improve the capabilities of multi-robot systems in challenging manufacturing processes. However, the path tracking accuracy of physically coupled robots is not studied adequately, especially considering the uncertain kinematic parameters, the mechanical elasticity, and the built-in controllers of off-the-shelf robots. This paper addresses these issues with a novel differential-algebraic system model which is verified against measurement data from real execution. The uncertain kinematic parameters are estimated online to adapt the model. Consequently, an adaptive model predictive controller is designed as a coordinator between the robots. The controller achieves a path tracking error reduction of 88.6% compared to the state-of-the-art benchmark in the simulation.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
513,939
2301.00005
Intrinsic Motivation in Dynamical Control Systems
Biological systems often choose actions without an explicit reward signal, a phenomenon known as intrinsic motivation. The computational principles underlying this behavior remain poorly understood. In this study, we investigate an information-theoretic approach to intrinsic motivation, based on maximizing an agent's empowerment (the mutual information between its past actions and future states). We show that this approach generalizes previous attempts to formalize intrinsic motivation, and we provide a computationally efficient algorithm for computing the necessary quantities. We test our approach on several benchmark control problems, and we explain its success in guiding intrinsically motivated behaviors by relating our information-theoretic control function to fundamental properties of the dynamical system representing the combined agent-environment system. This opens the door for designing practical artificial, intrinsically motivated controllers and for linking animal behaviors to their dynamical properties.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
338,765
2004.02167
Change Rate Estimation and Optimal Freshness in Web Page Crawling
For providing quick and accurate results, a search engine maintains a local snapshot of the entire web. And, to keep this local cache fresh, it employs a crawler for tracking changes across various web pages. However, finite bandwidth availability and server restrictions impose some constraints on the crawling frequency. Consequently, the ideal crawling rates are the ones that maximise the freshness of the local cache and also respect the above constraints. Azar et al. 2018 recently proposed a tractable algorithm to solve this optimisation problem. However, they assume the knowledge of the exact page change rates, which is unrealistic in practice. We address this issue here. Specifically, we provide two novel schemes for online estimation of page change rates. Both schemes only need partial information about the page change process, i.e., they only need to know if the page has changed or not since the last crawled instance. For both these schemes, we prove convergence and, also, derive their convergence rates. Finally, we provide some numerical experiments to compare the performance of our proposed estimators with the existing ones (e.g., MLE).
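Purely for illustration, here is the classical baseline the abstract alludes to (e.g., MLE): a maximum-likelihood estimate of a page's change rate under an assumed Poisson change model, using only "changed or not since the last crawl" observations. The synthetic data, grid search, and model choice are assumptions for the sketch, not the authors' online schemes.

```python
# Minimal sketch: maximum-likelihood estimation of a page's change rate when
# each crawl only reveals whether the page changed since the previous crawl.
# Assumes a Poisson change process (an illustrative modelling assumption);
# this is the classical MLE baseline, not the online schemes proposed above.
import numpy as np

rng = np.random.default_rng(1)

true_rate = 0.7                                     # changes per unit time
intervals = rng.uniform(0.5, 3.0, size=200)         # time between crawls
# Observation: 1 if at least one change occurred in the interval, else 0.
changed = rng.random(200) < (1.0 - np.exp(-true_rate * intervals))

def log_likelihood(rate, tau, c):
    p_change = 1.0 - np.exp(-rate * tau)
    return np.sum(c * np.log(p_change) + (1 - c) * (-rate * tau))

# The log-likelihood is concave in the rate, so a fine grid search suffices
# for this illustration.
grid = np.linspace(1e-3, 5.0, 5000)
lls = [log_likelihood(r, intervals, changed) for r in grid]
rate_hat = grid[int(np.argmax(lls))]
print(f"true rate = {true_rate:.3f}, MLE estimate = {rate_hat:.3f}")
```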
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
false
171,142
2201.09414
Generalized Spatially-Coupled Parallel Concatenated Codes With Partial Repetition
A new class of spatially-coupled turbo-like codes (SC-TCs), dubbed generalized spatially coupled parallel concatenated codes (GSC-PCCs), is introduced. These codes are constructed by applying spatial coupling on parallel concatenated codes (PCCs) with a fraction of information bits repeated $q$ times. GSC-PCCs can be seen as a generalization of the original spatially-coupled parallel concatenated codes proposed by Moloudi et al. [2]. To characterize the asymptotic performance of GSC-PCCs, we derive the corresponding density evolution equations and compute their decoding thresholds. The threshold saturation effect is observed and proven. Most importantly, we rigorously prove that any rate-$R$ GSC-PCC ensemble with 2-state convolutional component codes achieves at least a fraction $1-\frac{R}{R+q}$ of the capacity of the binary erasure channel (BEC) for repetition factor $q\geq2$ and this multiplicative gap vanishes as $q$ tends to infinity. To the best of our knowledge, this is the first class of SC-TCs that are proven to be capacity-achieving. Further, the connection between the strength of the component codes, the decoding thresholds of GSC-PCCs, and the repetition factor are established. The superiority of the proposed codes with finite blocklength is exemplified by comparing their error performance with that of existing SC-TCs via computer simulations.
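A quick numerical reading of the stated lower bound, with illustrative values R = 1/2 and q = 2:

```latex
% Illustrative reading of the stated lower bound, for R = 1/2 and q = 2:
\[
  1 - \frac{R}{R+q}
  = 1 - \frac{1/2}{1/2 + 2}
  = 1 - \frac{1}{5}
  = \frac{4}{5},
\]
% i.e., at least 80% of the BEC capacity, with the gap R/(R+q) vanishing as q grows.
```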
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
276,666
2004.14942
Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing
Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence. Deep learning is based on computational models that are, to a certain extent, bio-inspired, as they rely on networks of connected simple computing units operating in parallel. Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostics tools, autonomous robots, knowledgeable personal assistants, and monitoring. These successes have been mostly supported by three factors: availability of vast amounts of data, continuous growth in computing power, and algorithmic innovations. The approaching demise of Moore's law, and the consequent expected modest improvements in computing power that can be achieved by scaling, raise the question of whether the described progress will be slowed or halted due to hardware limitations. This paper reviews the case for a novel beyond CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks. Central themes are the reliance on non-von-Neumann computing architectures and the need for developing tailored learning and inference algorithms. To argue that lessons from biology can be useful in providing directions for further progress in artificial intelligence, we briefly discuss an example based on reservoir computing. We conclude the review by speculating on the big picture view of future neuromorphic and brain-inspired computing systems.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
175,058
1810.09245
A Review on Learning Planning Action Models for Socio-Communicative HRI
For social robots to be brought more into widespread use in the fields of companionship, care taking and domestic help, they must be capable of demonstrating social intelligence. In order to be acceptable, they must exhibit socio-communicative skills. Classic approaches to program HRI from observed human-human interactions fail to capture the subtlety of multimodal interactions as well as the key structural differences between robots and humans. The former arises due to a difficulty in quantifying and coding multimodal behaviours, while the latter arises from a difference in the degrees of freedom between a robot and a human. However, the notion of reverse engineering from multimodal HRI traces to learn the underlying behavioral blueprint of the robot given multimodal traces seems an option worth exploring. With this spirit, the entire HRI can be seen as a sequence of exchanges of speech acts between the robot and human, each act treated as an action, bearing in mind that the entire sequence is goal-driven. Thus, this entire interaction can be treated as a sequence of actions propelling the interaction from its initial to goal state, also known as a plan in the domain of AI planning. In the same domain, this action sequence that stems from plan execution can be represented as a trace. AI techniques, such as machine learning, can be used to learn behavioral models (also known as symbolic action models in AI), intended to be reusable for AI planning, from the aforementioned multimodal traces. This article reviews recent machine learning techniques for learning planning action models which can be applied to the field of HRI with the intent of rendering robots as socio-communicative.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
111,021
2412.09220
USDRL: Unified Skeleton-Based Dense Representation Learning with Multi-Grained Feature Decorrelation
Contrastive learning has achieved great success in skeleton-based representation learning recently. However, the prevailing methods are predominantly negative-based, necessitating additional momentum encoder and memory bank to get negative samples, which increases the difficulty of model training. Furthermore, these methods primarily concentrate on learning a global representation for recognition and retrieval tasks, while overlooking the rich and detailed local representations that are crucial for dense prediction tasks. To alleviate these issues, we introduce a Unified Skeleton-based Dense Representation Learning framework based on feature decorrelation, called USDRL, which employs feature decorrelation across temporal, spatial, and instance domains in a multi-grained manner to reduce redundancy among dimensions of the representations to maximize information extraction from features. Additionally, we design a Dense Spatio-Temporal Encoder (DSTE) to capture fine-grained action representations effectively, thereby enhancing the performance of dense prediction tasks. Comprehensive experiments, conducted on the benchmarks NTU-60, NTU-120, PKU-MMD I, and PKU-MMD II, across diverse downstream tasks including action recognition, action retrieval, and action detection, conclusively demonstrate that our approach significantly outperforms the current state-of-the-art (SOTA) approaches. Our code and models are available at https://github.com/wengwanjiang/USDRL.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
516,400
0810.1248
Resource Allocation in Multiple Access Channels
We consider the problem of rate allocation in a Gaussian multiple-access channel, with the goal of maximizing a utility function over transmission rates. In contrast to the literature which focuses on linear utility functions, we study general concave utility functions. We present a gradient projection algorithm for this problem. Since the constraint set of the problem is described by exponentially many constraints, methods that use exact projections are computationally intractable. Therefore, we develop a new method that uses approximate projections. We use the polymatroid structure of the capacity region to show that the approximate projection can be implemented by a recursive algorithm in time polynomial in the number of users. We further propose another algorithm for implementing the approximate projections using rate-splitting and show improved bounds on its convergence time.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
2,466
2305.14470
Integrated Object Deformation and Contact Patch Estimation from Visuo-Tactile Feedback
Reasoning over the interplay between object deformation and force transmission through contact is central to the manipulation of compliant objects. In this paper, we propose Neural Deforming Contact Field (NDCF), a representation that jointly models object deformations and contact patches from visuo-tactile feedback using implicit representations. Representing the object geometry and contact with the environment implicitly allows a single model to predict contact patches of varying complexity. Additionally, learning geometry and contact simultaneously allows us to enforce physical priors, such as ensuring contacts lie on the surface of the object. We propose a neural network architecture to learn a NDCF, and train it using simulated data. We then demonstrate that the learned NDCF transfers directly to the real-world without the need for fine-tuning. We benchmark our proposed approach against a baseline representing geometry and contact patches with point clouds. We find that NDCF performs better on simulated data and in transfer to the real-world.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
367,047
2310.13852
Gradual Domain Adaptation: Theory and Algorithms
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way. Though widely applied, UDA faces a great challenge whenever the distribution shift between the source and the target is large. Gradual domain adaptation (GDA) mitigates this limitation by using intermediate domains to gradually adapt from the source to the target domain. In this work, we first theoretically analyze gradual self-training, a popular GDA algorithm, and provide a significantly improved generalization bound compared with Kumar et al. (2020). Our theoretical analysis leads to an interesting insight: to minimize the generalization error on the target domain, the sequence of intermediate domains should be placed uniformly along the Wasserstein geodesic between the source and target domains. The insight is particularly useful under the situation where intermediate domains are missing or scarce, which is often the case in real-world applications. Based on the insight, we propose $\textbf{G}$enerative Gradual D$\textbf{O}$main $\textbf{A}$daptation with Optimal $\textbf{T}$ransport (GOAT), an algorithmic framework that can generate intermediate domains in a data-dependent way. More concretely, we first generate intermediate domains along the Wasserstein geodesic between two given consecutive domains in a feature space, then apply gradual self-training to adapt the source-trained classifier to the target along the sequence of intermediate domains. Empirically, we demonstrate that our GOAT framework can improve the performance of standard GDA when the given intermediate domains are scarce, significantly broadening the real-world application scenarios of GDA. Our code is available at https://github.com/uiuctml/GOAT.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
401,608
2411.11318
Syllabus: Portable Curricula for Reinforcement Learning Agents
Curriculum learning has been a quiet yet crucial component of many of the high-profile successes of reinforcement learning. Despite this, none of the major reinforcement learning libraries directly support curriculum learning or include curriculum learning implementations. These methods can improve the capabilities and robustness of RL agents, but often require significant, complex changes to agent training code. We introduce Syllabus, a library for training RL agents with curriculum learning, as a solution to this problem. Syllabus provides a universal API for curriculum learning algorithms, implementations of popular curriculum learning methods, and infrastructure for easily integrating them with distributed training code written in nearly any RL library. Syllabus provides a minimal API for each of the core components of curriculum learning, dramatically simplifying the process of designing new algorithms and applying existing algorithms to new environments. We demonstrate that the same Syllabus code can be used to train agents written in multiple different RL libraries on numerous domains. In doing so, we present the first examples of curriculum learning in NetHack and Neural MMO, two of the premier challenges for single-agent and multi-agent RL respectively, achieving strong results compared to state of the art baselines.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
509,010
2004.08545
Kernels for time series with irregularly-spaced multivariate observations
Time series are an interesting frontier for kernel-based methods, for the simple reason that there is no kernel designed to represent them and their unique characteristics in full generality. Existing sequential kernels ignore the time indices, with many assuming that the series must be regularly-spaced; some such kernels are not even psd. In this manuscript, we show that a "series kernel" that is general enough to represent irregularly-spaced multivariate time series may be built out of well-known "vector kernels". We also show that all series kernels constructed using our methodology are psd, and are thus widely applicable. We demonstrate this point by formulating a Gaussian process-based strategy - with our series kernel at its heart - to make predictions about test series when given a training set. We validate the strategy experimentally by estimating its generalisation error on multiple datasets and comparing it to relevant baselines. We also demonstrate that our series kernel may be used for the more traditional setting of time series classification, where its performance is broadly in line with alternative methods.
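One generic way to build a positive semi-definite series kernel out of vector kernels is a mean-embedding style average of a vector kernel over (time, value) observation pairs, sketched below purely for illustration. The kernel choices and bandwidths are assumptions, and this generic construction need not match the one developed in the paper.

```python
# Minimal sketch: a kernel between irregularly-sampled multivariate time
# series built from a "vector kernel" on (time, value) observations via a
# mean-embedding construction: k(A, B) = mean over all observation pairs of
# k_vec(a, b). This is one generic psd construction for illustration only;
# kernel choices and bandwidths are assumptions.
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def series_kernel(series_a, series_b, gamma_t=0.5, gamma_x=1.0):
    """series_* is a list of (t, x) pairs with t a float and x a 1-D array."""
    total = 0.0
    for t_a, x_a in series_a:
        for t_b, x_b in series_b:
            # The product of a kernel on time stamps and a kernel on values
            # is itself a valid vector kernel on (time, value) observations.
            total += rbf(np.array([t_a]), np.array([t_b]), gamma_t) * \
                     rbf(x_a, x_b, gamma_x)
    return total / (len(series_a) * len(series_b))

# Two irregularly-spaced 2-D series of different lengths.
rng = np.random.default_rng(2)
A = [(t, rng.standard_normal(2)) for t in sorted(rng.uniform(0, 10, 7))]
B = [(t, rng.standard_normal(2)) for t in sorted(rng.uniform(0, 10, 11))]

print(series_kernel(A, A), series_kernel(A, B), series_kernel(B, B))
```

The average over pairs is the inner product of the two series' mean embeddings, which is what makes this particular construction psd regardless of series length or spacing.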
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
173,089
1506.08438
Unsupervised Semantic Parsing of Video Collections
Human communication typically has an underlying structure. This is reflected in the fact that in many user generated videos, a starting point, ending, and certain objective steps between these two can be identified. In this paper, we propose a method for parsing a video into such semantic steps in an unsupervised way. The proposed method is capable of providing a semantic "storyline" of the video composed of its objective steps. We accomplish this using both visual and language cues in a joint generative model. The proposed method can also provide a textual description for each of the identified semantic steps and video segments. We evaluate this method on a large number of complex YouTube videos and show results of unprecedented quality for this intricate and impactful problem.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
44,615
2007.13374
Decomposing Generation Networks with Structure Prediction for Recipe Generation
Recipe generation from food images and ingredients is a challenging task, which requires the interpretation of the information from another modality. Different from the image captioning task, where the captions usually have one sentence, cooking instructions contain multiple sentences and have obvious structures. To help the model capture the recipe structure and avoid missing some cooking details, we propose a novel framework: Decomposing Generation Networks (DGN) with structure prediction, to get more structured and complete recipe generation outputs. Specifically, we split each cooking instruction into several phases, and assign different sub-generators to each phase. Our approach includes two novel ideas: (i) learning the recipe structures with the global structure prediction component and (ii) producing recipe phases in the sub-generator output component based on the predicted structure. Extensive experiments on the challenging large-scale Recipe1M dataset validate the effectiveness of our proposed model, which improves the performance over the state-of-the-art results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
189,110
2010.00526
LiveQA: A Question Answering Dataset over Sports Live
In this paper, we introduce LiveQA, a new question answering dataset constructed from play-by-play live broadcast. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games, which are collected from the Chinese Hupu (https://nba.hupu.com/games) website. Derived from the characteristics of sports games, LiveQA can potentially test the reasoning ability across timeline-based live broadcasts, which is challenging compared to the existing datasets. In LiveQA, the questions require understanding the timeline, tracking events or doing mathematical computations. Our preliminary experiments show that the dataset introduces a challenging problem for question answering models, and a strong baseline model only achieves the accuracy of 53.1\% and cannot beat the dominant option rule. We release the code and data of this paper for future research.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
198,317
2405.06003
Binary Hypothesis Testing for Softmax Models and Leverage Score Models
Softmax distributions are widely used in machine learning, including Large Language Models (LLMs) where the attention unit uses softmax distributions. We abstract the attention unit as the softmax model, where given a vector input, the model produces an output drawn from the softmax distribution (which depends on the vector input). We consider the fundamental problem of binary hypothesis testing in the setting of softmax models. That is, given an unknown softmax model, which is known to be one of the two given softmax models, how many queries are needed to determine which one is the truth? We show that the sample complexity is asymptotically $O(\epsilon^{-2})$ where $\epsilon$ is a certain distance between the parameters of the models. Furthermore, we draw analogy between the softmax model and the leverage score model, an important tool for algorithm design in linear algebra and graph theory. The leverage score model, on a high level, is a model which, given vector input, produces an output drawn from a distribution dependent on the input. We obtain similar results for the binary hypothesis testing problem for leverage score models.
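The query model can be illustrated with a plain log-likelihood-ratio test between the two candidate softmax models; the dimensions, parameters, and query budget below are illustrative assumptions, not the paper's analysis.

```python
# Minimal sketch: binary hypothesis testing between two known softmax models
# via a log-likelihood-ratio test on sampled outputs. The input vector,
# parameter matrices, and number of queries are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

d, k = 8, 5                          # input dimension, number of outcomes
x = rng.standard_normal(d)           # fixed query input
A0 = rng.standard_normal((k, d))     # candidate model 0
A1 = A0 + 0.3 * rng.standard_normal((k, d))   # candidate model 1 (nearby)

truth = 1                            # unknown to the tester
p_true = softmax((A1 if truth else A0) @ x)
p0, p1 = softmax(A0 @ x), softmax(A1 @ x)

n_queries = 400
samples = rng.choice(k, size=n_queries, p=p_true)
llr = np.sum(np.log(p1[samples]) - np.log(p0[samples]))
decision = int(llr > 0)
print(f"log-likelihood ratio = {llr:.2f}, decide model {decision} (truth {truth})")
```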
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
453,144
1504.07865
ASTROMLSKIT: A New Statistical Machine Learning Toolkit: A Platform for Data Analytics in Astronomy
Astroinformatics is a new impact area in the world of astronomy, occasionally called the final frontier, where several astrophysicists, statisticians and computer scientists work together to tackle various data intensive astronomical problems. Exponential growth in the data volume and increased complexity of the data add difficult questions to the existing challenges. Classical problems in Astronomy are compounded by the accumulation of astronomical volumes of complex data, rendering the task of classification and interpretation incredibly laborious. The presence of noise in the data makes analysis and interpretation even more arduous. Machine learning algorithms and data analytic techniques provide the right platform for the challenges posed by these problems. A diverse range of open problems, such as star-galaxy separation, detection and classification of exoplanets, and classification of supernovae, is discussed. The focus of the paper is the applicability and efficacy of various machine learning algorithms like K Nearest Neighbor (KNN), random forest (RF), decision tree (DT), Support Vector Machine (SVM), Naïve Bayes and Linear Discriminant Analysis (LDA) in analysis and inference of the decision theoretic problems in Astronomy. The machine learning algorithms, integrated into ASTROMLSKIT, a toolkit developed in the course of the work, have been used to analyze HabCat data and supernovae data. Accuracy has been found to be appreciably good.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
42,584
2405.13095
Presentations are not always linear! GNN meets LLM for Document-to-Presentation Transformation with Attribution
Automatically generating a presentation from the text of a long document is a challenging and useful problem. In contrast to a flat summary, a presentation needs to have a better and non-linear narrative, i.e., the content of a slide can come from different and non-contiguous parts of the given document. However, it is difficult to incorporate such non-linear mapping of content to slides and ensure that the content is faithful to the document. LLMs are prone to hallucination and their performance degrades with the length of the input document. Towards this, we propose a novel graph based solution where we learn a graph from the input document and use a combination of graph neural network and LLM to generate a presentation with attribution of content for each slide. We conduct thorough experiments to show the merit of our approach compared to directly using LLMs for this task.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
455,809
2208.12753
Spatio-Temporal Representation Learning Enhanced Source Cell-phone Recognition from Speech Recordings
Existing source cell-phone recognition methods lack long-term feature characterization of the source device, resulting in inaccurate representation of source cell-phone related features and insufficient recognition accuracy. In this paper, we propose a source cell-phone recognition method based on spatio-temporal representation learning, which includes two main parts: extraction of sequential Gaussian mean matrix features and construction of a recognition model based on spatio-temporal representation learning. In the feature extraction part, based on the analysis of time-series representation of recording source signals, we extract a sequential Gaussian mean matrix with long-term and short-term representation ability by using the sensitivity of the Gaussian mixture model to data distribution. In the model construction part, we design a structured spatio-temporal representation learning network C3D-BiLSTM to fully characterize the spatio-temporal information, combine 3D convolutional network and bidirectional long short-term memory network for short-term spectral information and long-time fluctuation information representation learning, and achieve accurate recognition of cell-phones by fusing spatio-temporal feature information of recording source signals. The method achieves an average accuracy of 99.03% for the closed-set recognition of 45 cell-phones under the CCNU_Mobile dataset, and 98.18% in small sample size experiments, with recognition performance better than the existing state-of-the-art methods. The experimental results show that the method exhibits excellent recognition performance in multi-class cell-phone recognition.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
314,838
2401.11107
Exploiting Duality in Open Information Extraction with Predicate Prompt
Open information extraction (OpenIE) aims to extract the schema-free triplets in the form of (\emph{subject}, \emph{predicate}, \emph{object}) from a given sentence. Compared with general information extraction (IE), OpenIE poses more challenges for the IE models, especially when multiple complicated triplets exist in a sentence. To extract these complicated triplets more effectively, in this paper we propose a novel generative OpenIE model, namely \emph{DualOIE}, which achieves a dual task at the same time as extracting some triplets from the sentence, i.e., converting the triplets into the sentence. Such dual task encourages the model to correctly recognize the structure of the given sentence and thus is helpful to extract all potential triplets from the sentence. Specifically, DualOIE extracts the triplets in two steps: 1) first extracting a sequence of all potential predicates, 2) then using the predicate sequence as a prompt to induce the generation of triplets. Our experiments on two benchmarks and our dataset constructed from Meituan demonstrate that DualOIE achieves the best performance among the state-of-the-art baselines. Furthermore, the online A/B test on Meituan platform shows that 0.93\% improvement of QV-CTR and 0.56\% improvement of UV-CTR have been obtained when the triplets extracted by DualOIE were leveraged in Meituan's search system.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
422,873
1605.06821
The Game-Theoretic Formation of Interconnections Between Networks
We introduce a network design game where the objective of the players is to design the interconnections between the nodes of two different networks $G_1$ and $G_2$ in order to maximize certain local utility functions. In this setting, each player is associated with a node in $G_1$ and has functional dependencies on certain nodes in $G_2$. We use a distance-based utility for the players in which the goal of each player is to purchase a set of edges (incident to its associated node) such that the sum of the distances between its associated node and the nodes it depends on in $G_2$ is minimized. We consider a heterogeneous set of players (i.e., players have their own costs and benefits for constructing edges). We show that finding a best response of a player in this game is NP-hard. Despite this, we characterize some properties of the best response actions which are helpful in determining a Nash equilibrium for certain instances of this game. In particular, we prove existence of pure Nash equilibria in this game when $G_2$ contains a star subgraph, and provide an algorithm that outputs such an equilibrium for any set of players. Finally, we show that the price of anarchy in this game can be arbitrarily large.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
56,194
2403.01225
A Cost-Effective Cooperative Exploration and Inspection Strategy for Heterogeneous Aerial System
In this paper, we propose a cost-effective strategy for heterogeneous UAV swarm systems for cooperative aerial inspection. Unlike previous swarm inspection works, the proposed method does not rely on precise prior knowledge of the environment and can complete full 3D surface coverage of objects of any shape. In this work, agents are partitioned into teams, with each drone assigned a different task, including mapping, exploration, and inspection. Task allocation is facilitated by assigning optimal inspection volumes to each team, following best-first rules. A voxel map-based representation of the environment is used for pathfinding, and a rule-based path-planning method is the core of this approach. We achieved the best performance in all challenging experiments with the proposed approach, surpassing all benchmark methods for similar tasks across multiple evaluation trials. The proposed method is open source at https://github.com/ntu-aris/caric_baseline and used as the baseline of the Cooperative Aerial Robots Inspection Challenge at the 62nd IEEE Conference on Decision and Control 2023.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
434,311
2407.00217
A flexured-gimbal 3-axis force-torque sensor reveals minimal cross-axis coupling in an insect-sized flapping-wing robot
The mechanical complexity of flapping wings, their unsteady aerodynamic flow, and challenge of making measurements at the scale of a sub-gram flapping-wing flying insect robot (FIR) make its behavior hard to predict. Knowing the precise mapping from voltage input to torque output, however, can be used to improve their mechanical and flight controller design. To address this challenge, we created a sensitive force-torque sensor based on a flexured gimbal that only requires a standard motion capture system or accelerometer for readout. Our device precisely and accurately measures pitch and roll torques simultaneously, as well as thrust, on a tethered flapping-wing FIR in response to changing voltage input signals. With it, we were able to measure cross-axis coupling of both torque and thrust input commands on a 180 mg FIR, the UW Robofly. We validated these measurements using free-flight experiments. Our results showed that roll and pitch have maximum cross-axis coupling errors of 8.58% and 17.24%, respectively, relative to the range of torque that is possible. Similarly, varying the pitch and roll commands resulted in up to a 5.78% deviation from the commanded thrust, across the entire commanded torque range. Our system, the first to measure two torque axes simultaneously, shows that torque commands have a negligible cross-axis coupling on both torque and thrust.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
468,769
2312.07243
A Unified Sampling Framework for Solver Searching of Diffusion Probabilistic Models
Recent years have witnessed the rapid progress and broad application of diffusion probabilistic models (DPMs). Sampling from DPMs can be viewed as solving an ordinary differential equation (ODE). Despite the promising performance, the generation of DPMs usually consumes much time due to the large number of function evaluations (NFE). Though recent works have accelerated the sampling to around 20 steps with high-order solvers, the sample quality with less than 10 NFE can still be improved. In this paper, we propose a unified sampling framework (USF) to study the optional strategies for solver. Under this framework, we further reveal that taking different solving strategies at different timesteps may help further decrease the truncation error, and a carefully designed \emph{solver schedule} has the potential to improve the sample quality by a large margin. Therefore, we propose a new sampling framework based on the exponential integral formulation that allows free choices of solver strategy at each step and design specific decisions for the framework. Moreover, we propose $S^3$, a predictor-based search method that automatically optimizes the solver schedule to get a better time-quality trade-off of sampling. We demonstrate that $S^3$ can find outstanding solver schedules which outperform the state-of-the-art sampling methods on CIFAR-10, CelebA, ImageNet, and LSUN-Bedroom datasets. Specifically, we achieve 2.69 FID with 10 NFE and 6.86 FID with 5 NFE on CIFAR-10 dataset, outperforming the SOTA method significantly. We further apply $S^3$ to Stable-Diffusion model and get an acceleration ratio of 2$\times$, showing the feasibility of sampling in very few steps without retraining the neural network.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
414,848
2211.14555
Distribution Free Prediction Sets for Node Classification
Graph Neural Networks (GNNs) are able to achieve high classification accuracy on many important real world datasets, but provide no rigorous notion of predictive uncertainty. Quantifying the confidence of GNN models is difficult due to the dependence between datapoints induced by the graph structure. We leverage recent advances in conformal prediction to construct prediction sets for node classification in inductive learning scenarios. We do this by taking an existing approach for conformal classification that relies on \textit{exchangeable} data and modifying it by appropriately weighting the conformal scores to reflect the network structure. We show through experiments on standard benchmark datasets using popular GNN models that our approach provides tighter and better calibrated prediction sets than a naive application of conformal prediction.
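For context, here is a minimal sketch of the exchangeable split conformal classification recipe that the abstract modifies; the graph-aware weighting is not included, and the scores, class probabilities, and miscoverage level are illustrative assumptions.

```python
# Minimal sketch: vanilla split conformal prediction sets for classification,
# the exchangeable recipe that the weighted, graph-aware variant above builds
# on. Scores, class probabilities, and the miscoverage level are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_cal, n_test, k = 500, 5, 4
alpha = 0.1                                   # target miscoverage

# Pretend these are softmax outputs of a trained classifier.
def fake_probs(n):
    return rng.dirichlet(np.ones(k) * 0.7, size=n)

cal_probs = fake_probs(n_cal)
cal_labels = np.array([rng.choice(k, p=p) for p in cal_probs])

# Conformity score: 1 - probability assigned to the true class.
cal_scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]
# Finite-sample-corrected quantile of the calibration scores.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(cal_scores, min(q_level, 1.0))

test_probs = fake_probs(n_test)
for p in test_probs:
    pred_set = np.where(1.0 - p <= q_hat)[0]  # classes with high enough prob
    print(pred_set, p.round(2))
```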
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
332,893
2310.18687
Unsupervised Behavior Extraction via Random Intent Priors
Reward-free data is abundant and contains rich prior knowledge of human behaviors, but it is not well exploited by offline reinforcement learning (RL) algorithms. In this paper, we propose UBER, an unsupervised approach to extract useful behaviors from offline reward-free datasets via diversified rewards. UBER assigns different pseudo-rewards sampled from a given prior distribution to different agents to extract a diverse set of behaviors, and reuse them as candidate policies to facilitate the learning of new tasks. Perhaps surprisingly, we show that rewards generated from random neural networks are sufficient to extract diverse and useful behaviors, some even close to expert ones. We provide both empirical and theoretical evidence to justify the use of random priors for the reward function. Experiments on multiple benchmarks showcase UBER's ability to learn effective and diverse behavior sets that enhance sample efficiency for online RL, outperforming existing baselines. By reducing reliance on human supervision, UBER broadens the applicability of RL to real-world scenarios with abundant reward-free data.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
403,656
2303.03428
Towards provably efficient quantum algorithms for large-scale machine-learning models
Large machine learning models are revolutionary technologies of artificial intelligence whose bottlenecks include huge computational expenses, power, and time used both in the pre-training and fine-tuning process. In this work, we show that fault-tolerant quantum computing could possibly provide provably efficient resolutions for generic (stochastic) gradient descent algorithms, scaling as O(T^2 polylog(n)), where n is the size of the models and T is the number of iterations in the training, as long as the models are both sufficiently dissipative and sparse, with small learning rates. Based on earlier efficient quantum algorithms for dissipative differential equations, we find and prove that similar algorithms work for (stochastic) gradient descent, the primary algorithm for machine learning. In practice, we benchmark instances of large machine learning models from 7 million to 103 million parameters. We find that, in the context of sparse training, a quantum enhancement is possible at the early stage of learning after model pruning, motivating a sparse parameter download and re-upload scheme. Our work shows solidly that fault-tolerant quantum algorithms could potentially contribute to most state-of-the-art, large-scale machine-learning problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
349,728
2006.12986
FNA++: Fast Network Adaptation via Parameter Remapping and Architecture Search
Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as shown by recent neural architecture search (NAS) research for detection and segmentation. One major challenge though is that ImageNet pre-training of the search space representation (a.k.a. super network) or the searched networks incurs huge computational cost. In this paper, we propose a Fast Network Adaptation (FNA++) method, which can adapt both the architecture and parameters of a seed network (e.g. an ImageNet pre-trained network) to become a network with different depths, widths, or kernel sizes via a parameter remapping technique, making it possible to use NAS for segmentation and detection tasks a lot more efficiently. In our experiments, we apply FNA++ on MobileNetV2 to obtain new networks for semantic segmentation, object detection, and human pose estimation that clearly outperform existing networks designed both manually and by NAS. We also implement FNA++ on ResNets and NAS networks, which demonstrates a great generalization ability. The total computation cost of FNA++ is significantly less than SOTA segmentation and detection NAS approaches: 1737x less than DPC, 6.8x less than Auto-DeepLab, and 8.0x less than DetNAS. A series of ablation studies are performed to demonstrate the effectiveness, and detailed analysis is provided for more insights into the working mechanism. Codes are available at https://github.com/JaminFong/FNA.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
183,767
1510.03059
Influence of network topology on cooperative problem-solving systems
The idea of a collective intelligence behind the complex natural structures built by organisms suggests that the organization of social networks is selected so as to optimize problem-solving competence at the group-level. Here we study the influence of the social network topology on the performance of a group of agents whose task is to locate the global maxima of NK fitness landscapes. Agents cooperate by broadcasting messages informing on their fitness and use this information to imitate the fittest agent in their influence networks. In the case those messages convey accurate information on the proximity of the solution (i.e., for smooth fitness landscapes) we find that high connectivity as well as centralization boost the group performance. For rugged landscapes, however, these characteristics are beneficial for small groups only. For large groups, it is advantageous to slow down the information transmission through the network to avoid local maximum traps. Long-range links and modularity have marginal effects on the performance of the group, except for a very narrow region of the model parameters.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
47,805
1907.12698
EVO* 2019 -- Late-Breaking Abstracts Volume
This volume contains the Late-Breaking Abstracts submitted to the EVO* 2019 Conference, which took place in Leipzig from 24 to 26 April. These papers were presented as short talks and also at the poster session of the conference, together with other regular submissions. All of them present ongoing research and preliminary results investigating the application of different Evolutionary Computation approaches to different problems, most of them real-world ones.
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
140,175
2006.12870
NLPContributions: An Annotation Scheme for Machine Reading of Scholarly Contributions in Natural Language Processing Literature
We describe an annotation initiative to capture the scholarly contributions in natural language processing (NLP) articles, particularly, for the articles that discuss machine learning (ML) approaches for various information extraction tasks. We develop the annotation task based on a pilot annotation exercise on 50 NLP-ML scholarly articles presenting contributions to five information extraction tasks 1. machine translation, 2. named entity recognition, 3. question answering, 4. relation classification, and 5. text classification. In this article, we describe the outcomes of this pilot annotation phase. Through the exercise we have obtained an annotation methodology; and found ten core information units that reflect the contribution of the NLP-ML scholarly investigations. The resulting annotation scheme we developed based on these information units is called NLPContributions. The overarching goal of our endeavor is four-fold: 1) to find a systematic set of patterns of subject-predicate-object statements for the semantic structuring of scholarly contributions that are more or less generically applicable for NLP-ML research articles; 2) to apply the discovered patterns in the creation of a larger annotated dataset for training machine readers of research contributions; 3) to ingest the dataset into the Open Research Knowledge Graph (ORKG) infrastructure as a showcase for creating user-friendly state-of-the-art overviews; 4) to integrate the machine readers into the ORKG to assist users in the manual curation of their respective article contributions. We envision that the NLPContributions methodology engenders a wider discussion on the topic toward its further refinement and development. Our pilot annotated dataset of 50 NLP-ML scholarly articles according to the NLPContributions scheme is openly available to the research community at https://doi.org/10.25835/0019761.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
true
183,738
1806.08437
Star Shape Prior in Fully Convolutional Networks for Skin Lesion Segmentation
Semantic segmentation is an important preliminary step towards automatic medical image interpretation. Recently deep convolutional neural networks have become the first choice for the task of pixel-wise class prediction. While incorporating prior knowledge about the structure of target objects has proven effective in traditional energy-based segmentation approaches, there has not been a clear way for encoding prior knowledge into deep learning frameworks. In this work, we propose a new loss term that encodes the star shape prior into the loss function of an end-to-end trainable fully convolutional network (FCN) framework. We penalize non-star shape segments in FCN prediction maps to guarantee a global structure in segmentation results. Our experiments demonstrate the advantage of regularizing FCN parameters by the star shape prior and our results on the ISBI 2017 skin segmentation challenge data set achieve the first rank in the segmentation task among $21$ participating teams.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
101,152
2205.10651
Tensor Shape Search for Optimum Data Compression
Various tensor decomposition methods have been proposed for data compression. In real-world applications of tensor decomposition, selecting the tensor shape for the given data poses a challenge, and the shape of the tensor may affect the error and the compression ratio. In this work, we study the effect of the tensor shape on the tensor decomposition and propose an optimization model to find an optimum shape for the tensor train (TT) decomposition. The proposed optimization model maximizes the compression ratio of the TT decomposition given an error bound. We implement a genetic algorithm (GA) linked with the TT-SVD algorithm to solve the optimization model. We apply the proposed method to the compression of RGB images. The results demonstrate the effectiveness of the proposed evolutionary tensor shape search for the TT decomposition.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
297,797
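A small sketch of the objective such a shape search would maximize (for the tensor-shape-search record above): the compression ratio of a TT representation for a candidate reshaping of the data, given TT ranks; in practice TT-SVD would pick the ranks from the error bound. The uniform ranks and the image-size example below are assumptions for illustration only, not the paper's experimental setup.

```python
import numpy as np


def tt_param_count(shape, ranks):
    """Number of parameters in a TT representation with cores of size r_{k-1} x n_k x r_k.
    `ranks` has length len(shape)+1 with ranks[0] = ranks[-1] = 1."""
    return sum(ranks[k] * shape[k] * ranks[k + 1] for k in range(len(shape)))


def compression_ratio(shape, ranks):
    """The quantity a shape search would maximize: original size / TT size."""
    return np.prod(shape) / tt_param_count(shape, ranks)


# Example: the same 256x256x3 RGB image (196,608 entries) folded into different TT shapes.
for shape in [(256, 256, 3), (16, 16, 16, 16, 3), (4,) * 8 + (3,)]:
    ranks = [1] + [8] * (len(shape) - 1) + [1]   # illustrative uniform TT ranks
    print(shape, round(compression_ratio(shape, ranks), 2))
```

The print-out makes the point of the abstract concrete: the same data folded into different shapes yields very different compression ratios even at fixed ranks, which is why the shape itself is worth optimizing.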
1608.02316
Interdependency of Transmission and Distribution Pricing
Distribution markets are among the prospects being considered for the future of power systems. They would facilitate the integration of distributed energy resources (DERs) and microgrids via a market mechanism and enable them to monetize services they can provide. This paper follows the ongoing work in implementing the distribution market operator (DMO) concept and its clearing and settlement procedures, and focuses on investigating the pricing conducted by the DMO. The distribution locational marginal prices (D-LMPs) and their relationship with the transmission system locational marginal prices (T-LMPs) are the subject of this paper. Numerical simulations on a test distribution system exhibit the benefits and drawbacks of the proposed DMO pricing processes.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
59,549
2003.01313
Unveiling Coordinated Groups Behind White Helmets Disinformation
Propaganda, disinformation, manipulation, and polarization are the modern illnesses of a society increasingly dependent on social media as a source of news. In this paper, we explore the disinformation campaign, sponsored by Russia and allies, against the Syria Civil Defense (a.k.a. the White Helmets). We unveil coordinated groups using automatic retweets and content duplication to promote narratives and/or accounts. The results also reveal distinct promoting strategies, ranging from the small groups sharing the exact same text repeatedly, to complex "news website factories" where dozens of accounts synchronously spread the same news from multiple sites.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
166,612
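A rough, illustrative sketch of one coordination signal the White Helmets record above mentions, content duplication: flag account pairs that repeatedly post identical text. The threshold and the text normalization are assumptions, not the study's detection pipeline.

```python
from collections import defaultdict
from itertools import combinations


def coordinated_pairs(posts, min_shared=5):
    """posts: iterable of (account, text). Returns account pairs that shared at least
    `min_shared` identical texts -- a crude content-duplication coordination signal."""
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text.strip().lower()].add(account)

    pair_counts = defaultdict(int)
    for accounts in by_text.values():
        for a, b in combinations(sorted(accounts), 2):
            pair_counts[(a, b)] += 1

    return {pair: n for pair, n in pair_counts.items() if n >= min_shared}
```

In a real analysis the pairs would then be clustered into groups (e.g., connected components of the pair graph) and inspected for the retweet-synchronization patterns the abstract describes.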
1012.4981
Local Minima of a Quadratic Binary Functional with a Quasi-Hebbian Connection Matrix
The local minima of a quadratic functional depending on binary variables are discussed. An arbitrary connection matrix can be presented in the form of a quasi-Hebbian expansion where each pattern is supplied with its own individual weight. For such matrices, statistical physics methods allow one to derive an equation describing the local minima of the functional. A model where only one weight differs from the other ones is discussed in detail. In this case the equation can be solved analytically. The critical values of the weight, for which the energy landscape is reconstructed, are obtained. The obtained results are confirmed by computer simulations.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
8,628
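A numerical sketch consistent with the setting of the record above: build a quasi-Hebbian matrix in which one pattern carries a larger weight than the others, then reach a local minimum of the quadratic binary functional by greedy single-spin-flip descent. The system size and the weight value are illustrative choices, not the paper's analytical treatment.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20

# Quasi-Hebbian matrix: each random +-1 pattern gets its own weight;
# here a single pattern carries a larger weight than the rest (the case studied in detail).
patterns = rng.choice([-1, 1], size=(P, N))
weights = np.ones(P)
weights[0] = 3.0
J = (weights[:, None, None] * patterns[:, :, None] * patterns[:, None, :]).sum(axis=0)
np.fill_diagonal(J, 0.0)


def energy(s):
    return -0.5 * s @ J @ s


# Greedy single-spin-flip descent: flip any spin whose flip lowers the energy
# (delta E = 2 s_i h_i with h_i the local field); stop when no single flip helps,
# which is by definition a local minimum of the functional.
s = rng.choice([-1, 1], size=N).astype(float)
improved = True
while improved:
    improved = False
    local_field = J @ s
    for i in range(N):
        if s[i] * local_field[i] < 0:
            s[i] = -s[i]
            local_field = J @ s
            improved = True

print("local-minimum energy per spin:", energy(s) / N)
```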
2109.15294
Targeted Ads and/as Racial Discrimination: Exploring Trends in New York City Ads for College Scholarships
This paper uses and recycles data from a third-party digital marketing firm to explore how targeted ads contribute to larger systems of racial discrimination. Focusing on a case study of targeted ads for educational searches in New York City, it discusses data visualizations and mappings of trends in the advertisements' targeted populations alongside U.S. census data corresponding to these target zip codes. We summarize and reflect on the results to consider how internet platforms systemically and differentially target advertising messages to users based on race; the tangible harms and risks that result from an internet traffic system designed to discriminate; and finally, novel approaches and frameworks for further auditing systems amid opaque, black-boxed processes forestalling transparency and accountability.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
258,246
2401.11731
Fast and Scalable Network Slicing by Integrating Deep Learning with Lagrangian Methods
Network slicing is a key technique in 5G and beyond for efficiently supporting diverse services. Many network slicing solutions rely on deep learning to manage complex and high-dimensional resource allocation problems. However, deep learning models suffer from limited generalization and adaptability to dynamic slicing configurations. In this paper, we propose a novel framework that integrates constrained optimization methods and deep learning models, resulting in strong generalization and superior approximation capability. Based on the proposed framework, we design a new neural-assisted algorithm to allocate radio resources to slices to maximize the network utility under inter-slice resource constraints. The algorithm exhibits high scalability, accommodating varying numbers of slices and slice configurations with ease. We implement the proposed solution in a system-level network simulator and evaluate its performance extensively by comparing it to state-of-the-art solutions, including deep reinforcement learning approaches. The numerical results show that our solution obtains near-optimal quality-of-service satisfaction and promising generalization performance under different network slicing scenarios.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
423,128
2101.11093
Non-Monotone Energy-Aware Information Gathering for Heterogeneous Robot Teams
This paper considers the problem of planning trajectories for a team of sensor-equipped robots to reduce uncertainty about a dynamical process. Optimizing the trade-off between information gain and energy cost (e.g., control effort, distance travelled) is desirable but leads to a non-monotone objective function in the set of robot trajectories. Therefore, common multi-robot planning algorithms based on techniques such as coordinate descent lose their performance guarantees. Methods based on local search provide performance guarantees for optimizing a non-monotone submodular function, but require access to all robots' trajectories, making them unsuitable for distributed execution. This work proposes a distributed planning approach based on local search and shows how lazy/greedy methods can be adopted to reduce the computation and communication of the approach. We demonstrate the efficacy of the proposed method by coordinating robot teams composed of both ground and aerial vehicles with different sensing/control profiles and evaluate the algorithm's performance in two target tracking scenarios. Compared to the naive distributed execution of local search, our approach saves up to 60% communication and 80--92% computation on average when coordinating up to 10 robots, while outperforming the coordinate descent based algorithm in achieving a desirable trade-off between sensing and energy cost.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
217,153
2206.10968
Multiple-Access Channel Coding with Non-Signaling Correlations
We address the problem of coding for classical multiple-access channels (MACs) with the assistance of non-signaling correlations between parties. It is well-known that non-signaling assistance does not change the capacity of classical point-to-point channels. However, it was recently observed that one can construct MACs from two-player non-local games while relating the winning probability of the game to the capacity of the MAC. By considering games for which entanglement increases the winning probability, this shows that for some specific kinds of channels, entanglement between the senders can increase the capacity. We make several contributions towards understanding the capacity region for MACs with the assistance of non-signaling correlations. We develop a linear program computing the optimal success probability for coding over $n$ copies of a MAC $W$ with size growing polynomially in $n$. Solving this linear program allows us to achieve inner bounds for MACs. Applying this method to the binary adder channel, we show that using non-signaling assistance, the sum-rate $1.5425$ can be reached even with zero error, which beats the maximum sum-rate capacity of $1.5$ in the unassisted case. For noisy channels, where the zero-error non-signaling assisted capacity region is trivial, we can use concatenated codes to obtain achievable points in the capacity region. Applied to a noisy version of the binary adder channel, we show that non-signaling assistance still improves the sum-rate capacity. Complementing these achievability results, we give an outer bound on the non-signaling assisted capacity region that has the same expression as the unassisted region except that the channel inputs are not required to be independent. Finally, we show that the capacity region with non-signaling assistance shared only between each sender and the receiver independently is the same as without assistance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
304,102
2103.02761
Comparing the Value of Labeled and Unlabeled Data in Method-of-Moments Latent Variable Estimation
Labeling data for modern machine learning is expensive and time-consuming. Latent variable models can be used to infer labels from weaker, easier-to-acquire sources operating on unlabeled data. Such models can also be trained using labeled data, presenting a key question: should a user invest in few labeled or many unlabeled points? We answer this via a framework centered on model misspecification in method-of-moments latent variable estimation. Our core result is a bias-variance decomposition of the generalization error, which shows that the unlabeled-only approach incurs additional bias under misspecification. We then introduce a correction that provably removes this bias in certain cases. We apply our decomposition framework to three scenarios -- well-specified, misspecified, and corrected models -- to 1) choose between labeled and unlabeled data and 2) learn from their combination. We observe theoretically and with synthetic experiments that for well-specified models, labeled points are worth a constant factor more than unlabeled points. With misspecification, however, their relative value is higher due to the additional bias but can be reduced with correction. We also apply our approach to study real-world weak supervision techniques for dataset construction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
223,055
2402.02056
AnthroScore: A Computational Linguistic Measure of Anthropomorphism
Anthropomorphism, or the attribution of human-like characteristics to non-human entities, has shaped conversations about the impacts and possibilities of technology. We present AnthroScore, an automatic metric of implicit anthropomorphism in language. We use a masked language model to quantify how non-human entities are implicitly framed as human by the surrounding context. We show that AnthroScore corresponds with human judgments of anthropomorphism and dimensions of anthropomorphism described in social science literature. Motivated by concerns of misleading anthropomorphism in computer science discourse, we use AnthroScore to analyze 15 years of research papers and downstream news articles. In research papers, we find that anthropomorphism has steadily increased over time, and that papers related to language models have the most anthropomorphism. Within ACL papers, temporal increases in anthropomorphism are correlated with key neural advancements. Building upon concerns of scientific misinformation in mass media, we identify higher levels of anthropomorphism in news headlines compared to the research papers they cite. Since AnthroScore is lexicon-free, it can be directly applied to a wide range of text sources.
false
false
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
426,361
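A hedged sketch of the kind of masked-language-model probe the AnthroScore record above describes: mask the entity mention and compare the probability the model assigns to human versus non-human pronouns in its place. The pronoun sets, the choice of roberta-base, and the log-odds formula here are assumptions for illustration, not the paper's exact metric.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

HUMAN = [" he", " she"]      # pronouns treated as human framing (assumption)
NONHUMAN = [" it"]           # pronoun treated as non-human framing (assumption)


def human_framing_score(sentence_with_slot):
    """Log-odds of human vs. non-human pronouns at the masked entity position."""
    text = sentence_with_slot.replace("[ENT]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    probs = logits[0, mask_pos].softmax(dim=-1)
    mass = lambda words: sum(probs[tokenizer.encode(w, add_special_tokens=False)[0]] for w in words)
    return torch.log(mass(HUMAN) / mass(NONHUMAN)).item()


print(human_framing_score("The model was asked a question and [ENT] refused to answer."))
```

Higher scores would indicate stronger implicit human framing of the masked entity by the surrounding context, which is the intuition the abstract attributes to its lexicon-free metric.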
2301.10902
Efficient Hyperdimensional Computing
Hyperdimensional computing (HDC) is a method to perform classification that uses binary vectors with high dimensions and the majority rule. This approach has the potential to be energy-efficient and hence deemed suitable for resource-limited platforms due to its simplicity and massive parallelism. However, in order to achieve high accuracy, HDC sometimes uses hypervectors with tens of thousands of dimensions. This potentially negates its efficiency advantage. In this paper, we examine the necessity of such high dimensions and conduct a detailed theoretical analysis of the relationship between hypervector dimensions and accuracy. Our results demonstrate that as the dimension of the hypervectors increases, the worst-case/average-case HDC prediction accuracy with the majority rule decreases. Building on this insight, we develop HDC models that use binary hypervectors with dimensions orders of magnitude lower than those of state-of-the-art HDC models while maintaining equivalent or even improved accuracy and efficiency. For instance, on the MNIST dataset, we achieve 91.12% HDC accuracy in image classification with a dimension of only 64. Our methods perform operations that are only 0.35% of other HDC models with dimensions of 10,000. Furthermore, we evaluate our methods on ISOLET, UCI-HAR, and Fashion-MNIST datasets and investigate the limits of HDC computing.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
341,945
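A minimal sketch of majority-rule hyperdimensional classification for the HDC record above, with the hypervector dimension D exposed as the parameter whose effect the abstract analyses. It assumes integer-coded features and omits the binding step real HDC encoders use for feature positions; it is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # hypervector dimension -- the knob whose effect on accuracy the paper analyses


def bipolar_majority(vectors):
    """Majority rule: +1 where the column sum is non-negative, else -1."""
    return np.where(np.asarray(vectors).sum(axis=0) >= 0, 1, -1)


def train(X, y):
    """X: integer feature matrix (indices into a random item memory), y: class labels."""
    item_memory = rng.choice([-1, 1], size=(int(X.max()) + 1, D))
    encode = lambda x: bipolar_majority(item_memory[x])
    prototypes = {c: bipolar_majority([encode(x) for x in X[y == c]]) for c in np.unique(y)}
    return item_memory, prototypes


def predict(x, item_memory, prototypes):
    h = bipolar_majority(item_memory[x])
    return max(prototypes, key=lambda c: int(h @ prototypes[c]))  # most similar class prototype


# Tiny synthetic usage: 3-valued features, two classes.
X = rng.integers(0, 3, size=(200, 16))
y = (X.sum(axis=1) > 16).astype(int)
im, protos = train(X, y)
print("train accuracy:", np.mean([predict(x, im, protos) == t for x, t in zip(X, y)]))
```

Rerunning the same sketch with D = 64 versus D = 10,000 is the kind of comparison the abstract's dimension-versus-accuracy analysis formalizes.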
2403.05860
On the equivalence of direct and indirect data-driven predictive control approaches
Recently, several direct Data-Driven Predictive Control (DDPC) methods have been proposed, advocating the possibility of designing predictive controllers from historical input-output trajectories without the need to identify a model. In this work, we show that these approaches are equivalent to an indirect approach. Reformulating the direct methods in terms of estimated parameters and covariance matrices allows us to give new insights into how they work in comparison with, for example, Subspace Predictive Control (SPC). In particular, we show that for unconstrained problems the direct methods are equivalent to SPC with a reduced weight on the tracking cost. Via a numerical experiment, motivated by the reformulation, we also illustrate why the performance of direct DDPC methods with fixed regularization tends to degrade as the number of training samples increases.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
436,192
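A compact sketch of the indirect baseline referred to in the data-driven predictive control record above, in the spirit of Subspace Predictive Control: build Hankel matrices from historical input-output data, fit a multi-step predictor by least squares, and use it to predict future outputs from recent data and a candidate input sequence. The window depths, variable names, and the toy system are assumptions.

```python
import numpy as np


def hankel(w, L):
    """Stack length-L windows of the signal w (T x m) as columns: shape (L*m, T-L+1)."""
    T = w.shape[0]
    return np.hstack([w[i:i + L].reshape(-1, 1) for i in range(T - L + 1)])


def spc_predictor(u, y, past, future):
    """Least-squares multi-step predictor: y_future ~ Theta @ [u_past; y_past; u_future]."""
    L = past + future
    Up, Uf = np.vsplit(hankel(u, L), [past * u.shape[1]])
    Yp, Yf = np.vsplit(hankel(y, L), [past * y.shape[1]])
    Z = np.vstack([Up, Yp, Uf])
    Theta, *_ = np.linalg.lstsq(Z.T, Yf.T, rcond=None)
    return Theta.T


def predict(Theta, u_past, y_past, u_future):
    z = np.concatenate([u_past.ravel(), y_past.ravel(), u_future.ravel()])
    return Theta @ z


# Toy first-order system to exercise the predictor.
rng = np.random.default_rng(0)
T = 400
u = rng.standard_normal((T, 1))
y = np.zeros((T, 1))
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()

Theta = spc_predictor(u, y, past=5, future=10)
print(predict(Theta, u[:5], y[:5], u[5:15]))  # predicted y over the next 10 steps
```

A predictive controller would then optimize `u_future` against this predictor; the abstract's point is that the direct, regularized data-driven formulations can be rewritten in exactly this estimated-parameter form.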
1505.06225
Three-Phase Dynamic Simulation of Power Systems Using Combined Transmission and Distribution System Models
This paper presents a new method for studying electromechanical transients in power systems using three-phase, combined transmission and distribution models (hybrid models). The methodology models individual phases of an electric network and the associated unbalance in load and generation. Therefore, the impacts of load unbalance, single-phase distributed generation, and line impedance unbalance on electromechanical transients can be studied without using electromagnetic transient simulation (EMTP) programs. The implementation of this methodology in software is called the Three Phase Dynamics Analyzer (TPDA). Case studies included in the paper demonstrate the accuracy of TPDA and its ability to simulate electromechanical transients in hybrid models. TPDA has the potential for providing electric utilities and power system planners with more accurate assessment of system stability than traditional dynamic simulation software that assumes a balanced network topology.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
43,387
1607.03202
Rapid Prediction of Player Retention in Free-to-Play Mobile Games
Predicting and improving player retention is crucial to the success of mobile Free-to-Play games. This paper explores the problem of rapid retention prediction in this context. Heuristic modeling approaches are introduced as a way of building simple rules for predicting short-term retention. Compared to common classification algorithms, our heuristic-based approach achieves reasonable and comparable performance using information from the first session, day, and week of player activity.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
58,470
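A toy example of the heuristic-rule idea described in the retention record above, compared against a logistic-regression baseline on synthetic first-session data. The feature names, thresholds, and data-generating process are all made up for illustration and are not the paper's rules or dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def heuristic_rule(first_session_minutes, rounds_played, min_minutes=10, min_rounds=3):
    """Single-rule prediction: a player is retained if the first session was long enough
    and they completed a few rounds. Threshold values are illustrative."""
    return (first_session_minutes >= min_minutes) & (rounds_played >= min_rounds)


# Synthetic first-session data and a noisy retention label.
rng = np.random.default_rng(0)
n = 5000
minutes = rng.exponential(12, n)
rounds = rng.poisson(4, n)
p_retain = 1 / (1 + np.exp(-(0.08 * minutes + 0.4 * rounds - 2.5)))
retained = rng.random(n) < p_retain

X = np.column_stack([minutes, rounds])
clf = LogisticRegression().fit(X, retained)
print("heuristic AUC:", roc_auc_score(retained, heuristic_rule(minutes, rounds)))
print("logistic  AUC:", roc_auc_score(retained, clf.predict_proba(X)[:, 1]))
```

The comparison mirrors the abstract's claim: a transparent one-line rule built from first-session activity can come reasonably close to a trained classifier.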
2203.04299
Plug-and-play Shape Refinement Framework for Multi-site and Lifespan Brain Skull Stripping
Skull stripping is a crucial prerequisite step in the analysis of brain magnetic resonance images (MRI). Although many excellent works or tools have been proposed, they suffer from low generalization capability. For instance, a model trained on a dataset with specific imaging parameters cannot be applied well to other datasets with different imaging parameters. In particular, for lifespan datasets, a model trained on an adult dataset is not applicable to an infant dataset due to the large domain difference. To address this issue, numerous methods have been proposed, among which domain adaptation based on feature alignment is the most common. Unfortunately, this approach has some inherent shortcomings: it needs to be retrained for each new domain and requires concurrent access to the input images of both domains. In this paper, we design a plug-and-play shape refinement (PSR) framework for multi-site and lifespan skull stripping. To deal with the domain shift between multi-site lifespan datasets, we take advantage of the brain shape prior, which is invariant to imaging parameters and ages. Experiments demonstrate that our framework can outperform the state-of-the-art methods on multi-site lifespan datasets.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
284,414
2006.07826
Few-shot Object Detection on Remote Sensing Images
In this paper, we deal with the problem of object detection on remote sensing images. Previous work has developed numerous deep CNN-based methods for object detection on remote sensing images and reports remarkable achievements in detection performance and efficiency. However, current CNN-based methods mostly require a large number of annotated samples to train deep neural networks and tend to have limited generalization abilities for unseen object categories. In this paper, we introduce a few-shot learning-based method for object detection on remote sensing images where only a few annotated samples are provided for the unseen object categories. More specifically, our model contains three main components: a meta feature extractor that learns to extract feature representations from input images, a reweighting module that learns to adaptively assign different weights for each feature representation from the support images, and a bounding box prediction module that carries out object detection on the reweighted feature maps. We build our few-shot object detection model upon the YOLOv3 architecture and develop a multi-scale object detection framework. Experiments on two benchmark datasets demonstrate that, with only a few annotated samples, our model can still achieve a satisfying detection performance on remote sensing images, and the performance of our model is significantly better than the well-established baseline models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
181,963
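An illustrative PyTorch sketch of a feature-reweighting module of the kind the few-shot detection record above describes: each support feature map is turned into a class-specific channel-weight vector that modulates the query feature map. The pooling-plus-1x1-convolution design is an assumption for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn


class Reweighting(nn.Module):
    """Turn each support feature map into a class-specific channel-weight vector
    and apply it to the query feature map, one reweighted map per class."""

    def __init__(self, channels):
        super().__init__()
        self.to_weights = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # global context of the support feature map
            nn.Conv2d(channels, channels, 1),   # map context to per-channel weights
            nn.Sigmoid(),
        )

    def forward(self, query_feat, support_feat):
        # query_feat: (B, C, H, W); support_feat: (N_classes, C, h, w)
        w = self.to_weights(support_feat)               # (N_classes, C, 1, 1)
        return query_feat.unsqueeze(1) * w.unsqueeze(0)  # (B, N_classes, C, H, W)


# Shape check: 2 query images, 5 support classes, 256 channels.
query = torch.randn(2, 256, 32, 32)
support = torch.randn(5, 256, 16, 16)
print(Reweighting(256)(query, support).shape)  # torch.Size([2, 5, 256, 32, 32])
```

A downstream prediction head would then run detection on each class-specific reweighted map, which matches the three-component structure sketched in the abstract.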
2402.02678
Counterfactual Explanations of Black-box Machine Learning Models using Causal Discovery with Applications to Credit Rating
Explainable artificial intelligence (XAI) has helped elucidate the internal mechanisms of machine learning algorithms, bolstering their reliability by demonstrating the basis of their predictions. Several XAI models consider causal relationships to explain models by examining the input-output relationships of prediction models and the dependencies between features. The majority of these models have based their explanations on counterfactual probabilities, assuming that the causal graph is known. However, this assumption complicates the application of such models to real data, given that the causal relationships between features are unknown in most cases. Thus, this study proposed a novel XAI framework that relaxed the constraint that the causal graph is known. This framework leveraged counterfactual probabilities and additional prior information on causal structure, facilitating the integration of a causal graph estimated through causal discovery methods and a black-box classification model. Furthermore, explanatory scores were estimated based on counterfactual probabilities. Numerical experiments conducted employing artificial data confirmed the possibility of estimating the explanatory score more accurately than in the absence of a causal graph. Finally, as an application to real data, we constructed a classification model of credit ratings assigned by Shiga Bank, Shiga prefecture, Japan. We demonstrated the effectiveness of the proposed method in cases where the causal graph is unknown.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
426,679
1811.05370
Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents
User interaction with voice-powered agents generates large amounts of unlabeled utterances. In this paper, we explore techniques to efficiently transfer the knowledge from these unlabeled utterances to improve model performance on Spoken Language Understanding (SLU) tasks. We use Embeddings from Language Models (ELMo) to take advantage of unlabeled data by learning contextualized word representations. Additionally, we propose ELMo-Light (ELMoL), a faster and simpler unsupervised pre-training method for SLU. Our findings suggest that unsupervised pre-training on a large corpus of unlabeled utterances leads to significantly better SLU performance compared to training from scratch, and that it can even outperform conventional supervised transfer. Additionally, we show that the gains from unsupervised transfer techniques can be further improved by supervised transfer. The improvements are more pronounced in low-resource settings, and when using only 1000 labeled in-domain samples, our techniques match the performance of training from scratch on 10-15x more labeled in-domain data.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
113,299