Dataset schema:
- id: string (9 to 16 chars)
- title: string (4 to 278 chars)
- abstract: string (3 to 4.08k chars)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)

2307.10954
Soft-tissue Driven Craniomaxillofacial Surgical Planning
In CMF surgery, the planning of bony movement to achieve a desired facial outcome is a challenging task. Current bone driven approaches focus on normalizing the bone with the expectation that the facial appearance will be corrected accordingly. However, due to the complex non-linear relationship between bony structure and facial soft-tissue, such bone-driven methods are insufficient to correct facial deformities. Despite efforts to simulate facial changes resulting from bony movement, surgical planning still relies on iterative revisions and educated guesses. To address these issues, we propose a soft-tissue driven framework that can automatically create and verify surgical plans. Our framework consists of a bony planner network that estimates the bony movements required to achieve the desired facial outcome and a facial simulator network that can simulate the possible facial changes resulting from the estimated bony movement plans. By combining these two models, we can verify and determine the final bony movement required for planning. The proposed framework was evaluated using a clinical dataset, and our experimental results demonstrate that the soft-tissue driven approach greatly improves the accuracy and efficacy of surgical planning when compared to the conventional bone-driven approach.
Categories: cs.RO, cs.CV
Index: 380,751

1801.02753
SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis
Synthesizing realistic images from human drawn sketches is a challenging problem in computer graphics and vision. Existing approaches either need exact edge maps, or rely on retrieval of existing photographs. In this work, we propose a novel Generative Adversarial Network (GAN) approach that synthesizes plausible images from 50 categories including motorcycles, horses and couches. We demonstrate a data augmentation technique for sketches which is fully automatic, and we show that the augmented data is helpful to our task. We introduce a new network building block suitable for both the generator and discriminator which improves the information flow by injecting the input image at multiple scales. Compared to state-of-the-art image translation methods, our approach generates more realistic images and achieves significantly higher Inception Scores.
Categories: cs.CV
Index: 87,977

2403.17358
Addressing Myopic Constrained POMDP Planning with Recursive Dual Ascent
Lagrangian-guided Monte Carlo tree search with global dual ascent has been applied to solve large constrained partially observable Markov decision processes (CPOMDPs) online. In this work, we demonstrate that these global dual parameters can lead to myopic action selection during exploration, ultimately leading to suboptimal decision making. To address this, we introduce history-dependent dual variables that guide local action selection and are optimized with recursive dual ascent. We empirically compare the performance of our approach on a motivating toy example and two large CPOMDPs, demonstrating improved exploration, and ultimately, safer outcomes.
Categories: cs.AI
Index: 441,428

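The dual-ascent mechanism behind the CPOMDP abstract above (a Lagrange multiplier raised when average cost exceeds the budget, lowered otherwise, and projected to stay non-negative) can be sketched in a few lines. The function name, step size, and budget argument are illustrative assumptions, not the paper's recursive, history-dependent implementation.

```python
def dual_ascent_step(lam, avg_cost, budget, lr=0.1):
    """Projected dual ascent on a cost-constraint multiplier: raise lam
    when the running cost exceeds the budget, lower it otherwise, and
    clamp at zero (multipliers of inequality constraints are non-negative)."""
    return max(0.0, lam + lr * (avg_cost - budget))
```

In Lagrangian-guided planning, this multiplier weights cost against reward during tree search; the paper's contribution is to maintain such multipliers per action-observation history and update them recursively rather than keeping a single global one.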
2410.21300
Contrastive Learning with Auxiliary User Detection for Identifying Activities
Human Activity Recognition (HAR) is essential in ubiquitous computing, with far-reaching real-world applications. While recent SOTA HAR research has demonstrated impressive performance, some key aspects remain under-explored. Firstly, HAR can be both highly contextualized and personalized. However, prior work has predominantly focused on being Context-Aware (CA) while largely ignoring the necessity of being User-Aware (UA). We argue that addressing the impact of innate user action-performing differences is equally crucial as considering external contextual environment settings in HAR tasks. Secondly, being user-aware makes the model acknowledge user discrepancies but does not necessarily guarantee mitigation of these discrepancies, i.e., unified predictions under the same activities. There is a need for a methodology that explicitly enforces closer (different user, same activity) representations. To bridge this gap, we introduce CLAUDIA, a novel framework designed to address these issues. Specifically, we expand the contextual scope of the CA-HAR task by integrating User Identification (UI) within the CA-HAR framework, jointly predicting both CA-HAR and UI in a new task called User and Context-Aware HAR (UCA-HAR). This approach enriches personalized and contextual understanding by jointly learning user-invariant and user-specific patterns. Inspired by SOTA designs in the visual domain, we introduce a supervised contrastive loss objective on instance-instance pairs to enhance model efficacy and improve learned feature quality. Evaluation across three real-world CA-HAR datasets reveals substantial performance enhancements, with average improvements ranging from 5.8% to 14.1% in Matthews Correlation Coefficient and 3.0% to 7.2% in Macro F1 score.
Categories: cs.LG, cs.CV
Index: 503,183

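The supervised contrastive objective on instance-instance pairs mentioned in the CLAUDIA abstract above can be sketched as follows: for each anchor embedding, same-label instances act as positives and all others as negatives under an InfoNCE-style loss. This is a generic NumPy sketch under assumed shapes, not the paper's implementation.

```python
import numpy as np

def sup_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: for each anchor i, every other
    same-label instance is a positive; loss per positive pair is
    log-sum-exp over all non-anchor similarities minus the positive's
    similarity. Embeddings are unit-normalized, similarities scaled by tau."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(z)
    total, terms = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        logits = sim[i, others]
        m = logits.max()
        lse = m + np.log(np.exp(logits - m).sum())   # stable log-sum-exp
        for k, j in enumerate(others):
            if labels[j] == labels[i]:               # positive pair
                total += lse - logits[k]
                terms += 1
    return total / max(terms, 1)
```

Embeddings clustered consistently with their labels yield a lower loss than the same embeddings with mismatched labels, which is exactly the "closer (different user, same activity) representations" pressure the abstract describes.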
2208.03905
Towards Analytical Electromagnetic Models for Reconfigurable Intelligent Surfaces
Physically accurate and mathematically tractable models are presented to characterize scattering and reflection properties of reconfigurable intelligent surfaces (RISs). We take continuous and discrete strategies to model a single patch and patch array and their interactions with multiple incident electromagnetic (EM) waves. The proposed models consider the effect of the incident and scattered angles, polarization features, and the topology and geometry of RISs. Particularly, a simple system of linear equations can describe the multiple-input multiple-output (MIMO) behaviors of RISs under reasonable assumptions. It can serve as a fundamental model for analyzing and optimizing the performance of RIS-aided systems in the far-field regime. The proposed models are employed to identify the advantages and limitations of three typical configurations. One important finding is that complicated beam-reshaping functionality cannot be provided by popular phase-compensation configurations. A possible solution is the simultaneous configuration of the collecting area and phase shifting. Numerical simulations verify the effectiveness of the proposed configuration schemes.
Categories: cs.SY
Index: 311,936

2304.04554
Use the Detection Transformer as a Data Augmenter
Detection Transformer (DETR) is a Transformer architecture based object detection model. In this paper, we demonstrate that it can also be used as a data augmenter. We term our approach DETR-assisted CutMix, or DeMix for short. DeMix builds on CutMix, a simple yet highly effective data augmentation technique that has gained popularity in recent years. CutMix improves model performance by cutting and pasting a patch from one image onto another, yielding a new image. The corresponding label for this new example is specified as the weighted average of the original labels, where the weight is proportional to the area of the patch. CutMix selects a random patch to be cut. In contrast, DeMix elaborately selects a semantically rich patch, located by a pre-trained DETR. The label of the new image is specified in the same way as in CutMix. Experimental results on benchmark datasets for image classification demonstrate that DeMix significantly outperforms prior data augmentation methods, including CutMix. Our code is available at https://github.com/ZJLAB-AMMI/DeMix.
Categories: cs.CV
Index: 357,262

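The CutMix mechanism the DeMix abstract builds on (paste a patch from one image onto another and weight the labels by patch area) can be sketched as below. The random-box sampling and array shapes are illustrative assumptions; DeMix itself would take the patch from a box located by a pre-trained DETR rather than at random.

```python
import numpy as np

def cutmix(img_a, lab_a, img_b, lab_b, rng=None):
    """Paste a random patch of img_b onto img_a and mix the one-hot labels
    in proportion to the patch area. (DeMix replaces the random box with a
    semantically rich box found by a pre-trained DETR.)"""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img_a.shape[:2]
    ph, pw = rng.integers(1, h + 1), rng.integers(1, w + 1)    # patch size
    y, x = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    mixed = img_a.copy()
    mixed[y:y + ph, x:x + pw] = img_b[y:y + ph, x:x + pw]
    lam = (ph * pw) / (h * w)            # label weight = patch area fraction
    return mixed, (1 - lam) * lab_a + lam * lab_b
```

The mixed label stays a valid distribution (it sums to 1 for one-hot inputs), with the donor class weighted exactly by the fraction of pixels it contributes.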
1705.05823
Real-Time Adaptive Image Compression
We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates.
Categories: cs.LG, cs.CV
Index: 73,558

2211.15924
Weakly Supervised Learning Significantly Reduces the Number of Labels Required for Intracranial Hemorrhage Detection on Head CT
Modern machine learning pipelines, in particular those based on deep learning (DL) models, require large amounts of labeled data. For classification problems, the most common learning paradigm consists of presenting labeled examples during training, thus providing strong supervision on what constitutes positive and negative samples. This constitutes a major obstacle for the development of DL models in radiology--in particular for cross-sectional imaging (e.g., computed tomography [CT] scans)--where labels must come from manual annotations by expert radiologists at the image or slice-level. These differ from examination-level annotations, which are coarser but cheaper, and could be extracted from radiology reports using natural language processing techniques. This work studies the question of what kind of labels should be collected for the problem of intracranial hemorrhage detection in brain CT. We investigate whether image-level annotations should be preferred to examination-level ones. By framing this task as a multiple instance learning problem, and employing modern attention-based DL architectures, we analyze the degree to which different levels of supervision improve detection performance. We find that strong supervision (i.e., learning with local image-level annotations) and weak supervision (i.e., learning with only global examination-level labels) achieve comparable performance in examination-level hemorrhage detection (the task of selecting the images in an examination that show signs of hemorrhage) as well as in image-level hemorrhage detection (highlighting those signs within the selected images). Furthermore, we study this behavior as a function of the number of labels available during training. Our results suggest that local labels may not be necessary at all for these tasks, drastically reducing the time and cost involved in collecting and curating datasets.
Categories: cs.CV
Index: 333,446

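The attention-based multiple instance learning setup referenced in the hemorrhage-detection abstract pools slice-level features into an examination-level prediction with learned attention weights. A minimal sketch of that pooling step, using fixed weight matrices and NumPy purely for illustration (the paper's architectures are learned end to end):

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention MIL pooling: score each instance with w^T tanh(V h_k),
    softmax the scores into attention weights a_k, and return the bag
    embedding sum_k a_k h_k. Here a bag is one CT examination and the
    instances are per-slice feature vectors."""
    scores = w @ np.tanh(V @ instances.T)     # one score per instance
    a = np.exp(scores - scores.max())         # stable softmax
    a /= a.sum()
    return a @ instances, a
```

The attention weights double as an explanation: under weak (examination-level) supervision, high-attention slices are the ones the model treats as showing hemorrhage, which is how the paper gets image-level detection from global labels.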
1809.06436
Contact-Implicit Trajectory Optimization using Orthogonal Collocation
In this paper we propose a method to improve the accuracy of trajectory optimization for dynamic robots with intermittent contact by using orthogonal collocation. Until recently, most trajectory optimization methods for systems with contacts employ mode-scheduling, which requires an a priori knowledge of the contact order and thus cannot produce complex or non-intuitive behaviors. Contact-implicit trajectory optimization methods offer a solution to this by allowing the optimization to make or break contacts as needed, but thus far have suffered from poor accuracy. Here, we combine methods from direct collocation using higher order orthogonal polynomials with contact-implicit optimization to generate trajectories with significantly improved accuracy. The key insight is to increase the order of the polynomial representation while maintaining the assumption that impact occurs over the duration of one finite element.
Categories: cs.RO
Index: 108,057

2501.14604
Inverse Evolution Data Augmentation for Neural PDE Solvers
Neural networks have emerged as promising tools for solving partial differential equations (PDEs), particularly through the application of neural operators. Training neural operators typically requires a large amount of training data to ensure accuracy and generalization. In this paper, we propose a novel data augmentation method specifically designed for training neural operators on evolution equations. Our approach utilizes insights from inverse processes of these equations to efficiently generate data from random initialization that are combined with original data. To further enhance the accuracy of the augmented data, we introduce high-order inverse evolution schemes. These schemes consist of only a few explicit computation steps, yet the resulting data pairs can be proven to satisfy the corresponding implicit numerical schemes. In contrast to traditional PDE solvers that require small time steps or implicit schemes to guarantee accuracy, our data augmentation method employs explicit schemes with relatively large time steps, thereby significantly reducing computational costs. Accuracy and efficacy experiments confirm the effectiveness of our approach. Additionally, we validate our approach through experiments with the Fourier Neural Operator and UNet on three common evolution equations that are Burgers' equation, the Allen-Cahn equation and the Navier-Stokes equation. The results demonstrate a significant improvement in the performance and robustness of the Fourier Neural Operator when coupled with our inverse evolution data augmentation method.
Categories: cs.LG
Index: 527,182

1811.07674
An Adaptive Oversampling Learning Method for Class-Imbalanced Fault Diagnostics and Prognostics
Data-driven fault diagnostics and prognostics suffer from the class-imbalance problem in industrial systems, which raises challenges for common machine learning algorithms because it becomes difficult to learn the features of the minority-class samples. Synthetic oversampling methods are commonly used to tackle these problems by generating minority-class samples to balance the distributions between the majority and minority classes. However, many oversampling methods are inappropriate in that they cannot generate effective and useful minority-class samples for different data distributions, which further complicates learning from the samples. Thus, this paper proposes a novel adaptive oversampling technique: the EM-based Weighted Minority Oversampling TEchnique (EWMOTE) for industrial fault diagnostics and prognostics. The method comprises a weighted minority sampling strategy to identify hard-to-learn, informative minority fault samples and an Expectation Maximization (EM) based imputation algorithm to generate fault samples. To validate the performance of the proposed method, experiments are conducted on two real datasets. The results show that the method achieves better performance than other oversampling-based baseline models, not only on binary-class but also on multi-class imbalance learning tasks, at different imbalance ratios.
Categories: cs.LG
Index: 113,842

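The weighted minority oversampling idea in the EWMOTE abstract (sample hard-to-learn minority points more often, then synthesize new points near them) can be sketched with a generic SMOTE-style interpolation. The function name and signature are assumptions for illustration; the paper's EM-based imputation step is not reproduced here.

```python
import numpy as np

def oversample_minority(X_min, n_new, weights=None, rng=None):
    """Generate n_new synthetic minority samples by interpolating between a
    weight-sampled minority point and another random minority point. The
    weights let hard-to-learn samples be picked more often; EWMOTE would
    additionally generate values via EM-based imputation."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X_min)
    p = None if weights is None else np.asarray(weights) / np.sum(weights)
    out = []
    for _ in range(n_new):
        i = rng.choice(n, p=p)          # informative samples drawn more often
        j = rng.choice(n)               # interpolation partner
        gap = rng.random()
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because each synthetic point is a convex combination of two real minority points, every generated sample stays inside the bounding box of the minority class.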
1912.10801
Majorization Minimization Technique for Optimally Solving Deep Dictionary Learning
The concept of deep dictionary learning has been proposed recently. Unlike shallow dictionary learning, which learns a single level of dictionary to represent the data, it uses multiple layers of dictionaries. So far, the problem could only be solved in a greedy fashion; this was achieved by learning a single layer of dictionary in each stage, where the coefficients from the previous layer acted as inputs to the subsequent layer (only the first layer used the training samples as inputs). This was not optimal; there was feedback from shallower to deeper layers but not the other way. This work proposes an optimal solution to deep dictionary learning whereby all the layers of dictionaries are solved simultaneously. We employ the Majorization Minimization approach. Experiments have been carried out on benchmark datasets; the results show that optimal learning indeed improves over greedy piecemeal learning. Comparison with other unsupervised deep learning tools (stacked denoising autoencoder, deep belief network, contractive autoencoder and K-sparse autoencoder) shows that our method surpasses their performance in both accuracy and speed.
Categories: cs.LG
Index: 158,410

2501.01333
On the Robustness of Cover Version Identification Models: A Study Using Cover Versions from YouTube
Recent advances in cover song identification have shown great success. However, models are usually tested on a fixed set of datasets which rely on the online cover song database SecondHandSongs. It is unclear how well models perform on cover songs from online video platforms, which might exhibit alterations that are not expected. In this paper, we annotate a subset of songs from YouTube sampled by a multi-modal uncertainty sampling approach and evaluate state-of-the-art models. We find that existing models achieve significantly lower ranking performance on our dataset compared to a community dataset. We additionally measure the performance of different types of versions (e.g., instrumental versions) and find several types that are particularly hard to rank. Lastly, we provide a taxonomy of alterations in cover versions on the web.
Categories: cs.SI, cs.IR, Other
Index: 522,022

2103.06453
Smartphone Impostor Detection with Behavioral Data Privacy and Minimalist Hardware Support
Impostors are attackers who take over a smartphone and gain access to the legitimate user's confidential and private information. This paper proposes a defense-in-depth mechanism to detect impostors quickly with simple Deep Learning algorithms, which can achieve better detection accuracy than the best prior work which used Machine Learning algorithms requiring computation of multiple features. Different from previous work, we then consider protecting the privacy of a user's behavioral (sensor) data by not exposing it outside the smartphone. For this scenario, we propose a Recurrent Neural Network (RNN) based Deep Learning algorithm that uses only the legitimate user's sensor data to learn his/her normal behavior. We propose to use Prediction Error Distribution (PED) to enhance the detection accuracy. We also show how a minimalist hardware module, dubbed SID for Smartphone Impostor Detector, can be designed and integrated into smartphones for self-contained impostor detection. Experimental results show that SID can support real-time impostor detection, at a very low hardware cost and energy consumption, compared to other RNN accelerators.
Categories: cs.LG, cs.CR, Other
Index: 224,305

2309.15191
Deep Learning for Optimization of Trajectories for Quadrotors
This paper presents a novel learning-based trajectory planning framework for quadrotors that combines model-based optimization techniques with deep learning. Specifically, we formulate the trajectory optimization problem as a quadratic programming (QP) problem with dynamic and collision-free constraints using piecewise trajectory segments through safe flight corridors [1]. We train neural networks to directly learn the time allocation for each segment to generate optimal smooth and fast trajectories. Furthermore, the constrained optimization problem is applied as a separate implicit layer for backpropagation in the network, for which the differential loss function can be obtained. We introduce an additional penalty function to penalize time allocations which result in solutions that violate the constraints to accelerate the training process and increase the success rate of the original optimization problem. To this end, we enable a flexible number of sequences of piece-wise trajectories by adding an extra end-of-sentence token during training. We illustrate the performance of the proposed method via extensive simulation and experimentation and show that it works in real time in diverse, cluttered environments.
Categories: cs.RO
Index: 394,874

2412.12310
Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion
This paper addresses the critical need for democratizing large language models (LLM) in the Arab world, a region that has seen slower progress in developing models comparable to state-of-the-art offerings like GPT-4 or ChatGPT 3.5, due to a predominant focus on mainstream languages (e.g., English and Chinese). One practical objective for an Arabic LLM is to utilize an Arabic-specific vocabulary for the tokenizer that could speed up decoding. However, using a different vocabulary often leads to a degradation of learned knowledge since many words are initially out-of-vocabulary (OOV) when training starts. Inspired by the vocabulary learning during Second Language (Arabic) Acquisition for humans, the released AraLLaMA employs progressive vocabulary expansion, which is implemented by a modified BPE algorithm that progressively extends the Arabic subwords in its dynamic vocabulary during training, thereby balancing the OOV ratio at every stage. The ablation study demonstrated the effectiveness of Progressive Vocabulary Expansion. Moreover, AraLLaMA achieves decent performance comparable to the best Arabic LLMs across a variety of Arabic benchmarks. Models, training data, benchmarks, and codes will be all open-sourced.
Categories: cs.CL
Index: 517,821

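The progressive vocabulary expansion in the AraLLaMA abstract amounts to applying a growing prefix of a BPE merge table, so that more subwords become available (and the OOV ratio drops) as training advances. The toy merge table below is a hypothetical illustration, not the paper's modified BPE algorithm.

```python
def bpe_tokenize(word, merges):
    """Greedy BPE: repeatedly apply the earliest-ranked adjacent-pair merge
    from the merge table until no listed pair remains."""
    toks = list(word)
    ranks = {pair: r for r, pair in enumerate(merges)}
    while True:
        pairs = [(ranks[p], i)
                 for i, p in enumerate(zip(toks, toks[1:])) if p in ranks]
        if not pairs:
            return toks
        _, i = min(pairs)                    # best-ranked pair wins
        toks[i:i + 2] = [toks[i] + toks[i + 1]]

# Hypothetical merge table; a real one is learned from corpus statistics.
merges = [("l", "l"), ("h", "e"), ("he", "ll"), ("hell", "o")]

# Progressive expansion: tokenize with a growing prefix of the merge table,
# mimicking a vocabulary that extends stage by stage during training.
stages = [bpe_tokenize("hello", merges[:k]) for k in range(len(merges) + 1)]
```

Each stage can only shorten the token sequence, which is the decoding speed-up the abstract cites for an Arabic-specific vocabulary: more merges mean fewer tokens per word.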
cs/0207094
Answer Sets for Consistent Query Answering in Inconsistent Databases
A relational database is inconsistent if it does not satisfy a given set of integrity constraints. Nevertheless, it is likely that most of the data in it is consistent with the constraints. In this paper we apply logic programming based on answer sets to the problem of retrieving consistent information from a possibly inconsistent database. Since consistent information persists from the original database to every one of its minimal repairs, the approach is based on a specification of database repairs using disjunctive logic programs with exceptions, whose answer set semantics can be represented and computed by systems that implement the stable model semantics. These programs allow us to declare persistence by defaults and repairing changes by exceptions. We concentrate mainly on logic programs for binary integrity constraints, among which we find most of the integrity constraints found in practice.
Categories: cs.DB
Index: 537,671

1810.05188
Fighting Contextual Bandits with Stochastic Smoothing
We introduce a new stochastic smoothing perspective to study adversarial contextual bandit problems. We propose a general algorithm template that represents random perturbation based algorithms and identify several perturbation distributions that lead to strong regret bounds. Using the idea of smoothness, we provide an $O(\sqrt{T})$ zero-order bound for the vanilla algorithm and an $O(L^{*2/3}_{T})$ first-order bound for the clipped version. These bounds hold when the algorithms are used with a variety of distributions that have a bounded hazard rate. Our algorithm template includes EXP4 as a special case corresponding to the Gumbel perturbation. Our regret bounds match existing results for EXP4 without relying on the specific properties of the algorithm.
Categories: cs.LG
Index: 110,178

2002.08799
Structured Prediction for Conditional Meta-Learning
The goal of optimization-based meta-learning is to find a single initialization shared across a distribution of tasks to speed up the process of learning new tasks. Conditional meta-learning seeks task-specific initialization to better capture complex task distributions and improve performance. However, many existing conditional methods are difficult to generalize and lack theoretical guarantees. In this work, we propose a new perspective on conditional meta-learning via structured prediction. We derive task-adaptive structured meta-learning (TASML), a principled framework that yields task-specific objective functions by weighing meta-training data on target tasks. Our non-parametric approach is model-agnostic and can be combined with existing meta-learning methods to achieve conditioning. Empirically, we show that TASML improves the performance of existing meta-learning models, and outperforms the state-of-the-art on benchmark datasets.
Categories: cs.LG
Index: 164,866

2106.07563
BPLF: A Bi-Parallel Linear Flow Model for Facial Expression Generation from Emotion Set Images
The flow-based generative model is a deep learning generative model that obtains the ability to generate data by explicitly learning the data distribution. In theory, its ability to restore data is stronger than that of other generative models. However, its implementation has many limitations, including restricted model design, too many model parameters, and tedious calculation. In this paper, a bi-parallel linear flow model for facial emotion generation from emotion set images is constructed, and a series of improvements have been made in terms of the expressive ability of the model and the convergence speed in training. The model is mainly composed of several coupling layers superimposed to form a multi-scale structure, in which each coupling layer contains a 1*1 reversible convolution and linear operation modules. Furthermore, this paper sorted out the current public datasets of facial emotion images, created a new emotion dataset, and verified the model on this dataset. The experimental results show that, under a traditional convolutional neural network, a 3-layer 3*3 convolution kernel is more conducive to extracting the features of face images. The introduction of principal component decomposition can improve the convergence speed of the model.
Categories: cs.LG, cs.CV
Index: 240,979

2012.03084
Pre-training Protein Language Models with Label-Agnostic Binding Pairs Enhances Performance in Downstream Tasks
Less than 1% of protein sequences are structurally and functionally annotated. The Natural Language Processing (NLP) community has recently embraced self-supervised learning as a powerful approach to learn representations from unlabeled text, in large part due to the attention-based context-aware Transformer models. In this work we present a modification to the RoBERTa model by inputting during pre-training a mixture of binding and non-binding protein sequences (from the STRING database). However, the sequence pairs have no label to indicate their binding status, as the model relies solely on the Masked Language Modeling (MLM) objective during pre-training. After fine-tuning, such an approach surpasses models trained on single protein sequences for protein-protein binding prediction, TCR-epitope binding prediction, cellular-localization and remote homology classification tasks. We suggest that the Transformer's attention mechanism contributes to protein binding site discovery. Furthermore, we compress protein sequences by 64% with the Byte Pair Encoding (BPE) vocabulary consisting of 10K subwords, each around 3-4 amino acids long. Finally, to expand the model input space to even larger proteins and multi-protein assemblies, we pre-train Longformer models that support 2,048 tokens. Further work in token-level classification for secondary structure prediction is needed. Code available at: https://github.com/PaccMann/paccmann_proteomics
Categories: cs.CL
Index: 209,979

1809.08706
Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR
Many state-of-the-art machine learning models such as deep neural networks have recently shown to be vulnerable to adversarial perturbations, especially in classification tasks. Motivated by adversarial machine learning, in this paper we investigate the robustness of sparse regression models with strongly correlated covariates to adversarially designed measurement noises. Specifically, we consider the family of ordered weighted $\ell_1$ (OWL) regularized regression methods and study the case of OSCAR (octagonal shrinkage clustering algorithm for regression) in the adversarial setting. Under a norm-bounded threat model, we formulate the process of finding a maximally disruptive noise for OWL-regularized regression as an optimization problem and illustrate the steps towards finding such a noise in the case of OSCAR. Experimental results demonstrate that the regression performance of grouping strongly correlated features can be severely degraded under our adversarial setting, even when the noise budget is significantly smaller than the ground-truth signals.
Categories: cs.LG
Index: 108,564

1809.02255
Adversarial Domain Adaptation for Duplicate Question Detection
We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.
Categories: cs.CL
Index: 107,004

2411.17255
APT: Architectural Planning and Text-to-Blueprint Construction Using Large Language Models for Open-World Agents
We present APT, an advanced Large Language Model (LLM)-driven framework that enables autonomous agents to construct complex and creative structures within the Minecraft environment. Unlike previous approaches that primarily concentrate on skill-based open-world tasks or rely on image-based diffusion models for generating voxel-based structures, our method leverages the intrinsic spatial reasoning capabilities of LLMs. By employing chain-of-thought decomposition along with multimodal inputs, the framework generates detailed architectural layouts and blueprints that the agent can execute under zero-shot or few-shot learning scenarios. Our agent incorporates both memory and reflection modules to facilitate lifelong learning, adaptive refinement, and error correction throughout the building process. To rigorously evaluate the agent's performance in this emerging research area, we introduce a comprehensive benchmark consisting of diverse construction tasks designed to test creativity, spatial reasoning, adherence to in-game rules, and the effective integration of multimodal instructions. Experimental results using various GPT-based LLM backends and agent configurations demonstrate the agent's capacity to accurately interpret extensive instructions involving numerous items, their positions, and orientations. The agent successfully produces complex structures complete with internal functionalities such as Redstone-powered systems. A/B testing indicates that the inclusion of a memory module leads to a significant increase in performance, emphasizing its role in enabling continuous learning and the reuse of accumulated experience. Additionally, the agent's unexpected emergence of scaffolding behavior highlights the potential of future LLM-driven agents to utilize subroutine planning and leverage the emergence ability of LLMs to autonomously develop human-like problem-solving techniques.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
511,372
1803.05541
Context-Aware Mixed Reality: A Framework for Ubiquitous Interaction
Mixed Reality (MR) is a powerful interactive technology that yields new types of user experience. We present a semantic-based interactive MR framework that exceeds current geometry-level approaches, a step change in generating high-level context-aware interactions. Our key insight is that building semantic understanding in MR not only can greatly enhance user experience through object-specific behaviours, but also paves the way for solving complex interaction design challenges. The framework generates semantic properties of the real-world environment through dense scene reconstruction and deep image understanding. We demonstrate our approach with a material-aware prototype system for generating context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluations are carried out, and the results show that the framework delivers accurate and fast semantic information in an interactive MR environment, providing effective semantic-level interactions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
92,652
1506.04395
Reading Scene Text in Deep Convolutional Sequences
We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances in deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. Then a deep recurrent model, built on long short-term memory (LSTM), is developed to robustly recognize the generated CNN sequences, departing from most existing approaches that recognise each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) it can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN feature is robust to various image distortions; (iii) it retains the explicit order information in the word image, which is essential to discriminate word strings; (iv) the model does not depend on a pre-defined dictionary, and it can process unknown words and arbitrary strings. Code for the DTRN will be made available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
44,163
1905.13587
GENO -- GENeric Optimization for Classical Machine Learning
Although optimization is the longstanding algorithmic backbone of machine learning, new models still require the time-consuming implementation of new solvers. As a result, there are thousands of implementations of optimization algorithms for machine learning problems. A natural question is whether it is always necessary to implement a new solver, or whether there is one algorithm that is sufficient for most models. Common belief suggests that such a one-algorithm-fits-all approach cannot work, because this algorithm cannot exploit model-specific structure and thus cannot be efficient and robust on a wide variety of problems. Here, we challenge this common belief. We have designed and implemented the optimization framework GENO (GENeric Optimization) that combines a modeling language with a generic solver. GENO generates a solver from the declarative specification of an optimization problem class. The framework is flexible enough to encompass most of the classical machine learning problems. We show on a wide variety of classical but also some recently suggested problems that the automatically generated solvers are (1) as efficient as well-engineered specialized solvers, (2) more efficient by a decent margin than recent state-of-the-art solvers, and (3) orders of magnitude more efficient than classical modeling language plus solver approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
133,184
2310.03095
Opinion Dynamics Optimization Through Noncooperative Differential Games
In this paper, I study optimizing the opinion formation of a social network of a population of individuals on a graph whose opinion evolves according to the Hegselmann-Krause model for opinion dynamics. I propose an optimization problem based on a differential game for a population of individuals who are not stubborn. The objective of each individual is to seek an optimal control policy for her own opinion evolution by optimizing a personal performance index. The Nash equilibrium actions and the associated opinion trajectory with the equilibrium actions are derived for the opinion optimization model using Pontryagin's principle. The game strategies were executed on the well-known Zachary's Karate Club social network. The resulting opinion trajectories associated with the game strategies showed that in non-stubborn Zachary's network, the opinions moved toward the average opinion of the network, but a consensus of final opinions did not necessarily emerge.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
397,130
2106.03816
Diversity driven Query Rewriting in Search Advertising
Retrieving keywords (bidwords) with the same intent as the query, referred to as close variant keywords, is of prime importance for effective targeted search advertising. For head and torso search queries, sponsored search engines use a huge repository of same-intent queries and keywords, mined ahead of time. Online, this repository is used to rewrite the query and then look up the rewrite in a repository of bid keywords, contributing to significant revenue. Recently, generative retrieval models have been shown to be effective at the task of generating such query rewrites. We observe two main limitations of such generative models. First, rewrites generated by these models exhibit low lexical diversity, and hence the rewrites fail to retrieve relevant keywords that have diverse linguistic variations. Second, there is a misalignment between the training objective (the likelihood of the training data) and what we desire (improved quality and coverage of rewrites). In this work, we introduce CLOVER, a framework to generate both high-quality and diverse rewrites by optimizing for human assessment of rewrite quality using our diversity-driven reinforcement learning algorithm. We use an evaluation model, trained to predict human judgments, as the reward function to finetune the generation policy. We empirically show the effectiveness of our proposed approach through offline experiments on search queries across geographies spanning three major languages. We also perform online A/B experiments on Bing, a large commercial search engine, which show (i) better user engagement, with an average increase in clicks of 12.83% accompanied by an average defect reduction of 13.97%, and (ii) improved revenue by 21.29%.
false
false
false
false
true
false
false
false
true
true
false
false
false
false
false
false
false
false
239,469
2305.09765
OpenVR: Teleoperation for Manipulation
Across the robotics field, quality demonstrations are an integral part of many control pipelines. However, collecting high-quality demonstration trajectories remains time-consuming and difficult, often resulting in the number of demonstrations being the performance bottleneck. To address this issue, we present a method of Virtual Reality (VR) Teleoperation that uses an Oculus VR headset to teleoperate a Franka Emika Panda robot. Although other VR teleoperation methods exist, our code is open source, designed for readily available consumer hardware, easy to modify, agnostic to experimental setup, and simple to use.
true
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
364,762
2403.08836
Structural Positional Encoding for knowledge integration in transformer-based medical process monitoring
Predictive process monitoring is a process mining task aimed at forecasting information about a running process trace, such as the most likely next activity to be executed. In medical domains, predictive process monitoring can provide valuable decision support in atypical and nontrivial situations. Decision support and quality assessment in medicine cannot ignore domain knowledge, in order to be grounded on all the available information (which is not limited to data) and to be truly acceptable to end users. In this paper, we propose a predictive process monitoring approach relying on the use of a {\em transformer}, a deep learning architecture based on the attention mechanism. A major contribution of our work lies in the incorporation of ontological domain-specific knowledge, carried out through a graph positional encoding technique. The paper presents and discusses the encouraging experimental results we have collected in the domain of stroke management.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
437,507
2409.10314
Rate-Splitting Multiple Access for Coexistence of Semantic and Bit Communications
In the sixth generation (6G) of cellular networks, the demands for capacity and connectivity will increase dramatically to meet the requirements of emerging services for both humans and machines. Semantic communication has shown great potential because of its efficiency and suitability for users who only care about the semantic meaning. But bit communication is still needed for users requiring original messages. Therefore, there will be a coexistence of semantic and bit communications in future networks. This motivates us to explore how to allocate resources in such a coexistence scenario. We investigate different uplink multiple access (MA) schemes for the coexistence of semantic users and a bit user, namely orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA) and rate-splitting multiple access (RSMA). We characterize the rate regions achieved by those MA schemes. The simulation results show that RSMA always outperforms NOMA and has better performance in high semantic rate regimes compared to OMA. We find that RSMA scheme design, rate region, and power allocation are quite different in the coexistence scenario compared to bit-only communication, primarily due to the need to consider understandability in semantic communications. Interestingly, in contrast to bit-only communications, where RSMA is capacity-achieving without any need for time sharing, in the coexistence scenario time sharing helps enlarge the RSMA rate region.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
488,699
2409.15720
Optimization of partially isolated quantum harmonic oscillator memory systems by mean square decoherence time criteria
This paper is concerned with open quantum harmonic oscillators with position-momentum system variables, whose internal dynamics and interaction with the environment are governed by linear quantum stochastic differential equations. A recently proposed approach to such systems as Heisenberg picture quantum memories exploits their ability to approximately retain initial conditions over a decoherence horizon. Using the quantum memory decoherence time defined previously in terms of a fidelity threshold on a weighted mean-square deviation of the system variables from their initial values, we apply this approach to a partially isolated subsystem of the oscillator, which is not directly affected by the external fields. The partial isolation leads to an appropriate system decomposition and a qualitatively different short-horizon asymptotic behaviour of the deviation, which yields a longer decoherence time in the high-fidelity limit. The resulting approximate decoherence time maximization over the energy parameters for improving the quantum memory performance is discussed for a coherent feedback interconnection of such systems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
491,028
2010.08591
A Conglomerate of Multiple OCR Table Detection and Extraction
Representing information as tables is a compact and concise method that eases searching, indexing, and storage requirements. Extracting and cloning tables from parsable documents is easier and widely used; however, industry still faces challenges in detecting and extracting tables from OCR documents or images. This paper proposes an algorithm that detects and extracts multiple tables from an OCR document. The algorithm uses a combination of image processing techniques, text recognition and procedural coding to identify distinct tables in the same image and map the text to the appropriate corresponding cell in a dataframe, which can be stored as comma-separated values, a database, Excel, and multiple other usable formats.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
201,222
2006.02230
PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives
Deep Neural Networks (DNNs) have revolutionized many aspects of our lives. The use of DNNs is becoming ubiquitous, including in software for image recognition, speech recognition, speech synthesis, and language translation, to name a few. The training of DNN architectures, however, is computationally expensive. Once the model is created, its use in the intended application, the inference task, is computationally heavy too, and the inference needs to be fast for real-time use. For obtaining high performance today, code for Deep Learning (DL) primitives optimized for specific architectures by expert programmers and exposed via libraries is the norm. However, given the constant emergence of new DNN architectures, creating hand-optimized code is expensive, slow, and not scalable. To address this performance-productivity challenge, in this paper we present compiler algorithms to automatically generate high-performance implementations of DL primitives that closely match the performance of hand-optimized libraries. We develop novel data reuse analysis algorithms using the polyhedral model to derive efficient execution schedules automatically. In addition, because most DL primitives use some variant of matrix multiplication at their core, we develop a flexible framework where it is possible to plug in library implementations of the same in lieu of a subset of the loops. We show that such a hybrid compiler plus minimal-library-use approach results in state-of-the-art performance. We also develop compiler algorithms to perform operator fusions that reduce data movement through the memory hierarchy of the computer system.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
179,987
2501.05198
Dexterous Manipulation of Deformable Objects via Pneumatic Gripping: Lifting by One End
Manipulating deformable objects in robotic cells is often costly and not widely accessible. However, the use of localized pneumatic gripping systems can enhance accessibility. Current methods that use pneumatic grippers to handle deformable objects struggle with effective lifting. This paper introduces a method for the dexterous lifting of textile deformable objects from one edge, utilizing a previously developed gripper designed for flexible and porous materials. By precisely adjusting the orientation and position of the gripper during the lifting process, we were able to significantly reduce necessary gripping force and minimize object vibration caused by airflow. This method was tested and validated on four materials with varying mass, friction, and flexibility. The proposed approach facilitates the lifting of deformable objects from a conveyor or automated line, even when only one edge is accessible for grasping. Future work will involve integrating a vision system to optimize the manipulation of deformable objects with more complex shapes.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
523,496
1511.04810
Parsimonious shooting heuristic for trajectory control of connected automated traffic part I: Theoretical analysis with generalized time geography
This paper studies a problem of controlling trajectories of a platoon of vehicles on a highway segment with connected and automated vehicles. This problem is complex because each vehicle trajectory is an infinite-dimensional object and neighboring trajectories have complex interactions (e.g., car-following behavior). A parsimonious shooting heuristic algorithm is proposed to construct vehicle trajectories on a signalized highway segment that comply with boundary conditions for vehicle arrivals, vehicle mechanical limits, traffic lights and vehicle following safety. This algorithm breaks each vehicle trajectory into a few sections and each is analytically solvable. This decomposes the original hard trajectory control problem to a simple constructive heuristic. Then we slightly adapt this shooting heuristic algorithm to efficiently solve a leading vehicle problem on an uninterrupted freeway. To study theoretical properties of the proposed algorithms, the time geography theory is generalized by considering finite accelerations. With this generalized theory, it is found that under mild conditions, these algorithms can always obtain a feasible solution to the original complex trajectory control problem. Further, we discover that the shooting heuristic solution is a generalization of the solution to the classic kinematic wave theory by incorporating finite accelerations. We identify the theoretical bounds to the difference between the shooting heuristic solution and the kinematic wave solution. Numerical experiments are conducted to verify the theoretical results and to draw additional insights into the potential of trajectory control in improving traffic performance. Building upon this foundation, an optimization framework will be presented in a following paper as Part II of this study.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
48,951
1802.02774
Learning to score the figure skating sports videos
This paper targets learning to score figure skating sports videos. To address this task, we propose a deep architecture that includes two complementary components, i.e., a Self-Attentive LSTM and a Multi-scale Convolutional Skip LSTM. These two components can efficiently learn the local and global sequential information in each video. Furthermore, we present a large-scale figure skating sports video dataset -- the FisV dataset. This dataset includes 500 figure skating videos with an average length of 2 minutes and 50 seconds. Each video is annotated by two scores from nine different referees, i.e., Total Element Score (TES) and Total Program Component Score (PCS). Our proposed model is validated on the FisV and MIT-skate datasets. The experimental results show the effectiveness of our models in learning to score figure skating videos.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
89,839
2309.14829
A Structured Prediction Approach for Robot Imitation Learning
We propose a structured prediction approach for robot imitation learning from demonstrations. Among various tools for robot imitation learning, supervised learning has been observed to have a prominent role. Structured prediction is a form of supervised learning that enables learning models to operate on output spaces with complex structures. Through the lens of structured prediction, we show how robots can learn to imitate trajectories belonging to not only Euclidean spaces but also Riemannian manifolds. Exploiting ideas from information theory, we propose a class of loss functions based on the f-divergence to measure the information loss between the demonstrated and reproduced probabilistic trajectories. Different types of f-divergence will result in different policies, which we call imitation modes. Furthermore, our approach enables the incorporation of spatial and temporal trajectory modulation, which is necessary for robots to be adaptive to the change in working conditions. We benchmark our algorithm against state-of-the-art methods in terms of trajectory reproduction and adaptation. The quantitative evaluation shows that our approach outperforms other algorithms regarding both accuracy and efficiency. We also report real-world experimental results on learning manifold trajectories in a polishing task with a KUKA LWR robot arm, illustrating the effectiveness of our algorithmic framework.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
394,757
2108.03950
Novel convex decomposition of piecewise affine functions
In this paper, we present a novel approach to decompose a given piecewise affine (PWA) function into two convex PWA functions. Convex decompositions are useful to speed up or distribute evaluations of PWA functions. Different approaches to construct a convex decomposition have already been published. However, either the two resulting convex functions have very high or very different complexities, which is often undesirable, or the decomposition procedure is inapplicable even for simple cases. Our novel methodology significantly reduces these drawbacks in order to extend the applicability of convex decompositions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
249,839
1410.4360
MIMO Broadcasting for Simultaneous Wireless Information and Power Transfer: Weighted MMSE Approaches
We consider simultaneous wireless information and power transfer (SWIPT) in MIMO Broadcast networks where one energy harvesting (EH) user and one information decoding (ID) user share the same time and frequency resource. In contrast to previous SWIPT systems based on the information rate, this paper addresses the problem in terms of the weighted minimum mean squared error (WMMSE) criterion. First, we formulate the WMMSE-SWIPT problem which minimizes the weighted sum-MSE of the message signal arrived at the ID user, while satisfying the requirement on the energy that can be harvested from the signal at the EH user. Then, we propose the optimal precoder structure of the problem and identify the best possible MSE-energy tradeoff region through the alternative update of the linear precoder at the transmitter with the linear receiver at the ID user. From the derived solution, several interesting observations are made compared to the conventional SWIPT designs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
36,792
1402.5164
Distribution-Independent Reliable Learning
We study several questions in the reliable agnostic learning framework of Kalai et al. (2009), which captures learning tasks in which one type of error is costlier than others. A positive reliable classifier is one that makes no false positive errors. The goal in the positive reliable agnostic framework is to output a hypothesis with the following properties: (i) its false positive error rate is at most $\epsilon$, (ii) its false negative error rate is at most $\epsilon$ more than that of the best positive reliable classifier from the class. A closely related notion is fully reliable agnostic learning, which considers partial classifiers that are allowed to predict "unknown" on some inputs. The best fully reliable partial classifier is one that makes no errors and minimizes the probability of predicting "unknown", and the goal in fully reliable learning is to output a hypothesis that is almost as good as the best fully reliable partial classifier from a class. For distribution-independent learning, the best known algorithms for PAC learning typically utilize polynomial threshold representations, while the state of the art agnostic learning algorithms use point-wise polynomial approximations. We show that one-sided polynomial approximations, an intermediate notion between polynomial threshold representations and point-wise polynomial approximations, suffice for learning in the reliable agnostic settings. We then show that majorities can be fully reliably learned and disjunctions of majorities can be positive reliably learned, through constructions of appropriate one-sided polynomial approximations. Our fully reliable algorithm for majorities provides the first evidence that fully reliable learning may be strictly easier than agnostic learning. Our algorithms also satisfy strong attribute-efficiency properties, and provide smooth tradeoffs between sample complexity and running time.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
31,030
1406.2015
MOOCdb: Developing Standards and Systems to Support MOOC Data Science
We present a shared data model for enabling data science in Massive Open Online Courses (MOOCs). The model captures students' interactions with the online platform. The data model is platform agnostic and is based on the basic core actions that students take on an online learning platform. Students usually interact with the platform in four different modes: observing, submitting, collaborating, and giving feedback. In observing mode, students are simply browsing the online platform, watching videos, reading material, reading the book, or reading forums. In submitting mode, students submit information to the platform. This includes submissions towards quizzes, homework, or any assessment modules. In collaborating mode, students interact with other students or instructors on forums, collaboratively editing a wiki, or chatting on Google Hangouts or other venues. With these basic definitions of activities, and a data model to store events pertaining to these activities, we then create a common terminology to map Coursera and edX data into this shared data model. This shared data model, called MOOCdb, becomes the foundation for a number of collaborative frameworks that enable progress in data science without the need to share the data.
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
true
false
33,700
1504.04494
Perfectly Secure Index Coding
In this paper, we investigate the index coding problem in the presence of an eavesdropper. Messages are to be sent from one transmitter to a number of legitimate receivers who have side information about the messages, and share a set of secret keys with the transmitter. We assume perfect secrecy, meaning that the eavesdropper should not be able to retrieve any information about the message set. We study the minimum key lengths for zero-error and perfectly secure index coding problem. On one hand, this problem is a generalization of the index coding problem (and thus a difficult one). On the other hand, it is a generalization of the Shannon's cipher system. We show that a generalization of Shannon's one-time pad strategy is optimal up to a multiplicative constant, meaning that it obtains the entire boundary of the cone formed by looking at the secure rate region from the origin. Finally, we consider relaxation of the perfect secrecy and zero-error constraints to weak secrecy and asymptotically vanishing probability of error, and provide a secure version of the result, obtained by Langberg and Effros, on the equivalence of zero-error and $\epsilon$-error regions in the conventional index coding problem.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
42,145
2308.13565
DARWIN Series: Domain Specific Large Language Models for Natural Science
Emerging tools bring forth fresh approaches to work, and the field of natural science is no different. In natural science, traditional manual, serial, and labour-intensive work is being augmented by automated, parallel, and iterative processes driven by artificial intelligence-based experimental automation and more. To add new capabilities in natural science, enabling the acceleration and enrichment of automation of the discovery process, we present DARWIN, a series of tailored LLMs for natural science, mainly in physics, chemistry, and material science. This series relies on open-source LLMs, incorporating structured and unstructured scientific knowledge from public datasets and literature. We fine-tuned the models using over 60,000 instruction data points, emphasizing factual correctness. During the fine-tuning, we introduce the Scientific Instruction Generation (SIG) model, automating instruction generation from scientific texts. This eliminates the need for manual extraction or domain-specific knowledge graphs and efficiently injects scientific knowledge into the model. We also explore multi-task training strategies, revealing interconnections between scientific tasks. The DARWIN series not only achieves state-of-the-art results on various scientific tasks but also diminishes reliance on closed-source AI models. Our research showcases the ability of LLMs in the scientific domain, with the overarching goal of fostering prosperity within the broader AI for science community.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
387,972
2409.09659
Leveraging Open-Source Large Language Models for Native Language Identification
Native Language Identification (NLI) - the task of identifying the native language (L1) of a person based on their writing in the second language (L2) - has applications in forensics, marketing, and second language acquisition. Historically, conventional machine learning approaches that heavily rely on extensive feature engineering have outperformed transformer-based language models on this task. Recently, closed-source generative large language models (LLMs), e.g., GPT-4, have demonstrated remarkable performance on NLI in a zero-shot setting, including promising results in open-set classification. However, closed-source LLMs have many disadvantages, such as high costs and undisclosed nature of training data. This study explores the potential of using open-source LLMs for NLI. Our results indicate that open-source LLMs do not reach the accuracy levels of closed-source LLMs when used out-of-the-box. However, when fine-tuned on labeled training data, open-source LLMs can achieve performance comparable to that of commercial LLMs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
488,417
2109.12502
Self-Supervised Learning for MRI Reconstruction with a Parallel Network Training Framework
Image reconstruction from undersampled k-space data plays an important role in accelerating the acquisition of MR data, and a lot of deep learning-based methods have been exploited recently. Despite the achieved inspiring results, the optimization of these methods commonly relies on the fully-sampled reference data, which are time-consuming and difficult to collect. To address this issue, we propose a novel self-supervised learning method. Specifically, during model optimization, two subsets are constructed by randomly selecting part of k-space data from the undersampled data and then fed into two parallel reconstruction networks to perform information recovery. Two reconstruction losses are defined on all the scanned data points to enhance the network's capability of recovering the frequency information. Meanwhile, to constrain the learned unscanned data points of the network, a difference loss is designed to enforce consistency between the two parallel networks. In this way, the reconstruction model can be properly trained with only the undersampled data. During the model evaluation, the undersampled data are treated as the inputs and either of the two trained networks is expected to reconstruct the high-quality results. The proposed method is flexible and can be employed in any existing deep learning-based method. The effectiveness of the method is evaluated on an open brain MRI dataset. Experimental results demonstrate that the proposed self-supervised method can achieve competitive reconstruction performance compared to the corresponding supervised learning method at high acceleration rates (4 and 8). The code is publicly available at \url{https://github.com/chenhu96/Self-Supervised-MRI-Reconstruction}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
257,315
2106.09621
Privacy-Preserving Eye-tracking Using Deep Learning
The expanding usage of complex machine learning methods like deep learning has led to an explosion in human activity recognition, particularly applied to health. In particular, as part of a larger body sensor network system, face and full-body analysis is becoming increasingly common for evaluating health status. However, complex models, which handle private and sometimes protected data, raise concerns about the potential leak of identifiable data. In this work, we focus on the case of a deep network model trained on images of individual faces. Full-face video recordings taken from 493 individuals undergoing an eye-tracking based evaluation of neurological function were used. Outputs, gradients, intermediate layer outputs, loss, and labels were used as inputs for a deep network with an added support vector machine emission layer to recognize membership in the training data. The inference attack method and associated mathematical analysis indicate that there is a low likelihood of unintended memorization of facial features in the deep learning model. In this study, it is shown that the model preserves the integrity of training data with reasonable confidence. The same process can be implemented under similar conditions for different models.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
241,722
2211.11453
Methodology for Holistic Reference Modeling in Systems Engineering
In the face of increasing complexity, models support the development of new systems and enterprises. For an efficient procedure, reference models are adapted in order to reach a solution with less overhead that covers all necessary aspects. Here, a key challenge is applying a consistent methodology for the descriptions of such reference designs. This paper presents a holistic approach to describe reference models across different views and levels. Modeling stretches from the requirements and capabilities over their subdivision to services and components up to the realization in processes and data structures. Benefits include an end-to-end traceability of the capability coverage with performance parameters considered already at the starting point of the reference design. This enables focused development while considering design constraints and potential bottlenecks. We demonstrate the approach on the example of the development of a smart robot. Here, our methodology strongly supports the transferability of designs for the development of further systems.
false
false
false
false
false
false
false
false
false
false
true
true
false
false
true
false
false
true
331,738
2011.08772
KddRES: A Multi-level Knowledge-driven Dialogue Dataset for Restaurant Towards Customized Dialogue System
Compared with the CrossWOZ (Chinese) and MultiWOZ (English) datasets, which contain coarse-grained information, there is no dataset that properly handles fine-grained, hierarchical-level information. In this paper, we publish the first Cantonese knowledge-driven Dialogue Dataset for REStaurant (KddRES) in Hong Kong, which grounds the information in multi-turn conversations to one specific restaurant. Our corpus contains 0.8k conversations derived from 10 restaurants with various styles in different regions. In addition, we designed fine-grained slots and intents to better capture semantic information. The benchmark experiments and statistical analysis of the data show the diversity and rich annotations of our dataset. We believe the release of KddRES can be a necessary supplement to current dialogue datasets and is especially suitable and valuable for small and medium-sized enterprises (SMEs), for example, to build a customized dialogue system for each restaurant. The corpus and benchmark models are publicly available.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
206,990
2410.01242
RGD: Multi-LLM Based Agent Debugger via Refinement and Generation Guidance
Large Language Models (LLMs) have shown incredible potential in code generation tasks, and recent research in prompt engineering has enhanced LLMs' understanding of textual information. However, ensuring the accuracy of generated code often requires extensive testing and validation by programmers. While LLMs can typically generate code based on task descriptions, their accuracy remains limited, especially for complex tasks that require a deeper understanding of both the problem statement and the code generation process. This limitation is primarily due to the LLMs' need to simultaneously comprehend text and generate syntactically and semantically correct code, without having the capability to automatically refine the code. In real-world software development, programmers rarely produce flawless code in a single attempt based on the task description alone; instead, they rely on iterative feedback and debugging to refine their programs. Inspired by this process, we introduce a novel architecture of LLM-based agents for code generation and automatic debugging: Refinement and Guidance Debugging (RGD). The RGD framework is a multi-LLM-based agent debugger that leverages three distinct LLM agents-Guide Agent, Debug Agent, and Feedback Agent. RGD decomposes the code generation task into multiple steps, ensuring a clearer workflow and enabling iterative code refinement based on self-reflection and feedback. Experimental results demonstrate that RGD exhibits remarkable code generation capabilities, achieving state-of-the-art performance with a 9.8% improvement on the HumanEval dataset and a 16.2% improvement on the MBPP dataset compared to the state-of-the-art approaches and traditional direct prompting approaches. We highlight the effectiveness of the RGD framework in enhancing LLMs' ability to generate and refine code autonomously.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
493,673
2412.03255
DynamicControl: Adaptive Condition Selection for Improved Text-to-Image Generation
To enhance the controllability of text-to-image diffusion models, current ControlNet-like models have explored various control signals to dictate image attributes. However, existing methods either handle conditions inefficiently or use a fixed number of conditions, which does not fully address the complexity of multiple conditions and their potential conflicts. This underscores the need for innovative approaches to manage multiple conditions effectively for more reliable and detailed image synthesis. To address this issue, we propose a novel framework, DynamicControl, which supports dynamic combinations of diverse control signals, allowing adaptive selection of different numbers and types of conditions. Our approach begins with a double-cycle controller that generates an initial real score sorting for all input conditions by leveraging pre-trained conditional generation models and discriminative models. This controller evaluates the similarity between extracted conditions and input conditions, as well as the pixel-level similarity with the source image. Then, we integrate a Multimodal Large Language Model (MLLM) to build an efficient condition evaluator. This evaluator optimizes the ordering of conditions based on the double-cycle controller's score ranking. Our method jointly optimizes MLLMs and diffusion models, utilizing MLLMs' reasoning capabilities to facilitate multi-condition text-to-image (T2I) tasks. The final sorted conditions are fed into a parallel multi-control adapter, which learns feature maps from dynamic visual conditions and integrates them to modulate ControlNet, thereby enhancing control over generated images. Through both quantitative and qualitative comparisons, DynamicControl demonstrates its superiority over existing methods in terms of controllability, generation quality and composability under various conditional controls.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
513,886
2106.10065
Being a Bit Frequentist Improves Bayesian Neural Networks
Despite their compelling theoretical properties, Bayesian neural networks (BNNs) tend to perform worse than frequentist methods in classification-based uncertainty quantification (UQ) tasks such as out-of-distribution (OOD) detection. In this paper, based on empirical findings in prior works, we hypothesize that this issue is because even recent Bayesian methods have never considered OOD data in their training processes, even though this "OOD training" technique is an integral part of state-of-the-art frequentist UQ methods. To validate this, we treat OOD data as a first-class citizen in BNN training by exploring four different ways of incorporating OOD data into Bayesian inference. We show in extensive experiments that OOD-trained BNNs are competitive to recent frequentist baselines. This work thus provides strong baselines for future work in Bayesian UQ.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
241,886
2106.11184
The decline of disruptive science and technology
Theories of scientific and technological change view discovery and invention as endogenous processes, wherein prior accumulated knowledge enables future progress by allowing researchers to, in Newton's words, "stand on the shoulders of giants". Recent decades have witnessed exponential growth in the volume of new scientific and technological knowledge, thereby creating conditions that should be ripe for major advances. Yet contrary to this view, studies suggest that progress is slowing in several major fields of science and technology. Here, we analyze these claims at scale across 6 decades, using data on 45 million papers and 3.5 million patents from 6 large-scale datasets. We find that papers and patents are increasingly less likely to break with the past in ways that push science and technology in new directions, a pattern that holds universally across fields. Subsequently, we link this decline in disruptiveness to a narrowing in the use of prior knowledge, allowing us to reconcile the patterns we observe with the "shoulders of giants" view. We find that the observed declines are unlikely to be driven by changes in the quality of published science, citation practices, or field-specific factors. Overall, our results suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
242,307
2106.02305
Local Adaptivity in Federated Learning: Convergence and Consistency
The federated learning (FL) framework trains a machine learning model using decentralized data stored at edge client devices by periodically aggregating locally trained models. Popular optimization algorithms of FL use vanilla (stochastic) gradient descent for both local updates at clients and global updates at the aggregating server. Recently, adaptive optimization methods such as AdaGrad have been studied for server updates. However, the effect of using adaptive optimization methods for local updates at clients is not yet understood. We show in both theory and practice that while local adaptive methods can accelerate convergence, they can cause a non-vanishing solution bias, where the final converged solution may be different from the stationary point of the global objective function. We propose correction techniques to overcome this inconsistency and complement the local adaptive methods for FL. Extensive experiments on realistic federated training tasks show that the proposed algorithms can achieve faster convergence and higher test accuracy than the baselines without local adaptivity.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
238,821
2001.06337
Integrated Supervised Adaptive Control for the More Electric Aircraft
The innovative concept of Electric Aircraft is a challenging topic involving different control objectives. For instance, it becomes possible to reduce the size and the weight of the generator by using the battery as an auxiliary generator in some operation phases. However, control strategies with different objectives can be conflicting and they can produce undesirable effects, even instability. For this reason, an integrated design approach is needed, where stability can be guaranteed in any configuration. In other words, the design of the supervisory controller must be interlaced with that of low-level controllers. Moreover, uncertainties and noisy signals require robust control techniques and the use of adaptiveness in the control algorithm. In this paper, an aeronautic application aiming at recharging batteries and using the battery to withstand generator overloads is addressed. Detailed and rigorous stability proofs are given for any control configuration, including the switching phases among different control objectives. Effectiveness of the proposed strategies is shown by using a detailed simulator including switching electronic components.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
160,778
2111.08794
Investigating Conversion from Mild Cognitive Impairment to Alzheimer's Disease using Latent Space Manipulation
Alzheimer's disease is the most common cause of dementia that affects millions of lives worldwide. Investigating the underlying causes and risk factors of Alzheimer's disease is essential to prevent its progression. Mild Cognitive Impairment (MCI) is considered an intermediate stage before Alzheimer's disease. Early prediction of the conversion from MCI to Alzheimer's is crucial to take necessary precautions for decelerating the progression and developing suitable treatments. In this study, we propose a deep learning framework to discover the variables which are identifiers of the conversion from MCI to Alzheimer's disease. In particular, the latent space of a variational auto-encoder network trained with the MCI and Alzheimer's patients is manipulated to obtain the significant attributes and decipher their behavior that leads to the conversion from MCI to Alzheimer's disease. By utilizing a generative decoder and the dimensions that lead to the Alzheimer's diagnosis, we generate synthetic dementia patients from MCI patients in the dataset. Experimental results show promising quantitative and qualitative results on one of the most extensive and commonly used Alzheimer's disease neuroimaging datasets in literature.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
266,809
1307.5302
Kernel Adaptive Metropolis-Hastings
A Kernel Adaptive Metropolis-Hastings algorithm is introduced, for the purpose of sampling from a target distribution with strongly nonlinear support. The algorithm embeds the trajectory of the Markov chain into a reproducing kernel Hilbert space (RKHS), such that the feature space covariance of the samples informs the choice of proposal. The procedure is computationally efficient and straightforward to implement, since the RKHS moves can be integrated out analytically: our proposal distribution in the original space is a normal distribution whose mean and covariance depend on where the current sample lies in the support of the target distribution, and adapts to its local covariance structure. Furthermore, the procedure requires neither gradients nor any other higher order information about the target, making it particularly attractive for contexts such as Pseudo-Marginal MCMC. Kernel Adaptive Metropolis-Hastings outperforms competing fixed and adaptive samplers on multivariate, highly nonlinear target distributions, arising in both real-world and synthetic examples. Code may be downloaded at https://github.com/karlnapf/kameleon-mcmc.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
25,934
2302.14709
Line-of-Sight Probability for Outdoor-to-Indoor UAV-Assisted Emergency Networks
For emergency response scenarios like firefighting in urban environments, there is a need to both localize emergency responders inside the building and also support a high bandwidth communication link between the responders and a command-and-control center. The emergency networks for such scenarios can be established with the quick deployment of Unmanned Aerial Vehicles (UAVs). Further, the 3D mobility of UAVs can be leveraged to improve the quality of the wireless link by maneuvering them into advantageous locations. This has motivated recent propagation measurement campaigns to study low-altitude air-to-ground channels in both 5G-sub6 GHz and 5G-mmWave bands. In this paper, we develop a model for the link in a UAV-assisted emergency location and/or communication system. Specifically, given the importance of Line-of-Sight (LoS) links in localization as well as mmWave communication, we derive a closed-form expression for the LoS probability. This probability is parameterized by the UAV base station location, the size of the building, and the size of the window that offers the best propagation path. An expression for coverage probability is also derived. The LoS probability and coverage probabilities derived in this paper can be used to analyze the outdoor UAV-to-indoor propagation environment to determine optimal UAV positioning and the number of UAVs needed to achieve the desired performance of the emergency network.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
348,408
2404.02174
Bounds of Block Rewards in Honest PinFi Systems
PinFi is a class of novel protocols for decentralized pricing of dissipative assets, whose value naturally declines over time. Central to the protocol's functionality and its market efficiency is the role of liquidity providers (LPs). This study addresses critical stability and sustainability challenges within the protocol, namely: the propensity of LPs to prefer selling in external markets over participation in the protocol; a similar inclination towards selling within the PinFi system rather than contributing as LPs; and a scenario where LPs are disinclined to sell within the protocol. Employing a game-theoretic approach, we explore PinFi's mechanisms and its broader ramifications. Our findings reveal that, under a variety of common conditions and with an assumption of participant integrity, PinFi is capable of fostering a dynamic equilibrium among LPs, sellers, and buyers. This balance is maintained through a carefully calibrated range of block rewards for LPs, ensuring the protocol's long-term stability and utility.
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
443,754
1602.05830
Automated optimisation of multi-level models of collective behaviour in a mixed society of animals and robots
Animal and robotic collective behaviours can exhibit complex dynamics that require multi-level descriptions. Here, we are interested in developing a multi-level modeling framework for the use of robots in studies about animal collective decision-making. In this context, using robots can be useful for validating models in silico, inducing calibrated repetitive stimuli to trigger animal responses or modulating and controlling animal collective behaviour. However, designing appropriate biomimetic robotic behaviour faces a major challenge: how to go from the collective decision dynamics observed with animals to an actual algorithmic implementation in robots. In previous work, this was mainly done by hand, often by taking inspiration from human-designed models. Typically, models of behaviour are either macroscopic, describing the population dynamics with differential equations, or microscopic, describing the explicit spatio-temporal state of each individual. Only microscopic models can easily be implemented as robot controllers. Here, we address the problem of automating the design of lower level description models that can be implemented in robots and exhibit the same collective dynamics as a given higher level model. We apply evolutionary algorithms to simultaneously optimise the parameters of models accounting for different levels of description. This methodology is applied to an experimentally validated shelter-selection problem solved by gregarious insects and robots. We successfully design and calibrate automatically both a microscopic and a hybrid model exhibiting the same dynamics as a macroscopic one. Our framework can be used for multi-level modeling of collective behaviour in animal or robot populations and bio-hybrid systems.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
52,298
1907.10804
Co-Evolutionary Compression for Unpaired Image Translation
Generative adversarial networks (GANs) have been successfully used for numerous computer vision tasks, especially image-to-image translation. However, generators in these networks are of complicated architectures with a large number of parameters and huge computational complexity. Existing methods are mainly designed for compressing and speeding up deep neural networks in the classification task, and cannot be directly applied on GANs for image translation, due to their different objectives and training procedures. To this end, we develop a novel co-evolutionary approach for reducing their memory usage and FLOPs simultaneously. In practice, generators for two image domains are encoded as two populations and synergistically optimized for investigating the most important convolution filters iteratively. Fitness of each individual is calculated using the number of parameters, a discriminator-aware regularization, and the cycle consistency. Extensive experiments conducted on benchmark datasets demonstrate the effectiveness of the proposed method for obtaining compact and effective generators.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
139,706
1902.04139
Using Deep Cross Modal Hashing and Error Correcting Codes for Improving the Efficiency of Attribute Guided Facial Image Retrieval
With the benefits of fast query speed and low storage cost, hashing-based image retrieval approaches have garnered considerable attention from the research community. In this paper, we propose a novel Error-Corrected Deep Cross Modal Hashing (CMH-ECC) method which uses a bitmap specifying the presence of certain facial attributes as an input query to retrieve relevant face images from the database. In this architecture, we generate compact hash codes using an end-to-end deep learning module, which effectively captures the inherent relationships between the face and attribute modality. We also integrate our deep learning module with forward error correction codes to further reduce the distance between different modalities of the same subject. Specifically, the properties of deep hashing and forward error correction codes are exploited to design a cross modal hashing framework with high retrieval performance. Experimental results using two standard datasets with facial attributes-image modalities indicate that our CMH-ECC face image retrieval model outperforms most of the current attribute-based face image retrieval approaches.
false
false
false
false
false
true
true
false
false
false
false
true
false
false
false
false
false
false
121,268
1201.0409
Code Design for the Noisy Slepian-Wolf Problem
We consider a noisy Slepian-Wolf problem where two correlated sources are separately encoded (using codes of fixed rate) and transmitted over two independent binary memoryless symmetric channels. The capacity of each channel is characterized by a single parameter which is not known at the transmitter. The goal is to design systems that retain near-optimal performance without channel knowledge at the transmitter. It was conjectured that it may be hard to design codes that perform well for symmetric channel conditions. In this work, we present a provable capacity-achieving sequence of LDGM ensembles for the erasure Slepian-Wolf problem with symmetric channel conditions. We also introduce a staggered structure which enables codes optimized for single user channels to perform well for symmetric channel conditions. We provide a generic framework for analyzing the performance of joint iterative decoding, using density evolution. Using differential evolution, we design punctured systematic LDPC codes to maximize the region of achievable channel conditions. The resulting codes are then staggered to further increase the region of achievable parameters. The main contribution of this paper is to demonstrate that properly designed irregular LDPC codes can perform well simultaneously over a wide range of channel parameters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
13,655
2407.08324
A Cantor-Kantorovich Metric Between Markov Decision Processes with Application to Transfer Learning
We extend the notion of Cantor-Kantorovich distance between Markov chains introduced by (Banse et al., 2023) in the context of Markov Decision Processes (MDPs). The proposed metric is well-defined and can be efficiently approximated given a finite horizon. Then, we provide numerical evidence that this metric can lead to interesting applications in the field of reinforcement learning. In particular, we show that it could be used for forecasting the performance of transfer learning algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
472,122
2308.01264
Exploring the psychology of LLMs' Moral and Legal Reasoning
Large language models (LLMs) exhibit expert-level performance in tasks across a wide range of different domains. Ethical issues raised by LLMs and the need to align future versions makes it important to know how state of the art models reason about moral and legal issues. In this paper, we employ the methods of experimental psychology to probe into this question. We replicate eight studies from the experimental literature with instances of Google's Gemini Pro, Anthropic's Claude 2.1, OpenAI's GPT-4, and Meta's Llama 2 Chat 70b. We find that alignment with human responses shifts from one experiment to another, and that models differ amongst themselves as to their overall alignment, with GPT-4 taking a clear lead over all other models we tested. Nonetheless, even when LLM-generated responses are highly correlated to human responses, there are still systematic differences, with a tendency for models to exaggerate effects that are present among humans, in part by reducing variance. This recommends caution with regards to proposals of replacing human participants with current state-of-the-art LLMs in psychological research and highlights the need for further research about the distinctive aspects of machine psychology.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
383,201
2501.18998
Adversarial Attacks on AI-Generated Text Detection Models: A Token Probability-Based Approach Using Embeddings
In recent years, text generation tools utilizing Artificial Intelligence (AI) have occasionally been misused across various domains, such as generating student reports or creative writings. This issue prompts plagiarism detection services to enhance their capabilities in identifying AI-generated content. Adversarial attacks are often used to test the robustness of AI-generated text detectors. This work proposes a novel textual adversarial attack on the detection models such as Fast-DetectGPT. The method employs embedding models for data perturbation, aiming at reconstructing the AI-generated texts to reduce the likelihood of detection of the true origin of the texts. Specifically, we employ different embedding techniques, including the Tsetlin Machine (TM), an interpretable approach in machine learning for this purpose. By combining synonyms and embedding similarity vectors, we demonstrate a state-of-the-art reduction in detection scores against Fast-DetectGPT. Particularly, in the XSum dataset, the detection score decreased from 0.4431 to 0.2744 AUROC, and in the SQuAD dataset, it dropped from 0.5068 to 0.3532 AUROC.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
528,957
2403.05571
DiffuSolve: Diffusion-based Solver for Non-convex Trajectory Optimization
Optimal trajectory design is computationally expensive for nonlinear and high-dimensional dynamical systems. The challenge arises from the non-convex nature of the optimization problem with multiple local optima, which usually requires a global search. Traditional numerical solvers struggle to find diverse solutions efficiently without appropriate initial guesses. In this paper, we introduce DiffuSolve, a general diffusion model-based solver for non-convex trajectory optimization. An expressive diffusion model is trained on pre-collected locally optimal solutions and efficiently samples initial guesses, which then warm-starts numerical solvers to fine-tune the feasibility and optimality. We also present DiffuSolve+, a novel constrained diffusion model with an additional loss in training that further reduces the problem constraint violations of diffusion samples. Experimental evaluations on three tasks verify the improved robustness, diversity, and a 2$\times$ to 11$\times$ increase in computational efficiency with our proposed method, which generalizes well to trajectory optimization problems of varying challenges.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
436,063
1909.07201
MuPNet: Multi-modal Predictive Coding Network for Place Recognition by Unsupervised Learning of Joint Visuo-Tactile Latent Representations
Extracting and binding salient information from different sensory modalities to determine common features in the environment is a significant challenge in robotics. Here we present MuPNet (Multi-modal Predictive Coding Network), a biologically plausible network architecture for extracting joint latent features from visuo-tactile sensory data gathered from a biomimetic mobile robot. In this study we evaluate MuPNet applied to place recognition as a simulated biomimetic robot platform explores visually aliased environments. The F1 scores demonstrate that its performance is equivalent to that of prior hand-crafted sensory feature extraction techniques under controlled conditions, with significant improvement when operating in novel environments.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
145,617
2204.07380
Crowd counting with segmentation attention convolutional neural network
Deep learning occupies an undisputed dominance in crowd counting. In this paper, we propose a novel convolutional neural network (CNN) architecture called SegCrowdNet. Despite the complex background in crowd scenes, the proposed SegCrowdNet still adaptively highlights the human head region and suppresses the non-head region by segmentation. With the guidance of an attention mechanism, the proposed SegCrowdNet pays more attention to the human head region and automatically encodes the highly refined density map. The crowd count can be obtained by integrating the density map. To adapt to the variation of crowd counts, SegCrowdNet intelligently classifies the crowd count of each image into several groups. In addition, the multi-scale features are learned and extracted in the proposed SegCrowdNet to overcome the scale variations of the crowd. To verify the effectiveness of our proposed method, extensive experiments are conducted on four challenging datasets. The results demonstrate that our proposed SegCrowdNet achieves excellent performance compared with the state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
291,680
1405.0209
Performance of PZF and MMSE Receivers in Cellular Networks with Multi-User Spatial Multiplexing
This paper characterizes the performance of cellular networks employing multiple antenna open-loop spatial multiplexing techniques. We use a stochastic geometric framework to model distance-dependent inter-cell interference. Using this framework, we analyze the coverage and rate using two linear receivers, namely, partial zero-forcing (PZF) and minimum-mean-square-estimation (MMSE) receivers. Analytical expressions are obtained for coverage and rate distribution that are suitable for fast numerical computation. In the case of the PZF receiver, we show that it is not optimal to utilize all the receive antennas for canceling interference. With $\alpha$ as the path loss exponent, $N_t$ transmit antennas, and $N_r$ receive antennas, we show that it is optimal to use $N_t\left\lceil\left(1-\frac{2}{\alpha}\right) \left( \frac{N_r}{N_t} -\frac{1}{2}\right)\right\rceil$ receive antennas for interference cancellation and the remaining antennas for signal enhancement (array gain). For both PZF and MMSE receivers, we observe that increasing the number of data streams provides an improvement in the mean data rate with diminishing returns. Also, transmitting $N_r$ streams is not optimal in terms of the mean sum rate. We observe that increasing the SM rate with a PZF receiver always degrades the cell-edge data rate, while the performance with the MMSE receiver is nearly independent of the SM rate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
32,753
2304.01663
On the Stability-Plasticity Dilemma of Class-Incremental Learning
A primary goal of class-incremental learning is to strike a balance between stability and plasticity, where models should be both stable enough to retain knowledge learned from previously seen classes, and plastic enough to learn concepts from new classes. While previous works demonstrate strong performance on class-incremental benchmarks, it is not clear whether their success comes from the models being stable, plastic, or a mixture of both. This paper aims to shed light on how effectively recent class-incremental learning algorithms address the stability-plasticity trade-off. We establish analytical tools that measure the stability and plasticity of feature representations, and employ such tools to investigate models trained with various algorithms on large-scale class-incremental benchmarks. Surprisingly, we find that the majority of class-incremental learning algorithms heavily favor stability over plasticity, to the extent that the feature extractor of a model trained on the initial set of classes is no less effective than that of the final incremental model. Our observations not only inspire two simple algorithms that highlight the importance of feature representation analysis, but also suggest that class-incremental learning approaches, in general, should strive for better feature representation learning.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
356,169
1807.00848
Client-Specific Anomaly Detection for Face Presentation Attack Detection
The one-class anomaly detection approach has previously been found to be effective in face presentation attack detection, especially in an \textit{unseen} attack scenario, where the system is exposed to novel types of attacks. This work follows the same anomaly-based formulation of the problem and analyses the merits of deploying \textit{client-specific} information for face spoofing detection. We propose training one-class client-specific classifiers (both generative and discriminative) using representations obtained from pre-trained deep convolutional neural networks. Next, based on subject-specific score distributions, a distinct threshold is set for each client, which is then used for decision making regarding a test query. Through extensive experiments using different one-class systems, it is shown that the use of client-specific information in a one-class anomaly detection formulation (both in model construction as well as decision threshold tuning) improves the performance significantly. In addition, it is demonstrated that the same set of deep convolutional features used for the recognition purposes is effective for face presentation attack detection in the class-specific one-class anomaly detection paradigm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
101,925
2111.07489
Deep Learning based Urban Vehicle Trajectory Analytics
A `trajectory' refers to a trace generated by a moving object in geographical spaces, usually represented by a series of chronologically ordered points, where each point consists of a geo-spatial coordinate set and a timestamp. Rapid advancements in location sensing and wireless communication technology enabled us to collect and store a massive amount of trajectory data. As a result, many researchers use trajectory data to analyze the mobility of various moving objects. In this dissertation, we focus on the `urban vehicle trajectory,' which refers to trajectories of vehicles in urban traffic networks, and we focus on `urban vehicle trajectory analytics.' Urban vehicle trajectory analytics offers unprecedented opportunities to understand vehicle movement patterns in urban traffic networks, including both user-centric travel experiences and system-wide spatiotemporal patterns. The spatiotemporal features of urban vehicle trajectory data are structurally correlated with each other, and consequently, many previous researchers used various methods to understand this structure. In particular, deep-learning models are attracting the attention of many researchers due to their powerful function approximation and feature representation abilities. As a result, the objective of this dissertation is to develop deep-learning based models for urban vehicle trajectory analytics to better understand the mobility patterns of urban traffic networks. Particularly, this dissertation focuses on two research topics, which have high necessity, importance and applicability: Next Location Prediction, and Synthetic Trajectory Generation. In this study, we propose various novel models for urban vehicle trajectory analytics using deep learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
266,389
1307.1584
Comparing Data-mining Algorithms Developed for Longitudinal Observational Databases
Longitudinal observational databases have become a recent interest in the post-marketing drug surveillance community due to their ability to present a new perspective for detecting negative side effects. Algorithms mining longitudinal observational databases are not restricted by many of the limitations associated with the more conventional methods that have been developed for spontaneous reporting system databases. In this paper we investigate the robustness of four recently developed algorithms that mine longitudinal observational databases by applying them to The Health Improvement Network (THIN) for six drugs with well-documented negative side effects. Our results show that none of the existing algorithms was able to consistently identify known adverse drug reactions above events related to the cause of the drug, and no algorithm was superior.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
25,643
2403.00423
Validation of ML-UQ calibration statistics using simulated reference values: a sensitivity analysis
Some popular Machine Learning Uncertainty Quantification (ML-UQ) calibration statistics do not have predefined reference values and are mostly used in comparative studies. In consequence, calibration is almost never validated and the diagnostic is left to the appreciation of the reader. Simulated reference values, based on synthetic calibrated datasets derived from actual uncertainties, have been proposed to palliate this problem. As the generative probability distribution for the simulation of synthetic errors is often not constrained, the sensitivity of simulated reference values to the choice of generative distribution might be problematic, casting doubt on the calibration diagnostic. This study explores various facets of this problem, and shows that some statistics are excessively sensitive to the choice of generative distribution to be used for validation when the generative distribution is unknown. This is the case, for instance, of the correlation coefficient between absolute errors and uncertainties (CC) and of the expected normalized calibration error (ENCE). A robust validation workflow to deal with simulated reference values is proposed.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
433,974
1910.13520
Digital Twin approach to Clinical DSS with Explainable AI
We propose a digital twin approach to improve healthcare decision support systems with a combination of domain knowledge and data. Domain knowledge helps build decision thresholds that doctors can use to determine a risk or recommend a treatment or test based on the specific patient condition. However, these assessments tend to be highly subjective and differ from doctor to doctor and from patient to patient. We propose a system where we collate this subjective risk by compiling data from different doctors treating different patients and build a machine learning model that learns from this knowledge. Then using state-of-the-art explainability concepts we derive explanations from this model. These explanations give us a summary of different doctor domain knowledge applied in different cases to give a more generic perspective. Also these explanations are specific to a particular patient and are customized for their condition. This is a form of a digital twin for the patient that can now be used to enhance decision boundaries for earlier defined decision tables that help in diagnosis. We will show an example of running this analysis for a liver disease risk diagnosis.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
151,410
2407.13584
Connecting Consistency Distillation to Score Distillation for Text-to-3D Generation
Although recent advancements in text-to-3D generation have significantly improved generation quality, issues like limited level of detail and low fidelity still persist, which requires further improvement. To understand the essence of those issues, we thoroughly analyze current score distillation methods by connecting theories of consistency distillation to score distillation. Based on the insights acquired through analysis, we propose an optimization framework, Guided Consistency Sampling (GCS), integrated with 3D Gaussian Splatting (3DGS) to alleviate those issues. Additionally, we have observed the persistent oversaturation in the rendered views of generated 3D assets. From experiments, we find that it is caused by unwanted accumulated brightness in 3DGS during optimization. To mitigate this issue, we introduce a Brightness-Equalized Generation (BEG) scheme in 3DGS rendering. Experimental results demonstrate that our approach generates 3D assets with more details and higher fidelity than state-of-the-art methods. The codes are released at https://github.com/LMozart/ECCV2024-GCS-BEG.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
474,433
2110.15200
Generating 3D Molecules Conditional on Receptor Binding Sites with Deep Generative Models
The goal of structure-based drug discovery is to find small molecules that bind to a given target protein. Deep learning has been used to generate drug-like molecules with certain cheminformatic properties, but has not yet been applied to generating 3D molecules predicted to bind to proteins by sampling the conditional distribution of protein-ligand binding interactions. In this work, we describe for the first time a deep learning system for generating 3D molecular structures conditioned on a receptor binding site. We approach the problem using a conditional variational autoencoder trained on an atomic density grid representation of cross-docked protein-ligand structures. We apply atom fitting and bond inference procedures to construct valid molecular conformations from generated atomic densities. We evaluate the properties of the generated molecules and demonstrate that they change significantly when conditioned on mutated receptors. We also explore the latent space learned by our generative model using sampling and interpolation techniques. This work opens the door for end-to-end prediction of stable bioactive molecules from protein structures with deep learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
263,803
2410.22149
Capacity Control is an Effective Memorization Mitigation Mechanism in Text-Conditional Diffusion Models
In this work, we present compelling evidence that controlling model capacity during fine-tuning can effectively mitigate memorization in diffusion models. Specifically, we demonstrate that adopting Parameter-Efficient Fine-Tuning (PEFT) within the pre-train fine-tune paradigm significantly reduces memorization compared to traditional full fine-tuning approaches. Our experiments utilize the MIMIC dataset, which comprises image-text pairs of chest X-rays and their corresponding reports. The results, evaluated through a range of memorization and generation quality metrics, indicate that PEFT not only diminishes memorization but also enhances downstream generation quality. Additionally, PEFT methods can be seamlessly combined with existing memorization mitigation techniques for further improvement. The code for our experiments is available at: https://github.com/Raman1121/Diffusion_Memorization_HPO
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
503,522
1510.01440
Harvesting Discriminative Meta Objects with Deep CNN Features for Scene Classification
Recent work on scene classification still makes use of generic CNN features in a rudimentary manner. In this ICCV 2015 paper, we present a novel pipeline built upon deep CNN features to harvest discriminative visual objects and parts for scene classification. We first use a region proposal technique to generate a set of high-quality patches potentially containing objects, and apply a pre-trained CNN to extract generic deep features from these patches. Then we perform both unsupervised and weakly supervised learning to screen these patches and discover discriminative ones representing category-specific objects and parts. We further apply discriminative clustering enhanced with local CNN fine-tuning to aggregate similar objects and parts into groups, called meta objects. A scene image representation is constructed by pooling the feature response maps of all the learned meta objects at multiple spatial scales. We have confirmed that the scene image representation obtained using this new pipeline is capable of delivering state-of-the-art performance on two popular scene benchmark datasets, MIT Indoor 67~\cite{MITIndoor67} and Sun397~\cite{Sun397}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
47,622
1805.01631
Is Information in the Brain Represented in Continuous or Discrete Form?
The question of continuous-versus-discrete information representation in the brain is a fundamental yet unresolved question. Historically, most analyses assume a continuous representation without considering the discrete alternative. Our work explores the plausibility of both, answering the question from a communications systems engineering perspective. Using Shannon's communications theory, we posit that information in the brain is represented in discrete form. We address this hypothesis using 2 approaches. First, we identify the fundamental communication requirements of the brain. Second, we estimate the symbol error probability and channel capacity for a continuous information representation. Our work concludes that information cannot be communicated and represented reliably in the brain using a continuous representation - it has to be in a discrete form. This is a major demarcation from conventional and current wisdom. We apply this discrete result to the 4 major neural coding hypotheses, and illustrate the use of discrete ISI neural coding in analyzing electrophysiology experimental data. We further posit and illustrate a plausible direct link between Weber's Law and discrete neural coding. We end by outlining a number of key research questions on discrete neural coding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
96,687
1712.02109
Multi-channel Encoder for Neural Machine Translation
Attention-based Encoder-Decoder is an effective architecture for neural machine translation (NMT), which typically relies on recurrent neural networks (RNN) to build the blocks that will later be called by the attentive reader during the decoding process. This design of encoder yields a relatively uniform composition of the source sentence, despite the gating mechanism employed in the encoding RNN. On the other hand, we often hope the decoder takes pieces of the source sentence at varying levels suiting its own linguistic structure: for example, we may want to take the entity name in its raw form while taking an idiom as a perfectly composed unit. Motivated by this demand, we propose the Multi-channel Encoder (MCE), which enhances encoding components with different levels of composition. More specifically, in addition to the hidden state of the encoding RNN, MCE takes 1) the original word embedding for raw encoding with no composition, and 2) a particular design of external memory in a Neural Turing Machine (NTM) for more complex composition, while all three encoding strategies are properly blended during decoding. An empirical study on Chinese-English translation shows that our model can improve by 6.52 BLEU points upon a strong open-source NMT system: DL4MT1. On the WMT14 English-French task, our single shallow system achieves BLEU=38.8, comparable with the state-of-the-art deep models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
86,240
2009.12146
GEFA: Early Fusion Approach in Drug-Target Affinity Prediction
Predicting the interaction between a compound and a target is crucial for rapid drug repurposing. Deep learning has been successfully applied to the drug-target affinity (DTA) problem. However, previous deep learning-based methods ignore modeling the direct interactions between drug and protein residues. This would lead to inaccurate learning of target representation which may change due to the drug binding effects. In addition, previous DTA methods learn protein representation solely based on a small number of protein sequences in DTA datasets while neglecting the use of proteins outside of the DTA datasets. We propose GEFA (Graph Early Fusion Affinity), a novel graph-in-graph neural network with attention mechanism to address the changes in target representation because of the binding effects. Specifically, a drug is modeled as a graph of atoms, which then serves as a node in a larger graph of the residues-drug complex. The resulting model is an expressive deep nested graph neural network. We also use pre-trained protein representation powered by the recent effort of learning contextualized protein representation. The experiments are conducted under different settings to evaluate scenarios such as novel drugs or targets. The results demonstrate the effectiveness of the pre-trained protein embedding and the advantages of our GEFA in modeling the nested graph for drug-target interaction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
197,346
2205.13681
Sequential Nature of Recommender Systems Disrupts the Evaluation Process
Datasets are often generated in a sequential manner, where the previous samples and intermediate decisions or interventions affect subsequent samples. This is especially prominent in cases where there are significant human-AI interactions, such as in recommender systems. To characterize the importance of this relationship across samples, we propose to use adversarial attacks on popular evaluation processes. We present sequence-aware boosting attacks and provide a lower bound on the amount of extra information that can be exploited from a confidential test set solely based on the order of the observed data. We use real and synthetic data to test our methods and show that the evaluation process on the MovieLens-100k dataset can be affected by $\sim1\%$, which is important when considering the close competition. Codes are publicly available.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
299,036
1906.04881
Multiple instance learning with graph neural networks
Multiple instance learning (MIL) aims to learn the mapping between a bag of instances and the bag-level label. In this paper, we propose a new end-to-end graph neural network (GNN) based algorithm for MIL: we treat each bag as a graph and use GNN to learn the bag embedding, in order to explore the useful structural information among instances in bags. The final graph representation is fed into a classifier for label prediction. Our algorithm is the first attempt to use GNN for MIL. We empirically show that the proposed algorithm achieves state-of-the-art performance on several popular MIL data sets without losing model interpretability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
134,864
2303.00894
Active Reward Learning from Multiple Teachers
Reward learning algorithms utilize human feedback to infer a reward function, which is then used to train an AI system. This human feedback is often a preference comparison, in which the human teacher compares several samples of AI behavior and chooses which they believe best accomplishes the objective. While reward learning typically assumes that all feedback comes from a single teacher, in practice these systems often query multiple teachers to gather sufficient training data. In this paper, we investigate this disparity, and find that algorithmic evaluation of these different sources of feedback facilitates more accurate and efficient reward learning. We formally analyze the value of information (VOI) when reward learning from teachers with varying levels of rationality, and define and evaluate an algorithm that utilizes this VOI to actively select teachers to query for feedback. Surprisingly, we find that it is often more informative to query comparatively irrational teachers. By formalizing this problem and deriving an analytical solution, we hope to facilitate improvement in reward learning approaches to aligning AI behavior with human values.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
348,739
2209.12762
Just-In-Time Learning for Operational Risk Assessment in Power Grids
In a grid with a significant share of renewable generation, operators will need additional tools to evaluate the operational risk due to the increased volatility in load and generation. The computational requirements of the forward uncertainty propagation problem, which must solve numerous security-constrained economic dispatch (SCED) optimizations, is a major barrier for such real-time risk assessment. This paper proposes a Just-In-Time Risk Assessment Learning Framework (JITRALF) as an alternative. JITRALF trains risk surrogates, one for each hour in the day, using Machine Learning (ML) to predict the quantities needed to estimate risk, without explicitly solving the SCED problem. This significantly reduces the computational burden of the forward uncertainty propagation and allows for fast, real-time risk estimation. The paper also proposes a novel, asymmetric loss function and shows that models trained using the asymmetric loss perform better than those using symmetric loss functions. JITRALF is evaluated on the French transmission system for assessing the risk of insufficient operating reserves, the risk of load shedding, and the expected operating cost.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
319,645
1607.03183
How to calculate partition functions using convex programming hierarchies: provable bounds for variational methods
We consider the problem of approximating partition functions for Ising models. We make use of recent tools in combinatorial optimization: the Sherali-Adams and Lasserre convex programming hierarchies, in combination with variational methods, to get algorithms for calculating partition functions in these families. These techniques give new, non-trivial approximation guarantees for the partition function beyond the regime of correlation decay. They also generalize some classical results from statistical physics about the Curie-Weiss ferromagnetic Ising model, as well as provide a partition function counterpart of classical results about max-cut on dense graphs \cite{arora1995polynomial}. With this, we connect techniques from two apparently disparate research areas -- optimization and counting/partition function approximations (i.e., \#P-type problems). Furthermore, we design, to the best of our knowledge, the first provable convex variational methods. Though in the literature there are a host of convex versions of variational methods \cite{wainwright2003tree, wainwright2005new, heskes2006convexity, meshi2009convexifying}, they come with no guarantees (apart from some extremely special cases, like e.g. the graph has a single cycle \cite{weiss2000correctness}). We consider dense and low threshold rank graphs, and interestingly, the reason our approach works on these types of graphs is because local correlations propagate to global correlations -- completely the opposite of algorithms based on correlation decay. In the process we design novel entropy approximations based on the low-order moments of a distribution. Our proof techniques are very simple and generic, and likely to be applicable to many other settings other than Ising models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
58,465
1408.1560
MacWilliams identities for poset level weight enumerators of linear codes
Codes over various metrics such as Rosenbloom-Tsfasman (RT), Lee, etc. have been considered. Recently, codes over poset metrics have been studied. The poset metric is a great generalization of many metrics, especially the well-known ones such as the RT and the Hamming metrics. The poset metric can be realized on channels with localized error occurrences. It has been shown that MacWilliams identities are not admissible for codes over poset metrics in general [Kim and Oh, 2005]. Lately, to overcome this problem, some further studies on MacWilliams identities over poset metrics have been presented. In this paper, we introduce new poset level weight enumerators of linear codes over Frobenius commutative rings. We derive MacWilliams-type identities for each of the given enumerators, which greatly generalize the previous results discussed in the literature. Most of the weight enumerators in the literature, such as the Hamming, Rosenbloom-Tsfasman and complete m-spotty weight enumerators, follow as corollaries to these identities.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
35,188
2312.07540
diff History for Neural Language Agents
Neural Language Models (LMs) offer an exciting solution for general-purpose embodied control. However, a key technical issue arises when using an LM-based controller: environment observations must be converted to text, which, coupled with history, results in long and verbose textual prompts. As a result, prior work in LM agents is limited to restricted domains with small observation size as well as minimal needs for interaction history or instruction tuning. In this paper, we introduce diff history, a simple and highly effective solution to these issues. By applying the Unix diff command on consecutive text observations in the interaction histories used to prompt LM policies, we can both abstract away redundant information and focus the content of textual inputs on the salient changes in the environment. On NetHack, an unsolved video game that requires long-horizon reasoning for decision-making, LMs tuned with diff history match state-of-the-art performance for neural agents while needing 1800x fewer training examples compared to prior work. Even on the simpler BabyAI-Text environment with concise text observations, we find that although diff history increases the length of prompts, the representation it provides offers a 25% improvement in the efficiency of low-sample instruction tuning. Further, we show that diff history scales favorably across different tuning dataset sizes. We open-source our code and data at https://diffhistory.github.io.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
414,965
2407.05118
SHINE: Saliency-aware HIerarchical NEgative Ranking for Compositional Temporal Grounding
Temporal grounding, also known as video moment retrieval, aims at locating video segments corresponding to a given query sentence. The compositional nature of natural language enables the localization beyond predefined events, posing a certain challenge to the compositional generalizability of existing methods. Recent studies establish the correspondence between videos and queries through a decompose-reconstruct manner to achieve compositional generalization. However, they only consider dominant primitives and build negative queries through random sampling and recombination, resulting in semantically implausible negatives that hinder the models from learning rational compositions. In addition, recent DETR-based methods still underperform in compositional temporal grounding, showing irrational saliency responses when given negative queries that have subtle differences from positive queries. To address these limitations, we first propose a large language model-driven method for negative query construction, utilizing GPT-3.5-Turbo to generate semantically plausible hard negative queries. Subsequently, we introduce a coarse-to-fine saliency ranking strategy, which encourages the model to learn the multi-granularity semantic relationships between videos and hierarchical negative queries to boost compositional generalization. Extensive experiments on two challenging benchmarks validate the effectiveness and generalizability of our proposed method. Our code is available at https://github.com/zxccade/SHINE.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
470,828
2108.04932
Spectrum Shaping For Multiple Link Discovery in 6G THz Systems
This paper presents a novel antenna configuration to measure directions of multiple signal sources at the receiver in a THz mobile network via a single channel measurement. Directional communication is an intrinsic attribute of THz wireless networks and the knowledge of direction should be harvested continuously to maintain link quality. Direction discovery can potentially impose an immense burden on the network that limits its communication capacity exceedingly. To utterly mitigate direction discovery overhead, we propose a novel technique called spectrum shaping capable of measuring direction, power, and relative distance of propagation paths via a single measurement. We demonstrate that the proposed technique is also able to measure the transmitter antenna orientation. We evaluate the performance of the proposed design in several scenarios and show that the introduced technique performs similarly to a large array of antennas while attaining a much simpler hardware architecture. Results show that spectrum shaping with only two antennas placed 0.5 mm, 5 mm, and 1 cm apart performs direction of arrival estimation similarly to a much more complex uniform linear array equipped with 7, 60, and 120 antennas, respectively.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
250,154
2207.00847
Combinatory Adjoints and Differentiation
We develop a compositional approach for automatic and symbolic differentiation based on categorical constructions in functional analysis where derivatives are linear functions on abstract vectors rather than being limited to scalars, vectors, matrices or tensors represented as multi-dimensional arrays. We show that both symbolic and automatic differentiation can be performed using a differential calculus for generating linear functions representing Fr\'echet derivatives based on rules for primitive, constant, linear and bilinear functions as well as their sequential and parallel composition. Linear functions are represented in a combinatory domain-specific language. Finally, we provide a calculus for symbolically computing the adjoint of a derivative without using matrices, which are too inefficient to use on high-dimensional spaces. The resulting symbolic representation of a derivative retains the data-parallel operations from the input program. The combination of combinatory differentiation and computing formal adjoints turns out to be behaviorally equivalent to reverse-mode automatic differentiation. In particular, it provides opportunities for optimizations where matrices are too inefficient to represent linear functions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
305,930
2501.00334
Loss-Aware Curriculum Learning for Chinese Grammatical Error Correction
Chinese grammatical error correction (CGEC) aims to detect and correct errors in input Chinese sentences. Recently, Pre-trained Language Models (PLMs) have been employed to improve performance. However, current approaches ignore that correction difficulty varies across instances and treat these samples equally, which increases the difficulty of model learning. To address this problem, we propose a multi-granularity Curriculum Learning (CL) framework. Specifically, we first calculate the correction difficulty of these samples and feed them into the model from easy to hard, batch by batch. Then Instance-Level CL is employed to help the model optimize in the appropriate direction automatically by regulating the loss function. Extensive experimental results and comprehensive analyses on various datasets prove the effectiveness of our method.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
521,628
2410.16012
Massimo: Public Queue Monitoring and Management using Mass-Spring Model
An efficient system for queue control and regulation in public spaces is very important in order to avoid traffic jams and to improve customer satisfaction. This article offers a detailed roadmap, based on a merger of intelligent systems, for creating efficient queue-management systems in public places. Through the utilization of different technologies, i.e., computer vision, machine learning algorithms, and deep learning, our system provides accurate information about whether a place is crowded, along with the necessary actions to be taken.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
500,840
2007.16044
Low Dimensional State Representation Learning with Reward-shaped Priors
Reinforcement Learning has been able to solve many complicated robotics tasks in an end-to-end fashion without any need for feature engineering. However, learning the optimal policy directly from the sensory inputs, i.e., the observations, often requires processing and storing a huge amount of data. In the context of robotics, the cost of data from real robotics hardware is usually very high, so solutions that achieve high sample efficiency are needed. We propose a method that aims at learning a mapping from the observations into a lower-dimensional state space. This mapping is learned with unsupervised learning, using loss functions shaped to incorporate prior knowledge of the environment and the task. Using the samples from the state space, the optimal policy is quickly and efficiently learned. We test the method on several mobile robot navigation tasks in a simulation environment and also on a real robot.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
189,834
2206.08749
From a few Accurate 2D Correspondences to 3D Point Clouds
Key points, correspondences, projection matrices, point clouds and dense clouds are the skeleton of image-based 3D reconstruction, of which point clouds play the important role of generating a realistic and natural model of a 3D reconstructed object. To achieve a good 3D reconstruction, the point clouds must cover almost all of the surface of the object. In this article, with the main purpose of building point clouds that cover the entire surface of the object, we propose a new feature named a geodesic feature, or geo-feature. Based on the new geo-feature, if there are several (given) initial world points on the object's surface, along with accurately estimated projection matrices, new world points on the geodesics connecting any two of these given world points can be reconstructed. Then the regions on the surface bordered by these initial world points will be covered by the point clouds. Thus, if the initial world points are spread around the surface, the point clouds will cover the entire surface. This article proposes a new method to estimate the world points and projection matrices from their correspondences. This method derives closed-form and iterative solutions for the world points and projection matrices, and proves that when the number of world points is less than seven and the number of images is at least five, the proposed solutions are globally optimal. We propose an algorithm named World points from their Correspondences (WPfC) to estimate the world points and projection matrices from their correspondences, and another algorithm named Creating Point Clouds (CrPC) to create the point clouds from the world points and projection matrices given by the first algorithm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
303,279
2402.14095
Zero-shot generalization across architectures for visual classification
Generalization to unseen data is a key desideratum for deep networks, but its relation to classification accuracy is unclear. Using a minimalist vision dataset and a measure of generalizability, we show that popular networks, from deep convolutional networks (CNNs) to transformers, vary in their power to extrapolate to unseen classes both across layers and across architectures. Accuracy is not a good predictor of generalizability, and generalization varies non-monotonically with layer depth.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
431,527
1304.3819
SybilFence: Improving Social-Graph-Based Sybil Defenses with User Negative Feedback
Detecting and suspending fake accounts (Sybils) in online social networking (OSN) services protects both OSN operators and OSN users from illegal exploitation. Existing social-graph-based defense schemes effectively bound the number of accepted Sybils by the total number of social connections between Sybils and non-Sybil users. However, Sybils may still evade the defenses by soliciting many social connections to real users. We propose SybilFence, a system that improves over social-graph-based Sybil defenses to further thwart Sybils. SybilFence is based on the observation that even well-maintained fake accounts inevitably receive a significant amount of negative user feedback, such as rejections of their friend requests. Our key idea is to discount the social edges of users that have received negative feedback, thereby limiting the impact of Sybils' social edges. Preliminary simulation results show that our proposal is more resilient to attacks in which fake accounts continuously solicit social connections over time.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
23,928