Dataset schema:

id: string (length 9–16)
title: string (length 4–278)
abstract: string (length 3–4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (0–541k)
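The boolean label columns above are easiest to work with once collapsed into per-record label lists. A minimal sketch of that decoding, assuming records arrive as plain dicts keyed by the column names in the schema (the sample record mirrors the first row below; flags not listed default to False):

```python
# Boolean label columns, in the order they appear in the schema above.
LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def labels_of(record: dict) -> list[str]:
    """Collapse the boolean label columns of one record into a label list."""
    return [col for col in LABEL_COLS if record.get(col, False)]

# One record, mirroring the first row below (flags not listed are False).
record = {
    "id": "1612.01597",
    "cs.LG": True,
    "cs.IT": True,
    "Other": True,
}

print(labels_of(record))  # ['cs.LG', 'cs.IT', 'Other']
```

Because `labels_of` iterates over `LABEL_COLS`, the output order is always the schema's column order, regardless of insertion order in the record dict.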
1612.01597
Deterministic and Probabilistic Conditions for Finite Completability of Low-Tucker-Rank Tensor
We investigate the fundamental conditions on the sampling pattern, i.e., locations of the sampled entries, for finite completability of a low-rank tensor given some components of its Tucker rank. In order to find the deterministic necessary and sufficient conditions, we propose an algebraic geometric analysis on the Tucker manifold, which allows us to incorporate multiple rank components in the proposed analysis in contrast with the conventional geometric approaches on the Grassmannian manifold. This analysis characterizes the algebraic independence of a set of polynomials defined based on the sampling pattern, which is closely related to finite completion. Probabilistic conditions are then studied and a lower bound on the sampling probability is given, which guarantees that the proposed deterministic conditions on the sampling patterns for finite completability hold with high probability. Furthermore, using the proposed geometric approach for finite completability, we propose a sufficient condition on the sampling pattern that ensures there exists exactly one completion for the sampled tensor.
labels: cs.LG, cs.IT, Other
__index_level_0__: 65,112
2501.01564
Semialgebraic Neural Networks: From roots to representations
Many numerical algorithms in scientific computing -- particularly in areas like numerical linear algebra, PDE simulation, and inverse problems -- produce outputs that can be represented by semialgebraic functions; that is, the graph of the computed function can be described by finitely many polynomial equalities and inequalities. In this work, we introduce Semialgebraic Neural Networks (SANNs), a neural network architecture capable of representing any bounded semialgebraic function, and computing such functions up to the accuracy of a numerical ODE solver chosen by the programmer. Conceptually, we encode the graph of the learned function as the kernel of a piecewise polynomial selected from a class of functions whose roots can be evaluated using a particular homotopy continuation method. We show by construction that the SANN architecture is able to execute this continuation method, thus evaluating the learned semialgebraic function. Furthermore, the architecture can exactly represent even discontinuous semialgebraic functions by executing a continuation method on each connected component of the target function. Lastly, we provide example applications of these networks and show they can be trained with traditional deep-learning techniques.
labels: cs.LG, cs.NE, Other
__index_level_0__: 522,120
1912.05665
Managing Machine Learning Workflow Components
Machine Learning Workflows (MLWfs) have become essential and a disruptive approach in problem-solving over several industries. However, the development process of MLWfs may be complicated, hard to achieve, time-consuming, and error-prone. To handle this problem, in this paper, we introduce machine learning workflow management (MLWfM) as a technique to aid the development and reuse of MLWfs and their components through three aspects: representation, execution, and creation. More precisely, we discuss our approach to structure the MLWfs' components and their metadata to aid retrieval and reuse of components in new MLWfs. Also, we consider the execution of these components within a tool. The hybrid knowledge representation, called Hyperknowledge, frames our methodology, supporting the three MLWfM's aspects. To validate our approach, we show a practical use case in the Oil & Gas industry.
labels: cs.LG
__index_level_0__: 157,161
2409.09312
Registration between Point Cloud Streams and Sequential Bounding Boxes via Gradient Descent
In this paper, we propose an algorithm for registering sequential bounding boxes with point cloud streams. Unlike popular point cloud registration techniques, the alignment of the point cloud and the bounding box can rely on the properties of the bounding box, such as size, shape, and temporal information, which provides substantial support and performance gains. Motivated by this, we propose a new approach to tackle this problem. Specifically, we model the registration process through an overall objective function that includes the final goal and all constraints. We then optimize the function using gradient descent. Our experiments show that the proposed method performs remarkably well with a 40\% improvement in IoU and demonstrates more robust registration between point cloud streams and sequential bounding boxes.
labels: cs.RO, cs.CV
__index_level_0__: 488,265
1503.03191
A model-based approach to recovering the structure of a plant from images
We present a method for recovering the structure of a plant directly from a small set of widely-spaced images. Structure recovery is more complex than shape estimation, but the resulting structure estimate is more closely related to phenotype than is a 3D geometric model. The method we propose is applicable to a wide variety of plants, but is demonstrated on wheat. Wheat is made up of thin elements with few identifiable features, making it difficult to analyse using standard feature matching techniques. Our method instead analyses the structure of plants using only their silhouettes. We employ a generate-and-test method, using a database of manually modelled leaves and a model for their composition to synthesise plausible plant structures which are evaluated against the images. The method is capable of efficiently recovering accurate estimates of plant structure in a wide variety of imaging scenarios, with no manual intervention.
labels: cs.CV
__index_level_0__: 41,024
2305.16344
Enabling and Analyzing How to Efficiently Extract Information from Hybrid Long Documents with LLMs
Large Language Models (LLMs) demonstrate exceptional performance in textual understanding and tabular reasoning tasks. However, their ability to comprehend and analyze hybrid text, containing textual and tabular data, remains underexplored. In this research, we specialize in harnessing the potential of LLMs to comprehend critical information from financial reports, which are hybrid long-documents. We propose an Automated Financial Information Extraction (AFIE) framework that enhances LLMs' ability to comprehend and extract information from financial reports. To evaluate AFIE, we develop a Financial Reports Numerical Extraction (FINE) dataset and conduct an extensive experimental analysis. Our framework is effectively validated on GPT-3.5 and GPT-4, yielding average accuracy increases of 53.94% and 33.77%, respectively, compared to a naive method. These results suggest that the AFIE framework offers improved accuracy for automated numerical extraction from complex, hybrid documents.
labels: cs.AI, cs.CL
__index_level_0__: 368,038
1912.00979
KernelNet: A Data-Dependent Kernel Parameterization for Deep Generative Modeling
Learning with kernels is an important concept in machine learning. Standard approaches for kernel methods often use predefined kernels that require careful selection of hyperparameters. To mitigate this burden, we propose in this paper a framework to construct and learn a data-dependent kernel based on random features and implicit spectral distributions that are parameterized by deep neural networks. The constructed network (called KernelNet) can be applied to deep generative modeling in various scenarios, including two popular learning paradigms in deep generative models, MMD-GAN and implicit Variational Autoencoder (VAE). We show that our proposed kernel indeed exists in applications and is guaranteed to be positive definite. Furthermore, the induced Maximum Mean Discrepancy (MMD) can endow the continuity property in weak topology by simple regularization. Extensive experiments indicate that our proposed KernelNet consistently achieves better performance compared to related methods.
labels: cs.LG, cs.CV
__index_level_0__: 155,948
2408.09070
CodeTaxo: Enhancing Taxonomy Expansion with Limited Examples via Code Language Prompts
Taxonomies play a crucial role in various applications by providing a structural representation of knowledge. The task of taxonomy expansion involves integrating emerging concepts into existing taxonomies by identifying appropriate parent concepts for these new query concepts. Previous approaches typically relied on self-supervised methods that generate annotation data from existing taxonomies. However, these methods are less effective when the existing taxonomy is small (fewer than 100 entities). In this work, we introduce \textsc{CodeTaxo}, a novel approach that leverages large language models through code language prompts to capture the taxonomic structure. Extensive experiments on five real-world benchmarks from different domains demonstrate that \textsc{CodeTaxo} consistently achieves superior performance across all evaluation metrics, significantly outperforming previous state-of-the-art methods. The code and data are available at \url{https://github.com/QingkaiZeng/CodeTaxo-Pub}.
labels: cs.IR, cs.CL
__index_level_0__: 481,265
2411.14349
Agnostic Learning of Arbitrary ReLU Activation under Gaussian Marginals
We consider the problem of learning an arbitrarily-biased ReLU activation (or neuron) over Gaussian marginals with the squared loss objective. Despite the ReLU neuron being the basic building block of modern neural networks, we still do not understand the basic algorithmic question of whether one arbitrary ReLU neuron is learnable in the non-realizable setting. In particular, all existing polynomial time algorithms only provide approximation guarantees for the better-behaved unbiased setting or restricted bias setting. Our main result is a polynomial time statistical query (SQ) algorithm that gives the first constant factor approximation for arbitrary bias. It outputs a ReLU activation that achieves a loss of $O(\mathrm{OPT}) + \varepsilon$ in time $\mathrm{poly}(d,1/\varepsilon)$, where $\mathrm{OPT}$ is the loss obtained by the optimal ReLU activation. Our algorithm presents an interesting departure from existing algorithms, which are all based on gradient descent and thus fall within the class of correlational statistical query (CSQ) algorithms. We complement our algorithmic result by showing that no polynomial time CSQ algorithm can achieve a constant factor approximation. Together, these results shed light on the intrinsic limitation of gradient descent, while identifying arguably the simplest setting (a single neuron) where there is a separation between SQ and CSQ algorithms.
labels: cs.LG, Other
__index_level_0__: 510,120
1704.06375
Quantum Codes from Linear Codes over Finite Chain Rings
In this paper, we provide two methods of constructing quantum codes from linear codes over finite chain rings. The first one is derived from the Calderbank-Shor-Steane (CSS) construction applied to self-dual codes over finite chain rings. The second construction is derived from the CSS construction applied to Gray images of the linear codes over finite chain ring $\mathbb{F}_{p^{2m}}+u\mathbb{F}_{p^{2m}}$. The good parameters of quantum codes from cyclic codes over finite chain rings are obtained.
labels: cs.IT
__index_level_0__: 72,163
2404.07544
From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples
We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3, etc) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance rivaling (or even outperforming) that of traditional supervised methods such as Random Forest, Bagging, or Gradient Boosting. For example, on the challenging Friedman #2 regression dataset, Claude 3 outperforms many supervised methods such as AdaBoost, SVM, Random Forest, KNN, or Gradient Boosting. We then investigate how well the performance of large language models scales with the number of in-context exemplars. We borrow from the notion of regret from online learning and empirically show that LLMs are capable of obtaining a sub-linear regret.
labels: cs.AI, cs.CL
__index_level_0__: 445,877
1811.05154
Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
We propose a bandit algorithm that explores by randomizing its history of rewards. Specifically, it pulls the arm with the highest mean reward in a non-parametric bootstrap sample of its history with pseudo rewards. We design the pseudo rewards such that the bootstrap mean is optimistic with a sufficiently high probability. We call our algorithm Giro, which stands for garbage in, reward out. We analyze Giro in a Bernoulli bandit and derive a $O(K \Delta^{-1} \log n)$ bound on its $n$-round regret, where $\Delta$ is the difference in the expected rewards of the optimal and the best suboptimal arms, and $K$ is the number of arms. The main advantage of our exploration design is that it easily generalizes to structured problems. To show this, we propose contextual Giro with an arbitrary reward generalization model. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that it performs well.
labels: cs.LG
__index_level_0__: 113,254
2105.14461
A Hybrid SIE-PDE Formulation Without Boundary Condition Requirement for Transverse Magnetic Electromagnetic Analysis
A hybrid surface integral equation partial differential equation (SIE-PDE) formulation without the boundary condition requirement is proposed to solve the transverse magnetic (TM) electromagnetic problems. In the proposed formulation, the computational domain is decomposed into two overlapping domains: the SIE and PDE domains. In the SIE domain, complex structures with piecewise homogeneous media, e.g., highly conductive media, are included. An equivalent model for those structures is constructed by replacing them with the background medium and introducing a surface equivalent electric current density on an enclosed boundary to represent their electromagnetic effects. The remaining computational domain and homogeneous background medium replaced domain consist of the PDE domain, in which inhomogeneous or non-isotropic media are included. Through combining the surface equivalent electric current density and the inhomogeneous Helmholtz equation, a hybrid SIE-PDE formulation is derived. It requires no boundary conditions, and is mathematically equivalent to the original physical model. Through careful construction of basis functions to expand electric fields and the equivalent current density, the discretized formulation is made compatible with the SIE and PDE domain interface. The accuracy and efficiency are validated through two numerical examples. Results show that the proposed SIE-PDE formulation can obtain accurate results, and significant performance improvements in terms of CPU time and memory consumption compared with the FEM are achieved.
labels: cs.CE, Other
__index_level_0__: 237,682
2305.19474
Ethical Considerations for Machine Translation of Indigenous Languages: Giving a Voice to the Speakers
In recent years machine translation has become very successful for high-resource language pairs. This has also sparked new interest in research on the automatic translation of low-resource languages, including Indigenous languages. However, the latter are deeply related to the ethnic and cultural groups that speak (or used to speak) them. The data collection, modeling and deploying machine translation systems thus result in new ethical questions that must be addressed. Motivated by this, we first survey the existing literature on ethical considerations for the documentation, translation, and general natural language processing for Indigenous languages. Afterward, we conduct and analyze an interview study to shed light on the positions of community leaders, teachers, and language activists regarding ethical concerns for the automatic translation of their languages. Our results show that the inclusion, at different degrees, of native speakers and community members is vital to performing better and more ethical research on Indigenous languages.
labels: cs.CL
__index_level_0__: 369,541
2401.13087
Open-source data pipeline for street-view images: a case study on community mobility during COVID-19 pandemic
Street View Images (SVI) are a common source of valuable data for researchers. Researchers have used SVI data for estimating pedestrian volumes, demographic surveillance, and to better understand built and natural environments in cityscapes. However, the most common source of publicly available SVI data is Google Street View. Google Street View images are collected infrequently, making temporal analysis challenging, especially in low population density areas. Our main contribution is the development of an open-source data pipeline for processing 360-degree video recorded from a car-mounted camera. The video data is used to generate SVIs, which then can be used as an input for temporal analysis. We demonstrate the use of the pipeline by collecting a SVI dataset over a 38-month longitudinal survey of Seattle, WA, USA during the COVID-19 pandemic. The output of our pipeline is validated through statistical analyses of pedestrian traffic in the images. We confirm known results in the literature and provide new insights into outdoor pedestrian traffic patterns. This study demonstrates the feasibility and value of collecting and using SVI for research purposes beyond what is possible with currently available SVI data. Limitations and future improvements on the data pipeline and case study are also discussed.
labels: cs.CV
__index_level_0__: 423,604
2101.06937
$(\epsilon, n)$ Fixed-Length Strong Coordination Capacity
This paper investigates the problem of synthesizing joint distributions in the finite-length regime. For a fixed blocklength $n$ and an upper bound on the distribution approximation $\epsilon$, we prove a capacity result for fixed-length strong coordination. It is shown analytically that the rate conditions for the fixed-length regime are lower-bounded by the mutual information that appears in the asymptotical condition plus $Q^{-1} \left(\epsilon \right) \sqrt{ V/n}$, where $V$ is the channel dispersion, and $Q^{-1}$ is the inverse of the Gaussian cumulative distribution function.
labels: cs.IT
__index_level_0__: 215,891
0712.4102
Digital Ecosystems: Evolving Service-Oriented Architectures
We view Digital Ecosystems to be the digital counterparts of biological ecosystems, exploiting the self-organising properties of biological ecosystems, which are considered to be robust, self-organising and scalable architectures that can automatically solve complex, dynamic problems. Digital Ecosystems are a novel optimisation technique where the optimisation works at two levels: a first optimisation, migration of agents (representing services) which are distributed in a decentralised peer-to-peer network, operating continuously in time; this process feeds a second optimisation based on evolutionary computing that operates locally on single peers and is aimed at finding solutions to satisfy locally relevant constraints. We created an Ecosystem-Oriented Architecture of Digital Ecosystems by extending Service-Oriented Architectures with distributed evolutionary computing, allowing services to recombine and evolve over time, constantly seeking to improve their effectiveness for the user base. Individuals within our Digital Ecosystem will be applications (groups of services), created in response to user requests by using evolutionary optimisation to aggregate the services. These individuals will migrate through the Digital Ecosystem and adapt to find niches where they are useful in fulfilling other user requests for applications. Simulation results imply that the Digital Ecosystem performs better at large scales than a comparable Service-Oriented Architecture, suggesting that incorporating ideas from theoretical ecology can contribute to useful self-organising properties in digital ecosystems.
labels: cs.NE
__index_level_0__: 1,084
1611.06249
Geometric Controllability of The Purcell's Swimmer and its Symmetrized Cousin
We analyse weak and strong controllability notions for the locomotion of the 3-link Purcell's swimmer, the simplest possible swimmer at low Reynolds number from a geometric framework. After revisiting a purely kinematic form of the equations, we apply an extension of Chow's theorem to analyze controllability in the strong and weak sense. Further, the connection form for the symmetric version of the Purcell's swimmer is derived, based on which, the controllability analysis utilizing the Abelian nature of the structure group is presented. The novelty in our approach is the usage of geometry and the principal fiber bundle structure of the configuration manifold of the system to arrive at strong and weak controllability notions.
labels: cs.SY
__index_level_0__: 64,151
2110.02393
Geometric Algebra Attention Networks for Small Point Clouds
Much of the success of deep learning is drawn from building architectures that properly respect underlying symmetry and structure in the data on which they operate - a set of considerations that have been united under the banner of geometric deep learning. Often problems in the physical sciences deal with relatively small sets of points in two- or three-dimensional space wherein translation, rotation, and permutation equivariance are important or even vital for models to be useful in practice. In this work, we present rotation- and permutation-equivariant architectures for deep learning on these small point clouds, composed of a set of products of terms from the geometric algebra and reductions over those products using an attention mechanism. The geometric algebra provides valuable mathematical structure by which to combine vector, scalar, and other types of geometric inputs in a systematic way to account for rotation invariance or covariance, while attention yields a powerful way to impose permutation equivariance. We demonstrate the usefulness of these architectures by training models to solve sample problems relevant to physics, chemistry, and biology.
labels: cs.LG, cs.CV
__index_level_0__: 259,100
2205.10940
Toward smart composites: small-scale, untethered prediction and control for soft sensor/actuator systems
We present formulation and open-source tools to achieve in-material model predictive control of sensor/actuator systems using learned forward kinematics and on-device computation. Microcontroller units (MCUs) that compute the prediction and control task while colocated with the sensors and actuators enable in-material untethered behaviors. In this approach, small parameter size neural network models learn forward kinematics offline. Our open-source compiler, nn4mc, generates code to offload these predictions onto MCUs. A Newton-Raphson solver then computes the control input in real time. We first benchmark this nonlinear control approach against a PID controller on a mass-spring-damper simulation. We then study experimental results on two experimental rigs with different sensing, actuation and computational hardware: a tendon-based platform with embedded LightLace sensors and a HASEL-based platform with magnetic sensors. Experimental results indicate effective high-bandwidth tracking of reference paths (greater than or equal to 120 Hz) with a small memory footprint (less than or equal to 6.4% of flash memory). The measured path following error does not exceed 2mm in the tendon-based platform. The simulated path following error does not exceed 1mm in the HASEL-based platform. The mean power consumption of this approach in an ARM Cortex-M4f device is 45.4 mW. This control approach is also compatible with Tensorflow Lite models and equivalent on-device code. In-material intelligence enables a new class of composites that infuse autonomy into structures and systems with refined artificial proprioception.
labels: cs.LG, cs.RO
__index_level_0__: 297,933
2405.02586
Enhancing Vision-Language Models Generalization via Diversity-Driven Novel Feature Synthesis
Vision-language foundation models like CLIP have shown impressive zero-shot generalization, but finetuning on downstream datasets can cause overfitting and loss of its generalization ability on unseen domains. Although collecting additional data from new domains of interest is possible, this method is often impractical due to the challenges in obtaining annotated data. To address this, we propose a plug-and-play feature synthesis method called LDFS (Language-Guided Diverse Feature Synthesis) to synthesize new domain features and improve existing CLIP fine-tuning strategies. LDFS has three main contributions: 1) To synthesize novel domain features and promote diversity, we propose an instance-conditional feature augmentation strategy based on a text-guided feature augmentation loss. 2) To maintain feature quality after augmenting, we introduce a pairwise regularizer to preserve augmented feature coherence within the CLIP feature space. 3) We propose to use stochastic text feature augmentation to reduce the modality gap and further facilitate the process of text-guided feature synthesis. Extensive experiments show LDFS superiority in improving CLIP generalization ability on unseen domains without collecting data from those domains. The code will be made publicly available.
labels: cs.CV
__index_level_0__: 451,824
1103.4435
Information Theoretic Bounds for Tensor Rank Minimization over Finite Fields
We consider the problem of noiseless and noisy low-rank tensor completion from a set of random linear measurements. In our derivations, we assume that the entries of the tensor belong to a finite field of arbitrary size and that reconstruction is based on a rank minimization framework. The derived results show that the smallest number of measurements needed for exact reconstruction is upper bounded by the product of the rank, the order and the dimension of a cubic tensor. Furthermore, this condition is also sufficient for unique minimization. Similar bounds hold for the noisy rank minimization scenario, except for a scaling function that depends on the channel error probability.
labels: cs.IT
__index_level_0__: 9,720
2303.13072
Beyond Universal Transformer: block reusing with adaptor in Transformer for automatic speech recognition
Transformer-based models have recently made significant achievements in the application of end-to-end (E2E) automatic speech recognition (ASR). It is possible to deploy the E2E ASR system on smart devices with the help of Transformer-based models. However, these models still have the disadvantage of requiring a large number of model parameters. To overcome the drawback of universal Transformer models for the application of ASR on edge devices, we propose a solution that reuses blocks in Transformer models for small-footprint ASR systems, which meets the objective of accommodating resource limitations without compromising recognition accuracy. Specifically, we design a novel block-reusing strategy for speech Transformer (BRST) to enhance the effectiveness of parameters and propose an adapter module (ADM) that can produce a compact and adaptable model with only a few additional trainable parameters accompanying each reusing block. We conducted an experiment with the proposed method on the public AISHELL-1 corpus, and the results show that the proposed approach achieves a character error rate (CER) of 9.3%/6.63% with only 7.6M/8.3M parameters without and with the ADM, respectively. In addition, we provide a deeper analysis to show the effect of ADM in the general block-reusing method.
labels: cs.SD, cs.CL
__index_level_0__: 353,535
1711.00363
Servant of Many Masters: Shifting priorities in Pareto-optimal sequential decision-making
It is often argued that an agent making decisions on behalf of two or more principals who have different utility functions should adopt a {\em Pareto-optimal} policy, i.e., a policy that cannot be improved upon for one agent without making sacrifices for another. A famous theorem of Harsanyi shows that, when the principals have a common prior on the outcome distributions of all policies, a Pareto-optimal policy for the agent is one that maximizes a fixed, weighted linear combination of the principals' utilities. In this paper, we show that Harsanyi's theorem does not hold for principals with different priors, and derive a more precise generalization which does hold, which constitutes our main result. In this more general case, the relative weight given to each principal's utility should evolve over time according to how well the agent's observations conform with that principal's prior. The result has implications for the design of contracts, treaties, joint ventures, and robots.
labels: cs.AI
__index_level_0__: 83,706
1811.09577
Individualized Time-Series Segmentation for Mining Mobile Phone User Behavior
Mobile phones can record individual's daily behavioral data as a time-series. In this paper, we present an effective time-series segmentation technique that extracts optimal time segments of individual's similar behavioral characteristics utilizing their mobile phone data. One of the determinants of an individual's behavior is the various activities undertaken at various times-of-the-day and days-of-the-week. In many cases, such behavior will follow temporal patterns. Currently, researchers use either equal or unequal interval-based segmentation of time for mining mobile phone users' behavior. Most of them take into account static temporal coverage of 24-h-a-day and few of them take into account the number of incidences in time-series data. However, such segmentations do not necessarily map to the patterns of individual user activity and subsequent behavior because of not taking into account the diverse behaviors of individuals over time-of-the-week. Therefore, we propose a behavior-oriented time segmentation (BOTS) technique that takes into account not only the temporal coverage of the week but also the number of incidences of diverse behaviors dynamically for producing similar behavioral time segments over the week utilizing time-series data. Experiments on the real mobile phone datasets show that our proposed segmentation technique better captures the user's dominant behavior at various times-of-the-day and days-of-the-week enabling the generation of high confidence temporal rules in order to mine individual mobile phone users' behavior.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
114,270
1703.01250
Virtual vs. Real: Trading Off Simulations and Physical Experiments in Reinforcement Learning with Bayesian Optimization
In practice, the parameters of control policies are often tuned manually. This is time-consuming and frustrating. Reinforcement learning is a promising alternative that aims to automate this process, yet often requires too many experiments to be practical. In this paper, we propose a solution to this problem by exploiting prior knowledge from simulations, which are readily available for most robotic platforms. Specifically, we extend Entropy Search, a Bayesian optimization algorithm that maximizes information gain from each experiment, to the case of multiple information sources. The result is a principled way to automatically combine cheap, but inaccurate information from simulations with expensive and accurate physical experiments in a cost-effective manner. We apply the resulting method to a cart-pole system, which confirms that the algorithm can find good control policies with fewer experiments than standard Bayesian optimization on the physical system only.
false
false
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
69,322
2007.08351
Strengthening Deterministic Policies for POMDPs
The synthesis problem for partially observable Markov decision processes (POMDPs) is to compute a policy that satisfies a given specification. Such policies have to take the full execution history of a POMDP into account, rendering the problem undecidable in general. A common approach is to use a limited amount of memory and randomize over potential choices. Yet, this problem is still NP-hard and often computationally intractable in practice. A restricted problem is to use neither history nor randomization, yielding policies that are called stationary and deterministic. Previous approaches to compute such policies employ mixed-integer linear programming (MILP). We provide a novel MILP encoding that supports sophisticated specifications in the form of temporal logic constraints. It is able to handle an arbitrary number of such specifications. Yet, randomization and memory are often mandatory to achieve satisfactory policies. First, we extend our encoding to deliver a restricted class of randomized policies. Second, based on the results of the original MILP, we employ a preprocessing of the POMDP to encompass memory-based decisions. The advantages of our approach over state-of-the-art POMDP solvers lie (1) in the flexibility to strengthen simple deterministic policies without losing computational tractability and (2) in the ability to enforce the provable satisfaction of arbitrarily many specifications. The latter point allows taking trade-offs between performance and safety aspects of typical POMDP examples into account. We show the effectiveness of our method on a broad range of benchmarks.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
true
187,600
1802.05843
Minimal Algorithmic Information Loss Methods for Dimension Reduction, Feature Selection and Network Sparsification
We present a novel, domain-agnostic, model-independent, unsupervised, and universally applicable approach for data summarization. Specifically, we focus on addressing the challenge of reducing certain dimensionality aspects, such as the number of edges in a network, while retaining essential features of interest. These features include preserving crucial network properties like degree distribution, clustering coefficient, edge betweenness, and degree and eigenvector centralities. Our approach outperforms state-of-the-art network reduction techniques by achieving an average improvement in feature preservation. Previous methods grounded in statistics or classical information theory have been limited in their ability to capture more intricate patterns and features, particularly nonlinear patterns stemming from deterministic computable processes. Moreover, these approaches heavily rely on a priori feature selection, demanding constant supervision. Our findings demonstrate the effectiveness of the algorithms proposed in this study in overcoming these limitations, all while maintaining a time-efficient computational profile. In many instances, our approach not only matches but also surpasses the performance of established network reduction algorithms. Furthermore, we extend the applicability of our method to lossy compression tasks involving images or any bi-dimensional data. This highlights the versatility and broad utility of our approach in various domains.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
90,521
2206.00524
Vietnamese Hate and Offensive Detection using PhoBERT-CNN and Social Media Streaming Data
Society needs to develop a system to detect hate and offense to build a healthy and safe environment. However, current research in this field still faces four major shortcomings, including deficient pre-processing techniques, indifference to data imbalance issues, modest performance models, and lacking practical applications. This paper focused on developing an intelligent system capable of addressing these shortcomings. Firstly, we proposed an efficient pre-processing technique to clean comments collected from Vietnamese social media. Secondly, a novel hate speech detection (HSD) model, which is the combination of a pre-trained PhoBERT model and a Text-CNN model, was proposed for solving tasks in Vietnamese. Thirdly, EDA techniques are applied to deal with imbalanced data to improve the performance of classification models. Besides, various experiments were conducted as baselines to compare and investigate the proposed model's performance against state-of-the-art methods. The experiment results show that the proposed PhoBERT-CNN model outperforms SOTA methods and achieves an F1-score of 67.46% and 98.45% on two benchmark datasets, ViHSD and HSD-VLSP, respectively. Finally, we also built a streaming HSD application to demonstrate the practicality of our proposed system.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
300,161
1307.7821
Algorithms for the Majority Rule (+) Consensus Tree and the Frequency Difference Consensus Tree
This paper presents two new deterministic algorithms for constructing consensus trees. Given an input of k phylogenetic trees with identical leaf label sets and n leaves each, the first algorithm constructs the majority rule (+) consensus tree in O(kn) time, which is optimal since the input size is Omega(kn), and the second one constructs the frequency difference consensus tree in min(O(kn^2), O(kn (k+log^2 n))) time.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
26,140
2308.13307
Asch Meets HRI: Human Conformity to Robot Groups
We present a research outline that aims at investigating group dynamics and peer pressure in the context of industrial robots. Our research plan was motivated by the fact that industrial robots have already become an integral part of human-robot co-working. However, industrial robots have been sparsely integrated into research on robot credibility, group dynamics, and potential users' tendency to follow a robot's indication. Therefore, we aim to transfer the classic Asch experiment (see \cite{Asch_51}) into HRI with industrial robots. More precisely, we will test to what extent participants follow a robot's response when confronted with a group of (vs. an individual) industrial robot arm (vs. human) peers who give a false response. We are interested in highlighting the effects of group size, perceived robot credibility, psychological stress, and peer pressure in the context of industrial robots. With the results of this research, we hope to highlight group dynamics that might underlie HRI in industrial settings in which numerous robots already work closely together with humans in shared environments.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
387,869
2204.04665
Effective Out-of-Distribution Detection in Classifier Based on PEDCC-Loss
Deep neural networks suffer from the overconfidence issue in the open world, meaning that classifiers could yield confident, incorrect predictions for out-of-distribution (OOD) samples. Thus, it is an urgent and challenging task to detect these samples drawn far away from training distribution based on the security considerations of artificial intelligence. Many current methods based on neural networks mainly rely on complex processing strategies, such as temperature scaling and input preprocessing, to obtain satisfactory results. In this paper, we propose an effective algorithm for detecting out-of-distribution examples utilizing PEDCC-Loss. We mathematically analyze the nature of the confidence score output by the PEDCC (Predefined Evenly-Distribution Class Centroids) classifier, and then construct a more effective scoring function to distinguish in-distribution (ID) and out-of-distribution. In this method, there is no need to preprocess the input samples and the computational burden of the algorithm is reduced. Experiments demonstrate that our method can achieve better OOD detection performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
290,741
1703.07612
Networked Systems under Denial-of-Service: Co-located vs. Remote Control Architectures
In this paper, we study networked systems in the presence of Denial-of-Service (DoS) attacks, namely attacks that prevent transmissions over the communication network. Previous studies have shown that co-located architectures (control unit co-located with the actuators and networked sensor channel) can ensure a high level of robustness against DoS. However, co-location requires a wired or dedicated actuator channel, which could not meet flexibility and cost requirements. In this paper we consider a control architecture that approximates co-location while enabling remote implementation (networked sensor and actuator channels). We analyze closed-loop stability and quantify the robustness "gap" between this architecture and the co-located one.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
70,428
2009.12914
How do people describe locations during a natural disaster: an analysis of tweets from Hurricane Harvey
Social media platforms, such as Twitter, have been increasingly used by people during natural disasters to share information and request for help. Hurricane Harvey was a category 4 hurricane that devastated Houston, Texas, USA in August 2017 and caused catastrophic flooding in the Houston metropolitan area. Hurricane Harvey also witnessed the widespread use of social media by the general public in response to this major disaster, and geographic locations are key information pieces described in many of the social media messages. A geoparsing system, or a geoparser, can be utilized to automatically extract and locate the described locations, which can help first responders reach the people in need. While a number of geoparsers have already been developed, it is unclear how effective they are in recognizing and geo-locating the locations described by people during natural disasters. To fill this gap, this work seeks to understand how people describe locations during a natural disaster by analyzing a sample of tweets posted during Hurricane Harvey. We then identify the limitations of existing geoparsers in processing these tweets, and discuss possible approaches to overcoming these limitations.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
197,567
2206.13475
Thermodynamics-inspired Explanations of Artificial Intelligence
In recent years, predictive machine learning methods have gained prominence in various scientific domains. However, due to their black-box nature, it is essential to establish trust in these models before accepting them as accurate. One promising strategy for assigning trust involves employing explanation techniques that elucidate the rationale behind a black-box model's predictions in a manner that humans can understand. However, assessing the degree of human interpretability of the rationale generated by such methods is a nontrivial challenge. In this work, we introduce interpretation entropy as a universal solution for assessing the degree of human interpretability associated with any linear model. Using this concept and drawing inspiration from classical thermodynamics, we present Thermodynamics-inspired Explainable Representations of AI and other black-box Paradigms (TERP), a method for generating accurate, and human-interpretable explanations for black-box predictions in a model-agnostic manner. To demonstrate the wide-ranging applicability of TERP, we successfully employ it to explain various black-box model architectures, including deep learning Autoencoders, Recurrent Neural Networks, and Convolutional Neural Networks, across diverse domains such as molecular simulations, text, and image classification.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
304,985
2108.08046
Variational Graph Normalized Auto-Encoders
Link prediction is one of the key problems for graph-structured data. With the advancement of graph neural networks, graph autoencoders (GAEs) and variational graph autoencoders (VGAEs) have been proposed to learn graph embeddings in an unsupervised way. It has been shown that these methods are effective for link prediction tasks. However, they do not work well in link predictions when a node whose degree is zero (i.e., an isolated node) is involved. We have found that GAEs/VGAEs make embeddings of isolated nodes close to zero regardless of their content features. In this paper, we propose a novel Variational Graph Normalized AutoEncoder (VGNAE) that utilizes L2-normalization to derive better embeddings for isolated nodes. We show that our VGNAEs outperform the existing state-of-the-art models for link prediction tasks. The code is available at https://github.com/SeongJinAhn/VGNAE.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
251,120
2201.01449
Deep Learning-Based Sparse Whole-Slide Image Analysis for the Diagnosis of Gastric Intestinal Metaplasia
In recent years, deep learning has successfully been applied to automate a wide variety of tasks in diagnostic histopathology. However, fast and reliable localization of small-scale regions-of-interest (ROI) has remained a key challenge, as discriminative morphologic features often occupy only a small fraction of a gigapixel-scale whole-slide image (WSI). In this paper, we propose a sparse WSI analysis method for the rapid identification of high-power ROI for WSI-level classification. We develop an evaluation framework inspired by the early classification literature, in order to quantify the tradeoff between diagnostic performance and inference time for sparse analytic approaches. We test our method on a common but time-consuming task in pathology - that of diagnosing gastric intestinal metaplasia (GIM) on hematoxylin and eosin (H&E)-stained slides from endoscopic biopsy specimens. GIM is a well-known precursor lesion along the pathway to development of gastric cancer. We performed a thorough evaluation of the performance and inference time of our approach on a test set of GIM-positive and GIM-negative WSI, finding that our method successfully detects GIM in all positive WSI, with a WSI-level classification area under the receiver operating characteristic curve (AUC) of 0.98 and an average precision (AP) of 0.95. Furthermore, we show that our method can attain these metrics in under one minute on a standard CPU. Our results are applicable toward the goal of developing neural networks that can easily be deployed in clinical settings to support pathologists in quickly localizing and diagnosing small-scale morphologic features in WSI.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
274,258
2210.15173
Articulation GAN: Unsupervised modeling of articulatory learning
Generative deep neural networks are widely used for speech synthesis, but most existing models directly generate waveforms or spectral outputs. Humans, however, produce speech by controlling articulators, which results in the production of speech sounds through physical properties of sound propagation. We introduce the Articulatory Generator to the Generative Adversarial Network paradigm, a new unsupervised generative model of speech production/synthesis. The Articulatory Generator more closely mimics human speech production by learning to generate articulatory representations (electromagnetic articulography or EMA) in a fully unsupervised manner. A separate pre-trained physical model (ema2wav) then transforms the generated EMA representations to speech waveforms, which get sent to the Discriminator for evaluation. Articulatory analysis suggests that the network learns to control articulators in a similar manner to humans during speech production. Acoustic analysis of the outputs suggests that the network learns to generate words that are both present and absent in the training distribution. We additionally discuss implications of articulatory representations for cognitive models of human language and speech technology in general.
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
326,839
2205.11371
Fractional-Order Partial Cancellation of Integer-Order Poles and Zeros
The key idea of this contribution is the partial compensation of non-minimum phase zeros or unstable poles. Therefore the integer-order zero/pole is split into a product of fractional-order pseudo zeros/poles. The amplitude and phase response of these fractional-order terms is derived to include these compensators into the loop-shaping design. Such compensators can be generalized to conjugate complex zeros/poles, and also implicit fractional-order terms can be applied. In the case of the non-minimum phase zero, its compensation leads to a higher phase margin and a steeper open-loop amplitude response around the crossover frequency resulting in a reduced undershooting in the step-response, as illustrated in the numerical example.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
298,111
1903.10926
Classifying Partially Labeled Networked Data via Logistic Network Lasso
We apply the network Lasso to classify partially labeled data points which are characterized by high-dimensional feature vectors. In order to learn an accurate classifier from limited amounts of labeled data, we borrow statistical strength, via an intrinsic network structure, across the dataset. The resulting logistic network Lasso amounts to a regularized empirical risk minimization problem using the total variation of a classifier as a regularizer. This minimization problem is a non-smooth convex optimization problem which we solve using a primal-dual splitting method. This method is appealing for big data applications as it can be implemented as a highly scalable message passing algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
125,393
2410.14680
Influence of Backdoor Paths on Causal Link Prediction
The current method for predicting causal links in knowledge graphs uses weighted causal relations. For a given link between cause-effect entities, the presence of a confounder affects the causal link prediction, which can lead to spurious and inaccurate results. We aim to block these confounders using backdoor path adjustment. Backdoor paths are non-causal association flows that connect the \textit{cause-entity} to the \textit{effect-entity} through other variables. Removing these paths ensures a more accurate prediction of causal links. This paper proposes CausalLPBack, a novel approach to causal link prediction that eliminates backdoor paths and uses knowledge graph link prediction methods. It extends the representation of causality in a neuro-symbolic framework, enabling the adoption and use of traditional causal AI concepts and methods. We demonstrate our approach using a causal reasoning benchmark dataset of simulated videos. The evaluation involves a unique dataset splitting method called the Markov-based split that's relevant for causal link prediction. The evaluation of the proposed approach demonstrates performance inflated by at least 30\% in MRR and 16\% in Hits@K for causal link prediction, due to the bias introduced by backdoor paths, for both baseline and weighted causal relations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
500,141
1907.05139
Error Exponents for Asynchronous Multiple Access Channels. Controlled Asynchronism may Outperform Synchronism
Exponential error bounds achievable by universal coding and decoding are derived for frame-asynchronous discrete memoryless multiple access channels with two senders, via the method of subtypes, a refinement of the method of types. Maximum empirical multi-information decoding is employed. A key tool is an improved packing lemma, that overcomes the technical difficulty caused by codeword repetitions, via an induction based new argument. The asymptotic form of the bounds admits numerical evaluation. This demonstrates that error exponents achievable by synchronous transmission (if possible) can be superseded via controlled asynchronism, i.e. a deliberate shift of the codewords.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
138,284
1610.09580
Data-driven Estimation of Origin-Destination Demand and User Cost Functions for the Optimization of Transportation Networks
In earlier work (Zhang et al., 2016) we used actual traffic data from the Eastern Massachusetts transportation network in the form of spatial average speeds and road segment flow capacities in order to estimate Origin-Destination (OD) flow demand matrices for the network. Based on a Traffic Assignment Problem (TAP) formulation (termed "forward problem"), in this paper we use a scheme similar to our earlier work to estimate initial OD demand matrices and then propose a new inverse problem formulation in order to estimate user cost functions. This new formulation allows us to efficiently overcome numerical difficulties that limited our prior work to relatively small subnetworks and, assuming the travel latency cost functions are available, to adjust the values of the OD demands accordingly so that the flow observations are as close as possible to the solutions of the forward problem. We also derive sensitivity analysis results for the total user latency cost with respect to important parameters such as road capacities and minimum travel times. Finally, using the same actual traffic data from the Eastern Massachusetts transportation network, we quantify the Price of Anarchy (POA) for a much larger network than that in Zhang et al. (2016).
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
63,084
2005.03288
CARL: Controllable Agent with Reinforcement Learning for Quadruped Locomotion
Motion synthesis in a dynamic environment has been a long-standing problem for character animation. Methods using motion capture data tend to scale poorly in complex environments because of their larger capturing and labeling requirement. Physics-based controllers are effective in this regard, albeit less controllable. In this paper, we present CARL, a quadruped agent that can be controlled with high-level directives and react naturally to dynamic environments. Starting with an agent that can imitate individual animation clips, we use Generative Adversarial Networks to adapt high-level controls, such as speed and heading, to action distributions that correspond to the original animations. Further fine-tuning through deep reinforcement learning enables the agent to recover from unseen external perturbations while producing smooth transitions. It then becomes straightforward to create autonomous agents in dynamic environments by adding navigation modules over the entire process. We evaluate our approach by measuring the agent's ability to follow user control and provide a visual analysis of the generated motion to show its effectiveness.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
176,114
2402.04163
Tempered Calculus for ML: Application to Hyperbolic Model Embedding
Most mathematical distortions used in ML are fundamentally integral in nature: $f$-divergences, Bregman divergences, (regularized) optimal transport distances, integral probability metrics, geodesic distances, etc. In this paper, we unveil a grounded theory and tools which can help improve these distortions to better cope with ML requirements. We start with a generalization of Riemann integration that also encapsulates functions that are not strictly additive but are, more generally, $t$-additive, as in nonextensive statistical mechanics. Notably, this recovers Volterra's product integral as a special case. We then generalize the Fundamental Theorem of calculus using an extension of the (Euclidean) derivative. This, along with a series of more specific Theorems, serves as a basis for results showing how one can specifically design, alter, or change fundamental properties of distortion measures in a simple way, with a special emphasis on geometric- and ML-related properties that are the metricity, hyperbolicity, and encoding. We show how to apply it to a problem that has recently gained traction in ML: hyperbolic embeddings with a "cheap" and accurate encoding along the hyperbolic vs Euclidean scale. We unveil a new application for which the Poincar\'e disk model has very appealing features, and our theory comes in handy: \textit{model} embeddings for boosted combinations of decision trees, trained using the log-loss (trees) and logistic loss (combinations).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
427,348
1910.01933
Vulnerability of Face Recognition to Deep Morphing
It is increasingly easy to automatically swap faces in images and video or morph two faces into one using generative adversarial networks (GANs). The high quality of the resulting deep-morph raises the question of how vulnerable the current face recognition systems are to such fake images and videos. It also calls for automated ways to detect these GAN-generated faces. In this paper, we present the publicly available dataset of the Deepfake videos with faces morphed with a GAN-based algorithm. To generate these videos, we used open source software based on GANs, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. We show that the state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to the deep morph videos, with 85.62 and 95.00 false acceptance rates, respectively, which means methods for detecting these videos are necessary. We consider several baseline approaches for detecting deep morphs and find that the method based on visual quality metrics (often used in presentation attack detection domain) leads to the best performance with 8.97 equal error rate. Our experiments demonstrate that GAN-generated deep morph videos are challenging for both face recognition systems and existing detection methods, and the further development of deep morphing technologies will make it even more so.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
true
148,089
2405.16008
Intensity and Texture Correction of Omnidirectional Image Using Camera Images for Indirect Augmented Reality
Augmented reality (AR) using camera images in mobile devices is becoming popular for tourism promotion. However, obstructions such as tourists appearing in the camera images may cause the camera pose estimation error, resulting in CG misalignment and reduced visibility of the contents. To avoid this problem, Indirect AR (IAR), which does not use real-time camera images, has been proposed. In this method, an omnidirectional image is captured and virtual objects are synthesized on the image in advance. Users can experience AR by viewing a scene extracted from the synthesized omnidirectional image according to the device's sensor. This enables robustness and high visibility. However, if the weather conditions and season in the pre-captured 360 images differ from the current weather conditions and season when AR is experienced, the realism of the AR experience is reduced. To overcome the problem, we propose a method for correcting the intensity and texture of a past omnidirectional image using camera images from mobile devices. We first perform semantic segmentation. We then reproduce the current sky pattern by panoramic image composition and inpainting. For the other areas, we correct the intensity by histogram matching. In experiments, we show the effectiveness of the proposed method using various scenes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
457,208
2502.04173
Lexical Substitution is not Synonym Substitution: On the Importance of Producing Contextually Relevant Word Substitutes
Lexical Substitution is the task of replacing a single word in a sentence with a similar one. This should ideally be one that is not necessarily only synonymous, but also fits well into the surrounding context of the target word, while preserving the sentence's grammatical structure. Recent advances in Lexical Substitution have leveraged the masked token prediction task of Pre-trained Language Models to generate replacements for a given word in a sentence. With this technique, we introduce ConCat, a simple augmented approach which utilizes the original sentence to bolster contextual information sent to the model. Compared to existing approaches, it proves to be very effective in guiding the model to make contextually relevant predictions for the target word. Our study includes a quantitative evaluation, measured via sentence similarity and task performance. In addition, we conduct a qualitative human analysis to validate that users prefer the substitutions proposed by our method, as opposed to previous methods. Finally, we test our approach on the prevailing benchmark for Lexical Substitution, CoInCo, revealing potential pitfalls of the benchmark. These insights serve as the foundation for a critical discussion on the way in which Lexical Substitution is evaluated.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
531,010
2211.01098
Semantic SuperPoint: A Deep Semantic Descriptor
Several SLAM methods benefit from the use of semantic information. Most integrate photometric methods with high-level semantics such as object detection and semantic segmentation. We propose that adding a semantic segmentation decoder in a shared encoder architecture would help the descriptor decoder learn semantic information, improving the feature extractor. This would be a more robust approach than only using high-level semantic information since it would be intrinsically learned in the descriptor and would not depend on the final quality of the semantic prediction. To add this information, we take advantage of multi-task learning methods to improve accuracy and balance the performance of each task. The proposed models are evaluated according to detection and matching metrics on the HPatches dataset. The results show that the Semantic SuperPoint model performs better than the baseline one.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
328,103
1805.08134
Overabundant Information and Learning Traps
We develop a model of social learning from overabundant information: Short-lived agents sequentially choose from a large set of (flexibly correlated) information sources for prediction of an unknown state. Signal realizations are public. We demonstrate two starkly different long-run outcomes: (1) efficient information aggregation, where the community eventually learns as fast as possible; (2) "learning traps," where the community gets stuck observing suboptimal sources and learns inefficiently. Our main results identify a simple property of the signal correlation structure that separates these outcomes. In both regimes, we characterize which sources are observed in the long run and how often.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
98,055
2201.04227
A Feature Extraction based Model for Hate Speech Identification
The detection of hate speech online has become an important task, as offensive language such as hurtful, obscene and insulting content can harm marginalized people or groups. This paper presents the TU Berlin team's experiments and results on tasks 1A and 1B of the shared task on hate speech and offensive content identification in Indo-European languages 2021. The success of different Natural Language Processing models is evaluated for the respective subtasks throughout the competition. We tested different models based on recurrent neural networks at the word and character levels, as well as transfer learning approaches based on BERT, on the dataset provided by the competition. Among the models tested in the experiments, the transfer learning-based models achieved the best results in both subtasks.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
275,046
2205.12429
Interaction of a priori Anatomic Knowledge with Self-Supervised Contrastive Learning in Cardiac Magnetic Resonance Imaging
Training deep learning models on cardiac magnetic resonance imaging (CMR) can be a challenge due to the small amount of expert-generated labels and the inherent complexity of the data source. Self-supervised contrastive learning (SSCL) has recently been shown to boost performance in several medical imaging tasks. However, it is unclear how much the pre-trained representation reflects the primary organ of interest compared to spurious surrounding tissue. In this work, we evaluate the optimal method of incorporating prior knowledge of anatomy into a SSCL training paradigm. Specifically, we evaluate using a segmentation network to explicitly localize the heart in CMR images, followed by SSCL pretraining in multiple diagnostic tasks. We find that using a priori knowledge of anatomy can greatly improve the downstream diagnostic performance. Furthermore, SSCL pre-training with in-domain data generally improved downstream performance and more human-like saliency compared to end-to-end training and ImageNet pre-trained networks. However, introducing anatomic knowledge to pre-training generally does not have a significant impact.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
298,532
2105.07199
Robust Data-Enabled Predictive Control: Tractable Formulations and Performance Guarantees
We introduce a general framework for robust data-enabled predictive control (DeePC) for linear time-invariant (LTI) systems. The proposed framework enables us to obtain model-free optimal control for LTI systems based on noisy input/output data. More specifically, robust DeePC solves a min-max optimization problem to compute the optimal control sequence that is resilient to all possible realizations of the uncertainties in the input/output data within a prescribed uncertainty set. We present computationally tractable reformulations of the min-max problem with various uncertainty sets. Furthermore, we show that even though an accurate prediction of the future behavior is unattainable in practice due to inaccessibility of the perfect input/output data, the obtained robust optimal control sequence provides performance guarantees for the actually realized input/output cost. We further show that the robust DeePC generalizes and robustifies the regularized DeePC (with quadratic regularization or 1-norm regularization) proposed in the literature. Finally, we demonstrate the performance of the proposed robust DeePC algorithm on high-fidelity, nonlinear, and noisy simulations of a grid-connected power converter system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
235,356
2110.06502
Prompt-tuning in ASR systems for efficient domain-adaptation
Automatic Speech Recognition (ASR) systems have found their use in numerous industrial applications in very diverse domains. Since domain-specific systems perform better than their generic counterparts on in-domain evaluation, the need for memory and compute-efficient domain adaptation is obvious. Particularly, adapting parameter-heavy transformer-based language models used for rescoring ASR hypothesis is challenging. In this work, we overcome the problem using prompt-tuning, a methodology that trains a small number of domain token embedding parameters to prime a transformer-based LM to a particular domain. With just a handful of extra parameters per domain, we achieve much better perplexity scores over the baseline of using an unadapted LM. Despite being parameter-efficient, these improvements are comparable to those of fully-fine-tuned models with hundreds of millions of parameters. We replicate our findings in perplexity numbers to Word Error Rate in a domain-specific ASR system for one such domain.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
260,645
2410.23714
Topology optimization of contact-aided compliant mechanisms for tracing multi-kink paths
This paper presents a topology optimization approach to design 2D contact-aided compliant mechanisms (CCMs) that can trace the desired output paths with more than one kink while experiencing self and/or external contacts. Such CCMs can be used as mechanical compliant switches. Hexagonal elements are used to parameterize the design domain. Negative circular masks are employed to remove material beneath them and generate rigid contact surfaces. Each mask is assigned five design variables. The first three decide the location and radius of the mask, whereas the last two determine the presence of the contact surface and its radius. To ensure continuity in contacting surfaces' normal, we employ a boundary smoothing scheme. The augmented Lagrange multiplier method is employed to incorporate self and mutual contact. An objective is formulated using the Fourier shape descriptors with the permitted resource constraint. The hill-climber optimization technique is utilized to update the design variables. An in-house code is developed for the entire process. To demonstrate the method's efficacy, a CCM is optimized with a two-kink path. The desired and obtained paths are compared.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
504,149
2410.03255
Towards a Benchmark for Large Language Models for Business Process Management Tasks
An increasing number of organizations are deploying Large Language Models (LLMs) for a wide range of tasks. Despite their general utility, LLMs are prone to errors, ranging from inaccuracies to hallucinations. To objectively assess the capabilities of existing LLMs, performance benchmarks are conducted. However, these benchmarks often do not translate to more specific real-world tasks. This paper addresses the gap in benchmarking LLM performance in the Business Process Management (BPM) domain. Currently, no BPM-specific benchmarks exist, creating uncertainty about the suitability of different LLMs for BPM tasks. This paper systematically compares LLM performance on four BPM tasks focusing on small open-source models. The analysis aims to identify task-specific performance variations, compare the effectiveness of open-source versus commercial models, and assess the impact of model size on BPM task performance. This paper provides insights into the practical applications of LLMs in BPM, guiding organizations in selecting appropriate models for their specific needs.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
494,702
1812.03473
Comixify: Transform video into a comics
In this paper, we propose a solution to transform a video into a comics. We approach this task using a neural style algorithm based on Generative Adversarial Networks (GANs). Several recent works in the field of Neural Style Transfer showed that producing an image in the style of another image is feasible. In this paper, we build up on these works and extend the existing set of style transfer use cases with a working application of video comixification. To that end, we train an end-to-end solution that transforms input video into a comics in two stages. In the first stage, we propose a state-of-the-art keyframes extraction algorithm that selects a subset of frames from the video to provide the most comprehensive video context and we filter those frames using image aesthetic estimation engine. In the second stage, the style of selected keyframes is transferred into a comics. To provide the most aesthetically compelling results, we selected the most state-of-the art style transfer solution and based on that implement our own ComixGAN framework. The final contribution of our work is a Web-based working application of video comixification available at http://comixify.ii.pw.edu.pl.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
116,023
1902.03079
Reinforcement Learning from Hierarchical Critics
In this study, we investigate the use of global information to speed up the learning process and increase the cumulative rewards of reinforcement learning (RL) in competition tasks. Within the actor-critic RL, we introduce multiple cooperative critics from two levels of the hierarchy and propose a reinforcement learning from hierarchical critics (RLHC) algorithm. In our approach, each agent receives value information from local and global critics regarding a competition task and accesses multiple cooperative critics in a top-down hierarchy. Thus, each agent not only receives low-level details but also considers coordination from higher levels, thereby obtaining global information to improve the training performance. Then, we test the proposed RLHC algorithm against the benchmark algorithm, proximal policy optimisation (PPO), for two experimental scenarios performed in a Unity environment consisting of tennis and soccer agents' competitions. The results showed that RLHC outperforms the benchmark on both competition tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
121,010
1501.02741
Salient Object Detection: A Benchmark
We extensively compare, qualitatively and quantitatively, 40 state-of-the-art models (28 salient object detection, 10 fixation prediction, 1 objectness, and 1 baseline) over 6 challenging datasets for the purpose of benchmarking salient object detection and segmentation methods. From the results obtained so far, our evaluation shows a consistent rapid progress over the last few years in terms of both accuracy and running time. The top contenders in this benchmark significantly outperform the models identified as the best in the previous benchmark conducted just two years ago. We find that the models designed specifically for salient object detection generally work better than models in closely related areas, which in turn provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems. In particular, we analyze the influences of center bias and scene complexity in model performance, which, along with the hard cases for state-of-the-art models, provide useful hints towards constructing more challenging large scale datasets and better saliency models. Finally, we propose probable solutions for tackling several open problems such as evaluation scores and dataset bias, which also suggest future research directions in the rapidly-growing field of salient object detection.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
39,215
2210.15075
IDEAL: Improved DEnse locAL Contrastive Learning for Semi-Supervised Medical Image Segmentation
Due to the scarcity of labeled data, Contrastive Self-Supervised Learning (SSL) frameworks have lately shown great potential in several medical image analysis tasks. However, the existing contrastive mechanisms are sub-optimal for dense pixel-level segmentation tasks due to their inability to mine local features. To this end, we extend the concept of metric learning to the segmentation task, using a dense (dis)similarity learning for pre-training a deep encoder network, and employing a semi-supervised paradigm to fine-tune for the downstream task. Specifically, we propose a simple convolutional projection head for obtaining dense pixel-level features, and a new contrastive loss to utilize these dense projections thereby improving the local representations. A bidirectional consistency regularization mechanism involving two-stream model training is devised for the downstream task. Upon comparison, our IDEAL method outperforms the SoTA methods by fair margins on cardiac MRI segmentation. Code available: https://github.com/hritam-98/IDEAL-ICASSP23
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
326,786
2304.07127
OPI at SemEval 2023 Task 1: Image-Text Embeddings and Multimodal Information Retrieval for Visual Word Sense Disambiguation
The goal of visual word sense disambiguation is to find the image that best matches the provided description of the word's meaning. It is a challenging problem, requiring approaches that combine language and image understanding. In this paper, we present our submission to SemEval 2023 visual word sense disambiguation shared task. The proposed system integrates multimodal embeddings, learning to rank methods, and knowledge-based approaches. We build a classifier based on the CLIP model, whose results are enriched with additional information retrieved from Wikipedia and lexical databases. Our solution was ranked third in the multilingual task and won in the Persian track, one of the three language subtasks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
358,239
2012.15279
Some Algorithms on Exact, Approximate and Error-Tolerant Graph Matching
The graph is one of the most widely used mathematical structures in engineering and science because of its representational power and inherent ability to demonstrate the relationship between objects. The objective of this work is to introduce the novel graph matching techniques using the representational power of the graph and apply it to structural pattern recognition applications. We present an extensive survey of various exact and inexact graph matching techniques. Graph matching using the concept of homeomorphism is presented. A category of graph matching algorithms is presented, which reduces the graph size by removing the less important nodes using some measure of relevance. We present an approach to error-tolerant graph matching using node contraction where the given graph is transformed into another graph by contracting smaller degree nodes. We use this scheme to extend the notion of graph edit distance, which can be used as a trade-off between execution time and accuracy. We describe an approach to graph matching by utilizing the various node centrality information, which reduces the graph size by removing a fraction of nodes from both graphs based on a given centrality measure. The graph matching problem is inherently linked to the geometry and topology of graphs. We introduce a novel approach to measure graph similarity using geometric graphs. We define the vertex distance between two geometric graphs using the position of their vertices and show it to be a metric over the set of all graphs with vertices only. We define edge distance between two graphs based on the angular orientation, length and position of the edges. Then we combine the notion of vertex distance and edge distance to define the graph distance between two geometric graphs and show it to be a metric. Finally, we use the proposed graph similarity framework to perform exact and error-tolerant graph matching.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
213,739
2305.19124
Calliffusion: Chinese Calligraphy Generation and Style Transfer with Diffusion Modeling
In this paper, we propose Calliffusion, a system for generating high-quality Chinese calligraphy using diffusion models. Our model architecture is based on DDPM (Denoising Diffusion Probabilistic Models), and it is capable of generating common characters in five different scripts and mimicking the styles of famous calligraphers. Experiments demonstrate that our model can generate calligraphy that is difficult to distinguish from real artworks and that our controls for characters, scripts, and styles are effective. Moreover, we demonstrate one-shot transfer learning, using LoRA (Low-Rank Adaptation) to transfer Chinese calligraphy art styles to unseen characters and even out-of-domain symbols such as English letters and digits.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
369,386
2004.11278
Mobile phone data analytics against the COVID-19 epidemics in Italy: flow diversity and local job markets during the national lockdown
Understanding collective mobility patterns is crucial to plan the restart of production and economic activities, which are currently put in stand-by to fight the diffusion of the epidemics. In this report, we use mobile phone data to infer the movements of people between Italian provinces and municipalities, and we analyze the incoming, outcoming and internal mobility flows before and during the national lockdown (March 9th, 2020) and after the closure of non-necessary productive and economic activities (March 23rd, 2020). The population flow across provinces and municipalities enables the modelling of a risk index tailored for the mobility of each municipality or province. Such an index would be a useful indicator to drive counter-measures in reaction to a sudden reactivation of the epidemics. Mobile phone data, even when aggregated to preserve the privacy of individuals, are a useful data source to track the evolution in time of human mobility, hence allowing for monitoring the effectiveness of control measures such as physical distancing. We address the following analytical questions: How does the mobility structure of a territory change? Do incoming and outcoming flows become more predictable during the lockdown, and what are the differences between weekdays and weekends? Can we detect proper local job markets based on human mobility flows, to eventually shape the borders of a local outbreak?
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
173,869
2103.01527
ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples
The training of Deep Neural Networks (DNN) is costly, thus DNN can be considered as the intellectual properties (IP) of model owners. To date, most of the existing protection works focus on verifying the ownership after the DNN model is stolen, which cannot resist piracy in advance. To this end, we propose an active DNN IP protection method based on adversarial examples against DNN piracy, named ActiveGuard. ActiveGuard aims to achieve authorization control and users' fingerprints management through adversarial examples, and can provide ownership verification. Specifically, ActiveGuard exploits the elaborate adversarial examples as users' fingerprints to distinguish authorized users from unauthorized users. Legitimate users can enter fingerprints into DNN for identity authentication and authorized usage, while unauthorized users will obtain poor model performance due to an additional control layer. In addition, ActiveGuard enables the model owner to embed a watermark into the weights of DNN. When the DNN is illegally pirated, the model owner can extract the embedded watermark and perform ownership verification. Experimental results show that, for authorized users, the test accuracy of LeNet-5 and Wide Residual Network (WRN) models are 99.15% and 91.46%, respectively, while for unauthorized users, the test accuracy of the two DNNs are only 8.92% (LeNet-5) and 10% (WRN), respectively. Besides, each authorized user can pass the fingerprint authentication with a high success rate (up to 100%). For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model will not be affected. Further, ActiveGuard is demonstrated to be robust against fingerprint forgery attack, model fine-tuning attack and pruning attack.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
222,654
2411.03460
Pathway-Guided Optimization of Deep Generative Molecular Design Models for Cancer Therapy
The data-driven drug design problem can be formulated as an optimization task of a potentially expensive black-box objective function over a huge high-dimensional and structured molecular space. The junction tree variational autoencoder (JTVAE) has been shown to be an efficient generative model that can be used for suggesting legitimate novel drug-like small molecules with improved properties. While the performance of the generative molecular design (GMD) scheme strongly depends on the initial training data, one can improve its sampling efficiency for suggesting better molecules with enhanced properties by optimizing the latent space. In this work, we propose how mechanistic models - such as pathway models described by differential equations - can be used for effective latent space optimization (LSO) of JTVAEs and other similar models for GMD. To demonstrate the potential of our proposed approach, we show how a pharmacodynamic model, assessing the therapeutic efficacy of a drug-like small molecule by predicting how it modulates a cancer pathway, can be incorporated for effective LSO of data-driven models for GMD.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
505,903
2201.13380
Deep Learning Macroeconomics
Limited datasets and complex nonlinear relationships are among the challenges that may emerge when applying econometrics to macroeconomic problems. This research proposes deep learning as an approach to transfer learning in the former case and to map relationships between variables in the latter case. Although macroeconomists already apply transfer learning when assuming a given a priori distribution in a Bayesian context, estimating a structural VAR with signal restriction and calibrating parameters based on results observed in other models, to name a few examples, advancing a more systematic transfer learning strategy in applied macroeconomics is the innovation we are introducing. We explore the proposed strategy empirically, showing that data from different but related domains, a type of transfer learning, helps identify the business cycle phases when there is no business cycle dating committee and to quickly estimate an economics-based output gap. Next, since deep learning methods are a way of learning representations, formed by the composition of multiple non-linear transformations that yield more abstract representations, we apply deep learning for mapping low-frequency variables from high-frequency variables. The results obtained show the suitability of deep learning models applied to macroeconomic problems. First, models learned to classify United States business cycles correctly. Then, applying transfer learning, they were able to identify the business cycles of out-of-sample Brazilian and European data. Along the same lines, the models learned to estimate the output gap based on the U.S. data and obtained good performance when faced with Brazilian data. Additionally, deep learning proved adequate for mapping low-frequency variables from high-frequency data to interpolate, distribute, and extrapolate time series by related series.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
277,966
1608.05225
Active Learning for Approximation of Expensive Functions with Normal Distributed Output Uncertainty
When approximating a black-box function, sampling with active learning focussing on regions with non-linear responses tends to improve accuracy. We present the FLOLA-Voronoi method introduced previously for deterministic responses, and theoretically derive the impact of output uncertainty. The algorithm automatically puts more emphasis on exploration to provide more information to the models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
59,950
2006.10417
Deep Dense and Convolutional Autoencoders for Unsupervised Anomaly Detection in Machine Condition Sounds
This technical report describes two methods that were developed for Task 2 of the DCASE 2020 challenge. The challenge involves unsupervised learning to detect anomalous sounds, thus only normal machine working condition samples are available during the training process. The two methods involve deep autoencoders, based on dense and convolutional architectures that use mel-spectrogram processed sound features. Experiments were conducted using the six machine type datasets of the challenge. Overall, competitive results were achieved by the proposed dense and convolutional AE, outperforming the baseline challenge method.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
182,882
2101.09334
Robotic Knee Tracking Control to Mimic the Intact Human Knee Profile Based on Actor-critic Reinforcement Learning
We address a state-of-the-art reinforcement learning (RL) control approach to automatically configure robotic prosthesis impedance parameters to enable end-to-end, continuous locomotion intended for transfemoral amputee subjects. Specifically, our actor-critic based RL provides tracking control of a robotic knee prosthesis to mimic the intact knee profile. This is a significant advance from our previous RL based automatic tuning of prosthesis control parameters which have centered on regulation control with a designer prescribed robotic knee profile as the target. In addition to presenting the complete tracking control algorithm based on direct heuristic dynamic programming (dHDP), we provide an analytical framework for the tracking controller with constrained inputs. We show that our proposed tracking control possesses several important properties, such as weight convergence of the learning networks, Bellman (sub)optimality of the cost-to-go value function and control input, and practical stability of the human-robot system under input constraint. We further provide a systematic simulation of the proposed tracking control using a realistic human-robot system simulator, the OpenSim, to emulate how the dHDP enables level ground walking, walking on different terrains and at different paces. These results show that our proposed dHDP based tracking control is not only theoretically suitable, but also practically useful.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
216,564
1412.6730
Convergence of Nonlinear Observers on R^n with a Riemannian Metric (Part I)
We study how convergence of an observer whose state lives in a copy of the given system's space can be established using a Riemannian metric. We show that the existence of an observer guaranteeing the property that a Riemannian distance between system and observer solutions is nonincreasing implies that the Lie derivative of the Riemannian metric along the system vector field is conditionally negative. Moreover, we establish that the existence of this metric is related to the observability of the system's linearization along its solutions. Moreover, if the observer has an infinite gain margin then the level sets of the output function are geodesically convex. Conversely, we establish that, if a complete Riemannian metric has a Lie derivative along the system vector field that is conditionally negative and is such that the output function has a monotonicity property, then there exists an observer with an infinite gain margin.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
38,705
2006.04439
Liquid Time-constant Networks
We introduce a new class of time-continuous recurrent neural network models. Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates. The resulting models represent dynamical systems with varying (i.e., liquid) time-constants coupled to their hidden state, with outputs being computed by numerical differential equation solvers. These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks. To demonstrate these properties, we first take a theoretical approach to find bounds over their dynamics and compute their expressive power by the trajectory length measure in latent trajectory space. We then conduct a series of time-series prediction experiments to manifest the approximation capability of Liquid Time-Constant Networks (LTCs) compared to classical and modern RNNs. Code and data are available at https://github.com/raminmh/liquid_time_constant_networks
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
180,694
2005.10600
A Neural Network Looks at Leonardo's(?) Salvator Mundi
We use convolutional neural networks (CNNs) to analyze authorship questions surrounding the works of Leonardo da Vinci -- in particular, Salvator Mundi, the world's most expensive painting and among the most controversial. Trained on the works of an artist under study and visually comparable works of other artists, our system can identify likely forgeries and shed light on attribution controversies. Leonardo's few extant paintings test the limits of our system and require corroborative techniques of testing and analysis.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
178,227
2404.07635
Dual Quaternion Control of UAVs with Cable-suspended Load
Modeling the kinematics and dynamics of robotics systems with suspended loads using dual quaternions has not been explored so far. This paper introduces a new innovative control strategy using dual quaternions for UAVs with cable-suspended loads, focusing on the sling load lifting and tracking problems. By utilizing the mathematical efficiency and compactness of dual quaternions, a unified representation of the UAV and its suspended load's dynamics and kinematics is achieved, facilitating the realization of load lifting and trajectory tracking. The simulation results have tested the proposed strategy's accuracy, efficiency, and robustness. This study makes a substantial contribution to present this novel control strategy that harnesses the benefits of dual quaternions for cargo UAVs. Our work also holds promise for inspiring future innovations in under-actuated systems control using dual quaternions.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
445,914
1712.07800
Model-Based Clustering of Nonparametric Weighted Networks with Application to Water Pollution Analysis
Water pollution is a major global environmental problem, and it poses a great environmental risk to public health and biological diversity. This work is motivated by assessing the potential environmental threat of coal mining through increased sulfate concentrations in river networks, which do not belong to any simple parametric distribution. However, existing network models mainly focus on binary or discrete networks and weighted networks with known parametric weight distributions. We propose a principled nonparametric weighted network model based on exponential-family random graph models and local likelihood estimation and study its model-based clustering with application to large-scale water pollution network analysis. We do not require any parametric distribution assumption on network weights. The proposed method greatly extends the methodology and applicability of statistical network models. Furthermore, it is scalable to large and complex networks in large-scale environmental studies. The power of our proposed methods is demonstrated in simulation studies and a real application to sulfate pollution network analysis in the Ohio watershed located in Pennsylvania, United States.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
87,102
2110.14747
Dynamic Review-based Recommenders
Just as user preferences change with time, item reviews also reflect those same preference changes. In a nutshell, if one is to sequentially incorporate review content knowledge into recommender systems, one is naturally led to dynamical models of text. In the present work we leverage the known power of reviews to enhance rating predictions in a way that (i) respects the causality of review generation and (ii) includes, in a bidirectional fashion, the ability of ratings to inform language review models and vice-versa, language representations that help predict ratings end-to-end. Moreover, our representations are time-interval aware and thus yield a continuous-time representation of the dynamics. We provide experiments on real-world datasets and show that our methodology is able to outperform several state-of-the-art models. Source code for all models can be found at [1].
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
263,618
2307.11430
Analysis of potential lifetime extension through dynamic battery reconfiguration
Growing demands for electrification result in increasingly larger battery packs. Due to factors such as cell position in the pack and variations in the manufacturing process, the packs exhibit variations in the performance of their constituent cells. Moreover, due to the fixed cell configuration, the weakest cell renders the pack highly susceptible to these variations. Reconfigurable battery pack systems, which have increased control flexibility due to additional power electronics, present a promising solution for these issues. Nevertheless, to what extent they can prolong the battery lifetime has not been investigated. This simulation study analyzes the potential of dynamic reconfiguration for extending battery lifetime w.r.t. several parameters. Results indicate that the lifetime extension is larger for series than for parallel configurations. For the latter, the dominant factor is equivalent full cycles spread at the end of life, but resistance increase with age and the number of cells in parallel are also influential. Finally, for the former, the number of series-connected elements amplifies these effects.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
380,904
2312.15831
Outlier-immune Data-driven Linear Power Flow Model Construction via Mixed-Integer Programming
The common approaches to construct a data-driven linear power flow (DD-LPF) model cannot completely eliminate the adverse impacts of outliers in a training dataset. In this letter, a novel outlier-immune DD-LPF model construction method via mixed-integer programming is presented for automatically and optimally identifying outliers to form a more accurate LPF model. Two acceleration solution strategies are further suggested to reduce the computational time. Case studies demonstrate the superior accuracy and comparable computational time of the proposed method when compared to three common approaches.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
418,148
2110.01895
Investigating the Impact of Pre-trained Language Models on Dialog Evaluation
Recently, there has been a surge of interest in applying pre-trained language models (Pr-LMs) in automatic open-domain dialog evaluation. Pr-LMs offer a promising direction for addressing the multi-domain evaluation challenge. Yet, the impact of different Pr-LMs on the performance of automatic metrics is not well understood. This paper examines 8 different Pr-LMs and studies their impact on three typical automatic dialog evaluation metrics across three different dialog evaluation benchmarks. Specifically, we analyze how the choice of Pr-LMs affects the performance of automatic metrics. Extensive correlation analyses on each of the metrics are performed to assess the effects of different Pr-LMs along various axes, including pre-training objectives, dialog evaluation criteria, model size, and cross-dataset robustness. This study serves as the first comprehensive assessment of the effects of different Pr-LMs on automatic dialog evaluation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
258,934
1710.07965
Backtracking Regression Forests for Accurate Camera Relocalization
Camera relocalization plays a vital role in many robotics and computer vision tasks, such as global localization, recovery from tracking failure, and loop closure detection. Recent random forests based methods directly predict 3D world locations for 2D image locations to guide the camera pose optimization. During training, each tree greedily splits the samples to minimize the spatial variance. However, these greedy splits often produce uneven sub-trees in training or incorrect 2D-3D correspondences in testing. To address these problems, we propose a sample-balanced objective to encourage equal numbers of samples in the left and right sub-trees, and a novel backtracking scheme to remedy the incorrect 2D-3D correspondence predictions. Furthermore, we extend the regression forests based methods to use local features in both training and testing stages for outdoor RGB-only applications. Experimental results on publicly available indoor and outdoor datasets demonstrate the efficacy of our approach, which shows superior or on-par accuracy with several state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
83,021
2303.11564
Agave crop segmentation and maturity classification with deep learning data-centric strategies using very high-resolution satellite imagery
The responsible and sustainable agave-tequila production chain is fundamental for the social, environmental and economic development of Mexico's agave regions. It is therefore relevant to develop new tools for large-scale automatic agave region monitoring. In this work, we present an Agave tequilana Weber azul crop segmentation and maturity classification using very high resolution satellite imagery, which could be useful for this task. To achieve this, we solve real-world deep learning problems in the very specific context of agave crop segmentation such as lack of data, low quality labels, highly imbalanced data, and low model performance. The proposed strategies go beyond data augmentation and data transfer combining active learning and the creation of synthetic images with human supervision. As a result, the segmentation performance evaluated with Intersection over Union (IoU) value increased from 0.72 to 0.90 in the test set. We also propose a method for classifying agave crop maturity with 95% accuracy. With the resulting accurate models, agave production forecasting can be made available for large regions. In addition, some supply-demand problems, such as excessive supplies of agave or deforestation, could be detected early.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
352,900
2310.04727
Task Aware Modulation using Representation Learning: An Approach for Few Shot Learning in Environmental Systems
We introduce TAM-RL (Task Aware Modulation using Representation Learning), a novel multimodal meta-learning framework for few-shot learning in heterogeneous systems, designed for science and engineering problems where entities share a common underlying forward model but exhibit heterogeneity due to entity-specific characteristics. TAM-RL leverages an amortized training process with a modulation network and a base network to learn task-specific modulation parameters, enabling efficient adaptation to new tasks with limited data. We evaluate TAM-RL on two real-world environmental datasets: Gross Primary Product (GPP) prediction and streamflow forecasting, demonstrating significant improvements over existing meta-learning methods. On the FLUXNET dataset, TAM-RL improves RMSE by 18.9\% over MMAML with just one month of few-shot data, while for streamflow prediction, it achieves an 8.21\% improvement with one year of data. Synthetic data experiments further validate TAM-RL's superior performance in heterogeneous task distributions, outperforming the baselines in the most heterogeneous setting. Notably, TAM-RL offers substantial computational efficiency, with at least 3x faster training times compared to gradient-based meta-learning approaches while being much simpler to train due to reduced complexity. Ablation studies highlight the importance of pretraining and adaptation mechanisms in TAM-RL's performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
397,791
2210.03255
Damage Control During Domain Adaptation for Transducer Based Automatic Speech Recognition
Automatic speech recognition models are often adapted to improve their accuracy in a new domain. A potential drawback of model adaptation to new domains is catastrophic forgetting, where the Word Error Rate on the original domain is significantly degraded. This paper addresses the situation when we want to simultaneously adapt automatic speech recognition models to a new domain and limit the degradation of accuracy on the original domain without access to the original training dataset. We propose several techniques such as a limited training strategy and regularized adapter modules for the Transducer encoder, prediction, and joiner network. We apply these methods to the Google Speech Commands and to the UK and Ireland English Dialect speech data set and obtain strong results on the new target domain while limiting the degradation on the original domain.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
321,953
2112.14005
Towards Relatable Explainable AI with the Perceptual Process
Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between examples or raw features, which remain difficult to interpret, since they lack semantic meaning. We argue that explanations must be more relatable to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations. We investigated the application of vocal emotion recognition, and implemented a modular multi-task deep neural network to predict and explain emotions from speech. From think-aloud and controlled studies, we found that counterfactual explanations were useful and further enhanced with semantic cues, but not saliency explanations. This work provides insights into providing and evaluating relatable contrastive explainable AI for perception applications.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
273,423
0708.0271
Capacity Region of the Finite-State Multiple Access Channel with and without Feedback
The capacity region of the Finite-State Multiple Access Channel (FS-MAC) with feedback that may be an arbitrary time-invariant function of the channel output samples is considered. We characterize both an inner and an outer bound for this region, using Massey's directed information. These bounds are shown to coincide, and hence yield the capacity region, of FS-MACs where the state process is stationary and ergodic and not affected by the inputs. Though `multi-letter' in general, our results yield explicit conclusions when applied to specific scenarios of interest. E.g., our results allow us to: - Identify a large class of FS-MACs, that includes the additive mod-2 noise MAC where the noise may have memory, for which feedback does not enlarge the capacity region. - Deduce that, for a general FS-MAC with states that are not affected by the input, if the capacity (region) without feedback is zero, then so is the capacity (region) with feedback. - Deduce that, for a MAC that can be decomposed into a `multiplexer' concatenated by a point-to-point channel (with, without, or with partial feedback), the capacity region is given by $\sum_{m} R_m \leq C$, where C is the capacity of the point-to-point channel and m indexes the encoders. Moreover, we show that for this family of channels source-channel coding separation holds.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
517
2106.03780
Learning Stochastic Optimal Policies via Gradient Descent
We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric control policies. We propose a derivation of adjoint sensitivity results for stochastic differential equations through direct application of variational calculus. Then, given an objective function for a predetermined task specifying the desiderata for the controller, we optimize their parameters via iterative gradient descent methods. In doing so, we extend the range of applicability of classical SOC techniques, often requiring strict assumptions on the functional form of system and control. We verify the performance of the proposed approach on a continuous-time, finite horizon portfolio optimization with proportional transaction costs.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
239,449
2412.20061
Comparative Analysis of Listwise Reranking with Large Language Models in Limited-Resource Language Contexts
Large Language Models (LLMs) have demonstrated significant effectiveness across various NLP tasks, including text ranking. This study assesses the performance of LLMs in listwise reranking for limited-resource African languages. We compare proprietary models RankGPT3.5, Rank4o-mini, RankGPTo1-mini and RankClaude-sonnet in cross-lingual contexts. Results indicate that these LLMs significantly outperform traditional baseline methods such as BM25-DT in most evaluation metrics, particularly in nDCG@10 and MRR@100. These findings highlight the potential of LLMs in enhancing reranking tasks for low-resource languages and offer insights into cost-effective solutions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
521,082
1803.01199
Chest X-Ray Analysis of Tuberculosis by Deep Learning with Segmentation and Augmentation
The results of chest X-ray (CXR) analysis of 2D images to get the statistically reliable predictions (availability of tuberculosis) by computer-aided diagnosis (CADx) on the basis of deep learning are presented. They demonstrate the efficiency of lung segmentation, lossless and lossy data augmentation for CADx of tuberculosis by deep convolutional neural network (CNN) applied even to a small and not well-balanced dataset. CNN demonstrates the ability to train (despite overfitting) on the pre-processed dataset obtained after lung segmentation, in contrast to the original non-segmented dataset. Lossless data augmentation of the segmented dataset leads to the lowest validation loss (without overfitting) and nearly the same accuracy (within the limits of standard deviation) in comparison to the original and other pre-processed datasets after lossy data augmentation. The additional limited lossy data augmentation results in a lower validation loss, but with a decrease of the validation accuracy. In conclusion, besides more complex deep CNNs and bigger datasets, better progress of CADx even for small and not well-balanced datasets could be obtained by better segmentation, data augmentation, dataset stratification, and exclusion of non-evident outliers.
false
false
false
false
false
false
true
false
false
false
false
true
false
true
false
false
false
false
91,827
1202.3768
The Structure of Signals: Causal Interdependence Models for Games of Incomplete Information
Traditional economic models typically treat private information, or signals, as generated from some underlying state. Recent work has explicated alternative models, where signals correspond to interpretations of available information. We show that the difference between these formulations can be sharply cast in terms of causal dependence structure, and employ graphical models to illustrate the distinguishing characteristics. The graphical representation supports inferences about signal patterns in the interpreted framework, and suggests how results based on the generated model can be extended to more general situations. Specific insights about bidding games in classical auction mechanisms derive from qualitative graphical models.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
14,440
1612.08875
The Pessimistic Limits and Possibilities of Margin-based Losses in Semi-supervised Learning
Consider a classification problem where we have both labeled and unlabeled data available. We show that for linear classifiers defined by convex margin-based surrogate losses that are decreasing, it is impossible to construct any semi-supervised approach that is able to guarantee an improvement over the supervised classifier measured by this surrogate loss on the labeled and unlabeled data. For convex margin-based loss functions that also increase, we demonstrate safe improvements are possible.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
66,127
1001.3708
Capacity Bounds and Lattice Coding for the Star Relay Network
A half-duplex wireless network with 6 lateral nodes, 3 transmitters and 3 receivers, and a central relay is considered. The transmitters wish to send information to their corresponding receivers via a two phase communication protocol. The receivers decode their desired messages by using side information and the signals received from the relay. We derive an outer bound on the capacity region of any two phase protocol as well as 3 achievable regions by employing different relaying strategies. In particular, we combine physical and network layer coding to take advantage of the interference at the relay, using, for example, lattice-based codes. We then specialize our results to the exchange rate. It is shown that for any snr, we can achieve within 0.5 bit of the upper bound by lattice coding and within 0.34 bit, if we take the best of the 3 strategies. Also, for high snr, lattice coding is within log(3)/4 ~ 0.4 bit of the upper bound.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
5,474
2401.05680
Use of Graph Neural Networks in Aiding Defensive Cyber Operations
In an increasingly interconnected world, where information is the lifeblood of modern society, regular cyber-attacks sabotage the confidentiality, integrity, and availability of digital systems and information. Additionally, cyber-attacks differ depending on the objective and evolve rapidly to disguise defensive systems. However, a typical cyber-attack demonstrates a series of stages from attack initiation to final resolution, called an attack life cycle. These diverse characteristics and the relentless evolution of cyber attacks have led cyber defense to adopt modern approaches like Machine Learning to bolster defensive measures and break the attack life cycle. Among the adopted ML approaches, Graph Neural Networks have emerged as a promising approach for enhancing the effectiveness of defensive measures due to their ability to process and learn from heterogeneous cyber threat data. In this paper, we look into the application of GNNs in aiding to break each stage of one of the most renowned attack life cycles, the Lockheed Martin Cyber Kill Chain. We address each phase of CKC and discuss how GNNs contribute to preparing and preventing an attack from a defensive standpoint. Furthermore, we also discuss open research areas and further improvement scopes.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
true
false
false
420,872
1911.02646
Optimizing Semi-Stream CACHEJOIN for Near-Real-Time Data Warehousing
Streaming data join is a critical process in the field of near-real-time data warehousing. For this purpose, an adaptive semi-stream join algorithm called CACHEJOIN (Cache Join) focusing on non-uniform stream data is provided in the literature. However, this algorithm cannot exploit the memory and CPU resources optimally and consequently it leaves its service rate suboptimal due to sequential execution of both of its phases, called stream-probing (SP) phase and disk-probing (DP) phase. By integrating the advantages of CACHEJOIN, in this paper we present two modifications of it. The first is called P-CACHEJOIN (Parallel Cache Join) and enables the parallel processing of the two phases in CACHEJOIN. This increases the number of joined stream records and therefore improves throughput considerably. The second is called OP-CACHEJOIN (Optimized Parallel Cache Join) and implements a parallel loading of stored data into memory while the DP phase is executing. We present an empirical performance analysis of both of our approaches against the existing CACHEJOIN using a synthetic skewed dataset.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
152,409
2011.09369
Attentional Separation-and-Aggregation Network for Self-supervised Depth-Pose Learning in Dynamic Scenes
Learning depth and ego-motion from unlabeled videos via self-supervision from epipolar projection can improve the robustness and accuracy of the 3D perception and localization of vision-based robots. However, the rigid projection computed by ego-motion cannot represent all scene points, such as points on moving objects, leading to false guidance in these regions. To address this problem, we propose an Attentional Separation-and-Aggregation Network (ASANet), which can learn to distinguish and extract the scene's static and dynamic characteristics via the attention mechanism. We further propose a novel MotionNet with an ASANet as the encoder, followed by two separate decoders, to estimate the camera's ego-motion and the scene's dynamic motion field. Then, we introduce an auto-selecting approach to detect the moving objects for dynamic-aware learning automatically. Empirical experiments demonstrate that our method can achieve the state-of-the-art performance on the KITTI benchmark.
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
207,163
2204.04588
Robust Cross-Modal Representation Learning with Progressive Self-Distillation
The learning objective of vision-language approach of CLIP does not effectively account for the noisy many-to-many correspondences found in web-harvested image captioning datasets, which contributes to its compute and data inefficiency. To address this challenge, we introduce a novel training framework based on cross-modal contrastive learning that uses progressive self-distillation and soft image-text alignments to more efficiently learn robust representations from noisy data. Our model distills its own knowledge to dynamically generate soft-alignment targets for a subset of images and captions in every minibatch, which are then used to update its parameters. Extensive evaluation across 14 benchmark datasets shows that our method consistently outperforms its CLIP counterpart in multiple settings, including: (a) zero-shot classification, (b) linear probe transfer, and (c) image-text retrieval, without incurring added computational cost. Analysis using an ImageNet-based robustness test-bed reveals that our method offers better effective robustness to natural distribution shifts compared to both ImageNet-trained models and CLIP itself. Lastly, pretraining with datasets spanning two orders of magnitude in size shows that our improvements over CLIP tend to scale with number of training examples.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
290,707
1903.02731
Integrating neural networks into the blind deblurring framework to compete with the end-to-end learning-based methods
Recently, end-to-end learning-based methods based on deep neural network (DNN) have been proven effective for blind deblurring. Without human-made assumptions and numerical algorithms, they are able to restore images with fewer artifacts and better perceptual quality. However, in practice, we also find some of their drawbacks. Without theoretical guidance, these methods cannot perform well when the motion is complex and sometimes generate unreasonable results. In this paper, to overcome these drawbacks, we integrate deep convolution neural networks into the conventional deblurring framework. Specifically, we build Stacked Estimation Residual Net (SEN) to estimate the motion flow map and Recurrent Prior Generative and Adversarial Net (RP-GAN) to learn the implicit image prior in the optimization model. Compared with state-of-the-art end-to-end learning-based methods, our method restores reasonable details and shows better generalization ability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
123,562
2201.10711
Sparsity Regularization For Cold-Start Recommendation
Recently, Generative Adversarial Networks (GANs) have been applied to the problem of Cold-Start Recommendation, but the training performance of these models is hampered by the extreme sparsity in warm user purchase behavior. In this paper we introduce a novel representation for user-vectors by combining user demographics and user preferences, making the model a hybrid system which uses Collaborative Filtering and Content Based Recommendation. Our system models user purchase behavior using weighted user-product preferences (explicit feedback) rather than binary user-product interactions (implicit feedback). Using this we develop a novel sparse adversarial model, SRLGAN, for Cold-Start Recommendation leveraging the sparse user-purchase behavior which ensures training stability and avoids over-fitting on warm users. We evaluate the SRLGAN on two popular datasets and demonstrate state-of-the-art results.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
277,075
2401.12988
Few-Shot Learning for Chronic Disease Management: Leveraging Large Language Models and Multi-Prompt Engineering with Medical Knowledge Injection
This study harnesses state-of-the-art AI technology for chronic disease management, specifically in detecting various mental disorders through user-generated textual content. Existing studies typically rely on fully supervised machine learning, which presents challenges such as the labor-intensive manual process of annotating extensive training data for each disease and the need to design specialized deep learning architectures for each problem. To address such challenges, we propose a novel framework that leverages advanced AI techniques, including large language models and multi-prompt engineering. Specifically, we address two key technical challenges in data-driven chronic disease management: (1) developing personalized prompts to represent each user's uniqueness and (2) incorporating medical knowledge into prompts to provide context for chronic disease detection, instruct learning objectives, and operationalize prediction goals. We evaluate our method using four mental disorders, which are prevalent chronic diseases worldwide, as research cases. On the depression detection task, our method (F1 = 0.975~0.978) significantly outperforms traditional supervised learning paradigms, including feature engineering (F1 = 0.760) and architecture engineering (F1 = 0.756). Meanwhile, our approach demonstrates success in few-shot learning, i.e., requiring only a minimal number of training examples to detect chronic diseases based on user-generated textual content (i.e., only 2, 10, or 100 subjects). Moreover, our method can be generalized to other mental disorder detection tasks, including anorexia, pathological gambling, and self-harm (F1 = 0.919~0.978).
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
423,571
2111.00390
Dual Attention Network for Heart Rate and Respiratory Rate Estimation
Heart rate and respiratory rate measurement is a vital step for diagnosing many diseases. Non-contact camera based physiological measurement is more accessible and convenient in Telehealth nowadays than contact instruments such as fingertip oximeters since non-contact methods reduce the risk of infection. However, remote physiological signal measurement is challenging due to environment illumination variations, head motion, facial expression, etc. It's also desirable to have a unified network which could estimate both heart rate and respiratory rate to reduce system complexity and latency. We propose a convolutional neural network which leverages spatial attention and channel attention, which we call the dual attention network (DAN), to jointly estimate heart rate and respiratory rate with camera video as input. Extensive experiments demonstrate that our proposed system significantly improves heart rate and respiratory rate measurement accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
264,200
2407.06304
VIMI: Grounding Video Generation through Multi-modal Instruction
Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a lack of visual grounding and restricting their versatility and application in multimodal integration. To address this, we construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts and then utilize a two-stage training strategy to enable diverse video generation tasks within the same model. In the first stage, we propose a multimodal conditional video generation framework for pretraining on these augmented datasets, establishing a foundational model for grounded video generation. Secondly, we finetune the model from the first stage on three video generation tasks, incorporating multi-modal instructions. This process further refines the model's ability to handle diverse inputs and tasks, ensuring seamless integration of multi-modal information. After this two-stage training process, VIMI demonstrates multimodal understanding capabilities, producing contextually rich and personalized videos grounded in the provided inputs, as shown in Figure 1. Compared to previous visual grounded video generation methods, VIMI can synthesize consistent and temporally coherent videos with large motion while retaining the semantic control. Lastly, VIMI also achieves state-of-the-art text-to-video generation results on UCF101 benchmark.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
471,345