Schema (one record per paper):
  id: string, 9 to 16 characters
  title: string, 4 to 278 characters
  abstract: string, 3 to 4.08k characters
  label columns (bool, 2 classes each, in this order): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
  __index_level_0__: int64, range 0 to 541k
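A minimal sketch of how these columns might be consumed, assuming the dump has been exported to a pandas DataFrame with exactly the schema above (the parquet file name below is hypothetical, not part of this dataset):

```python
import pandas as pd

# The 18 binary label columns, in the order they appear in the schema above.
LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

# Hypothetical export of this dataset; substitute the actual file.
df = pd.read_parquet("arxiv_multilabel.parquet")

def active_labels(row: pd.Series) -> list:
    """Return the names of the label columns that are True for one row."""
    return [col for col in LABEL_COLS if row[col]]

# Example: collect (id, title, labels) triples for every record.
records = [
    (row["id"], row["title"], active_labels(row))
    for _, row in df.iterrows()
]
```

Because the columns are kept in schema order, `df[LABEL_COLS].to_numpy(dtype=int)` also yields a dense multi-label target matrix directly usable for training a classifier.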
2309.16237
Object Motion Guided Human Motion Synthesis
Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours.
Labels: cs.CV
Index: 395,277

2310.11409
LLMs as Hackers: Autonomous Linux Privilege Escalation Attacks
Penetration testing, an essential component of software security testing, allows organizations to identify and remediate vulnerabilities in their systems, thus bolstering their defense mechanisms against cyberattacks. One recent advancement in the realm of penetration testing is the utilization of Large Language Models (LLMs). We explore the intersection of LLMs and penetration testing to gain insight into their capabilities and challenges in the context of privilege escalation. We introduce a fully automated privilege-escalation tool designed for evaluating the efficacy of LLMs for (ethical) hacking, executing benchmarks using multiple LLMs, and investigating their respective results. Our results show that GPT-4-turbo is well suited to exploiting vulnerabilities (33-83% of vulnerabilities). GPT-3.5-turbo can abuse 16-50% of vulnerabilities, while local models, such as Llama3, can only exploit between 0 and 33% of the vulnerabilities. We analyze the impact of different context sizes, in-context learning, optional high-level guidance mechanisms, and memory management techniques. We discuss challenging areas for LLMs, including maintaining focus during testing, coping with errors, and finally comparing LLMs with human hackers. The current version of the LLM-guided privilege-escalation prototype can be found at https://github.com/ipa-labs/hackingBuddyGPT.
Labels: cs.AI, cs.CR
Index: 400,628

2310.12970
Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding
The real-world deployment of an autonomous driving system requires its components to run on-board and in real-time, including the motion prediction module that predicts the future trajectories of surrounding traffic participants. Existing agent-centric methods have demonstrated outstanding performance on public benchmarks. However, they suffer from high computational overhead and poor scalability as the number of agents to be predicted increases. To address this problem, we introduce the K-nearest neighbor attention with relative pose encoding (KNARPE), a novel attention mechanism allowing the pairwise-relative representation to be used by Transformers. Then, based on KNARPE we present the Heterogeneous Polyline Transformer with Relative pose encoding (HPTR), a hierarchical framework enabling asynchronous token update during the online inference. By sharing contexts among agents and reusing the unchanged contexts, our approach is as efficient as scene-centric methods, while performing on par with state-of-the-art agent-centric methods. Experiments on Waymo and Argoverse-2 datasets show that HPTR achieves superior performance among end-to-end methods that do not apply expensive post-processing or model ensembling. The code is available at https://github.com/zhejz/HPTR.
Labels: cs.RO, cs.CV
Index: 401,223

2007.14823
Theory of gating in recurrent neural networks
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating - i.e. multiplicative - interactions are ubiquitous in real neurons and also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: i) timescales and ii) dimensionality. The gate controlling timescales leads to a novel, marginally stable state, where the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing a map for principled parameter initialization choices to ML practitioners.
Labels: cs.LG
Index: 189,501

1302.6009
On learning parametric-output HMMs
We present a novel approach for learning an HMM whose outputs are distributed according to a parametric family. This is done by {\em decoupling} the learning task into two steps: first estimating the output parameters, and then estimating the hidden states transition probabilities. The first step is accomplished by fitting a mixture model to the output stationary distribution. Given the parameters of this mixture model, the second step is formulated as the solution of an easily solvable convex quadratic program. We provide an error analysis for the estimated transition probabilities and show they are robust to small perturbations in the estimates of the mixture parameters. Finally, we support our analysis with some encouraging empirical results.
Labels: cs.LG
Index: 22,347

1701.02538
Vandermonde Matrices with Nodes in the Unit Disk and the Large Sieve
We derive bounds on the extremal singular values and the condition number of NxK, with N>=K, Vandermonde matrices with nodes in the unit disk. The mathematical techniques we develop to prove our main results are inspired by a link---first established by Selberg [1] and later extended by Moitra [2]---between the extremal singular values of Vandermonde matrices with nodes on the unit circle and large sieve inequalities. Our main conceptual contribution lies in establishing a connection between the extremal singular values of Vandermonde matrices with nodes in the unit disk and a novel large sieve inequality involving polynomials in z \in C with |z|<=1. Compared to Baz\'an's upper bound on the condition number [3], which, to the best of our knowledge, constitutes the only analytical result---available in the literature---on the condition number of Vandermonde matrices with nodes in the unit disk, our bound not only takes a much simpler form, but is also sharper for certain node configurations. Moreover, the bound we obtain can be evaluated consistently in a numerically stable fashion, whereas the evaluation of Baz\'an's bound requires the solution of a linear system of equations which has the same condition number as the Vandermonde matrix under consideration and can therefore lead to numerical instability in practice. As a byproduct, our result---when particularized to the case of nodes on the unit circle---slightly improves upon the Selberg-Moitra bound.
Labels: cs.IT
Index: 66,565

2007.05783
Simulating multi-exit evacuation using deep reinforcement learning
Conventional simulations on multi-exit indoor evacuation focus primarily on how to determine a reasonable exit based on numerous factors in a changing environment. Results commonly include some congested and other under-utilized exits, especially with massive numbers of pedestrians. We propose a multi-exit evacuation simulation based on Deep Reinforcement Learning (DRL), referred to as the MultiExit-DRL, which involves a Deep Neural Network (DNN) framework to facilitate state-to-action mapping. The DNN framework applies Rainbow Deep Q-Network (DQN), a DRL algorithm that integrates several advanced DQN methods, to improve data utilization and algorithm stability, and further divides the action space into eight isometric directions for possible pedestrian choices. We compare MultiExit-DRL with two conventional multi-exit evacuation simulation models in three separate scenarios: 1) varying pedestrian distribution ratios, 2) varying exit width ratios, and 3) varying open schedules for an exit. The results show that MultiExit-DRL presents great learning efficiency while reducing the total number of evacuation frames in all designed experiments. In addition, the integration of DRL allows pedestrians to explore other potential exits and helps determine optimal directions, leading to the high efficiency of exit utilization.
Labels: cs.LG
Index: 186,784

2207.06612
Deepfake Video Detection with Spatiotemporal Dropout Transformer
While the abuse of deepfake technology has caused serious concerns recently, how to detect deepfake videos is still a challenge due to the high photo-realistic synthesis of each frame. Existing image-level approaches often focus on single frames and ignore the spatiotemporal cues hidden in deepfake videos, resulting in poor generalization and robustness. The key to a video-level detector is to fully exploit the spatiotemporal inconsistency distributed in local facial regions across different frames in deepfake videos. Inspired by that, this paper proposes a simple yet effective patch-level approach to facilitate deepfake video detection via a spatiotemporal dropout transformer. The approach reorganizes each input video into a bag of patches that is then fed into a vision transformer to achieve robust representation. Specifically, a spatiotemporal dropout operation is proposed to fully explore patch-level spatiotemporal cues and serve as effective data augmentation to further enhance the model's robustness and generalization ability. The operation is flexible and can be easily plugged into existing vision transformers. Extensive experiments demonstrate the effectiveness of our approach against 25 state-of-the-art methods with impressive robustness, generalizability, and representation ability.
Labels: cs.AI, cs.CV
Index: 307,931

2102.12432
Deep Reinforcement Learning for Safe Landing Site Selection with Concurrent Consideration of Divert Maneuvers
This research proposes a new integrated framework for identifying safe landing locations and planning in-flight divert maneuvers. The state-of-the-art algorithms for landing zone selection utilize local terrain features such as slopes and roughness to judge the safety and priority of the landing point. However, when there are additional chances of observation and diverting in the future, these algorithms are not able to evaluate the safety of the decision itself to target the selected landing point considering the overall descent trajectory. In response to this challenge, we propose a reinforcement learning framework that optimizes a landing site selection strategy concurrently with a guidance and control strategy to the target landing site. The trained agent could evaluate and select landing sites with explicit consideration of the terrain features, quality of future observations, and control to achieve a safe and efficient landing trajectory at a system level. The proposed framework was able to achieve a 94.8% successful landing rate at highly challenging landing sites, where over 80% of the area around the initial target landing point is hazardous, by effectively updating the target landing site and feedback control gain during descent.
Labels: cs.AI, cs.LG, cs.RO, cs.SY
Index: 221,730

2103.11333
ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
In this paper, we propose a novel accelerated gradient method called ANITA for solving the fundamental finite-sum optimization problems. Concretely, we consider both general convex and strongly convex settings: i) For general convex finite-sum problems, ANITA improves the previous state-of-the-art result given by Varag (Lan et al., 2019). In particular, for large-scale problems, or when the convergence error is not very small, i.e., $n \geq \frac{1}{\epsilon^2}$, ANITA obtains the \emph{first} optimal result $O(n)$, matching the lower bound $\Omega(n)$ provided by Woodworth and Srebro (2016), while previous results are $O(n \log \frac{1}{\epsilon})$ of Varag (Lan et al., 2019) and $O(\frac{n}{\sqrt{\epsilon}})$ of Katyusha (Allen-Zhu, 2017). ii) For strongly convex finite-sum problems, we also show that ANITA can achieve the optimal convergence rate $O\big((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon}\big)$ matching the lower bound $\Omega\big((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon}\big)$ provided by Lan and Zhou (2015). Besides, ANITA enjoys a simpler loopless algorithmic structure, unlike previous accelerated algorithms such as Varag (Lan et al., 2019) and Katyusha (Allen-Zhu, 2017), which use double-loop structures. Moreover, we provide a novel \emph{dynamic multi-stage convergence analysis}, which is the key technical part for improving previous results to the optimal rates. We believe that our new theoretical rates and novel convergence analysis for the fundamental finite-sum problem will directly lead to key improvements for many other related problems, such as distributed/federated/decentralized optimization problems (e.g., Li and Richt\'arik, 2021). Finally, the numerical experiments show that ANITA converges faster than the previous state-of-the-art Varag (Lan et al., 2019), validating our theoretical results and confirming the practical superiority of ANITA.
Labels: cs.LG, Other
Index: 225,768

1401.8176
Double percolation phase transition in clustered complex networks
The internal organization of complex networks often has striking consequences on either their response to external perturbations or on their dynamical properties. In addition to small-world and scale-free properties, clustering is the most common topological characteristic observed in many real networked systems. In this paper, we report an extensive numerical study on the effects of clustering on the structural properties of complex networks. Strong clustering in heterogeneous networks induces the emergence of a core-periphery organization that has a critical effect on the percolation properties of the networks. We observe a novel double phase transition with an intermediate phase in which only the core of the network is percolated and a final phase in which the periphery percolates regardless of the core. This result implies breaking of the same symmetry at two different values of the control parameter, in stark contrast to the modern theory of continuous phase transitions. Inspired by this core-periphery organization, we introduce a simple model that allows us to analytically prove that such an anomalous phase transition is in fact possible.
Labels: cs.SI
Index: 30,509

2305.12396
Joint Feature and Differentiable $ k $-NN Graph Learning using Dirichlet Energy
Feature selection (FS) plays an important role in machine learning, which extracts important features and accelerates the learning process. In this paper, we propose a deep FS method that simultaneously conducts feature selection and differentiable $ k $-NN graph learning based on the Dirichlet Energy. The Dirichlet Energy identifies important features by measuring their smoothness on the graph structure, and facilitates the learning of a new graph that reflects the inherent structure in new feature subspace. We employ Optimal Transport theory to address the non-differentiability issue of learning $ k $-NN graphs in neural networks, which theoretically makes our method applicable to other graph neural networks for dynamic graph learning. Furthermore, the proposed framework is interpretable, since all modules are designed algorithmically. We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets.
Labels: cs.LG
Index: 365,977

1501.03271
Higher dimensional homodyne filtering for suppression of incidental phase artifacts in multichannel MRI
The aim of this paper is to introduce procedural steps for extension of the 1D homodyne phase correction for k-space truncation in all gradient encoding directions. Compared to the existing method applied to 2D partial k-space, signal losses introduced by the phase correction filter are observed to be minimal for the extended approach. In addition, the modified form of phase correction mitigates Incidental Phase Artifacts (IPA) due to truncation. For parallel imaging with undersampling along the phase encode direction, the extended homodyne filtering is shown to be effective for minimizing these artifacts when each of the channel k-spaces is truncated along both phase and frequency encode directions. This is illustrated with 2D partial k-space for flow-compensated multichannel Susceptibility Weighted Imaging (SWI). Extension of our method to 3D partial k-space shows improved reconstruction of flow information in phase contrast angiography.
Labels: cs.CV
Index: 39,256

2205.08746
Power Module Heat Sink Design Optimization with Ensembles of Data-Driven Polynomial Chaos Surrogate Models
We consider the problem of optimizing the design of a heat sink used for cooling an insulated gate bipolar transistor (IGBT) power module. The thermal behavior of the heat sink is originally estimated using a high-fidelity computational fluid dynamics (CFD) simulation, which renders numerical optimization too computationally demanding. To enable optimization studies, we substitute the CFD simulation model with an inexpensive polynomial surrogate model that approximates the relation between the device's design features and a relevant thermal quantity of interest. The surrogate model of choice is a data-driven polynomial chaos expansion (DD-PCE), which learns the aforementioned relation by means of polynomial regression. Advantages of the DD-PCE include its applicability in small-data regimes and its easily adaptable model structure. To address the issue of model-form uncertainty and model robustness in view of limited training and test data, ensembles of DD-PCEs are generated based on data re-shuffling. Then, using the full ensemble of surrogate models, the surrogate-based predictions are accompanied by uncertainty metrics such as mean value and variance. Once trained and tested in terms of accuracy and robustness, the ensemble of DD-PCE surrogates replaces the high-fidelity simulation model in optimization algorithms aiming to identify heat sink designs that optimize the thermal behavior of the IGBT under geometrical and operational constraints. Optimized heat sink designs are obtained for a computational cost much smaller than utilizing the original model in the optimization procedure. Due to ensemble modeling, the optimization results can also be assessed in terms of uncertainty and robustness. Comparisons against alternative surrogate modeling techniques illustrate why the DD-PCE should be preferred in the considered setting.
Labels: cs.CE
Index: 297,042

0801.1208
Hybrid Decoding of Finite Geometry LDPC Codes
For finite geometry low-density parity-check codes, heavy row and column weights in their parity-check matrix make decoding computationally expensive, even with Min-Sum (MS) variants. To alleviate this, we present a class of hybrid schemes that concatenate a parallel bit-flipping (BF) variant with an MS variant. In most SNR regions of interest, without compromising performance or convergence rate, simulation results show that the proposed hybrid schemes can save substantial computational complexity with respect to MS-variant decoding alone. Specifically, the BF variant, with much less computational complexity, bears most of the decoding load before resorting to the MS variant. Computational and hardware complexity is also elaborated to justify the feasibility of the hybrid schemes.
Labels: cs.IT
Index: 1,141

2402.01231
Unveiling Delay Effects in Traffic Forecasting: A Perspective from Spatial-Temporal Delay Differential Equations
Traffic flow forecasting is a fundamental research issue for transportation planning and management, which serves as a canonical and typical example of spatial-temporal predictions. In recent years, Graph Neural Networks (GNNs) and Recurrent Neural Networks (RNNs) have achieved great success in capturing spatial-temporal correlations for traffic flow forecasting. Yet, two non-negligible issues have not been well addressed: 1) The message passing in GNNs is immediate, while in reality the spatial message interactions among neighboring nodes can be delayed. A change of traffic flow at one node takes several minutes, i.e., a time delay, to influence its connected neighbors. 2) Traffic conditions undergo continuous changes. The prediction frequency for traffic flow forecasting may vary based on specific scenario requirements. Most existing discretized models require retraining for each prediction horizon, restricting their applicability. To tackle the above issues, we propose a neural Spatial-Temporal Delay Differential Equation model, namely STDDE. It includes both delay effects and continuity in a unified delay differential equation framework, which explicitly models the time delay in spatial information propagation. Furthermore, theoretical proofs are provided to show its stability. Then we design a learnable traffic-graph time-delay estimator, which utilizes the continuity of the hidden states to achieve the gradient backward process. Finally, we propose a continuous output module, allowing us to accurately predict traffic flow at various frequencies, which provides more flexibility and adaptability to different scenarios. Extensive experiments show the superiority of the proposed STDDE along with competitive computational efficiency.
Labels: cs.LG
Index: 425,924

2412.20473
Toward Scene Graph and Layout Guided Complex 3D Scene Generation
Recent advancements in object-centric text-to-3D generation have shown impressive results. However, generating complex 3D scenes remains an open challenge due to the intricate relations between objects. Moreover, existing methods are largely based on score distillation sampling (SDS), which constrains the ability to manipulate multiple objects with specific interactions. Addressing these critical yet underexplored issues, we present a novel framework of Scene Graph and Layout Guided 3D Scene Generation (GraLa3D). Given a text prompt describing a complex 3D scene, GraLa3D utilizes an LLM to model the scene using a scene graph representation with layout bounding box information. GraLa3D uniquely constructs the scene graph with single-object nodes and composite super-nodes. In addition to constraining 3D generation within the desirable layout, a major contribution lies in the modeling of interactions between objects in a super-node, while alleviating appearance leakage across objects within such nodes. Our experiments confirm that GraLa3D overcomes the above limitations and generates complex 3D scenes closely aligned with text prompts.
Labels: cs.CV
Index: 521,239

2403.16971
AIOS: LLM Agent Operating System
LLM-based intelligent agents face significant deployment challenges, particularly related to resource management. Allowing unrestricted access to LLM or tool resources can lead to inefficient or even potentially harmful resource allocation and utilization for agents. Furthermore, the absence of proper scheduling and resource management mechanisms in current agent designs hinders concurrent processing and limits overall system efficiency. As the diversity and complexity of agents continue to grow, addressing these resource management issues becomes increasingly critical to LLM-based agent systems. To address these challenges, this paper proposes the architecture of AIOS (LLM-based AI Agent Operating System) under the context of managing LLM-based agents. It introduces a novel architecture for serving LLM-based agents by isolating resources and LLM-specific services from agent applications into an AIOS kernel. This AIOS kernel provides fundamental services (e.g., scheduling, context management, memory management, storage management, access control) and efficient management of resources (e.g., LLM and external tools) for runtime agents. To enhance usability, AIOS also includes an AIOS-Agent SDK, a comprehensive suite of APIs designed for utilizing functionalities provided by the AIOS kernel. Experimental results demonstrate that using AIOS can achieve up to 2.1x faster execution for serving agents built by various agent frameworks. The source code is available at https://github.com/agiresearch/AIOS.
Labels: cs.AI, cs.CL, Other
Index: 441,263

2305.11403
Efficient Mixed Transformer for Single Image Super-Resolution
Recently, Transformer-based methods have achieved impressive results in single image super-resolution (SISR). However, the lack of a locality mechanism and high complexity limit their application in the field of super-resolution (SR). To solve these problems, we propose a new method, the Efficient Mixed Transformer (EMT), in this study. Specifically, we propose the Mixed Transformer Block (MTB), consisting of multiple consecutive transformer layers, in some of which the Pixel Mixer (PM) is used to replace Self-Attention (SA). PM can enhance local knowledge aggregation with pixel shifting operations. At the same time, no additional complexity is introduced, as PM has no parameters and floating-point operations. Moreover, we employ a striped window for SA (SWSA) to gain efficient global dependency modelling by utilizing image anisotropy. Experimental results show that EMT outperforms existing methods on benchmark datasets and achieves state-of-the-art performance. The code is available at https://github.com/Fried-Rice-Lab/FriedRiceLab.
Labels: cs.CV
Index: 365,512

1503.02986
Strategies for High-Throughput FPGA-based QC-LDPC Decoder Architecture
We propose, without loss of generality, strategies for achieving a high-throughput FPGA-based architecture for a QC-LDPC code based on a circulant-1 identity matrix construction. We present a novel representation of the parity-check matrix (PCM) providing a multi-fold throughput gain. Splitting of the node processing algorithm enables us to achieve pipelining of blocks and hence layers. By partitioning the PCM into not only layers but superlayers, we derive an upper bound on the pipelining depth for the compact representation. To validate the architecture, a decoder for the IEEE 802.11n (2012) QC-LDPC is implemented on the Xilinx Kintex-7 FPGA with the help of the FPGA IP compiler [2] available in the NI LabVIEW Communication System Design Suite (CSDS), which offers an automated and systematic compilation flow in which an optimized hardware implementation was generated from the LDPC algorithm in approximately 3 minutes, achieving an overall throughput of 608 Mb/s (at 260 MHz). To the best of our knowledge, this is the fastest implementation of the IEEE 802.11n QC-LDPC decoder using an algorithmic compiler.
Labels: cs.IT, Other
Index: 40,999

1806.05129
What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks
This paper investigates conditional generative adversarial networks (cGANs) to overcome a fundamental limitation of using geotagged media for geographic discovery, namely its sparse and uneven spatial distribution. We train a cGAN to generate ground-level views of a location given overhead imagery. We show the "fake" ground-level images are natural looking and are structurally similar to the real images. More significantly, we show the generated images are representative of the locations and that the representations learned by the cGANs are informative. In particular, we show that dense feature maps generated using our framework are more effective for land-cover classification than approaches which spatially interpolate features extracted from sparse ground-level images. To our knowledge, ours is the first work to use cGANs to generate ground-level views given overhead imagery and to explore the benefits of the learned representations.
Labels: cs.CV
Index: 100,392

2007.13181
Controller design for robust invariance from noisy data
For an unknown linear system, starting from noisy open-loop input-state data collected during a finite-length experiment, we directly design a linear feedback controller that guarantees robust invariance of a given polyhedral set of the state in the presence of disturbances. The main result is a necessary and sufficient condition for the existence of such a controller, and amounts to the solution of a linear program. The benefits of large and rich data sets for the solution of the problem are discussed. A numerical example about a simplified platoon of two vehicles illustrates the method.
Labels: cs.SY
Index: 189,046

2204.11335
Simulating Fluids in Real-World Still Images
In this work, we tackle the problem of real-world fluid animation from a still image. The key to our system is a surface-based layered representation deriving from video decomposition, where the scene is decoupled into a surface fluid layer and an impervious background layer with corresponding transparencies to characterize the composition of the two layers. The animated video can be produced by warping only the surface fluid layer according to the estimation of fluid motions and recombining it with the background. In addition, we introduce surface-only fluid simulation, a $2.5D$ fluid calculation version, as a replacement for motion estimation. Specifically, we leverage the triangular mesh based on a monocular depth estimator to represent the fluid surface layer and simulate the motion in a physics-based framework, drawing inspiration from the classic theory of the hybrid Lagrangian-Eulerian method, along with a learnable network so as to adapt to complex real-world image textures. We demonstrate the effectiveness of the proposed system through comparison with existing methods in both standard objective metrics and subjective ranking scores. Extensive experiments not only indicate our method's competitive performance for common fluid scenes but also better robustness and reasonability under complex transparent fluid scenarios. Moreover, as the proposed surface-based layer representation and surface-only fluid simulation naturally disentangle the scene, interactive editing such as adding objects to the river and texture replacement can be easily achieved with realistic results.
Labels: cs.CV
Index: 293,107

2203.08795
Zero Pixel Directional Boundary by Vector Transform
Boundaries are among the primary visual cues used by human and computer vision systems. One of the key problems in boundary detection is the label representation, which typically leads to class imbalance and, as a consequence, to thick boundaries that require non-differentiable post-processing steps to be thinned. In this paper, we re-interpret boundaries as 1-D surfaces and formulate a one-to-one vector transform function that allows for training of boundary prediction completely avoiding the class imbalance issue. Specifically, we define the boundary representation at any point as the unit vector pointing to the closest boundary surface. Our problem formulation leads to the estimation of direction as well as richer contextual information of the boundary, and, if desired, the availability of zero-pixel thin boundaries also at training time. Our method uses no hyper-parameter in the training loss and a fixed stable hyper-parameter at inference. We provide theoretical justification/discussions of the vector transform representation. We evaluate the proposed loss method using a standard architecture and show the excellent performance over other losses and representations on several datasets. Code is available at https://github.com/edomel/BoundaryVT.
Labels: cs.LG, cs.CV
Index: 285,912

2410.24032
Navigating the Unknown: A Chat-Based Collaborative Interface for Personalized Exploratory Tasks
The rise of large language models (LLMs) has revolutionized user interactions with knowledge-based systems, enabling chatbots to synthesize vast amounts of information and assist with complex, exploratory tasks. However, LLM-based chatbots often struggle to provide personalized support, particularly when users start with vague queries or lack sufficient contextual information. This paper introduces the Collaborative Assistant for Personalized Exploration (CARE), a system designed to enhance personalization in exploratory tasks by combining a multi-agent LLM framework with a structured user interface. CARE's interface consists of a Chat Panel, Solution Panel, and Needs Panel, enabling iterative query refinement and dynamic solution generation. The multi-agent framework collaborates to identify both explicit and implicit user needs, delivering tailored, actionable solutions. In a within-subject user study with 22 participants, CARE was consistently preferred over a baseline LLM chatbot, with users praising its ability to reduce cognitive load, inspire creativity, and provide more tailored solutions. Our findings highlight CARE's potential to transform LLM-based systems from passive information retrievers to proactive partners in personalized problem-solving and exploration.
Labels: cs.HC, cs.AI, cs.CL
Index: 504,301

2006.13253
Robot Object Retrieval with Contextual Natural Language Queries
Natural language object retrieval is a highly useful yet challenging task for robots in human-centric environments. Previous work has primarily focused on commands specifying the desired object's type such as "scissors" and/or visual attributes such as "red," thus limiting the robot to only known object classes. We develop a model to retrieve objects based on descriptions of their usage. The model takes in a language command containing a verb, for example "Hand me something to cut," and RGB images of candidate objects and selects the object that best satisfies the task specified by the verb. Our model directly predicts an object's appearance from the object's use specified by a verb phrase. We do not need to explicitly specify an object's class label. Our approach allows us to predict high level concepts like an object's utility based on the language query. Based on contextual information present in the language commands, our model can generalize to unseen object classes and unknown nouns in the commands. Our model correctly selects objects out of sets of five candidates to fulfill natural language commands, and achieves an average accuracy of 62.3% on a held-out test set of unseen ImageNet object classes and 53.0% on unseen object classes and unknown nouns. Our model also achieves an average accuracy of 54.7% on unseen YCB object classes, which have a different image distribution from ImageNet objects. We demonstrate our model on a KUKA LBR iiwa robot arm, enabling the robot to retrieve objects based on natural language descriptions of their usage. We also present a new dataset of 655 verb-object pairs denoting object usage over 50 verbs and 216 object classes.
Labels: cs.AI, cs.RO, cs.CL, cs.CV
Index: 183,838

2502.06164
Generalized Temporal Tensor Decomposition with Rank-revealing Latent-ODE
Tensor decomposition is a fundamental tool for analyzing multi-dimensional data by learning low-rank factors to represent high-order interactions. While recent works on temporal tensor decomposition have made significant progress by incorporating continuous timestamps in latent factors, they still struggle with general tensor data with continuous indexes not only in the temporal mode but also in other modes, such as spatial coordinates in climate data. Additionally, the problem of determining the tensor rank remains largely unexplored in temporal tensor models. To address these limitations, we propose \underline{G}eneralized temporal tensor decomposition with \underline{R}ank-r\underline{E}vealing laten\underline{T}-ODE (GRET). Our approach encodes continuous spatial indexes as learnable Fourier features and employs neural ODEs in latent space to learn the temporal trajectories of factors. To automatically reveal the rank of temporal tensors, we introduce a rank-revealing Gaussian-Gamma prior over the factor trajectories. We develop an efficient variational inference scheme with an analytical evidence lower bound, enabling sampling-free optimization. Through extensive experiments on both synthetic and real-world datasets, we demonstrate that GRET not only reveals the underlying ranks of temporal tensors but also significantly outperforms existing methods in prediction performance and robustness against noise.
Labels: cs.LG
Index: 531,960

1912.08124
SpaRCe: Improved Learning of Reservoir Computing Systems through Sparse Representations
"Sparse" neural networks, in which relatively few neurons or connections are active, are common in both machine learning and neuroscience. Whereas in machine learning, "sparsity" is related to a penalty term that leads to some connecting weights becoming small or zero, in biological brains, sparsity is often created when high spiking thresholds prevent neuronal activity. Here we introduce sparsity into a reservoir computing network via neuron-specific learnable thresholds of activity, allowing neurons with low thresholds to contribute to decision-making but suppressing information from neurons with high thresholds. This approach, which we term "SpaRCe", optimises the sparsity level of the reservoir without affecting the reservoir dynamics. The read-out weights and the thresholds are learned by an on-line gradient rule that minimises an error function on the outputs of the network. Threshold learning occurs by the balance of two opposing forces: reducing inter-neuronal correlations in the reservoir by deactivating redundant neurons, while increasing the activity of neurons participating in correct decisions. We test SpaRCe on classification problems and find that threshold learning improves performance compared to standard reservoir computing. SpaRCe alleviates the problem of catastrophic forgetting, a problem most evident in standard echo state networks and recurrent neural networks in general, due to increasing the number of task-specialised neurons that are included in the network decisions.
Labels: cs.LG, cs.NE
Index: 157,764

2502.14344
Towards Accurate Binary Spiking Neural Networks: Learning with Adaptive Gradient Modulation Mechanism
Binary Spiking Neural Networks (BSNNs) inherit the event-driven paradigm of SNNs, while also adopting the reduced storage burden of binarization techniques. These distinct advantages grant BSNNs lightweight and energy-efficient characteristics, rendering them ideal for deployment on resource-constrained edge devices. However, due to the binary synaptic weights and non-differentiable spike function, effectively training BSNNs remains an open question. In this paper, we conduct an in-depth analysis of the challenge for BSNN learning, namely the frequent weight sign flipping problem. To mitigate this issue, we propose an Adaptive Gradient Modulation Mechanism (AGMM), which is designed to reduce the frequency of weight sign flipping by adaptively adjusting the gradients during the learning process. The proposed AGMM can enable BSNNs to achieve faster convergence speed and higher accuracy, effectively narrowing the gap between BSNNs and their full-precision equivalents. We validate AGMM on both static and neuromorphic datasets, and results indicate that it achieves state-of-the-art results among BSNNs. This work substantially reduces storage demands and enhances SNNs' inherent energy efficiency, making them highly feasible for resource-constrained environments.
Labels: cs.CV
Index: 535,783

2412.18914
Long-Range Tasks Using Short-Context LLMs: Incremental Reasoning With Structured Memories
Long-range tasks require reasoning over long inputs. Existing solutions either need large compute budgets, training data, access to model weights, or use complex, task-specific approaches. We present PRISM, which alleviates these concerns by processing information as a stream of chunks, maintaining a structured in-context memory specified by a typed hierarchy schema. This approach demonstrates superior performance to baselines on diverse tasks while using at least 4x smaller contexts than long-context models. Moreover, PRISM is token-efficient. By producing short outputs and efficiently leveraging key-value (KV) caches, it achieves up to 54% cost reduction when compared to alternative short-context approaches. The method also scales down to tiny information chunks (e.g., 500 tokens) without increasing the number of tokens encoded or sacrificing quality. Furthermore, we show that it is possible to generate schemas to generalize our approach to new tasks with minimal effort.
Labels: cs.AI
Index: 520,633

2303.03387
CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network
The tremendous growth of social media users interacting in online conversations has led to significant growth in hate speech, affecting people from various demographics. Most of the prior works focus on detecting explicit hate speech, which is overt and leverages hateful phrases, with very little work focusing on detecting hate speech that is implicit or denotes hatred through indirect or coded language. In this paper, we present CoSyn, a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations. CoSyn introduces novel ways to encode these external contexts and employs a novel context interaction mechanism that clearly captures the interplay between them, making independent assessments of the amounts of information to be retrieved from these noisy contexts. Additionally, it carries out all these operations in the hyperbolic space to account for the scale-free dynamics of social media. We demonstrate the effectiveness of CoSyn on 6 hate speech datasets and show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
Labels: cs.SI, cs.AI, cs.LG, cs.CL
Index: 349,713

1112.4210
Approximate Decoding Approaches for Network Coded Correlated Data
This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad-hoc network topologies. The correlated data are encoded independently at sensors, and network coding is employed at the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth bottlenecks. We first show that the source data similarity can be used at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of finite coding fields, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of the approximate decoding improves when the accuracy of the source model increases, even with simple approximate decoding techniques. We provide illustrative examples of how our algorithms can be deployed in sensor networks and distributed imaging applications. In both cases, the experimental results confirm the validity of our analysis and demonstrate the benefits of our low complexity solution for the delivery of correlated data sources.
Labels: cs.IT, Other
Index: 13,512

2111.09613
Improving Transferability of Representations via Augmentation-Aware Self-Supervision
Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering. However, such invariance could be harmful to downstream tasks if they rely on the characteristics of the data augmentations, e.g., location- or color-sensitive. This is not an issue just for unsupervised learning; we found that this occurs even in supervised learning because it also learns to predict the same label for all augmented samples of an instance. To avoid such failures and obtain more generalizable representations, we suggest optimizing an auxiliary self-supervised loss, coined AugSelf, that learns the difference of augmentation parameters (e.g., cropping positions, color adjustment intensities) between two randomly augmented samples. Our intuition is that AugSelf encourages preserving augmentation-aware information in learned representations, which could be beneficial for their transferability. Furthermore, AugSelf can easily be incorporated into recent state-of-the-art representation learning methods with a negligible additional training cost. Extensive experiments demonstrate that our simple idea consistently improves the transferability of representations learned by supervised and unsupervised methods in various transfer learning scenarios. The code is available at https://github.com/hankook/AugSelf.
Labels: cs.AI, cs.LG, cs.CV
Index: 267,061

2402.01034
VIS-MAE: An Efficient Self-supervised Learning Approach on Medical Image Segmentation and Classification
Artificial Intelligence (AI) has the potential to revolutionize diagnosis and segmentation in medical imaging. However, development and clinical implementation face multiple challenges, including limited data availability, lack of generalizability, and the necessity to incorporate multi-modal data effectively. A foundation model, which is a large-scale pre-trained AI model, offers a versatile base that can be adapted to a variety of specific tasks and contexts. Here, we present VIsualization and Segmentation Masked AutoEncoder (VIS-MAE), novel model weights specifically designed for medical imaging. Specifically, VIS-MAE is trained on a dataset of 2.5 million unlabeled images from various modalities (CT, MR, PET, X-rays, and ultrasound), using self-supervised learning techniques. It is then adapted to classification and segmentation tasks using explicit labels. VIS-MAE has high label efficiency, outperforming several benchmark models in both in-domain and out-of-domain applications; it can achieve performance similar to that of other models with a reduced amount of labeled training data (50% or 80%) compared to other pre-trained weights. VIS-MAE represents a significant advancement in medical imaging AI, offering a generalizable and robust solution for improving segmentation and classification tasks while reducing the data annotation workload. The source code of this work is available at https://github.com/lzl199704/VIS-MAE.
Labels: cs.CV
Index: 425,821

1811.08214
Contingency Training
When applied to high-dimensional datasets, feature selection algorithms might still leave dozens of irrelevant variables in the dataset. Therefore, even after feature selection has been applied, classifiers must be prepared for the presence of irrelevant variables. This paper investigates a new training method called Contingency Training, which increases accuracy as well as robustness against irrelevant attributes. Contingency training is classifier independent. By subsampling and removing information from each sample, it creates a set of constraints. These constraints aid the method to automatically find proper importance weights of the dataset's features. Experiments are conducted with contingency training applied to neural networks over traditional datasets as well as datasets with additional irrelevant variables. In all of the tests, contingency training surpassed unmodified training on datasets with irrelevant variables and even outperformed it slightly when only a few or no irrelevant variables were present.
Labels: cs.LG
Index: 113,987

1909.03881
Nearly-Unsupervised Hashcode Representations for Relation Extraction
Recently, kernelized locality sensitive hashcodes have been successfully employed as representations of natural language text, especially showing high relevance to biomedical relation extraction tasks. In this paper, we propose to optimize the hashcode representations in a nearly unsupervised manner, in which we only use data points, but not their class labels, for learning. The optimized hashcode representations are then fed to a supervised classifier following the prior work. This nearly unsupervised approach allows fine-grained optimization of each hash function, which is particularly suitable for building hashcode representations generalizing from a training set to a test set. We empirically evaluate the proposed approach for biomedical relation extraction tasks, obtaining significant accuracy improvements w.r.t. state-of-the-art supervised and semi-supervised approaches.
Labels: cs.AI, cs.LG, cs.CL, cs.IT
Index: 144,632

2107.13265
Learned Optimizers for Analytic Continuation
Traditional maximum entropy and sparsity-based algorithms for analytic continuation often suffer from the ill-posed kernel matrix or demand tremendous computation time for parameter tuning. Here we propose a neural network method by convex optimization and replace the ill-posed inverse problem by a sequence of well-conditioned surrogate problems. After training, the learned optimizers are able to give a solution of high quality with low time cost and achieve higher parameter efficiency than heuristic fully-connected networks. The output can also be used as a neural default model to improve the maximum entropy for better performance. Our methods may be easily extended to other high-dimensional inverse problems via large-scale pretraining.
Labels: cs.LG
Index: 248,150

1809.05636
Geo-Text Data and Data-Driven Geospatial Semantics
Many datasets nowadays contain links between geographic locations and natural language texts. These links can be geotags, such as geotagged tweets or geotagged Wikipedia pages, in which location coordinates are explicitly attached to texts. These links can also be place mentions, such as those in news articles, travel blogs, or historical archives, in which texts are implicitly connected to the mentioned places. This kind of data is referred to as geo-text data. The availability of large amounts of geo-text data brings both challenges and opportunities. On the one hand, it is challenging to automatically process this kind of data due to the unstructured texts and the complex spatial footprints of some places. On the other hand, geo-text data offers unique research opportunities through the rich information contained in texts and the special links between texts and geography. As a result, geo-text data facilitates various studies especially those in data-driven geospatial semantics. This paper discusses geo-text data and related concepts. With a focus on data-driven research, this paper systematically reviews a large number of studies that have discovered multiple types of knowledge from geo-text data. Based on the literature review, a generalized workflow is extracted and key challenges for future work are discussed.
Labels: cs.SI, cs.CL
Index: 107,834

2001.05026
Unsupervised Learning of the Set of Local Maxima
This paper describes a new form of unsupervised learning, whose input is a set of unlabeled points that are assumed to be local maxima of an unknown value function v in an unknown subset of the vector space. Two functions are learned: (i) a set indicator c, which is a binary classifier, and (ii) a comparator function h that, given two nearby samples, predicts which sample has the higher value of the unknown function v. Loss terms are used to ensure that all training samples x are local maxima of v, according to h, and satisfy c(x)=1. Therefore, c and h provide training signals to each other: a point x' in the vicinity of x satisfies c(x')=-1 or is deemed by h to be lower in value than x. We present an algorithm, show an example where it is more efficient to use local maxima as an indicator function than to employ conventional classification, and derive a suitable generalization bound. Our experiments show that the method is able to outperform one-class classification algorithms in the task of anomaly detection and also provide an additional signal that is extracted in a completely unsupervised way.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
160,420
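As a companion to the abstract for 2001.05026 above, here is a hedged PyTorch sketch of the two coupled loss terms it describes: a set indicator c and a comparator h that supervise each other around perturbed neighbors. The hinge forms, the perturbation scale sigma, and all module names are illustrative assumptions, not the authors' code.

    # Hedged sketch of the coupled indicator/comparator losses (2001.05026).
    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
        def forward(self, x):
            return self.net(x)

    c = MLP(dim=2)   # set indicator: c(x) > 0 means "inside the set"
    h = MLP(dim=4)   # comparator on pairs: h([x, x']) > 0 means v(x) > v(x')

    def local_maxima_loss(x, sigma=0.1):
        x_prime = x + sigma * torch.randn_like(x)       # nearby perturbed point
        pair = torch.cat([x, x_prime], dim=1)
        # training samples should satisfy c(x) = 1 ...
        loss_c_pos = torch.relu(1.0 - c(x)).mean()
        # ... and each nearby x' should either be outside the set (c(x') = -1)
        # or be deemed lower in value than x by the comparator h.
        outside = torch.relu(1.0 + c(x_prime)).squeeze(1)
        lower = torch.relu(1.0 - h(pair)).squeeze(1)
        loss_nbr = torch.minimum(outside, lower).mean()  # either condition suffices
        return loss_c_pos + loss_nbr

    x = torch.randn(32, 2)
    print(local_maxima_loss(x))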
1607.08022
Instance Normalization: The Missing Ingredient for Fast Stylization
In this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to applying the latter at both training and testing time. The resulting method can be used to train high-performance architectures for real-time image generation. The code is made available on GitHub at https://github.com/DmitryUlyanov/texture_nets. The full paper can be found at arXiv:1701.02096.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
59,100
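As a companion to the abstract for 1607.08022 above, a minimal PyTorch sketch of the one-line architectural change it describes: batch normalization swapped for instance normalization, kept active at test time. The block shapes are illustrative.

    # Swap BatchNorm2d for InstanceNorm2d and keep it active at test time.
    import torch
    import torch.nn as nn

    block_bn = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        nn.BatchNorm2d(32),                  # normalizes over the whole batch
        nn.ReLU(),
    )
    block_in = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        nn.InstanceNorm2d(32, affine=True),  # normalizes each image independently
        nn.ReLU(),
    )

    x = torch.randn(4, 3, 64, 64)
    block_in.eval()  # by default InstanceNorm2d behaves the same in train and eval
    print(block_in(x).shape)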
2405.08668
Promoting AI Equity in Science: Generalized Domain Prompt Learning for Accessible VLM Research
Large-scale Vision-Language Models (VLMs) have demonstrated exceptional performance in natural vision tasks, motivating researchers across domains to explore domain-specific VLMs. However, the construction of powerful domain-specific VLMs demands vast amounts of annotated data, substantial electrical energy, and computing resources, primarily accessible to industry, yet hindering VLM research in academia. To address this challenge and foster sustainable and equitable VLM research, we present the Generalized Domain Prompt Learning (GDPL) framework. GDPL facilitates the transfer of VLMs' robust recognition capabilities from natural vision to specialized domains, without the need for extensive data or resources. By leveraging small-scale domain-specific foundation models and minimal prompt samples, GDPL empowers the language branch with domain knowledge through quaternion networks, uncovering cross-modal relationships between domain-specific vision features and natural vision-based contextual embeddings. Simultaneously, GDPL guides the vision branch into specific domains through hierarchical propagation of generated vision prompt features, grounded in well-matched vision-language relations. Furthermore, to fully harness the domain adaptation potential of VLMs, we introduce a novel low-rank adaptation approach. Extensive experiments across diverse domains like remote sensing, medical imaging, geology, Synthetic Aperture Radar, and fluid dynamics, validate the efficacy of GDPL, demonstrating its ability to achieve state-of-the-art domain recognition performance in a prompt learning paradigm. Our framework paves the way for sustainable and inclusive VLM research, transcending the barriers between academia and industry.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
454,175
2006.08575
The role of optimization geometry in single neuron learning
Recent numerical experiments have demonstrated that the choice of optimization geometry used during training can impact generalization performance when learning expressive nonlinear model classes such as deep neural networks. These observations have important implications for modern deep learning but remain poorly understood due to the difficulty of the associated nonconvex optimization problem. Towards an understanding of this phenomenon, we analyze a family of pseudogradient methods for learning generalized linear models under the square loss - a simplified problem containing both nonlinearity in the model parameters and nonconvexity of the optimization which admits a single neuron as a special case. We prove non-asymptotic bounds on the generalization error that sharply characterize how the interplay between the optimization geometry and the feature space geometry sets the out-of-sample performance of the learned model. Experimentally, selecting the optimization geometry as suggested by our theory leads to improved performance in generalized linear model estimation problems such as nonlinear and nonconvex variants of sparse vector recovery and low-rank matrix sensing.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
182,234
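As a companion to the abstract for 2006.08575 above, a hedged numpy sketch of one member of the pseudogradient family it analyzes: a GLM-tron-style update for a single neuron under the square loss, in which the chain-rule factor from the activation is dropped. The activation, step size, and data are illustrative, and the optimization-geometry aspects studied in the paper are omitted.

    # Pseudogradient update for a single neuron f(x) = phi(w.x), square loss.
    import numpy as np

    rng = np.random.default_rng(0)
    phi = np.tanh                           # monotone activation
    n, d = 512, 10
    X = rng.normal(size=(n, d))
    w_star = rng.normal(size=d)
    y = phi(X @ w_star)                     # realizable targets

    w, eta = np.zeros(d), 0.5
    for _ in range(200):
        residual = phi(X @ w) - y           # prediction error
        # pseudogradient: the usual chain-rule factor phi'(Xw) is dropped
        w -= eta * (X.T @ residual) / n

    print(np.linalg.norm(w - w_star))       # error shrinks over iterations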
2301.03728
Scaling Laws for Generative Mixed-Modal Language Models
Generative language models define distributions over sequences of tokens that can represent essentially any combination of data modalities (e.g., any permutation of image tokens from VQ-VAEs, speech tokens from HuBERT, BPE tokens for language or code, and so on). To better understand the scaling properties of such mixed-modal models, we conducted over 250 experiments using seven different modalities and model sizes ranging from 8 million to 30 billion, trained on 5-100 billion tokens. We report new mixed-modal scaling laws that unify the contributions of individual modalities and the interactions between them. Specifically, we explicitly model the optimal synergy and competition due to data and model size as an additive term to previous uni-modal scaling laws. We also find four empirical phenomena observed during the training, such as emergent coordinate-ascent style training that naturally alternates between modalities, guidelines for selecting critical hyper-parameters, and connections between mixed-modal competition and training stability. Finally, we test our scaling law by training a 30B speech-text model, which significantly outperforms the corresponding unimodal models. Overall, our research provides valuable insights into the design and training of mixed-modal generative models, an important new class of unified models that have unique distributional properties.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
339,868
2303.01291
Robust, High-Precision GNSS Carrier-Phase Positioning with Visual-Inertial Fusion
Robust, high-precision global localization is fundamental to a wide range of outdoor robotics applications. Conventional fusion methods use low-accuracy pseudorange-based GNSS measurements ($\gg 5$ m errors) and can only yield a coarse registration to the global earth-centered-earth-fixed (ECEF) frame. In this paper, we leverage high-precision GNSS carrier-phase positioning and aid it with local visual-inertial odometry (VIO) tracking using an extended Kalman filter (EKF) framework that better resolves the integer ambiguity associated with the GNSS carrier phase, achieving centimeter-level accuracy in the ECEF frame. We also propose an algorithm for GNSS-antenna-to-IMU extrinsics calibration that accurately aligns VIO to the ECEF frame. Together, our system achieves robust global positioning, demonstrated by real-world hardware experiments in severely occluded urban canyons, and outperforms the state-of-the-art RTKLIB by a significant margin in terms of integer-ambiguity fix rate and positioning RMSE.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
348,915
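As a companion to the abstract for 2303.01291 above, a hedged toy of the fusion pattern it describes: propagate the state with local VIO odometry and correct it with a carrier-phase position fix inside a Kalman filter. Real RTK integer-ambiguity resolution, the extrinsics calibration, and the full EKF state are all omitted; the noise levels are illustrative assumptions.

    # Toy Kalman fusion: VIO delta for prediction, carrier-phase fix for update.
    import numpy as np

    x = np.zeros(3)          # position in a simplified global frame
    P = np.eye(3) * 1.0      # state covariance
    Q = np.eye(3) * 0.01     # VIO drift added per step
    R = np.eye(3) * 0.0004   # carrier-phase fix: ~2 cm std per axis (assumed)

    def predict(x, P, vio_delta):
        return x + vio_delta, P + Q

    def update(x, P, z):
        K = P @ np.linalg.inv(P + R)   # Kalman gain, measurement model H = I
        return x + K @ (z - x), (np.eye(3) - K) @ P

    x, P = predict(x, P, vio_delta=np.array([0.1, 0.0, 0.0]))
    x, P = update(x, P, z=np.array([0.102, 0.001, -0.001]))
    print(x)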
2005.14407
Two-Hop Connectivity to the Roadside in a VANET Under the Random Connection Model
In this paper, we compute the expected number of vehicles with at least one two-hop path to a fixed roadside unit (RSU) in a multi-hop, one-dimensional vehicular ad hoc network (VANET) where other cars can act as relays. The pairwise channels experience Rayleigh fading under the random connection model, so each link exists with a probability given by a function of the mutual distance between the cars, or between a car and the RSU. We derive exact expressions for the expected number of cars with a two-hop connection to the RSU when the car density $\rho$ tends to zero and infinity, and determine its behaviour using an infinite oscillating power series in $\rho$, which is accurate for all regimes of traffic density. We also corroborate those findings with a realistic scenario, using snapshots of actual traffic data. Finally, a normal approximation is discussed for the probability mass function of the number of cars with a two-hop connection to the RSU.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
179,262
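As a companion to the abstract for 2005.14407 above, a Monte Carlo sketch of the quantity it studies: the expected number of cars with a connection to an RSU at the origin in at most two hops, on a one-dimensional Poisson road under a random connection model. The connection function exp(-(r/r0)^2) and all parameters are illustrative choices, not the paper's.

    # Monte Carlo estimate of the expected number of two-hop-connected cars.
    import numpy as np

    rng = np.random.default_rng(1)
    rho, L, r0, trials = 0.5, 30.0, 2.0, 2000

    def connected(r):
        # random connection model: link exists w.p. H(r) = exp(-(r/r0)^2)
        return rng.random(r.shape) < np.exp(-(r / r0) ** 2)

    counts = []
    for _ in range(trials):
        n = rng.poisson(rho * 2 * L)                 # cars on the road segment
        pos = rng.uniform(-L, L, size=n)
        direct = connected(np.abs(pos))              # car <-> RSU links
        reachable = np.zeros(n, dtype=bool)
        for i in range(n):
            if direct[i]:                            # one hop suffices
                reachable[i] = True
                continue
            relay = connected(np.abs(pos - pos[i]))  # car i <-> other cars
            reachable[i] = np.any(relay & direct)    # relay must reach the RSU
        counts.append(reachable.sum())

    print(np.mean(counts))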
1906.04908
SPoC: Search-based Pseudocode to Code
We consider the task of mapping pseudocode to long programs that are functionally correct. Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation. However, without proper credit assignment to localize the sources of program failures, it is difficult to guide search toward more promising programs. We propose to perform credit assignment based on signals from compilation errors, which constitute 88.7% of program failures. Concretely, we treat the translation of each pseudocode line as a discrete portion of the program, and whenever a synthesized program fails to compile, an error localization method tries to identify the portion of the program responsible for the failure. We then focus search over alternative translations of the pseudocode for those portions. For evaluation, we collected the SPoC dataset (Search-based Pseudocode to Code) containing 18,356 programs with human-authored pseudocode and test cases. Under a budget of 100 program compilations, performing search improves the synthesis success rate over using the top-one translation of the pseudocode from 25.6% to 44.7%.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
134,874
1408.1284
Efficient Interpolation-Based Decoding of Interleaved Subspace and Gabidulin Codes
An interpolation-based decoding scheme for interleaved subspace codes is presented. The scheme can be used as a (not necessarily polynomial-time) list decoder as well as a probabilistic unique decoder. Both interpretations allow decoding of interleaved subspace codes beyond half the minimum subspace distance. Further, an efficient interpolation procedure for the required linearized multivariate polynomials is presented, and a computationally- and memory-efficient root-finding algorithm for the probabilistic unique decoder is proposed. These two efficient algorithms can also be applied to accelerate the decoding of interleaved Gabidulin codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
35,152
2311.05580
Inference for Probabilistic Dependency Graphs
Probabilistic dependency graphs (PDGs) are a flexible class of probabilistic graphical models, subsuming Bayesian Networks and Factor Graphs. They can also capture inconsistent beliefs, and provide a way of measuring the degree of this inconsistency. We present the first tractable inference algorithm for PDGs with discrete variables, making the asymptotic complexity of PDG inference similar to that of the graphical models they generalize. The key components are: (1) the observation that, in many cases, the distribution a PDG specifies can be formulated as a convex optimization problem (with exponential cone constraints), (2) a construction that allows us to express these problems compactly for PDGs of bounded treewidth, (3) contributions to the theory of PDGs that justify the construction, and (4) an appeal to interior point methods that can solve such problems in polynomial time. We verify the correctness and complexity of our approach, and provide an implementation of it. We then evaluate our implementation, and demonstrate that it outperforms baseline approaches. Our code is available at http://github.com/orichardson/pdg-infer-uai.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
406,626
1805.01046
BlazeIt: Optimizing Declarative Aggregation and Limit Queries for Neural Network-Based Video Analytics
Recent advances in neural networks (NNs) have enabled automatic querying of large volumes of video data with high accuracy. While these deep NNs can produce accurate annotations of an object's position and type in video, they are computationally expensive and require complex, imperative deployment code to answer queries. Prior work uses approximate filtering to reduce the cost of video analytics, but does not handle two important classes of queries, aggregation and limit queries; moreover, these approaches still require complex code to deploy. To address the computational and usability challenges of querying video at scale, we introduce BlazeIt, a system that optimizes queries of spatiotemporal information of objects in video. BlazeIt accepts queries via FrameQL, a declarative extension of SQL for video analytics that enables video-specific query optimization. We introduce two new query optimization techniques in BlazeIt that are not supported by prior work. First, we develop methods of using NNs as control variates to quickly answer approximate aggregation queries with error bounds. Second, we present a novel search algorithm for cardinality-limited video queries. Through these optimizations, BlazeIt can deliver up to 83x speedups over the recent literature on video processing.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
96,568
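As a companion to the abstract for 1805.01046 above, a sketch of the control-variate idea behind its aggregation queries: a cheap proxy model, evaluated on every frame, reduces the variance of a mean estimated from a small sample of expensive oracle labels. The data here is synthetic and the proxy is a stand-in, not BlazeIt's actual pipeline.

    # Control-variate estimate of an aggregate (e.g., mean cars per frame).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    truth = rng.poisson(2.0, size=n).astype(float)   # oracle label per frame
    proxy = truth + rng.normal(0, 1.0, size=n)       # cheap, correlated proxy

    idx = rng.choice(n, size=500, replace=False)     # frames we can afford to label
    y, p = truth[idx], proxy[idx]
    c = np.cov(y, p)[0, 1] / np.var(p)               # near-optimal CV coefficient
    # proxy.mean() is exact because the proxy runs on every frame cheaply
    estimate = y.mean() - c * (p.mean() - proxy.mean())

    print(estimate, truth.mean())                    # CV estimate vs ground truth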
2406.09551
Embedding machine-learnt sub-grid variability improves climate model biases
The under-representation of cloud formation is a long-standing bias associated with climate simulations. Parameterisation schemes are required to capture cloud processes within current climate models but have known biases. We overcome these biases by embedding a Multi-Output Gaussian Process (MOGP) trained on high resolution Unified Model simulations to represent the variability of temperature and specific humidity within a climate model. A trained MOGP model is coupled in-situ with a simplified Atmospheric General Circulation Model named SPEEDY. The temperature and specific humidity profiles of SPEEDY are perturbed at fixed intervals according to the variability predicted from the MOGP. Ten-year predictions are generated for both control and ML-hybrid models. The hybrid model reduces the global precipitation bias by 18\% and over the tropics by 22\%. To further understand the drivers of these improvements, physical quantities of interest are explored, such as the distribution of lifted index values and the alteration of the Hadley cell. The control and hybrid set-ups are also run in a plus 4K sea-surface temperature experiment to explore the effects of the approach on patterns relating to cloud cover and precipitation in a warmed climate setting.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
463,981
2305.05094
Interactive Concept Learning for Uncovering Latent Themes in Large Text Collections
Experts across diverse disciplines are often interested in making sense of large text collections. Traditionally, this challenge is approached either by noisy unsupervised techniques such as topic models, or by following a manual theme discovery process. In this paper, we expand the definition of a theme to account for more than just a word distribution, and include generalized concepts deemed relevant by domain experts. Then, we propose an interactive framework that receives and encodes expert feedback at different levels of abstraction. Our framework strikes a balance between automation and manual coding, allowing experts to maintain control of their study while reducing the manual effort required.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
362,999
1505.04585
Global Variational Method for Fingerprint Segmentation by Three-part Decomposition
Verifying an identity claim by fingerprint recognition is a commonplace experience for millions of people in their daily life, e.g. for unlocking a tablet computer or smartphone. The first processing step after fingerprint image acquisition is segmentation, i.e. dividing a fingerprint image into a foreground region which contains the relevant features for the comparison algorithm, and a background region. We propose a novel segmentation method by global three-part decomposition (G3PD). Based on global variational analysis, the G3PD method decomposes a fingerprint image into cartoon, texture and noise parts. After decomposition, the foreground region is obtained from the non-zero coefficients in the texture image using morphological processing. The segmentation performance of the G3PD method is compared to five state-of-the-art methods on a benchmark which comprises manually marked ground truth segmentation for 10560 images. Performance evaluations show that the G3PD method consistently outperforms existing methods in terms of segmentation accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
43,204
2304.04016
Arithmetic Intensity Balancing Convolution for Hardware-aware Efficient Block Design
As deep learning advances, edge devices and lightweight neural networks are becoming more important. To reduce latency on an AI accelerator, it is essential not only to reduce FLOPs but also to enhance hardware performance. We propose an arithmetic intensity balancing convolution (ABConv) to address the issue of the overall intensity being limited by the small weight arithmetic intensity of convolutions with a small spatial size. ABConv increases the maximum bound of overall arithmetic intensity and significantly reduces latency, without sacrificing accuracy. We tested the latency and hardware performance of ABConv on the Arm Ethos-U65 NPU in various configurations and used it to replace parts of MobileNetV1 and ResNet50 in image classification on the CIFAR100 dataset.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
357,037
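As a companion to the abstract for 2304.04016 above, a back-of-envelope calculation of the arithmetic intensity (FLOPs per byte) of a convolution layer, the quantity ABConv balances. The byte accounting is the simplest possible model and ignores tiling and on-chip reuse, so the numbers are illustrative only.

    # Arithmetic intensity of a conv layer under a naive memory model.
    def conv_arithmetic_intensity(h, w, cin, cout, k, bytes_per_elem=1):
        flops = 2 * h * w * cout * cin * k * k            # each MAC = 2 FLOPs
        act_in = h * w * cin * bytes_per_elem
        act_out = h * w * cout * bytes_per_elem
        weights = cout * cin * k * k * bytes_per_elem
        return flops / (act_in + act_out + weights)

    # A late-stage layer with small spatial size reuses each weight few times,
    # hence the small "weight arithmetic intensity" the abstract refers to:
    print(conv_arithmetic_intensity(7, 7, 512, 512, 3))    # small spatial size
    print(conv_arithmetic_intensity(56, 56, 64, 64, 3))    # large spatial size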
1107.1736
High-dimensional structure estimation in Ising models: Local separation criterion
We consider the problem of high-dimensional Ising (graphical) model selection. We propose a simple algorithm for structure estimation based on the thresholding of the empirical conditional variation distances. We introduce a novel criterion for tractable graph families, where this method is efficient, based on the presence of sparse local separators between node pairs in the underlying graph. For such graphs, the proposed algorithm has a sample complexity of $n=\Omega(J_{\min}^{-2}\log p)$, where $p$ is the number of variables, and $J_{\min}$ is the minimum (absolute) edge potential in the model. We also establish nonasymptotic necessary and sufficient conditions for structure estimation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
11,215
2205.07320
Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective
The lottery ticket hypothesis (LTH) has attracted attention because it can explain why over-parameterized models often show high generalization ability. It is known that when we use iterative magnitude pruning (IMP), an algorithm that finds sparse networks with high generalization ability, called winning tickets, which can be trained independently from the initial weights, a large initial learning rate does not work well in deep neural networks such as ResNet. However, since a large initial learning rate generally helps the optimizer to converge to flatter minima, we hypothesize that the winning tickets have relatively sharp minima, which is considered a disadvantage in terms of generalization ability. In this paper, we confirm this hypothesis and show that PAC-Bayesian theory can provide an explicit understanding of the relationship between LTH and generalization behavior. On the basis of our experimental findings that flatness is useful for improving accuracy and robustness to label noise, and that the distance from the initial weights is deeply involved in winning tickets, we offer a PAC-Bayes bound using a spike-and-slab distribution to analyze winning tickets. Finally, we revisit existing algorithms for finding winning tickets from a PAC-Bayesian perspective and provide new insights into these methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
296,560
2303.09474
Gradient flow on extensive-rank positive semi-definite matrix denoising
In this work, we present a new approach to analyze the gradient flow for a positive semi-definite matrix denoising problem in an extensive-rank and high-dimensional regime. We use recent linear pencil techniques of random matrix theory to derive fixed point equations which track the complete time evolution of the matrix-mean-square-error of the problem. The predictions of the resulting fixed point equations are validated by numerical experiments. In this short note we briefly illustrate a few predictions of our formalism by way of examples, and in particular we uncover continuous phase transitions in the extensive-rank and high-dimensional regime, which connect to the classical phase transitions of the low-rank problem in the appropriate limit. The formalism has much wider applicability than shown in this communication.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
352,060
2209.14155
Automatic Analysis of Available Source Code of Top Artificial Intelligence Conference Papers
Source code is essential for researchers to reproduce the methods and replicate the results of artificial intelligence (AI) papers. Some organizations and researchers manually collect AI papers with available source code to contribute to the AI community. However, manual collection is a labor-intensive and time-consuming task. To address this issue, we propose a method to automatically identify papers with available source code and extract their source code repository URLs. With this method, we find that 20.5% of regular papers of 10 top AI conferences published from 2010 to 2019 are identified as papers with available source code and that 8.1% of these source code repositories are no longer accessible. We also create the XMU NLP Lab README Dataset, the largest dataset of labeled README files for source code document research. Through this dataset, we have discovered that quite a few README files have no installation instructions or usage tutorials provided. Further, a large-scale comprehensive statistical analysis is made for a general picture of the source code of AI conference papers. The proposed solution can also go beyond AI conference papers to analyze other scientific papers from both journals and conferences to shed light on more domains.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
true
320,159
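As a companion to the abstract for 2209.14155 above, a tiny sketch of its URL-extraction step: pull candidate source-code repository links out of a paper's text and optionally check whether they are still accessible. The regex and the accessibility check are simplifications of what a production pipeline would need.

    # Extract candidate repository URLs from paper text.
    import re
    import urllib.request

    REPO_RE = re.compile(
        r"https?://(?:github\.com|gitlab\.com|bitbucket\.org)/[\w.-]+/[\w.-]+"
    )

    def extract_repos(paper_text):
        return sorted(set(REPO_RE.findall(paper_text)))

    def is_accessible(url, timeout=5):
        # performs a real network request; used to flag dead repositories
        try:
            return urllib.request.urlopen(url, timeout=timeout).status == 200
        except Exception:
            return False

    text = "Our code is available at https://github.com/DmitryUlyanov/texture_nets."
    print(extract_repos(text))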
2410.06553
DCP: Learning Accelerator Dataflow for Neural Network via Propagation
Deep neural network (DNN) hardware (HW) accelerators have achieved great success in improving DNNs' performance and efficiency. One key reason is dataflow in executing a DNN layer, including on-chip data partitioning, computation parallelism, and scheduling policy, which have large impacts on latency and energy consumption. Unlike prior works that required considerable efforts from HW engineers to design suitable dataflows for different DNNs, this work proposes an efficient data-centric approach, named Dataflow Code Propagation (DCP), to automatically find the optimal dataflow for DNN layers in seconds without human effort. It has several attractive benefits that prior arts do not have. (i) We translate the HW dataflow configuration into a code representation in a unified dataflow coding space, which can be optimized by backpropagating gradients given a DNN layer or network. (ii) DCP learns a neural predictor to efficiently update the dataflow codes towards the desired gradient directions to minimize various optimization objectives e.g., latency and energy. (iii) It can be easily generalized to unseen HW configurations in a zero-shot or few-shot learning manner. For example, without using additional training data, DCP surpasses the GAMMA method that performs a full search using thousands of samples. Extensive experiments on several representative models such as MobileNet, ResNet, and ViT show that DCP outperforms its counterparts in various settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
496,259
2103.10257
Domain Generalization using Ensemble Learning
Domain generalization is a sub-field of transfer learning that aims at bridging the gap between two different domains in the absence of any knowledge about the target domain. Our approach tackles the problem of a model's weak generalization when it is trained on a single source domain. From this perspective, we build an ensemble model on top of base deep learning models trained on a single source to enhance the generalization of their collective prediction. The results achieved thus far have demonstrated promising improvements of the ensemble over any of its base learners.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
225,402
2211.12494
On the Transferability of Visual Features in Generalized Zero-Shot Learning
Generalized Zero-Shot Learning (GZSL) aims to train a classifier that can generalize to unseen classes, using a set of attributes as auxiliary information, and the visual features extracted from a pre-trained convolutional neural network. While recent GZSL methods have explored various techniques to leverage the capacity of these features, there has been an extensive growth of representation learning techniques that remain under-explored. In this work, we investigate the utility of different GZSL methods when using different feature extractors, and examine how these models' pre-training objectives, datasets, and architecture design affect their feature representation ability. Our results indicate that 1) methods using generative components for GZSL provide more advantages when using recent feature extractors; 2) feature extractors pre-trained using self-supervised learning objectives and knowledge distillation provide better feature representations, increasing up to 15% performance when used with recent GZSL techniques; 3) specific feature extractors pre-trained with larger datasets do not necessarily boost the performance of GZSL methods. In addition, we investigate how GZSL methods fare against CLIP, a more recent multi-modal pre-trained model with strong zero-shot performance. We found that GZSL tasks still benefit from generative-based GZSL methods along with CLIP's internet-scale pre-training to achieve state-of-the-art performance in fine-grained datasets. We release a modular framework for analyzing representation learning issues in GZSL here: https://github.com/uvavision/TV-GZSL
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
332,128
2210.04843
Multi-Modal Fusion by Meta-Initialization
When experience is scarce, models may have insufficient information to adapt to a new task. In this case, auxiliary information - such as a textual description of the task - can enable improved task inference and adaptation. In this work, we propose an extension to the Model-Agnostic Meta-Learning algorithm (MAML), which allows the model to adapt using auxiliary information as well as task experience. Our method, Fusion by Meta-Initialization (FuMI), conditions the model initialization on auxiliary information using a hypernetwork, rather than learning a single, task-agnostic initialization. Furthermore, motivated by the shortcomings of existing multi-modal few-shot learning benchmarks, we constructed iNat-Anim - a large-scale image classification dataset with succinct and visually pertinent textual class descriptions. On iNat-Anim, FuMI significantly outperforms uni-modal baselines such as MAML in the few-shot regime. The code for this project and a dataset exploration tool for iNat-Anim are publicly available at https://github.com/s-a-malik/multi-few .
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
322,599
2306.17046
Spiking Denoising Diffusion Probabilistic Models
Spiking neural networks (SNNs) have ultra-low energy consumption and high biological plausibility due to their binary and bio-driven nature compared with artificial neural networks (ANNs). While previous research has primarily focused on enhancing the performance of SNNs in classification tasks, the generative potential of SNNs remains relatively unexplored. In our paper, we put forward Spiking Denoising Diffusion Probabilistic Models (SDDPM), a new class of SNN-based generative models that achieve high sample quality. To fully exploit the energy efficiency of SNNs, we propose a purely Spiking U-Net architecture, which achieves comparable performance to its ANN counterpart using only 4 time steps, resulting in significantly reduced energy consumption. Extensive experimental results reveal that our approach achieves state-of-the-art on the generative tasks and substantially outperforms other SNN-based generative models, achieving up to 12x and 6x improvement on the CIFAR-10 and the CelebA datasets, respectively. Moreover, we propose a threshold-guided strategy that can further improve the performances by 2.69% in a training-free manner. The SDDPM symbolizes a significant advancement in the field of SNN generation, injecting new perspectives and potential avenues of exploration. Our code is available at https://github.com/AndyCao1125/SDDPM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
376,573
1309.7690
A Hybrid Monte Carlo Ant Colony Optimization Approach for Protein Structure Prediction in the HP Model
The hydrophobic-polar (HP) model has been widely studied in the field of protein structure prediction (PSP) both for theoretical purposes and as a benchmark for new optimization strategies. In this work we introduce a new heuristic based on Ant Colony Optimization (ACO) and Markov Chain Monte Carlo (MCMC), which we call Hybrid Monte Carlo Ant Colony Optimization (HMCACO). We describe this method and compare results obtained on well-known HP instances in the 3-dimensional cubic lattice to those obtained with standard ACO and Simulated Annealing (SA). All methods were implemented using an unconstrained neighborhood and a modified objective function to prevent the creation of overlapping walks. Results show that our method performs better than the other heuristics on all benchmark instances.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
27,406
1201.3088
An Adaptive Modulation Scheme for Two-user Fading MAC with Quantized Fade State Feedback
With no CSI at the users, transmission over the two-user Gaussian Multiple Access Channel with fading and finite constellation at the input is not efficient, because error rates will be high when the channel conditions are poor. However, perfect CSI at the users is an unrealistic assumption in the wireless scenario, as it would involve massive feedback overheads. In this paper we propose a scheme which uses only quantized knowledge of CSI at the transmitters, with nominal overhead. The users rotate their constellation without varying their transmit power to adapt to the existing channel conditions, in order to meet a certain pre-determined minimum Euclidean distance requirement in the equivalent constellation at the destination. The optimal modulation scheme is described for the case when both users use symmetric M-PSK constellations at the input, where $ M=2^\lambda $, $ \lambda $ being a positive integer. The strategy is illustrated by considering examples where both users use QPSK or 8-PSK signal sets at the input. It is shown that the proposed scheme has better throughput and error performance compared to the conventional non-adaptive scheme, at the cost of a feedback overhead of just $\lceil \log_2(\frac{M^2}{8}-\frac{M}{4}+2)\rceil + 1 $ bits, for the M-PSK case.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
13,822
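As a companion to the abstract for 1201.3088 above, a two-line evaluation of the feedback-overhead formula it quotes, ceil(log2(M^2/8 - M/4 + 2)) + 1 bits, for the two constellations it discusses.

    # Feedback overhead of the quantized-CSI scheme for M-PSK inputs.
    from math import ceil, log2

    def feedback_bits(M):
        return ceil(log2(M * M / 8 - M / 4 + 2)) + 1

    for M in (4, 8):    # QPSK and 8-PSK
        print(f"M = {M}: {feedback_bits(M)} feedback bits")   # 3 and 4 bits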
2101.01667
Incremental learning with online SVMs on LiDAR sensory data
Pipeline transmission systems are a long-established and still growing part of the energy industry, and the cost of in-pipe inspection for service maintenance draws considerable attention. Conventional inspection methods (e.g., magnetic flux leakage and eddy current) either install stationary sensors at each pipe milestone or carry sensors through the inside of the pipe, which makes the maintenance process very difficult due to the massive number of sensors involved. One solution is to apply machine learning techniques to the analysis of the sensory data. Although SVMs can address this with the kernel trick, computing the kernel also depends on the data size, and the cost grows quickly once the number of support vectors becomes very large. LiDAR in particular spins at an extremely rapid rate, so the flow of input data can eventually expand massively. In our proposed approach, each sample is learned as it arrives and the supporting kernel is computed simultaneously. This research presents an incremental learning approach with online support vector machines (SVMs) aimed specifically at LiDAR sensory data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
214,418
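As a companion to the abstract for 2101.01667 above, a hedged stand-in for its online-SVM idea: scikit-learn's SGDClassifier with hinge loss learns a linear SVM one mini-batch at a time via partial_fit, so a streaming LiDAR feed never has to be held in memory at once. Note this is a linear stand-in for the kernel online SVMs the abstract discusses, and the features are synthetic.

    # Incremental linear-SVM training on a simulated stream of LiDAR features.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    clf = SGDClassifier(loss="hinge", alpha=1e-4)

    for step in range(100):                    # pretend each batch is one LiDAR spin
        X = rng.normal(size=(64, 16))          # 16 hand-crafted features per return
        y = (X[:, 0] + 0.1 * rng.normal(size=64) > 0).astype(int)
        # classes must be declared on the first partial_fit call only
        clf.partial_fit(X, y, classes=[0, 1] if step == 0 else None)

    X_test = rng.normal(size=(256, 16))
    y_test = (X_test[:, 0] > 0).astype(int)
    print(clf.score(X_test, y_test))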
1912.06874
The Liar's Walk: Detecting Deception with Gait and Gesture
We present a data-driven deep neural algorithm for detecting deceptive walking behavior using nonverbal cues like gaits and gestures. We conducted an elaborate user study, where we recorded many participants performing tasks involving deceptive walking. We extract the participants' walking gaits as series of 3D poses. We annotate various gestures performed by participants during their tasks. Based on the gait and gesture data, we train an LSTM-based deep neural network to obtain deep features. Finally, we use a combination of psychology-based gait, gesture, and deep features to detect deceptive walking with an accuracy of 88.41%. This is an improvement of 10.6% over handcrafted gait and gesture features and an improvement of 4.7% and 9.2% over classifiers based on the state-of-the-art emotion and action classification algorithms, respectively. Additionally, we present a novel dataset, DeceptiveWalk, that contains gaits and gestures with their associated deception labels. To the best of our knowledge, ours is the first algorithm to detect deceptive behavior using non-verbal cues of gait and gesture.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
157,450
2401.14489
The Case for Co-Designing Model Architectures with Hardware
While GPUs are responsible for training the vast majority of state-of-the-art deep learning models, the implications of their architecture are often overlooked when designing new deep learning (DL) models. As a consequence, modifying a DL model to be more amenable to the target hardware can significantly improve the runtime performance of DL training and inference. In this paper, we provide a set of guidelines for users to maximize the runtime performance of their transformer models. These guidelines have been created by carefully considering the impact of various model hyperparameters controlling model shape on the efficiency of the underlying computation kernels executed on the GPU. We find the throughput of models with efficient model shapes is up to 39\% higher while preserving accuracy compared to models with a similar number of parameters but with unoptimized shapes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
424,109
2407.00147
Predicting Elevated Risk of Hospitalization Following Emergency Department Discharges
Hospitalizations that follow closely on the heels of one or more emergency department visits are often symptoms of missed opportunities to form a proper diagnosis. These diagnostic errors imply a failure to recognize the need for hospitalization and deliver appropriate care, and thus also bear important connotations for patient safety. In this paper, we show how data mining techniques can be applied to a large existing hospitalization data set to learn useful models that predict these upcoming hospitalizations with high accuracy. Specifically, we use an ensemble of logistic regression, naïve Bayes, and association rule classifiers to successfully predict hospitalization within 3, 7 and 14 days of an emergency department discharge. Aside from high accuracy, one of the advantages of the techniques proposed here is that the resulting classifier is easily inspected and interpreted by humans, so the learned rules can be readily operationalized. These rules can then be easily distributed and applied directly by physicians in emergency department settings to predict the risk of early admission prior to discharging their emergency department patients.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
468,752
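As a companion to the abstract for 2407.00147 above, a minimal sketch of the ensemble it describes. A shallow decision tree stands in for the association-rule classifier, which scikit-learn does not provide, and the data here is synthetic rather than real hospitalization records.

    # Soft-voting ensemble of logistic regression, naive Bayes, and a rule-like tree.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", GaussianNB()),
            ("rules", DecisionTreeClassifier(max_depth=4)),  # rule-like stand-in
        ],
        voting="soft",   # average predicted probabilities across members
    )
    ensemble.fit(X[:1500], y[:1500])
    print(ensemble.score(X[1500:], y[1500:]))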
1310.6876
Application of Fourier and Wavelet Transform for analysing 300 years Sunspot numbers to Explain the Solar Cycles
In this paper the Fourier Transform and the Wavelet Transform are applied to the most recent 300 years of sunspot numbers to explain the solar cycles. A parallel study of Fourier and wavelet analysis is carried out, and we observe that wavelet analysis gives the better result for sunspot-number analysis. With this tool we are able to show the various minima and maxima of the recent solar cycles. The exact periodicity, and other possible periodicities in the cyclic phenomenon of sunspot activity, are determined.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
27,997
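As a companion to the abstract for 1310.6876 above, a sketch of its Fourier half: a periodogram of a monthly sunspot-like series recovers the roughly 11-year cycle. The series here is synthetic for self-containedness; real data would come from an archive such as SILSO.

    # Periodogram of a synthetic 300-year monthly sunspot series.
    import numpy as np

    months = np.arange(300 * 12)                    # 300 years, monthly samples
    cycle = 11.0 * 12                               # ~11-year period in months
    noise = np.random.default_rng(0).normal(0, 10, months.size)
    x = 80 + 60 * np.sin(2 * np.pi * months / cycle) + noise

    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2   # periodogram (DC removed)
    freqs = np.fft.rfftfreq(months.size, d=1.0)     # cycles per month
    peak = freqs[np.argmax(spec)]
    print(f"dominant period: {1 / peak / 12:.1f} years")   # ~11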
2106.04455
Adaptive transfer learning
In transfer learning, we wish to make inference about a target population when we have access to data both from the distribution itself, and from a different but related source distribution. We introduce a flexible framework for transfer learning in the context of binary classification, allowing for covariate-dependent relationships between the source and target distributions that are not required to preserve the Bayes decision boundary. Our main contributions are to derive the minimax optimal rates of convergence (up to poly-logarithmic factors) in this problem, and show that the optimal rate can be achieved by an algorithm that adapts to key aspects of the unknown transfer relationship, as well as the smoothness and tail parameters of our distributional classes. This optimal rate turns out to have several regimes, depending on the interplay between the relative sample sizes and the strength of the transfer relationship, and our algorithm achieves optimality by careful, decision tree-based calibration of local nearest-neighbour procedures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
239,727
2212.03302
Barriers to implementation of blockchain technology in agricultural supply chain
Emerging technologies, such as Blockchain and the Internet of Things (IoT), have had an immense role in propelling the agricultural industry towards the fourth agricultural revolution. Blockchain and IoT can greatly improve the traceability, efficiency, and safety of food along the supply chain. Despite these contributions, there are many barriers to widespread adoption of this technology, including a deficit in many workers' ability to understand and effectively use it, as well as a lack of infrastructure to educate and support these workers. This paper discusses the barriers to adoption of blockchain and IoT technology in the agricultural supply chain. The authors analyse the impact of Blockchain and IoT in the food supply chain and methods by which governments and corporations can become more adaptable. Through the reduction in imports and protection of demand for local farmers, developing economies can create local sustainable agricultural ecosystems. Furthermore, the use of both public and private Research and Development can greatly contribute to the global knowledge on new technologies and improve many aspects of the food supply chain. In conclusion, both governments and corporations have a big role to play in the increased implementation of progressive technologies and the overall improvement of the food supply chain along with it.
false
false
false
false
false
false
false
false
false
true
false
false
true
true
false
false
false
false
335,068
2105.04387
Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey
Dialogue systems are a popular natural language processing (NLP) task because they are promising in real-life applications, and a complicated one, since many NLP tasks deserving study are involved. As a result, a multitude of novel works on this task have been carried out, most of them deep learning based due to its outstanding performance. In this survey, we mainly focus on deep learning based dialogue systems. We comprehensively review state-of-the-art research outcomes in dialogue systems and analyze them from two angles: model type and system type. Specifically, from the angle of model type, we discuss the principles, characteristics, and applications of different models that are widely used in dialogue systems. This will help researchers become acquainted with these models and see how they are applied in state-of-the-art frameworks, which is rather helpful when designing a new dialogue system. From the angle of system type, we discuss task-oriented and open-domain dialogue systems as two streams of research, providing insight into the related hot topics. Furthermore, we comprehensively review the evaluation methods and datasets for dialogue systems to pave the way for future research. Finally, some possible research trends are identified based on the recent research outcomes. To the best of our knowledge, this survey is the most comprehensive and up-to-date one at present for deep learning based dialogue systems, extensively covering the popular techniques. We speculate that this work is a good starting point for academics who are new to dialogue systems or those who want to quickly grasp up-to-date techniques in this area.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
234,493
1907.03793
A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization
We introduce a new approach to develop stochastic optimization algorithms for a class of stochastic composite and possibly nonconvex optimization problems. The main idea is to combine two stochastic estimators to create a new hybrid one. We first introduce our hybrid estimator and then investigate its fundamental properties to form a foundational theory for algorithmic development. Next, we apply our theory to develop several variants of stochastic gradient methods to solve both expectation and finite-sum composite optimization problems. Our first algorithm can be viewed as a variant of proximal stochastic gradient methods with a single-loop, but can achieve $\mathcal{O}(\sigma^3\varepsilon^{-1} + \sigma \varepsilon^{-3})$-oracle complexity bound, matching the best-known ones from state-of-the-art double-loop algorithms in the literature, where $\sigma > 0$ is the variance and $\varepsilon$ is a desired accuracy. Then, we consider two different variants of our method: adaptive step-size and restarting schemes that have similar theoretical guarantees as in our first algorithm. We also study two mini-batch variants of the proposed methods. In all cases, we achieve the best-known complexity bounds under standard assumptions. We test our methods on several numerical examples with real datasets and compare them with state-of-the-arts. Our numerical experiments show that the new methods are comparable and, in many cases, outperform their competitors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
137,934
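As a companion to the abstract for 1907.03793 above, a hedged numpy sketch of a hybrid stochastic gradient estimator of the kind it describes: a convex combination of a biased, variance-reduced recursive (SARAH-style) estimator and a plain unbiased SGD estimator. The quadratic objective, the weight beta, and the step size are illustrative assumptions, not the paper's tuned choices.

    # Hybrid (recursive + unbiased) stochastic gradient estimator on a toy problem.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(1000, 20)); b = rng.normal(size=1000)

    def stoch_grad(x, idx):                 # gradient of 0.5 * (a_i @ x - b_i)^2
        a = A[idx]
        return a * (a @ x - b[idx])

    x_prev = np.zeros(20); x = np.zeros(20)
    v = stoch_grad(x, rng.integers(1000))   # initialize the estimator
    beta, eta = 0.9, 0.01
    for _ in range(1000):
        i, j = rng.integers(1000, size=2)
        sarah = v + stoch_grad(x, i) - stoch_grad(x_prev, i)   # recursive part
        v = beta * sarah + (1 - beta) * stoch_grad(x, j)       # hybrid combination
        x_prev, x = x, x - eta * v

    print(np.linalg.norm(A.T @ (A @ x - b)) / 1000)   # full-gradient norm shrinks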
2106.01586
A Systematic Investigation of KB-Text Embedding Alignment at Scale
Knowledge bases (KBs) and text often contain complementary knowledge: KBs store structured knowledge that can support long range reasoning, while text stores more comprehensive and timely knowledge in an unstructured way. Separately embedding the individual knowledge sources into vector spaces has demonstrated tremendous successes in encoding the respective knowledge, but how to jointly embed and reason with both knowledge sources to fully leverage the complementary information is still largely an open problem. We conduct a large-scale, systematic investigation of aligning KB and text embeddings for joint reasoning. We set up a novel evaluation framework with two evaluation tasks, few-shot link prediction and analogical reasoning, and evaluate an array of KB-text embedding alignment methods. We also demonstrate how such alignment can infuse textual information into KB embeddings for more accurate link prediction on emerging entities and events, using COVID-19 as a case study.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
238,549
1801.01814
Automated Conjecturing VII: The Graph Brain Project & Big Mathematics
The Graph Brain Project is an experiment in how the use of automated mathematical discovery software, databases, large collaboration, and systematic investigation provide a model for how mathematical research might proceed in the future. Our Project began with the development of a program that can be used to generate invariant-relation and property-relation conjectures in many areas of mathematics. This program can produce conjectures which are not implied by existing (published) theorems. Here we propose a new approach to push forward existing mathematical research goals---using automated mathematical discovery software. We suggest how to initiate and harness large-scale collaborative mathematics. We envision mathematical research labs similar to what exist in other sciences, new avenues for funding, new opportunities for training students, and a more efficient and effective use of published mathematical research. And our experiment in graph theory can be imitated in many other areas of mathematical research. Big Mathematics is the idea of large, systematic, collaborative research on problems of existing mathematical interest. What is possible when we put our skills, tools, and results together systematically?
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
87,810
2009.10337
Learning Task-Agnostic Action Spaces for Movement Optimization
We propose a novel method for exploring the dynamics of physically based animated characters, and learning a task-agnostic action space that makes movement optimization easier. Like several previous papers, we parameterize actions as target states, and learn a short-horizon goal-conditioned low-level control policy that drives the agent's state towards the targets. Our novel contribution is that with our exploration data, we are able to learn the low-level policy in a generic manner and without any reference movement data. Trained once for each agent or simulation environment, the policy improves the efficiency of optimizing both trajectories and high-level policies across multiple tasks and optimization algorithms. We also contribute novel visualizations that show how using target states as actions makes optimized trajectories more robust to disturbances; this manifests as wider optima that are easy to find. Due to its simplicity and generality, our proposed approach should provide a building block that can improve a large variety of movement optimization methods and applications.
false
false
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
196,871
1601.00626
Scalable Models for Computing Hierarchies in Information Networks
Information hierarchies are organizational structures that are often used to organize and present large and complex information, as well as to provide a mechanism for effective human navigation. Fortunately, many statistical and computational models exist that automatically generate hierarchies; however, the existing approaches do not consider linkages in information {\em networks} that are increasingly common in real-world scenarios. Current approaches also tend to present topics as an abstract probability distribution over words, etc., rather than as tangible nodes from the original network. Furthermore, the statistical techniques present in many previous works are not yet capable of processing data at Web scale. In this paper we present the Hierarchical Document Topic Model (HDTM), which uses a distributed vertex-programming process to calculate a nonparametric Bayesian generative model. Experiments on three medium-sized data sets and the entire Wikipedia dataset show that HDTM can infer accurate hierarchies even over large information networks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
50,661
1807.04307
Manifold regularization with GANs for semi-supervised learning
Generative Adversarial Networks are powerful generative models that are able to model the manifold of natural images. We leverage this property to perform manifold regularization by approximating a variant of the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the semi-supervised feature-matching GAN we achieve state-of-the-art results for GAN-based semi-supervised learning on CIFAR-10 and SVHN benchmarks, with a method that is significantly easier to implement than competing methods. We also find that manifold regularization improves the quality of generated images, and is affected by the quality of the GAN used to approximate the regularizer.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
102,706
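As a companion to the abstract for 1807.04307 above, a hedged PyTorch sketch of the Monte Carlo manifold-regularization penalty it describes: penalize how much the discriminator's features change along small steps on the GAN's image manifold, approximated by perturbing the latent code. The toy generator, feature extractor, and step size are stand-ins so the sketch runs end to end.

    # Finite-difference Monte Carlo approximation of a manifold gradient penalty.
    import torch

    def manifold_penalty(discriminator_features, generator, z, eps=1e-2):
        dz = torch.nn.functional.normalize(torch.randn_like(z), dim=1)
        f1 = discriminator_features(generator(z))
        f2 = discriminator_features(generator(z + eps * dz))
        # squared change in features per unit step on the manifold
        return ((f2 - f1) ** 2).sum(dim=1).div(eps ** 2).mean()

    G = torch.nn.Linear(8, 32)        # toy "generator"
    D_feat = torch.nn.Linear(32, 16)  # toy "discriminator feature extractor"
    z = torch.randn(4, 8)
    print(manifold_penalty(D_feat, G, z))  # add to the feature-matching GAN loss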
1906.03684
Trajectory Optimization for Robust Humanoid Locomotion with Sample-Efficient Learning
Trajectory optimization (TO) is one of the most powerful tools for generating feasible motions for humanoid robots. However, including uncertainties and stochasticity in the TO problem to generate robust motions can easily lead to an intractable problem. Furthermore, since the models used in TO always have some level of abstraction, it is hard to find a realistic set of uncertainties in the space of the abstract model. In this paper we aim at leveraging a sample-efficient learning technique (Bayesian optimization) to robustify trajectory optimization for humanoid locomotion. The main idea is to use Bayesian optimization to find the optimal set of cost weights, trading off performance against robustness, using only a few realistic simulations or experiments. The results show that the proposed approach is able to generate robust motions for different sets of disturbances and uncertainties.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
134,457
2312.13422
Texture Matching GAN for CT Image Enhancement
Deep neural networks (DNN) are commonly used to denoise and sharpen X-ray computed tomography (CT) images with the goal of reducing patient X-ray dosage while maintaining reconstruction quality. However, naive application of DNN-based methods can result in image texture that is undesirable in clinical applications. Alternatively, generative adversarial network (GAN) based methods can produce appropriate texture, but naive application of GANs can introduce inaccurate or even unreal image detail. In this paper, we propose a texture matching generative adversarial network (TMGAN) that enhances CT images while generating an image texture that can be matched to a target texture. We use parallel generators to separate anatomical features from the generated texture, which allows the GAN to be trained to match the desired texture without directly affecting the underlying CT image. We demonstrate that TMGAN generates enhanced image quality while also producing image texture that is desirable for clinical application.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
417,300
2501.14708
Decision-Focused Learning for Complex System Identification: HVAC Management System Application
As opposed to conventional training methods tailored to minimize a given statistical metric or task-agnostic loss (e.g., mean squared error), Decision-Focused Learning (DFL) trains machine learning models for optimal performance in downstream decision-making tools. We argue that DFL can be leveraged to learn the parameters of system dynamics, expressed as constraints of the convex optimization control policy, while the system control signal is being optimized, thus creating an end-to-end learning framework. This is particularly relevant for systems whose behavior changes once the control policy is applied, rendering historical data less applicable. The proposed approach can perform system identification - i.e., determine appropriate parameters for the system analytical model - and control simultaneously, to ensure that the model's accuracy is focused on the areas most relevant to control. Furthermore, because black-box systems are non-differentiable, we design a loss function that requires only measurements of the system response. We propose pre-training on historical data and constraint relaxation to stabilize the DFL and deal with potential infeasibilities in learning. We demonstrate the usefulness of the method on a building Heating, Ventilation, and Air Conditioning day-ahead management system for a realistic 15-zone building located in Denver, US. The results show that the conventional RC building model, with the parameters obtained from historical data using supervised learning, underestimates HVAC electrical power consumption. For our case study, the ex-post cost is on average six times higher than the expected one. Meanwhile, the same RC model with parameters obtained via DFL underestimates the ex-post cost by only 3%.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
527,225
1105.1969
Capacity of Discrete Molecular Diffusion Channels
In diffusion-based molecular communications, messages can be conveyed via the variation in the concentration of molecules in the medium. In this paper, we intend to analyze the achievable capacity in transmission of information from one node to another in a diffusion channel. We observe that because of the molecular diffusion in the medium, the channel possesses memory. We then model the memory of the channel by a two-step Markov chain and obtain the equations describing the capacity of the diffusion channel. By performing a numerical analysis, we obtain the maximum achievable rate for different levels of the transmitter power, i.e., the molecule production rate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
10,317
1906.01554
Stabilizing Traffic via Autonomous Vehicles: A Continuum Mean Field Game Approach
This paper presents scalable traffic stability analysis for both pure autonomous vehicle (AV) traffic and mixed traffic based on continuum traffic flow models. Human vehicles are modeled by a non-equilibrium traffic flow model, i.e., Aw-Rascle-Zhang (ARZ), which is unstable. AVs are modeled by the mean field game which assumes AVs are rational agents with anticipation capacities. It is shown from linear stability analysis and numerical experiments that AVs help stabilize the traffic. Further, we quantify the impact of AV's penetration rate and controller design on the traffic stability. The results may provide insights for AV manufacturers and city planners.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
133,753
1905.11669
CompactNet: Platform-Aware Automatic Optimization for Convolutional Neural Networks
Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved great progress in many real-life applications. Meanwhile, due to complex model structures combined with strict latency and memory restrictions, implementing CNN models on resource-limited platforms is becoming increasingly challenging. This work proposes a solution, called CompactNet\footnote{Project URL: \url{https://github.com/CompactNet/CompactNet}}, which automatically optimizes a pre-trained CNN model on a specific resource-limited platform given a specific target of inference speedup. Guided by a simulator of the target platform, CompactNet progressively trims a pre-trained network by removing certain redundant filters until the target speedup is reached, generating an optimal platform-specific model while maintaining accuracy. We evaluate our work on two platforms of a Huawei Mate10 smartphone: a mobile ARM CPU and a machine learning accelerator NPU (Cambricon-1A ISA). For MobileNetV2, the state-of-the-art slim CNN model designed for embedded platforms, CompactNet achieves up to a 1.8x kernel computation speedup with equal or even higher accuracy on image classification tasks on the Cifar-10 dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,506
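The progressive trimming loop can be sketched as below; latency_fn stands in for the platform simulator, and the L1-norm importance criterion is a common stand-in, not necessarily the paper's own redundancy measure:

import numpy as np

def progressive_trim(filters, latency_fn, target_speedup, finetune=None):
    # Repeatedly drop the least-important filter (smallest L1 norm) until
    # the simulated latency reaches the target speedup, optionally
    # fine-tuning after each removal to recover accuracy.
    base = latency_fn(len(filters))
    keep = list(range(len(filters)))
    while latency_fn(len(keep)) > base / target_speedup and len(keep) > 1:
        norms = [np.abs(filters[i]).sum() for i in keep]
        keep.pop(int(np.argmin(norms)))       # trim one redundant filter
        if finetune is not None:
            finetune(keep)
    return keep

# Toy usage: 64 random 3x3x16 filters, latency proportional to filter count.
rng = np.random.default_rng(1)
f = rng.normal(size=(64, 16, 3, 3))
kept = progressive_trim(f, latency_fn=lambda n: 1.0 * n, target_speedup=1.8)
print(len(kept))   # ~35 filters left for a 1.8x simulated speedup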
2310.03591
Impact of Artificial Intelligence on Electrical and Electronics Engineering Productivity in the Construction Industry
Artificial intelligence (AI) can revolutionize the development industry, primarily electrical and electronics engineering. By automating recurring duties, AI can grow productivity and efficiency in creating. For instance, AI can research constructing designs, discover capability troubles, and generate answers, reducing the effort and time required for manual analysis. AI also can be used to optimize electricity consumption in buildings, which is a critical difficulty in the construction enterprise. Via machines gaining knowledge of algorithms to investigate electricity usage patterns, AI can discover areas wherein power may be stored and offer guidelines for enhancements. This can result in significant value financial savings and reduced carbon emissions. Moreover, AI may be used to improve the protection of creation websites. By studying statistics from sensors and cameras, AI can locate capacity dangers and alert workers to take suitable action. This could help save you from injuries and accidents on production sites, lowering the chance for workers and enhancing overall safety in the enterprise. The impact of AI on electric and electronics engineering productivity inside the creation industry is enormous. AI can transform how we layout, build, and function buildings by automating ordinary duties, optimising electricity intake, and enhancing safety. However, ensuring that AI is used ethically and responsibly and that the advantages are shared fairly throughout the enterprise is essential.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
397,338
2006.15505
1st Place Solution for Waymo Open Dataset Challenge -- 3D Detection and Domain Adaptation
In this technical report, we introduce our winning solution "HorizonLiDAR3D" for the 3D detection track and the domain adaptation track in the Waymo Open Dataset Challenge at CVPR 2020. Many existing 3D object detectors include prior-based anchor box designs to account for different scales, aspect ratios, and classes of objects, which limits their ability to generalize to a different dataset or domain and requires post-processing (e.g., Non-Maximum Suppression (NMS)). We proposed a one-stage, anchor-free and NMS-free 3D point cloud object detector, AFDet, which uses object keypoints to encode the 3D attributes and learns point cloud object detection end-to-end without the need for hand-engineered or learned anchors. AFDet serves as a strong baseline in our winning solution, and significant improvements are made over this baseline during the challenges. Specifically, we design stronger networks and enhance the point cloud data using densification and point painting. To leverage camera information, we append/paint additional attributes to each point by projecting it to camera space and gathering image-based perception information. The final detection performance also benefits from model ensembling and Test-Time Augmentation (TTA) in both the 3D detection track and the domain adaptation track. Our solution achieves 1st place with 77.11% mAPH/L2 and 69.49% mAPH/L2 on the 3D detection track and the domain adaptation track, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
184,529
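One way an NMS-free, keypoint-based detector can decode objects is the max-pooling peak-extraction trick sketched below (an illustration of the general idea, not AFDet's released code): a location is kept only if it is the maximum of its 3x3 neighbourhood, so no box-overlap suppression pass is needed.

import torch
import torch.nn.functional as F

def decode_peaks(heatmap, k=10):
    # heatmap: (B, 1, H, W) single-class centre heatmap.
    pooled = F.max_pool2d(heatmap, 3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap).float()   # keep local maxima only
    scores, idx = peaks.flatten(1).topk(k)          # top-k candidate centres
    w = heatmap.shape[-1]
    return scores, idx // w, idx % w                # scores, ys, xs

hm = torch.sigmoid(torch.randn(1, 1, 128, 128))
scores, ys, xs = decode_peaks(hm)
print(scores.shape, ys.shape)   # torch.Size([1, 10]) torch.Size([1, 10])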
2306.00673
Attribute-Efficient PAC Learning of Low-Degree Polynomial Threshold Functions with Nasty Noise
The concept class of low-degree polynomial threshold functions (PTFs) plays a fundamental role in machine learning. In this paper, we study PAC learning of $K$-sparse degree-$d$ PTFs on $\mathbb{R}^n$, where any such concept depends only on $K$ out of $n$ attributes of the input. Our main contribution is a new algorithm that runs in time $({nd}/{\epsilon})^{O(d)}$ and, under the Gaussian marginal distribution, PAC learns the class up to error rate $\epsilon$ with $O(\frac{K^{4d}}{\epsilon^{2d}} \cdot \log^{5d} n)$ samples, even when an $\eta \leq O(\epsilon^d)$ fraction of them are corrupted by the nasty noise of Bshouty et al. (2002), possibly the strongest corruption model. Prior to this work, attribute-efficient robust algorithms were established only for the special case of sparse homogeneous halfspaces. Our key ingredients are: 1) a structural result that translates the attribute sparsity to a sparsity pattern of the Chow vector under the basis of Hermite polynomials, and 2) a novel attribute-efficient robust Chow vector estimation algorithm which uses exclusively a restricted Frobenius norm to either certify a good approximation or to validate a sparsity-induced degree-$2d$ polynomial as a filter to detect corrupted samples.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
370,117
1309.4577
Detection of pose orientation across single and multiple axes in case of 3D face images
In this paper, we propose a new approach that takes as input a 3D face image rotated about the X, Y, or Z axis, or about both the Y and X axes, and outputs its pose, i.e., it tells whether the face is oriented with respect to the X, Y, or Z axis, or across multiple axes, with angles of rotation up to 42 degrees. All experiments have been performed on the FRAV3D, GAVADB, and Bosphorus databases, which contain multiple captures of each individual across multiple axes. Applying the proposed algorithm to 848 3D facial surfaces from FRAV3D, 566 3D faces were correctly recognized for pose, giving a 67% correct identification rate. We experimented on 420 images from the GAVADB database, of which 336 had their pose correctly identified, i.e., 80%, and on 560 images from the Bosphorus database, of which 448 had their pose correctly identified, i.e., 80%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
27,112
2208.10228
Review of Natural Language Processing in Pharmacology
Natural language processing (NLP) is an area of artificial intelligence that applies information technologies to process the human language, understand it to a certain degree, and use it in various applications. This area has rapidly developed in the last few years and now employs modern variants of deep neural networks to extract relevant patterns from large text corpora. The main objective of this work is to survey the recent use of NLP in the field of pharmacology. As our work shows, NLP is a highly relevant information extraction and processing approach for pharmacology. It has been used extensively, from intelligent searches through thousands of medical documents to finding traces of adversarial drug interactions in social media. We split our coverage into five categories to survey modern NLP methodology, commonly addressed tasks, relevant textual data, knowledge bases, and useful programming libraries. We split each of the five categories into appropriate subcategories, describe their main properties and ideas, and summarize them in a tabular form. The resulting survey presents a comprehensive overview of the area, useful to practitioners and interested observers.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
313,974
2207.03067
Cross-Scale Vector Quantization for Scalable Neural Speech Coding
Bitrate scalability is a desirable feature for audio coding in real-time communications. Existing neural audio codecs usually enforce a specific bitrate during training, so a different model needs to be trained for each target bitrate, which increases the memory footprint at the sender and the receiver side, and transcoding is often needed to support multiple receivers. In this paper, we introduce a cross-scale scalable vector quantization scheme (CSVQ), in which multi-scale features are encoded progressively with stepwise feature fusion and refinement. In this way, a coarse-level signal is reconstructed if only a portion of the bitstream is received, and the quality progressively improves as more bits become available. The proposed CSVQ scheme can be flexibly applied to any neural audio coding network with a mirrored auto-encoder structure to achieve bitrate scalability. Subjective results show that the proposed scheme outperforms the classical residual VQ (RVQ) with scalability. Moreover, the proposed CSVQ at 3 kbps outperforms Opus at 9 kbps and Lyra at 3 kbps, and it provides a graceful quality boost as the bitrate increases.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
306,715
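For contrast, the classical residual VQ baseline mentioned above is easy to sketch; it shows the bitrate-scalability mechanism (decoding from a prefix of the stage codes) that CSVQ improves on with cross-scale feature fusion. The codebooks here are untrained toys, so the printed numbers only illustrate the mechanism:

import numpy as np

def rvq_encode(x, codebooks):
    # Each stage quantizes the residual left by the previous one, so
    # dropping trailing stages lowers the bitrate gracefully.
    residual, codes = x.copy(), []
    for cb in codebooks:                              # cb: (K, D) codewords
        d = ((residual[None, :] - cb) ** 2).sum(axis=1)
        i = int(np.argmin(d))
        codes.append(i)
        residual -= cb[i]
    return codes

def rvq_decode(codes, codebooks, n_stages=None):
    # Decode from the first n_stages codes only: a coarse signal from a
    # partial bitstream that improves as more stages arrive.
    n = len(codes) if n_stages is None else n_stages
    return sum(codebooks[s][codes[s]] for s in range(n))

rng = np.random.default_rng(0)
cbs = [rng.normal(size=(256, 16)) * 0.5 ** s for s in range(4)]  # toy codebooks
x = rng.normal(size=16)
codes = rvq_encode(x, cbs)
print(np.linalg.norm(x - rvq_decode(codes, cbs, 2)),   # coarse (2 stages)
      np.linalg.norm(x - rvq_decode(codes, cbs, 4)))   # finer (all 4 stages)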
1311.2651
MIMO Broadcast Channel with an Unknown Eavesdropper: Secrecy Degrees of Freedom
We study a multi-antenna broadcast channel with two legitimate receivers and an external eavesdropper. We assume that the channel matrix of the eavesdropper is unknown to the legitimate terminals but satisfies a maximum rank constraint. As our main result we characterize the associated secrecy degrees of freedom for the broadcast channel with common and private messages. We show that a direct extension of the single-user wiretap codebook does not achieve the secrecy degrees of freedom. Our proposed optimal scheme involves decomposing the signal space into a common subspace, which can be observed by both receivers, and private subspaces which can be observed by only one of the receivers, and carefully transmitting a subset of messages in each subspace. We also consider the case when each user's private message must additionally remain confidential from the other legitimate receiver and characterize the s.d.o.f. region in this case.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
28,340
2002.09518
Memory-Based Graph Networks
Graph neural networks (GNNs) are a class of deep models that operate on data with arbitrary topology represented as graphs. We introduce an efficient memory layer for GNNs that can jointly learn node representations and coarsen the graph. We also introduce two new networks based on this layer: memory-based GNN (MemGNN) and graph memory network (GMN) that can learn hierarchical graph representations. The experimental results show that the proposed models achieve state-of-the-art results in eight out of nine graph classification and regression benchmarks. We also show that the learned representations could correspond to chemical features in the molecule data. Code and reference implementations are released at: https://github.com/amirkhas/GraphMemoryNet
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
165,082
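A memory layer of this flavour can be sketched as a soft assignment of node features to learned keys, whose pooled values become the nodes of a coarser graph (illustrative only; the paper's layer has its own multi-head design and assignment kernel):

import torch
import torch.nn as nn

class MemoryLayer(nn.Module):
    # Nodes are softly assigned to learned memory keys; the pooled keys
    # become the node features of a smaller, coarsened graph.
    def __init__(self, in_dim, n_keys, out_dim, tau=1.0):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_keys, in_dim))
        self.proj = nn.Linear(in_dim, out_dim)
        self.tau = tau

    def forward(self, x):                        # x: (n_nodes, in_dim)
        d = torch.cdist(x, self.keys)            # node-to-key distances
        c = torch.softmax(-d / self.tau, dim=1)  # soft cluster assignment
        pooled = c.t() @ x                       # (n_keys, in_dim) coarse nodes
        return self.proj(pooled), c

layer = MemoryLayer(in_dim=32, n_keys=8, out_dim=16)
h, assign = layer(torch.randn(100, 32))
print(h.shape, assign.shape)   # torch.Size([8, 16]) torch.Size([100, 8])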
1611.01606
Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
63,406
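The tightening idea can be sketched as a penalty term: discounted returns over the next k steps give lower bounds on Q(s_t, a_t), and only violated bounds contribute to the loss. This is a sketch of the lower-bound half with made-up constants; the upper bounds obtained from past steps are analogous.

import numpy as np

def tightening_penalty(q, rewards, q_next_max, gamma=0.99, k_max=4, lam=4.0):
    # q[t]: current Q(s_t, a_t); q_next_max[t + k]: max_a Q(s_{t+k+1}, a).
    # Returns the quadratic penalty on violated lower bounds, to be added
    # to the usual TD loss.
    T = len(rewards)
    penalty = 0.0
    for t in range(T):
        bounds = []
        for k in range(min(k_max, T - t)):
            ret = sum(gamma ** i * rewards[t + i] for i in range(k + 1))
            bounds.append(ret + gamma ** (k + 1) * q_next_max[t + k])
        L = max(bounds)
        penalty += lam * max(0.0, L - q[t]) ** 2   # only violated bounds count
    return penalty / T

# Toy usage with invented trajectory values:
q = np.array([1.0, 0.8, 0.5, 0.3])
r = np.array([0.1, 0.0, 0.2, 0.0])
qn = np.array([0.9, 0.6, 0.4, 0.2])
print(tightening_penalty(q, r, qn))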
2111.10978
Deformation Robust Roto-Scale-Translation Equivariant CNNs
Incorporating group symmetry directly into the learning process has proved to be an effective guideline for model design. By producing features that are guaranteed to transform covariantly to the group actions on the inputs, group-equivariant convolutional neural networks (G-CNNs) achieve significantly improved generalization performance in learning tasks with intrinsic symmetry. General theory and practical implementation of G-CNNs have been studied for planar images under either rotation or scaling transformation, but only individually. We present, in this paper, a roto-scale-translation equivariant CNN (RST-CNN), that is guaranteed to achieve equivariance jointly over these three groups via coupled group convolutions. Moreover, as symmetry transformations in reality are rarely perfect and typically subject to input deformation, we provide a stability analysis of the equivariance of representation to input distortion, which motivates the truncated expansion of the convolutional filters under (pre-fixed) low-frequency spatial modes. The resulting model provably achieves deformation-robust RST equivariance, i.e., the RST symmetry is still "approximately" preserved when the transformation is "contaminated" by a nuisance data deformation, a property that is especially important for out-of-distribution generalization. Numerical experiments on MNIST, Fashion-MNIST, and STL-10 demonstrate that the proposed model yields remarkable gains over prior arts, especially in the small data regime where both rotation and scaling variations are present within the data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
267,511
2012.08479
Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy
The recent success of Bayesian methods in neuroscience and artificial intelligence gives rise to the hypothesis that the brain is a Bayesian machine. Since logic and learning are both practices of the human brain, this leads to another hypothesis: that there is a Bayesian interpretation underlying both logical reasoning and machine learning. In this paper, we introduce a generative model of logical consequence relations. It formalises the process by which the truth value of a sentence is probabilistically generated from the probability distribution over states of the world. We show that the generative model characterises a classical consequence relation, a paraconsistent consequence relation, and a nonmonotonic consequence relation. In particular, the generative model gives a new consequence relation that outperforms them in reasoning with inconsistent knowledge. We also show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
211,773
2007.12799
Score-Based Explanations in Data Management and Machine Learning
We describe some approaches to explanations for observed outcomes in data management and machine learning. They are based on the assignment of numerical scores to predefined and potentially relevant inputs. More specifically, we consider explanations for query answers in databases, and for results from classification models. The described approaches are mostly of a causal and counterfactual nature. We argue for the need to bring domain and semantic knowledge into score computations; and suggest some ways to do this.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
true
false
188,922
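In the simplest counterfactual flavour of such scores, a feature is held responsible for a prediction if intervening on it alone flips the label. A hypothetical toy follows; the classifier rule and the feature domains are invented for illustration:

def single_flip_scores(clf, x, domains):
    # Score 1.0 for a feature if some single-feature intervention flips
    # the classifier's label, 0.0 otherwise (a crude responsibility score).
    y0 = clf(x)
    scores = []
    for i, vals in enumerate(domains):
        flipped = any(clf([v if j == i else x[j] for j in range(len(x))]) != y0
                      for v in vals if v != x[i])
        scores.append(1.0 if flipped else 0.0)
    return scores

clf = lambda z: int(z[0] > 50 and z[1] < 10)   # hypothetical loan rule
x = [60, 5]
print(single_flip_scores(clf, x, domains=[[30, 60, 90], [5, 15]]))   # [1.0, 1.0]

Richer scores of this family additionally search over contingency sets of other features and weight by 1/(1+k); the sketch stops at the empty contingency.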
2308.08692
Joint Optimization of Resource Allocation and User Association in Multi-Frequency Cellular Networks Assisted by RIS
Due to the development of communication technology and the rise of user network demand, reasonable resource allocation for wireless networks is key to guaranteeing regular operation and improving system performance. Various frequency bands exist in the natural network environment, and heterogeneous cellular networks (HCNs) have become a hot topic of current research. Meanwhile, the Reconfigurable Intelligent Surface (RIS) has become a key technology for developing next-generation wireless networks. By modifying the phase of the incident signal arriving at the RIS surface, the RIS can improve signal quality at the receiver and reduce co-channel interference. In this paper, we develop a RIS-assisted HCN model for a multi-base-station (BS) multi-frequency network, which includes 4G, 5G, millimeter wave (mmWave), and terahertz networks, and considers the case of users covered by multiple networks, which is more in line with realistic network characteristics and the concept of 6G networks. We propose the optimization objective of maximizing the system sum rate, which is decomposed into two subproblems: user resource allocation and phase shift optimization of the RIS elements. Because the problem is NP-hard and the subproblems are coupled, we use the block coordinate descent (BCD) method, alternating between the local solutions of a coalition game for resource allocation and a local discrete phase search algorithm for the RIS, to obtain the global solution. In contrast, most previous studies have used a coalition game algorithm to solve the resource allocation problem alone. Simulation results show that the algorithm performs better than the alternatives, effectively improves the system sum rate, and achieves performance close to the optimal solution of the traversal algorithm with low complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
385,990
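The local discrete phase search sub-block of such a BCD scheme can be sketched on a single cascaded link (a toy SNR proxy, not the paper's multi-BS multi-frequency sum rate): sweep the RIS elements one at a time and give each the b-bit phase that maximizes the effective channel gain.

import numpy as np

def local_discrete_phase_search(h_d, h_r, g, bits=2, sweeps=3):
    # h_d: direct channel; h_r, g: RIS-side cascaded channels (length n).
    # Maximizes |h_d + sum_i g_i * theta_i * h_r_i|^2 over b-bit phases.
    levels = np.exp(1j * 2 * np.pi * np.arange(2 ** bits) / 2 ** bits)
    n = len(h_r)
    theta = np.ones(n, dtype=complex)            # start at zero phase
    for _ in range(sweeps):
        for i in range(n):
            contrib = g * theta * h_r            # per-element cascaded terms
            rest = h_d + contrib.sum() - contrib[i]
            best = np.argmax(np.abs(rest + g[i] * levels * h_r[i]) ** 2)
            theta[i] = levels[best]
    return theta

rng = np.random.default_rng(0)
n = 32
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
h_r = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
g = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
th = local_discrete_phase_search(h_d, h_r, g)
print(np.abs(h_d + (g * th * h_r).sum()) ** 2)   # boosted effective gain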
2005.11467
Joint Training Capsule Network for Cold Start Recommendation
This paper proposes a novel neural network, the joint training capsule network (JTCN), for the cold start recommendation task. We propose to mimic high-level user preference, rather than the raw interaction history, based on the side information of fresh users. Specifically, an attentive capsule layer is proposed to aggregate high-level user preference from the low-level interaction history via a dynamic routing-by-agreement mechanism. Moreover, JTCN jointly trains the loss for mimicking the user preference and the softmax loss for the recommendation in an end-to-end manner. Experiments on two publicly available datasets demonstrate the effectiveness of the proposed model. JTCN improves over other state-of-the-art methods by at least 7.07% on CiteULike and 16.85% on Amazon in terms of Recall@100 in cold start recommendation.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
178,479
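The routing-by-agreement mechanism behind such an attentive capsule layer can be sketched as follows (generic dynamic routing in the style of Sabour et al.; not the authors' exact layer):

import numpy as np

def squash(v):
    # Standard capsule nonlinearity: preserves direction, bounds the norm.
    n2 = (v ** 2).sum(-1, keepdims=True)
    return (n2 / (1 + n2)) * v / np.sqrt(n2 + 1e-9)

def routing_by_agreement(u_hat, iters=3):
    # u_hat[i, j]: vote of low-level capsule i for high-level capsule j.
    # Coupling coefficients grow for votes agreeing with the current output.
    n_in, n_out, d = u_hat.shape
    b = np.zeros((n_in, n_out))
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # softmax over j
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted votes
        v = squash(s)                                          # (n_out, d)
        b += (u_hat * v[None]).sum(-1)                         # agreement update
    return v

rng = np.random.default_rng(0)
votes = rng.normal(size=(20, 4, 8))   # 20 history items, 4 preference capsules
print(routing_by_agreement(votes).shape)   # (4, 8)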
2412.02399
OMENN: One Matrix to Explain Neural Networks
Deep Learning (DL) models are often black boxes, making their decision-making processes difficult to interpret. This lack of transparency has driven advancements in eXplainable Artificial Intelligence (XAI), a field dedicated to clarifying the reasoning behind DL model predictions. Among these, attribution-based methods such as LRP and GradCAM are widely used, though they rely on approximations that can be imprecise. To address these limitations, we introduce One Matrix to Explain Neural Networks (OMENN), a novel post-hoc method that represents a neural network as a single, interpretable matrix for each specific input. This matrix is constructed through a series of linear transformations that represent the processing of the input by each successive layer in the neural network. As a result, OMENN provides locally precise, attribution-based explanations of the input across various modern models, including ViTs and CNNs. We present a theoretical analysis of OMENN based on dynamic linearity property and validate its effectiveness with extensive tests on two XAI benchmarks, demonstrating that OMENN is competitive with state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
513,525
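The dynamic-linearity property OMENN builds on is easy to verify on a small bias-free ReLU network: once the input fixes the activation pattern, every layer is a plain linear map and their product is a single matrix that exactly reproduces the output. A sketch of the property, not the authors' implementation:

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))   # bias-free ReLU MLP

def forward(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def explain_matrix(x):
    # The active ReLU pattern at x turns the hidden layer into a diagonal
    # mask, so the whole network collapses to one input-specific matrix M.
    mask = (W1 @ x > 0).astype(float)
    return W2 @ (mask[:, None] * W1)

x = rng.normal(size=8)
M = explain_matrix(x)
print(np.allclose(forward(x), M @ x))   # True: M exactly explains this input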
2008.11603
An End-to-End Attack on Text-based CAPTCHAs Based on Cycle-Consistent Generative Adversarial Network
As a widely deployed security scheme, text-based CAPTCHAs have found it more and more difficult to resist machine learning-based attacks. So far, many researchers have conducted attacking research on text-based CAPTCHAs deployed by different companies (such as Microsoft, Amazon, and Apple) and achieved certain results. However, most of these attacks have shortcomings, such as poor portability of the attack method, a required series of data preprocessing steps, and reliance on large amounts of labeled CAPTCHAs. In this paper, we propose an efficient and simple end-to-end attack method based on cycle-consistent generative adversarial networks. Compared with previous studies, our method greatly reduces the cost of data labeling. In addition, the method is highly portable: it can attack common text-based CAPTCHA schemes by modifying only a few configuration parameters, which makes the attack easier. Firstly, we train CAPTCHA synthesizers based on the cycle-GAN to generate fake samples. Basic recognizers based on a convolutional recurrent neural network are trained with the fake data. Subsequently, an active transfer learning method is employed to optimize the basic recognizer using tiny amounts of labeled real-world CAPTCHA samples. Our approach efficiently cracked the CAPTCHA schemes deployed by 10 popular websites, indicating that our attack is likely very general. Additionally, we analyzed the currently most popular anti-recognition mechanisms. The results show that combining more anti-recognition mechanisms can improve the security of CAPTCHAs, but the improvement is limited. Conversely, generating more complex CAPTCHAs may consume more resources and reduce the availability of CAPTCHAs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
193,325