Dataset schema (as exported by the dataset viewer):

- id: string, 9 to 16 characters (arXiv identifier)
- title: string, 4 to 278 characters
- abstract: string, 3 to 4.08k characters
- 18 boolean label columns (2 classes each), in this order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64, 0 to 541k

Each record below gives the id, title, abstract, its category label flags, and its __index_level_0__ value.
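The 18 boolean columns above are a multi-hot encoding of arXiv categories. A minimal sketch in plain Python of decoding one row into its active label names — the dict shape of a row is an assumption based on the schema above, not a documented API:

```python
# Decode the active arXiv category labels from one dataset row.
# A row is assumed to be a dict mapping column names to values,
# with one boolean entry per label column listed in the schema.

LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(row: dict) -> list[str]:
    """Return the category names whose boolean flag is set in this row."""
    return [name for name in LABEL_COLUMNS if row.get(name)]

# Example shaped like the first record below (flags omitted here default to False):
row = {"id": "2311.02957", "cs.RO": True}
print(active_labels(row))  # ['cs.RO']
```

Because the columns are independent booleans rather than a single class field, a record can carry several labels at once, as some of the records below do.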
2311.02957
Safe and Efficient Trajectory Optimization for Autonomous Vehicles using B-spline with Incremental Path Flattening
Gradient-based trajectory optimization with B-spline curves is widely used for unmanned aerial vehicles (UAVs) due to its fast convergence and continuous trajectory generation. However, the application of B-spline curves for path-velocity coupled trajectory planning in autonomous vehicles (AVs) has been highly limited because it is challenging to reduce the over-approximation of the vehicle shape and to create a collision-free trajectory using B-spline curves while satisfying kinodynamic constraints. To address these challenges, this paper proposes novel disc-type swept volume (SV), incremental path flattening (IPF), and kinodynamic feasibility penalty methods. The disc-type SV estimation method is a new technique to reduce SV over-approximation and is used to find collision points for IPF. In IPF, the collision points are used to push the trajectory away from obstacles and to iteratively increase the curvature weight, thereby reducing SV and generating a collision-free trajectory. Additionally, to satisfy kinodynamic constraints for AVs using B-spline curves, we apply a clamped B-spline curvature penalty along with longitudinal and lateral velocity and acceleration penalties. Our experimental results demonstrate that our method outperforms state-of-the-art baselines in various simulated environments. We also conducted a real-world experiment using an AV, and our results validate the simulated tracking performance of the proposed approach.
labels: cs.RO | __index_level_0__: 405,657
2501.01704
Optimal Fiducial Marker Placement for Satellite Proximity Operations Using Observability Gramians
This paper investigates optimal fiducial marker placement on the surface of a satellite performing relative proximity operations with an observer satellite. The absolute and relative translation and attitude equations of motion for the satellite pair are modeled using dual quaternions. The observability of the relative dual quaternion system is analyzed using empirical observability Gramian methods. The optimal placement of a fiducial marker set, in which each marker gives simultaneous optical range and attitude measurements, is determined for the pair of satellites. A geostationary flyby between the observing body (chaser) and desired (target) satellites is numerically simulated and the optimal fiducial placement sets of five and ten on the surface of the desired satellite are solved. It is shown that the optimal solution maximizes the distance between fiducial markers and selects marker locations that are most sensitive to measuring changes in the state during the nonlinear trajectory, despite being visible for less time than other candidate marker locations. Definitions and properties of quaternions and dual quaternions, and parallels between the two, are presented alongside the relative motion model.
labels: cs.RO, cs.SY, cs.CV | __index_level_0__: 522,176
2106.14587
Topos and Stacks of Deep Neural Networks
Every known artificial deep neural network (DNN) corresponds to an object in a canonical Grothendieck's topos; its learning dynamic corresponds to a flow of morphisms in this topos. Invariance structures in the layers (like CNNs or LSTMs) correspond to Giraud's stacks. This invariance is supposed to be responsible for the generalization property, that is extrapolation from learning data under constraints. The fibers represent pre-semantic categories (Culioli, Thom), over which artificial languages are defined, with internal logics, intuitionistic, classical or linear (Girard). Semantic functioning of a network is its ability to express theories in such a language for answering questions in output about input data. Quantities and spaces of semantic information are defined by analogy with the homological interpretation of Shannon's entropy of P. Baudot and D. Bennequin (2015). They generalize the measures found by Carnap and Bar-Hillel (1952). Amazingly, the above semantical structures are classified by geometric fibrant objects in a closed model category of Quillen, then they give rise to homotopical invariants of DNNs and of their semantic functioning. Intensional type theories (Martin-Löf) organize these objects and fibrations between them. Information contents and exchanges are analyzed by Grothendieck's derivators.
labels: cs.AI | __index_level_0__: 243,452
2205.01308
Contrastive Learning for Prompt-Based Few-Shot Language Learners
The impressive performance of GPT-3 using natural language prompts and in-context learning has inspired work on better fine-tuning of moderately-sized models under this paradigm. Following this line of work, we present a contrastive learning framework that clusters inputs from the same class for better generality of models trained with only limited examples. Specifically, we propose a supervised contrastive framework that clusters inputs from the same class under different augmented "views" and repels the ones from different classes. We create different "views" of an example by appending it with different language prompts and contextual demonstrations. Combining a contrastive loss with the standard masked language modeling (MLM) loss in prompt-based few-shot learners, the experimental results show that our method can improve over the state-of-the-art methods in a diverse set of 15 language tasks. Our framework makes minimal assumptions on the task or the base model, and can be applied to many recent methods with little modification. The code will be made available at: https://github.com/yiren-jian/LM-SupCon.
labels: cs.AI, cs.CL | __index_level_0__: 294,539
2101.00744
Learning to Optimize Under Constraints with Unsupervised Deep Neural Networks
In this paper, we propose a machine learning (ML) method to learn how to solve a generic constrained continuous optimization problem. To the best of our knowledge, the generic methods that learn to optimize focus on unconstrained optimization problems, and those dealing with constrained problems do not generalize easily. This approach is quite useful in optimization tasks where the problem's parameters constantly change and require resolving the optimization task per parameter update. In such problems, the computational complexity of optimization algorithms such as gradient descent or the interior point method precludes near-optimal designs in real-time applications. In this paper, we propose an unsupervised deep learning (DL) solution for solving constrained optimization problems in real time by relegating the main computation load to an offline training phase. This paper's main contribution is proposing a method for enforcing the equality and inequality constraints on the DL-generated solutions for generic optimization tasks.
labels: cs.LG | __index_level_0__: 214,188
1708.03519
Preconditioning immersed isogeometric finite element methods with application to flow problems
Immersed finite element methods generally suffer from conditioning problems when cut elements intersect the physical domain only on a small fraction of their volume. De Prenter et al. [Computer Methods in Applied Mechanics and Engineering, 316 (2017) pp. 297-327] present an analysis for symmetric positive definite (SPD) immersed problems, and for this class of problems an algebraic preconditioner is developed. In this contribution the conditioning analysis is extended to immersed finite element methods for systems that are not SPD and the preconditioning technique is generalized to a connectivity-based preconditioner inspired by Additive-Schwarz preconditioning. This Connectivity-based Additive-Schwarz (CbAS) preconditioner is applicable to problems that are not SPD and to mixed problems, such as the Stokes and Navier-Stokes equations. A detailed numerical investigation of the effectivity of the CbAS preconditioner to a range of flow problems is presented.
labels: cs.CE, Other | __index_level_0__: 78,786
1712.02294
Joint 3D Proposal Generation and Object Detection from View Aggregation
We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: https://github.com/kujason/avod
labels: cs.CV | __index_level_0__: 86,270
1801.09877
On the Use of the Observability Gramian for Partially Observed Robotic Path Planning Problems
Optimizing measures of the observability Gramian as a surrogate for the estimation performance may provide irrelevant or misleading trajectories for planning under observation uncertainty.
labels: cs.RO, cs.SY | __index_level_0__: 89,190
2310.16795
QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models
Mixture-of-Experts (MoE) architectures offer a general solution to the high inference costs of large language models (LLMs) via sparse routing, bringing faster and more accurate models, at the cost of massive parameter counts. For example, the SwitchTransformer-c2048 model has 1.6 trillion parameters, requiring 3.2TB of accelerator memory to run efficiently, which makes practical deployment challenging and expensive. In this paper, we present a solution to this memory problem, in the form of a new compression and execution framework called QMoE. Specifically, QMoE consists of a scalable algorithm which accurately compresses trillion-parameter MoEs to less than 1 bit per parameter, in a custom format co-designed with bespoke GPU decoding kernels to facilitate efficient end-to-end compressed inference, with minor runtime overheads relative to uncompressed execution. Concretely, QMoE can compress the 1.6 trillion parameter SwitchTransformer-c2048 model to less than 160GB (20x compression, 0.8 bits per parameter) at only minor accuracy loss, in less than a day on a single GPU. This enables, for the first time, the execution of a trillion-parameter model on affordable commodity hardware, like a single server with 4x NVIDIA A6000 or 8x NVIDIA 3090 GPUs, at less than 5% runtime overhead relative to ideal uncompressed inference. The source code and compressed models are available at github.com/IST-DASLab/qmoe.
labels: cs.LG | __index_level_0__: 402,880
2003.08529
Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics for Text Collections
Summarizing data samples by quantitative measures has a long history, with descriptive statistics being a case in point. However, as natural language processing methods flourish, there are still insufficient characteristic metrics to describe a collection of texts in terms of the words, sentences, or paragraphs they comprise. In this work, we propose metrics of diversity, density, and homogeneity that quantitatively measure the dispersion, sparsity, and uniformity of a text collection. We conduct a series of simulations to verify that each metric holds desired properties and resonates with human intuitions. Experiments on real-world datasets demonstrate that the proposed characteristic metrics are highly correlated with text classification performance of a renowned model, BERT, which could inspire future applications.
labels: cs.CL | __index_level_0__: 168,765
1312.7422
Proceedings of Answer Set Programming and Other Computing Paradigms (ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey
This volume contains the papers presented at the sixth workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP 2013) held on August 25th, 2013 in Istanbul, co-located with the 29th International Conference on Logic Programming (ICLP 2013). It thus continues a series of previous events co-located with ICLP, aiming at facilitating the discussion about crossing the boundaries of current ASP techniques in theory, solving, and applications, in combination with or inspired by other computing paradigms.
labels: cs.AI | __index_level_0__: 29,475
2311.18140
ROBBIE: Robust Bias Evaluation of Large Generative Language Models
As generative large language models (LLMs) grow more performant and prevalent, we must develop comprehensive enough tools to measure and improve their fairness. Different prompt-based datasets can be used to measure social bias across multiple text domains and demographic axes, meaning that testing LLMs on more datasets can potentially help us characterize their biases more fully, and better ensure equal and equitable treatment of marginalized demographic groups. In this work, our focus is two-fold: (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity metrics across 12 demographic axes and 5 families of generative LLMs. Out of those 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in the paper. The comparison of those benchmarks gives us insights about the bias and toxicity of the compared models. Therefore, we explore the frequency of demographic terms in common LLM pre-training corpora and how this may relate to model biases. (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity mitigation techniques perform across our suite of measurements. ROBBIE aims to provide insights for practitioners while deploying a model, emphasizing the need to not only measure potential harms, but also understand how they arise by characterizing the data, mitigate harms once found, and balance any trade-offs. We open-source our analysis code in hopes of encouraging broader measurements of bias in future LLMs.
labels: cs.CL | __index_level_0__: 411,570
1011.3834
Ising-like agent-based technology diffusion model: adoption patterns vs. seeding strategies
The well-known Ising model used in statistical physics was adapted to a social dynamics context to simulate the adoption of a technological innovation. The model explicitly combines (a) an individual's perception of the advantages of an innovation and (b) social influence from members of the decision-maker's social network. The micro-level adoption dynamics are embedded into an agent-based model that allows exploration of macro-level patterns of technology diffusion throughout systems with different configurations (number and distributions of early adopters, social network topologies). In the present work we carry out many numerical simulations. We find that when the gap between the individual's perception of the options is high, the adoption speed increases if the dispersion of early adopters grows. Another test was based on changing the network topology by means of stochastic connections to a common opinion reference (hub), which resulted in an increment in the adoption speed. Finally, we performed a simulation of competition between options for both regular and small world networks.
labels: cs.SI | __index_level_0__: 8,259
1910.02576
Exact matrix completion based on low rank Hankel structure in the Fourier domain
Matrix completion is about recovering a matrix from its partial revealed entries, and it can often be achieved by exploiting the inherent simplicity or low dimensional structure of the target matrix. For instance, a typical notion of matrix simplicity is low rank. In this paper we study matrix completion based on another low dimensional structure, namely the low rank Hankel structure in the Fourier domain. It is shown that matrices with this structure can be exactly recovered by solving a convex optimization program provided the sampling complexity is nearly optimal. Empirical results are also presented to justify the effectiveness of the convex method.
labels: cs.IT | __index_level_0__: 148,286
2209.04911
Keke AI Competition: Solving puzzle levels in a dynamically changing mechanic space
The Keke AI Competition introduces an artificial agent competition for the game Baba is You - a Sokoban-like puzzle game where players can create rules that influence the mechanics of the game. Altering a rule can cause temporary or permanent effects for the rest of the level that could be part of the solution space. The nature of these dynamic rules and the deterministic aspect of the game creates a challenge for AI to adapt to a variety of mechanic combinations in order to solve a level. This paper describes the framework and evaluation metrics used to rank submitted agents and baseline results from sample tree search agents.
labels: cs.AI | __index_level_0__: 316,923
2209.02211
Multi-Armed Bandits with Self-Information Rewards
This paper introduces the informational multi-armed bandit (IMAB) model in which at each round, a player chooses an arm, observes a symbol, and receives an unobserved reward in the form of the symbol's self-information. Thus, the expected reward of an arm is the Shannon entropy of the probability mass function of the source that generates its symbols. The player aims to maximize the expected total reward associated with the entropy values of the arms played. Under the assumption that the alphabet size is known, two UCB-based algorithms are proposed for the IMAB model which consider the biases of the plug-in entropy estimator. The first algorithm optimistically corrects the bias term in the entropy estimation. The second algorithm relies on data-dependent confidence intervals that adapt to sources with small entropy values. Performance guarantees are provided by upper bounding the expected regret of each of the algorithms. Furthermore, in the Bernoulli case, the asymptotic behavior of these algorithms is compared to the Lai-Robbins lower bound for the pseudo regret. Additionally, under the assumption that the \textit{exact} alphabet size is unknown, and instead the player only knows a loose upper bound on it, a UCB-based algorithm is proposed, in which the player aims to reduce the regret caused by the unknown alphabet size in a finite time regime. Numerical results illustrating the expected regret of the algorithms presented in the paper are provided.
labels: cs.LG, cs.IT | __index_level_0__: 316,136
1407.6128
Permutation Models for Collaborative Ranking
We study the problem of collaborative filtering where ranking information is available. Focusing on the core of the collaborative ranking process, the user and their community, we propose new models for representation of the underlying permutations and prediction of ranks. The first approach is based on the assumption that the user makes successive choice of items in a stage-wise manner. In particular, we extend the Plackett-Luce model in two ways - introducing parameter factoring to account for user-specific contribution, and modelling the latent community in a generative setting. The second approach relies on log-linear parameterisation, which relaxes the discrete-choice assumption, but makes learning and inference much more involved. We propose MCMC-based learning and inference methods and derive linear-time prediction algorithms.
labels: cs.IR, cs.LG | __index_level_0__: 34,845
2303.15834
Enabling Inter-organizational Analytics in Business Networks Through Meta Machine Learning
Successful analytics solutions that provide valuable insights often hinge on the connection of various data sources. While it is often feasible to generate larger data pools within organizations, the application of analytics within (inter-organizational) business networks is still severely constrained. As data is distributed across several legal units, potentially even across countries, the fear of disclosing sensitive information as well as the sheer volume of the data that would need to be exchanged are key inhibitors for the creation of effective system-wide solutions -- all while still reaching superior prediction performance. In this work, we propose a meta machine learning method that deals with these obstacles to enable comprehensive analyses within a business network. We follow a design science research approach and evaluate our method with respect to feasibility and performance in an industrial use case. First, we show that it is feasible to perform network-wide analyses that preserve data confidentiality as well as limit data transfer volume. Second, we demonstrate that our method outperforms a conventional isolated analysis and even gets close to a (hypothetical) scenario where all data could be shared within the network. Thus, we provide a fundamental contribution for making business networks more effective, as we remove a key obstacle to tap the huge potential of learning from data that is scattered throughout the network.
labels: cs.AI, cs.LG | __index_level_0__: 354,642
2304.11513
Detecting Socially Abnormal Highway Driving Behaviors via Recurrent Graph Attention Networks
With the rapid development of Internet of Things technologies, the next generation traffic monitoring infrastructures are connected via the web, to aid traffic data collection and intelligent traffic management. One of the most important tasks in traffic is anomaly detection, since abnormal drivers can reduce traffic efficiency and cause safety issues. This work focuses on detecting abnormal driving behaviors from trajectories produced by highway video surveillance systems. Most of the current abnormal driving behavior detection methods focus on a limited category of abnormal behaviors that deal with a single vehicle without considering vehicular interactions. In this work, we consider the problem of detecting a variety of socially abnormal driving behaviors, i.e., behaviors that do not conform to the behavior of other nearby drivers. This task is complicated by the variety of vehicular interactions and the spatial-temporal varying nature of highway traffic. To solve this problem, we propose an autoencoder with a Recurrent Graph Attention Network that can capture the highway driving behaviors contextualized on the surrounding cars, and detect anomalies that deviate from learned patterns. Our model is scalable to large freeways with thousands of cars. Experiments on data generated from traffic simulation software show that our model is the only one that can spot the exact vehicle conducting socially abnormal behaviors, among the state-of-the-art anomaly detection models. We further show the performance on real world HighD traffic dataset, where our model detects vehicles that violate the local driving norms.
labels: cs.AI | __index_level_0__: 359,844
2406.04271
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
We introduce Buffer of Thoughts (BoT), a novel and versatile thought-augmented reasoning approach for enhancing the accuracy, efficiency and robustness of large language models (LLMs). Specifically, we propose a meta-buffer to store a series of informative high-level thoughts, namely thought-templates, distilled from the problem-solving processes across various tasks. Then for each problem, we retrieve a relevant thought-template and adaptively instantiate it with specific reasoning structures to conduct efficient reasoning. To guarantee scalability and stability, we further propose a buffer-manager to dynamically update the meta-buffer, thus enhancing the capacity of the meta-buffer as more tasks are solved. We conduct extensive experiments on 10 challenging reasoning-intensive tasks, and achieve significant performance improvements over previous SOTA methods: 11% on Game of 24, 20% on Geometric Shapes and 51% on Checkmate-in-One. Further analysis demonstrates the superior generalization ability and model robustness of our BoT, while requiring only 12% of the cost of multi-query prompting methods (e.g., tree/graph of thoughts) on average. Notably, we find that our Llama3-8B+BoT has the potential to surpass the Llama3-70B model. Our project is available at: https://github.com/YangLing0818/buffer-of-thought-llm
labels: cs.CL | __index_level_0__: 461,594
2309.03426
Adapting Static Fairness to Sequential Decision-Making: Bias Mitigation Strategies towards Equal Long-term Benefit Rate
Decisions made by machine learning models can have lasting impacts, making long-term fairness a critical consideration. It has been observed that ignoring the long-term effect and directly applying fairness criterion in static settings can actually worsen bias over time. To address biases in sequential decision-making, we introduce a long-term fairness concept named Equal Long-term Benefit Rate (ELBERT). This concept is seamlessly integrated into a Markov Decision Process (MDP) to consider the future effects of actions on long-term fairness, thus providing a unified framework for fair sequential decision-making problems. ELBERT effectively addresses the temporal discrimination issues found in previous long-term fairness notions. Additionally, we demonstrate that the policy gradient of Long-term Benefit Rate can be analytically simplified to standard policy gradients. This simplification makes conventional policy optimization methods viable for reducing bias, leading to our bias mitigation approach ELBERT-PO. Extensive experiments across various diverse sequential decision-making environments consistently reveal that ELBERT-PO significantly diminishes bias while maintaining high utility. Code is available at https://github.com/umd-huang-lab/ELBERT.
labels: cs.LG, cs.CY | __index_level_0__: 390,372
2405.15294
Semi-Supervised Learning guided by the Generalized Bayes Rule under Soft Revision
We provide a theoretical and computational investigation of the Gamma-Maximin method with soft revision, which was recently proposed as a robust criterion for pseudo-label selection (PLS) in semi-supervised learning. As opposed to traditional methods for PLS, we use credal sets of priors ("generalized Bayes") to represent the epistemic modeling uncertainty. The latter are then updated by the Gamma-Maximin method with soft revision. We eventually select pseudo-labeled data that are most likely in light of the least favorable distribution from the updated credal set. We formalize the task of finding optimal pseudo-labeled data w.r.t. the Gamma-Maximin method with soft revision as an optimization problem. A concrete implementation for the class of logistic models then allows us to compare the predictive power of the method with competing approaches. It is observed that the Gamma-Maximin method with soft revision can achieve very promising results, especially when the proportion of labeled data is low.
labels: cs.AI, cs.LG | __index_level_0__: 456,855
1206.6390
Incorporating Causal Prior Knowledge as Path-Constraints in Bayesian Networks and Maximal Ancestral Graphs
We consider the incorporation of causal knowledge about the presence or absence of (possibly indirect) causal relations into a causal model. Such causal relations correspond to directed paths in a causal model. This type of knowledge naturally arises from experimental data, among others. Specifically, we consider the formalisms of Causal Bayesian Networks and Maximal Ancestral Graphs and their Markov equivalence classes: Partially Directed Acyclic Graphs and Partially Oriented Ancestral Graphs. We introduce sound and complete procedures which are able to incorporate causal prior knowledge in such models. In simulated experiments, we show that often considering even a few causal facts leads to a significant number of new inferences. In a case study, we also show how to use real experimental data to infer causal knowledge and incorporate it into a real biological causal network. The code is available at mensxmachina.org.
labels: cs.CE, cs.AI, cs.LG | __index_level_0__: 16,925
2311.00674
Recovering Linear Causal Models with Latent Variables via Cholesky Factorization of Covariance Matrix
Discovering the causal relationship via recovering the directed acyclic graph (DAG) structure from the observed data is a well-known challenging combinatorial problem. When there are latent variables, the problem becomes even more difficult. In this paper, we first propose a DAG structure recovering algorithm, which is based on the Cholesky factorization of the covariance matrix of the observed data. The algorithm is fast and easy to implement and has theoretical guarantees for exact recovery. On synthetic and real-world datasets, the algorithm is significantly faster than previous methods and achieves state-of-the-art performance. Furthermore, under the equal error variances assumption, we incorporate an optimization procedure into the Cholesky factorization based algorithm to handle the DAG recovering problem with latent variables. Numerical simulations show that the modified "Cholesky + optimization" algorithm is able to recover the ground truth graph in most cases and outperforms existing algorithms.
labels: cs.LG | __index_level_0__: 404,731
2406.18164
Nebula: A discourse aware Minecraft Builder
When engaging in collaborative tasks, humans efficiently exploit the semantic structure of a conversation to optimize verbal and nonverbal interactions. But in recent "language to code" or "language to action" models, this information is lacking. We show how incorporating the prior discourse and nonlinguistic context of a conversation situated in a nonlinguistic environment can improve the "language to action" component of such interactions. We finetune an LLM to predict actions based on prior context; our model, Nebula, doubles the net-action F1 score over the baseline on this task of Jayannavar et al. (2020). We also investigate our model's ability to construct shapes and understand location descriptions using a synthetic dataset.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
467,903
0805.4583
Channels that Heat Up
This work considers an additive noise channel where the time-k noise variance is a weighted sum of the channel input powers prior to time k. This channel is motivated by point-to-point communication between two terminals that are embedded in the same chip. Transmission heats up the entire chip and hence increases the thermal noise at the receiver. The capacity of this channel (both with and without feedback) is studied at low transmit powers and at high transmit powers. At low transmit powers, the slope of the capacity-vs-power curve at zero is computed and it is shown that the heating-up effect is beneficial. At high transmit powers, conditions are determined under which the capacity is bounded, i.e., under which the capacity does not grow to infinity as the allowed average power tends to infinity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,848
2502.06204
Non-literal Understanding of Number Words by Language Models
Humans naturally interpret numbers non-literally, effortlessly combining context, world knowledge, and speaker intent. We investigate whether large language models (LLMs) interpret numbers similarly, focusing on hyperbole and pragmatic halo effects. Through systematic comparison with human data and computational models of pragmatic reasoning, we find that LLMs diverge from human interpretation in striking ways. By decomposing pragmatic reasoning into testable components, grounded in the Rational Speech Act framework, we pinpoint where LLM processing diverges from human cognition -- not in prior knowledge, but in reasoning with it. This insight leads us to develop a targeted solution -- chain-of-thought prompting inspired by an RSA model makes LLMs' interpretations more human-like. Our work demonstrates how computational cognitive models can both diagnose AI-human differences and guide development of more human-like language understanding capabilities.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
531,982
1805.10852
A Pragmatic AI Approach to Creating Artistic Visual Variations by Neural Style Transfer
On a constant quest for inspiration, designers can become more effective with tools that facilitate their creative process and let them overcome design fixation. This paper explores the practicality of applying neural style transfer as an emerging design tool for generating creative digital content. To this aim, the present work explores a well-documented neural style transfer algorithm (Johnson 2016) in four experiments on four relevant visual parameters: number of iterations, learning rate, total variation, content vs. style weight. The results allow a pragmatic recommendation of parameter configuration (number of iterations: 200 to 300, learning rate: 2e-1 to 4e-1, total variation: 1e-4 to 1e-8, content weights vs. style weights: 50:100 to 200:100) that saves extensive experimentation time and lowers the technical entry barrier. With this rule-of-thumb insight, visual designers can effectively apply deep learning to create artistic visual variations of digital content. This could enable designers to leverage AI for creating design works as state-of-the-art.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
98,783
2012.10360
When Machine Learning Meets Quantum Computers: A Case Study
Along with the development of AI democratization, the machine learning approach, in particular neural networks, has been applied to a wide range of applications. In different application scenarios, the neural network will be accelerated on the tailored computing platform. The acceleration of neural networks on classical computing platforms, such as CPU, GPU, FPGA, ASIC, has been widely studied; however, when the scale of the application consistently grows, the memory bottleneck becomes obvious, widely known as the memory wall. In response to such a challenge, advanced quantum computing, which can represent 2^N states with N quantum bits (qubits), is regarded as a promising solution. It is thus imperative to know how to design the quantum circuit for accelerating neural networks. Most recently, there are initial works studying how to map neural networks to actual quantum processors. To better understand the state-of-the-art design and inspire new design methodology, this paper carries out a case study to demonstrate an end-to-end implementation. On the neural network side, we employ the multilayer perceptron to complete image classification tasks using the standard and widely used MNIST dataset. On the quantum computing side, we target IBM Quantum processors, which can be programmed and simulated by using IBM Qiskit. This work targets the acceleration of the inference phase of a trained neural network on the quantum processor. Along with the case study, we will demonstrate the typical procedure for mapping neural networks to quantum circuits.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,326
1907.01869
Simple vs complex temporal recurrences for video saliency prediction
This paper investigates modifying an existing neural network architecture for static saliency prediction using two types of recurrences that integrate information from the temporal domain. The first modification is the addition of a ConvLSTM within the architecture, while the second is a conceptually simple exponential moving average of an internal convolutional state. We use weights pre-trained on the SALICON dataset and fine-tune our model on DHF1K. Our results show that both modifications achieve state-of-the-art results and produce similar saliency maps. Source code is available at https://git.io/fjPiB.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
137,462
1805.11264
Disentangling by Partitioning: A Representation Learning Framework for Multimodal Sensory Data
Multimodal sensory data resembles the form of information perceived by humans for learning, and is easy to obtain in large quantities. Compared to unimodal data, synchronization of concepts between modalities in such data provides supervision for disentangling the underlying explanatory factors of each modality. Previous work leveraging multimodal data has mainly focused on retaining only the modality-invariant factors while discarding the rest. In this paper, we present a partitioned variational autoencoder (PVAE) and several training objectives to learn disentangled representations, which encode not only the shared factors, but also modality-dependent ones, into separate latent variables. Specifically, PVAE integrates a variational inference framework and a multimodal generative model that partitions the explanatory factors and conditions only on the relevant subset of them for generation. We evaluate our model on two parallel speech/image datasets, and demonstrate its ability to learn disentangled representations by qualitatively exploring within-modality and cross-modality conditional generation with semantics and styles specified by examples. For quantitative analysis, we evaluate the classification accuracy of automatically discovered semantic units. Our PVAE can achieve over 99% accuracy on both modalities.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
98,885
2012.04477
Analyzing Finite Neural Networks: Can We Trust Neural Tangent Kernel Theory?
Neural Tangent Kernel (NTK) theory is widely used to study the dynamics of infinitely-wide deep neural networks (DNNs) under gradient descent. But do the results for infinitely-wide networks give us hints about the behavior of real finite-width ones? In this paper, we study empirically when NTK theory is valid in practice for fully-connected ReLU and sigmoid DNNs. We find out that whether a network is in the NTK regime depends on the hyperparameters of random initialization and the network's depth. In particular, NTK theory does not explain the behavior of sufficiently deep networks initialized so that their gradients explode as they propagate through the network's layers: the kernel is random at initialization and changes significantly during training in this case, contrary to NTK theory. On the other hand, in the case of vanishing gradients, DNNs are in the NTK regime but become untrainable rapidly with depth. We also describe a framework to study generalization properties of DNNs, in particular the variance of the network's output function, by means of NTK theory and discuss its limits.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
210,471
1606.09281
Multiphase Segmentation For Simultaneously Homogeneous and Textural Images
Segmentation remains an important problem in image processing. For homogeneous (piecewise smooth) images, a number of important models have been developed and refined over the past several decades. However, these models often fail when applied to the substantially larger class of natural images that simultaneously contain regions of both texture and homogeneity. This work introduces a bi-level constrained minimization model for simultaneous multiphase segmentation of images containing both homogeneous and textural regions. We develop novel norms defined in different functional Banach spaces for the segmentation which results in a non-convex minimization. Finally, we develop a generalized notion of segmentation delving into approximation theory and demonstrating that a more refined decomposition of these images results in multiple meaningful components. Both theoretical results and demonstrations on natural images are provided.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
57,974
0901.0269
Random Linear Network Coding For Time Division Duplexing: Energy Analysis
We study the energy performance of random linear network coding for time division duplexing channels. We assume a packet erasure channel with nodes that cannot transmit and receive information simultaneously. The sender transmits coded data packets back-to-back before stopping to wait for the receiver to acknowledge the number of degrees of freedom, if any, that are required to decode correctly the information. Our analysis shows that, in terms of mean energy consumed, there is an optimal number of coded data packets to send before stopping to listen. This number depends on the energy needed to transmit each coded packet and the acknowledgment (ACK), probabilities of packet and ACK erasure, and the number of degrees of freedom that the receiver requires to decode the data. We show that its energy performance is superior to that of a full-duplex system. We also study the performance of our scheme when the number of coded packets is chosen to minimize the mean time to complete transmission as in [1]. Energy performance under this optimization criterion is found to be close to optimal, thus providing a good trade-off between energy and time required to complete transmissions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,879
2105.11131
Unsupervised Video Summarization with a Convolutional Attentive Adversarial Network
With the explosive growth of video data, video summarization, which attempts to seek the minimum subset of frames while still conveying the main story, has become one of the hottest topics. Nowadays, substantial achievements have been made by supervised learning techniques, especially after the emergence of deep learning. However, it is extremely expensive and difficult to collect human annotation for large-scale video datasets. To address this problem, we propose a convolutional attentive adversarial network (CAAN), whose key idea is to build a deep summarizer in an unsupervised way. Upon the generative adversarial network, our overall framework consists of a generator and a discriminator. The former predicts importance scores for all frames of a video while the latter tries to distinguish the score-weighted frame features from original frame features. Specifically, the generator employs a fully convolutional sequence network to extract global representation of a video, and an attention-based network to output normalized importance scores. To learn the parameters, our objective function is composed of three loss functions, which can guide the frame-level importance score prediction collaboratively. To validate this proposed method, we have conducted extensive experiments on two public benchmarks SumMe and TVSum. The results show the superiority of our proposed method against other state-of-the-art unsupervised approaches. Our method even outperforms some published supervised approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
236,607
2206.06882
An Accurate HDDL Domain Learning Algorithm from Partial and Noisy Observations
The Hierarchical Task Network ({\sf HTN}) formalism is very expressive and used to express a wide variety of planning problems. In contrast to the classical {\sf STRIPS} formalism in which only the action model needs to be specified, the {\sf HTN} formalism requires to specify, in addition, the tasks of the problem and their decomposition into subtasks, called {\sf HTN} methods. For this reason, hand-encoding {\sf HTN} problems is considered more difficult and more error-prone by experts than classical planning problems. To tackle this problem, we propose a new approach (HierAMLSI) based on grammar induction to acquire {\sf HTN} planning domain knowledge, by learning action models and {\sf HTN} methods with their preconditions. Unlike other approaches, HierAMLSI is able to learn both actions and methods from noisy and partial input observations with a high level of accuracy.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
302,531
2308.08054
Consensus on Lie groups for the Riemannian Center of Mass
In this paper, we develop a consensus algorithm for distributed computation of the Riemannian center of mass (RCM) on Lie Groups. The algorithm is built upon a distributed optimization reformulation that allows developing an intrinsic, distributed (without relying on a consensus subroutine), and a computationally efficient protocol for the RCM computation. The novel idea for developing this fast distributed algorithm is to utilize a Riemannian version of distributed gradient flow combined with a gradient tracking technique. We first guarantee that, under certain conditions, the limit point of our algorithm is the RCM point of interest. We then provide a proof of global convergence in the Euclidean setting, that can be viewed as a "geometric" dynamic consensus that converges to the average from arbitrary initial points. Finally, we proceed to showcase the superior convergence properties of the proposed approach as compared with other classes of consensus optimization-based algorithms for the RCM computation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
385,745
2405.07500
PromptLink: Leveraging Large Language Models for Cross-Source Biomedical Concept Linking
Linking (aligning) biomedical concepts across diverse data sources enables various integrative analyses, but it is challenging due to the discrepancies in concept naming conventions. Various strategies have been developed to overcome this challenge, such as those based on string-matching rules, manually crafted thesauri, and machine learning models. However, these methods are constrained by limited prior biomedical knowledge and can hardly generalize beyond the limited amounts of rules, thesauri, or training samples. Recently, large language models (LLMs) have exhibited impressive results in diverse biomedical NLP tasks due to their unprecedentedly rich prior knowledge and strong zero-shot prediction abilities. However, LLMs suffer from issues including high costs, limited context length, and unreliable predictions. In this research, we propose PromptLink, a novel biomedical concept linking framework that leverages LLMs. It first employs a biomedical-specialized pre-trained language model to generate candidate concepts that can fit in the LLM context windows. Then it utilizes an LLM to link concepts through two-stage prompts, where the first-stage prompt aims to elicit the biomedical prior knowledge from the LLM for the concept linking task and the second-stage prompt enforces the LLM to reflect on its own predictions to further enhance their reliability. Empirical results on the concept linking task between two EHR datasets and an external biomedical KG demonstrate the effectiveness of PromptLink. Furthermore, PromptLink is a generic framework without reliance on additional prior knowledge, context, or training data, making it well-suited for concept linking across various types of data sources. The source code is available at https://github.com/constantjxyz/PromptLink.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
453,741
1606.01243
BES with FEM: Building Energy Simulation using Finite Element Methods
An overall objective of energy efficiency in the built environment is to improve building and systems performances in terms of durability, comfort and economics. In order to predict, improve and meet a certain set of performance requirements related to the indoor climate of buildings and the associated energy demand, building energy simulation (BES) tools are indispensable. Due to the rapid development of FEM software and Multiphysics approaches, it should be possible to build and simulate full 3D models of buildings regarding the energy demand. The paper presents a methodology for performing building energy simulation with Comsol. The method was applied to an international test box experiment. The results showed an almost perfect agreement between the used BES model and Comsol. These preliminary results confirm the great opportunities to use FEM related software for building energy performance simulation.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
56,772
1211.6496
TwitterPaul: Extracting and Aggregating Twitter Predictions
This paper introduces TwitterPaul, a system designed to make use of Social Media data to help to predict game outcomes for the 2010 FIFA World Cup tournament. To this end, we extracted over 538K mentions of football games from a large sample of tweets that occurred during the World Cup, and we classified them into different types with a precision of up to 88%. The different mentions were aggregated in order to make predictions about the outcomes of the actual games. We attempt to learn which Twitter users are accurate predictors and explore several techniques in order to exploit this information to make more accurate predictions. We compare our results to strong baselines and against the betting line (prediction market) and found that the quality of extractions is more important than the quantity, suggesting that high precision methods working on a medium-sized dataset are preferable over low precision methods that use a larger amount of data. Finally, by aggregating some classes of predictions, the system performance is close to that of the betting line. Furthermore, we believe that this domain independent framework can help to predict other sports, elections, product release dates and other future events that people talk about in social media.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
false
19,981
1812.00615
Spatial-temporal Fusion Convolutional Neural Network for Simulated Driving Behavior Recognition
Abnormal driving behaviour is one of the leading causes of terrible traffic accidents endangering human life. Therefore, study on driving behaviour surveillance has become essential to traffic security and public management. In this paper, we conduct this promising research and employ a two stream CNN framework for video-based driving behaviour recognition, in which the spatial stream CNN captures appearance information from still frames, whilst the temporal stream CNN captures motion information with pre-computed optical flow displacement between a few adjacent video frames. We investigate different spatial-temporal fusion strategies to combine the intra frame static clues and inter frame dynamic clues for final behaviour recognition. So as to validate the effectiveness of the designed spatial-temporal deep learning based model, we create a simulated driving behaviour dataset, containing 1237 videos with 6 different driving behaviours for recognition. Experimental results show that our proposed method obtains noticeable performance improvements compared to the existing methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
115,312
2409.09412
Label Convergence: Defining an Upper Performance Bound in Object Recognition through Contradictory Annotations
Annotation errors are a challenge not only during training of machine learning models, but also during their evaluation. Label variations and inaccuracies in datasets often manifest as contradictory examples that deviate from established labeling conventions. Such inconsistencies, when significant, prevent models from achieving optimal performance on metrics such as mean Average Precision (mAP). We introduce the notion of "label convergence" to describe the highest achievable performance under the constraint of contradictory test annotations, essentially defining an upper bound on model accuracy. Recognizing that noise is an inherent characteristic of all data, our study analyzes five real-world datasets, including the LVIS dataset, to investigate the phenomenon of label convergence. We approximate that label convergence is between 62.63-67.52 mAP@[0.5:0.95:0.05] for LVIS with 95% confidence, attributing these bounds to the presence of real annotation errors. With current state-of-the-art (SOTA) models at the upper end of the label convergence interval for the well-studied LVIS dataset, we conclude that model capacity is sufficient to solve current object detection problems. Therefore, future efforts should focus on three key aspects: (1) updating the problem specification and adjusting evaluation practices to account for unavoidable label noise, (2) creating cleaner data, especially test data, and (3) including multi-annotated data to investigate annotation variation and make these issues visible from the outset.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
488,308
2304.04227
Video ChatCaptioner: Towards Enriched Spatiotemporal Descriptions
Video captioning aims to convey dynamic scenes from videos using natural language, facilitating the understanding of spatiotemporal information within our environment. Although there have been recent advances, generating detailed and enriched video descriptions continues to be a substantial challenge. In this work, we introduce Video ChatCaptioner, an innovative approach for creating more comprehensive spatiotemporal video descriptions. Our method employs a ChatGPT model as a controller, specifically designed to select frames for posing video content-driven questions. Subsequently, a robust algorithm is utilized to answer these visual queries. This question-answer framework effectively uncovers intricate video details and shows promise as a method for enhancing video content. Following multiple conversational rounds, ChatGPT can summarize enriched video content based on previous conversations. We qualitatively demonstrate that our Video ChatCaptioner can generate captions containing more visual details about the videos. The code is publicly available at https://github.com/Vision-CAIR/ChatCaptioner
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
357,139
1908.04594
A Building-Block Approach to State-Space Modeling of DC-DC Converter Systems
Small-signal models of DC-DC converters are often based on a state-space averaging approach, from which both control-oriented and other frequency-domain characteristics, such as input or output impedance, can be derived. Updating these models when extending the converter by filters or non-trivial loads, or adding control loops, can become a tedious task, however. To simplify this potentially error-prone process, a modular modeling approach is being proposed in this article. It consists of small state-space models for certain building blocks of a converter system on the one hand, and standardized operations for connecting these subsystem models to an overall converter system model on the other hand. The resulting state-space system model builds upon a two-port converter description and allows the extraction of control-oriented and impedance characteristics at any modeling stage, be it open loop or closed loop, single converter or series connections of converters. The ease of creating more complex models enabled by the proposed approach is also demonstrated with examples comprising multiple control loops or cascaded converters.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
141,523
2302.13380
Closed-loop Error Correction Learning Accelerates Experimental Discovery of Thermoelectric Materials
The exploration of thermoelectric materials is challenging considering the large materials space, combined with added exponential degrees of freedom coming from doping and the diversity of synthetic pathways. Here we seek to incorporate historical data and update and refine it using experimental feedback by employing error-correction learning (ECL). We thus learn from prior datasets and then adapt the model to differences in synthesis and characterization that are otherwise difficult to parameterize. We then apply this strategy to discovering thermoelectric materials where we prioritize synthesis at temperatures < 300{\deg}C. We document a previously unreported chemical family of thermoelectric materials, PbSe:SnSb, finding that the best candidate in this chemical family, 2 wt% SnSb doped PbSe, exhibits a power factor more than 2x that of PbSe. Our investigations show that our closed-loop experimentation strategy reduces the required number of experiments to find an optimized material by as much as 3x compared to high-throughput searches powered by state-of-the-art machine learning models. We also observe that this improvement is dependent on the accuracy of the prior in a manner that exhibits diminishing returns, and after a certain accuracy is reached, it is factors associated with experimental pathways that dictate the trends.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
347,925
2502.02054
RAPID: Robust and Agile Planner Using Inverse Reinforcement Learning for Vision-Based Drone Navigation
This paper introduces a learning-based visual planner for agile drone flight in cluttered environments. The proposed planner generates collision-free waypoints in milliseconds, enabling drones to perform agile maneuvers in complex environments without building separate perception, mapping, and planning modules. Learning-based methods, such as behavior cloning (BC) and reinforcement learning (RL), demonstrate promising performance in visual navigation but still face inherent limitations. BC is susceptible to compounding errors due to limited expert imitation, while RL struggles with reward function design and sample inefficiency. To address these limitations, this paper proposes an inverse reinforcement learning (IRL)-based framework for high-speed visual navigation. By leveraging IRL, it is possible to reduce the number of interactions with simulation environments and improve the capability to deal with high-dimensional spaces while preserving the robustness of RL policies. A motion primitive-based path planning algorithm collects an expert dataset with privileged map data from diverse environments, ensuring comprehensive scenario coverage. By leveraging both the acquired expert and learner datasets gathered from the agent's interactions with the simulation environments, a robust reward function and policy are learned across diverse states. While the proposed method is trained in a simulation environment only, it can be directly applied to real-world scenarios without additional training or tuning. The performance of the proposed method is validated in both simulation and real-world environments, including forests and various structures. The trained policy achieves an average speed of 7 m/s and a maximum speed of 8.8 m/s in real flight experiments. To the best of our knowledge, this is the first work to successfully apply an IRL framework for high-speed visual navigation of drones.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
530,162
2209.15219
Optimal Query Complexities for Dynamic Trace Estimation
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly, such as during an optimization process. Specifically, for any $m$ matrices $A_1,...,A_m$ with consecutive differences bounded in Schatten-$1$ norm by $\alpha$, we provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability with an optimal query complexity of $\widetilde{O}\left(m \alpha\sqrt{\log(1/\delta)}/\epsilon + m\log(1/\delta)\right)$, improving the dependence on both $\alpha$ and $\delta$ from Dharangutte and Musco (NeurIPS, 2021). Our procedure works without additional norm bounds on $A_i$ and can be generalized to a bound for the $p$-th Schatten norm for $p \in [1,2]$, giving a complexity of $\widetilde{O}\left(m \alpha\left(\sqrt{\log(1/\delta)}/\epsilon\right)^p +m \log(1/\delta)\right)$. By using novel reductions to communication complexity and information-theoretic analyses of Gaussian matrices, we provide matching lower bounds for static and dynamic trace estimation in all relevant parameters, including the failure probability. Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation, resolving open questions of prior work.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
320,522
2010.04480
MLQE-PE: A Multilingual Quality Estimation and Post-Editing Dataset
We present MLQE-PE, a new dataset for Machine Translation (MT) Quality Estimation (QE) and Automatic Post-Editing (APE). The dataset contains eleven language pairs, with human labels for up to 10,000 translations per language pair in the following formats: sentence-level direct assessments and post-editing effort, and word-level good/bad labels. It also contains the post-edited sentences, as well as titles of the articles where the sentences were extracted from, and the neural MT models used to translate the text.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
199,755
2501.15077
NetChain: Authenticated Blockchain Top-k Graph Data Queries and its Application in Asset Management
As a valuable digital resource, graph data is an important data asset, which has been widely utilized across various fields to optimize decision-making and enable smarter solutions. To manage data assets, blockchain is widely used to enable data sharing and trading, but it cannot support complex analytical queries. vChain was proposed to achieve verifiable boolean queries over blockchain by designing an embedded authenticated data structure (ADS). However, for generating (non-)existence proofs, vChain suffers from expensive storage and computation costs in ADS construction, along with high communication and verification costs. In this paper, we propose a novel NetChain framework that enables efficient top-k queries over on-chain graph data with verifiability. Specifically, we design a novel authenticated two-layer index that supports (non-)existence proof generation at the block level and built-in verifiability for matched objects. To further alleviate the computation and verification overhead, an optimized variant NetChain+ is derived. The authenticity of our frameworks is validated through security analysis. Evaluations show that NetChain and NetChain+ outperform vChain, respectively achieving up to 85X and 31X improvements on ADS construction. Moreover, compared with vChain, NetChain+ reduces the communication and verification costs by 87% and 96% respectively.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
527,394
2410.03063
Integrating Natural Language Prompting Tasks in Introductory Programming Courses
Introductory programming courses often emphasize mastering syntax and basic constructs before progressing to more complex and interesting programs. This bottom-up approach can be frustrating for novices, shifting the focus away from problem solving and potentially making computing less appealing to a broad range of students. The rise of generative AI for code production could partially address these issues by fostering new skills via interaction with AI models, including constructing high-level prompts and evaluating code that is automatically generated. In this experience report, we explore the inclusion of two prompt-focused activities in an introductory course, implemented across four labs in a six-week module. The first requires students to solve computational problems by writing natural language prompts, emphasizing problem-solving over syntax. The second involves students crafting prompts to generate code equivalent to provided fragments, to foster an understanding of the relationship between prompts and code. Most of the students in the course had reported finding programming difficult to learn, often citing frustrations with syntax and debugging. We found that self-reported difficulty with learning programming had a strong inverse relationship with performance on traditional programming assessments such as tests and projects, as expected. However, performance on the natural language tasks was less strongly related to self-reported difficulty, suggesting they may target different skills. Learning how to communicate with AI coding models is becoming an important skill, and natural language prompting tasks may appeal to a broad range of students.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
494,596
0705.3949
Translating a first-order modal language to relational algebra
This paper is about Kripke structures stored in a relational database and queried with a modal language. First, the modal language that is used is introduced, followed by definitions of the database and relational algebra. Based on these definitions, two things are presented: a mapping from components of the modal structure to a relational database schema and instance, and a translation from queries in the modal language to relational algebra queries.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
285
2409.14454
A Unified Approach for Learning the Dynamics of Power System Generators and Inverter-based Resources
The growing prevalence of inverter-based resources (IBRs) for renewable energy integration and electrification greatly challenges power system dynamic analysis. To account for both synchronous generators (SGs) and IBRs, this work presents an approach for learning the model of an individual dynamic component. The recurrent neural network (RNN) model is used to match the recursive structure in predicting the key dynamical states of a component from its terminal bus voltage and set-point input. To deal with the fast transients especially due to IBRs, we develop a Stable Integral (SI-)RNN to mimic high-order integral methods that can enhance the stability and accuracy for the dynamic learning task. We demonstrate that the proposed SI-RNN model not only can successfully predict the component's dynamic behaviors, but also offers the possibility of efficiently computing the dynamic sensitivity relative to a set-point change. These capabilities have been numerically validated based on full-order Electromagnetic Transient (EMT) simulations on a small test system with both SGs and IBRs, particularly for predicting the dynamics of grid-forming inverters.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
490,470
2409.19891
Opt-in Camera: Person Identification in Video via UWB Localization and Its Application to Opt-in Systems
This paper presents opt-in camera, a concept of privacy-preserving camera systems capable of recording only specific individuals in a crowd who explicitly consent to be recorded. Our system utilizes a mobile wireless communication tag attached to personal belongings as proof of opt-in and as a means of localizing tag carriers in video footage. Specifically, the on-ground positions of the wireless tag are first tracked over time using the unscented Kalman filter (UKF). The tag trajectory is then matched against visual tracking results for pedestrians found in videos to identify the tag carrier. Technically, we devise a dedicated trajectory matching technique based on constrained linear optimization, as well as a novel calibration technique that handles wireless tag-camera calibration and hyperparameter tuning for the UKF, which mitigates the non-line-of-sight (NLoS) issue in wireless localization. We realize the proposed opt-in camera system using ultra-wideband (UWB) devices and an off-the-shelf webcam installed in the environment. Experimental results demonstrate that our system can perform opt-in recording of individuals in near real-time at 10 fps, with reliable identification accuracy for a crowd of 8-23 people in a confined space.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
492,897
1311.2503
Predictable Feature Analysis
Every organism in an environment, whether biological, robotic or virtual, must be able to predict certain aspects of its environment in order to survive or perform whatever task is intended. It needs a model that is capable of estimating the consequences of possible actions, so that planning, control, and decision-making become feasible. For scientific purposes, such models are usually created in a problem-specific manner using differential equations and other techniques from control and system theory. In contrast, we aim for an unsupervised approach that builds up the desired model in a self-organized fashion. Inspired by Slow Feature Analysis (SFA), our approach is to extract sub-signals from the input that behave as predictably as possible. These "predictable features" are highly relevant for modeling, because predictability is a desired property of the needed consequence-estimating model by definition. In our approach, we measure predictability with respect to a certain prediction model. We focus here on the solution of the arising optimization problem and present a tractable algorithm based on algebraic methods which we call Predictable Feature Analysis (PFA). We prove that the algorithm finds the globally optimal signal, if this signal can be predicted with low error. To deal with cases where the optimal signal has a significant prediction error, we provide a robust, heuristically motivated variant of the algorithm and verify it empirically. Additionally, we give formal criteria a prediction model must meet to be suitable for measuring predictability in the PFA setting and also provide a suitable default model along with a formal proof that it meets these criteria.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
28,321
1711.01177
GRIM-Filter: Fast Seed Location Filtering in DNA Read Mapping Using Processing-in-Memory Technologies
Motivation: Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. Results: We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x--6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x--3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. Availability: The code is available online at: https://github.com/CMU-SAFARI/GRIM
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
83,842
2111.07555
Confucius, Cyberpunk and Mr. Science: Comparing AI ethics between China and the EU
The exponential development and application of artificial intelligence triggered an unprecedented global concern for potential social and ethical issues. Stakeholders from different industries, international foundations, governmental organisations and standards institutions quickly improvised and created various codes of ethics attempting to regulate AI. A major concern is the large homogeneity and presumed consensualism around these principles. While it is true that some ethical doctrines, such as the famous Kantian deontology, aspire to universalism, they are however not universal in practice. In fact, ethical pluralism is more about differences in which relevant questions to ask rather than different answers to a common question. When people abide by different moral doctrines, they tend to disagree on the very approach to an issue. Even when people from different cultures happen to agree on a set of common principles, it does not necessarily mean that they share the same understanding of these concepts and what they entail. In order to better understand the philosophical roots and cultural context underlying ethical principles in AI, we propose to analyse and compare the ethical principles endorsed by the Chinese National New Generation Artificial Intelligence Governance Professional Committee (CNNGAIGPC) and those elaborated by the European High-level Expert Group on AI (HLEGAI). China and the EU have very different political systems and diverge in their cultural heritages. In our analysis, we wish to highlight that principles that seem similar a priori may actually have different meanings, derived from different approaches and reflect distinct goals.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
266,414
2202.13388
PanoFlow: Learning 360{\deg} Optical Flow for Surrounding Temporal Understanding
Optical flow estimation is a basic task in self-driving and robotics systems, which enables the temporal interpretation of traffic scenes. Autonomous vehicles clearly benefit from the ultra-wide Field of View (FoV) offered by 360{\deg} panoramic sensors. However, due to the unique imaging process of panoramic cameras, models designed for pinhole images do not directly generalize satisfactorily to 360{\deg} panoramic images. In this paper, we put forward PanoFlow, a novel network framework to learn optical flow for panoramic images. To overcome the distortions introduced by equirectangular projection in panoramic transformation, we design a Flow Distortion Augmentation (FDA) method, which contains radial flow distortion (FDA-R) or equirectangular flow distortion (FDA-E). We further look into the definition and properties of cyclic optical flow for panoramic videos, and hereby propose a Cyclic Flow Estimation (CFE) method by leveraging the cyclicity of spherical images to infer 360{\deg} optical flow and converting large displacement to relatively small displacement. PanoFlow is applicable to any existing flow estimation method and benefits from the progress of narrow-FoV flow estimation. In addition, we create and release a synthetic panoramic dataset FlowScape based on CARLA to facilitate training and quantitative analysis. PanoFlow achieves state-of-the-art performance on the public OmniFlowNet and the established FlowScape benchmarks. Our proposed approach reduces the End-Point-Error (EPE) on FlowScape by 27.3%. On OmniFlowNet, PanoFlow achieves a 55.5% error reduction from the best published result. We also qualitatively validate our method via a collection vehicle and a public real-world OmniPhotos dataset, indicating strong potential and robustness for real-world navigation applications. Code and dataset are publicly available at https://github.com/MasterHow/PanoFlow.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
282,583
2302.08242
Tuning computer vision models with task rewards
Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models. The issue is exacerbated when the task involves complex structured outputs, as it becomes harder to design procedures which address this misalignment. In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward. We adopt this approach and show its surprising effectiveness across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization and image captioning. We believe this approach has the potential to be widely useful for better aligning models with a diverse range of computer vision tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
345,995
2112.00183
Descriptive vs. inferential community detection in networks: pitfalls, myths, and half-truths
Community detection is one of the most important methodological fields of network science, and one which has attracted a significant amount of attention over the past decades. This area deals with the automated division of a network into fundamental building blocks, with the objective of providing a summary of its large-scale structure. Despite its importance and widespread adoption, there is a noticeable gap between what is arguably the state-of-the-art and the methods that are actually used in practice in a variety of fields. Here we attempt to address this discrepancy by dividing existing methods according to whether they have a "descriptive" or an "inferential" goal. While descriptive methods find patterns in networks based on context-dependent notions of community structure, inferential methods articulate generative models, and attempt to fit them to data. In this way, they are able to provide insights into the mechanisms of network formation, and separate structure from randomness in a manner supported by statistical evidence. We review how employing descriptive methods with inferential aims is riddled with pitfalls and misleading answers, and thus should be in general avoided. We argue that inferential methods are more typically aligned with clearer scientific questions, yield more robust results, and should be in many cases preferred. We attempt to dispel some myths and half-truths often believed when community detection is employed in practice, in an effort to improve both the use of such methods as well as the interpretation of their results.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
269,049
2308.06749
FastLLVE: Real-Time Low-Light Video Enhancement with Intensity-Aware Lookup Table
Low-Light Video Enhancement (LLVE) has received considerable attention in recent years. One of the critical requirements of LLVE is inter-frame brightness consistency, which is essential for maintaining the temporal coherence of the enhanced video. However, most existing single-image-based methods fail to address this issue, resulting in a flickering effect that degrades the overall quality after enhancement. Moreover, 3D Convolutional Neural Network (CNN)-based methods, which are designed for video to maintain inter-frame consistency, are computationally expensive, making them impractical for real-time applications. To address these issues, we propose an efficient pipeline named FastLLVE that leverages the Look-Up-Table (LUT) technique to maintain inter-frame brightness consistency effectively. Specifically, we design a learnable Intensity-Aware LUT (IA-LUT) module for adaptive enhancement, which addresses the low-dynamic problem in low-light scenarios. This enables FastLLVE to perform low-latency and low-complexity enhancement operations while maintaining high-quality results. Experimental results on benchmark datasets demonstrate that our method achieves the State-Of-The-Art (SOTA) performance in terms of both image quality and inter-frame brightness consistency. More importantly, our FastLLVE can process 1,080p videos at $\mathit{50+}$ Frames Per Second (FPS), which is $\mathit{2 \times}$ faster than SOTA CNN-based methods in inference time, making it a promising solution for real-time applications. The code is available at https://github.com/Wenhao-Li-777/FastLLVE.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
385,256
2012.09538
Virtually Extended Coworking Spaces? -- The Reinforcement of Social Proximity, Motivation and Knowledge Sharing Through ICT
Coworking is characterized by different people sharing a workspace to benefit from the inspiring working atmosphere. Even before Covid-19, many positive effects and dynamics were not fully exploited by their users. One reason is a lack of trust among the users that leads to social isolation, although a coworking space should increase knowledge and idea exchange. As most people in coworking spaces use information and communication technologies (ICT) for their collaboration with their clients or employers, we examined if and how ICT can be used to support the positive effects and dynamics of coworking spaces. For this, we conducted eight interviews with freelancers and entrepreneurs who have already worked in coworking spaces in order to identify requirements for a complementary virtual coworking platform. We found that social proximity, motivation and knowledge sharing could be increased by such a platform. Based on the process virtualization theory, we derived six design principles.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
212,103
1601.03855
A Relative Exponential Weighing Algorithm for Adversarial Utility-based Dueling Bandits
We study the K-armed dueling bandit problem which is a variation of the classical Multi-Armed Bandit (MAB) problem in which the learner receives only relative feedback about the selected pairs of arms. We propose a new algorithm called Relative Exponential-weight algorithm for Exploration and Exploitation (REX3) to handle the adversarial utility-based formulation of this problem. This algorithm is a non-trivial extension of the Exponential-weight algorithm for Exploration and Exploitation (EXP3) algorithm. We prove a finite time expected regret upper bound of order O(sqrt(K ln(K) T)) for this algorithm and a general lower bound of order Omega(sqrt(KT)). Finally, we provide experimental results using real data from information retrieval applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
50,956
1805.09653
Uncertainty-Aware Attention for Reliable Interpretation and Prediction
The attention mechanism is effective in both focusing deep learning models on relevant features and interpreting them. However, attentions may be unreliable since the networks that generate them are often trained in a weakly-supervised manner. To overcome this limitation, we introduce the notion of input-dependent uncertainty to the attention mechanism, such that it generates attention for each feature with varying degrees of noise based on the given input, to learn larger variance on instances it is uncertain about. We learn this Uncertainty-aware Attention (UA) mechanism using variational inference, and validate it on various risk prediction tasks from electronic health records on which our model significantly outperforms existing attention models. The analysis of the learned attentions shows that our model generates attentions that comply with clinicians' interpretation, and provide richer interpretation via learned variance. Further evaluation of both the accuracy of the uncertainty calibration and the prediction performance with the "I don't know" decision shows that UA yields networks with high reliability as well.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
98,467
2107.01130
Ensemble of Loss Functions to Improve Generalizability of Deep Metric Learning methods
Deep Metric Learning (DML) learns a non-linear semantic embedding from input data that brings similar pairs together while keeping dissimilar data away from each other. To this end, many different methods have been proposed over the last decade with promising results in various applications. The success of a DML algorithm greatly depends on its loss function. However, no loss function is perfect, and it deals only with some aspects of an optimal similarity embedding. Besides, the generalizability of the DML on unseen categories during the test stage is an important matter that is not considered by existing loss functions. To address these challenges, we propose novel approaches to combine different losses built on top of a shared deep feature extractor. The proposed ensemble of losses enforces the deep model to extract features that are consistent with all losses. Since the selected losses are diverse and each emphasizes different aspects of an optimal semantic embedding, our effective combining methods yield a considerable improvement over any individual loss and generalize well on unseen categories. Here, there is no limitation in choosing loss functions, and our methods can work with any set of existing ones. Besides, they can optimize each loss function as well as its weight in an end-to-end paradigm with no need to adjust any hyper-parameter. We evaluate our methods on some popular datasets from the machine vision domain in conventional Zero-Shot-Learning (ZSL) settings. The results are very encouraging and show that our methods outperform all baseline losses by a large margin in all datasets.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
244,380
2308.06032
Large Language Models in Cryptocurrency Securities Cases: Can a GPT Model Meaningfully Assist Lawyers?
Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5's legal reasoning and ChatGPT's legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based on complaints written by a lawyer compared to ChatGPT. We feed fact patterns from real-life cases to GPT-3.5 and evaluate its ability to determine correct potential violations from the scenario and exclude spurious violations. Second, we had mock jurors assess complaints written by ChatGPT and lawyers. GPT-3.5's legal reasoning skills proved weak, though we expect improvement in future models, particularly given the violations it suggested tended to be correct (it merely missed additional, correct violations). ChatGPT performed better at legal drafting, and jurors' decisions were not statistically significantly associated with the author of the document upon which they based their decisions. Because GPT-3.5 cannot satisfactorily conduct legal reasoning tasks, it would be unlikely to be able to help lawyers in a meaningful way at this stage. However, ChatGPT's drafting skills (though, perhaps, still inferior to lawyers) could assist lawyers in providing legal services. Our research is the first to systematically study an LLM's legal drafting and reasoning capabilities in litigation, as well as in securities law and cryptocurrency-related misconduct.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
385,003
2410.22217
Towards Unifying Understanding and Generation in the Era of Vision Foundation Models: A Survey from the Autoregression Perspective
Autoregression in large language models (LLMs) has shown impressive scalability by unifying all language tasks into the next token prediction paradigm. Recently, there has been growing interest in extending this success to vision foundation models. In this survey, we review the recent advances and discuss future directions for autoregressive vision foundation models. First, we present the trend for the next generation of vision foundation models, i.e., unifying both understanding and generation in vision tasks. We then analyze the limitations of existing vision foundation models, and present a formal definition of autoregression with its advantages. Later, we categorize autoregressive vision foundation models from their vision tokenizers and autoregression backbones. Finally, we discuss several promising research challenges and directions. To the best of our knowledge, this is the first survey to comprehensively summarize autoregressive vision foundation models under the trend of unifying understanding and generation. A collection of related resources is available at https://github.com/EmmaSRH/ARVFM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
503,546
1912.05205
Efficient Robotic Task Generalization Using Deep Model Fusion Reinforcement Learning
Learning-based methods have been used to program robotic tasks in recent years. However, extensive training is usually required not only for the initial task learning but also for generalizing the learned model to the same task but in different environments. In this paper, we propose a novel Deep Reinforcement Learning algorithm for efficient task generalization and environment adaptation in the robotic task learning problem. The proposed method is able to efficiently generalize the previously learned task by model fusion to solve the environment adaptation problem. The proposed Deep Model Fusion (DMF) method reuses and combines the previously trained model to improve the learning efficiency and results. Besides, we also introduce a Multi-objective Guided Reward (MGR) shaping technique to further improve training efficiency. The proposed method was benchmarked with previous methods in various environments to validate its effectiveness.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
157,052
2010.02798
Policy learning in SE(3) action spaces
In the spatial action representation, the action space spans the space of target poses for robot motion commands, i.e. SE(2) or SE(3). This approach has been used to solve challenging robotic manipulation problems and shows promise. However, the method is often limited to a three dimensional action space and short horizon tasks. This paper proposes ASRSE3, a new method for handling higher dimensional spatial action spaces that transforms an original MDP with high dimensional action space into a new MDP with reduced action space and augmented state space. We also propose SDQfD, a variation of DQfD designed for large action spaces. ASRSE3 and SDQfD are evaluated in the context of a set of challenging block construction tasks. We show that both methods outperform standard baselines and can be used in practice on real robotics systems.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
199,165
1609.02252
On Performance Modeling for MANETs under General Limited Buffer Constraint
Understanding the real achievable performance of mobile ad hoc networks (MANETs) under practical network constraints is of great importance for their applications in future highly heterogeneous wireless network environments. This paper explores, for the first time, the performance modeling for MANETs under a general limited buffer constraint, where each network node maintains a limited source buffer of size $B_s$ to store its locally generated packets and also a limited shared relay buffer of size $B_r$ to store relay packets for other nodes. Based on the Queuing theory and birth-death chain theory, we first develop a general theoretical framework to fully depict the source/relay buffer occupancy process in such a MANET, which applies to any distributed MAC protocol and any mobility model that leads to the uniform distribution of nodes' locations in steady state. With the help of this framework, we then derive the exact expressions of several key network performance metrics, including achievable throughput, throughput capacity, and expected end-to-end delay. We further conduct case studies under two network scenarios and provide the corresponding theoretical/simulation results to demonstrate the application as well as the efficiency of our theoretical framework. Finally, we present extensive numerical results to illustrate the impacts of buffer constraint on the performance of a buffer-limited MANET.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
60,711
1004.4732
Minimum energy required to copy one bit of information
In this paper, we calculate energy required to copy one bit of useful information in the presence of thermal noise. For this purpose, we consider a quantum system capable of storing one bit of classical information, which is initially in a mixed state corresponding to temperature T. We calculate how many of these systems must be used to store useful information and control bits protecting the content against transmission errors. Finally, we analyze how adding these extra bits changes the total energy consumed during the copying.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,298
2312.06709
AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One
A handful of visual foundation models (VFMs) have recently emerged as the backbones for numerous downstream tasks. VFMs like CLIP, DINOv2, SAM are trained with distinct objectives, exhibiting unique characteristics for various downstream tasks. We find that despite their conceptual differences, these models can be effectively merged into a unified model through multi-teacher distillation. We name this approach AM-RADIO (Agglomerative Model -- Reduce All Domains Into One). This integrative approach not only surpasses the performance of individual teacher models but also amalgamates their distinctive features, such as zero-shot vision-language comprehension, detailed pixel-level understanding, and open vocabulary segmentation capabilities. In pursuit of the most hardware-efficient backbone, we evaluated numerous architectures in our multi-teacher distillation pipeline using the same training recipe. This led to the development of a novel architecture (E-RADIO) that exceeds the performance of its predecessors and is at least 7x faster than the teacher models. Our comprehensive benchmarking process covers downstream tasks including ImageNet classification, ADE20k semantic segmentation, COCO object detection and LLaVa-1.5 framework. Code: https://github.com/NVlabs/RADIO
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
414,645
2109.12773
Rumour Detection via Zero-shot Cross-lingual Transfer Learning
Most rumour detection models for social media are designed for one specific language (mostly English). There are over 40 languages on Twitter and most languages lack annotated resources to build rumour detection models. In this paper we propose a zero-shot cross-lingual transfer learning framework that can adapt a rumour detection model trained for a source language to another target language. Our framework utilises pretrained multilingual language models (e.g., multilingual BERT) and a self-training loop to iteratively bootstrap the creation of "silver labels" in the target language to adapt the model from the source language to the target language. We evaluate our methodology on English and Chinese rumour datasets and demonstrate that our model substantially outperforms competitive benchmarks in both source and target language rumour detection.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
257,408
2401.12181
Universal Neurons in GPT2 Language Models
A basic question within the emerging field of mechanistic interpretability is the degree to which neural networks learn the same underlying mechanisms. In other words, are neural mechanisms universal across different models? In this work, we study the universality of individual neurons across GPT2 models trained from different initial random seeds, motivated by the hypothesis that universal neurons are likely to be interpretable. In particular, we compute pairwise correlations of neuron activations over 100 million tokens for every neuron pair across five different seeds and find that 1-5% of neurons are universal, that is, pairs of neurons which consistently activate on the same inputs. We then study these universal neurons in detail, finding that they usually have clear interpretations and taxonomize them into a small number of neuron families. We conclude by studying patterns in neuron weights to establish several universal functional roles of neurons in simple circuits: deactivating attention heads, changing the entropy of the next token distribution, and predicting the next token to (not) be within a particular set.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
423,282
2207.08020
Sampling of the Wiener Process for Remote Estimation over a Channel with Unknown Delay Statistics
In this paper, we study an online sampling problem of the Wiener process. The goal is to minimize the mean squared error (MSE) of the remote estimator under a sampling frequency constraint when the transmission delay distribution is unknown. The sampling problem is reformulated into an optional stopping problem, and we propose an online sampling algorithm that can adaptively learn the optimal stopping threshold through stochastic approximation. We prove that the cumulative MSE regret grows with rate $\mathcal{O}(\ln k)$, where $k$ is the number of samples. Through Le Cam's two point method, we show that the worst-case cumulative MSE regret of any online sampling algorithm is lower bounded by $\Omega(\ln k)$. Hence, the proposed online sampling algorithm is minimax order-optimal. Finally, we validate the performance of the proposed algorithm via numerical simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
308,421
2501.02781
From Dense to Sparse: Event Response for Enhanced Residential Load Forecasting
Residential load forecasting (RLF) is crucial for resource scheduling in power systems. Most existing methods utilize all given load records (dense data) to indiscriminately extract the dependencies between historical and future time series. However, there exist important regular patterns residing in the event-related associations among different appliances (sparse knowledge), which have so far been ignored. In this paper, we propose an Event-Response Knowledge Guided approach (ERKG) for RLF by incorporating the estimation of electricity usage events for different appliances, mining event-related sparse knowledge from the load series. With ERKG, the event-response estimation enables portraying the electricity consumption behaviors of residents, revealing regular variations in appliance operational states. To be specific, ERKG consists of knowledge extraction and guidance: i) a forecasting model is designed for the electricity usage events by estimating appliance operational states, aiming to extract the event-related sparse knowledge; ii) a novel knowledge-guided mechanism is established by fusing such state estimates of the appliance events into the RLF model, which can give particular focus to the patterns of users' electricity consumption behaviors. Notably, ERKG can flexibly serve as a plug-in module to boost the capability of existing forecasting models by leveraging event response. In numerical experiments, extensive comparisons and ablation studies have verified the effectiveness of our ERKG, e.g., MAE can be reduced by over 8% on the tested state-of-the-art forecasting models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
522,628
1405.0521
Blind MIMOME Wiretap Channel with Delayed CSIT
We study the Gaussian MIMOME wiretap channel where a transmitter wishes to communicate a confidential message to a legitimate receiver in the presence of eavesdroppers, while the eavesdroppers should not be able to decode the confidential message. Each node in the network is equipped with an arbitrary number of antennas. Furthermore, channels are time varying, and there is no channel state information available at the transmitter (CSIT) with respect to eavesdroppers' channels; the transmitter only has access to delayed CSIT of the channel to the legitimate receiver. The secure degrees of freedom (SDoF) of such a network have only been characterized for special cases, and are unknown in general. We completely characterize the SDoF of this network for all antenna configurations. In particular, we strictly improve the state-of-the-art achievable scheme for this network by proposing more efficient artificial noise alignment at the receivers. Furthermore, we develop a tight upper bound by utilizing four important inequalities that provide lower bounds on the received signal dimensions at receivers which supply delayed CSIT or no CSIT, or at a collection of receivers where some supply no CSIT. These inequalities together allow for analysis of signal dimensions in networks with asymmetric CSIT; as a result, we present a converse proof that leads to the characterization of SDoF for all possible antenna configurations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
32,770
2003.03715
OVC-Net: Object-Oriented Video Captioning with Temporal Graph and Detail Enhancement
Traditional video captioning requests a holistic description of the video, yet detailed descriptions of specific objects may not be available. Without associating the moving trajectories, these image-based data-driven methods cannot understand the activities from the spatio-temporal transitions in the inter-object visual features. Besides, adopting ambiguous clip-sentence pairs in training goes against learning the multi-modal functional mappings owing to their one-to-many nature. In this paper, we propose a novel task to understand videos at the object level, named object-oriented video captioning. We introduce the video-based object-oriented video captioning network (OVC-Net) via temporal graph and detail enhancement to effectively analyze the activities along time and stably capture the vision-language connections under small-sample conditions. The temporal graph provides a useful supplement over previous image-based approaches, allowing the model to reason about the activities from the temporal evolution of visual features and the dynamic movement of spatial locations. The detail enhancement helps to capture the discriminative features among different objects, with which the subsequent captioning module can yield more informative and precise descriptions. Thereafter, we construct a new dataset, providing consistent object-sentence pairs, to facilitate effective cross-modal learning. To demonstrate the effectiveness, we conduct experiments on the new dataset and compare with the state-of-the-art video captioning methods. The experimental results show that OVC-Net precisely describes the concurrent objects and achieves state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
167,335
1904.08861
The Simplest Thing That Can Possibly Work: Pseudo-Relevance Feedback Using Text Classification
Motivated by recent commentary that has questioned today's pursuit of ever-more complex models and mathematical formalisms in applied machine learning and whether meaningful empirical progress is actually being made, this paper tries to tackle the decades-old problem of pseudo-relevance feedback with "the simplest thing that can possibly work". I present a technique based on training a document relevance classifier for each information need using pseudo-labels from an initial ranked list and then applying the classifier to rerank the retrieved documents. Experiments demonstrate significant improvements across a number of newswire collections, with initial rankings supplied by "bag of words" BM25 as well as from a well-tuned query expansion model. While this simple technique draws elements from several well-known threads in the literature, to my knowledge this exact combination has not previously been proposed and evaluated.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
128,204
1612.06202
iCrawl: Improving the Freshness of Web Collections by Integrating Social Web and Focused Web Crawling
Researchers in the Digital Humanities and journalists need to monitor, collect and analyze fresh online content regarding current events such as the Ebola outbreak or the Ukraine crisis on demand. However, existing focused crawling approaches only consider topical aspects while ignoring temporal aspects and therefore cannot achieve thematically coherent and fresh Web collections. Especially Social Media provide a rich source of fresh content, which is not used by state-of-the-art focused crawlers. In this paper we address the issues of enabling the collection of fresh and relevant Web and Social Web content for a topic of interest through seamless integration of Web and Social Media in a novel integrated focused crawler. The crawler collects Web and Social Media content in a single system and exploits the stream of fresh Social Media content for guiding the crawler.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
65,794
2308.05970
Focused Specific Objects NeRF
Most NeRF-based models are designed for learning the entire scene, and complex scenes can lead to longer learning times and poorer rendering effects. This paper utilizes scene semantic priors to make improvements in fast training, allowing the network to focus on the specific targets and not be affected by complex backgrounds. The training speed can be increased by 7.78 times with a better rendering effect, and small to medium sized targets can be rendered faster. In addition, this improvement applies to all NeRF-based models. Considering the inherent multi-view consistency and smoothness of NeRF, this paper also studies weak supervision by sparsely sampling negative ray samples. With this method, training can be further accelerated and rendering quality can be maintained. Finally, this paper extends the pixel semantic and color rendering formulas and proposes a new scene editing technique that can achieve unique displays of the specific semantic targets or mask them in rendering. To address the problem of incorrect inferences in unsupervised regions of the scene, we also design a self-supervised loop that combines morphological operations and clustering.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
384,975
1705.04469
TraX: The visual Tracking eXchange Protocol and Library
In this paper we address the problem of developing on-line visual tracking algorithms. We present a specialized communication protocol that serves as a bridge between a tracker implementation and the utilizing application. It decouples the development of algorithms from the application, encouraging re-usability. The primary use case is algorithm evaluation, where the protocol facilitates more complex evaluation scenarios than are used nowadays, thus pushing forward the field of visual tracking. We present a reference implementation of the protocol that makes it easy to use in several popular programming languages, and discuss where the protocol is already used as well as some usage scenarios that we envision for the future.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
73,341
2109.05700
Exploiting Heterogeneity in Robust Federated Best-Arm Identification
We study a federated variant of the best-arm identification problem in stochastic multi-armed bandits: a set of clients, each of whom can sample only a subset of the arms, collaborate via a server to identify the best arm (i.e., the arm with the highest mean reward) with prescribed confidence. For this problem, we propose Fed-SEL, a simple communication-efficient algorithm that builds on successive elimination techniques and involves local sampling steps at the clients. To study the performance of Fed-SEL, we introduce a notion of arm-heterogeneity that captures the level of dissimilarity between distributions of arms corresponding to different clients. Interestingly, our analysis reveals the benefits of arm-heterogeneity in reducing both the sample- and communication-complexity of Fed-SEL. As a special case of our analysis, we show that for certain heterogeneous problem instances, Fed-SEL outputs the best-arm after just one round of communication. Our findings have the following key implication: unlike federated supervised learning where recent work has shown that statistical heterogeneity can lead to poor performance, one can provably reap the benefits of both local computation and heterogeneity for federated best-arm identification. As our final contribution, we develop variants of Fed-SEL, both for federated and peer-to-peer settings, that are robust to the presence of Byzantine clients, and hence suitable for deployment in harsh, adversarial environments.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
254,903
1507.00421
Categorical Matrix Completion
We consider the problem of completing a matrix with categorical-valued entries from partial observations. This is achieved by extending the formulation and theory of one-bit matrix completion. We recover a low-rank matrix $X$ by maximizing the likelihood ratio with a constraint on the nuclear norm of $X$, and the observations are mapped from entries of $X$ through multiple link functions. We establish theoretical upper and lower bounds on the recovery error, which meet up to a constant factor $\mathcal{O}(K^{3/2})$ where $K$ is the fixed number of categories. The upper bound in our case depends on the number of categories implicitly through a maximization of terms that involve the smoothness of the link functions. In contrast to one-bit matrix completion, our bounds for categorical matrix completion are optimal up to a factor on the order of the square root of the number of categories, which is consistent with an intuition that the problem becomes harder when the number of categories increases. By comparing the performance of our method with the conventional matrix completion method on the MovieLens dataset, we demonstrate the advantage of our method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
44,757
2307.12442
EnTri: Ensemble Learning with Tri-level Representations for Explainable Scene Recognition
Scene recognition based on deep-learning has made significant progress, but there are still limitations in its performance due to challenges posed by inter-class similarities and intra-class dissimilarities. Furthermore, prior research has primarily focused on improving classification accuracy, yet it has given less attention to achieving interpretable, precise scene classification. Therefore, we are motivated to propose EnTri, an ensemble scene recognition framework that employs ensemble learning using a hierarchy of visual features. EnTri represents features at three distinct levels of detail: pixel-level, semantic segmentation-level, and object class and frequency level. By incorporating distinct feature encoding schemes of differing complexity and leveraging ensemble strategies, our approach aims to improve classification accuracy while enhancing transparency and interpretability via visual and textual explanations. To achieve interpretability, we devised an extension algorithm that generates both visual and textual explanations highlighting various properties of a given scene that contribute to the final prediction of its category. This includes information about objects, statistics, spatial layout, and textural details. Through experiments on benchmark scene classification datasets, EnTri has demonstrated superiority in terms of recognition accuracy, achieving competitive performance compared to state-of-the-art approaches, with an accuracy of 87.69%, 75.56%, and 99.17% on the MIT67, SUN397, and UIUC8 datasets, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
381,258
2406.07247
Dynamical Mean-Field Theory of Self-Attention Neural Networks
Transformer-based models have demonstrated exceptional performance across diverse domains, becoming the state-of-the-art solution for addressing sequential machine learning problems. Even though we have a general understanding of the fundamental components in the transformer architecture, little is known about how they operate or what their expected dynamics are. Recently, there has been an increasing interest in exploring the relationship between attention mechanisms and Hopfield networks, promising to shed light on the statistical physics of transformer networks. However, to date, the dynamical regimes of transformer-like models have not been studied in depth. In this paper, we address this gap by using methods for the study of asymmetric Hopfield networks in nonequilibrium regimes, namely path integral methods over generating functionals, yielding dynamics governed by concurrent mean-field variables. Assuming 1-bit tokens and weights, we derive analytical approximations for the behavior of large self-attention neural networks coupled to a softmax output, which become exact in the large size limit. Our findings reveal nontrivial dynamical phenomena, including nonequilibrium phase transitions associated with chaotic bifurcations, even for very simple configurations with a few encoded features and a very short context window. Finally, we discuss the potential of our analytic approach to improve our understanding of the inner workings of transformer models, potentially reducing computational training costs and enhancing model interpretability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
462,960
2106.00396
Distance and Position Estimation in Visible Light Systems with RGB LEDs
In this manuscript, distance and position estimation problems are investigated for visible light positioning (VLP) systems with red-green-blue (RGB) light emitting diodes (LEDs). The accuracy limits on distance and position estimation are calculated in terms of the Cramer-Rao lower bound (CRLB) for three different scenarios. Scenario 1 and Scenario 2 correspond to synchronous and asynchronous systems, respectively, with known channel attenuation formulas at the receiver. In Scenario 3, a synchronous system is considered but channel attenuation formulas are not known at the receiver. The derived CRLB expressions reveal the relations among distance/position estimation accuracies in the considered scenarios and lead to intuitive explanations for the benefits of using RGB LEDs. In addition, maximum likelihood (ML) estimators are derived in all scenarios, and it is shown that they can achieve performance close to the CRLBs in some cases for sufficiently high source optical powers.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
238,107
2210.12935
360-MLC: Multi-view Layout Consistency for Self-training and Hyper-parameter Tuning
We present 360-MLC, a self-training method based on multi-view layout consistency for finetuning monocular room-layout models using unlabeled 360-images only. This can be valuable in practical scenarios where a pre-trained model needs to be adapted to a new data domain without using any ground truth annotations. Our simple yet effective assumption is that multiple layout estimations in the same scene must define a consistent geometry regardless of their camera positions. Based on this idea, we leverage a pre-trained model to project estimated layout boundaries from several camera views into the 3D world coordinate. Then, we re-project them back to the spherical coordinate and build a probability function, from which we sample the pseudo-labels for self-training. To handle unconfident pseudo-labels, we evaluate the variance in the re-projected boundaries as an uncertainty value to weight each pseudo-label in our loss function during training. In addition, since ground truth annotations are available neither during training nor at test time, we leverage the entropy information in multiple layout estimations as a quantitative metric to measure the geometry consistency of the scene, allowing us to evaluate any layout estimator for hyper-parameter tuning, including model selection without ground truth annotations. Experimental results show that our solution achieves favorable performance against state-of-the-art methods when self-training from three publicly available source datasets to a unique, newly labeled dataset consisting of multiple views of the same scenes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
325,978
2405.20501
ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic Cane
The ability to shop independently, especially in grocery stores, is important for maintaining a high quality of life. This can be particularly challenging for people with visual impairments (PVI). Stores carry thousands of products, with approximately 30,000 new products introduced each year in the US market alone, presenting a challenge even for modern computer vision solutions. Through this work, we present a proof-of-concept socially assistive robotic system we call ShelfHelp, and propose novel technical solutions for enhancing instrumented canes traditionally meant for navigation tasks with additional capability within the domain of shopping. ShelfHelp includes a novel visual product locator algorithm designed for use in grocery stores and a novel planner that autonomously issues verbal manipulation guidance commands to guide the user during product retrieval. Through a human subjects study, we show the system's success in locating and providing effective manipulation guidance to retrieve desired products with novice users. We compare two autonomous verbal guidance modes achieving comparable performance to a human assistance baseline, and present encouraging findings that validate our system's efficiency and effectiveness through positive subjective metrics, including competence, intelligence, and ease of use.
true
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
459,376
2406.02468
DL-KDD: Dual-Light Knowledge Distillation for Action Recognition in the Dark
Human action recognition in dark videos is a challenging task for computer vision. Recent research focuses on applying dark enhancement methods to improve the visibility of the video. However, such video processing results in the loss of critical information in the original (un-enhanced) video. Conversely, traditional two-stream methods are capable of learning information from both original and processed videos, but it can lead to a significant increase in the computational cost during the inference phase in the task of video classification. To address these challenges, we propose a novel teacher-student video classification framework, named Dual-Light KnowleDge Distillation for Action Recognition in the Dark (DL-KDD). This framework enables the model to learn from both original and enhanced video without introducing additional computational cost during inference. Specifically, DL-KDD utilizes the strategy of knowledge distillation during training. The teacher model is trained with enhanced video, and the student model is trained with both the original video and the soft target generated by the teacher model. This teacher-student framework allows the student model to predict action using only the original input video during inference. In our experiments, the proposed DL-KDD framework outperforms state-of-the-art methods on the ARID, ARID V1.5, and Dark-48 datasets. We achieve the best performance on each dataset and up to a 4.18% improvement on Dark-48, using only original video inputs, thus avoiding the use of two-stream framework or enhancement modules for inference. We further validate the effectiveness of the distillation strategy in ablative experiments. The results highlight the advantages of our knowledge distillation framework in dark human action recognition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
460,779
1604.02781
Dual-Timescale Spectrum Management in Small-Cell Wireless Networks
To attain the targeted data rates of next generation cellular networks requires dense deployment of small cells in addition to macro cells which provide wide coverage. Dynamic radio resource management is crucial to the success of such heterogeneous networks due to much more pronounced traffic and interference variations in small cells. This work proposes a framework for spectrum management organized according to two timescales, which include 1) centralized optimization on a moderate timescale corresponding to typical duration of user sessions (several seconds to minutes in today's networks), and 2) distributed spectrum allocation on a fast timescale corresponding to typical latency requirements (a few milliseconds). An optimization problem is formulated to allocate resources on the slower timescale with consideration of (distributed) opportunistic scheduling on the faster timescale. Both fixed and fully flexible user association schemes are considered. Iterative algorithms are developed to solve these optimization problems efficiently for a cluster of cells with guaranteed convergence. Simulation results demonstrate advantages of the proposed framework and algorithms.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
54,388
2204.02593
Nonlinear gradient mappings and stochastic optimization: A general framework with applications to heavy-tail noise
We introduce a general framework for nonlinear stochastic gradient descent (SGD) for the scenarios when gradient noise exhibits heavy tails. The proposed framework subsumes several popular nonlinearity choices, like clipped, normalized, signed or quantized gradient, but we also consider novel nonlinearity choices. We establish for the considered class of methods strong convergence guarantees assuming a strongly convex cost function with Lipschitz continuous gradients under very general assumptions on the gradient noise. Most notably, we show that, for a nonlinearity with bounded outputs and for the gradient noise that may not have finite moments of order greater than one, the nonlinear SGD's mean squared error (MSE), or equivalently, the expected cost function's optimality gap, converges to zero at rate $O(1/t^\zeta)$, $\zeta \in (0,1)$. In contrast, for the same noise setting, the linear SGD generates a sequence with unbounded variances. Furthermore, for the nonlinearities that can be decoupled component-wise, like, e.g., sign gradient or component-wise clipping, we show that the nonlinear SGD asymptotically (locally) achieves a $O(1/t)$ rate in the weak convergence sense and explicitly quantify the corresponding asymptotic variance. Experiments show that, while our framework is more general than existing studies of SGD under heavy-tail noise, several easy-to-implement nonlinearities from our framework are competitive with state-of-the-art alternatives on real data sets with heavy-tail noise.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
290,007
1904.07598
Scaling TCP's Congestion Window for Small Round Trip Times
This memo explains that deploying active queue management (AQM) to counter bufferbloat will not prevent TCP from overriding the AQM and building large queues in a range of not uncommon scenarios. This is a brief paper study to explain this effect which was observed in a number of low latency testbed experiments. To keep its queue short, an AQM drops (or marks) packets to make the TCP flow(s) traversing it reduce their packet rate. Nearly all TCP implementations will not run at less than two packets per round trip time (RTT). 2pkt / RTT need not imply low bit-rate if the RTT is small. For instance, it represents 2Mb/s over a 6ms round trip. When a few TCP flows share a link, in certain scenarios, including regular broadband and data centres, no matter how much the AQM signals to the flows to keep the queue short, they will not obey, because it is impossible for them to run below this floor. The memo proposes the necessary modification to the TCP standard.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
127,839
1811.07335
Distribution Discrepancy Maximization for Image Privacy Preserving
With the rapid increase in online photo sharing activities, image obfuscation algorithms become particularly important for protecting the sensitive information in the shared photos. However, existing image obfuscation methods based on hand-crafted principles are challenged by the dramatic development of deep learning techniques. To address this problem, we propose to maximize the distribution discrepancy between the original image domain and the encrypted image domain. Accordingly, we introduce a collaborative training scheme: a discriminator $D$ is trained to discriminate the reconstructed image from the encrypted image, and an encryption model $G_e$ is required to generate these two kinds of images to maximize the recognition rate of $D$, leading to the same training objective for both $D$ and $G_e$. We theoretically prove that such a training scheme maximizes two distributions' discrepancy. Compared with commonly-used image obfuscation methods, our model can produce satisfactory defense against the attack of deep recognition models indicated by significant accuracy decreases on FaceScrub, Casia-WebFace and LFW datasets.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
113,739
2109.05439
Concave Utility Reinforcement Learning with Zero-Constraint Violations
We consider the problem of tabular infinite horizon concave utility reinforcement learning (CURL) with convex constraints. For this, we propose a model-based learning algorithm that also achieves zero constraint violations. Assuming that the concave objective and the convex constraints have a solution interior to the set of feasible occupation measures, we solve a tighter optimization problem to ensure that the constraints are never violated despite the imprecise model knowledge and model stochasticity. We use Bellman error-based analysis for tabular infinite-horizon setups, which allows analyzing stochastic policies. Combining the Bellman error-based analysis and the tighter optimization equation, for $T$ interactions with the environment, we obtain a high-probability regret guarantee for the objective which grows as $\tilde{O}(1/\sqrt{T})$, excluding other factors. The proposed method can be applied to optimistic algorithms to obtain high-probability regret bounds and can also be used with posterior sampling algorithms to obtain a loose Bayesian regret bound, but with a significant improvement in computational complexity.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
254,790
2312.15622
Scalable Face Image Coding via StyleGAN Prior: Towards Compression for Human-Machine Collaborative Vision
The accelerated proliferation of visual content and the rapid development of machine vision technologies bring significant challenges in delivering visual data on a gigantic scale, which must be effectively represented to satisfy both human and machine requirements. In this work, we investigate how hierarchical representations derived from the advanced generative prior facilitate constructing an efficient scalable coding paradigm for human-machine collaborative vision. Our key insight is that by exploiting the StyleGAN prior, we can learn three-layered representations encoding hierarchical semantics, which are elaborately designed into the basic, middle, and enhanced layers, supporting machine intelligence and human visual perception in a progressive fashion. With the aim of achieving efficient compression, we propose the layer-wise scalable entropy transformer to reduce the redundancy between layers. Based on the multi-task scalable rate-distortion objective, the proposed scheme is jointly optimized to achieve optimal machine analysis performance, human perception experience, and compression ratio. We validate the proposed paradigm's feasibility in face image compression. Extensive qualitative and quantitative experimental results demonstrate the superiority of the proposed paradigm over the latest compression standard Versatile Video Coding (VVC) in terms of both machine analysis and human perception at extremely low bitrates ($<0.01$ bpp), offering new insights for human-machine collaborative compression.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
418,069
2112.01521
Object-aware Monocular Depth Prediction with Instance Convolutions
With the advent of deep learning, estimating depth from a single RGB image has recently received a lot of attention, being capable of empowering many different applications ranging from path planning for robotics to computational cinematography. Nevertheless, while the depth maps are in their entirety fairly reliable, the estimates around object discontinuities are still far from satisfactory. This can be attributed to the fact that the convolutional operator naturally aggregates features across object discontinuities, resulting in smooth transitions rather than clear boundaries. Therefore, in order to circumvent this issue, we propose a novel convolutional operator which is explicitly tailored to avoid feature aggregation of different object parts. In particular, our method is based on estimating per-part depth values by means of superpixels. The proposed convolutional operator, which we dub "Instance Convolution", then only considers each object part individually on the basis of the estimated superpixels. Our evaluation with respect to the NYUv2 as well as the iBims dataset clearly demonstrates the superiority of Instance Convolutions over the classical convolution at estimating depth around occlusion boundaries, while producing comparable results elsewhere. Code will be made publicly available upon acceptance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
269,511
2302.04658
The Sample Complexity of Approximate Rejection Sampling with Applications to Smoothed Online Learning
Suppose we are given access to $n$ independent samples from distribution $\mu$ and we wish to output one of them with the goal of making the output distributed as close as possible to a target distribution $\nu$. In this work we show that the optimal total variation distance as a function of $n$ is given by $\tilde\Theta(\frac{D}{f'(n)})$ over the class of all pairs $\nu,\mu$ with a bounded $f$-divergence $D_f(\nu\|\mu)\leq D$. Previously, this question was studied only for the case when the Radon-Nikodym derivative of $\nu$ with respect to $\mu$ is uniformly bounded. We then consider an application in the seemingly very different field of smoothed online learning, where we show that recent results on the minimax regret and the regret of oracle-efficient algorithms still hold even under relaxed constraints on the adversary (to have bounded $f$-divergence, as opposed to bounded Radon-Nikodym derivative). Finally, we also study efficacy of importance sampling for mean estimates uniform over a function class and compare importance sampling with rejection sampling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
344,773
2403.02688
DOCTOR: Dynamic On-Chip Temporal Variation Remediation Toward Self-Corrected Photonic Tensor Accelerators
Photonic computing has emerged as a promising solution for accelerating computation-intensive artificial intelligence (AI) workloads, offering unparalleled speed and energy efficiency, especially in resource-limited, latency-sensitive edge computing environments. However, the deployment of analog photonic tensor accelerators encounters reliability challenges due to hardware noise and environmental variations. While off-chip noise-aware training and on-chip training have been proposed to enhance the variation tolerance of optical neural accelerators with moderate, static noise, we observe a notable performance degradation over time due to temporally drifting variations, which requires a real-time, in-situ calibration mechanism. To tackle these challenging reliability issues, for the first time, we propose a lightweight dynamic on-chip remediation framework, dubbed DOCTOR, providing adaptive, in-situ accuracy recovery against temporally drifting noise. The DOCTOR framework intelligently monitors the chip status using adaptive probing and performs fast in-situ training-free calibration to restore accuracy when necessary. Recognizing nonuniform spatial variation distributions across devices and tensor cores, we also propose a variation-aware architectural remapping strategy to avoid executing critical tasks on noisy devices. Extensive experiments show that our proposed framework can guarantee sustained performance under drifting variations with 34% higher accuracy and 2-3 orders-of-magnitude lower overhead compared to state-of-the-art on-chip training methods. Our code is open-sourced at https://github.com/ScopeX-ASU/DOCTOR.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
434,897
2312.03052
Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models
Solving complex visual tasks such as "Who invented the musical instrument on the right?" involves a composition of skills: understanding space, recognizing instruments, and also retrieving prior knowledge. Recent work shows promise by decomposing such tasks using a large language model (LLM) into an executable program that invokes specialized vision models. However, generated programs are error-prone: they omit necessary steps, include spurious ones, and are unable to recover when the specialized models give incorrect outputs. Moreover, they require loading multiple models, incurring high latency and computation costs. We propose Visual Program Distillation (VPD), an instruction tuning framework that produces a vision-language model (VLM) capable of solving complex visual tasks with a single forward pass. VPD distills the reasoning ability of LLMs by using them to sample multiple candidate programs, which are then executed and verified to identify a correct one. It translates each correct program into a language description of the reasoning steps, which are then distilled into a VLM. Extensive experiments show that VPD improves the VLM's ability to count, understand spatial relations, and reason compositionally. Our VPD-trained PaLI-X outperforms all prior VLMs, achieving state-of-the-art performance across complex vision tasks, including MMBench, OK-VQA, A-OKVQA, TallyQA, POPE, and Hateful Memes. An evaluation with human annotators also confirms that VPD improves model response factuality and consistency. Finally, experiments on content moderation demonstrate that VPD is also helpful for adaptation to real-world applications with limited data.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
413,123
2401.06945
Knowledge-Centric Templatic Views of Documents
Authors seeking to communicate with broader audiences often share their ideas in various document formats, such as slide decks, newsletters, reports, and posters. Prior work on document generation has generally treated the creation of each separate format as a different task, leading to fragmented learning processes, redundancy in models and methods, and disjointed evaluation. We consider each of these documents as templatic views of the same underlying knowledge/content, and we aim to unify the generation and evaluation of these templatic views. We begin by showing that current LLMs are capable of generating various document formats with little to no supervision. Further, a simple augmentation involving a structured intermediate representation can improve performance, especially for smaller models. We then introduce a novel unified evaluation framework that can be adapted to measuring the quality of document generators for heterogeneous downstream applications. This evaluation is adaptable to a range of user-defined criteria and application scenarios, obviating the need for task-specific evaluation metrics. Finally, we conduct a human evaluation, which shows that people prefer 82% of the documents generated with our method, while correlating more highly with our unified evaluation framework than prior metrics in the literature.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
421,351