Dataset schema (one record per paper):
- id: string, length 9–16
- title: string, length 4–278
- abstract: string, length 3–4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, range 0–541k
2403.06264
Rational Silence and False Polarization: How Viewpoint Organizations and Recommender Systems Distort the Expression of Public Opinion
AI-based social media platforms have already transformed the nature of economic and social interaction. AI enables the massive scale and highly personalized nature of online information sharing that we now take for granted. Extensive attention has been devoted to the polarization that social media platforms appear to facilitate. However, a key implication of the transformation we are experiencing due to these AI-powered platforms has received much less attention: how platforms impact what observers of online discourse come to believe about community views. These observers include policymakers and legislators, who look to social media to gauge the prospects for policy and legislative change, as well as developers of AI models trained on large-scale internet data, whose outputs may similarly reflect a distorted view of public opinion. In this paper, we present a nested game-theoretic model to show how observed online opinion is produced by the interaction of the decisions made by users about whether and with what rhetorical intensity to share their opinions on a platform, the efforts of organizations (such as traditional media and advocacy organizations) that seek to encourage or discourage opinion-sharing online, and the operation of AI-powered recommender systems controlled by social media platforms. We show that signals from ideological organizations encourage an increase in rhetorical intensity, leading to the 'rational silence' of moderate users. This, in turn, creates a polarized impression of where average opinions lie. We also show that this observed polarization can be amplified by recommender systems that encourage the formation of online communities that end up seeing a skewed sample of opinion. Finally, we identify practical strategies platforms can implement, such as reducing exposure to signals from ideological organizations and adopting a tailored approach to content moderation.
categories: cs.CY, cs.MA (all other category flags false)
__index_level_0__: 436,370
1907.07485
Multi-Adapter RGBT Tracking
The task of RGBT tracking aims to take complementary advantages from visible spectrum and thermal infrared data to achieve robust visual tracking, and has received increasing attention in recent years. Existing works focus on modality-specific information integration by introducing modality weights to achieve adaptive fusion, or on learning robust feature representations of different modalities. Although these methods can effectively exploit modality-specific properties, they ignore the potential value of modality-shared cues as well as instance-aware information, which are crucial for effective fusion of different modalities in RGBT tracking. In this paper, we propose a novel Multi-Adapter convolutional Network (MANet) to jointly perform modality-shared, modality-specific, and instance-aware feature learning in an end-to-end trained deep framework for RGBT tracking. We design three kinds of adapters within our network. Specifically, the generality adapter extracts shared object representations, the modality adapter encodes modality-specific information to exploit the complementary advantages of the two modalities, and the instance adapter models the appearance properties and temporal variations of a particular object. Moreover, to reduce the computational complexity demanded by real-time visual tracking, we design a parallel structure for the generality adapter and the modality adapter. Extensive experiments on two RGBT tracking benchmark datasets demonstrate the outstanding performance of the proposed tracker against state-of-the-art RGB and RGBT tracking algorithms.
categories: cs.CV (all other category flags false)
__index_level_0__: 138,886
2208.02512
Scalable Video Coding for Humans and Machines
Video content is watched not only by humans, but increasingly also by machines. For example, machine learning models analyze surveillance video for security and traffic monitoring, search through YouTube videos for inappropriate content, and so on. In this paper, we propose a scalable video coding framework that supports machine vision (specifically, object detection) through its base layer bitstream and human vision via its enhancement layer bitstream. The proposed framework includes components from both conventional and Deep Neural Network (DNN)-based video coding. The results show that on object detection, the proposed framework achieves 13-19% bit savings compared to state-of-the-art video codecs, while remaining competitive in terms of MS-SSIM on the human vision task.
categories: cs.CV (all other category flags false)
__index_level_0__: 311,493
2110.02645
A Weighted Generalized Coherence Approach for Sensing Matrix Design
As compared to using randomly generated sensing matrices, optimizing the sensing matrix w.r.t. a carefully designed criterion is known to lead to better quality signal recovery given a set of compressive measurements. In this paper, we propose generalizations of the well-known mutual coherence criterion for optimizing sensing matrices starting from random initial conditions. We term these generalizations as bi-coherence or tri-coherence and they are based on a criterion that discourages any one column of the sensing matrix from being close to a sparse linear combination of other columns. We also incorporate training data to further improve the sensing matrices through weighted coherence, weighted bi-coherence, or weighted tri-coherence criteria, which assign weights to sensing matrix columns as per their importance. An algorithm is also presented to solve the optimization problems. Finally, the effectiveness of the proposed algorithm is demonstrated through empirical results.
categories: cs.IT, cs.CV (all other category flags false)
__index_level_0__: 259,201
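The abstract above generalizes the mutual coherence criterion for sensing matrix design. As a point of reference only (an editor's illustrative sketch, not the authors' algorithm), mutual coherence is the largest absolute inner product between any two distinct l2-normalized columns of the sensing matrix:

```python
import math

def mutual_coherence(A):
    """Mutual coherence of matrix A (given as a list of rows): the maximum
    absolute inner product between any two distinct l2-normalized columns."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    # l2-normalize each column
    normed = []
    for c in cols:
        nrm = math.sqrt(sum(x * x for x in c))
        normed.append([x / nrm for x in c])
    # scan all unordered column pairs for the worst-case correlation
    best = 0.0
    for j in range(n):
        for k in range(j + 1, n):
            ip = abs(sum(a * b for a, b in zip(normed[j], normed[k])))
            best = max(best, ip)
    return best
```

The bi- and tri-coherence criteria in the paper penalize columns close to *sparse combinations* of several other columns; the pairwise quantity above is only the classical baseline they start from.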
1911.06912
Fixed-horizon Active Hypothesis Testing
Two active hypothesis testing problems are formulated. In these problems, the agent can perform a fixed number of experiments and then decide on one of the hypotheses. The agent is also allowed to declare its experiments inconclusive if needed. The first problem is an asymmetric formulation in which the objective is to minimize the probability of incorrectly declaring a particular hypothesis to be true while ensuring that the probability of correctly declaring that hypothesis is moderately high. This formulation can be seen as a generalization of the formulation in the classical Chernoff-Stein lemma to an active setting. The second problem is a symmetric formulation in which the objective is to minimize the probability of making an incorrect inference (misclassification probability) while ensuring that the true hypothesis is declared conclusively with moderately high probability. For these problems, lower and upper bounds on the optimal misclassification probabilities are derived and these bounds are shown to be asymptotically tight. Classical approaches for experiment selection suggest use of randomized and, in some cases, open-loop strategies. As opposed to these classical approaches, fully deterministic and adaptive experiment selection strategies are provided. It is shown that these strategies are asymptotically optimal and further, using numerical experiments, it is demonstrated that these novel experiment selection strategies (coupled with appropriate inference strategies) have significantly better performance in the non-asymptotic regime.
categories: cs.IT, cs.SY (all other category flags false)
__index_level_0__: 153,652
2501.05472
The 2nd Place Solution from the 3D Semantic Segmentation Track in the 2024 Waymo Open Dataset Challenge
3D semantic segmentation is one of the most crucial tasks in driving perception. The ability of a learning-based model to accurately perceive dense 3D surroundings often ensures the safe operation of autonomous vehicles. However, existing LiDAR-based 3D semantic segmentation databases consist of sequentially acquired LiDAR scans that are long-tailed and lack training diversity. In this report, we introduce MixSeg3D, a sophisticated combination of a strong point cloud segmentation model with advanced 3D data mixing strategies. Specifically, our approach integrates the MinkUNet family with LaserMix and PolarMix, two scene-scale data augmentation methods that blend LiDAR point clouds along the ego-scene's inclination and azimuth directions. Through empirical experiments, we demonstrate the superiority of MixSeg3D over the baseline and prior arts. Our team achieved 2nd place in the 3D semantic segmentation track of the 2024 Waymo Open Dataset Challenge.
categories: cs.LG, cs.RO, cs.CV (all other category flags false)
__index_level_0__: 523,604
2401.03397
Predicting the Skies: A Novel Model for Flight-Level Passenger Traffic Forecasting
Accurate prediction of flight-level passenger traffic is of paramount importance in airline operations, influencing key decisions from pricing to route optimization. This study introduces a novel, multimodal deep learning approach to the challenge of predicting flight-level passenger traffic, yielding substantial accuracy improvements compared to traditional models. Leveraging an extensive dataset from American Airlines, our model ingests historical traffic data, fare closure information, and seasonality attributes specific to each flight. Our proposed neural network integrates the strengths of Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN), exploiting the temporal patterns and spatial relationships within the data to enhance prediction performance. Crucial to the success of our model is a comprehensive data processing strategy. We construct 3D tensors to represent data, apply careful masking strategies to mirror real-world dynamics, and employ data augmentation techniques to enrich the diversity of our training set. The efficacy of our approach is borne out in the results: our model demonstrates an approximate 33% improvement in Mean Squared Error (MSE) compared to traditional benchmarks. This study, therefore, highlights the significant potential of deep learning techniques and meticulous data processing in advancing the field of flight traffic prediction.
categories: cs.AI, cs.LG (all other category flags false)
__index_level_0__: 420,091
2212.01540
Quadcopter Tracking Using Euler-Angle-Free Flatness-Based Control
Quadcopter trajectory tracking control has been extensively investigated and implemented in the past. Available controls mostly use Euler angle standards to describe the quadcopter's rotational kinematics and dynamics. As a result, the same rotation can be translated into different roll, pitch, and yaw angles because there are multiple Euler angle standards for characterizing rotation in a 3-dimensional motion space. Additionally, it is computationally expensive to convert a quadcopter's orientation to the associated roll, pitch, and yaw angles, which may make it difficult to track quick and aggressive trajectories. To address these issues, this paper develops a flatness-based trajectory tracking control without using Euler angles. We assess and test the proposed control's performance in the Gazebo simulation environment and contrast its functionality with the existing Mellinger controller, which has been widely adopted by the robotics and unmanned aerial system (UAS) communities.
categories: cs.RO, cs.SY (all other category flags false)
__index_level_0__: 334,467
1607.02334
Betweenness centrality profiles in trees
Betweenness centrality of a vertex in a graph measures the fraction of shortest paths going through the vertex. This is a basic notion for determining the importance of a vertex in a network. The k-betweenness centrality of a vertex is defined similarly, but only considers shortest paths of length at most k. The sequence of k-betweenness centralities for all possible values of k forms the betweenness centrality profile of a vertex. We study properties of betweenness centrality profiles in trees. We show that for scale-free random trees, for fixed k, the expectation of k-betweenness centrality strictly decreases as the index of the vertex increases. We also analyze worst-case properties of profiles in terms of the distance of profiles from being monotone, and the number of times pairs of profiles can cross. This is related to whether k-betweenness centrality, for small values of k, may be used instead of having to consider all shortest paths. Bounds are given that are optimal in order of magnitude. We also present some experimental results for scale-free random trees.
categories: cs.SI (all other category flags false)
__index_level_0__: 58,334
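The k-betweenness centrality defined in the abstract above can be computed directly for small trees, since shortest paths in a tree are unique. A brute-force sketch (editor's illustration only; the normalization over pairs excluding v is an assumption, not taken from the paper):

```python
from collections import deque
from itertools import combinations

def k_betweenness(adj, v, k):
    """Fraction of unordered pairs {s, t} (with s, t != v) whose unique
    path in the tree `adj` (dict: node -> neighbor list) passes through v,
    counting only pairs at distance at most k."""
    def bfs(src):
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    q.append(w)
        return dist, parent

    nodes = list(adj)
    count = 0
    for s, t in combinations(nodes, 2):
        if v in (s, t):
            continue
        dist, parent = bfs(s)
        if dist[t] > k:
            continue
        # walk the unique s-t path back via BFS parents, checking for v
        node, on_path = t, False
        while node is not None:
            if node == v:
                on_path = True
            node = parent[node]
        if on_path:
            count += 1
    total = len(nodes) - 1          # nodes other than v
    total = total * (total - 1) // 2  # unordered pairs among them
    return count / total if total else 0.0
```

Setting k to the tree's diameter recovers ordinary betweenness centrality; the profile in the paper is the sequence of these values over all k.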
2106.04193
Targeted Active Learning for Bayesian Decision-Making
Active learning is usually applied to acquire labels of informative data points in supervised learning, to maximize accuracy in a sample-efficient way. However, maximizing the accuracy is not the end goal when the results are used for decision-making, for example in personalized medicine or economics. We argue that when acquiring samples sequentially, separating learning and decision-making is sub-optimal, and we introduce an active learning strategy which takes the down-the-line decision problem into account. Specifically, we introduce a novel active learning criterion which maximizes the expected information gain on the posterior distribution of the optimal decision. We compare our targeted active learning strategy to existing alternatives on both simulated and real data, and show improved performance in decision-making accuracy.
categories: cs.AI, cs.LG (all other category flags false)
__index_level_0__: 239,628
2409.15303
Disruptive RIS for Enhancing Key Generation and Secret Transmission in Low-Entropy Environments
Key generation, a pillar of physical-layer security (PLS), is the process by which two legitimate users (Alice and Bob) exchange signals to extract a common key from their random, common channel. The drawback of extracting keys from wireless channels is the strong dependence on the dynamics and fluctuations of the radio channel, rendering the key vulnerable to estimation by Eve (an illegitimate user) in low-entropy environments because of insufficient randomness. In addition, the lack of channel fluctuations lowers the secret key rate (SKR), defined as the number of bits of key generated per channel use. In this work, we aim to address this challenge by using a reconfigurable intelligent surface (RIS) to produce random phases at certain, carefully curated intervals such that it disrupts the channel in low-entropy environments. We propose an RIS-assisted key generation protocol, study its performance, and compare it with benchmarks to observe the benefit of using an RIS, while considering important metrics such as the key mismatch rate and the secret key throughput. Furthermore, we characterize a scaling law for the average secret information rate under this protocol as a function of the rate of RIS phase switching. Then, we use both the key throughput and the information rate to optimize the overall secrecy rate. Simulations validate our theoretical findings and the effectiveness of the proposed scheme, showing an improvement in performance when an RIS is deployed.
categories: cs.IT (all other category flags false)
__index_level_0__: 490,847
2410.14892
Frequency Control and Disturbance Containment Using Grid-Forming Embedded Storage Networks
The paper discusses fast frequency control in bulk power systems using embedded networks of grid-forming energy storage resources. Differing from their traditional roles of regulating reserves, the storage resources in this work operate as fast-acting grid assets shaping transient dynamics. The storage resources in the network are autonomously controlled using local measurements for distributed frequency support during disturbance events. Further, the grid-forming inverter systems interfacing with the storage resources are augmented with fast-acting safety controls designed to contain frequency transients within a prescribed tolerance band. The control action, derived from the storage network, improves the frequency nadirs in the system and prevents the severity of a disturbance from propagating far from the source. The paper also presents sensitivity studies to evaluate the impacts of storage capacity and inverter controller parameters on the dynamic performance of frequency control and disturbance localization. The performance of the safety-constrained grid-forming control is also compared with the more common grid-following control. The results are illustrated through case studies on an IEEE test system.
categories: cs.SY (all other category flags false)
__index_level_0__: 500,252
1303.4695
NetLogo Implementation of an Evacuation Scenario
The problem of evacuating crowded closed spaces, such as discotheques, public exhibition pavilions, or concert houses, has become increasingly important and has gained attention both from practitioners and from public authorities. A simulation implementation using NetLogo, an agent-based simulation framework that permits the quick creation of prototypes, is presented. Our aim is to prove that this model developed using NetLogo, albeit simple, can be expanded and adapted so that fire safety experts can test various scenarios and validate the outcome of their designs. Some preliminary experiments are carried out, and their results are presented, validated, and discussed to illustrate the model's efficiency. Finally, we draw some conclusions and point out ways in which this work can be further extended.
categories: cs.MA (all other category flags false)
__index_level_0__: 23,027
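The evacuation model above is implemented in NetLogo; as a language-neutral illustration of one synchronous tick of a minimal greedy evacuation rule (an editor's sketch, not the authors' model, which handles congestion and obstacles), consider:

```python
def step(agents, exit_pos):
    """One synchronous update: each agent moves one cell toward the exit
    along the axis with the larger remaining gap (greedy Manhattan descent);
    agents already at the exit are evacuated (dropped from the list)."""
    remaining = []
    for (x, y) in agents:
        if (x, y) == exit_pos:
            continue  # evacuated on a previous tick
        if abs(exit_pos[0] - x) >= abs(exit_pos[1] - y):
            dx = (exit_pos[0] > x) - (exit_pos[0] < x)  # sign of the x-gap
            remaining.append((x + dx, y))
        else:
            dy = (exit_pos[1] > y) - (exit_pos[1] < y)  # sign of the y-gap
            remaining.append((x, y + dy))
    return remaining
```

Iterating `step` until the list is empty gives the evacuation time in ticks; a NetLogo version would express the same rule as a turtle procedure.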
2311.05988
Vision Big Bird: Random Sparsification for Full Attention
Recently, Transformers have shown promising performance in various vision tasks. However, the high cost of global self-attention remains challenging for Transformers, especially for high-resolution vision tasks. Inspired by one of the most successful Transformer-based models for NLP, Big Bird, we propose a novel sparse attention mechanism for Vision Transformers (ViT). Specifically, we separate the heads into three groups: the first group uses a convolutional neural network (CNN) to extract local features and provide positional information for the model, the second group uses Random Sampling Windows (RS-Win) for sparse self-attention calculation, and the third group reduces the resolution of the keys and values by average pooling for global attention. Based on these components, our model maintains the sparsity of self-attention while retaining the merits of Big Bird (i.e., the model is a universal approximator of sequence functions and is Turing complete). Moreover, our results show that the positional encoding, a crucial component in ViTs, can be safely removed in our model. Experiments show that Vision Big Bird demonstrates competitive performance on common vision tasks.
categories: cs.CV (all other category flags false)
__index_level_0__: 406,781
1304.3082
Reasoning With Uncertain Knowledge
A model of knowledge representation is described in which propositional facts and the relationships among them can be supported by other facts. The set of knowledge which can be supported is called the set of cognitive units, each having associated descriptions of their explicit and implicit support structures, summarizing belief and reliability of belief. This summary is precise enough to be useful in a computational model while remaining descriptive of the underlying symbolic support structure. When a fact supports another supportive relationship between facts, we call this meta-support. This facilitates reasoning about both the propositional knowledge and the support structures underlying it.
categories: cs.AI (all other category flags false)
__index_level_0__: 23,798
2106.08462
Multi-Resolution Continuous Normalizing Flows
Recent work has shown that Neural Ordinary Differential Equations (ODEs) can serve as generative models of images using the perspective of Continuous Normalizing Flows (CNFs). Such models offer exact likelihood calculation, and invertible generation/density estimation. In this work we introduce a Multi-Resolution variant of such models (MRCNF), by characterizing the conditional distribution over the additional information required to generate a fine image that is consistent with the coarse image. We introduce a transformation between resolutions that allows for no change in the log likelihood. We show that this approach yields comparable likelihood values for various image datasets, with improved performance at higher resolutions, with fewer parameters, using only 1 GPU. Further, we examine the out-of-distribution properties of (Multi-Resolution) Continuous Normalizing Flows, and find that they are similar to those of other likelihood-based generative models.
categories: cs.LG, cs.CV (all other category flags false)
__index_level_0__: 241,300
1403.3369
Controlling Recurrent Neural Networks by Conceptors
The human brain is a dynamical system whose extremely complex sensor-driven neural processes give rise to conceptual, logical cognition. Understanding the interplay between nonlinear neural dynamics and concept-level cognition remains a major scientific challenge. Here I propose a mechanism of neurodynamical organization, called conceptors, which unites nonlinear dynamics with basic principles of conceptual abstraction and logic. It becomes possible to learn, store, abstract, focus, morph, generalize, de-noise and recognize a large number of dynamical patterns within a single neural system; novel patterns can be added without interfering with previously acquired ones; neural noise is automatically filtered. Conceptors help explain how conceptual-level information processing emerges naturally and robustly in neural systems, and remove a number of roadblocks in the theory and applications of recurrent neural networks.
categories: cs.NE (all other category flags false)
__index_level_0__: 31,566
2010.12916
Modeling and Optimization Trade-off in Meta-learning
By searching for shared inductive biases across tasks, meta-learning promises to accelerate learning on novel tasks, but with the cost of solving a complex bilevel optimization problem. We introduce and rigorously define the trade-off between accurate modeling and optimization ease in meta-learning. At one end, classic meta-learning algorithms account for the structure of meta-learning but solve a complex optimization problem, while at the other end domain randomized search (otherwise known as joint training) ignores the structure of meta-learning and solves a single level optimization problem. Taking MAML as the representative meta-learning algorithm, we theoretically characterize the trade-off for general non-convex risk functions as well as linear regression, for which we are able to provide explicit bounds on the errors associated with modeling and optimization. We also empirically study this trade-off for meta-reinforcement learning benchmarks.
categories: cs.LG (all other category flags false)
__index_level_0__: 202,929
2106.11118
SODA10M: A Large-Scale 2D Self/Semi-Supervised Object Detection Dataset for Autonomous Driving
Aiming at facilitating a real-world, ever-evolving and scalable autonomous driving system, we present a large-scale dataset for standardizing the evaluation of different self-supervised and semi-supervised approaches by learning from raw data; to date, it is the first and largest such dataset. Existing autonomous driving systems heavily rely on `perfect' visual perception models (i.e., detection) trained using extensive annotated data to ensure safety. However, it is unrealistic to elaborately label instances of all scenarios and circumstances (i.e., night, extreme weather, cities) when deploying a robust autonomous driving system. Motivated by recent advances in self-supervised and semi-supervised learning, a promising direction is to learn a robust detection model by collaboratively exploiting large-scale unlabeled data and few labeled data. Existing datasets either provide only a small amount of data or cover limited domains with full annotation, hindering the exploration of large-scale pre-trained models. Here, we release a Large-Scale 2D Self/semi-supervised Object Detection dataset for Autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories. To improve diversity, the images are collected within 27,833 driving hours under different weather conditions, periods, and location scenes across 32 different cities. We provide extensive experiments and deep analyses of existing popular self/semi-supervised approaches, and give some interesting findings in the autonomous driving scope. Experiments show that SODA10M can serve as a promising pre-training dataset for different self-supervised learning methods, which gives superior performance when fine-tuning with different downstream tasks (i.e., detection, semantic/instance segmentation) in the autonomous driving domain. More information is available at https://soda-2d.github.io.
categories: cs.CV (all other category flags false)
__index_level_0__: 242,282
2101.04632
Context Matters: Self-Attention for Sign Language Recognition
This paper proposes an attentional network for the task of Continuous Sign Language Recognition. The proposed approach exploits co-independent streams of data to model the sign language modalities. These different channels of information can share a complex temporal structure between each other. For that reason, we apply attention to synchronize and help capture entangled dependencies between the different sign language components. Even though Sign Language is multi-channel, handshapes represent the central entities in sign interpretation. Seeing handshapes in their correct context defines the meaning of a sign. Taking that into account, we utilize the attention mechanism to efficiently aggregate the hand features with their appropriate spatio-temporal context for better sign recognition. We found that by doing so the model is able to identify the essential Sign Language components that revolve around the dominant hand and the face areas. We test our model on the benchmark dataset RWTH-PHOENIX-Weather 2014, yielding competitive results.
categories: cs.AI, cs.LG, cs.CV (all other category flags false)
__index_level_0__: 215,192
1905.08389
Time-varying Autoregression with Low Rank Tensors
We present a windowed technique to learn parsimonious time-varying autoregressive models from multivariate time series. This unsupervised method uncovers interpretable spatiotemporal structure in data via non-smooth and non-convex optimization. In each time window, we assume the data follow a linear model parameterized by a system matrix, and we model this stack of potentially different system matrices as a low rank tensor. Because of its structure, the model is scalable to high-dimensional data and can easily incorporate priors such as smoothness over time. We find the components of the tensor using alternating minimization and prove that any stationary point of this algorithm is a local minimum. We demonstrate on a synthetic example that our method identifies the true rank of a switching linear system in the presence of noise. We illustrate our model's utility and superior scalability over extant methods when applied to several synthetic and real-world examples: two types of time-varying linear systems, worm behavior, sea surface temperature, and monkey brain datasets.
categories: cs.LG (all other category flags false)
__index_level_0__: 131,458
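The abstract above fits a (potentially different) linear system in each time window and couples the windows through a low-rank tensor. Ignoring that coupling, the per-window fitting step alone can be sketched for a scalar AR(1) model via least squares (an editor's illustrative simplification, not the paper's multivariate tensor method):

```python
def windowed_ar1(series, window):
    """Fit a scalar AR(1) coefficient a (x[t+1] ~ a * x[t]) by least squares
    in each non-overlapping window, returning one coefficient per window."""
    coeffs = []
    for start in range(0, len(series) - window + 1, window):
        w = series[start:start + window]
        # closed-form least squares: a = sum(x[t] x[t+1]) / sum(x[t]^2)
        num = sum(w[t] * w[t + 1] for t in range(len(w) - 1))
        den = sum(w[t] * w[t] for t in range(len(w) - 1))
        coeffs.append(num / den)
    return coeffs
```

In the paper, each window instead yields a full system matrix, and the stack of these matrices is factored as a low-rank tensor via alternating minimization.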
2407.15463
Integrated Access and Backhaul (IAB) in Low Altitude Platforms
In this paper, we explore the problem of utilizing Integrated Access and Backhaul (IAB) technology in Non-Terrestrial Networks (NTN), with a particular focus on aerial access networks. We consider an Uncrewed Aerial Vehicle (UAV)-based wireless network comprised of two layers of UAVs: (a) a lower layer consisting of a number of flying users and a UAV Base Station (BS) that provides coverage for terrestrial users, and (b) an upper layer designated to provide both wireless access for flying users and backhaul connectivity for the UAV BS. By adopting IAB technology, the backhaul and access links collaboratively share their resources, enabling aerial backhauling and the utilization of the same infrastructure and frequency resources for access links. A sum-rate maximization problem is formulated by considering aerial backhaul constraints to optimally allocate the frequency spectrum between aerial and terrestrial networks. We decompose the resulting non-convex optimization problem into two sub-problems of beamforming and spectrum allocation and then propose efficient solutions for each. Numerical results in different scenarios yield insightful findings about the effectiveness of using the IAB technique in aerial networks.
categories: cs.IT (all other category flags false)
__index_level_0__: 475,197
2110.13624
Technology Fitness Landscape for Design Innovation: A Deep Neural Embedding Approach Based on Patent Data
Technology is essential to innovation and economic prosperity. Understanding technological changes can guide innovators to find new directions of design innovation and thus make breakthroughs. In this work, we construct a technology fitness landscape via deep neural embeddings of patent data. The landscape consists of 1,757 technology domains and their respective improvement rates. In the landscape, we found a high hill related to information and communication technologies (ICT) and a vast low plain of the remaining domains. The landscape presents a bird's eye view of the structure of the total technology space, providing a new way for innovators to interpret technology evolution with a biological analogy, and a biologically-inspired inference to the next innovation.
categories: cs.LG (all other category flags false)
__index_level_0__: 263,254
2406.17907
Delta-V-Optimal Centralized Guidance Strategy For Under-actuated N-Satellite Formations
This paper addresses the computation of Delta-V-optimal, safe, relative orbit reconfigurations for satellite formations in a centralized fashion. The formations under consideration comprise an uncontrolled chief spacecraft flying with an arbitrary number, N, of deputy satellites, where each deputy is equipped with a single electric thruster. Indeed, this represents a technological solution that is becoming widely employed by the producers of small-satellite platforms. While adopting a single electric thruster does reduce the required power, weight, and size of the orbit control system, it comes at the cost of rendering the satellite under-actuated. In this setting, the satellite can provide a desired thrust vector only after an attitude maneuver is carried out to redirect the thruster nozzle opposite to the desired thrust direction. In order to further extend the applicability range of such under-actuated platforms, guidance strategies are developed to support different reconfiguration scenarios for N-satellite formations. This paper starts from a classical non-convex quadratically constrained trajectory optimization formulation, which passes through multiple simplifications and approximations to arrive at two novel convex formulations, namely a second-order cone programming formulation and a linear programming one. Out of the five guidance formulations proposed in this article, the most promising three were compared through an extensive benchmark analysis applied to fifteen of the most widely used solvers. This benchmark experiment provides information about the key distinctions between the different problem formulations, and under which conditions each one of them can be recommended.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
467,786
2403.16353
Energy-Efficient Hybrid Beamforming with Dynamic On-off Control for Integrated Sensing, Communications, and Powering
This paper investigates the energy-efficient hybrid beamforming design for a multi-functional integrated sensing, communications, and powering (ISCAP) system. In this system, a base station (BS) with a hybrid analog-digital (HAD) architecture sends unified wireless signals to communicate with multiple information receivers (IRs), sense multiple point targets, and wirelessly charge multiple energy receivers (ERs) at the same time. To facilitate the energy-efficient design, we present a novel HAD architecture for the BS transmitter, which allows dynamic on-off control of its radio frequency (RF) chains and analog phase shifters (PSs) through a switch network. We also consider a practical and comprehensive power consumption model for the BS, by taking into account the power-dependent non-linear power amplifier (PA) efficiency, and the on-off non-transmission power consumption model of RF chains and PSs. We jointly design the hybrid beamforming and dynamic on-off control at the BS, aiming to minimize its total power consumption, while guaranteeing the performance requirements on communication rates, sensing Cram\'er-Rao bound (CRB), and harvested power levels. The formulation also takes into consideration the per-antenna transmit power constraint and the constant modulus constraints for the analog beamformer at the BS. The resulting optimization problem for ISCAP is highly non-convex. Please refer to the paper for a complete abstract.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
440,990
2307.07713
Data-Driven Optimal Control of Tethered Space Robot Deployment with Learning Based Koopman Operator
To avoid complex constraints of the traditional nonlinear method for tethered space robot (TSR) deployment, this paper proposes a data-driven optimal control framework with an improved deep learning based Koopman operator that could be applied to complex environments. In consideration of TSR's nonlinearity, its finite dimensional lifted representation is derived with the state-dependent only embedding functions in the Koopman framework. A deep learning approach is adopted to approximate the global linear representation of TSR. Deep neural networks (DNN) are developed to parameterize the Koopman operator and its embedding functions. An auxiliary neural network is developed to encode the nonlinear control term of the finite dimensional lifted system. In addition, the state matrix A and control matrix B of the lifted linear system in the embedding space are also estimated during training of the DNN. Then three loss functions, related to the reconstruction and prediction ability of the network and the controllability of the lifted linear system, are designed for training the entire network. With the global linear system produced from the DNN, a Linear Quadratic Regulator (LQR) is applied to derive the optimal control policy for TSR deployment. Finally, simulation results verify the effectiveness of the proposed framework and show that it could deploy the tethered space robot more quickly with less swing of the in-plane angle.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
379,518
2405.13791
Multi-Type Point Cloud Autoencoder: A Complete Equivariant Embedding for Molecule Conformation and Pose
The point cloud is a flexible representation for a wide variety of data types, and is a particularly natural fit for the 3D conformations of molecules. Extant molecule embedding/representation schemes typically focus on internal degrees of freedom, ignoring the global 3D orientation. For tasks that depend on knowledge of both molecular conformation and 3D orientation, such as the generation of molecular dimers, clusters, or condensed phases, we require a representation which is provably complete in the types and positions of atomic nuclei and roto-inversion equivariant with respect to the input point cloud. We develop, train, and evaluate a new type of autoencoder, molecular O(3) encoding net (Mo3ENet), for multi-type point clouds, for which we propose a new reconstruction loss, capitalizing on a Gaussian mixture representation of the input and output point clouds. Mo3ENet is end-to-end equivariant, meaning the learned representation can be manipulated on O(3), a practical bonus for downstream learning tasks. An appropriately trained Mo3ENet latent space comprises a universal embedding for scalar and vector molecule property prediction tasks, as well as other downstream tasks incorporating the 3D molecular pose.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
456,082
2001.07608
Analytic Properties of Trackable Weak Models
We present several new results on the feasibility of inferring the hidden states in strongly-connected trackable weak models. Here, a weak model is a directed graph in which each node is assigned a set of colors which may be emitted when that node is visited. A hypothesis is a node sequence which is consistent with a given color sequence. A weak model is said to be trackable if the worst case number of such hypotheses grows as a polynomial in the sequence length. We show that the number of hypotheses in strongly-connected trackable models is bounded by a constant and give an expression for this constant. We also consider the problem of reconstructing which branch was taken at a node with same-colored out-neighbors, and show that it is always eventually possible to identify which branch was taken if the model is strongly connected and trackable. We illustrate these properties by assigning transition probabilities and employing standard tools for analyzing Markov chains. In addition, we present new results for the entropy rates of weak models according to whether they are trackable or not. These theorems indicate that the combination of trackability and strong connectivity dramatically simplifies the task of reconstructing which nodes were visited. This work has implications for any problem which can be described in terms of an agent traversing a colored graph, such as the reconstruction of hidden states in a hidden Markov model (HMM).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
161,080
2207.05060
Differentiable Physics Simulations with Contacts: Do They Have Correct Gradients w.r.t. Position, Velocity and Control?
In recent years, an increasing amount of work has focused on differentiable physics simulation and has produced a set of open source projects such as Tiny Differentiable Simulator, Nimble Physics, diffTaichi, Brax, Warp, Dojo and DiffCoSim. By making physics simulations end-to-end differentiable, we can perform gradient-based optimization and learning tasks. A majority of differentiable simulators consider collisions and contacts between objects, but they use different contact models for differentiability. In this paper, we overview four kinds of differentiable contact formulations - linear complementarity problems (LCP), convex optimization models, compliant models and position-based dynamics (PBD). We analyze and compare the gradients calculated by these models and show that the gradients are not always correct. We also demonstrate their ability to learn an optimal control strategy by comparing the learned strategies with the optimal strategy in an analytical form. The codebase to reproduce the experiment results is available at https://github.com/DesmondZhong/diff_sim_grads.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
307,410
2412.07573
Subtopic-aware View Sampling and Temporal Aggregation for Long-form Document Matching
Long-form document matching aims to judge the relevance between two documents and has been applied to various scenarios. Most existing works utilize hierarchical or long context models to process documents, which achieve coarse understanding but may ignore details. Some researchers construct a document view with similar sentences about aligned document subtopics to focus on detailed matching signals. However, a long document generally contains multiple subtopics. The matching signals from multiple topics are heterogeneous. Considering only the homologous aligned subtopics may not be representative enough and may cause biased modeling. In this paper, we introduce a new framework to model representative matching signals. First, we propose to capture various matching signals through subtopics of document pairs. Next, we construct multiple document views based on subtopics to cover heterogeneous and valuable details. However, existing spatial aggregation methods like attention, which integrate all these views simultaneously, struggle to integrate heterogeneous information. Instead, we propose temporal aggregation, which effectively integrates different views gradually as the training progresses. Experimental results show that our learning framework is effective on several document-matching tasks, including news duplication and legal case retrieval.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
515,717
2412.12565
PBVS 2024 Solution: Self-Supervised Learning and Sampling Strategies for SAR Classification in Extreme Long-Tail Distribution
The Multimodal Learning Workshop (PBVS 2024) aims to improve the performance of automatic target recognition (ATR) systems by leveraging both Synthetic Aperture Radar (SAR) data, which is difficult to interpret but remains unaffected by weather conditions and visible light, and Electro-Optical (EO) data for simultaneous learning. The subtask, known as the Multi-modal Aerial View Imagery Challenge - Classification, focuses on predicting the class label of a low-resolution aerial image based on a set of SAR-EO image pairs and their respective class labels. The provided dataset consists of SAR-EO pairs, characterized by a severe long-tail distribution with over a 1000-fold difference between the largest and smallest classes, making typical long-tail methods difficult to apply. Additionally, the domain disparity between the SAR and EO datasets complicates the effectiveness of standard multimodal methods. To address these significant challenges, we propose a two-stage learning approach that utilizes self-supervised techniques, combined with multimodal learning and inference through SAR-to-EO translation for effective EO utilization. In the final testing phase of the PBVS 2024 Multi-modal Aerial View Image Challenge - Classification (SAR Classification) task, our model achieved an accuracy of 21.45%, an AUC of 0.56, and a total score of 0.30, placing us 9th in the competition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
517,926
2312.12121
Towards Learning-Based Gyrocompassing
Inertial navigation systems (INS) are widely used in both manned and autonomous platforms. One of the most critical tasks prior to their operation is to accurately determine their initial alignment while stationary, as it forms the cornerstone for the entire INS operational trajectory. While low-performance accelerometers can easily determine roll and pitch angles (leveling), establishing the heading angle (gyrocompassing) with low-performance gyros proves to be a challenging task without additional sensors. This arises from the limited signal strength of Earth's rotation rate, often overridden by the gyro noise itself. To circumvent this deficiency, in this study we present a practical deep learning framework to effectively compensate for the inherent errors in low-performance gyroscopes. The resulting capability enables gyrocompassing, thereby eliminating the need for a subsequent prolonged filtering phase (fine alignment). Through the development of theory and experimental validation, we demonstrate that the improved initial conditions establish a new lower error bound, bringing affordable gyros one step closer to being utilized in high-end tactical tasks.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
416,850
1805.11706
Supervised Policy Update for Deep Reinforcement Learning
We propose a new sample-efficient methodology, called Supervised Policy Update (SPU), for deep reinforcement learning. Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space. Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples. The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem. We show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology. The SPU implementation is much simpler than TRPO. In terms of sample efficiency, our extensive experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and outperforms PPO in Atari video game tasks.
false
false
false
false
true
false
true
true
false
false
true
false
false
false
false
false
false
false
98,980
1606.00623
Spectrally-Precoded OFDM for 5G Wideband Operation in Fragmented sub-6GHz Spectrum
We consider spectrally-precoded OFDM waveforms for 5G wideband transmission in sub-6GHz band. In this densely packed spectrum, a low out-of-band (OOB) waveform is a critical 5G component to achieve the promised high spectral efficiency. By precoding data symbols before OFDM modulation, it is possible to achieve extremely low out-of-band emission with very sharp spectrum transition enabling an efficient and flexible usage of frequency resources. Spectrally-precoded OFDM shows promising results for reaching 5G targets in high-data rate enhanced mobile broadband and ultra-reliable low-latency communications use cases. Spectral precoding is particularly efficient for wideband transmission enabling short-time transmission, which will often require flexible fragmented spectrum usage.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
56,690
2206.02144
Product safety idioms: a method for building causal Bayesian networks for product safety and risk assessment
Idioms are small, reusable Bayesian network (BN) fragments that represent generic types of uncertain reasoning. This paper shows how idioms can be used to build causal BNs for product safety and risk assessment that use a combination of data and knowledge. We show that the specific product safety idioms that we introduce are sufficient to build full BN models to evaluate safety and risk for a wide range of products. The resulting models can be used by safety regulators and product manufacturers even when there are limited (or no) product testing data.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
300,767
2012.11243
Get It Scored Using AutoSAS -- An Automated System for Scoring Short Answers
In the era of MOOCs, online exams are taken by millions of candidates, where scoring short answers is an integral part. It becomes intractable to evaluate them by human graders. Thus, a generic automated system capable of grading these responses should be designed and deployed. In this paper, we present a fast, scalable, and accurate approach towards automated Short Answer Scoring (SAS). We propose and explain the design and development of a system for SAS, namely AutoSAS. Given a question along with its graded samples, AutoSAS can learn to grade that prompt successfully. This paper further lays down the features such as lexical diversity, Word2Vec, prompt, and content overlap that play a pivotal role in building our proposed model. We also present a methodology for indicating the factors responsible for scoring an answer. The trained model is evaluated on an extensively used public dataset, namely Automated Student Assessment Prize Short Answer Scoring (ASAP-SAS). AutoSAS shows state-of-the-art performance and achieves better results by over 8% in some of the question prompts as measured by Quadratic Weighted Kappa (QWK), showing performance comparable to humans.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
212,584
2006.07235
SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020)
We present the results and main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval 2020). The task involves three subtasks corresponding to the hierarchical taxonomy of the OLID schema (Zampieri et al., 2019a) from OffensEval 2019. The task featured five languages: English, Arabic, Danish, Greek, and Turkish for Subtask A. In addition, English also featured Subtasks B and C. OffensEval 2020 was one of the most popular tasks at SemEval-2020 attracting a large number of participants across all subtasks and also across all languages. A total of 528 teams signed up to participate in the task, 145 teams submitted systems during the evaluation period, and 70 submitted system description papers.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
181,731
2401.06354
Initial Analysis of Data-Driven Haptic Search for the Smart Suction Cup
Suction cups offer a useful gripping solution, particularly in industrial robotics and warehouse applications. Vision-based grasp algorithms, like Dex-Net, show promise but struggle to accurately perceive dark or reflective objects, sub-resolution features, and occlusions, resulting in suction cup grip failures. In our prior work, we designed the Smart Suction Cup, which estimates the flow state within the cup and provides a mechanically resilient end-effector that can inform arm feedback control through a sense of touch. We then demonstrated how this cup's signals enable haptically-driven search behaviors for better grasping points on adversarial objects. This prior work uses a model-based approach to predict the desired motion direction, which opens up the question: does a data-driven approach perform better? This technical report provides an initial analysis harnessing the data previously collected. Specifically, we compare the model-based method with a preliminary data-driven approach to accurately estimate lateral pose adjustment direction for improved grasp success.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
421,128
2403.16694
Design and Performance of Resonant Beam Communications -- Part II: Mobile Scenario
This two-part paper focuses on the system design and performance analysis for a point-to-point resonant beam communication (RBCom) system under both the quasi-static and mobile scenarios. Part I of this paper proposes a synchronization-based information transmission scheme and derives the capacity upper and lower bounds for the quasi-static channel case. In Part II, we address the mobile scenario, where the receiver is in relative motion to the transmitter, and derive a mobile RBCom channel model that jointly considers the Doppler effect, channel variation, and echo interference. With the obtained channel model, we prove that the channel gain of the mobile RBCom decreases as the number of transmitted frames increases, and thus show that the considered mobile RBCom terminates after the transmitter sends a certain number of frames without frequency compensation. By deriving an upper bound on the number of successfully transmitted frames, we formulate the throughput maximization problem for the considered mobile RBCom system, and solve it via a sequential parametric convex approximation (SPCA) method. Finally, simulation results validate the analysis of our proposed method in some typical scenarios.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
441,157
2203.14849
Safe Active Learning for Multi-Output Gaussian Processes
Multi-output regression problems are commonly encountered in science and engineering. In particular, multi-output Gaussian processes have emerged as a promising tool for modeling these complex systems since they can exploit the inherent correlations and provide reliable uncertainty estimates. In many applications, however, acquiring the data is expensive and safety concerns might arise (e.g. robotics, engineering). We propose a safe active learning approach for multi-output Gaussian process regression. This approach queries the most informative data or output taking the relatedness between the regressors and safety constraints into account. We prove the effectiveness of our approach by providing theoretical analysis and by demonstrating empirical results on simulated datasets and on a real-world engineering dataset. On all datasets, our approach shows improved convergence compared to its competitors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
288,152
2210.02636
Geodesic Graph Neural Network for Efficient Graph Representation Learning
Graph Neural Networks (GNNs) have recently been applied to graph learning tasks and achieved state-of-the-art (SOTA) results. However, many competitive methods run GNNs multiple times with subgraph extraction and customized labeling to capture information that is hard for normal GNNs to learn. Such operations are time-consuming and do not scale to large graphs. In this paper, we propose an efficient GNN framework called Geodesic GNN (GDGNN) that requires only one GNN run and injects conditional relationships between nodes into the model without labeling. This strategy effectively reduces the runtime of subgraph methods. Specifically, we view the shortest paths between two nodes as the spatial graph context of the neighborhood around them. The GNN embeddings of nodes on the shortest paths are used to generate geodesic representations. Conditioned on the geodesic representations, GDGNN can generate node, link, and graph representations that carry much richer structural information than plain GNNs. We theoretically prove that GDGNN is more powerful than plain GNNs. We present experimental results to show that GDGNN achieves highly competitive performance with SOTA GNN models on various graph learning tasks while taking significantly less time.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,721
2009.01030
Privacy Leakage of SIFT Features via Deep Generative Model based Image Reconstruction
Many practical applications, e.g., content based image retrieval and object recognition, heavily rely on the local features extracted from the query image. As these local features are usually exposed to untrustworthy parties, the privacy leakage problem of image local features has received increasing attention in recent years. In this work, we thoroughly evaluate the privacy leakage of Scale Invariant Feature Transform (SIFT), which is one of the most widely-used image local features. We first consider the case that the adversary can fully access the SIFT features, i.e., both the SIFT descriptors and the coordinates are available. We propose a novel end-to-end, coarse-to-fine deep generative model for reconstructing the latent image from its SIFT features. The designed deep generative model consists of two networks, where the first one attempts to learn the structural information of the latent image by transforming from SIFT features to Local Binary Pattern (LBP) features, while the second one aims to reconstruct the pixel values guided by the learned LBP. Compared with the state-of-the-art algorithms, the proposed deep generative model produces much improved reconstructed results over three public datasets. Furthermore, we address more challenging cases that only partial SIFT features (either SIFT descriptors or coordinates) are accessible to the adversary. It is shown that, if the adversary can only have access to the SIFT descriptors while not their coordinates, then the modest success of reconstructing the latent image can be achieved for highly-structured images (e.g., faces) and would fail in general settings. In addition, the latent image can be reconstructed with reasonably good quality solely from the SIFT coordinates. Our results would suggest that the privacy leakage problem can be largely avoided if the SIFT coordinates can be well protected.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
194,202
2408.00783
Data-driven Verification of DNNs for Object Recognition
The paper proposes a new testing approach for Deep Neural Networks (DNN) using gradient-free optimization to find perturbation chains that successfully falsify the tested DNN, going beyond existing grid-based or combinatorial testing. Applying it to an image segmentation task of detecting railway tracks in images, we demonstrate that the approach can successfully identify weaknesses of the tested DNN regarding particular combinations of common perturbations (e.g., rain, fog, blur, noise) on specific clusters of test images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
477,982
2309.11414
EDMP: Ensemble-of-costs-guided Diffusion for Motion Planning
Classical motion planning for robotic manipulation includes a set of general algorithms that aim to minimize a scene-specific cost of executing a given plan. This approach offers remarkable adaptability, as these algorithms can be directly used off-the-shelf for any new scene without needing specific training datasets. However, without a prior understanding of what diverse valid trajectories are and without specially designed cost functions for a given scene, the overall solutions tend to have low success rates. While deep-learning-based algorithms tremendously improve success rates, they are much harder to adopt without specialized training datasets. We propose EDMP, an Ensemble-of-costs-guided Diffusion for Motion Planning that aims to combine the strengths of classical and deep-learning-based motion planning. Our diffusion-based network is trained on a set of diverse kinematically valid trajectories. Like classical planning, for any new scene at the time of inference, we compute scene-specific costs such as "collision cost" and guide the diffusion to generate valid trajectories that satisfy the scene-specific constraints. Further, instead of a single cost function that may be insufficient in capturing diversity across scenes, we use an ensemble of costs to guide the diffusion process, significantly improving the success rate compared to classical planners. EDMP performs comparably with SOTA deep-learning-based methods while retaining the generalization capabilities primarily associated with classical planners.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
393,399
2302.14015
CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design
We formalize the problem of contextual optimization through the lens of Bayesian experimental design and propose CO-BED -- a general, model-agnostic framework for designing contextual experiments using information-theoretic principles. After formulating a suitable information-based objective, we employ black-box variational methods to simultaneously estimate it and optimize the designs in a single stochastic gradient scheme. In addition, to accommodate discrete actions within our framework, we propose leveraging continuous relaxation schemes, which can naturally be integrated into our variational objective. As a result, CO-BED provides a general and automated solution to a wide range of contextual optimization problems. We illustrate its effectiveness in a number of experiments, where CO-BED demonstrates competitive performance even when compared to bespoke, model-specific alternatives.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
348,125
2307.06630
Image Transformation Sequence Retrieval with General Reinforcement Learning
In this work, the novel Image Transformation Sequence Retrieval (ITSR) task is presented, in which a model must retrieve the sequence of transformations between two given images that act as source and target, respectively. Given certain characteristics of the challenge, such as the multiplicity of correct sequences or the correlation between consecutive steps of the process, we propose a solution to ITSR using a general model-based Reinforcement Learning method such as Monte Carlo Tree Search (MCTS), which is combined with a deep neural network. Our experiments provide a benchmark in both synthetic and real domains, where the proposed approach is compared with supervised training. The results report that a model trained with MCTS is able to outperform its supervised counterpart in both the simplest and the most complex cases. Our work draws interesting conclusions about the nature of ITSR and its associated challenges.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
379,129
1911.02524
A Spoken Dialogue System for Spatial Question Answering in a Physical Blocks World
The blocks world is a classic toy domain that has long been used to build and test spatial reasoning systems. Despite its relative simplicity, tackling this domain in its full complexity requires the agent to exhibit a rich set of functional capabilities, ranging from vision to natural language understanding. There is currently a resurgence of interest in solving problems in such limited domains using modern techniques. In this work we tackle spatial question answering in a holistic way, using a vision system, speech input and output mediated by an animated avatar, a dialogue system that robustly interprets spatial queries, and a constraint solver that derives answers based on 3-D spatial modeling. The contributions of this work include a semantic parser that maps spatial questions into logical forms consistent with a general approach to meaning representation, a dialog manager based on a schema representation, and a constraint solver for spatial questions that provides answers in agreement with human perception. These and other components are integrated into a multi-modal human-computer interaction pipeline.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
152,383
2305.05803
Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation
Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation. Most existing methods rely on Class Activation Maps (CAM) to derive pixel-level pseudo-labels and use them to train a fully supervised semantic segmentation model. Although these pseudo-labels are class-aware, indicating the coarse regions for particular classes, they are not object-aware and fail to delineate accurate object boundaries. To address this, we introduce a simple yet effective method harnessing the Segment Anything Model (SAM), a class-agnostic foundation model capable of producing fine-grained instance masks of objects, parts, and subparts. We use CAM pseudo-labels as cues to select and combine SAM masks, resulting in high-quality pseudo-labels that are both class-aware and object-aware. Our approach is highly versatile and can be easily integrated into existing WSSS methods without any modification. Despite its simplicity, our approach shows consistent gain over the state-of-the-art WSSS methods on both PASCAL VOC and MS-COCO datasets.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
363,284
2502.01375
Compact Rule-Based Classifier Learning via Gradient Descent
Rule-based models play a crucial role in scenarios that require transparency and accountable decision-making. However, they primarily consist of discrete parameters and structures, which presents challenges for scalability and optimization. In this work, we introduce a new rule-based classifier trained using gradient descent, in which the user can control the maximum number and length of the rules. For numerical partitions, the user can also control the partitions used with fuzzy sets, which also helps keep the number of partitions small. We perform a series of exhaustive experiments on $40$ datasets to show how this classifier performs in terms of accuracy and rule base size. Then, we compare our results with a genetic search that fits an equivalent classifier and with other explainable and non-explainable state-of-the-art classifiers. Our results show how our method can obtain compact rule bases that use significantly fewer patterns than other rule-based methods and perform better than other explainable classifiers.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
529,827
2205.08249
Learnable Optimal Sequential Grouping for Video Scene Detection
Video scene detection is the task of dividing videos into temporal semantic chapters. This is an important preliminary step before attempting to analyze heterogeneous video content. Recently, Optimal Sequential Grouping (OSG) was proposed as a powerful unsupervised solution to solve a formulation of the video scene detection problem. In this work, we extend the capabilities of OSG to the learning regime. By giving the capability to both learn from examples and leverage a robust optimization formulation, we can boost performance and enhance the versatility of the technology. We present a comprehensive analysis of incorporating OSG into deep learning neural networks under various configurations. These configurations include learning an embedding in a straightforward manner, a tailored loss designed to guide the solution of OSG, and an integrated model where the learning is performed through the OSG pipeline. With thorough evaluation and analysis, we assess the benefits and behavior of the various configurations, and show that our learnable OSG approach exhibits desirable behavior and enhanced performance compared to the state of the art.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
296,867
2305.04522
Event Knowledge Incorporation with Posterior Regularization for Event-Centric Question Answering
We propose a simple yet effective strategy to incorporate event knowledge extracted from event trigger annotations via posterior regularization to improve the event reasoning capability of mainstream question-answering (QA) models for event-centric QA. In particular, we define event-related knowledge constraints based on the event trigger annotations in the QA datasets, and subsequently use them to regularize the posterior answer output probabilities from the backbone pre-trained language models used in the QA setting. We explore two different posterior regularization strategies for extractive and generative QA separately. For extractive QA, the sentence-level event knowledge constraint is defined by assessing if a sentence contains an answer event or not, which is later used to modify the answer span extraction probability. For generative QA, the token-level event knowledge constraint is defined by comparing the generated token from the backbone language model with the answer event in order to introduce a reward or penalty term, which essentially adjusts the answer generative probability indirectly. We conduct experiments on two event-centric QA datasets, TORQUE and ESTER. The results show that our proposed approach can effectively inject event knowledge into existing pre-trained language models and achieves strong performance compared to existing QA models in answer evaluation. Code and models can be found at: https://github.com/LuJunru/EventQAviaPR.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
362,803
2004.07982
Analytical Factors for Describing the Control Ability of Linear Discrete-time Systems
In this paper, we first discuss analytical volume computations for the zonotopes generated by a matrix pair with $n$ distinct or repeated real eigenvalues, and then, by deconstructing the volume-computing equations, construct three classes of shape factors. These analytical volume and shape factors accurately describe the size and shape of the zonotopes. Because the control ability of LDT systems with unit input variables (i.e., input variables bounded by 1) is directly related to the controllability region \cite{zhaomw202003}, these analytical expressions for the volume and shape factors allow the control ability to be quantified conveniently. By choosing these analytical expressions as the objective function or as constraint conditions, a novel optimization problem and solution method for the control ability can be founded. Based on this optimization, not only the open-loop control ability but also certain closed-loop control performance measures, such as the optimal time waste and the robustness of the control strategy, can be promoted, according to the conclusions in \cite{zhaomw202003}.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
172,922
1810.11787
A Hitchhiker's Guide On Distributed Training of Deep Neural Networks
Deep learning has led to tremendous advancements in the field of Artificial Intelligence. One caveat, however, is the substantial amount of compute needed to train these deep learning models. Training on a benchmark dataset like ImageNet on a single machine with a modern GPU can take up to a week, and distributing training across multiple machines has been observed to bring this time down drastically. Recent work has brought ImageNet training time down to as low as 4 minutes by using a cluster of 2048 GPUs. This paper surveys the various algorithms and techniques used to distribute training and presents the current state of the art for a modern distributed training framework. More specifically, we explore the synchronous and asynchronous variants of distributed Stochastic Gradient Descent, various All-Reduce gradient aggregation strategies, and best practices for obtaining higher throughput and lower latency over a cluster, such as mixed precision training, large batch training, and gradient compression.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
111,597
1504.00657
Eliciting Disease Data from Wikipedia Articles
Traditional disease surveillance systems suffer from several disadvantages, including reporting lags and antiquated technology, that have caused a movement towards internet-based disease surveillance systems. Internet systems are particularly attractive for disease outbreaks because they can provide data in near real-time and can be verified by individuals around the globe. However, most existing systems have focused on disease monitoring and do not provide a data repository for policy makers or researchers. In order to fill this gap, we analyzed Wikipedia article content. We demonstrate how a named-entity recognizer can be trained to tag case counts, death counts, and hospitalization counts in the article narrative that achieves an F1 score of 0.753. We also show, using the 2014 West African Ebola virus disease epidemic article as a case study, that there are detailed time series data that are consistently updated that closely align with ground truth data. We argue that Wikipedia can be used to create the first community-driven open-source emerging disease detection, monitoring, and repository system.
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
false
false
41,721
2209.07714
Variational quantum algorithm for measurement extraction from the Navier-Stokes, Einstein, Maxwell, B-type, Lin-Tsien, Camassa-Holm, DSW, H-S, KdV-B, non-homogeneous KdV, generalized KdV, KdV, translational KdV, sKdV, B-L and Airy equations
Classical-quantum hybrid algorithms have recently garnered significant attention; they are characterized by combining quantum and classical computing protocols to obtain readout from quantum circuits of interest. Recent progress due to Lubasch et al. (2019) provides readout for solutions to the Schrodinger and inviscid Burgers equations, by making use of a new variational quantum algorithm (VQA) which determines the ground state of a cost function expressed with a superposition of expectation values and variational parameters. In the following, we analyze additional computational prospects in which the VQA can reliably produce solutions to other PDEs that are comparable to solutions previously realized classically, characterized with noiseless quantum simulations. To determine the range of nonlinearities that the algorithm can process for other IVPs, we study several PDEs, beginning with the Navier-Stokes equations and progressing to other equations underlying physical phenomena ranging from electromagnetism to gravitation and wave propagation, through simulations of the Einstein, Boussinesq-type, Lin-Tsien, Camassa-Holm, Drinfeld-Sokolov-Wilson (DSW), and Hunter-Saxton equations. To formulate the optimization routines that the VQA undergoes for numerical approximations of solutions obtained as readout from quantum circuits, cost functions corresponding to each PDE are provided in the supplementary section, after which simulation results from hundreds of ZGR-QFT ansätze are generated.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
317,859
2203.08029
Optimal dispatch schedule for a fast EV charging station with account to supplementary battery health degradation
This paper investigates the usage of battery storage systems in a fast charging station (FCS) for participation in energy markets and charging electrical vehicles (EVs) simultaneously. In particular, we focus on optimizing the scheduling strategies to reduce the overall operational cost of the system over its lifetime by combining the model of battery degradation and energy arbitrage. We implement the battery degradation as a penalty term within an energy arbitrage model and show that the battery degradation plays an important role in the optimal energy dispatch scheduling of the FCS system. In this case study, with different penalty coefficients for the battery degradation penalty term, it is found that including the penalty of battery usage in the scheduling model will reduce the number of small charging/discharging cycles, thereby prolonging the battery lifetime, while maintaining near optimal revenue from grid services.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
285,648
1911.01106
Singular points detection with semantic segmentation networks
Singular point detection is one of the most classical and important problems in the field of fingerprint recognition. However, current detection rates of singular points are still unsatisfactory, especially for low-quality fingerprints. Compared with traditional image processing-based detection methods, methods based on deep learning need only the original fingerprint image, not the fingerprint orientation field. In this paper, unlike other deep learning-based detection methods, we treat singular point detection as a semantic segmentation problem and use only a small amount of data for training. Furthermore, we propose a new convolutional neural network called SinNet to extract the singular regions of interest and then use a blob detection method called SimpleBlobDetector to locate the singular points. The experiments are carried out on the test dataset from SPD2010, and the proposed method performs much better than the other advanced methods in most aspects. Compared with the state-of-the-art algorithms in SPD2010, our method achieves an increase of 11% in the percentage of correctly detected fingerprints and an increase of more than 18% in the core detection rate.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
152,018
1905.08910
A Neural-Symbolic Architecture for Inverse Graphics Improved by Lifelong Meta-Learning
We follow the idea of formulating vision as inverse graphics and propose a new type of element for this task, a neural-symbolic capsule. It is capable of de-rendering a scene into semantic information feed-forward, as well as rendering it feed-backward. An initial set of capsules for graphical primitives is obtained from a generative grammar and connected into a full capsule network. Lifelong meta-learning continuously improves this network's detection capabilities by adding capsules for new and more complex objects it detects in a scene using few-shot learning. Preliminary results demonstrate the potential of our novel approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
131,605
2310.02930
Small-Disturbance Input-to-State Stability of Perturbed Gradient Flows: Applications to LQR Problem
This paper studies the effect of perturbations on the gradient flow of a general nonlinear programming problem, where the perturbation may arise from inaccurate gradient estimation in the setting of data-driven optimization. Under suitable conditions on the objective function, the perturbed gradient flow is shown to be small-disturbance input-to-state stable (ISS), which implies that, in the presence of a small-enough perturbation, the trajectories of the perturbed gradient flow must eventually enter a small neighborhood of the optimum. This work was motivated by the question of robustness of direct methods for the linear quadratic regulator problem, and specifically the analysis of the effect of perturbations caused by gradient estimation or round-off errors in policy optimization. We show small-disturbance ISS for three of the most common optimization algorithms: standard gradient flow, natural gradient flow, and Newton gradient flow.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
397,046
2410.14738
Advancements In Heart Disease Prediction: A Machine Learning Approach For Early Detection And Risk Assessment
The primary aim of this paper is to comprehend, assess, and analyze the role, relevance, and efficiency of machine learning models in predicting heart disease risks using clinical data. While the importance of heart disease risk prediction cannot be overstated, the application of machine learning (ML) in identifying and evaluating the impact of various features on the classification of patients with and without heart disease, as well as in generating a reliable clinical dataset, is equally significant. This study relies primarily on cross-sectional clinical data. The ML approach is designed to enhance the consideration of various clinical features in the heart disease prognosis process. Some features emerge as strong predictors, adding significant value. The paper evaluates seven ML classifiers: Logistic Regression, Random Forest, Decision Tree, Naive Bayes, k-Nearest Neighbors, Neural Networks, and Support Vector Machine (SVM). The performance of each model is assessed based on accuracy metrics. Notably, the Support Vector Machine (SVM) demonstrates the highest accuracy at 91.51%, confirming its superiority among the evaluated models in terms of predictive capability. The overall findings of this research highlight the advantages of advanced computational methodologies in the evaluation, prediction, improvement, and management of cardiovascular risks. In other words, the strong performance of the SVM model illustrates its applicability and value in clinical settings, paving the way for further advancements in personalized medicine and healthcare.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
500,181
2003.06917
End-to-End Velocity Estimation For Autonomous Racing
Velocity estimation plays a central role in driverless vehicles, but standard and affordable methods struggle to cope with extreme scenarios like aggressive maneuvers due to the presence of high sideslip. To solve this, autonomous race cars are usually equipped with expensive external velocity sensors. In this paper, we present an end-to-end recurrent neural network that takes available raw sensors as input (IMU, wheel odometry, and motor currents) and outputs velocity estimates. The results are compared to two state-of-the-art Kalman filters, which respectively include and exclude expensive velocity sensors. All methods have been extensively tested on a formula student driverless race car with very high sideslip (10{\deg} at the rear axle) and slip ratio (~20%), operating close to the limits of handling. The proposed network is able to estimate lateral velocity up to 15x better than the Kalman filter with the equivalent sensor input and matches (0.06 m/s RMSE) the Kalman filter with the expensive velocity sensor setup.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
168,265
1411.2328
Modeling Word Relatedness in Latent Dirichlet Allocation
The standard LDA model suffers from the problem that the topic assignment of each word is independent, and word correlation is hence neglected. To address this problem, in this paper, we propose a model called Word Related Latent Dirichlet Allocation (WR-LDA), which incorporates word correlation into LDA topic models. This leads to new capabilities that the standard LDA model does not have, such as estimating infrequently occurring words or multi-language topic modeling. Experimental results demonstrate the effectiveness of our model compared with the standard LDA model.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
37,403
1806.07916
RSDD-Time: Temporal Annotation of Self-Reported Mental Health Diagnoses
Self-reported diagnosis statements have been widely employed in studying language related to mental health in social media. However, existing research has largely ignored the temporality of mental health diagnoses. In this work, we introduce RSDD-Time: a new dataset of 598 manually annotated self-reported depression diagnosis posts from Reddit that include temporal information about the diagnosis. Annotations include whether a mental health condition is present and how recently the diagnosis happened. Furthermore, we include exact temporal spans that relate to the date of diagnosis. This information is valuable for various computational methods to examine mental health through social media because one's mental health state is not static. We also test several baseline classification and extraction approaches, which suggest that extracting temporal information from self-reported diagnosis statements is challenging.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
101,046
1811.01174
Nonparallel Emotional Speech Conversion
We propose a nonparallel data-driven emotional speech conversion method. It enables the transfer of emotion-related characteristics of a speech signal while preserving the speaker's identity and linguistic content. Most existing approaches require parallel data and time alignment, which is not available in most real applications. We achieve nonparallel training based on an unsupervised style transfer technique, which learns a translation model between two distributions instead of a deterministic one-to-one mapping between paired examples. The conversion model consists of an encoder and a decoder for each emotion domain. We assume that the speech signal can be decomposed into an emotion-invariant content code and an emotion-related style code in latent space. Emotion conversion is performed by extracting and recombining the content code of the source speech and the style code of the target emotion. We tested our method on a nonparallel corpus with four emotions. Both subjective and objective evaluations show the effectiveness of our approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
112,293
1303.0058
A Cooperative MARC Scheme Using Analogue Network Coding to Achieve Second-Order Diversity
A multiple access relay channel (MARC) is considered in which analogue-like network coding is implemented at the relay node. This analogue coding is a simple addition of the received signals at the relay node. Using the "nulling detection" structure employed in the V-BLAST receiver, we propose a detection scheme at the destination which is able to provide a diversity order of two for all users. We analytically evaluate the performance of our proposed scheme for the MARC with two users, where tight upper bounds for both uncoded and convolutionally coded transmission blocks are provided. We verify our analytical evaluations by simulations and compare the results with those of noncooperative transmission and Alamouti's scheme for the same transmission power and rate. Our results indicate that while our proposed scheme shows performance comparable to Alamouti's scheme, it substantially outperforms noncooperative transmission.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
22,525
2311.04378
Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models
Watermarking generative models consists of planting a statistical signal (watermark) in a model's output so that it can be later verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes. We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used. Our attack is based on two assumptions: (1) The attacker has access to a "quality oracle" that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) The attacker has access to a "perturbation oracle" which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only be easier to satisfy over time as models grow in capabilities and modalities. We demonstrate the feasibility of our attack by instantiating it to attack three existing watermarking schemes for large language models: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.
false
false
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
406,201
2405.15964
A hierarchical Bayesian model for syntactic priming
The effect of syntactic priming exhibits three well-documented empirical properties: the lexical boost, the inverse frequency effect, and the asymmetrical decay. We aim to show how these three empirical phenomena can be reconciled in a general learning framework, the hierarchical Bayesian model (HBM). The model represents syntactic knowledge in a hierarchical structure of syntactic statistics, where a lower level represents the verb-specific biases of syntactic decisions, and a higher level represents the abstract bias as an aggregation of verb-specific biases. This knowledge is updated in response to experience by Bayesian inference. In simulations, we show that the HBM captures the above-mentioned properties of syntactic priming. The results indicate that some properties of priming which are usually explained by a residual activation account can also be explained by an implicit learning account. We also discuss the model's implications for the lexical basis of syntactic priming.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
457,183
2401.01522
LORE++: Logical Location Regression Network for Table Structure Recognition with Pre-training
Table structure recognition (TSR) aims at extracting tables in images into machine-understandable formats. Recent methods solve this problem by predicting the adjacency relations of detected cell boxes or learning to directly generate the corresponding markup sequences from the table images. However, existing approaches either count on additional heuristic rules to recover the table structures, or face challenges in capturing long-range dependencies within tables, resulting in increased complexity. In this paper, we propose an alternative paradigm. We model TSR as a logical location regression problem and propose a new TSR framework called LORE, standing for LOgical location REgression network, which for the first time regresses logical location as well as spatial location of table cells in a unified network. Our proposed LORE is conceptually simpler, easier to train, and more accurate than other paradigms of TSR. Moreover, inspired by the persuasive success of pre-trained models on a number of computer vision and natural language processing tasks, we propose two pre-training tasks to enrich the spatial and logical representations at the feature level of LORE, resulting in an upgraded version called LORE++. The incorporation of pre-training in LORE++ has proven to enjoy significant advantages, leading to a substantial enhancement in terms of accuracy, generalization, and few-shot capability compared to its predecessor. Experiments on standard benchmarks against methods of previous paradigms demonstrate the superiority of LORE++, which highlights the potential and promising prospect of the logical location regression paradigm for TSR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
419,389
cs/0006021
Compiling Language Models from a Linguistically Motivated Unification Grammar
Systems now exist which are able to compile unification grammars into language models that can be included in a speech recognizer, but it is so far unclear whether non-trivial linguistically principled grammars can be used for this purpose. We describe a series of experiments which investigate the question empirically, by incrementally constructing a grammar and discovering what problems emerge when successively larger versions are compiled into finite state graph representations and used as language models for a medium-vocabulary recognition task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
537,130
1802.00176
Perceptual Compressive Sensing
Compressive sensing (CS) works to acquire measurements at a sub-Nyquist rate and recover the scene images. Existing CS methods always recover scene images at the pixel level. This causes over-smoothing of the recovered images and a lack of structure information, especially at a low measurement rate. To overcome this drawback, in this paper we propose perceptual CS to obtain high-level structured recovery. Our task no longer focuses on the pixel level; instead, we aim for a better visual effect. In detail, we employ a perceptual loss, defined at the feature level, to enhance the structure information of the recovered images. Experiments show that our method achieves better visual results with stronger structure information than existing CS methods at the same measurement rate.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
89,363
1905.04360
Kesten-McKay law for random subensembles of Paley equiangular tight frames
We apply the method of moments to prove a recent conjecture of Haikin, Zamir and Gavish (2017) concerning the distribution of the singular values of random subensembles of Paley equiangular tight frames. Our analysis applies more generally to real equiangular tight frames of redundancy 2, and we suspect similar ideas will eventually produce more general results for arbitrary choices of redundancy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
130,436
1602.03725
A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation
Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a big challenge, since the visibility function that indicates if a surface point is seen from a camera can often not be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields pose similarity energies that are smooth, analytically differentiable, and efficient to optimize, with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
52,044
2409.17981
BlinkTrack: Feature Tracking over 100 FPS via Events and Images
Feature tracking is crucial for structure from motion (SFM), simultaneous localization and mapping (SLAM), object tracking, and various other computer vision tasks. Event cameras, known for their high temporal resolution and ability to capture asynchronous changes, have gained significant attention for their potential in feature tracking, especially in challenging conditions. However, event cameras lack the fine-grained texture information that conventional cameras provide, leading to error accumulation in tracking. To address this, we propose a novel framework, BlinkTrack, which integrates event data with RGB images for high-frequency feature tracking. Our method extends the traditional Kalman filter into a learning-based framework, utilizing differentiable Kalman filters in both event and image branches. This approach improves single-modality tracking, resolves ambiguities, and supports asynchronous data fusion. We also introduce new synthetic and augmented datasets to better evaluate our model. Experimental results indicate that BlinkTrack significantly outperforms existing event-based methods, exceeding 100 FPS with preprocessed event data and 80 FPS with multi-modality data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
492,049
2408.04556
BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models
Large language models (LLMs) have demonstrated remarkable proficiency across various natural language processing (NLP) tasks. However, adapting LLMs to downstream applications requires computationally intensive and memory-demanding fine-tuning procedures. To alleviate these burdens, parameter-efficient fine-tuning (PEFT) techniques have emerged as a promising approach to tailor LLMs with minimal computational overhead. While PEFT methods offer substantial advantages, they do not fully address the pervasive issue of bias propagation from pre-training data. This work introduces Bias-Alleviating Low-Rank Adaptation (BA-LoRA), a novel PEFT method designed to counteract bias inheritance. BA-LoRA incorporates three distinct regularization terms: (1) a consistency regularizer, (2) a diversity regularizer, and (3) a singular value decomposition regularizer. These regularizers aim to enhance the models' consistency, diversity, and generalization capabilities during fine-tuning. We conduct extensive experiments on natural language understanding (NLU) and natural language generation (NLG) tasks using prominent LLMs such as LLaMA, Mistral, and Gemma. The results demonstrate that BA-LoRA outperforms LoRA and its state-of-the-art variants. Moreover, our method effectively mitigates the adverse effects of pre-training bias, leading to more reliable and robust model outputs. The code is available at https://github.com/cyp-jlu-ai/BA-LoRA.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
479,425
1810.00122
A Quantitative Analysis of the Effect of Batch Normalization on Gradient Descent
Despite its empirical success and recent theoretical progress, there generally lacks a quantitative analysis of the effect of batch normalization (BN) on the convergence and stability of gradient descent. In this paper, we provide such an analysis on the simple problem of ordinary least squares (OLS). Since the precise dynamical properties of gradient descent (GD) are completely known for the OLS problem, it allows us to isolate and compare the additional effects of BN. More precisely, we show that unlike GD, gradient descent with BN (BNGD) converges for arbitrary learning rates for the weights, and the convergence remains linear under mild conditions. Moreover, we quantify two different sources of acceleration of BNGD over GD -- one due to over-parameterization which improves the effective condition number and another due to having a large range of learning rates giving rise to fast descent. These phenomena set BNGD apart from GD and could account for much of its robustness properties. These findings are confirmed quantitatively by numerical experiments, which further show that many of the uncovered properties of BNGD in OLS are also observed qualitatively in more complex supervised learning problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
109,097
1204.1276
Distribution-Dependent Sample Complexity of Large Margin Learning
We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization: We introduce the margin-adapted dimension, which is a simple function of the second order statistics of the data distribution, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the margin-adapted dimension of the data distribution. The upper bounds are universal, and the lower bounds hold for the rich family of sub-Gaussian distributions with independent features. We conclude that this new quantity tightly characterizes the true sample complexity of large-margin classification. To prove the lower bound, we develop several new tools of independent interest. These include new connections between shattering and hardness of learning, new properties of shattering with linear classifiers, and a new lower bound on the smallest eigenvalue of a random Gram matrix generated by sub-Gaussian variables. Our results can be used to quantitatively compare large margin learning to other learning rules, and to improve the effectiveness of methods that use sample complexity bounds, such as active learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
15,305
2106.08596
Temporal Convolution Networks with Positional Encoding for Evoked Expression Estimation
This paper presents an approach for the Evoked Expressions from Videos (EEV) challenge, which aims to predict evoked facial expressions from video. We take advantage of pre-trained models on large-scale datasets in computer vision and audio signals to extract the deep representation of timestamps in the video. A temporal convolution network, rather than an RNN-like architecture, is used to explore temporal relationships due to its advantage in memory consumption and parallelism. Furthermore, to address the missing annotations of some timestamps, positional encoding is employed to ensure continuity of input data when discarding these timestamps during training. We achieved state-of-the-art results on the EEV challenge with a Pearson correlation coefficient of 0.05477, the first-ranked performance in the EEV 2021 challenge.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
241,348
1307.0193
A Sampling Algebra for Aggregate Estimation
As of 2005, sampling has been incorporated in all major database systems. While efficient sampling techniques are realizable, determining the accuracy of an estimate obtained from the sample is still an unresolved problem. In this paper, we present a theoretical framework that allows an elegant treatment of the problem. We base our work on generalized uniform sampling (GUS), a class of sampling methods that subsumes a wide variety of sampling techniques. We introduce a key notion of equivalence that allows GUS sampling operators to commute with selection and join, and derivation of confidence intervals. We illustrate the theory through extensive examples and give indications on how to use it to provide meaningful estimations in database systems.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
25,530
2406.15571
Texture Feature Analysis for Classification of Early-Stage Prostate Cancer in mpMRI
Magnetic resonance imaging (MRI) has become a crucial tool in the diagnosis and staging of prostate cancer, owing to its superior tissue contrast. However, it also creates large volumes of data that must be assessed by trained experts, a time-consuming and laborious task. This has prompted the development of machine learning tools for the automation of prostate cancer (PCa) risk classification based on multiple MRI modalities (T2W, ADC, and high-b-value DWI). Understanding and interpreting the predictions made by the models, however, remains a challenge. We analyze Random Forests (RF) and Support Vector Machines (SVM) for two complementary datasets, the public Prostate-X dataset and an in-house, mostly early-stage PCa dataset, to elucidate the contributions made by first-order statistical features, Haralick texture features, and local binary patterns to the classification. Using correlation analysis and Shapley impact scores, we find that many of the features typically used are strongly correlated, and that the majority of features have negligible impact on the classification. We identify a small set of features that determine the classification outcome, which may aid the development of explainable AI approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
466,779
1905.04611
Data description and retrieval using periods represented by uncertain time intervals
Time periods are frequently used to specify time in metadata and retrieval. However, it is not easy to describe and retrieve information about periods, because the temporal ranges represented by periods are often ambiguous. This is because these temporal ranges do not have fixed beginning and end points. To solve this problem, basic logics to describe and process uncertain time intervals were developed in this study. An uncertain time interval is represented as a set of time intervals that indicate states when the uncertain time interval is determined. Based on this concept, a logic to retrieve uncertain time intervals satisfying a given condition was established, and it was revealed that retrieval results belong to three states: reliable, impossible, and possible matches. Additionally, to describe data about uncertain periods, an ontology (the HuTime Ontology) was constructed based on the logic. This ontology is characterized by the fact that uncertain time intervals can be defined recursively. It is expected that more data about time periods will be created and released using the result of this study.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
130,514
1912.00342
Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives
Modern dialog managers face the challenge of having to fulfill human-level conversational skills as part of common user expectations, including but not limited to discourse with no clear objective. Along with these requirements, agents are expected to extrapolate intent from the user's dialogue even when subjected to non-canonical forms of speech. This depends on the agent's comprehension of paraphrased forms of such utterances. Especially in low-resource languages, the lack of data is a bottleneck that prevents advancements of the comprehension performance for these types of agents. In this regard, here we demonstrate the necessity of extracting the intent argument of non-canonical directives in a natural language format, which may yield more accurate parsing, and suggest guidelines for building a parallel corpus for this purpose. Following the guidelines, we construct a Korean corpus of 50K instances of question/command-intent pairs, including the labels for classification of the utterance type. We also propose a method for mitigating class imbalance, demonstrating the potential applications of the corpus generation method and its multilingual extensibility.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
155,739
1807.03537
Soft-TTL: Time-Varying Fractional Caching
Standard Time-to-Live (TTL) cache management prescribes the storage of entire files, or possibly fractions thereof, for a given amount of time after a request. As a generalization of this approach, this work proposes the storage of a time-varying, diminishing, fraction of a requested file. Accordingly, the cache progressively evicts parts of the file over an interval of time following a request. The strategy, which is referred to as soft-TTL, is justified by the fact that traffic traces are often characterized by arrival processes that display a decreasing, but non-negligible, probability of observing a request as the time elapsed since the last request increases. An optimization-based analysis of soft-TTL is presented, demonstrating the important role played by the hazard function of the inter-arrival request process, which measures the likelihood of observing a request as a function of the time since the most recent request.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
102,552
2005.07776
Efficient Federated Learning over Multiple Access Channel with Differential Privacy Constraints
In this paper, the problem of federated learning (FL) through digital communication between clients and a parameter server (PS) over a multiple access channel (MAC), also subject to differential privacy (DP) constraints, is studied. More precisely, we consider the setting in which clients in a centralized network are prompted to train a machine learning model using their local datasets. The information exchange between the clients and the PS takes place over a MAC and must also preserve the DP of the local datasets. Accordingly, the objective of the clients is to minimize the training loss subject to (i) rate constraints for reliable communication over the MAC and (ii) a DP constraint over the local datasets. For this optimization scenario, we propose a novel consensus scheme in which digital distributed stochastic gradient descent (D-DSGD) is performed by each client. To preserve DP, a digital artificial noise is also added by the users to the locally quantized gradients. The performance of the scheme is evaluated in terms of the convergence rate and DP level for a given MAC capacity. The performance is optimized over the choice of the quantization levels and the artificial noise parameters. Numerical evaluations are presented to validate the performance of the proposed scheme.
false
false
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
true
177,375
2111.08330
Bayesian Optimization for Cascade-type Multi-stage Processes
Complex processes in science and engineering are often formulated as multistage decision-making problems. In this paper, we consider a type of multistage decision-making process called a cascade process. A cascade process is a multistage process in which the output of one stage is used as an input for the subsequent stage. When the cost of each stage is expensive, it is difficult to search for the optimal controllable parameters for each stage exhaustively. To address this problem, we formulate the optimization of the cascade process as an extension of the Bayesian optimization framework and propose two types of acquisition functions based on credible intervals and expected improvement. We investigate the theoretical properties of the proposed acquisition functions and demonstrate their effectiveness through numerical experiments. In addition, we consider an extension called suspension setting in which we are allowed to suspend the cascade process at the middle of the multistage decision-making process that often arises in practical problems. We apply the proposed method in a test problem involving a solar cell simulator, which was the motivation for this study.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
266,651
2108.00596
GTNet:Guided Transformer Network for Detecting Human-Object Interactions
The human-object interaction (HOI) detection task refers to localizing humans, localizing objects, and predicting the interactions between each human-object pair. HOI is considered one of the fundamental steps in truly understanding complex visual scenes. For detecting HOI, it is important to utilize relative spatial configurations and object semantics to find salient spatial regions of images that highlight the interactions between human object pairs. This issue is addressed by the novel self-attention based guided transformer network, GTNet. GTNet encodes this spatial contextual information in human and object visual features via self-attention while achieving state of the art results on both the V-COCO and HICO-DET datasets. Code will be made available online.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
248,765
2406.18055
Filtering Reconfigurable Intelligent Computational Surface for RF Spectrum Purification
The increasing demand for communication is degrading the electromagnetic (EM) transmission environment due to severe EM interference, significantly reducing the efficiency of the radio frequency (RF) spectrum. Metasurfaces, a promising technology for controlling desired EM waves, have recently received significant attention from both academia and industry. However, the potential impact of out-of-band signals has been largely overlooked, leading to RF spectrum pollution and degradation of wireless transmissions. To address this issue, we propose a novel surface structure called the Filtering Reconfigurable Intelligent Computational Surface (FRICS). We introduce two types of FRICS structures: one that dynamically reflects resonance band signals through a tunable spatial filter while absorbing out-of-band signals using metamaterials and the other one that dynamically amplifies in-band signals using computational metamaterials while reflecting out-of-band signals. To evaluate the performance of FRICS, we implement it in device-to-device (D2D) communication and vehicular-to-everything (V2X) scenarios. The experiments demonstrate the superiority of FRICS in signal-to-interference-noise ratio (SINR) and energy efficiency (EE). Finally, we discuss the critical challenges faced and promising techniques for implementing FRICS in future wireless systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
467,854
2409.19414
Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks
Message Passing Graph Neural Networks (MPGNNs) have emerged as the preferred method for modeling complex interactions across diverse graph entities. While the theory of such models is well understood, their aggregation module has not received sufficient attention. Sum-based aggregators have solid theoretical foundations regarding their separation capabilities. However, practitioners often prefer using more complex aggregations and mixtures of diverse aggregations. In this work, we unveil a possible explanation for this gap. We claim that sum-based aggregators fail to "mix" features belonging to distinct neighbors, preventing them from succeeding at downstream tasks. To this end, we introduce Sequential Signal Mixing Aggregation (SSMA), a novel plug-and-play aggregation for MPGNNs. SSMA treats the neighbor features as 2D discrete signals and sequentially convolves them, inherently enhancing the ability to mix features attributed to distinct neighbors. By performing extensive experiments, we show that when combining SSMA with well-established MPGNN architectures, we achieve substantial performance gains across various benchmarks, achieving new state-of-the-art results in many settings. We published our code at \url{https://almogdavid.github.io/SSMA/}
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
492,665
2305.12584
Sparse Representer Theorems for Learning in Reproducing Kernel Banach Spaces
Sparsity of a learning solution is a desirable feature in machine learning. Certain reproducing kernel Banach spaces (RKBSs) are appropriate hypothesis spaces for sparse learning methods. The goal of this paper is to understand what kind of RKBSs can promote sparsity for learning solutions. We consider two typical learning models in an RKBS: the minimum norm interpolation (MNI) problem and the regularization problem. We first establish an explicit representer theorem for solutions of these problems, which represents the extreme points of the solution set by a linear combination of the extreme points of the subdifferential set of the norm function, which is data-dependent. We then propose sufficient conditions on the RKBS that can transform the explicit representation of the solutions to a sparse kernel representation having fewer terms than the number of the observed data. Under the proposed sufficient conditions, we investigate the role of the regularization parameter on sparsity of the regularized solutions. We further show that two specific RKBSs, the sequence space $\ell_1(\mathbb{N})$ and the measure space, can have sparse representer theorems for both MNI and regularization models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
366,074
1902.05978
GANFIT: Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction
In the past few years, a lot of work has been done towards reconstructing the 3D facial structure from single images by capitalizing on the power of Deep Convolutional Neural Networks (DCNNs). In the most recent works, differentiable renderers were employed in order to learn the relationship between the facial identity features and the parameters of a 3D morphable model for shape and texture. The texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images. In all cases, the quality of the facial texture reconstruction of the state-of-the-art methods is still not capable of modeling textures in high fidelity. In this paper, we take a radically different approach and harness the power of Generative Adversarial Networks (GANs) and DCNNs in order to reconstruct the facial texture and shape from single images. That is, we utilize GANs to train a very powerful generator of facial texture in UV space. Then, we revisit the original 3D Morphable Models (3DMMs) fitting approaches making use of non-linear optimization to find the optimal latent parameters that best reconstruct the test image but under a new perspective. We optimize the parameters with the supervision of pretrained deep identity features through our end-to-end differentiable framework. We demonstrate excellent results in photorealistic and identity preserving 3D face reconstructions and achieve for the first time, to the best of our knowledge, facial texture reconstruction with high-frequency details.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
121,649
2210.02159
Differentiable Mathematical Programming for Object-Centric Representation Learning
We propose topology-aware feature partitioning into $k$ disjoint partitions for given scene features as a method for object-centric representation learning. To this end, we propose to use minimum $s$-$t$ graph cuts as a partitioning method which is represented as a linear program. The method is topologically aware since it explicitly encodes neighborhood relationships in the image graph. To solve the graph cuts our solution relies on an efficient, scalable, and differentiable quadratic programming approximation. Optimizations specific to cut problems allow us to solve the quadratic programs and compute their gradients significantly more efficiently compared with the general quadratic programming approach. Our results show that our approach is scalable and outperforms existing methods on object discovery tasks with textured scenes and objects.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
321,546
1310.8467
Reinforcement Learning Framework for Opportunistic Routing in WSNs
Routing packets opportunistically is an essential part of multihop ad hoc wireless sensor networks. The existing routing techniques are not adaptively opportunistic. In this paper we propose an adaptive opportunistic routing scheme that routes packets opportunistically in order to ensure that packet loss is avoided. Learning and routing are combined in a framework that explores the optimal routing possibilities. We implemented this reinforcement learning framework using a custom simulator. The experimental results revealed that the scheme is able to exploit opportunistic routing to optimize packet delivery even though the network structure is unknown.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
28,106
2002.10376
The Two Regimes of Deep Network Training
Learning rate schedule has a major impact on the performance of deep learning models. Still, the choice of a schedule is often heuristical. We aim to develop a precise understanding of the effects of different learning rate schedules and the appropriate way to select them. To this end, we isolate two distinct phases of training: the first, which we refer to as the "large-step" regime, exhibits a rather poor performance from an optimization point of view but is the primary contributor to model generalization; the latter, "small-step" regime exhibits much more "convex-like" optimization behavior but, used in isolation, produces models that generalize poorly. We find that by treating these regimes separately, and specializing our training algorithm to each one of them, we can significantly simplify learning rate schedules.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
165,384
2104.11014
Network Space Search for Pareto-Efficient Spaces
Network spaces have been known as a critical factor in both handcrafted network designs and defining search spaces for Neural Architecture Search (NAS). However, an effective space involves tremendous prior knowledge and/or manual effort, and additional constraints are required to discover efficiency-aware architectures. In this paper, we define a new problem, Network Space Search (NSS), as searching for favorable network spaces instead of a single architecture. We propose an NSS method to directly search for efficiency-aware network spaces automatically, reducing the manual effort and immense cost in discovering satisfactory ones. The resultant network spaces, named Elite Spaces, are discovered from Expanded Search Space with minimal human expertise imposed. The Pareto-efficient Elite Spaces are aligned with the Pareto front under various complexity constraints and can be further served as NAS search spaces, benefiting differentiable NAS approaches (e.g. in CIFAR-100, an average 2.3% lower error rate and 3.7% closer to target constraint than the baseline with around 90% fewer samples required to find satisfactory networks). Moreover, our NSS approach is capable of searching for superior spaces in future unexplored spaces, revealing great potential in searching for network spaces automatically. Website: https://minhungchen.netlify.app/publication/nss.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
231,789
0906.2635
Bayesian History Reconstruction of Complex Human Gene Clusters on a Phylogeny
Clusters of genes that have evolved by repeated segmental duplication present difficult challenges throughout genomic analysis, from sequence assembly to functional analysis. Improved understanding of these clusters is of utmost importance, since they have been shown to be the source of evolutionary innovation, and have been linked to multiple diseases, including HIV and a variety of cancers. Previously, Zhang et al. (2008) developed an algorithm for reconstructing parsimonious evolutionary histories of such gene clusters, using only human genomic sequence data. In this paper, we propose a probabilistic model for the evolution of gene clusters on a phylogeny, and an MCMC algorithm for reconstruction of duplication histories from genomic sequences in multiple species. Several projects are underway to obtain high quality BAC-based assemblies of duplicated clusters in multiple species, and we anticipate that our method will be useful in analyzing these valuable new data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
3,881
1908.05959
Multi-Domain Adaptation in Brain MRI through Paired Consistency and Adversarial Learning
Supervised learning algorithms trained on medical images will often fail to generalize across changes in acquisition parameters. Recent work in domain adaptation addresses this challenge and successfully leverages labeled data in a source domain to perform well on an unlabeled target domain. Inspired by recent work in semi-supervised learning we introduce a novel method to adapt from one source domain to $n$ target domains (as long as there is paired data covering all domains). Our multi-domain adaptation method utilises a consistency loss combined with adversarial learning. We provide results on white matter lesion hyperintensity segmentation from brain MRIs using the MICCAI 2017 challenge data as the source domain and two target domains. The proposed method significantly outperforms other domain adaptation baselines.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
141,868
1703.05260
InScript: Narrative texts annotated with script information
This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
70,047
1902.00197
Adaptive Monte Carlo Multiple Testing via Multi-Armed Bandits
Monte Carlo (MC) permutation test is considered the gold standard for statistical hypothesis testing, especially when standard parametric assumptions are not clear or likely to fail. However, in modern data science settings where a large number of hypothesis tests need to be performed simultaneously, it is rarely used due to its prohibitive computational cost. In genome-wide association studies, for example, the number of hypothesis tests $m$ is around $10^6$ while the number of MC samples $n$ for each test could be greater than $10^8$, totaling more than $nm$=$10^{14}$ samples. In this paper, we propose Adaptive MC multiple Testing (AMT) to estimate MC p-values and control false discovery rate in multiple testing. The algorithm outputs the same result as the standard full MC approach with high probability while requiring only $\tilde{O}(\sqrt{n}m)$ samples. This sample complexity is shown to be optimal. On a Parkinson GWAS dataset, the algorithm reduces the running time from 2 months for full MC to an hour. The AMT algorithm is derived based on the theory of multi-armed bandits.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
120,355
2405.13341
Wealth inequality and utility: Effect evaluation of redistribution and consumption morals using macro-econophysical coupled approach
Reducing wealth inequality and increasing utility are critical issues. This study reveals the effects of redistribution and consumption morals on wealth inequality and utility. To this end, we present a novel approach that couples the dynamic model of capital, consumption, and utility in macroeconomics with the interaction model of joint business and redistribution in econophysics. With this approach, we calculate the capital (wealth), the utility based on consumption, and the Gini index of their inequality, using redistribution and consumption thresholds as moral parameters. The results show that: under-redistribution and waste exacerbate inequality; conversely, over-redistribution and stinginess reduce utility; and a balanced moderate moral leads to achieving both reduced inequality and increased utility. These findings provide renewed economic and numerical support for the moral importance known from philosophy, anthropology, and religion. The revival of redistribution and consumption morals should promote the transformation to a human mutual-aid economy, as indicated by philosophers and anthropologists, instead of the capitalist economy that has produced the current inequality. The practical challenge is to implement bottom-up social business, on a foothold of worker coops and platform cooperatives as a community against the state and the market, with moral consensus and its operation.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
455,899
2501.01557
Click-Calib: A Robust Extrinsic Calibration Method for Surround-View Systems
The Surround-View System (SVS) is an essential component of Advanced Driver Assistance Systems (ADAS) and requires precise calibration. However, conventional offline extrinsic calibration methods are cumbersome and time-consuming as they rely heavily on physical patterns. Additionally, these methods primarily focus on short-range areas surrounding the vehicle, resulting in lower calibration quality in more distant zones. To address these limitations, we propose Click-Calib, a pattern-free approach for offline SVS extrinsic calibration. Without requiring any special setup, the user only needs to click a few keypoints on the ground in natural scenes. Unlike other offline calibration approaches, Click-Calib optimizes camera poses over a wide range by minimizing reprojection distance errors of keypoints, thereby achieving accurate calibration at both short and long distances. Furthermore, Click-Calib supports both single-frame and multiple-frame modes, with the latter offering even better results. Evaluations on our in-house dataset and the public WoodScape dataset demonstrate its superior accuracy and robustness compared to baseline methods. Code is available at https://github.com/lwangvaleo/click_calib.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
522,117
1705.07364
Stabilizing Adversarial Nets With Prediction Methods
Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points, and is stable with a wider range of training parameters than a non-prediction method. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
73,820