Schema (per record):
  id                 string, length 9–16
  title              string, length 4–278
  abstract           string, length 3–4.08k
  category flags     18 boolean columns (2 classes each), in order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
  __index_level_0__  int64, range 0–541k
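A minimal sketch of how a record with this schema can be turned into a readable category list. The column names come from the schema above; the `record` dict and the `active_categories` helper are illustrative assumptions, not part of any dataset tooling.

```python
# Boolean category columns, in the order given by the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_categories(record: dict) -> list[str]:
    """Return the category labels whose boolean flag is set in a record."""
    return [col for col in CATEGORY_COLUMNS if record.get(col)]

# Example: the first record below (C-SL) has cs.SD and cs.LG set.
example = {col: False for col in CATEGORY_COLUMNS}
example.update({"cs.SD": True, "cs.LG": True})
print(active_categories(example))  # ['cs.SD', 'cs.LG']
```

This is how the `categories:` lines in the records below are derived from the raw true/false columns.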
2006.05071
C-SL: Contrastive Sound Localization with Inertial-Acoustic Sensors
The human brain employs perceptual information about the head and eye movements to update the spatial relationship between the individual and the surrounding environment. Based on this cognitive process known as spatial updating, we introduce contrastive sound localization (C-SL) with mobile inertial-acoustic sensor arrays of arbitrary geometry. C-SL uses unlabeled multi-channel audio recordings and inertial measurement unit (IMU) readings collected during free rotational movements of the array to learn mappings from acoustical measurements to an array-centered direction-of-arrival (DOA) in a self-supervised manner. Contrary to conventional DOA estimation methods that require the knowledge of either the array geometry or source locations in the calibration stage, C-SL is agnostic to both, and can be trained on data collected in minimally constrained settings. To achieve this capability, our proposed method utilizes a customized contrastive loss measuring the spatial contrast between source locations predicted for disjoint segments of the input to jointly update estimated DOAs and the acoustic-spatial mapping in linear time. We provide quantitative and qualitative evaluations of C-SL comparing its performance with baseline DOA estimation methods in a wide range of conditions. We believe the relaxed calibration process offered by C-SL paves the way toward truly personalized augmented hearing applications.
categories: cs.SD, cs.LG
__index_level_0__: 180,926
2212.11522
AsyncFLEO: Asynchronous Federated Learning for LEO Satellite Constellations with High-Altitude Platforms
Low Earth Orbit (LEO) constellations, each comprising a large number of satellites, have become a new source of big data "from the sky". Downloading such data to a ground station (GS) for big data analytics demands very high bandwidth and involves large propagation delays. Federated Learning (FL) offers a promising solution because it allows data to stay in-situ (never leaving satellites) and it only needs to transmit machine learning model parameters (trained on the satellites' data). However, the conventional, synchronous FL process can take several days to train a single FL model in the context of satellite communication (Satcom), due to a bottleneck caused by straggler satellites. In this paper, we propose an asynchronous FL framework for LEO constellations called AsyncFLEO to improve FL efficiency in Satcom. Not only does AsyncFLEO address the bottleneck (idle waiting) in synchronous FL, but it also solves the issue of model staleness caused by straggler satellites. AsyncFLEO utilizes high-altitude platforms (HAPs) positioned "in the sky" as parameter servers, and consists of three technical components: (1) a ring-of-stars communication topology, (2) a model propagation algorithm, and (3) a model aggregation algorithm with satellite grouping and staleness discounting. Our extensive evaluation with both IID and non-IID data shows that AsyncFLEO outperforms the state of the art by a large margin, cutting down convergence delay by 22 times and increasing accuracy by 40%.
categories: cs.LG, Other
__index_level_0__: 337,819
2108.02753
Safe Motion Planning against Multimodal Distributions based on a Scenario Approach
We present the design of a motion planning algorithm that ensures safety for an autonomous vehicle. In particular, we consider a multimodal distribution over uncertainties; for example, the uncertain predictions of future trajectories of surrounding vehicles reflect discrete decisions, such as turning or going straight at intersections. We develop a computationally efficient, scenario-based approach that solves the motion planning problem with high confidence given a quantifiable number of samples from the multimodal distribution. Our approach is based on two preprocessing steps, which 1) separate the samples into distinct clusters and 2) compute a bounding polytope for each cluster. Then, we rewrite the motion planning problem approximately as a mixed-integer problem using the polytopes. We demonstrate via simulation on the nuScenes dataset that our approach ensures safety with high probability in the presence of multimodal uncertainties, and is computationally more efficient and less conservative than a conventional scenario approach.
categories: cs.RO, cs.SY
__index_level_0__: 249,434
2406.16408
A Symmetry Property of Christoffel Words
Motivated by the theory of trapezoidal words, whose sequences of cardinality of factors by length are symmetric, we introduce a bivariate variant of this symmetry. We show that this symmetry characterizes Christoffel words, and establish other related results.
categories: cs.CL
__index_level_0__: 467,119
2104.07630
Otaku: Intelligent Management System for Student-Intensive Dormitory
In most student dorms in developing countries, a large number of people live in single-function dorm units. The division of the dormitory is too fixed, resulting in the dormitory often lacking functional spaces such as entertainment, sports, meetings, etc. At the same time, a large number of people are likely to gather at fixed times, which is not conducive to maintaining social distance under pandemic conditions such as COVID-19. This brings a lot of inconvenience to students' daily life and studies, as well as to management staff. In this paper, we present a smart dormitory system named Otaku using the Internet of Things technology to integrate facilities related to student dormitory life. By splitting the dormitory into several different categories according to their functionality by using smart door lock design, the system can achieve a more effective and flexible resource allocation, which not only helps the school management but also benefits students.
categories: cs.SY, cs.CY
__index_level_0__: 230,487
2410.17785
TranSPORTmer: A Holistic Approach to Trajectory Understanding in Multi-Agent Sports
Understanding trajectories in multi-agent scenarios requires addressing various tasks, including predicting future movements, imputing missing observations, inferring the status of unseen agents, and classifying different global states. Traditional data-driven approaches often handle these tasks separately with specialized models. We introduce TranSPORTmer, a unified transformer-based framework capable of addressing all these tasks, showcasing its application to the intricate dynamics of multi-agent sports scenarios like soccer and basketball. Using Set Attention Blocks, TranSPORTmer effectively captures temporal dynamics and social interactions in an equivariant manner. The model's tasks are guided by an input mask that conceals missing or yet-to-be-predicted observations. Additionally, we introduce a CLS extra agent to classify states along soccer trajectories, including passes, possessions, uncontrolled states, and out-of-play intervals, contributing to an enhancement in modeling trajectories. Evaluations on soccer and basketball datasets show that TranSPORTmer outperforms state-of-the-art task-specific models in player forecasting, player forecasting-imputation, ball inference, and ball imputation. https://youtu.be/8VtSRm8oGoE
categories: cs.CV, cs.MA
__index_level_0__: 501,616
2202.04176
Crime Hot-Spot Modeling via Topic Modeling and Relative Density Estimation
We present a method to capture groupings of similar calls and determine their relative spatial distribution from a collection of crime record narratives. We first obtain a topic distribution for each narrative, and then propose a nearest neighbors relative density estimation (kNN-RDE) approach to obtain spatial relative densities per topic. Experiments over a large corpus ($n=475,019$) of narrative documents from the Atlanta Police Department demonstrate the viability of our method in capturing geographic hot-spot trends which call dispatchers do not initially pick up on and which go unnoticed due to conflation with elevated event density in general.
categories: cs.IR, cs.LG, cs.CL
__index_level_0__: 279,477
2304.14767
Dissecting Recall of Factual Associations in Auto-Regressive Language Models
Transformer-based language models (LMs) are known to capture factual knowledge in their parameters. While previous work looked into where factual associations are stored, only little is known about how they are retrieved internally during inference. We investigate this question through the lens of information flow. Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute. With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions. Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction. First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation "queries" the enriched subject to extract the attribute. Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters. Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing.
categories: cs.CL
__index_level_0__: 361,085
2102.11479
Minimally-Supervised Structure-Rich Text Categorization via Learning on Text-Rich Networks
Text categorization is an essential task in Web content analysis. Considering the ever-evolving Web data and new emerging categories, instead of the laborious supervised setting, in this paper, we focus on the minimally-supervised setting that aims to categorize documents effectively, with a couple of seed documents annotated per category. We recognize that texts collected from the Web are often structure-rich, i.e., accompanied by various metadata. One can easily organize the corpus into a text-rich network, joining raw text documents with document attributes, high-quality phrases, label surface names as nodes, and their associations as edges. Such a network provides a holistic view of the corpus' heterogeneous data sources and enables a joint optimization for network-based analysis and deep textual model training. We therefore propose a novel framework for minimally supervised categorization by learning from the text-rich network. Specifically, we jointly train two modules with different inductive biases -- a text analysis module for text understanding and a network learning module for class-discriminative, scalable network learning. Each module generates pseudo training labels from the unlabeled document set, and both modules mutually enhance each other by co-training using pooled pseudo labels. We test our model on two real-world datasets. On the challenging e-commerce product categorization dataset with 683 categories, our experiments show that given only three seed documents per category, our framework can achieve an accuracy of about 92%, significantly outperforming all compared methods; our accuracy is only less than 2% away from the supervised BERT model trained on about 50K labeled documents.
categories: cs.CL
__index_level_0__: 221,432
2403.04001
Bidirectional Progressive Neural Networks with Episodic Return Progress for Emergent Task Sequencing and Robotic Skill Transfer
Human brain and behavior provide a rich venue that can inspire novel control and learning methods for robotics. In an attempt to exemplify such a development by inspiring how humans acquire knowledge and transfer skills among tasks, we introduce a novel multi-task reinforcement learning framework named Episodic Return Progress with Bidirectional Progressive Neural Networks (ERP-BPNN). The proposed ERP-BPNN model (1) learns in a human-like interleaved manner by (2) autonomous task switching based on a novel intrinsic motivation signal and, in contrast to existing methods, (3) allows bidirectional skill transfer among tasks. ERP-BPNN is a general architecture applicable to several multi-task learning settings; in this paper, we present the details of its neural architecture and show its ability to enable effective learning and skill transfer among morphologically different robots in a reaching task. The developed Bidirectional Progressive Neural Network (BPNN) architecture enables bidirectional skill transfer without requiring incremental training and seamlessly integrates with online task arbitration. The task arbitration mechanism developed is based on soft Episodic Return progress (ERP), a novel intrinsic motivation (IM) signal. To evaluate our method, we use quantifiable robotics metrics such as 'expected distance to goal' and 'path straightness' in addition to the usual reward-based measure of episodic return common in reinforcement learning. With simulation experiments, we show that ERP-BPNN achieves faster cumulative convergence and improves performance in all metrics considered among morphologically different robots compared to the baselines.
categories: cs.AI, cs.LG, cs.RO
__index_level_0__: 435,413
1101.4343
Fundamental Tradeoffs on Green Wireless Networks
Traditional design of mobile wireless networks mainly focuses on ubiquitous access and large capacity. However, as energy saving and environmental protection become a global demand and inevitable trend, wireless researchers and engineers need to shift their focus to energy-efficiency oriented design, that is, green radio. In this paper, we propose a framework for green radio research and integrate the fundamental issues that are currently scattered. The skeleton of the framework consists of four fundamental tradeoffs: deployment efficiency - energy efficiency tradeoff, spectrum efficiency - energy efficiency tradeoff, bandwidth - power tradeoff, and delay - power tradeoff. With the help of the four fundamental tradeoffs, we demonstrate that key network performance/cost indicators are all stringed together.
categories: cs.IT
__index_level_0__: 8,888
2106.03135
Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction
Recently normalizing flows (NFs) have demonstrated state-of-the-art performance on modeling 3D point clouds while allowing sampling with arbitrary resolution at inference time. However, these flow-based models still require long training times and large models for representing complicated geometries. This work enhances their representational power by applying mixtures of NFs to point clouds. We show that in this more general framework each component learns to specialize in a particular subregion of an object in a completely unsupervised fashion. By instantiating each mixture component with a comparatively small NF we generate point clouds with improved details compared to single-flow-based models while using fewer parameters and considerably reducing the inference runtime. We further demonstrate that by adding data augmentation, individual mixture components can learn to specialize in a semantically meaningful manner. We evaluate mixtures of NFs on generation, autoencoding and single-view reconstruction based on the ShapeNet dataset.
categories: cs.CV
__index_level_0__: 239,190
2303.11477
NASDM: Nuclei-Aware Semantic Histopathology Image Generation Using Diffusion Models
In recent years, computational pathology has seen tremendous progress driven by deep learning methods in segmentation and classification tasks aiding prognostic and diagnostic settings. Nuclei segmentation, for instance, is an important task for diagnosing different cancers. However, training deep learning models for nuclei segmentation requires large amounts of annotated data, which is expensive to collect and label. This necessitates explorations into generative modeling of histopathological images. In this work, we use recent advances in conditional diffusion modeling to formulate a first-of-its-kind nuclei-aware semantic tissue generation framework (NASDM) which can synthesize realistic tissue samples given a semantic instance mask of up to six different nuclei types, enabling pixel-perfect nuclei localization in generated samples. These synthetic images are useful in applications in pathology pedagogy, validation of models, and supplementation of existing nuclei segmentation datasets. We demonstrate that NASDM is able to synthesize high-quality histopathology images of the colon with superior quality and semantic controllability over existing generative methods.
categories: cs.CV
__index_level_0__: 352,868
2207.07303
Towards Better Dermoscopic Image Feature Representation Learning for Melanoma Classification
Deep learning-based melanoma classification with dermoscopic images has recently shown great potential in automatic early-stage melanoma diagnosis. However, limited by the significant data imbalance and obvious extraneous artifacts, i.e., the hair and ruler markings, discriminative feature extraction from dermoscopic images is very challenging. In this study, we seek to resolve these problems respectively towards better representation learning for lesion features. Specifically, a GAN-based data augmentation (GDA) strategy is adapted to generate synthetic melanoma-positive images, in conjunction with the proposed implicit hair denoising (IHD) strategy, wherein the hair-related representations are implicitly disentangled via an auxiliary classifier network and reversely sent to the melanoma-feature extraction backbone for better melanoma-specific representation learning. Furthermore, to train the IHD module, the hair noises are additionally labeled on the ISIC2020 dataset, making it the first large-scale dermoscopic dataset with annotation of hair-like artifacts. Extensive experiments demonstrate the superiority of the proposed framework as well as the effectiveness of each component. The improved dataset is publicly available at https://github.com/kirtsy/DermoscopicDataset.
categories: cs.LG, cs.CV
__index_level_0__: 308,166
1711.00326
The quoter model: a paradigmatic model of the social flow of written information
We propose a model for the social flow of information in the form of text data, which simulates the posting and sharing of short social media posts. Nodes in a graph representing a social network take turns generating words, leading to a symbolic time series associated with each node. Information propagates over the graph via a quoting mechanism, where nodes randomly copy short segments of text from each other. We characterize information flows from these texts via information-theoretic estimators, and we derive analytic relationships between model parameters and the values of these estimators. We explore and validate the model with simulations on small network motifs and larger random graphs. Tractable models such as ours that generate symbolic data while controlling the information flow allow us to test and compare measures of information flow applicable to real social media data. In particular, by choosing different network structures, we can develop test scenarios to determine whether or not measures of information flow can distinguish between true and spurious interactions, and how topological network properties relate to information flow.
categories: cs.SI, cs.IT
__index_level_0__: 83,697
1302.3209
"Groupware for Groups": Problem-Driven Design in Deme
Design choices can be clarified when group interaction software is directed at solving the interaction needs of particular groups that pre-date the groupware. We describe an example: the Deme platform for online deliberation. Traditional threaded conversation systems are insufficient for solving the problem at which Deme is aimed, namely, that the democratic process in grassroots community groups is undermined both by the limited availability of group members for face-to-face meetings and by constraints on the use of information in real-time interactions. We describe and motivate design elements, either implemented or planned for Deme, that address this problem. We believe that "problem focused" design of software for preexisting groups provides a useful framework for evaluating the appropriateness of design elements in groupware generally.
categories: cs.HC, cs.SI
__index_level_0__: 21,996
2404.19391
ZSMILES: an approach for efficient SMILES storage for random access in Virtual Screening
Virtual screening is a technique used in drug discovery to select the most promising molecules to test in a lab. To perform virtual screening, we need a large set of molecules as input, and storing these molecules can become an issue. In fact, extreme-scale high-throughput virtual screening applications require a big dataset of input molecules and produce an even bigger dataset as output. These molecules' databases occupy tens of TB of storage space, and domain experts frequently sample a small portion of this data. In this context, SMILES is a popular data format for storing large sets of molecules since it requires significantly less space to represent molecules than other formats (e.g., MOL2, SDF). This paper proposes an efficient dictionary-based approach to compress SMILES-based datasets. This approach takes advantage of domain knowledge to provide a readable output with separable SMILES, enabling random access. We examine the benefits of storing these datasets using ZSMILES to reduce the cold storage footprint in HPC systems. The main contributions concern a custom dictionary-based approach and a data pre-processing step. From experimental results, we can see how ZSMILES leverages domain knowledge to compress 1.13x more than the state of the art in similar scenarios, reaching a compression ratio of up to $0.29$. We tested a CUDA version of ZSMILES targeting NVIDIA GPUs, showing a potential speedup of 7x.
categories: cs.CE
__index_level_0__: 450,626
1604.04166
Analysis of Interference Correlation in Non-Poisson Networks
The correlation of interference has been well quantified in Poisson networks where the interferers are independent of each other. However, there exists dependence among the base stations (BSs) in wireless networks. In view of this, we quantify the interference correlation in non-Poisson networks where the interferers are distributed as a Matern cluster process (MCP) and a second-order cluster process (SOCP). Interestingly, it is found that the correlation coefficient of interference for the Matern cluster networks, $\zeta_{MCP}$, is equal to that for second-order cluster networks, $\zeta_{SOCP}$. Furthermore, they are greater than their counterpart for the Poisson networks. This shows that clustering in interferers enhances the interference correlation. In addition, we show that the correlation coefficients $\zeta_{MCP}$ and $\zeta_{SOCP}$ increase as the average number of points in each cluster, $c$, grows, but decrease with the increase in the cluster radius, $R$. More importantly, we point out that the effects of clustering on interference correlation can be neglected as $\frac{c}{\pi^{2}R^{2}}\rightarrow0$. Finally, the analytical results are validated by simulations.
categories: cs.IT
__index_level_0__: 54,607
2412.02780
WxC-Bench: A Novel Dataset for Weather and Climate Downstream Tasks
High-quality machine learning (ML)-ready datasets play a foundational role in developing new artificial intelligence (AI) models or fine-tuning existing models for scientific applications such as weather and climate analysis. Unfortunately, despite the growing development of new deep learning models for weather and climate, there is a scarcity of curated, pre-processed ML-ready datasets. Curating such high-quality datasets for developing new models is challenging particularly because the modality of the input data varies significantly for different downstream tasks addressing different atmospheric scales (spatial and temporal). Here we introduce WxC-Bench (Weather and Climate Bench), a multi-modal dataset designed to support the development of generalizable AI models for downstream use-cases in weather and climate research. WxC-Bench is designed as a dataset of datasets for developing ML-models for a complex weather and climate system, addressing selected downstream tasks as machine learning phenomena. WxC-Bench encompasses several atmospheric processes from meso-$\beta$ (20 - 200 km) scale to synoptic scales (2500 km), such as aviation turbulence, hurricane intensity and track monitoring, weather analog search, gravity wave parameterization, and natural language report generation. We provide a comprehensive description of the dataset and also present a technical validation for baseline analysis. The dataset and code to prepare the ML-ready data have been made publicly available on Hugging Face -- https://huggingface.co/datasets/nasa-impact/WxC-Bench
categories: cs.AI, cs.LG
__index_level_0__: 513,672
2411.11511
Structure learning with Temporal Gaussian Mixture for model-based Reinforcement Learning
Model-based reinforcement learning refers to a set of approaches capable of sample-efficient decision making, which create an explicit model of the environment. This model can subsequently be used for learning optimal policies. In this paper, we propose a temporal Gaussian Mixture Model composed of a perception model and a transition model. The perception model extracts discrete (latent) states from continuous observations using a variational Gaussian mixture likelihood. Importantly, our model constantly monitors the collected data searching for new Gaussian components, i.e., the perception model performs a form of structure learning (Smith et al., 2020; Friston et al., 2018; Neacsu et al., 2022) as it learns the number of Gaussian components in the mixture. Additionally, the transition model learns the temporal transition between consecutive time steps by taking advantage of the Dirichlet-categorical conjugacy. Both the perception and transition models are able to forget part of the data points, while integrating the information they provide within the prior, which ensures fast variational inference. Finally, decision making is performed with a variant of Q-learning which is able to learn Q-values from beliefs over states. Empirically, we have demonstrated the model's ability to learn the structure of several mazes: the model discovered the number of states and the transition probabilities between these states. Moreover, using its learned Q-values, the agent was able to successfully navigate from the starting position to the maze's exit.
categories: cs.AI, cs.LG
__index_level_0__: 509,082
2003.06171
Towards a Framework for Visual Intelligence in Service Robotics: Epistemic Requirements and Gap Analysis
A key capability required by service robots operating in real-world, dynamic environments is that of Visual Intelligence, i.e., the ability to use their vision system, reasoning components and background knowledge to make sense of their environment. In this paper, we analyze the epistemic requirements for Visual Intelligence, both in a top-down fashion, using existing frameworks for human-like Visual Intelligence in the literature, and from the bottom up, based on the errors emerging from object recognition trials in a real-world robotic scenario. Finally, we use these requirements to evaluate current knowledge bases for Service Robotics and to identify gaps in the support they provide for Visual Intelligence. These gaps provide the basis of a research agenda for developing more effective knowledge representations for Visual Intelligence.
categories: cs.AI, cs.RO, cs.CV
__index_level_0__: 168,050
2502.12923
On-Device LLMs for Home Assistant: Dual Role in Intent Detection and Response Generation
This paper investigates whether Large Language Models (LLMs), fine-tuned on synthetic but domain-representative data, can perform the twofold task of (i) slot and intent detection and (ii) natural language response generation for a smart home assistant, while running solely on resource-limited, CPU-only edge hardware. We fine-tune LLMs to produce both JSON action calls and text responses. Our experiments show that 16-bit and 8-bit quantized variants preserve high accuracy on slot and intent detection and maintain strong semantic coherence in generated text, while the 4-bit model, while retaining generative fluency, suffers a noticeable drop in device-service classification accuracy. Further evaluations on noisy human (non-synthetic) prompts and out-of-domain intents confirm the models' generalization ability, obtaining around 80--86\% accuracy. While the average inference time is 5--6 seconds per query -- acceptable for one-shot commands but suboptimal for multi-turn dialogue -- our results affirm that an on-device LLM can effectively unify command interpretation and flexible response generation for home automation without relying on specialized hardware.
categories: cs.CL
__index_level_0__: 535,111
2310.11309
Intelligent Surfaces Aided High-Mobility Communications: Opportunities and Design Issues
Intelligent reflecting/refracting surface (IRS) is envisioned as a promising technology to reconfigure wireless propagation environment for enhancing the communication performance, by smartly controlling the signal reflection/refraction with a large number of tunable passive elements. In particular, the application of IRS in high-mobility scenarios can convert wireless channels from fast fading to slow fading, thus achieving more reliable communications. In this paper, we first provide an overview of the new applications and opportunities of IRS in high-mobility communications. Next, we present two practical strategies for deploying IRS to aid high-mobility communications, namely, roadside IRS versus vehicle-side IRS, and compare their different channel characteristics, handover requirements, and deployment costs. Then, the main issues in designing IRS-aided high-mobility communications, including node discovery, mode switching, beam alignment/tracking, handover, and multiuser scheduling are discussed for both IRS deployment strategies. Moreover, numerical results are presented to demonstrate the potential performance gains of IRSs in vehicular communications. Finally, new research directions are pointed out for future work.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
400,593
1701.08888
Integrating Reviews into Personalized Ranking for Cold Start Recommendation
Item recommendation task predicts a personalized ranking over a set of items for each individual user. One paradigm is the rating-based methods, which concentrate on explicit feedback and hence face difficulties in collecting it. Meanwhile, the ranking-based methods are presented with rated items and then rank the rated above the unrated. This paradigm takes advantage of widely available implicit feedback. It, however, usually ignores an important kind of information: item reviews. Item reviews not only justify the preferences of users, but also help alleviate the cold-start problem that defeats collaborative filtering. In this paper, we propose two novel and simple models to integrate item reviews into Bayesian personalized ranking. In each model, we make use of text features extracted from item reviews using word embeddings. On top of the text features, we uncover the review dimensions that explain the variation in users' feedback, and these review factors represent a prior preference of users. Experiments on six real-world data sets show the benefits of leveraging item reviews for ranking prediction. We also conduct analyses to understand the proposed models.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
67,546
cmp-lg/9612003
Metrics for Evaluating Dialogue Strategies in a Spoken Language System
In this paper, we describe a set of metrics for the evaluation of different dialogue management strategies in an implemented real-time spoken language system. The set of metrics we propose offers useful insights in evaluating how particular choices in the dialogue management can affect the overall quality of the man-machine dialogue. The evaluation makes use of established metrics: the transaction success, the contextual appropriateness of system answers, and the calculation of normal and correction turns in a dialogue. We also define a new metric, the implicit recovery, which makes it possible to measure the ability of a dialogue manager to deal with errors at different levels of analysis. We report evaluation data from several experiments, and we compare two different approaches to dialogue repair strategies using the set of metrics we argue for.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,681
2501.14808
HyGen: Efficient LLM Serving via Elastic Online-Offline Request Co-location
Large language models (LLMs) have facilitated a wide range of applications with distinct service-level objectives (SLOs), from latency-sensitive online tasks like interactive chatbots to throughput-oriented offline workloads like document summarization. The existing deployment model, which dedicates machines to each workload, simplifies SLO management but often leads to poor resource utilization. This paper introduces HyGen, an interference-aware LLM serving system that enables efficient co-location of online and offline workloads while preserving latency requirements. HyGen incorporates two key innovations: (1) performance control mechanisms, including a latency predictor to estimate batch execution time and an SLO-aware profiler to quantify latency interference, and (2) SLO-aware offline scheduling policies that maximize serving throughput and prevent starvation, without compromising online serving latency. Our evaluation on production workloads shows that HyGen achieves up to 3.87x overall throughput and 5.84x offline throughput gains over online and hybrid serving baselines, respectively, while strictly satisfying latency SLOs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
527,282
1310.2632
Bilinear Generalized Approximate Message Passing
We extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
27,690
2405.15544
Knowledge-enhanced Relation Graph and Task Sampling for Few-shot Molecular Property Prediction
Recently, few-shot molecular property prediction (FSMPP) has garnered increasing attention. Despite impressive breakthroughs achieved by existing methods, they often overlook the inherent many-to-many relationships between molecules and properties, which limits their performance. For instance, similar substructures of molecules can inspire the exploration of new compounds. Additionally, the relationships between properties can be quantified, with highly related properties providing more information for exploring the target property than weakly related ones. To this end, this paper proposes a novel meta-learning FSMPP framework (KRGTS), which comprises the Knowledge-enhanced Relation Graph module and the Task Sampling module. The knowledge-enhanced relation graph module constructs the molecule-property multi-relation graph (MPMRG) to capture the many-to-many relationships between molecules and properties. The task sampling module includes a meta-training task sampler and an auxiliary task sampler, responsible for scheduling the meta-training process and sampling highly related auxiliary tasks, respectively, thereby achieving efficient meta-knowledge learning and reducing noise introduction. Empirically, extensive experiments on five datasets demonstrate the superiority of KRGTS over a variety of state-of-the-art methods. The code is available at https://github.com/Vencent-Won/KRGTS-public.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
456,985
2407.00629
Identification of LFT Structured Descriptor Systems with Slow and Non-uniform Sampling
Time domain identification is studied in this paper for parameters of a continuous-time multi-input multi-output descriptor system, with these parameters affecting system matrices through a linear fractional transformation. Sampling is permitted to be slow and non-uniform, and the Nyquist frequency restrictions need not be satisfied. This model can be used to describe the behaviors of a networked dynamic system, and the obtained results can be straightforwardly applied to an ordinary state-space model, as well as a lumped system. An explicit formula is obtained respectively for the transient and steady-state responses of the system stimulated by an arbitrary signal. Some relations have been derived between the system steady-state response and its transfer function matrix (TFM), which reveal that the value of a TFM at almost any interested point, as well as its derivatives and a right tangential interpolation along an arbitrary direction, can in principle be estimated from input-output experimental data. Based on these relations, an estimation algorithm is suggested respectively for the parameters of the descriptor system and the values of its TFM. Their properties like asymptotic unbiasedness, consistency, etc., are analyzed. A simple numerical example is included to illustrate characteristics of the suggested estimation algorithms.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
468,942
2410.02074
Price-guided user attention in large-scale E-commerce group recommendation
Existing group recommender systems utilize attention mechanisms to identify critical users who influence group decisions the most. We analyzed user attention scores from a widely-used group recommendation model on a real-world E-commerce dataset and found that item price and user interaction history significantly influence the selection of critical users. When item prices are low, users with extensive interaction histories are more influential in group decision-making. Conversely, their influence diminishes with higher item prices. Based on these observations, we propose a novel group recommendation approach that incorporates item price as a guiding factor for user aggregation. Our model employs an adaptive sigmoid function to adjust output logits based on item prices, enhancing the accuracy of user aggregation. Our model can be plugged into any attention-based group recommender system if the price information is available. We evaluate our model's performance on a public benchmark and a real-world dataset. We compare it with other state-of-the-art group recommendation methods. Our results demonstrate that our price-guided user attention approach outperforms the state-of-the-art methods in terms of hit ratio and mean square error.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
494,087
2401.13530
Continuous-time Riemannian SGD and SVRG Flows on Wasserstein Probabilistic Space
Recently, optimization on the Riemannian manifold has provided new insights to the optimization community. In this regard, the manifold taken as the probability measure metric space equipped with the second-order Wasserstein distance is of particular interest, since optimization on it can be linked to practical sampling processes. In general, the standard (continuous) optimization method on Wasserstein space is Riemannian gradient flow (i.e., Langevin dynamics when minimizing KL divergence). In this paper, we aim to enrich the continuous optimization methods in the Wasserstein space, by extending the gradient flow on it into the stochastic gradient descent (SGD) flow and stochastic variance reduction gradient (SVRG) flow. The two flows in Euclidean space are standard continuous stochastic methods, while their Riemannian counterparts are unexplored. By leveraging the properties of Wasserstein space, we construct stochastic differential equations (SDEs) to approximate the corresponding discrete dynamics of the desired Riemannian stochastic methods in Euclidean space. Then, our probability-measure flows are obtained via the Fokker-Planck equation. Finally, the convergence rates of our Riemannian stochastic flows are proven, which match the results in Euclidean space.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
423,756
1805.00116
A Canonical Image Set for Examining and Comparing Image Processing Algorithms
The purpose of this paper is to introduce a set of four test images containing features and structures that can facilitate effective examination and comparison of image processing algorithms. More specifically, the images are designed to more explicitly expose the characteristic properties of algorithms for image compression, virtual resolution adjustment, and enhancement. This set was developed at the Naval Research Laboratory (NRL) in the late 1990s as a more rigorous alternative to Lena and other images that have come into common use for purely ad hoc reasons with little or no rigorous consideration of their suitability. The increasing number of test images appearing in the literature not only makes it more difficult to compare results from different papers, it also introduces the potential for cherry-picking to influence results. The key contribution of this paper is the proposal to establish {\em some} canonical set to ensure that published results can be analyzed and compared in a rigorous way from one paper to another, and consideration of the four NRL images is proposed for this purpose.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
96,359
1701.08289
Face Detection using Deep Learning: An Improved Faster RCNN Approach
In this report, we present a new face detection scheme using deep learning and achieve the state-of-the-art detection performance on the well-known FDDB face detection benchmark evaluation. In particular, we improve the state-of-the-art faster RCNN framework by combining a number of strategies, including feature concatenation, hard negative mining, multi-scale training, model pretraining, and proper calibration of key parameters. As a consequence, the proposed scheme obtained the state-of-the-art face detection performance, making it the best model in terms of ROC curves among all the published methods on the FDDB benchmark.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
67,438
2306.02029
Model-aided Federated Reinforcement Learning for Multi-UAV Trajectory Planning in IoT Networks
Deploying teams of unmanned aerial vehicles (UAVs) to harvest data from distributed Internet of Things (IoT) devices requires efficient trajectory planning and coordination algorithms. Multi-agent reinforcement learning (MARL) has emerged as a solution, but requires extensive and costly real-world training data. To tackle this challenge, we propose a novel model-aided federated MARL algorithm to coordinate multiple UAVs on a data harvesting mission with only limited knowledge about the environment. The proposed algorithm alternates between building an environment simulation model from real-world measurements, specifically learning the radio channel characteristics and estimating unknown IoT device positions, and federated QMIX training in the simulated environment. Each UAV agent trains a local QMIX model in its simulated environment and continuously consolidates it through federated learning with other agents, accelerating the learning process. A performance comparison with standard MARL algorithms demonstrates that our proposed model-aided FedQMIX algorithm reduces the need for real-world training experiences by around three orders of magnitude while attaining similar data collection performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
370,729
2501.18062
FinanceQA: A Benchmark for Evaluating Financial Analysis Capabilities of Large Language Models
FinanceQA is a testing suite that evaluates LLMs' performance on complex numerical financial analysis tasks that mirror real-world investment work. Despite recent advances, current LLMs fail to meet the strict accuracy requirements of financial institutions, with models failing approximately 60% of realistic tasks that mimic on-the-job analyses at hedge funds, private equity firms, investment banks, and other financial institutions. The primary challenges include hand-spreading metrics, adhering to standard accounting and corporate valuation conventions, and performing analysis under incomplete information - particularly in multi-step tasks requiring assumption generation. This performance gap highlights the disconnect between existing LLM capabilities and the demands of professional financial analysis that are inadequately tested by current testing architectures. Results show that higher-quality training data is needed to support such tasks, which we experiment with using OpenAI's fine-tuning API. FinanceQA is publicly released at https://huggingface.co/datasets/AfterQuery/FinanceQA.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
528,551
1807.01442
Modeling Sparse Deviations for Compressed Sensing using Generative Models
In compressed sensing, a small number of linear measurements can be used to reconstruct an unknown signal. Existing approaches leverage assumptions on the structure of these signals, such as sparsity or the availability of a generative model. A domain-specific generative model can provide a stronger prior and thus allow for recovery with far fewer measurements. However, unlike sparsity-based approaches, existing methods based on generative models guarantee exact recovery only over their support, which is typically only a small subset of the space on which the signals are defined. We propose Sparse-Gen, a framework that allows for sparse deviations from the support set, thereby achieving the best of both worlds by using a domain specific prior and allowing reconstruction over the full space of signals. Theoretically, our framework provides a new class of signals that can be acquired using compressed sensing, reducing classic sparse vector recovery to a special case and avoiding the restrictive support due to a generative model prior. Empirically, we observe consistent improvements in reconstruction accuracy over competing approaches, especially in the more practical setting of transfer compressed sensing where a generative model for a data-rich, source domain aids sensing on a data-scarce, target domain.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
102,063
2202.01463
Minimax rate of consistency for linear models with missing values
Missing values arise in most real-world data sets due to the aggregation of multiple sources and intrinsically missing information (sensor failure, unanswered questions in surveys...). In fact, the very nature of missing values usually prevents us from running standard learning algorithms. In this paper, we focus on the extensively-studied linear models, but in presence of missing values, which turns out to be quite a challenging task. Indeed, the Bayes rule can be decomposed as a sum of predictors corresponding to each missing pattern. This eventually requires solving a number of learning tasks, exponential in the number of input features, which makes predictions impossible for current real-world datasets. First, we propose a rigorous setting to analyze a least-square type estimator and establish a bound on the excess risk which increases exponentially in the dimension. Consequently, we leverage the missing data distribution to propose a new algorithm, and derive associated adaptive risk bounds that turn out to be minimax optimal. Numerical experiments highlight the benefits of our method compared to state-of-the-art algorithms used for predictions with missing values.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
278,491
2204.05104
Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation
Domain adaptation (DA) tries to tackle the scenarios when the test data does not fully follow the same distribution of the training data, and multi-source domain adaptation (MSDA) is very attractive for real world applications. By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning. It is worth noting that both self-supervised learning and multi-source domain adaptation share a similar goal: they both aim to leverage unlabeled data to learn more expressive representations. Unfortunately, traditional multi-task self-supervised learning faces two challenges: (1) the pretext task may not strongly relate to the downstream task, thus it could be difficult to learn useful knowledge being shared from the pretext task to the target task; (2) when the same feature extractor is shared between the pretext task and the downstream one and only different prediction heads are used, it is ineffective to enable inter-task information exchange and knowledge sharing. To address these issues, we propose a novel \textbf{S}elf-\textbf{S}upervised \textbf{G}raph Neural Network (SSG), where a graph neural network is used as the bridge to enable more effective inter-task information exchange and knowledge sharing. More expressive representation is learned by adopting a mask token strategy to mask some domain information. Our extensive experiments have demonstrated that our proposed SSG method has achieved state-of-the-art results over four multi-source domain adaptation datasets, which have shown the effectiveness of our proposed SSG method from different aspects.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
290,911
2310.08095
Multi-Satellite Cooperative Networks: Joint Hybrid Beamforming and User Scheduling Design
In this paper, we consider a cooperative communication network where multiple low-Earth-orbit (LEO) satellites provide services to multiple ground users (GUs) cooperatively at the same time and on the same frequency. The multi-satellite cooperation has great potential in extending communication coverage and increasing spectral efficiency. Considering that the on-board radio-frequency circuit resources and computation resources on each satellite are restricted, we aim to propose a low-complexity yet efficient multi-satellite cooperative transmission framework. Specifically, we first propose a hybrid beamforming method consisting of analog beamforming for beam alignment and digital beamforming for interference mitigation. Then, to establish appropriate connections between the satellites and GUs, we propose a heuristic user scheduling algorithm which determines the connections according to the total spectral efficiency increment of the multi-satellite cooperative network. Next, considering the intrinsic connection between beamforming and user scheduling, a joint hybrid beamforming and user scheduling (JHU) scheme is proposed to dramatically improve the performance of the multi-satellite cooperative network. In addition to the single-connection scenario, we also consider the multi-connection case using the JHU scheme. Extensive simulations conducted over different LEO satellite constellations and across various GU locations demonstrate the superiority of the proposed schemes in both overall and per-user spectral efficiencies.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
399,263
2302.10879
$k$NN-Adapter: Efficient Domain Adaptation for Black-Box Language Models
Fine-tuning a language model on a new domain is standard practice for domain adaptation. However, it can be infeasible when it comes to modern large-scale language models such as GPT-3, which can only be accessed through APIs, making it difficult to access the internal parameters of the model. In this paper, we propose $k$NN-Adapter, a method to effectively adapt these black-box large language models (LLMs) to a new domain. The $k$NN-Adapter builds on top of the retrieval-augmented language model, and adaptively learns to interpolate the output of the language model with retrieval results from a datastore consisting of the target domain data. Our experiments on four different domains demonstrate that $k$NN-Adapter significantly improves perplexity, and works particularly well in settings with limited access to LLMs. Additionally, we show that $k$NN-Adapter is more effective than fine-tuning when the amount of training data is limited. We also release a dataset to encourage further study.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
346,990
2001.00329
On Consequentialism and Fairness
Recent work on fairness in machine learning has primarily emphasized how to define, quantify, and encourage "fair" outcomes. Less attention has been paid, however, to the ethical foundations which underlie such efforts. Among the ethical perspectives that should be taken into consideration is consequentialism, the position that, roughly speaking, outcomes are all that matter. Although consequentialism is not free from difficulties, and although it does not necessarily provide a tractable way of choosing actions (because of the combined problems of uncertainty, subjectivity, and aggregation), it nevertheless provides a powerful foundation from which to critique the existing literature on machine learning fairness. Moreover, it brings to the fore some of the tradeoffs involved, including the problem of who counts, the pros and cons of using a policy, and the relative value of the distant future. In this paper we provide a consequentialist critique of common definitions of fairness within machine learning, as well as a machine learning perspective on consequentialism. We conclude with a broader discussion of the issues of learning and randomization, which have important implications for the ethics of automated decision making systems.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
159,186
2109.06073
An End-to-end Point of Interest (POI) Conflation Framework
Point of interest (POI) data serves as a valuable source of semantic information for places of interest and has many geospatial applications in real estate, transportation, and urban planning. With the availability of different data sources, POI conflation serves as a valuable technique for enriching data quality and coverage by merging the POI data from multiple sources. This study proposes a novel end-to-end POI conflation framework consisting of six steps, starting with data procurement, schema standardisation, taxonomy mapping, POI matching, POI unification, and data verification. The feasibility of the proposed framework was demonstrated in a case study conducted in the eastern region of Singapore, where the POI data from five data sources was conflated to form a unified POI dataset. Based on the evaluation conducted, the resulting unified dataset was found to be more comprehensive and complete than any of the five POI data sources alone. Furthermore, the proposed approach for identifying POI matches between different data sources outperformed all baseline approaches with a matching accuracy of 97.6% with an average run time below 3 minutes when matching over 12,000 POIs to result in 8,699 unique POIs, thereby demonstrating the framework's scalability for large scale implementation in dense urban contexts.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
255,035
2205.01858
DeeptDCS: Deep Learning-Based Estimation of Currents Induced During Transcranial Direct Current Stimulation
Objective: Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique used to generate conduction currents in the head and disrupt brain functions. To rapidly evaluate the tDCS-induced current density in near real-time, this paper proposes a deep learning-based emulator, named DeeptDCS. Methods: The emulator leverages Attention U-net taking the volume conductor models (VCMs) of head tissues as inputs and outputting the three-dimensional current density distribution across the entire head. The electrode configurations are also incorporated into VCMs without increasing the number of input channels; this enables the straightforward incorporation of the non-parametric features of electrodes (e.g., thickness, shape, size, and position) in the training and testing of the proposed emulator. Results: Attention U-net outperforms standard U-net and its other three variants (Residual U-net, Attention Residual U-net, and Multi-scale Residual U-net) in terms of accuracy. The generalization ability of DeeptDCS to non-trained electrode configurations can be greatly enhanced through fine-tuning the model. The computational time required by one emulation via DeeptDCS is a fraction of a second. Conclusion: DeeptDCS is at least two orders of magnitudes faster than a physics-based open-source simulator, while providing satisfactorily accurate results. Significance: The high computational efficiency permits the use of DeeptDCS in applications requiring its repetitive execution, such as uncertainty quantification and optimization studies of tDCS.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
294,740
2109.10982
Quantifying nonlocality: how outperforming local quantum codes is expensive
Quantum low-density parity-check (LDPC) codes are a promising avenue to reduce the cost of constructing scalable quantum circuits. However, it is unclear how to implement these codes in practice. Seminal results of Bravyi & Terhal, and Bravyi, Poulin & Terhal have shown that quantum LDPC codes implemented through local interactions obey restrictions on their dimension $k$ and distance $d$. Here we address the complementary question of how many long-range interactions are required to implement a quantum LDPC code with parameters $k$ and $d$. In particular, in 2D we show that a quantum LDPC code with distance $n^{1/2 + \epsilon}$ requires $\Omega(n^{1/2 + \epsilon})$ interactions of length $\widetilde{\Omega}(n^{\epsilon})$. Further a code satisfying $k \propto n$ with distance $d \propto n^\alpha$ requires $\widetilde{\Omega}(n)$ interactions of length $\widetilde{\Omega}(n^{\alpha/2})$. Our results are derived using bounds on quantum codes from graph metrics. As an application of these results, we consider a model called a stacked architecture, which has previously been considered as a potential way to implement quantum LDPC codes. In this model, although most interactions are local, a few of them are allowed to be very long. We prove that limited long-range connectivity implies quantitative bounds on the distance and code dimension.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
256,805
2011.14894
Uncertainty-driven ensembles of deep architectures for multiclass classification. Application to COVID-19 diagnosis in chest X-ray images
Respiratory diseases kill millions of people each year. Diagnosis of these pathologies is a manual, time-consuming process with inter- and intra-observer variability, delaying diagnosis and treatment. The recent COVID-19 pandemic has demonstrated the need for developing systems to automate the diagnosis of pneumonia, whilst Convolutional Neural Networks (CNNs) have proved to be an excellent option for the automatic classification of medical images. However, given the need to provide confident classifications in this context, it is crucial to quantify the reliability of the model's predictions. In this work, we propose a multi-level ensemble classification system based on a Bayesian Deep Learning approach in order to maximize performance while quantifying the uncertainty of each classification decision. This tool combines the information extracted from different architectures by weighting their results according to the uncertainty of their predictions. Performance of the Bayesian network is evaluated in a real scenario, simultaneously differentiating between four different pathologies: control vs bacterial pneumonia vs viral pneumonia vs COVID-19 pneumonia. A three-level decision tree is employed to divide the 4-class classification into three binary classifications, yielding an accuracy of 98.06% and overcoming the results obtained by recent literature. The reduced preprocessing needed for obtaining this high performance, in addition to the information provided about the reliability of the predictions, evidence the applicability of the system to be used as an aid for clinicians.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
208,920
2112.06667
Long-Term Benefits of Network Boosters for Renewables Integration and Corrective Grid Security
The preventative strategies for $N-1$ network security dominant in European networks mean that network capacity is kept free in case a line fails. If instead fast corrective actions are used to overcome network overloading when single lines fail, this has the potential to free up network capacity that is otherwise underused in preventive $N-1$ security strategies. In this paper, we investigate the impact on renewable integration of a corrective network security strategy, whereby storage or other flexibility assets are used to correct overloading shortly after line outages. In this way, we find significant cost savings for the integration of renewable energy of up to 2.4 billion euros per year in an aggregated 50-bus model of the German power system utilizing these flexibility assets, so-called network boosters (NB). This offers a role for storage beyond energy arbitrage or ancillary services like frequency control. While previous literature has focused on the potential savings of NB in the short-term operation, we focus on the long-term benefits in systems with high shares of renewable energy sources, where the capacities and dispatch of generation and NB are optimised. We demonstrate the benefits of NB for various shares of renewable energy, NB and flexibility costs, as well as different allowed levels of temporary overloading the lines in both (i) a sequential model, where long-run generation investments are optimised separately from the NB capacities, and (ii) a simultaneous model, where generation is co-optimised with NB investment so that mixed preventive-corrective approaches are possible.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
271,258
0711.3605
Very strict selectional restrictions
We discuss the characteristics and behaviour of two parallel classes of verbs in two Romance languages, French and Portuguese. Examples of these verbs are Port. abater [gado] and Fr. abattre [b\'etail], both meaning "slaughter [cattle]". In both languages, the definition of the class of verbs includes several features: - They have only one essential complement, which is a direct object. - The nominal distribution of the complement is very limited, i.e., few nouns can be selected as head nouns of the complement. However, this selection is not restricted to a single noun, as would be the case for verbal idioms such as Fr. monter la garde "mount guard". - We excluded from the class constructions which are reductions of more complex constructions, e.g. Port. afinar [instrumento] com "tune [instrument] with".
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
944
2401.04988
Optimising Graph Representation for Hardware Implementation of Graph Convolutional Networks for Event-based Vision
Event-based vision is an emerging research field involving processing data generated by Dynamic Vision Sensors (neuromorphic cameras). One of the latest proposals in this area are Graph Convolutional Networks (GCNs), which allow events to be processed in their original sparse form while maintaining high detection and classification performance. In this paper, we present the hardware implementation of a graph generation process from an event camera data stream, taking into account both the advantages and limitations of FPGAs. We propose various ways to simplify the graph representation and use scaling and quantisation of values. We consider both undirected and directed graphs that enable the use of PointNet convolution. The results obtained show that by appropriately modifying the graph representation, it is possible to create a hardware module for graph generation. Moreover, the proposed modifications have no significant impact on object detection performance, only 0.08% mAP less for the base model and the N-Caltech data set. Finally, we describe the proposed hardware architecture of the graph generation module.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
420,615
2105.04017
Infill topology and shape optimisation of lattice-skin structures
Lattice-skin structures composed of a thin-shell skin and a lattice infill are widespread in nature and large-scale engineering due to their efficiency and exceptional mechanical properties. Recent advances in additive manufacturing, or 3D printing, make it possible to create lattice-skin structures of almost any size with arbitrary shape and geometric complexity. We propose a novel gradient-based approach to optimising both the shape and infill of lattice-skin structures to improve their efficiency further. The respective gradients are computed by fully considering the lattice-skin coupling while the lattice topology and shape optimisation problems are solved in a sequential manner. The shell is modelled as a Kirchhoff-Love shell and analysed using isogeometric subdivision surfaces, whereas the lattice is modelled as a pin-jointed truss. The lattice consists of many cells, possibly of different sizes, with each containing a small number of struts. We propose a penalisation approach akin to the SIMP (solid isotropic material with penalisation) method for topology optimisation of the lattice. Furthermore, a corresponding sensitivity filter and a lattice extraction technique are introduced to ensure the stability of the optimisation process and to eliminate scattered struts of small cross-sectional areas. The developed topology optimisation technique is suitable for non-periodic, non-uniform lattices. For shape optimisation of both the shell and the lattice, the geometry of the lattice-skin structure is parameterised using the free-form deformation technique. The topology and shape optimisation problems are solved in an iterative, sequential manner. The effectiveness of the proposed approach and the influence of different algorithmic parameters are demonstrated with several numerical examples.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
234,347
1912.11747
Score and Lyrics-Free Singing Voice Generation
Generative models for singing voice have been mostly concerned with the task of ``singing voice synthesis,'' i.e., to produce singing voice waveforms given musical scores and text lyrics. In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time. In particular, we outline three such generation schemes, and propose a pipeline to tackle these new tasks. Moreover, we implement such models using generative adversarial networks and evaluate them both objectively and subjectively.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
158,651
1907.04235
A Simple Derivation of AMP and its State Evolution via First-Order Cancellation
We consider the linear regression problem, where the goal is to recover the vector $\boldsymbol{x}\in\mathbb{R}^n$ from measurements $\boldsymbol{y}=\boldsymbol{A}\boldsymbol{x}+\boldsymbol{w}\in\mathbb{R}^m$ under known matrix $\boldsymbol{A}$ and unknown noise $\boldsymbol{w}$. For large i.i.d. sub-Gaussian $\boldsymbol{A}$, the approximate message passing (AMP) algorithm is precisely analyzable through a state-evolution (SE) formalism, which furthermore shows that AMP is Bayes optimal in certain regimes. The rigorous SE proof, however, is long and complicated. And, although the AMP algorithm can be derived as an approximation of loopy belief propagation (LBP), this viewpoint provides little insight into why large i.i.d. $\boldsymbol{A}$ matrices are important for AMP, and why AMP has a state evolution. In this work, we provide a heuristic derivation of AMP and its state evolution, based on the idea of "first-order cancellation," that provides insights missing from the LBP derivation while being much shorter than the rigorous SE proof.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
138,056
1908.05596
Two-stage Federated Phenotyping and Patient Representation Learning
A large percentage of medical information is in unstructured text format in electronic medical record systems. Manual extraction of information from clinical notes is extremely time consuming. Natural language processing has been widely used in recent years for automatic information extraction from medical texts. However, algorithms trained on data from a single healthcare provider are not generalizable and error-prone due to the heterogeneity and uniqueness of medical documents. We develop a two-stage federated natural language processing method that enables utilization of clinical notes from different hospitals or clinics without moving the data, and demonstrate its performance using obesity and comorbidities phenotyping as the medical task. This approach not only improves the quality of a specific clinical task but also facilitates knowledge progression in the whole healthcare system, which is an essential part of a learning health system. To the best of our knowledge, this is the first application of federated machine learning in clinical NLP.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
141,753
2004.10382
Image Processing Failure and Deep Learning Success in Lawn Measurement
Lawn area measurement is an application of image processing and deep learning. Researchers have used hierarchical networks, segmented images, and many other methods to measure lawn area, with varying effectiveness and accuracy. In this project, image processing and deep learning methods have been compared to find the best way to measure lawn area. Three image processing methods using OpenCV have been compared to a Convolutional Neural Network, one of the most famous and effective deep learning methods. We used Keras and TensorFlow to estimate the lawn area. The Convolutional Neural Network, or shortly CNN, shows very high accuracy (94-97%) for this purpose. Among the image processing methods, Thresholding with 80-87% accuracy and Edge detection are effective methods to measure lawn area, but Contouring with 26-31% accuracy does not calculate the lawn area successfully. We may conclude that deep learning methods, especially CNN, could be the best detection method compared to image processing and other techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
173,620
2410.20735
Murine AI excels at cats and cheese: Structural differences between human and mouse neurons and their implementation in generative AIs
Mouse and human brains have different functions that depend on their neuronal networks. In this study, we analyzed nanometer-scale three-dimensional structures of brain tissues of the mouse medial prefrontal cortex and compared them with structures of the human anterior cingulate cortex. The obtained results indicated that mouse neuronal somata are smaller and neurites are thinner than those of human neurons. These structural features allow mouse neurons to be integrated in the limited space of the brain, though thin neurites should suppress distal connections according to cable theory. We implemented this mouse-mimetic constraint in convolutional layers of a generative adversarial network (GAN) and a denoising diffusion implicit model (DDIM), which were then subjected to image generation tasks using photo datasets of cat faces, cheese, human faces, and birds. The mouse-mimetic GAN outperformed a standard GAN in the image generation task using the cat faces and cheese photo datasets, but underperformed for human faces and birds. The mouse-mimetic DDIM gave similar results, suggesting that the nature of the datasets affected the results. Analyses of the four datasets indicated differences in their image entropy, which should influence the number of parameters required for image generation. The preferences of the mouse-mimetic AIs coincided with the impressions commonly associated with mice. The relationship between the neuronal network and brain function should be investigated by implementing other biological findings in artificial neural networks.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
502,939
2303.05329
Tucker Bilinear Attention Network for Multi-scale Remote Sensing Object Detection
Object detection on VHR remote sensing images plays a vital role in applications such as urban planning, land resource management, and rescue missions. The large-scale variation of the remote-sensing targets is one of the main challenges in VHR remote-sensing object detection. Existing methods improve the detection accuracy of high-resolution remote sensing objects by improving the structure of feature pyramids and adopting different attention modules. However, for small targets, there are still serious missed detections due to the loss of key detail features. There is still room for improvement in the way of multiscale feature fusion and balance. To address this issue, this paper proposes two novel modules: Guided Attention and Tucker Bilinear Attention, which are applied to the stages of early fusion and late fusion respectively. The former can effectively retain clean key detail features, and the latter can better balance features through semantic-level correlation mining. Based on the two modules, we build a new multi-scale remote sensing object detection framework. No bells and whistles. The proposed method largely improves the average precisions of small objects and achieves the highest mean average precisions compared with 9 state-of-the-art methods on DOTA, DIOR, and NWPU VHR-10. Code and models are available at https://github.com/Shinichict/GTNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
350,425
1803.00118
Surges of collective human activity emerge from simple pairwise correlations
Human populations exhibit complex behaviors---characterized by long-range correlations and surges in activity---across a range of social, political, and technological contexts. Yet it remains unclear where these collective behaviors come from, or if there even exists a set of unifying principles. Indeed, existing explanations typically rely on context-specific mechanisms, such as traffic jams driven by work schedules or spikes in online traffic induced by significant events. However, analogies with statistical mechanics suggest a more general mechanism: that collective patterns can emerge organically from fine-scale interactions within a population. Here, across four different modes of human activity, we show that the simplest correlations in a population---those between pairs of individuals---can yield accurate quantitative predictions for the large-scale behavior of the entire population. To quantify the minimal consequences of pairwise correlations, we employ the principle of maximum entropy, making our description equivalent to an Ising model whose interactions and external fields are notably calculated from past observations of population activity. In addition to providing accurate quantitative predictions, we show that the topology of learned Ising interactions resembles the network of inter-human communication within a population. Together, these results demonstrate that fine-scale correlations can be used to predict large-scale social behaviors, a perspective that has critical implications for modeling and resource allocation in human populations.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
91,594
2212.06882
Envisioning a Human-AI collaborative system to transform policies into decision models
Regulations govern many aspects of citizens' daily lives. Governments and businesses routinely automate these in the form of coded rules (e.g., to check a citizen's eligibility for specific benefits). However, the path to automation is long and challenging. To address this, recent global initiatives for digital government, proposing to simultaneously express policy in natural language for human consumption as well as computationally amenable rules or code, are gathering broad public-sector interest. We introduce the problem of semi-automatically building decision models from eligibility policies for social services, and present an initial emerging approach to shorten the route from policy documents to executable, interpretable and standardised decision models using AI, NLP and Knowledge Graphs. Despite the many open domain challenges, in this position paper we explore the enormous potential of AI to assist government agencies and policy experts in scaling the production of both human-readable and machine executable policy rules, while improving transparency, interpretability, traceability and accountability of the decision making.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
336,239
2304.13479
Fundamental Tradeoffs in Learning with Prior Information
We seek to understand fundamental tradeoffs between the accuracy of prior information that a learner has on a given problem and its learning performance. We introduce the notion of prioritized risk, which differs from traditional notions of minimax and Bayes risk by allowing us to study such fundamental tradeoffs in settings where reality does not necessarily conform to the learner's prior. We present a general reduction-based approach for extending classical minimax lower-bound techniques in order to lower bound the prioritized risk for statistical estimation problems. We also introduce a novel generalization of Fano's inequality (which may be of independent interest) for lower bounding the prioritized risk in more general settings involving unbounded losses. We illustrate the ability of our framework to provide insights into tradeoffs between prior information and learning performance for problems in estimation, regression, and reinforcement learning.
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
false
360,591
2210.00025
Artificial Replay: A Meta-Algorithm for Harnessing Historical Data in Bandits
Most real-world deployments of bandit algorithms exist somewhere in between the offline and online set-up, where some historical data is available upfront and additional data is collected dynamically online. How best to incorporate historical data to "warm start" bandit algorithms is an open question: naively initializing reward estimates using all historical samples can suffer from spurious data and imbalanced data coverage, leading to computation and storage issues, particularly for continuous action spaces. To address these challenges, we propose Artificial-Replay, a meta-algorithm for incorporating historical data into any arbitrary base bandit algorithm. We show that Artificial-Replay uses only a fraction of the historical data compared to a full warm-start approach, while still achieving identical regret for base algorithms that satisfy independence of irrelevant data (IIData), a novel and broadly applicable property that we introduce. We complement these theoretical results with experiments on (i) K-armed bandits and (ii) continuous combinatorial bandits, on which we model green security domains using real poaching data. Our results show the practical benefits of Artificial-Replay in reducing computation and space complexity, including for base algorithms that do not satisfy IIData.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
320,692
2312.09778
Hypergraph-MLP: Learning on Hypergraphs without Message Passing
Hypergraphs are vital in modelling data with higher-order relations containing more than two entities, gaining prominence in machine learning and signal processing. Many hypergraph neural networks leverage message passing over hypergraph structures to enhance node representation learning, yielding impressive performances in tasks like hypergraph node classification. However, these message-passing-based models face several challenges, including oversmoothing as well as high latency and sensitivity to structural perturbations at inference time. To tackle those challenges, we propose an alternative approach where we integrate the information about hypergraph structures into training supervision without explicit message passing, thus also removing the reliance on it at inference. Specifically, we introduce Hypergraph-MLP, a novel learning framework for hypergraph-structured data, where the learning model is a straightforward multilayer perceptron (MLP) supervised by a loss function based on a notion of signal smoothness on hypergraphs. Experiments on hypergraph node classification tasks demonstrate that Hypergraph-MLP achieves competitive performance compared to existing baselines, and is considerably faster and more robust against structural perturbations at inference.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
415,874
1807.05324
Generating Synthetic Data for Neural Keyword-to-Question Models
Search typically relies on keyword queries, but these are often semantically ambiguous. We propose to overcome this by offering users natural language questions, based on their keyword queries, to disambiguate their intent. This keyword-to-question task may be addressed using neural machine translation techniques. Neural translation models, however, require massive amounts of training data (keyword-question pairs), which is unavailable for this task. The main idea of this paper is to generate large amounts of synthetic training data from a small seed set of hand-labeled keyword-question pairs. Since natural language questions are available in large quantities, we develop models to automatically generate the corresponding keyword queries. Further, we introduce various filtering mechanisms to ensure that synthetic training data is of high quality. We demonstrate the feasibility of our approach using both automatic and manual evaluation. This is an extended version of the article published with the same title in the Proceedings of ICTIR'18.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
102,903
1105.0256
Easy-to-compute parameterizations of all wavelet filters: input-output and state-space
We here use notions from the theory of linear shift-invariant dynamical systems to provide an easy-to-compute characterization of all rational wavelet filters. For a given number of inputs $N \geq 2$, the construction is based on a factorization into an elementary wavelet filter along with $m$ elementary unitary matrices. We shall call this $m$ the index of the filter. It turns out that the resulting wavelet filter is of McMillan degree $N((N-1)/2+m)$. Rational wavelet filters bounded at infinity admit a state-space realization. The above input-output parameterization is exploited for a step-by-step construction (where in each step the index $m$ is increased by one) of a state-space model of wavelet filters.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
10,204
1908.06003
Exploring Properties of Icosoku by Constraint Satisfaction Approach
Icosoku is a challenging and interesting puzzle that exhibits highly symmetrical and combinatorial nature. In this paper, we pose the questions derived from the puzzle, but with more difficulty and generality. In addition, we also present a constraint programming model for the proposed questions, which can provide the answers to our first two questions. The purpose of this paper is to share our preliminary result and problems to encourage researchers in both group theory and constraint communities to consider this topic further.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
141,879
2404.04887
A Clinical-oriented Multi-level Contrastive Learning Method for Disease Diagnosis in Low-quality Medical Images
Representation learning offers a conduit to elucidate distinctive features within the latent space and interpret the deep models. However, the randomness of lesion distribution and the complexity of low-quality factors in medical images pose great challenges for models to extract key lesion features. Disease diagnosis methods guided by contrastive learning (CL) have shown significant advantages in lesion feature representation. Nevertheless, the effectiveness of CL is highly dependent on the quality of the positive and negative sample pairs. In this work, we propose a clinical-oriented multi-level CL framework that aims to enhance the model's capacity to extract lesion features and discriminate between lesion and low-quality factors, thereby enabling more accurate disease diagnosis from low-quality medical images. Specifically, we first construct multi-level positive and negative pairs to enhance the model's comprehensive recognition capability of lesion features by integrating information from different levels and qualities of medical images. Moreover, to improve the quality of the learned lesion embeddings, we introduce a dynamic hard sample mining method based on self-paced learning. The proposed CL framework is validated on two public medical image datasets, EyeQ and Chest X-ray, demonstrating superior performance compared to other state-of-the-art disease diagnostic methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
444,842
1907.03336
Search-Based Serving Architecture of Embeddings-Based Recommendations
Over the past 10 years, many recommendation techniques have been based on embedding users and items in latent vector spaces, where the inner product of a (user,item) pair of vectors represents the predicted affinity of the user to the item. A wealth of literature has focused on the various modeling approaches that result in embeddings, and has compared their quality metrics, learning complexity, etc. However, much less attention has been devoted to the issues surrounding productization of an embeddings-based high throughput, low latency recommender system. In particular, how the system might keep up with the changing embeddings as new models are learnt. This paper describes a reference architecture of a high-throughput, large scale recommendation service which leverages a search engine as its runtime core. We describe how the search index and the query builder adapt to changes in the embeddings, which often happen at a different cadence than index builds. We provide solutions for both id-based and feature-based embeddings, as well as for batch indexing and incremental indexing setups. The described system is at the core of a Web content discovery service that serves tens of billions of recommendations per day in response to billions of user requests.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
137,837
2104.12827
Learning-based decentralized offloading decision making in an adversarial environment
Vehicular fog computing (VFC) pushes the cloud computing capability to the distributed fog nodes at the edge of the Internet, enabling compute-intensive and latency-sensitive computing services for vehicles through task offloading. However, a heterogeneous mobility environment introduces uncertainties in terms of resource supply and demand, which are inevitable bottlenecks for the optimal offloading decision. Also, these uncertainties bring extra challenges to task offloading under the oblivious adversary attack and data privacy risks. In this article, we develop a new adversarial online learning algorithm with bandit feedback based on the adversarial multi-armed bandit theory, to enable scalable and low-complexity offloading decision making. Specifically, we focus on optimizing fog node selection with the aim of minimizing the offloading service costs in terms of delay and energy. The key is to implicitly tune the exploration bonus in the selection process and the assessment rules of the designed algorithm, taking into account volatile resource supply and demand. We theoretically prove that the input-size dependent selection rule allows to choose a suitable fog node without exploring the sub-optimal actions, and also an appropriate score patching rule allows to quickly adapt to evolving circumstances, which reduce variance and bias simultaneously, thereby achieving a better exploitation-exploration balance. Simulation results verify the effectiveness and robustness of the proposed algorithm.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
232,321
2408.07718
Impact of Inaccurate Contamination Ratio on Robust Unsupervised Anomaly Detection
Training data sets intended for unsupervised anomaly detection, typically presumed to be anomaly-free, often contain anomalies (or contamination), a challenge that significantly undermines model performance. Most robust unsupervised anomaly detection models rely on contamination ratio information to tackle contamination. However, in reality, the contamination ratio may be inaccurate. We investigate the impact of inaccurate contamination ratio information on robust unsupervised anomaly detection. We verify whether such models are resilient to misinformed contamination ratios. Our investigation on 6 benchmark data sets reveals that such models are not adversely affected by exposure to misinformation. In fact, they can exhibit improved performance when provided with such inaccurate contamination ratios.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
480,699
2406.15581
Necessary and sufficient condition for neutral-type delay systems: Polynomial approximations
A new necessary and sufficient stability test in a tractable number of operations for linear neutral-type delay systems is introduced. It is developed in the Lyapunov-Krasovskii framework via functionals with prescribed derivatives. The necessary conditions, which stem from substituting any polynomial approximation of the functional argument, reduce to a quadratic form of monomials whose matrix is independent of the coefficients of the approximation under consideration. In the particular case of Chebyshev polynomials, the functional approximation error is quantified, leading to an estimate of the order of approximation such that the positive semi-definiteness of the functional is verified. Some examples illustrate the obtained results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
466,783
2303.01684
BO-Muse: A human expert and AI teaming framework for accelerated experimental design
In this paper we introduce BO-Muse, a new approach to human-AI teaming for the optimization of expensive black-box functions. Inspired by the intrinsic difficulty of extracting expert knowledge and distilling it back into AI models and by observations of human behavior in real-world experimental design, our algorithm lets the human expert take the lead in the experimental process. The human expert can use their domain expertise to its full potential, while the AI plays the role of a muse, injecting novelty and searching for areas of weakness to break the human out of over-exploitation induced by cognitive entrenchment. With mild assumptions, we show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone. We validate our algorithm using synthetic data and with human experts performing real-world experiments.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
349,060
1710.02081
Online Photometric Calibration for Auto Exposure Video for Realtime Visual Odometry and SLAM
Recent direct visual odometry and SLAM algorithms have demonstrated impressive levels of precision. However, they require a photometric camera calibration in order to achieve competitive results. Hence, the respective algorithm cannot be directly applied to an off-the-shelf camera or to a video sequence acquired with an unknown camera. In this work we propose a method for online photometric calibration which enables processing auto exposure videos with visual odometry precisions that are on par with those of photometrically calibrated videos. Our algorithm recovers the exposure times of consecutive frames, the camera response function, and the attenuation factors of the sensor irradiance due to vignetting. Gain-robust KLT feature tracks are used to obtain scene point correspondences as input to a nonlinear optimization framework. We show that our approach can reliably calibrate arbitrary video sequences by evaluating it on datasets for which full photometric ground truth is available. We further show that our calibration can improve the performance of a state-of-the-art direct visual odometry method that works solely on pixel intensities, calibrating for photometric parameters in an online fashion in realtime.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
82,104
2204.03776
TorMentor: Deterministic dynamic-path, data augmentations with fractals
We propose the use of fractals as a means of efficient data augmentation. Specifically, we employ plasma fractals for adapting global image augmentation transformations into continuous local transforms. We formulate the diamond-square algorithm as a cascade of simple convolution operations allowing efficient computation of plasma fractals on the GPU. We present the TorMentor image augmentation framework that is totally modular and deterministic across images and point-clouds. All image augmentation operations can be combined through pipelining and random branching to form flow networks of arbitrary width and depth. We demonstrate the efficiency of the proposed approach with experiments on document image segmentation (binarization) with the DIBCO datasets. The proposed approach demonstrates superior performance to traditional image augmentation techniques. Finally, we use extended synthetic binary text images in a self-supervision regimen and outperform the same model when trained with limited data and simple extensions.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
290,431
2306.01638
Do we become wiser with time? On causal equivalence with tiered background knowledge
Equivalence classes of DAGs (represented by CPDAGs) may be too large to provide useful causal information. Here, we address incorporating tiered background knowledge yielding restricted equivalence classes represented by 'tiered MPDAGs'. Tiered knowledge leads to considerable gains in informativeness and computational efficiency: We show that construction of tiered MPDAGs only requires application of Meek's 1st rule, and that tiered MPDAGs (unlike general MPDAGs) are chain graphs with chordal components. This entails simplifications e.g. of determining valid adjustment sets for causal effect estimation. Further, we characterise when one tiered ordering is more informative than another, providing insights into useful aspects of background knowledge.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
370,522
2003.04151
Embedding Propagation: Smoother Manifold for Few-Shot Classification
Few-shot classification is challenging because the data distribution of the training set can be widely different from the test set as their classes are disjoint. This distribution shift often results in poor generalization. Manifold smoothing has been shown to address the distribution shift problem by extending the decision boundaries and reducing the noise of the class representations. Moreover, manifold smoothness is a key factor for semi-supervised learning and transductive learning algorithms. In this work, we propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification. Embedding propagation leverages interpolations between the extracted features of a neural network based on a similarity graph. We empirically show that embedding propagation yields a smoother embedding manifold. We also show that applying embedding propagation to a transductive classifier achieves new state-of-the-art results in mini-Imagenet, tiered-Imagenet, Imagenet-FS, and CUB. Furthermore, we show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points. The proposed embedding propagation operation can be easily integrated as a non-parametric layer into a neural network. We provide the training code and usage examples at https://github.com/ElementAI/embedding-propagation.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
167,466
2412.17003
Anonymous Shamir's Secret Sharing via Reed-Solomon Codes Against Permutations, Insertions, and Deletions
In this work, we study the performance of Reed-Solomon codes against an adversary that first permutes the symbols of the codeword and then performs insertions and deletions. This adversarial model is motivated by the recent interest in fully anonymous secret-sharing schemes [EBG+24],[BGI+24]. A fully anonymous secret-sharing scheme has two key properties: (1) the identities of the participants are not revealed before the secret is reconstructed, and (2) the shares of any unauthorized set of participants are uniform and independent. In particular, the shares of any unauthorized subset reveal no information about the identity of the participants who hold them. In this work, we first make the following observation: Reed-Solomon codes that are robust against an adversary that permutes the codeword and then deletes symbols from the permuted codeword can be used to construct ramp threshold secret-sharing schemes that are fully anonymous. Then, we show that over fields of large enough size, there are $[n,k]$ Reed-Solomon codes that are robust against an adversary that arbitrarily permutes the codeword and then performs $n-2k+1$ insertions and deletions to the permuted codeword. This implies the existence of a $(k-1, 2k-1, n)$ ramp secret-sharing scheme that is fully anonymous. That is, any $k-1$ shares reveal nothing about the secret, and, moreover, this set of shares reveals no information about the identities of the players who hold them. On the other hand, any $2k-1$ shares can reconstruct the secret without revealing their identities. We also provide explicit constructions of such schemes based on previous works on Reed-Solomon codes correcting insertions and deletions. The constructions in this paper give the first gap threshold secret-sharing schemes that satisfy the strongest notion of anonymity together with perfect reconstruction.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
519,790
2309.00514
A Machine Vision Method for Correction of Eccentric Error: Based on Adaptive Enhancement Algorithm
In the procedure of surface defect detection for large-aperture aspherical optical elements, it is of vital significance to adjust the optical axis of the element to be accurately coaxial with the mechanical spin axis. Therefore, a machine vision method for eccentric error correction is proposed in this paper. Focusing on the severe defocus blur of the reference crosshair image caused by the imaging characteristics of the aspherical optical element, which may lead to the failure of correction, an Adaptive Enhancement Algorithm (AEA) is proposed to strengthen the crosshair image. AEA consists of the existing Guided Filter Dark Channel Dehazing Algorithm (GFA) and the proposed lightweight Multi-scale Densely Connected Network (MDC-Net). The enhancement effect of GFA is excellent but time-consuming, and the enhancement effect of MDC-Net is slightly inferior but strongly real-time. As AEA will be executed dozens of times during each correction procedure, its real-time performance is very important. Therefore, by setting an empirical threshold on the definition evaluation function SMD2, GFA and MDC-Net are respectively applied to highly and slightly blurred crosshair images so as to ensure the enhancement effect while saving as much time as possible. AEA has a certain robustness in time-consuming performance, taking an average time of 0.2721 s and 0.0963 s to execute GFA and MDC-Net separately on ten 200x200-pixel Region of Interest (ROI) images with different degrees of blur. The eccentricity error can be reduced to within 10 um by our method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
389,326
2404.08869
Misinformation Resilient Search Rankings with Webgraph-based Interventions
The proliferation of unreliable news domains on the internet has had wide-reaching negative impacts on society. We introduce and evaluate interventions aimed at reducing traffic to unreliable news domains from search engines while maintaining traffic to reliable domains. We build these interventions on the principles of fairness (penalize sites for what is in their control), generality (label/fact-check agnostic), targeted (increase the cost of adversarial behavior), and scalability (works at webscale). We refine our methods on small-scale webdata as a testbed and then generalize the interventions to a large-scale webgraph containing 93.9M domains and 1.6B edges. We demonstrate that our methods penalize unreliable domains far more than reliable domains in both settings and we explore multiple avenues to mitigate unintended effects on both the small-scale and large-scale webgraph experiments. These results indicate the potential of our approach to reduce the spread of misinformation and foster a more reliable online information ecosystem. This research contributes to the development of targeted strategies to enhance the trustworthiness and quality of search engine results, ultimately benefiting users and the broader digital community.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
446,439
2201.06924
A Synthetic Prediction Market for Estimating Confidence in Published Work
Explainably estimating confidence in published scholarly work offers opportunity for faster and more robust scientific progress. We develop a synthetic prediction market to assess the credibility of published claims in the social and behavioral sciences literature. We demonstrate our system and detail our findings using a collection of known replication projects. We suggest that this work lays the foundation for a research agenda that creatively uses AI for peer review.
false
false
false
false
true
true
true
false
false
false
false
false
false
true
true
false
false
false
275,877
2201.07705
GEMEL: Model Merging for Memory-Efficient, Real-Time Video Analytics at the Edge
Video analytics pipelines have steadily shifted to edge deployments to reduce bandwidth overheads and privacy violations, but in doing so, face an ever-growing resource tension. Most notably, edge-box GPUs lack the memory needed to concurrently house the growing number of (increasingly complex) models for real-time inference. Unfortunately, existing solutions that rely on time/space sharing of GPU resources are insufficient as the required swapping delays result in unacceptable frame drops and accuracy violations. We present model merging, a new memory management technique that exploits architectural similarities between edge vision models by judiciously sharing their layers (including weights) to reduce workload memory costs and swapping delays. Our system, GEMEL, efficiently integrates merging into existing pipelines by (1) leveraging several guiding observations about per-model memory usage and inter-layer dependencies to quickly identify fruitful and accuracy-preserving merging configurations, and (2) altering edge inference schedules to maximize merging benefits. Experiments across diverse workloads reveal that GEMEL reduces memory usage by up to 60.7%, and improves overall accuracy by 8-39% relative to time/space sharing alone.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
276,109
1901.01855
A* Tree Search for Portfolio Management
We propose a planning-based method to teach an agent to manage a portfolio from scratch. Our approach combines deep reinforcement learning techniques with search techniques like those in AlphaGo. By uniting the advantages of the A* search algorithm with Monte Carlo tree search, we come up with a new algorithm named A* tree search, in which the best information is returned to guide the next search. Also, the expansion mode of the Monte Carlo tree is improved for a higher utilization of the neural network. The suggested algorithm can also optimize a non-differentiable utility function by combinatorial search. This technique is then used in our trading system. The major component is a neural network that is trained by trading experiences from tree search and outputs prior probabilities to guide search by pruning away branches in turn. Experimental results on simulated and real financial data verify the robustness of the proposed trading system, and the trading system produces better strategies than several approaches based on reinforcement learning.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
118,065
1704.05123
Resolution-Exact Planner for Thick Non-Crossing 2-Link Robots
We consider the path planning problem for a 2-link robot amidst polygonal obstacles. Our robot is parametrizable by the lengths $\ell_1, \ell_2>0$ of its two links, the thickness $\tau \ge 0$ of the links, and an angle $\kappa$ that constrains the angle between the 2 links to be strictly greater than $\kappa$. The case $\tau>0$ and $\kappa \ge 0$ corresponds to "thick non-crossing" robots. This results in a novel 4DOF configuration space ${\mathbb R}^2\times ({\mathbb T}\setminus\Delta(\kappa))$ where ${\mathbb T}$ is the torus and $\Delta(\kappa)$ the diagonal band of width $\kappa$. We design a resolution-exact planner for this robot using the framework of Soft Subdivision Search (SSS). First, we provide an analysis of the space of forbidden angles, leading to a soft predicate for classifying configuration boxes. We further exploit the T/R splitting technique which was previously introduced for self-crossing thin 2-link robots. Our open-source implementation in Core Library achieves real-time performance for a suite of combinatorially non-trivial obstacle sets. Experimentally, our algorithm is significantly better than any of the state-of-art sampling algorithms we looked at, in timing and in success rate.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
71,942
2302.14386
Practical Algorithms for Orientations of Partially Directed Graphical Models
In observational studies, the true causal model is typically unknown and needs to be estimated from available observational and limited experimental data. In such cases, the learned causal model is commonly represented as a partially directed acyclic graph (PDAG), which contains both directed and undirected edges indicating uncertainty of causal relations between random variables. The main focus of this paper is on the maximal orientation task, which, for a given PDAG, aims to orient the undirected edges maximally such that the resulting graph represents the same Markov equivalent DAGs as the input PDAG. This task is a subroutine used frequently in causal discovery, e.g., as the final step of the celebrated PC algorithm. Utilizing connections to the problem of finding a consistent DAG extension of a PDAG, we derive faster algorithms for computing the maximal orientation by proposing two novel approaches for extending PDAGs, both constructed with an emphasis on simplicity and practical effectiveness.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
348,274
2209.09185
Active Inference for Autonomous Decision-Making with Contextual Multi-Armed Bandits
In autonomous robotic decision-making under uncertainty, the tradeoff between exploitation and exploration of available options must be considered. If secondary information associated with options can be utilized, such decision-making problems can often be formulated as contextual multi-armed bandits (CMABs). In this study, we apply active inference, which has been actively studied in the field of neuroscience in recent years, as an alternative action selection strategy for CMABs. Unlike conventional action selection strategies, it is possible to rigorously evaluate the uncertainty of each option when calculating the expected free energy (EFE) associated with the decision agent's probabilistic model, as derived from the free-energy principle. We specifically address the case where a categorical observation likelihood function is used, such that EFE values are analytically intractable. We introduce new approximation methods for computing the EFE based on variational and Laplace approximations. Extensive simulation study results demonstrate that, compared to other strategies, active inference generally requires far fewer iterations to identify optimal options and generally achieves superior cumulative regret, for relatively low extra computational cost.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
318,415
2408.16849
Machine Learning-Based Research on the Adaptability of Adolescents to Online Education
With the rapid advancement of internet technology, the adaptability of adolescents to online learning has emerged as a focal point of interest within the educational sphere. However, the academic community's efforts to develop predictive models for adolescent online learning adaptability require further refinement and expansion. Utilizing data from the "Chinese Adolescent Online Education Survey" spanning the years 2014 to 2016, this study implements five machine learning algorithms - logistic regression, K-nearest neighbors, random forest, XGBoost, and CatBoost - to analyze the factors influencing adolescent online learning adaptability and to determine the model best suited for prediction. The research reveals that the duration of courses, the financial status of the family, and age are the primary factors affecting students' adaptability in online learning environments. Additionally, age significantly impacts students' adaptive capacities. Among the predictive models, the random forest, XGBoost, and CatBoost algorithms demonstrate superior forecasting capabilities, with the random forest model being particularly adept at capturing the characteristics of students' adaptability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
484,469
1609.04602
New MDS or near MDS self-dual codes over finite fields
The study of MDS self-dual codes has attracted much attention in recent years. There are many papers on determining the existence of $q$-ary MDS self-dual codes for various lengths. However, $q$-ary MDS self-dual codes do not exist for some lengths, even lengths $< q$. We generalize MDS Euclidean self-dual codes to near MDS Euclidean self-dual codes and near MDS isodual codes. We obtain many new near MDS isodual codes from extended negacyclic duadic codes, and many new MDS Euclidean self-dual codes from existing MDS Euclidean self-dual codes. We also generalize MDS Hermitian self-dual codes to near MDS Hermitian self-dual codes, and obtain near MDS Hermitian self-dual codes from extended negacyclic duadic codes and from MDS Hermitian self-dual codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
61,009
1606.03838
Laplacian LRR on Product Grassmann Manifolds for Human Activity Clustering in Multi-Camera Video Surveillance
In multi-camera video surveillance, it is challenging to represent videos from different cameras properly and fuse them efficiently for specific applications such as human activity recognition and clustering. In this paper, a novel representation for multi-camera video data, namely the Product Grassmann Manifold (PGM), is proposed to model video sequences as points on the Grassmann manifold and integrate them as a whole in the product manifold form. Additionally, with a new geometry metric on the product manifold, the conventional Low Rank Representation (LRR) model is extended onto PGM and the new LRR model can be used for clustering non-linear data, such as multi-camera video data. To evaluate the proposed method, a number of clustering experiments are conducted on several multi-camera video datasets of human activity, including Dongzhimen Transport Hub Crowd action dataset, ACT 42 Human action dataset and SKIG action dataset. The experiment results show that the proposed method outperforms many state-of-the-art clustering methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
57,153
2310.14028
GASCOM: Graph-based Attentive Semantic Context Modeling for Online Conversation Understanding
Online conversation understanding is an important yet challenging NLP problem which has many useful applications (e.g., hate speech detection). However, online conversations typically unfold over a series of posts and replies to those posts, forming a tree structure within which individual posts may refer to semantic context from higher up the tree. Such semantic cross-referencing makes it difficult to understand a single post by itself; yet considering the entire conversation tree is not only difficult to scale but can also be misleading as a single conversation may have several distinct threads or points, not all of which are relevant to the post being considered. In this paper, we propose a Graph-based Attentive Semantic COntext Modeling (GASCOM) framework for online conversation understanding. Specifically, we design two novel algorithms that utilise both the graph structure of the online conversation as well as the semantic information from individual posts for retrieving relevant context nodes from the whole conversation. We further design a token-level multi-head graph attention mechanism to pay different attentions to different tokens from different selected context utterances for fine-grained conversation context modeling. Using this semantic conversational context, we re-examine two well-studied problems: polarity prediction and hate speech detection. Our proposed framework significantly outperforms state-of-the-art methods on both tasks, improving macro-F1 scores by 4.5% for polarity prediction and by 5% for hate speech detection. The GASCOM context weights also enhance interpretability.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
401,679
1310.3358
A Kalman Filtering approach of improved precision for fault diagnosis in distributed parameter systems
The Derivative-free nonlinear Kalman Filter is proposed for state estimation and fault diagnosis in distributed parameter systems, and particularly in dynamical systems described by partial differential equations of the nonlinear wave type. At a first stage, a nonlinear filtering approach for estimating the dynamics of a 1D nonlinear wave equation, from measurements provided by a small number of sensors, is developed. It is shown that the numerical solution of the associated partial differential equation results in a set of nonlinear ordinary differential equations. With the application of a diffeomorphism based on differential flatness theory, it is shown that an equivalent description of the system is obtained in the linear canonical (Brunovsky) form. This transformation enables local estimates of the state vector of the system to be obtained through the application of the standard Kalman Filter recursion. At a second stage, the local statistical approach to fault diagnosis is used to perform fault diagnosis for the distributed parameter system by processing, with elaborated statistical tools, the differences (residuals) between the output of the Kalman Filter and the measurements obtained from the distributed parameter system. Optimal selection of the fault threshold is achieved by using the local statistical approach to fault diagnosis. The efficiency of the proposed filtering approach for performing fault diagnosis in distributed parameter systems is confirmed through simulation experiments.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
27,738
2412.10424
LLM-as-an-Interviewer: Beyond Static Testing Through Dynamic LLM Evaluation
We introduce LLM-as-an-Interviewer, a novel paradigm for evaluating large language models (LLMs). This approach leverages multi-turn interactions where the LLM interviewer actively provides feedback on responses and poses follow-up questions to the evaluated LLM. At the start of the interview, the LLM interviewer dynamically modifies datasets to generate initial questions, mitigating data contamination. We apply the LLM-as-an-Interviewer framework to evaluate six models on the MATH and DepthQA tasks. Our results show that the framework effectively provides insights into LLM performance, including the quality of initial responses, adaptability to feedback, and ability to address follow-up queries like clarification or additional knowledge requests. The framework also addresses key limitations of conventional methods like LLM-as-a-Judge, including verbosity bias and inconsistency across runs. Finally, we propose the Interview Report, which aggregates insights from the interview process, providing examples and a comprehensive analysis of the LLM's strengths and weaknesses. This report offers a detailed snapshot of the model's real-world applicability. The code for our framework is publicly available at https://github.com/interview-eval/.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
516,915
2011.03742
Fast 3D Modeling of Anthropomorphic Robotic Hands Based on A Multi-layer Deformable Design
Current anthropomorphic robotic hands mainly focus on improving their dexterity by devising new mechanical structures and actuation systems. However, most of them rely on a single structure/system (e.g., bone-only) and ignore the fact that the human hand is composed of multiple functional structures (e.g., skin, bones, muscles, and tendons). This not only increases the difficulty of the design process but also lowers the robustness and flexibility of the fabricated hand. Besides, other factors, like customization, the time and cost of production, and the degree of resemblance between human hands and robotic hands, remain largely overlooked. To tackle these problems, this study proposes a 3D-printable multi-layer design that models the hand with the layers of skin, tissues, and bones. The proposed design first obtains the 3D surface model of a target hand via 3D scanning, and then generates the 3D bone models from the surface model based on a fast template matching method. To overcome the disadvantage of the rigid bone layer in deformation, the tissue layer is introduced and represented by a concentric-tube-based structure, of which the deformability can be explicitly controlled by a parameter. Besides, a low-cost yet effective underactuated system is adopted to drive the fabricated hand. The proposed design is tested with 33 widely used object grasping types, as well as special objects like fragile silken tofu, and outperforms previous designs remarkably. With the proposed design, anthropomorphic robotic hands can be produced fast at low cost, and be customizable and deformable.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
205,338
2312.08022
Mono3DVG: 3D Visual Grounding in Monocular Images
We introduce a novel task of 3D visual grounding in monocular RGB images using language descriptions with both appearance and geometry information. Specifically, we build a large-scale dataset, Mono3DRefer, which contains 3D object targets with their corresponding geometric text descriptions, generated by ChatGPT and refined manually. To foster this task, we propose Mono3DVG-TR, an end-to-end transformer-based network, which takes advantage of both the appearance and geometry information in text embeddings for multi-modal learning and 3D object localization. A depth predictor is designed to explicitly learn geometry features. The dual text-guided adapter is proposed to refine multiscale visual and geometry features of the referred object. Based on depth-text-visual stacking attention, the decoder fuses object-level geometric cues and visual appearance into a learnable query. Comprehensive benchmarks and some insightful analyses are provided for Mono3DVG. Extensive comparisons and ablation studies show that our method significantly outperforms all baselines. The dataset and code will be publicly available at: https://github.com/ZhanYang-nwpu/Mono3DVG.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
415,162
1807.07998
Convolutional Neural Networks Analyzed via Inverse Problem Theory and Sparse Representations
Inverse problems in imaging such as denoising, deblurring, and superresolution (SR) have been addressed for many decades. In recent years, convolutional neural networks (CNNs) have been widely used for many inverse problem areas. Despite their indisputable success, CNNs are not mathematically validated as to how and what they learn. In this paper, we prove that during training, CNN elements solve for inverse problems whose optimum solutions are stored as CNN neuron filters. We discuss the necessity of mutual coherence between CNN layer elements in order for a network to converge to the optimum solution. We prove that the required mutual coherence can be provided by the usage of residual learning and skip connections. We have set rules over training sets and depth of networks for better convergence, i.e. performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
103,431
1701.04851
Synthesizing Normalized Faces from Facial Identity Features
We present a method for synthesizing a frontal, neutral-expression image of a person's face given an input face photograph. This is achieved by learning to generate facial landmarks and textures from features extracted from a facial-recognition network. Unlike previous approaches, our encoding feature vector is largely invariant to lighting, pose, and facial expression. Exploiting this invariance, we train our decoder network using only frontal, neutral-expression photographs. Since these photographs are well aligned, we can decompose them into a sparse set of landmark points and aligned texture maps. The decoder then predicts landmarks and textures independently and combines them using a differentiable image warping operation. The resulting images can be used for a number of applications, such as analyzing facial attributes, exposure and white balance adjustment, or creating a 3-D avatar.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
66,896
2406.11070
Fine-grained Classes and How to Find Them
In many practical applications, coarse-grained labels are readily available compared to fine-grained labels that reflect subtle differences between classes. However, existing methods cannot leverage coarse labels to infer fine-grained labels in an unsupervised manner. To bridge this gap, we propose FALCON, a method that discovers fine-grained classes from coarsely labeled data without any supervision at the fine-grained level. FALCON simultaneously infers unknown fine-grained classes and underlying relationships between coarse and fine-grained classes. Moreover, FALCON is a modular method that can effectively learn from multiple datasets labeled with different strategies. We evaluate FALCON on eight image classification tasks and a single-cell classification task. FALCON outperforms baselines by a large margin, achieving 22% improvement over the best baseline on the tieredImageNet dataset with over 600 fine-grained classes.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
464,702
2305.19582
Causal Discovery with Latent Confounders Based on Higher-Order Cumulants
Causal discovery with latent confounders is an important but challenging task in many scientific areas. Despite the success of some overcomplete independent component analysis (OICA) based methods in certain domains, they are computationally expensive and can easily get stuck in local optima. We notice that interestingly, by making use of higher-order cumulants, there exists a closed-form solution to OICA in specific cases, e.g., when the mixing procedure follows the One-Latent-Component structure. In light of the power of the closed-form solution to OICA corresponding to the One-Latent-Component structure, we formulate a way to estimate the mixing matrix using the higher-order cumulants, and further propose the testable One-Latent-Component condition to identify the latent variables and determine causal orders. By iteratively removing the identified shared latent components, we successfully extend the results on the One-Latent-Component structure to the Multi-Latent-Component structure and finally provide a practical and asymptotically correct algorithm to learn the causal structure with latent variables. Experimental results illustrate the asymptotic correctness and effectiveness of the proposed method.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
369,591
2401.09032
Improved Consensus ADMM for Cooperative Motion Planning of Large-Scale Connected Autonomous Vehicles with Limited Communication
This paper investigates a cooperative motion planning problem for large-scale connected autonomous vehicles (CAVs) under limited communication, which addresses the challenges of high communication and computing resource requirements. Our proposed methodology incorporates a parallel optimization algorithm with improved consensus ADMM that considers a more realistic locally connected topology network, and a time complexity of O(N) is achieved by exploiting the sparsity in the dual update process. To further enhance the computational efficiency, we employ a lightweight evolution strategy for the dynamic connectivity graph of CAVs, and each sub-problem split from the consensus ADMM only requires managing a small group of CAVs. The proposed method implemented with the receding horizon scheme is validated thoroughly, and comparisons with existing numerical solvers and approaches demonstrate the efficiency of our proposed algorithm. Also, simulations on large-scale cooperative driving tasks involving 80 vehicles are performed in the high-fidelity CARLA simulator, which highlights the remarkable computational efficiency, scalability, and effectiveness of our proposed development. Demonstration videos are available at https://henryhcliu.github.io/icadmm_cmp_carla.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
422,122
2212.07907
Automatic vehicle trajectory data reconstruction at scale
In this paper we propose an automatic trajectory data reconciliation method to correct common errors in vision-based vehicle trajectory data. Given "raw" vehicle detection and tracking information from automatic video processing algorithms, we propose a pipeline including (a) an online data association algorithm to match fragments that describe the same object (vehicle), which is formulated as a min-cost network circulation problem of a graph, and (b) a one-step trajectory rectification procedure formulated as a quadratic program to enhance raw detection data. The pipeline leverages vehicle dynamics and physical constraints to associate tracked objects when they become fragmented, remove measurement noise and outliers, and impute missing data due to fragmentation. We assess the capability of the proposed two-step pipeline to reconstruct three benchmarking datasets: (1) a microsimulation dataset that is artificially downgraded to replicate upstream errors, (2) a 15-min NGSIM dataset that is manually perturbed, and (3) tracking data consisting of 3 scenes from collections of video data recorded from 16-17 cameras on a section of the I-24 MOTION system, which we compare with the corresponding manually labeled ground-truth vehicle bounding boxes. All of the experiments show that the reconciled trajectories improve the accuracy on all the tested input data for a wide range of measures. Lastly, we show the design of a software architecture that is currently deployed on the full-scale I-24 MOTION system consisting of 276 cameras that cover 4.2 miles of I-24. We demonstrate the scalability of the proposed reconciliation pipeline to process high-volume data on a daily basis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
336,564
2411.05750
On Differentially Private String Distances
Given a database of bit strings $A_1,\ldots,A_m\in \{0,1\}^n$, a fundamental data structure task is to estimate the distances between a given query $B\in \{0,1\}^n$ and all the strings in the database. In addition, one might further want to ensure the integrity of the database by releasing these distance statistics in a secure manner. In this work, we propose differentially private (DP) data structures for this type of task, with a focus on Hamming and edit distance. On top of the strong privacy guarantees, our data structures are also time- and space-efficient. In particular, our data structure is $\epsilon$-DP against any sequence of queries of arbitrary length, and for any query $B$ such that the maximum distance to any string in the database is at most $k$, we output $m$ distance estimates. Moreover, - For Hamming distance, our data structure answers any query in $\widetilde O(mk+n)$ time and each estimate deviates from the true distance by at most $\widetilde O(k/e^{\epsilon/\log k})$; - For edit distance, our data structure answers any query in $\widetilde O(mk^2+n)$ time and each estimate deviates from the true distance by at most $\widetilde O(k/e^{\epsilon/(\log k \log n)})$. For moderate $k$, both data structures support sublinear query operations. We obtain these results via a novel adaptation of the randomized response technique as a bit flipping procedure, applied to the sketched strings.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
true
506,775
2306.15333
Shoggoth: Towards Efficient Edge-Cloud Collaborative Real-Time Video Inference via Adaptive Online Learning
This paper proposes Shoggoth, an efficient edge-cloud collaborative architecture for boosting inference performance on real-time video of changing scenes. Shoggoth uses online knowledge distillation to improve the accuracy of models suffering from data drift and offloads the labeling process to the cloud, alleviating the constrained resources of edge devices. At the edge, we design adaptive training using small batches to adapt models under limited computing power, and adaptive sampling of training frames for robustness and reduced bandwidth. Evaluations on a realistic dataset show a 15%-20% model accuracy improvement compared to the edge-only strategy and lower network costs than the cloud-only strategy.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
375,987
2209.07946
Transport in reservoir computing
Reservoir computing systems are constructed using a driven dynamical system in which external inputs can alter the evolving states of a system. These paradigms are used in information processing, machine learning, and computation. A fundamental question that needs to be addressed in this framework is the statistical relationship between the input and the system states. This paper provides conditions that guarantee the existence and uniqueness of asymptotically invariant measures for driven systems and shows that their dependence on the input process is continuous when the sets of input and output processes are endowed with the Wasserstein distance. The main tool in these developments is the characterization of those invariant measures as fixed points of naturally defined Foias operators that appear in this context and which are studied extensively in the paper. Those fixed points are obtained by imposing a newly introduced stochastic state contractivity on the driven system that is readily verifiable in examples. Stochastic state contractivity can be satisfied by systems that are not state-contractive, a condition typically invoked to guarantee the echo state property in reservoir computing. As a result, it may actually be satisfied even if the echo state property is not present.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
317,952
2207.11447
Handling Data Heterogeneity in Federated Learning via Knowledge Distillation and Fusion
Federated learning (FL) supports distributed training of a global machine learning model across multiple devices with the help of a central server. However, data heterogeneity across different devices leads to the client model drift issue and results in model performance degradation and poor model fairness. To address the issue, we design the Federated learning with global-local Knowledge Fusion (FedKF) scheme in this paper. The key idea in FedKF is to let the server return the global knowledge to be fused with the local knowledge in each training round so that the local model can be regularized toward the global optimum. Therefore, the client model drift issue can be mitigated. In FedKF, we first propose the active-inactive model aggregation technique that supports a precise global knowledge representation. Then, we propose a data-free knowledge distillation (KD) approach to enable each client model to learn the global knowledge (embedded in the global model) while each client model can still learn the local knowledge (embedded in the local dataset) simultaneously, thereby realizing the global-local knowledge fusion process. The theoretical analysis and intensive experiments demonstrate the superiority of FedKF over previous solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
309,642