Dataset schema (one record = id, title, abstract, 18 category flags in the order below, index):
id: string (length 9-16)
title: string (length 4-278)
abstract: string (length 3-4.08k)
cs.HC: bool (2 classes)
cs.CE: bool (2 classes)
cs.SD: bool (2 classes)
cs.SI: bool (2 classes)
cs.AI: bool (2 classes)
cs.IR: bool (2 classes)
cs.LG: bool (2 classes)
cs.RO: bool (2 classes)
cs.CL: bool (2 classes)
cs.IT: bool (2 classes)
cs.SY: bool (2 classes)
cs.CV: bool (2 classes)
cs.CR: bool (2 classes)
cs.CY: bool (2 classes)
cs.MA: bool (2 classes)
cs.NE: bool (2 classes)
cs.DB: bool (2 classes)
Other: bool (2 classes)
__index_level_0__: int64 (0-541k)
2210.14896
DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models
With recent advancements in diffusion models, users can generate high-quality images by writing text prompts in natural language. However, generating images with desired details requires proper prompts, and it is often unclear how a model reacts to different prompts or what the best prompts are. To help researchers tackle these critical challenges, we introduce DiffusionDB, the first large-scale text-to-image prompt dataset totaling 6.5TB, containing 14 million images generated by Stable Diffusion, 1.8 million unique prompts, and hyperparameters specified by real users. We analyze the syntactic and semantic characteristics of prompts. We pinpoint specific hyperparameter values and prompt styles that can lead to model errors and present evidence of potentially harmful model usage, such as the generation of misinformation. The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models. DiffusionDB is publicly available at: https://poloclub.github.io/diffusiondb.
true
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
326,724
2310.09827
VFLAIR: A Research Library and Benchmark for Vertical Federated Learning
Vertical Federated Learning (VFL) has emerged as a collaborative training paradigm that allows participants with different features of the same group of users to accomplish cooperative training without exposing their raw data or model parameters. VFL has gained significant attention for its research potential and real-world applications in recent years, but still faces substantial challenges, such as defending against various kinds of data inference and backdoor attacks. Moreover, most existing VFL projects are industry-facing and not easily used for keeping track of current research progress. To address this need, we present VFLAIR, an extensible and lightweight VFL framework (available at https://github.com/FLAIR-THU/VFLAIR), which supports VFL training with a variety of models, datasets and protocols, along with standardized modules for comprehensive evaluations of attacks and defense strategies. We also benchmark the performance of 11 attacks and 8 defenses under different communication and model partition settings and draw concrete insights and recommendations on the choice of defense strategies for different practical VFL deployment scenarios.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
399,964
2402.14874
Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation
We propose a straightforward approach called Distillation Contrastive Decoding (DCD) to enhance the reasoning capabilities of Large Language Models (LLMs) during inference. In contrast to previous approaches that relied on smaller amateur models or analysis of hidden state differences, DCD employs Contrastive Chain-of-thought Prompting and advanced distillation techniques, including Dropout and Quantization. This approach effectively addresses the limitations of Contrastive Decoding (CD), which typically requires both an expert and an amateur model, thus increasing computational resource demands. By integrating contrastive prompts with distillation, DCD obviates the need for an amateur model and reduces memory usage. Our evaluations demonstrate that DCD significantly enhances LLM performance across a range of reasoning benchmarks, surpassing both CD and existing methods in the GSM8K and StrategyQA datasets.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
431,891
2010.10295
Fisheye lens distortion correction
A new distortion correction algorithm for a fisheye lens with an equidistant mapping function is considered in the present study. The algorithm loses considerably less data and is more accurate than classical approaches such as the Brown-Conrady model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
201,847
2412.05850
Cooperative SQL Generation for Segmented Databases By Using Multi-functional LLM Agents
The text-to-SQL task aims to automatically yield SQL queries according to user text questions. To address this problem, we propose a Cooperative SQL Generation framework based on Multi-functional Agents (CSMA) through information interaction among large language model (LLM) based agents, each of which separately owns part of the database schema. Inspired by the collaboration in human teamwork, CSMA consists of three stages: 1) Question-related schema collection, 2) Question-corresponding SQL query generation, and 3) SQL query correctness check. In the first stage, agents analyze their respective schemas and communicate with each other to collect the schema information relevant to the question. In the second stage, agents try to generate the corresponding SQL query for the question using the collected information. In the third stage, agents check whether the SQL query is created correctly according to their known information. This interaction-based method allows the question-relevant part of each agent's database schema to be used for SQL generation and checking. Experiments on the Spider and Bird benchmarks demonstrate that CSMA achieves a performance level comparable to the state of the art, while keeping the private data within the individual agents.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
515,000
1907.09438
Multi-Class Lane Semantic Segmentation using Efficient Convolutional Networks
Lane detection plays an important role in a self-driving vehicle. Several studies leverage a semantic segmentation network to extract robust lane features, but few of them can distinguish different types of lanes. In this paper, we focus on the problem of multi-class lane semantic segmentation. Based on the observation that the lane is a small-size and narrow-width object in a road scene image, we propose two techniques, Feature Size Selection (FSS) and Degressive Dilation Block (DD Block). The FSS allows a network to extract thin lane features using appropriate feature sizes. To acquire fine-grained spatial information, the DD Block is made of a series of dilated convolutions with degressive dilation rates. Experimental results show that the proposed techniques provide a clear improvement in accuracy, while they achieve the same or faster inference speed compared to the baseline system and can run in real time on high-resolution images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
139,356
2412.12486
Boosting Long-Context Management via Query-Guided Activation Refilling
Processing long contexts poses a significant challenge for large language models (LLMs) due to their inherent context-window limitations and the computational burden of extensive key-value (KV) activations, which severely impact efficiency. For information-seeking tasks, full context perception is often unnecessary, as a query's information needs can dynamically range from localized details to a global perspective, depending on its complexity. However, existing methods struggle to adapt effectively to these dynamic information needs. In the paper, we propose a method for processing long-context information-seeking tasks via query-guided Activation Refilling (ACRE). ACRE constructs a Bi-layer KV Cache for long contexts, where the layer-1 (L1) cache compactly captures global information, and the layer-2 (L2) cache provides detailed and localized information. ACRE establishes a proxying relationship between the two caches, allowing the input query to attend to the L1 cache and dynamically refill it with relevant entries from the L2 cache. This mechanism integrates global understanding with query-specific local details, thus improving answer decoding. Experiments on a variety of long-context information-seeking datasets demonstrate ACRE's effectiveness, achieving improvements in both performance and efficiency.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
517,886
2201.01190
Two-level Graph Neural Network
Graph Neural Networks (GNNs) are recently proposed neural network structures for the processing of graph-structured data. Due to their employed neighbor aggregation strategy, existing GNNs focus on capturing node-level information and neglect high-level information. Existing GNNs therefore suffer from representational limitations caused by the Local Permutation Invariance (LPI) problem. To overcome these limitations and enrich the features captured by GNNs, we propose a novel GNN framework, referred to as the Two-level GNN (TL-GNN). This merges subgraph-level information with node-level information. Moreover, we provide a mathematical analysis of the LPI problem which demonstrates that subgraph-level information is beneficial to overcoming the problems associated with LPI. A subgraph counting method based on dynamic programming is also proposed, with time complexity O(n^3), where n is the number of nodes of the graph. Experiments show that TL-GNN outperforms existing GNNs and achieves state-of-the-art performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
274,176
1802.00168
Deep Neural Nets with Interpolating Function as Output Activation
We replace the output layer of deep neural nets, typically the softmax function, by a novel interpolating function, and we propose end-to-end training and testing algorithms for this new architecture. Compared to classical neural nets with the softmax function as output activation, the surrogate with an interpolating function as output activation combines advantages of both deep and manifold learning. The new framework demonstrates the following major advantages: First, it is better applicable to cases with insufficient training data. Second, it significantly improves the generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and the code will be made publicly available.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
89,361
2105.07730
The State of Infodemic on Twitter
Following the wave of misinterpreted, manipulated and malicious information growing on the Internet, the misinformation surrounding COVID-19 has become a paramount issue. In the context of the current COVID-19 pandemic, social media posts and platforms are at risk of rumors and misinformation in the face of the serious uncertainty surrounding the virus itself. At the same time, the uncertainty and new nature of COVID-19 mean that other unconfirmed information that may appear "rumored" may be an important indicator of the behavior and impact of this new virus. Twitter, in particular, has taken center stage in this storm, where COVID-19 has been a much-discussed subject. We present an exploratory analysis of the tweets and the users who are involved in spreading misinformation, and then delve into machine learning models and natural language processing techniques to identify whether a tweet contains misinformation.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
235,537
2210.11019
Single Image Super-Resolution Using Lightweight Networks Based on Swin Transformer
Image super-resolution reconstruction is an important task in the field of image processing technology, which restores a low-resolution image to a high-quality image with high resolution. In recent years, deep learning has been applied in the field of image super-resolution reconstruction. With the continuous development of deep neural networks, the quality of the reconstructed images has been greatly improved, but the model complexity has also increased. In this paper, we propose two lightweight models named MSwinSR and UGSwinSR based on the Swin Transformer. The most important structure in MSwinSR is called the Multi-size Swin Transformer Block (MSTB), which mainly contains four parallel multi-head self-attention (MSA) blocks. UGSwinSR combines U-Net and GAN with the Swin Transformer. Both of them can reduce the model complexity, but MSwinSR can reach a higher objective quality, while UGSwinSR can reach a higher perceptual quality. The experimental results demonstrate that MSwinSR increases PSNR by $\mathbf{0.07dB}$ compared with the state-of-the-art model SwinIR, while the number of parameters can be reduced by $\mathbf{30.68\%}$, and the calculation cost can be reduced by $\mathbf{9.936\%}$. UGSwinSR can effectively reduce the amount of calculation of the network, which can be reduced by $\mathbf{90.92\%}$ compared with SwinIR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
325,150
2011.08333
2D+3D Facial Expression Recognition via Discriminative Dynamic Range Enhancement and Multi-Scale Learning
In 2D+3D facial expression recognition (FER), existing methods generate multi-view geometry maps to enhance the depth feature representation. However, this may introduce false estimations due to local plane fitting from incomplete point clouds. In this paper, we propose a novel Map Generation technique from the viewpoint of information theory, to boost the slight 3D expression differences from strong personality variations. First, we examine the HDR depth data to extract the discriminative dynamic range $r_{dis}$, and maximize the entropy of $r_{dis}$ to a global optimum. Then, to prevent the large deformation caused by over-enhancement, we introduce a depth distortion constraint and reduce the complexity from $O(KN^2)$ to $O(KN\tau)$. Furthermore, the constrained optimization is modeled as a $K$-edges maximum weight path problem in a directed acyclic graph, and we solve it efficiently via dynamic programming. Finally, we also design an efficient Facial Attention structure to automatically locate subtle discriminative facial parts for multi-scale learning, and train it with a proposed loss function $\mathcal{L}_{FA}$ without any facial landmarks. Experimental results on different datasets show that the proposed method is effective and outperforms the state-of-the-art 2D+3D FER methods in both FER accuracy and the output entropy of the generated maps.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
206,835
1812.04072
PlaneRCNN: 3D Plane Detection and Reconstruction from a Single Image
This paper proposes a deep neural architecture, PlaneRCNN, that detects and reconstructs piecewise planar surfaces from a single RGB image. PlaneRCNN employs a variant of Mask R-CNN to detect planes with their plane parameters and segmentation masks. PlaneRCNN then jointly refines all the segmentation masks with a novel loss enforcing the consistency with a nearby view during training. The paper also presents a new benchmark with more fine-grained plane segmentations in the ground-truth, in which, PlaneRCNN outperforms existing state-of-the-art methods with significant margins in the plane detection, segmentation, and reconstruction metrics. PlaneRCNN makes an important step towards robust plane extraction, which would have an immediate impact on a wide range of applications including Robotics, Augmented Reality, and Virtual Reality.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
116,132
2412.06717
Toward Non-Invasive Diagnosis of Bankart Lesions with Deep Learning
Bankart lesions, or anterior-inferior glenoid labral tears, are diagnostically challenging on standard MRIs due to their subtle imaging features, often necessitating invasive MRI arthrograms (MRAs). This study develops deep learning (DL) models to detect Bankart lesions on both standard MRIs and MRAs, aiming to improve diagnostic accuracy and reduce reliance on MRAs. We curated a dataset of 586 shoulder MRIs (335 standard, 251 MRAs) from 558 patients who underwent arthroscopy. Ground truth labels were derived from intraoperative findings, the gold standard for Bankart lesion diagnosis. Separate DL models for MRAs and standard MRIs were trained using the Swin Transformer architecture, pre-trained on a public knee MRI dataset. Predictions from sagittal, axial, and coronal views were ensembled to optimize performance. The models were evaluated on a 20% hold-out test set (117 MRIs: 46 MRAs, 71 standard MRIs). Bankart lesions were identified in 31.9% of MRAs and 8.6% of standard MRIs. The models achieved AUCs of 0.87 (86% accuracy, 83% sensitivity, 86% specificity) and 0.90 (85% accuracy, 82% sensitivity, 86% specificity) on standard MRIs and MRAs, respectively. These results match or surpass radiologist performance on our dataset and reported literature metrics. Notably, our model's performance on non-invasive standard MRIs matched or surpassed that of radiologists interpreting MRAs. This study demonstrates the feasibility of using DL to address the diagnostic challenges posed by subtle pathologies like Bankart lesions. Our models demonstrate potential to improve diagnostic confidence, reduce reliance on invasive imaging, and enhance accessibility to care.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
515,346
2406.18464
Bayesian inverse Navier-Stokes problems: joint flow field reconstruction and parameter learning
We formulate and solve a Bayesian inverse Navier-Stokes (N-S) problem that assimilates velocimetry data in order to jointly reconstruct a 3D flow field and learn the unknown N-S parameters, including the boundary position. By hardwiring a generalised N-S problem, and regularising its unknown parameters using Gaussian prior distributions, we learn the most likely parameters in a collapsed search space. The most likely flow field reconstruction is then the N-S solution that corresponds to the learned parameters. We develop the method in the variational setting and use a stabilised Nitsche weak form of the N-S problem that permits the control of all N-S parameters. To regularise the inferred geometry, we use a viscous signed distance field (vSDF) as an auxiliary variable, which is given as the solution of a viscous Eikonal boundary value problem. We devise an algorithm that solves this inverse problem, and numerically implement it using an adjoint-consistent stabilised cut-cell finite element method. We then use this method to reconstruct magnetic resonance velocimetry (flow-MRI) data of a 3D steady laminar flow through a physical model of an aortic arch for two different Reynolds numbers and signal-to-noise ratio (SNR) levels (low/high). We find that the method can accurately i) reconstruct the low SNR data by filtering out the noise/artefacts and recovering flow features that are obscured by noise, and ii) reproduce the high SNR data without overfitting. Although the framework that we develop applies to 3D steady laminar flows in complex geometries, it readily extends to time-dependent laminar and Reynolds-averaged turbulent flows, as well as non-Newtonian (e.g. viscoelastic) fluids.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
468,020
1712.06021
Bendable Cuboid Robot Path Planning with Collision Avoidance using Generalized $L_p$ Norms
Optimal path planning problems for rigid and deformable (bendable) cuboid robots are considered by providing an analytic safety constraint using generalized $L_p$ norms. For regular cuboid robots, level sets of weighted $L_p$ norms generate implicit approximations of their surfaces. For bendable cuboid robots a weighted $L_p$ norm in polar coordinates implicitly approximates the surface boundary through a specified level set. Obstacle volumes, in the environment to navigate within, are presumed to be approximately described as sub-level sets of weighted $L_p$ norms. Using these approximate surface models, the optimal safe path planning problem is reformulated as a two stage optimization problem, where the safety constraint depends on a point on the robot which is closest to the obstacle in the obstacle's distance metric. A set of equality and inequality constraints is derived to replace the closest point problem, which then defines additional analytic constraints on the original path planning problem. Combining all the analytic constraints with logical AND operations leads to a general optimal safe path planning problem. Numerically solving the problem involves conversion to a nonlinear programming problem. Simulations for rigid and bendable cuboid robots verify the proposed method.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
86,815
2104.02214
Intelligent Building Control Systems for Thermal Comfort and Energy-Efficiency: A Systematic Review of Artificial Intelligence-Assisted Techniques
Building operations represent a significant percentage of the total primary energy consumed in most countries due to the proliferation of Heating, Ventilation and Air-Conditioning (HVAC) installations in response to the growing demand for improved thermal comfort. Reducing the associated energy consumption while maintaining comfortable conditions in buildings are conflicting objectives and represent a typical optimization problem that requires intelligent system design. Over the last decade, different methodologies based on Artificial Intelligence (AI) techniques have been deployed to find the sweet spot between energy use in HVAC systems and suitable indoor comfort levels for the occupants. This paper performs a comprehensive and in-depth systematic review of AI-based techniques used for building control systems by assessing the outputs of these techniques and their implementations in the reviewed works, as well as investigating their abilities to improve energy-efficiency while maintaining thermal comfort conditions. This enables a holistic view of (1) the complexities of delivering thermal comfort to users inside buildings in an energy-efficient way, and (2) the associated bibliographic material to assist researchers and experts in the field in tackling such a challenge. The 20 AI tools developed for both energy consumption and comfort control cover functions such as pattern identification and recognition, optimization, and predictive control. Based on the findings of this work, the application of AI technology in building control is a promising and still ongoing area of research, i.e., the performance of AI-based control is not yet completely satisfactory. This is mainly because these algorithms usually need a large amount of high-quality real-world data, which is lacking in the building sector or, more precisely, the energy sector.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
228,632
2109.05687
Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Recent pretrained language models extend from millions to billions of parameters. Thus the need to fine-tune an extremely large pretrained model with a limited training corpus arises in various downstream tasks. In this paper, we propose a straightforward yet effective fine-tuning technique, Child-Tuning, which updates a subset of parameters (called child network) of large pretrained models via strategically masking out the gradients of the non-child network during the backward process. Experiments on various downstream tasks in GLUE benchmark show that Child-Tuning consistently outperforms the vanilla fine-tuning by 1.5~8.6 average score among four different pretrained models, and surpasses the prior fine-tuning techniques by 0.6~1.3 points. Furthermore, empirical results on domain transfer and task transfer show that Child-Tuning can obtain better generalization performance by large margins.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
254,896
1810.09155
A Simple Baseline Algorithm for Graph Classification
Graph classification has recently received a lot of attention from various fields of machine learning e.g. kernel methods, sequential modeling or graph embedding. All these approaches offer promising results with different respective strengths and weaknesses. However, most of them rely on complex mathematics and require heavy computational power to achieve their best performance. We propose a simple and fast algorithm based on the spectral decomposition of graph Laplacian to perform graph classification and get a first reference score for a dataset. We show that this method obtains competitive results compared to state-of-the-art algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
111,002
2011.11263
Evaluating Input Representation for Language Identification in Hindi-English Code Mixed Text
Natural language processing (NLP) techniques have become mainstream in the recent decade. Most of these advances are attributed to the processing of a single language. More recently, with the extensive growth of social media platforms focus has shifted to code-mixed text. The code-mixed text comprises text written in more than one language. People naturally tend to combine local language with global languages like English. To process such texts, current NLP techniques are not sufficient. As a first step, the text is processed to identify the language of the words in the text. In this work, we focus on language identification in code-mixed sentences for Hindi-English mixed text. The task of language identification is formulated as a token classification task. In the supervised setting, each word in the sentence has an associated language label. We evaluate different deep learning models and input representation combinations for this task. Mainly, character, sub-word, and word embeddings are considered in combination with CNN and LSTM based models. We show that sub-word representation along with the LSTM model gives the best results. In general sub-word representations perform significantly better than other input representations. We report the best accuracy of 94.52% using a single layer LSTM model on the standard SAIL ICON 2017 test set.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
207,780
2006.11558
Seq2Seq and Joint Learning Based Unix Command Line Prediction System
Despite being an open-source operating system pioneered in the early 90s, UNIX-based platforms have not been able to garner an overwhelming reception from amateur end users. One of the rationales for the limited popularity of UNIX-based systems is the steep learning curve associated with them due to extensive use of a command line interface instead of the usual interactive graphical user interface. In past years, the majority of insights used to explore the concern have been eminently centered around the notion of utilizing the chronic log history of the user to predict the successive command. The approaches directed at anatomization of this notion are predominantly in accordance with probabilistic inference models. The techniques employed in the past, however, have not been competent enough to address the predicament as legitimately as anticipated. Instead of deploying the usual mechanism of recommendation systems, we have employed a simple yet novel approach of a Seq2seq model by leveraging continuous representations of a self-curated exhaustive Knowledge Base (KB) to enhance the embedding employed in the model. This work describes an assistive, adaptive and dynamic way of enhancing UNIX command line prediction systems. Experiments show that our model achieves accuracy surpassing a mixture of other techniques and the adaptive command line interface mechanisms reported in the past.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
183,283
2101.09634
Chance-Constrained Covariance Steering in a Gaussian Random Field via Successive Convex Programming
The problem of optimizing affine feedback laws that explicitly steer the mean and covariance of an uncertain system state in the presence of a Gaussian random field is considered. Spatially-dependent disturbances are successively approximated with respect to a nominal trajectory by a sequence of jointly Gaussian random vectors. Sequential updates to the nominal control inputs are computed via convex optimization that includes the effect of affine state feedback, the perturbing effects of spatial disturbances, and chance constraints on the closed-loop state and control. The developed method is applied to solve for an affine feedback law to minimize the 99th percentile of $\Delta v$ required to complete an aerocapture mission around a planet with a randomly disturbed atmosphere.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
216,658
2304.09246
Real-Time Helmet Violation Detection Using YOLOv5 and Ensemble Learning
The proper enforcement of motorcycle helmet regulations is crucial for ensuring the safety of motorbike passengers and riders, as roadway cyclists and passengers are not likely to abide by these regulations if no proper enforcement systems are instituted. This paper presents the development and evaluation of a real-time YOLOv5 Deep Learning (DL) model for detecting riders and passengers on motorbikes, identifying whether the detected person is wearing a helmet. We trained the model on 100 videos recorded at 10 fps, each for 20 seconds. Our study demonstrated the applicability of DL models to accurately detect helmet regulation violators even in challenging lighting and weather conditions. We employed several data augmentation techniques in the study to ensure the training data is diverse enough to help build a robust model. The proposed model was tested on 100 test videos and produced an mAP score of 0.5267, ranking 11th on the AI City Track 5 public leaderboard. The use of deep learning techniques for image classification tasks, such as identifying helmet-wearing riders, has enormous potential for improving road safety. The study shows the potential of deep learning models for application in smart cities and enforcing traffic regulations and can be deployed in real-time for city-wide monitoring.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
358,987
2402.11842
CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking
Transformer based code models have impressive performance in many software engineering tasks. However, their effectiveness degrades when symbols are missing or not informative. The reason is that the model may not learn to pay attention to the right correlations/contexts without the help of symbols. We propose a new method to pre-train general code models when symbols are lacking. We observe that in such cases, programs degenerate to something written in a very primitive language. We hence propose to use program analysis to extract contexts a priori (instead of relying on symbols and masked language modeling as in vanilla models). We then leverage a novel attention masking method to only allow the model attending to these contexts, e.g., bi-directional program dependence transitive closures and token co-occurrences. In the meantime, the inherent self-attention mechanism is utilized to learn which of the allowed attentions are more important compared to others. To realize the idea, we enhance the vanilla tokenization and model architecture of a BERT model, construct and utilize attention masks, and introduce a new pre-training algorithm. We pre-train this BERT-like model from scratch, using a dataset of 26 million stripped binary functions with explicit program dependence information extracted by our tool. We apply the model in three downstream tasks: binary similarity, type inference, and malware family classification. Our pre-trained model can improve the SOTAs in these tasks from 53% to 64%, 49% to 60%, and 74% to 94%, respectively. It also substantially outperforms other general pre-training techniques of code understanding models.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
430,601
1101.3220
Decision-Feedback Differential Detection in Impulse-Radio Ultra-Wideband Systems
In this paper we present decision-feedback differential detection (DF-DD) schemes for autocorrelation-based detection in impulse-radio ultra-wideband (IR-UWB) systems, a signaling scheme regarded as a promising candidate in particular for low-complexity wireless sensor networks. To this end, we first discuss ideal noncoherent sequence estimation and approximations thereof based on block-wise multiple-symbol differential detection (MSDD) and the Viterbi algorithm (VA) from the perspective of tree-search/trellis decoding. Exploiting relations well-known from tree-search decoding, we are able to derive the novel decision-feedback differential detection (DF-DD) schemes. A comprehensive comparison with respect to performance and complexity of the presented schemes in a typical IR-UWB scenario reveals---along with novel insights in techniques for complexity reduction of the sphere decoder applied for MSDD---that sorted DF-DD achieves close-to-optimum performance at very low, and in particular constant receiver complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
8,836
1409.1556
Very Deep Convolutional Networks for Large-Scale Image Recognition
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
35,839
1809.03672
Deep Interest Evolution Network for Click-Through Rate Prediction
Click-through rate~(CTR) prediction, whose goal is to estimate the probability that a user clicks, has become one of the core tasks in advertising systems. For a CTR prediction model, it is necessary to capture the latent user interest behind the user behavior data. Besides, considering the changing of the external environment and the internal cognition, user interest evolves over time dynamically. There are several CTR prediction methods for interest modeling, while most of them regard the representation of behavior as the interest directly, and lack special modeling of the latent interest behind the concrete behavior. Moreover, few works consider the changing trend of interest. In this paper, we propose a novel model, named Deep Interest Evolution Network~(DIEN), for CTR prediction. Specifically, we design an interest extractor layer to capture temporal interests from the history behavior sequence. At this layer, we introduce an auxiliary loss to supervise interest extraction at each step. As user interests are diverse, especially in the e-commerce system, we propose an interest evolving layer to capture the interest evolving process that is relative to the target item. At the interest evolving layer, the attention mechanism is embedded into the sequential structure in a novel way, and the effects of relative interests are strengthened during interest evolution. In experiments on both public and industrial datasets, DIEN significantly outperforms the state-of-the-art solutions. Notably, DIEN has been deployed in the display advertisement system of Taobao, and obtained a 20.7\% improvement on CTR.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
107,383
2405.20337
OccSora: 4D Occupancy Generation Models as World Simulators for Autonomous Driving
Understanding the evolution of 3D scenes is important for effective autonomous driving. While conventional methods model scene development with the motion of individual instances, world models emerge as a generative framework to describe the general scene dynamics. However, most existing methods adopt an autoregressive framework to perform next-token prediction, which suffers from inefficiency in modeling long-term temporal evolutions. To address this, we propose a diffusion-based 4D occupancy generation model, OccSora, to simulate the development of the 3D world for autonomous driving. We employ a 4D scene tokenizer to obtain compact discrete spatial-temporal representations for 4D occupancy input and achieve high-quality reconstruction for long-sequence occupancy videos. We then learn a diffusion transformer on the spatial-temporal representations and generate 4D occupancy conditioned on a trajectory prompt. We conduct extensive experiments on the widely used nuScenes dataset with Occ3D occupancy annotations. OccSora can generate 16s videos with authentic 3D layout and temporal consistency, demonstrating its ability to understand the spatial and temporal distributions of driving scenes. With trajectory-aware 4D generation, OccSora has the potential to serve as a world simulator for the decision-making of autonomous driving. Code is available at: https://github.com/wzzheng/OccSora.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
459,301
2008.09863
A Discrete-Time Matching Filtering Differentiator
This paper presents a time discretization of the robust exact filtering differentiator, a sliding mode differentiator coupled to a filter, which provides a suitable approximation to the derivatives of some noisy signals. This proposal takes advantage of the homogeneity of the differentiator, allowing the use of techniques similar to those used for linear systems. As in the original case, the convergence of the robust exact filtering differentiator depends on the bound of a higher-order derivative; nevertheless, this new realization can be implemented with or without the knowledge of such a constant. It is demonstrated that the system's trajectories converge to a neighborhood of the origin with a noise-free input. Finally, comparisons between the behavior of the differentiator with different design parameters are presented.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
192,836
2211.03733
An Iterative Bidirectional Gradient Boosting Approach for CVR Baseline Estimation
This paper presents a novel Iterative Bidirectional Gradient Boosting Model (IBi-GBM) for estimating the baseline of Conservation Voltage Reduction (CVR) programs. In contrast to many existing methods, we treat CVR baseline estimation as a missing data retrieval problem. The approach involves dividing the load and its corresponding temperature profiles into three periods: pre-CVR, CVR, and post-CVR. To restore the missing load profile during the CVR period, the method employs a three-step process. First, a forward-pass GBM is executed using data from the pre-CVR period as inputs. Subsequently, a backward-pass GBM is applied using data from the post-CVR period. The two restored load profiles are reconciled, considering pre-calculated weights derived from forecasting accuracy, and only the leftmost and rightmost points are retained. The newly restored points are then included as inputs for the subsequent iteration. This iterative procedure continues until the original load data in the CVR period is fully restored. We develop IBi-GBM using actual smart meter and Supervisory Control and Data Acquisition (SCADA) data. Our results demonstrate that IBi-GBM exhibits robust performance across various data resolutions and in different seasons and outperforms existing methods by achieving a 1-2% reduction in normalized Root Mean Square Error (nRMSE).
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
329,023
1302.4107
Using Complex Networks to Quantify Consistency in the Use of Words
In this paper we quantify the consistency of word usage in written texts represented by complex networks, where words were taken as nodes, by measuring the degree of preservation of the node neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law that applies to the frequency of use. Consistency correlated positively with the familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for 8 authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied in other tasks, such as emotion recognition.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
22,120
2409.12366
Bilevel Optimization for Real-Time Control with Application to Locomotion Gait Generation
Model Predictive Control (MPC) is a common tool for the control of nonlinear, real-world systems, such as legged robots. However, solving MPC quickly enough to enable its use in real-time is often challenging. One common solution is given by real-time iterations, which does not solve the MPC problem to convergence, but rather close enough to give an approximate solution. In this paper, we extend this idea to a bilevel control framework where a "high-level" optimization program modifies a controller parameter of a "low-level" MPC problem which generates the control inputs and desired state trajectory. We propose an algorithm to iterate on this bilevel program in real-time and provide conditions for its convergence and improvements in stability. We then demonstrate the efficacy of this algorithm by applying it to a quadrupedal robot where the high-level problem optimizes a contact schedule in real-time. We show through simulation that the algorithm can yield improvements in disturbance rejection and optimality, while creating qualitatively new gaits.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
489,546
1502.02179
Optimal Multiuser Scheduling Schemes for Simultaneous Wireless Information and Power Transfer
In this paper, we study the downlink multiuser scheduling problem for systems with simultaneous wireless information and power transfer (SWIPT). We design optimal scheduling algorithms that maximize the long-term average system throughput under different fairness requirements, such as proportional fairness and equal throughput fairness. In particular, the algorithm designs are formulated as non-convex optimization problems which take into account the minimum required average sum harvested energy in the system. The problems are solved by using convex optimization techniques and the proposed optimization framework reveals the tradeoff between the long-term average system throughput and the sum harvested energy in multiuser systems with fairness constraints. Simulation results demonstrate that substantial performance gains can be achieved by the proposed optimization framework compared to existing suboptimal scheduling algorithms from the literature.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
40,007
2002.01322
Training Keyword Spotters with Limited and Synthesized Speech Data
With the rise of low power speech-enabled devices, there is a growing demand to quickly produce models for recognizing arbitrary sets of keywords. As with many machine learning tasks, one of the most challenging parts in the model creation process is obtaining a sufficient amount of training data. In this paper, we explore the effectiveness of synthesized speech data in training small, spoken term detection models of around 400k parameters. Instead of training such models directly on the audio or low level features such as MFCCs, we use a pre-trained speech embedding model trained to extract useful features for keyword spotting models. Using this speech embedding, we show that a model which detects 10 keywords when trained on only synthetic speech is equivalent to a model trained on over 500 real examples. We also show that a model without our speech embeddings would need to be trained on over 4000 real examples to reach the same accuracy.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
162,626
1812.04128
Probabilistic Model Checking of Robots Deployed in Extreme Environments
Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robot's safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at pre-mission stage and during runtime. Two novel estimators based on conservative Bayesian inference and imprecise probability model with sets of priors are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
116,146
1905.10289
MatchZoo: A Learning, Practicing, and Developing System for Neural Text Matching
Text matching is the core problem in many natural language processing (NLP) tasks, such as information retrieval, question answering, and conversation. Recently, deep learning technology has been widely adopted for text matching, making neural text matching a new and active research domain. With a large number of neural matching models emerging rapidly, it becomes more and more difficult for researchers, especially newcomers, to learn and understand these new models. Moreover, it is usually difficult to try these models due to the tedious data pre-processing, complicated parameter configuration, and massive optimization tricks, not to mention the occasional unavailability of public code. Finally, for researchers who want to develop new models, it is also not an easy task to implement a neural text matching model from scratch, and to compare it with a bunch of existing models. In this paper, therefore, we present a novel system, namely MatchZoo, to facilitate the learning, practicing and designing of neural text matching models. The system consists of a powerful matching library and a user-friendly and interactive studio, which can help researchers: 1) to learn state-of-the-art neural text matching models systematically, 2) to train, test and apply these models with simple configurable steps; and 3) to develop their own models with rich APIs and assistance.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
132,009
2204.13730
Direct Air-to-Underwater Optical Wireless Communication: Statistical Characterization and Outage Performance
In general, a buoy relay is used to connect the underwater communication to the terrestrial network over a radio or optical wireless communication (OWC) link. The use of relay deployment may pose security and deployment issues. This paper investigates the feasibility of direct air-to-underwater (A2UW) communication from an over-the-sea OWC system to an underwater submarine without deploying a relaying node. We analyze the statistical performance of the direct transmission over the combined channel fading effect of atmospheric turbulence, random fog, air-to-water interface, oceanic turbulence, and pointing errors. We develop novel analytical expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the resultant signal-to-noise ratio (SNR) in terms of bivariate Meijer-G and Fox-H functions. We use the derived statistical results to analyze the system performance by providing exact and asymptotic results of the outage probability in terms of system parameters. We use computer simulations to demonstrate the performance of direct A2UW transmissions compared to the relay-assisted system.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
293,915
2308.14087
A comprehensive review on Plant Leaf Disease detection using Deep learning
Leaf disease is a common fatal disease for plants. Early diagnosis and detection are necessary in order to improve the prognosis of leaf diseases affecting plants. For predicting leaf disease, several automated systems have already been developed using different plant pathology imaging modalities. This paper provides a systematic review of the literature on leaf disease-based models for the diagnosis of various plant leaf diseases via deep learning. The advantages and limitations of different deep learning models including Vision Transformer (ViT), Deep convolutional neural network (DCNN), Convolutional neural network (CNN), Residual Skip Network-based Super-Resolution for Leaf Disease Detection (RSNSR-LDD), Disease Detection Network (DDN), and YOLO (You only look once) are described in this review. The review also shows that the studies related to leaf disease detection applied different deep learning models to a number of publicly available datasets. For comparing the performance of the models, different metrics such as accuracy, precision, recall, etc. were used in the existing studies.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
388,184
2410.07505
CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression
Post-Training Quantization (PTQ) is an effective technique for compressing Large Language Models (LLMs). While many studies focus on quantizing both weights and activations, it is still a challenge to maintain the accuracy of LLMs after quantizing activations. To investigate the primary cause, we extend the concept of kernel from linear algebra to quantization functions to define a new term, "quantization kernel", which refers to the set of elements in activations that are quantized to zero. Through quantitative analysis of the quantization kernel, we find that these elements are crucial for maintaining the accuracy of quantized LLMs. As the quantization kernel decreases, the precision of quantized LLMs increases. If the quantization kernel proportion is kept below 19% for OPT models and below 1% for LLaMA models, the precision loss from quantizing activations to INT8 becomes negligible. Motivated by the goal of developing a quantization method with a small quantization kernel, we propose CrossQuant: a simple yet effective method for quantizing activations. CrossQuant cross-quantizes elements using row and column-wise absolute maximum vectors, achieving a quantization kernel of approximately 16% for OPT models and less than 0.1% for LLaMA models. Experimental results on LLMs (LLaMA, OPT) ranging from 6.7B to 70B parameters demonstrate that CrossQuant improves or maintains perplexity and accuracy in language modeling, zero-shot, and few-shot tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
496,655
2310.05719
Transformer Fusion with Optimal Transport
Fusion is a technique for merging multiple independently-trained neural networks in order to combine their capabilities. Past attempts have been restricted to the case of fully-connected, convolutional, and residual networks. This paper presents a systematic approach for fusing two or more transformer-based networks exploiting Optimal Transport to (soft-)align the various architectural components. We flesh out an abstraction for layer alignment, that can generalize to arbitrary architectures - in principle - and we apply this to the key ingredients of Transformers such as multi-head self-attention, layer-normalization, and residual connections, and we discuss how to handle them via various ablation studies. Furthermore, our method allows the fusion of models of different sizes (heterogeneous fusion), providing a new and efficient way to compress Transformers. The proposed approach is evaluated on both image classification tasks via Vision Transformer and natural language modeling tasks using BERT. Our approach consistently outperforms vanilla fusion, and, after a surprisingly short finetuning, also outperforms the individual converged parent models. In our analysis, we uncover intriguing insights about the significant role of soft alignment in the case of Transformers. Our results showcase the potential of fusing multiple Transformers, thus compounding their expertise, in the budding paradigm of model fusion and recombination. Code is available at https://github.com/graldij/transformer-fusion.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
398,266
2403.14626
ODTFormer: Efficient Obstacle Detection and Tracking with Stereo Cameras Based on Transformer
Obstacle detection and tracking represent a critical component in robot autonomous navigation. In this paper, we propose ODTFormer, a Transformer-based model to address both obstacle detection and tracking problems. For the detection task, our approach leverages deformable attention to construct a 3D cost volume, which is decoded progressively in the form of voxel occupancy grids. We further track the obstacles by matching the voxels between consecutive frames. The entire model can be optimized in an end-to-end manner. Through extensive experiments on DrivingStereo and KITTI benchmarks, our model achieves state-of-the-art performance in the obstacle detection task. We also report comparable accuracy to state-of-the-art obstacle tracking models while requiring only a fraction of their computation cost, typically ten-fold to twenty-fold less. The code and model weights will be publicly released.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
440,164
2409.01175
Logit Scaling for Out-of-Distribution Detection
The safe deployment of machine learning and AI models in open-world settings hinges critically on the ability to detect out-of-distribution (OOD) data accurately, data samples that contrast vastly from what the model was trained with. Current approaches to OOD detection often require further training the model, and/or statistics about the training data which may no longer be accessible. Additionally, many existing OOD detection methods struggle to maintain performance when transferred across different architectures. Our research tackles these issues by proposing a simple, post-hoc method that does not require access to the training data distribution, keeps a trained network intact, and holds strong performance across a variety of architectures. Our method, Logit Scaling (LTS), as the name suggests, simply scales the logits in a manner that effectively distinguishes between in-distribution (ID) and OOD samples. We tested our method on benchmarks across various scales, including CIFAR-10, CIFAR-100, ImageNet and OpenOOD. The experiments cover 3 ID and 14 OOD datasets, as well as 9 model architectures. Overall, we demonstrate state-of-the-art performance, robustness and adaptability across different architectures, paving the way towards a universally applicable solution for advanced OOD detection.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
485,243
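Relating to the Logit Scaling record above: the abstract does not spell out the exact scaling rule, so the snippet below is only a hedged sketch of a generic post-hoc, logit-only OOD score (a temperature-scaled log-sum-exp); the name ood_score_from_logits and the default temperature are assumptions, not the paper's method.

import numpy as np

def ood_score_from_logits(logits, temperature=10.0):
    # Generic post-hoc score computed from logits alone; higher values suggest
    # in-distribution inputs. Computed as a numerically stable log-sum-exp of
    # the temperature-scaled logits.
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max(axis=-1, keepdims=True)
    return temperature * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))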
2501.18315
Surface Defect Identification using Bayesian Filtering on a 3D Mesh
This paper presents a CAD-based approach for automated surface defect detection. We leverage the a-priori knowledge embedded in a CAD model and integrate it with point cloud data acquired from commercially available stereo and depth cameras. The proposed method first transforms the CAD model into a high-density polygonal mesh, where each vertex represents a state variable in 3D space. Subsequently, a weighted least squares algorithm is employed to iteratively estimate the state of the scanned workpiece based on the captured point cloud measurements. This framework offers the potential to incorporate information from diverse sensors into the CAD domain, facilitating a more comprehensive analysis. Preliminary results demonstrate promising performance, with the algorithm achieving convergence to a sub-millimeter standard deviation in the region of interest using only approximately 50 point cloud samples. This highlights the potential of utilising commercially available stereo cameras for high-precision quality control applications.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
528,656
1911.08551
Prediction Focused Topic Models for Electronic Health Records
Electronic Health Record (EHR) data can be represented as discrete counts over a high dimensional set of possible procedures, diagnoses, and medications. Supervised topic models present an attractive option for incorporating EHR data as features into a prediction problem: given a patient's record, we estimate a set of latent factors that are predictive of the response variable. However, existing methods for supervised topic modeling struggle to balance prediction quality and coherence of the latent factors. We introduce a novel approach, the prediction-focused topic model, that uses the supervisory signal to retain only features that improve, or do not hinder, prediction performance. By removing features with irrelevant signal, the topic model is able to learn task-relevant, interpretable topics. We demonstrate on an EHR dataset and a movie review dataset that compared to existing approaches, prediction-focused topic models are able to learn much more coherent topics while maintaining competitive predictions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
154,215
1508.02138
A Generalized Multiscale Finite Element Method for Poroelasticity Problems II: Nonlinear Coupling
In this paper, we consider the numerical solution of some nonlinear poroelasticity problems that are of Biot type and develop a general algorithm for solving nonlinear coupled systems. We discuss the difficulties associated with flow and mechanics in heterogeneous media with nonlinear coupling. The central issue is how to handle the nonlinearities and the multiscale nature of the media. To compute an efficient numerical solution we develop and implement a Generalized Multiscale Finite Element Method (GMsFEM) that solves nonlinear problems on a coarse grid by constructing local multiscale basis functions and treating part of the nonlinearity locally as a parametric value. After linearization with a Picard Iteration, the procedure begins with construction of multiscale bases for both displacement and pressure in each coarse block by treating the staggered nonlinearity as a parametric value. Using a snapshot space and local spectral problems, we construct an offline basis of reduced dimension. From here an online, parametric dependent, space is constructed. Finally, after multiplying by a multiscale partition of unity, the multiscale basis is constructed and the coarse grid problem can then be solved for arbitrary forcing and boundary conditions. We implement this algorithm on a geometry with a linear and nonlinear pressure dependent permeability field and compute the error between the multiscale solution and the fine-scale solutions.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
45,867
1809.07615
Lessons learned in multilingual grounded language learning
Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
108,314
2304.12778
Loss- and Reward-Weighting for Efficient Distributed Reinforcement Learning
This paper introduces two learning schemes for distributed agents in Reinforcement Learning (RL) environments, namely Reward-Weighted (R-Weighted) and Loss-Weighted (L-Weighted) gradient merger. The R/L weighted methods replace standard practices for training multiple agents, such as summing or averaging the gradients. The core of our methods is to scale the gradient of each actor based on how high the reward (for R-Weighted) or the loss (for L-Weighted) is compared to the other actors. During training, each agent operates in differently initialized versions of the same environment, which gives different gradients from different actors. In essence, the R-Weights and L-Weights of each agent inform the other agents of its potential, which in turn indicates which environment should be prioritized for learning. This approach of distributed learning is possible because environments that yield higher rewards, or low losses, have more critical information than environments that yield lower rewards or higher losses. We empirically demonstrate that the R-Weighted methods outperform the state-of-the-art in multiple RL environments.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
360,348
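For the reward/loss-weighted gradient-merger record above, a minimal sketch of the general idea (scaling each actor's gradient by how its reward compares to the others before merging) is given below; the shifting-and-normalizing scheme is an assumption for illustration and may differ from the weighting actually used in the paper.

import numpy as np

def reward_weighted_merge(gradients, rewards, eps=1e-8):
    # gradients: list of per-actor gradient arrays with identical shapes;
    # rewards: one scalar episode reward per actor. Actors with relatively
    # higher rewards contribute more to the merged gradient.
    rewards = np.asarray(rewards, dtype=float)
    weights = rewards - rewards.min() + eps
    weights = weights / weights.sum()
    return sum(w * g for w, g in zip(weights, gradients))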
1807.02740
Data-driven Upsampling of Point Clouds
High quality upsampling of sparse 3D point clouds is critically useful for a wide range of geometric operations such as reconstruction, rendering, meshing, and analysis. In this paper, we propose a data-driven algorithm that enables an upsampling of 3D point clouds without the need for hard-coded rules. Our approach uses a deep network with Chamfer distance as the loss function, capable of learning the latent features in point clouds belonging to different object categories. We evaluate our algorithm across different amplification factors, with upsampling learned and performed on objects belonging to the same category as well as different categories. We also explore the desirable characteristics of input point clouds as a function of the distribution of the point samples. Finally, we demonstrate the performance of our algorithm in single-category training versus multi-category training scenarios. The final proposed model is compared against a baseline, optimization-based upsampling method. Results indicate that our algorithm is capable of generating more uniform and accurate upsamplings.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
102,337
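Since the upsampling record above names the Chamfer distance as its loss, a short worked example of that distance (brute-force, squared-Euclidean form) follows; the helper name chamfer_distance is illustrative.

import numpy as np

def chamfer_distance(a, b):
    # a: (N, 3) and b: (M, 3) point sets; O(N*M) pairwise computation.
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # (N, M) squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.random.rand(128, 3)
b = np.random.rand(256, 3)
print(chamfer_distance(a, b))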
1602.04032
A Truthful Mechanism with Biparameter Learning for Online Crowdsourcing
We study a problem of allocating divisible jobs, arriving online, to workers in a crowdsourcing setting which involves learning two parameters of strategically behaving workers. Each job is split into a certain number of tasks that are then allocated to workers. Each arriving job has to be completed within a deadline and each task has to be completed satisfying an upper bound on probability of failure. The job population is homogeneous while the workers are heterogeneous in terms of costs, completion times, and times to failure. The job completion time and time to failure of each worker are stochastic with fixed but unknown means. The requester is faced with the challenge of learning two separate parameters of each (strategically behaving) worker simultaneously, namely, the mean job completion time and the mean time to failure. The time to failure of a worker depends on the duration of the task handled by the worker. Assuming non-strategic workers to start with, we solve this biparameter learning problem by applying the Robust UCB algorithm. Then, we non-trivially extend this algorithm to the setting where the workers are strategic about their costs. Our proposed mechanism is dominant strategy incentive compatible and ex-post individually rational with asymptotically optimal regret performance.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
52,080
2409.10959
Leveraging Reviewer Experience in Code Review Comment Generation
Modern code review is a ubiquitous software quality assurance process aimed at identifying potential issues within newly written code. Despite its effectiveness, the process demands large amounts of effort from the human reviewers involved. To help alleviate this workload, researchers have trained deep learning models to imitate human reviewers in providing natural language code reviews. Formally, this task is known as code review comment generation. Prior work has demonstrated improvements in this task by leveraging machine learning techniques and neural models, such as transfer learning and the transformer architecture. However, the quality of the model-generated reviews remains sub-optimal due to the quality of the open-source code review data used in model training. This is in part due to the data obtained from open-source projects where code reviews are conducted in a public forum, and reviewers possess varying levels of software development experience, potentially affecting the quality of their feedback. To account for this variation, we propose a suite of experience-aware training methods that utilise the reviewers' past authoring and reviewing experiences as signals for review quality. Specifically, we propose experience-aware loss functions (ELF), which use the reviewers' authoring and reviewing ownership of a project as weights in the model's loss function. Through this method, experienced reviewers' code reviews yield larger influence over the model's behaviour. Compared to the SOTA model, ELF was able to generate higher quality reviews in terms of accuracy, informativeness, and comment types generated. The key contribution of this work is the demonstration of how traditional software engineering concepts such as reviewer experience can be integrated into the design of AI-based automated code review models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
488,950
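For the experience-aware loss (ELF) record above, a hedged PyTorch sketch of one way to weight a per-sample loss by reviewer ownership follows; the normalization and the function name experience_weighted_loss are assumptions, and the paper's exact ELF formulation may differ (e.g., token-level losses for generation).

import torch
import torch.nn.functional as F

def experience_weighted_loss(logits, targets, ownership):
    # logits: (N, C), targets: (N,) class indices, ownership: (N,) values in [0, 1]
    # giving each sample's reviewer authoring/reviewing ownership of the project.
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = ownership / (ownership.sum() + 1e-8)
    return (weights * per_sample).sum()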
2012.02310
BoxInst: High-Performance Instance Segmentation with Box Annotations
We present a high-performance method that can achieve mask-level instance segmentation with only bounding-box annotations for training. While this setting has been studied in the literature, here we show significantly stronger performance with a simple design (e.g., dramatically improving previous best reported mask AP of 21.1% in Hsu et al. (2019) to 31.6% on the COCO dataset). Our core idea is to redesign the loss of learning masks in instance segmentation, with no modification to the segmentation network itself. The new loss functions can supervise the mask training without relying on mask annotations. This is made possible with two loss terms, namely, 1) a surrogate term that minimizes the discrepancy between the projections of the ground-truth box and the predicted mask; 2) a pairwise loss that can exploit the prior that proximal pixels with similar colors are very likely to have the same category label. Experiments demonstrate that the redesigned mask loss can yield surprisingly high-quality instance masks with only box annotations. For example, without using any mask annotations, with a ResNet-101 backbone and 3x training schedule, we achieve 33.2% mask AP on COCO test-dev split (vs. 39.1% of the fully supervised counterpart). Our excellent experiment results on COCO and Pascal VOC indicate that our method dramatically narrows the performance gap between weakly and fully supervised instance segmentation. Code is available at: https://git.io/AdelaiDet
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
209,726
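As an illustration of the projection term described in the BoxInst record above, a hedged PyTorch sketch follows: the x/y projections of a predicted soft mask are compared against those of the ground-truth box mask with a dice-style loss; details (and the pairwise color-similarity term) differ in the actual method.

import torch

def box_projection_loss(pred_mask, gt_box_mask, eps=1e-6):
    # pred_mask: (H, W) soft mask in [0, 1]; gt_box_mask: (H, W) binary box mask.
    def dice(p, g):
        return 1 - (2 * (p * g).sum() + eps) / ((p * p).sum() + (g * g).sum() + eps)
    loss_x = dice(pred_mask.max(dim=0).values, gt_box_mask.max(dim=0).values)  # column-wise projection
    loss_y = dice(pred_mask.max(dim=1).values, gt_box_mask.max(dim=1).values)  # row-wise projection
    return loss_x + loss_y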
2410.01910
Is uniform expressivity too restrictive? Towards efficient expressivity of graph neural networks
Uniform expressivity guarantees that a Graph Neural Network (GNN) can express a query without the parameters depending on the size of the input graphs. This property is desirable in applications in order to have a number of trainable parameters that is independent of the size of the input graphs. Uniform expressivity of the two variable guarded fragment (GC2) of first order logic is a well-celebrated result for Rectified Linear Unit (ReLU) GNNs [Barcelo & al., 2020]. In this article, we prove that uniform expressivity of GC2 queries is not possible for GNNs with a wide class of Pfaffian activation functions (including the sigmoid and tanh), answering a question formulated by [Grohe, 2021]. We also show that despite these limitations, many of those GNNs can still efficiently express GC2 queries in a way that the number of parameters remains logarithmic in the maximal degree of the input graphs. Furthermore, we demonstrate that a log-log dependency on the degree is achievable for a certain choice of activation function. This shows that uniform expressivity can be successfully relaxed by covering large graphs appearing in practical applications. Our experiments illustrate that our theoretical estimates hold in practice.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
494,007
1906.05721
Visual Wake Words Dataset
The emergence of Internet of Things (IoT) applications requires intelligence on the edge. Microcontrollers provide a low-cost compute platform to deploy intelligent IoT applications using machine learning at scale, but have extremely limited on-chip memory and compute capability. To deploy computer vision on such devices, we need tiny vision models that fit within a few hundred kilobytes of memory footprint in terms of peak usage and model size on device storage. To facilitate the development of microcontroller friendly models, we present a new dataset, Visual Wake Words, that represents a common microcontroller vision use-case of identifying whether a person is present in the image or not, and provides a realistic benchmark for tiny vision models. Within a limited memory footprint of 250 KB, several state-of-the-art mobile models achieve accuracy of 85-90% on the Visual Wake Words dataset. We anticipate the proposed dataset will advance the research on tiny vision models that can push the pareto-optimal boundary in terms of accuracy versus memory usage for microcontroller applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
135,100
1805.08297
Character-based Neural Networks for Sentence Pair Modeling
Sentence pair modeling is critical for many NLP tasks, such as paraphrase identification, semantic textual similarity, and natural language inference. Most state-of-the-art neural models for these tasks rely on pretrained word embedding and compose sentence-level semantics in varied ways; however, few works have attempted to verify whether we really need pretrained embeddings in these tasks. In this paper, we study how effective subword-level (character and character n-gram) representations are in sentence pair modeling. Though it is well-known that subword models are effective in tasks with single sentence input, including language modeling and machine translation, they have not been systematically studied in sentence pair modeling tasks where the semantic and string similarities between texts matter. Our experiments show that subword models without any pretrained word embedding can achieve new state-of-the-art results on two social media datasets and competitive results on news data for paraphrase identification.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
98,092
2308.09829
Learning from A Single Graph is All You Need for Near-Shortest Path Routing in Wireless Networks
We propose a learning algorithm for local routing policies that needs only a few data samples obtained from a single graph while generalizing to all random graphs in a standard model of wireless networks. We thus solve the all-pairs near-shortest path problem by training deep neural networks (DNNs) that efficiently and scalably learn routing policies that are local, i.e., they only consider node states and the states of neighboring nodes. Remarkably, one of these DNNs we train learns a policy that exactly matches the performance of greedy forwarding; another generally outperforms greedy forwarding. Our algorithm design exploits network domain knowledge in several ways: first, in the selection of input features and, second, in the selection of a ``seed graph'' and subsamples from its shortest paths. Leveraging domain knowledge provides a theoretical explanation of why the seed graph and node subsampling suffice for learning that is efficient, scalable, and generalizable. Simulation-based results on uniform random graphs with diverse sizes and densities empirically corroborate that using samples generated from a few routing paths in a modest-sized seed graph quickly learns a model that is generalizable across (almost) all random graphs in the wireless network model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
386,437
2401.11492
Edge-Enabled Real-time Railway Track Segmentation
Accurate and rapid railway track segmentation can assist automatic train driving and is a key step in early warning to fixed or moving obstacles on the railway track. However, certain existing algorithms tailored for track segmentation often struggle to meet the requirements of real-time and efficiency on resource-constrained edge devices. Considering this challenge, we propose an edge-enabled real-time railway track segmentation algorithm, which is tailored to edge applications by optimizing the network structure and quantizing the model after training. Initially, Ghost convolution is introduced to reduce the complexity of the backbone, thereby achieving the extraction of key information of the region of interest at a lower cost. To further reduce the model complexity and calculation, a new lightweight detection head is proposed to achieve the best balance between accuracy and efficiency. Subsequently, we introduce quantization techniques to map the model's floating-point weights and activation values into lower bit-width fixed-point representations, reducing computational demands and memory footprint, ultimately accelerating the model's inference. Finally, we draw inspiration from GPU parallel programming principles to expedite the pre-processing and post-processing stages of the algorithm through parallel processing. The approach is evaluated with the public and challenging dataset RailSem19 and tested on Jetson Nano. Experimental results demonstrate that our enhanced algorithm achieves an accuracy level of 83.3% while achieving a real-time inference rate of 25 frames per second when the input size is 480x480, thereby effectively meeting the requirements for real-time and high-efficiency operation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
423,027
2105.00329
ECNNs: Ensemble Learning Methods for Improving Planar Grasp Quality Estimation
We present an ensemble learning methodology that combines multiple existing robotic grasp synthesis algorithms and obtain a success rate that is significantly better than the individual algorithms. The methodology treats the grasping algorithms as "experts" providing grasp "opinions". An Ensemble Convolutional Neural Network (ECNN) is trained using a Mixture of Experts (MOE) model that integrates these opinions and determines the final grasping decision. The ECNN introduces minimal computational cost overhead, and the network can virtually run as fast as the slowest expert. We test this architecture using open-source algorithms in the literature by adopting GQCNN 4.0, GGCNN and a custom variation of GGCNN as experts and obtained a 6% increase in the grasp success on the Cornell Dataset compared to the best-performing individual algorithm. The performance of the method is also demonstrated using a Franka Emika Panda arm.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
233,178
2302.10895
CQnet: convex-geometric interpretation and constraining neural-network trajectories
We introduce CQnet, a neural network with origins in the CQ algorithm for solving convex split-feasibility problems and forward-backward splitting. CQnet's trajectories are interpretable as particles that are tracking a changing constraint set via its point-to-set distance function while being elements of another constraint set at every layer. More than just a convex-geometric interpretation, CQnet accommodates learned and deterministic constraints that may be sample or data-specific and are satisfied by every layer and the output. Furthermore, the states in CQnet progress toward another constraint set at every layer. We provide proof of stability/nonexpansiveness with minimal assumptions. The combination of constraint handling and stability put forward CQnet as a candidate for various tasks where prior knowledge exists on the network states or output.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
347,001
2102.02970
Optimizing RRH Placement Under a Noise-Limited Point-to-Point Wireless Backhaul
In this paper, we study the deployment decisions and location optimization for the remote radio heads (RRHs) in coordinated distributed networks in the presence of a wireless backhaul. We implement a scheme where the RRHs use zero-forcing beamforming (ZF-BF) for the access channel to jointly serve multiple users, while on the backhaul the RRHs are connected to their central units (CUs) through point-to-point wireless links. We investigate the effect of this scheme on the deployment of the RRHs and on the resulting achievable spectral efficiency over the access channel (under a backhaul outage constraint). Our results show that even for noise-limited backhaul links, a large bandwidth must be allocated to the backhaul to allow freely distributing the RRHs in the network. Additionally, our results show that distributing the available antennas on more RRHs is favored as compared to a more co-located antenna system. This motivates further works to study the efficiency of wireless backhaul schemes and their effect on the performance of coordinated distributed networks with joint transmission.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
218,584
1706.03847
Recurrent Neural Networks with Top-k Gains for Session-based Recommendations
RNNs have been shown to be excellent models for sequential data and in particular for data that is generated by users in a session-based manner. The use of RNNs provides impressive performance benefits over classical methods in session-based recommendations. In this work we introduce novel ranking loss functions tailored to RNNs in the recommendation setting. The improved performance of these losses over alternatives, along with further tricks and refinements described in this work, allow for an overall improvement of up to 35% in terms of MRR and Recall@20 over previous session-based RNN solutions and up to 53% over classical collaborative filtering approaches. Unlike data augmentation-based improvements, our method does not increase training times significantly. We further demonstrate the performance gain of the RNN over baselines in an online A/B test.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
75,231
1506.05937
A Tight Runtime Analysis of the $(1+(\lambda, \lambda))$ Genetic Algorithm on OneMax
Understanding how crossover works is still one of the big challenges in evolutionary computation research, and making our understanding precise and proven by mathematical means might be an even bigger one. As one of the few examples where crossover is provably useful, the $(1+(\lambda, \lambda))$ Genetic Algorithm (GA) was proposed recently in [Doerr, Doerr, Ebel: TCS 2015]. Using the fitness level method, the expected optimization time on general OneMax functions was analyzed and an $O(\max\{n\log(n)/\lambda, \lambda n\})$ bound was proven for any offspring population size $\lambda \in [1..n]$. We improve this work in several ways, leading to sharper bounds and a better understanding of how the use of crossover speeds up the runtime in this algorithm. We first improve the upper bound on the runtime to $O(\max\{n\log(n)/\lambda, n\lambda \log\log(\lambda)/\log(\lambda)\})$. This improvement is made possible by observing that in the parallel generation of $\lambda$ offspring via crossover (but not mutation), the best of these often is better than the expected value, and hence several fitness levels can be gained in one iteration. We then present the first lower bound for this problem. It matches our upper bound for all values of $\lambda$. This allows us to determine the asymptotically optimal value for the population size. It is $\lambda = \Theta(\sqrt{\log(n)\log\log(n)/\log\log\log(n)})$, which gives an optimization time of $\Theta(n \sqrt{\log(n)\log\log\log(n)/\log\log(n)})$. Hence the improved runtime analysis gives a better runtime guarantee along with a better suggestion for the parameter $\lambda$. We finally give a tail bound for the upper tail of the runtime distribution, which shows that the actual runtime exceeds our runtime guarantee by a factor of $(1+\delta)$ with probability $O((n/\lambda^2)^{-\delta})$ only.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
44,359
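As a small numerical companion to the runtime record above, the snippet below evaluates the quoted asymptotically optimal offspring population size lambda = Theta(sqrt(log(n) * loglog(n) / logloglog(n))); constants hidden by the Theta are ignored, so this is only an order-of-magnitude guide valid for sufficiently large n.

import math

def optimal_lambda(n):
    # Only meaningful for n large enough that log(log(log(n))) > 0 (roughly n > 16).
    ln = math.log(n)
    return math.sqrt(ln * math.log(ln) / math.log(math.log(ln)))

print(optimal_lambda(10**6))  # roughly 6, ignoring constants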
1803.08086
Influence of augmented humans in online interactions during voting events
The advent of the digital era provided a fertile ground for the development of virtual societies, complex systems influencing real-world dynamics. Understanding online human behavior and its relevance beyond the digital boundaries is still an open challenge. Here we show that online social interactions during a massive voting event can be used to build an accurate map of real-world political parties and electoral ranks. We provide evidence that information flow and collective attention are often driven by a special class of highly influential users, that we name "augmented humans", who exploit thousands of automated agents, also known as bots, for enhancing their online influence. We show that augmented humans generate deep information cascades, to the same extent of news media and other broadcasters, while they uniformly infiltrate across the full range of identified groups. Digital augmentation represents the cyber-physical counterpart of the human desire to acquire power within social systems.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
93,191
2301.02021
Dynamic Sizing of Frequency Control Ancillary Service Requirements for a Philippine Grid
Sizing frequency control ancillary service (FCAS) requirements is crucial for the reliable operation of power systems amid a continuous influx of variable renewable energy (VRE) generation. Reserve sizing is especially pertinent for the Philippine grids due to an expected transition to new FCAS classifications established by its Grid Code. In lieu of the existing deterministic formulation, this work proposes a dynamic approach for sizing secondary and tertiary reserves that accounts for the stochasticity and variability of load demand and VRE. We propose a method where historical power imbalances were calculated and clustered according to the time and day of week they occurred. The conditional probabilities of forecast and noise errors were characterized using kernel density estimation. Recursive convolution was performed to obtain the total reserve requirement probability distribution. The method was tested on Visayas grid's historical system operation data and used target reliability levels on the error distributions to size upward and downward reserve needs. Finally, the methodology was extended to demonstrate through a numerical experiment that sizing FCAS at temporal resolutions higher than one-hour, e.g., five-minute, provides the benefit of shrinking the required capacities by as much as 86.2\% compared to current deterministic FCAS sizing.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
339,391
2210.09503
Towards Fair Classification against Poisoning Attacks
Fair classification aims to constrain classification models to achieve equality (in treatment or prediction quality) among different sensitive groups. However, fair classification can be at risk of poisoning attacks that deliberately insert malicious training samples to manipulate the trained classifiers' performance. In this work, we study the poisoning scenario where the attacker can insert a small fraction of samples into training data, with arbitrary sensitive attributes as well as other predictive features. We demonstrate that the fairly trained classifiers can be greatly vulnerable to such poisoning attacks, with a much worse accuracy & fairness trade-off, even when we apply some of the most effective defenses (originally proposed to defend traditional classification tasks). As countermeasures to defend fair classification tasks, we propose a general and theoretically guaranteed framework which accommodates traditional defense methods to fair classification against poisoning attacks. Through extensive experiments, the results validate that the proposed defense framework obtains better robustness in terms of accuracy and fairness than representative baseline methods.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
324,556
2406.11231
Enabling robots to follow abstract instructions and complete complex dynamic tasks
Completing complex tasks in unpredictable settings like home kitchens challenges robotic systems. These challenges include interpreting high-level human commands, such as "make me a hot beverage" and performing actions like pouring a precise amount of water into a moving mug. To address these challenges, we present a novel framework that combines Large Language Models (LLMs), a curated Knowledge Base, and Integrated Force and Visual Feedback (IFVF). Our approach interprets abstract instructions, performs long-horizon tasks, and handles various uncertainties. It utilises GPT-4 to analyse the user's query and surroundings, then generates code that accesses a curated database of functions during execution. It translates abstract instructions into actionable steps. Each step involves generating custom code by employing retrieval-augmented generalisation to pull IFVF-relevant examples from the Knowledge Base. IFVF allows the robot to respond to noise and disturbances during execution. We use coffee making and plate decoration to demonstrate our approach, including components ranging from pouring to drawer opening, each benefiting from distinct feedback types and methods. This novel advancement marks significant progress toward a scalable, efficient robotic framework for completing complex tasks in uncertain environments. Our findings are illustrated in an accompanying video and supported by an open-source GitHub repository (released upon paper acceptance).
false
false
false
false
true
false
true
true
true
false
false
false
false
false
false
false
false
false
464,778
1709.06428
Sensor Assignment Algorithms to Improve Observability while Tracking Targets
We study two sensor assignment problems for multi-target tracking with the goal of improving the observability of the underlying estimator. We consider various measures of the observability matrix as the assignment value function. We first study the general version where the sensors must form teams to track individual targets. If the value function is monotonically increasing and submodular then a greedy algorithm yields a 1/2-approximation. We then study a restricted version where exactly two sensors must be assigned to each target. We present a 1/3-approximation algorithm for this problem which holds for arbitrary value functions (not necessarily submodular or monotone). In addition to approximation algorithms, we also present various properties of observability measures. We show that the inverse of the condition number of the observability matrix is neither monotone nor submodular, but present other measures which are. Specifically, we show that the trace and rank of the symmetric observability matrix are monotone and submodular and the log determinant of the symmetric observability matrix is monotone and submodular when the matrix is non-singular. If the target's motion model is not known, the inverse cannot be computed exactly. Instead, we present a lower bound for distance sensors. In addition to theoretical results, we evaluate our results empirically through simulations.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
81,097
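For the sensor-assignment record above, a hedged sketch of the greedy scheme it alludes to (pick, at each step, the sensor-target pair with the largest marginal gain of the observability-based value function) is given below; the interface value_fn(team, target) and the team_size cap are assumptions for illustration.

def greedy_team_assignment(sensors, targets, value_fn, team_size):
    # Greedy assignment: repeatedly add the feasible (sensor, target) pair with
    # the largest marginal gain; for monotone submodular value functions this
    # kind of greedy yields the 1/2-approximation mentioned in the abstract.
    assignment = {t: [] for t in targets}
    free = set(sensors)
    while free and any(len(team) < team_size for team in assignment.values()):
        best = None
        for s in free:
            for t in targets:
                if len(assignment[t]) >= team_size:
                    continue
                gain = value_fn(assignment[t] + [s], t) - value_fn(assignment[t], t)
                if best is None or gain > best[0]:
                    best = (gain, s, t)
        if best is None:
            break
        _, s, t = best
        assignment[t].append(s)
        free.remove(s)
    return assignment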
2303.13971
Optimal Transport for Offline Imitation Learning
With the advent of large datasets, offline reinforcement learning (RL) is a promising framework for learning good decision-making policies without the need to interact with the real environment. However, offline RL requires the dataset to be reward-annotated, which presents practical challenges when reward engineering is difficult or when obtaining reward annotations is labor-intensive. In this paper, we introduce Optimal Transport Reward labeling (OTR), an algorithm that assigns rewards to offline trajectories, with a few high-quality demonstrations. OTR's key idea is to use optimal transport to compute an optimal alignment between an unlabeled trajectory in the dataset and an expert demonstration to obtain a similarity measure that can be interpreted as a reward, which can then be used by an offline RL algorithm to learn the policy. OTR is easy to implement and computationally efficient. On D4RL benchmarks, we show that OTR with a single demonstration can consistently match the performance of offline RL with ground-truth rewards.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
353,903
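For the OTR record above, a hedged sketch of optimal-transport-based reward labeling follows, again assuming the POT package; the squared-Euclidean cost, uniform marginals and the sign convention are illustrative choices, not necessarily those of the paper.

import ot  # Python Optimal Transport (POT)

def ot_similarity_rewards(trajectory, demo):
    # trajectory: (T, d) unlabeled states; demo: (D, d) expert states.
    # Each trajectory step receives a pseudo-reward derived from the transport
    # cost assigned to it by the optimal coupling with the demonstration.
    cost = ot.dist(trajectory, demo)          # (T, D) pairwise costs
    a, b = ot.unif(len(trajectory)), ot.unif(len(demo))
    plan = ot.emd(a, b, cost)                 # optimal transport plan
    per_step_cost = (plan * cost).sum(axis=1)
    return -per_step_cost                     # lower transport cost -> higher reward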
2408.05896
Scalable recommender system based on factor analysis
Recommender systems have become crucial in the modern digital landscape, where personalized content, products, and services are essential for enhancing user experience. This paper explores statistical models for recommender systems, focusing on crossed random effects models and factor analysis. We extend the crossed random effects model to include random slopes, enabling the capture of varying covariate effects among users and items. Additionally, we investigate the use of factor analysis in recommender systems, particularly for settings with incomplete data. The paper also discusses scalable solutions using the Expectation Maximization (EM) and variational EM algorithms for parameter estimation, highlighting the application of these models to predict user-item interactions effectively.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
479,990
1810.08126
KTAN: Knowledge Transfer Adversarial Network
To reduce the large computation and storage cost of a deep convolutional neural network, knowledge distillation based methods have pioneered the transfer of the generalization ability of a large (teacher) deep network to a light-weight (student) network. However, these methods mostly focus on transferring the probability distribution of the softmax layer in a teacher network and thus neglect the intermediate representations. In this paper, we propose a knowledge transfer adversarial network to better train a student network. Our technique holistically considers both intermediate representations and probability distributions of a teacher network. To transfer the knowledge of intermediate representations, we set high-level teacher feature maps as a target, toward which the student feature maps are trained. Specifically, we arrange a Teacher-to-Student layer to make our framework suitable for various student structures. The intermediate representation helps the student network better understand the transferred generalization as compared to the probability distribution only. Furthermore, we infuse an adversarial learning process by employing a discriminator network, which can fully exploit the spatial correlation of feature maps in training a student network. The experimental results demonstrate that the proposed method can significantly improve the performance of a student network on both image classification and object detection tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
110,761
2112.03010
Optimized Deployment of Unmanned Aerial Vehicles for Wildfire Detection and Monitoring
In recent years, increased wildfires have caused irreversible damage to forest resources worldwide, threatening wildlife and human living conditions. The lack of accurate frontline information in real-time can pose great risks to firefighters. Though a plethora of machine learning algorithms have been developed to detect wildfires using aerial images and videos captured by drones, there is a lack of methods addressing drone deployment. We propose a wildfire rapid response system that optimizes the number and relative positions of drones to achieve full coverage of the whole wildfire area. Trained on the data from historical wildfire events, our model evaluates the possibility of wildfires at different scales and accordingly allocates the resources. It adopts plane geometry to deploy drones while balancing capability and safety with inequality constrained nonlinear programming. The method can flexibly adapt to different terrains and the dynamic extension of the wildfire area. Lastly, the operation cost under extreme wildfire circumstances can be assessed upon the completion of the deployment. We applied our model to the wildfire data collected from eastern Victoria, Australia, and demonstrated its great potential in the real world.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
270,052
1907.11457
Two-hidden-layer Feedforward Neural Networks are Universal Approximators: A Constructive Approach
It is well known that Artificial Neural Networks are universal approximators. The classical result proves that, given a continuous function on a compact set on an n-dimensional space, there exists a one-hidden-layer feedforward network which approximates the function. Such a result proves existence, but it does not provide a method for finding it. In this paper, a constructive approach to the proof of this property is given for the case of two-hidden-layer feedforward networks. This approach is based on an approximation of continuous functions by simplicial maps. Once a triangulation of the space is given, a concrete architecture and set of weights can be obtained. The quality of the approximation depends on the refinement of the covering of the space by simplicial complexes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
139,851
2009.07646
Eating Habits Discovery in Egocentric Photo-streams
Eating habits are learned throughout the early stages of our lives. However, it is not easy to be aware of how our food-related routine affects our healthy living. In this work, we address the unsupervised discovery of nutritional habits from egocentric photo-streams. We build a food-related behavioural pattern discovery model, which discloses nutritional routines from the activities performed throughout the days. To do so, we rely on Dynamic-Time-Warping for the evaluation of similarity among the collected days. Within this framework, we present a simple, but robust and fast novel classification pipeline that outperforms the state-of-the-art on food-related image classification with a weighted accuracy and F-score of 70% and 63%, respectively. Later, we identify days composed of nutritional activities that do not describe the habits of the person as anomalies in the daily life of the user with the Isolation Forest method. Furthermore, we show an application for the identification of food-related scenes when the camera wearer eats in isolation. Results have shown the good performance of the proposed model and its relevance to visualize the nutritional habits of individuals.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
196,004
1905.11075
Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of past history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,325
2407.08238
Integrated User Matching and Pricing in Round-Trip Car-Sharing
Traditional round-trip car rental systems mandate users to return vehicles to their point of origin, limiting the system's adaptability to diverse mobility demands. This constraint often leads to fleet under-utilization and incurs high parking costs for idle vehicles. To address this inefficiency, we propose an N-user matching algorithm designed to facilitate one-way trips within the round-trip rental framework. Our algorithm addresses the joint problem of optimal pricing and user matching through a Two-Stage Integer Linear Programming (ILP)-based formulation. In the first stage, optimal rental prices are determined by setting a risk factor that governs the likelihood of matching a set of N users. The second stage involves maximizing expected profit through a novel ILP-based user-matching formulation. Testing our algorithm on real-world scenarios demonstrates an approximate 35\% increase in demand fulfillment. Additionally, we assess the model's robustness under uncertainty by varying factors such as the risk factor (probability of user ride acceptance at the offered price), cost factor (rental cost-to-fare ratio), and maximum chain length.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
472,080
2405.06907
AIOS Compiler: LLM as Interpreter for Natural Language Programming and Flow Programming of AI Agents
Since their inception, programming languages have trended towards greater readability and lower barriers for programmers. Following this trend, natural language can be a promising type of programming language that provides great flexibility and usability and helps towards the democratization of programming. However, the inherent vagueness, ambiguity, and verbosity of natural language pose significant challenges in developing an interpreter that can accurately understand the programming logic and execute instructions written in natural language. Fortunately, recent advancements in Large Language Models (LLMs) have demonstrated remarkable proficiency in interpreting complex natural language. Inspired by this, we develop a novel system for Code Representation and Execution (CoRE), which employs an LLM as the interpreter to interpret and execute natural language instructions. The proposed system unifies natural language programming, pseudo-code programming, and flow programming under the same representation for constructing language agents, while the LLM serves as the interpreter to interpret and execute the agent programs. In this paper, we begin with defining the programming syntax that structures natural language instructions logically. During the execution, we incorporate external memory to minimize redundancy. Furthermore, we equip the designed interpreter with the capability to invoke external tools, compensating for the limitations of LLMs in specialized domains or when accessing real-time information. This work is open-source at https://github.com/agiresearch/CoRE, https://github.com/agiresearch/OpenAGI, and https://github.com/agiresearch/AIOS.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
true
453,495
2208.04517
Attribute Controllable Beautiful Caucasian Face Generation by Aesthetics Driven Reinforcement Learning
In recent years, image generation has made great strides in improving the quality of images, producing high-fidelity ones. Also, quite recently, architecture designs have emerged that enable GANs to learn, in an unsupervised manner, the semantic attributes represented in different layers. However, there is still a lack of research on generating face images more consistent with human aesthetics. Based on EigenGAN [He et al., ICCV 2021], we build the techniques of reinforcement learning into the generator of EigenGAN. The agent tries to figure out how to alter the semantic attributes of the generated human faces towards more preferable ones. To accomplish this, we trained an aesthetics scoring model that can conduct facial beauty prediction. We can also utilize this scoring model to analyze the correlation between face attributes and aesthetics scores. Empirically, using off-the-shelf techniques from reinforcement learning does not work well. So instead, we present a new variant incorporating ingredients that have emerged in the reinforcement learning community in recent years. Compared to the original generated images, the adjusted ones show clear distinctions concerning various attributes. Experimental results using MindSpore show the effectiveness of the proposed method. Altered facial images are commonly more attractive, with significantly improved aesthetic levels.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
312,135
2305.17498
A Model-Based Method for Minimizing CVaR and Beyond
We develop a variant of the stochastic prox-linear method for minimizing the Conditional Value-at-Risk (CVaR) objective. CVaR is a risk measure focused on minimizing worst-case performance, defined as the average of the top quantile of the losses. In machine learning, such a risk measure is useful to train more robust models. Although the stochastic subgradient method (SGM) is a natural choice for minimizing the CVaR objective, we show that our stochastic prox-linear (SPL+) algorithm can better exploit the structure of the objective, while still providing a convenient closed form update. Our SPL+ method also adapts to the scaling of the loss function, which allows for easier tuning. We then specialize a general convergence theorem for SPL+ to our setting, and show that it allows for a wider selection of step sizes compared to SGM. We support this theoretical finding experimentally.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
368,623
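As a worked example for the CVaR record above, the snippet below computes the empirical CVaR exactly as characterized in the abstract: the average of the worst (top-quantile) losses; the confidence level alpha is an assumed parameter.

import numpy as np

def empirical_cvar(losses, alpha=0.95):
    # Average of the worst (1 - alpha) fraction of losses.
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    return losses[-k:].mean()

print(empirical_cvar(np.random.randn(1000)))  # CVaR of standard-normal 'losses'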
2410.12107
Just-In-Time Software Defect Prediction via Bi-modal Change Representation Learning
For predicting software defects at an early stage, researchers have proposed just-in-time defect prediction (JIT-DP) to identify potential defects in code commits. The prevailing approaches train models to represent code changes in history commits and utilize the learned representations to predict the presence of defects in the latest commit. However, existing models merely learn edits in source code, without considering the natural language intentions behind the changes. This limitation hinders their ability to capture deeper semantics. To address this, we introduce a novel bi-modal change pre-training model called BiCC-BERT. BiCC-BERT is pre-trained on a code change corpus to learn bi-modal semantic representations. To incorporate commit messages from the corpus, we design a novel pre-training objective called Replaced Message Identification (RMI), which learns the semantic association between commit messages and code changes. Subsequently, we integrate BiCC-BERT into JIT-DP and propose a new defect prediction approach -- JIT-BiCC. By leveraging the bi-modal representations from BiCC-BERT, JIT-BiCC captures more profound change semantics. We train JIT-BiCC using 27,391 code changes and compare its performance with 8 state-of-the-art JIT-DP approaches. The results demonstrate that JIT-BiCC outperforms all baselines, achieving a 10.8% improvement in F1-score. This highlights its effectiveness in learning the bi-modal semantics for JIT-DP.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
498,855
2203.06463
A Systematic Review on Computer Vision-Based Parking Lot Management Applied on Public Datasets
Computer vision-based parking lot management methods have been extensively researched owing to their flexibility and cost-effectiveness. To evaluate such methods, authors often employ publicly available parking lot image datasets. In this study, we surveyed and compared robust publicly available image datasets specifically crafted to test computer vision-based methods for parking lot management and consequently present a systematic and comprehensive review of existing works that employ such datasets. The literature review identified relevant gaps that require further research, such as the requirement of dataset-independent approaches and methods suitable for autonomous detection of the position of parking spaces. In addition, we have noticed that several important factors, such as the presence of the same cars across consecutive images, have been neglected in most studies, thereby rendering assessment protocols unrealistic. Furthermore, the analysis of the datasets also revealed that certain features that should be present when developing new benchmarks, such as the availability of video sequences and images taken in more diverse conditions, including nighttime and snow, have not been incorporated.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
285,125
2311.01875
Enhancing Functional Data Analysis with Sequential Neural Networks: Advantages and Comparative Study
Functional Data Analysis (FDA) is a statistical domain developed to handle functional data characterized by high dimensionality and complex data structures. Sequential Neural Networks (SNNs) are specialized neural networks capable of processing sequence data, a fundamental aspect of functional data. Despite their great flexibility in modeling functional data, SNNs have been inadequately employed in the FDA community. One notable advantage of SNNs is the ease of implementation, making them accessible to a broad audience beyond academia. Conversely, FDA-based methodologies present challenges, particularly for practitioners outside the field, due to their intricate complexity. In light of this, we propose utilizing SNNs in FDA applications and demonstrate their effectiveness through comparative analyses against popular FDA regression models based on numerical experiments and real-world data analysis. SNN architectures allow us to surpass the limitations of traditional FDA methods, offering scalability, flexibility, and improved analytical performance. Our findings highlight the potential of SNN-based methodologies as powerful tools for data applications involving functional data.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
405,212
2204.08200
Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond
The vast majority of existing algorithms for unsupervised domain adaptation (UDA) focus on adapting from a labeled source domain to an unlabeled target domain directly in a one-off way. Gradual domain adaptation (GDA), on the other hand, assumes a path of $(T-1)$ unlabeled intermediate domains bridging the source and target, and aims to provide better generalization in the target domain by leveraging the intermediate ones. Under certain assumptions, Kumar et al. (2020) proposed a simple algorithm, Gradual Self-Training, along with a generalization bound in the order of $e^{O(T)} \left(\varepsilon_0+O\left(\sqrt{log(T)/n}\right)\right)$ for the target domain error, where $\varepsilon_0$ is the source domain error and $n$ is the data size of each domain. Due to the exponential factor, this upper bound becomes vacuous when $T$ is only moderately large. In this work, we analyze gradual self-training under more general and relaxed assumptions, and prove a significantly improved generalization bound as $\varepsilon_0+ O \left(T\Delta + T/\sqrt{n}\right) + \widetilde{O}\left(1/\sqrt{nT}\right)$, where $\Delta$ is the average distributional distance between consecutive domains. Compared with the existing bound with an exponential dependency on $T$ as a multiplicative factor, our bound only depends on $T$ linearly and additively. Perhaps more interestingly, our result implies the existence of an optimal choice of $T$ that minimizes the generalization error, and it also naturally suggests an optimal way to construct the path of intermediate domains so as to minimize the accumulative path length $T\Delta$ between the source and target. To corroborate the implications of our theory, we examine gradual self-training on multiple semi-synthetic and real datasets, which confirms our findings. We believe our insights provide a path forward toward the design of future GDA algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
292,005
2310.02638
P2CADNet: An End-to-End Reconstruction Network for Parametric 3D CAD Model from Point Clouds
Computer Aided Design (CAD), especially feature-based parametric CAD, plays an important role in modern industry and society. However, the reconstruction of a featured CAD model is more challenging than the reconstruction of other CAD models. To this end, this paper proposes an end-to-end network to reconstruct a featured CAD model from a point cloud (P2CADNet). Initially, the proposed P2CADNet architecture combines a point cloud feature extractor, a CAD sequence reconstructor and a parameter optimizer. Subsequently, in order to reconstruct the featured CAD model in an autoregressive way, the CAD sequence reconstructor applies two transformer decoders, one with a target mask and the other without a mask. Finally, for predicting parameters more precisely, we design a parameter optimizer with a cross-attention mechanism to further refine the CAD feature parameters. We evaluate P2CADNet on the public dataset, and the experimental results show that P2CADNet has excellent reconstruction quality and accuracy. To the best of our knowledge, P2CADNet is the first end-to-end network to reconstruct a featured CAD model from a point cloud, and it can be regarded as a baseline for future work. Therefore, we open-source the code at https://github.com/Blice0415/P2CADNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
396,935
1904.05374
Searching Heterogeneous Personal Digital Traces
Digital traces of our lives are now constantly produced by various connected devices, internet services and interactions. Our actions result in a multitude of heterogeneous data objects, or traces, kept in various locations in the cloud or on local devices. Users have very few tools to organize, understand, and search the digital traces they produce. We propose a simple but flexible data model to aggregate, organize, and find personal information within a collection of a user's personal digital traces. Our model uses as basic dimensions the six questions: what, when, where, who, why, and how. These natural questions model universal aspects of a personal data collection and serve as unifying features of each personal data object, regardless of its source. We propose indexing and search techniques to aid users in searching for their past information in their unified personal digital data sets using our model. Experiments performed over real user data from a variety of data sources such as Facebook, Dropbox, and Gmail show that our approach significantly improves search accuracy when compared with traditional search tools.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
127,294
0710.1469
Weight Distributions of Hamming Codes (II)
In a previous paper, we derived a recursive formula determining the weight distributions of the $[n=(q^m-1)/(q-1)]$ Hamming code $H(m,q)$, when $(m,q-1)=1$. Here $q$ is a prime power. We note here that the formula actually holds for any positive integer $m$ and any prime power $q$, without the restriction $(m, q-1)=1$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
751
0903.0548
On the 3-Receiver Broadcast Channel with Degraded Message Sets and Confidential Messages
In this paper, bounds on the rate-equivocation region for the general 3-receiver broadcast channel (BC) with degraded message sets are presented for confidential messages to be kept secret from one of the receivers. This model is more general than the 2-receiver BC with confidential messages and an external wiretapper, and the recently studied 3-receiver degraded BC with confidential messages, since in the model studied in this paper the conditions on the receivers are general and the wiretapper receives the common message. Wyner's code partitioning combined with double-binning is used to show the achievable rate tuples. Error probability analysis and equivocation calculation are also provided. The secure coding scheme is sufficient to provide security for the 3-receiver BC with 2 or 3 degraded message sets, for the following scenarios: (i) 3 degraded message sets, where the first confidential message is sent to receivers 1 and 2 and the second confidential message is sent to receiver 1; (ii) 2 degraded message sets, where one confidential message is sent to receiver 1; and (iii) 2 degraded message sets, where one confidential message is sent to receivers 1 and 2. The proof for the outer bound is shown for the cases where receiver 1 is more capable than the wiretap receiver 3, for the first two scenarios. Under the condition that both receivers 1 and 2 are less noisy than the wiretap receiver 3, the inner and outer bounds coincide, giving the rate-equivocation region for (iii). In addition, a new outer bound for the general 3-receiver BC with 3 degraded message sets is obtained.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
3,275
cs/0312058
Acquiring Lexical Paraphrases from a Single Corpus
This paper studies the potential of identifying lexical paraphrases within a single corpus, focusing on the extraction of verb paraphrases. Most previous approaches detect individual paraphrase instances within a pair (or set) of comparable corpora, each of them containing roughly the same information, and rely on the substantial level of correspondence of such corpora. We present a novel method that successfully detects isolated paraphrase instances within a single corpus without relying on any a priori structure or information. A comparison suggests that an instance-based approach may be combined with a vector-based approach in order to better assess the paraphrase likelihood for many verb pairs.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
538,077
1902.03122
Fully Convolutional Neural Network for Semantic Segmentation of Anatomical Structure and Pathologies in Colour Fundus Images Associated with Diabetic Retinopathy
Diabetic retinopathy (DR) is the most common form of diabetic eye disease. Retinopathy can affect all diabetic patients and becomes particularly dangerous, increasing the risk of blindness, if it is left untreated. The success of treatment depends largely on diagnosis at an early stage. The development of automated computer-aided disease diagnosis tools could help in faster detection of symptoms with a wider reach and reasonable cost. This paper proposes a method for the automated segmentation of retinal lesions and the optic disk in fundus images using a deep fully convolutional neural network for semantic segmentation. This trainable segmentation pipeline consists of an encoder network and a corresponding decoder network, followed by pixel-wise classification to segment microaneurysms, hemorrhages, hard exudates, soft exudates, and the optic disk from the background. The network was trained using the binary cross-entropy criterion with a sigmoid as the last layer, while an additional softmax layer was used during training for boosting the response of a single class. The performance of the proposed method is evaluated using sensitivity, positive predictive value (PPV), and accuracy as the metrics. Further, the position of the optic disk is localised using the segmented output map.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
121,030
2502.09768
Complex Network Modelling with Power-law Activating Patterns and Its Evolutionary Dynamics
Complex network theory provides a unifying framework for the study of structured dynamic systems. The current literature emphasizes a widely reported phenomenon of intermittent interaction among network vertices. In this paper, we introduce a complex network model that considers the stochastic switching of individuals between activated and quiescent states at power-law rates and the corresponding evolutionary dynamics. By using the Markov chain and renewal theory, we discover a homogeneous stationary distribution of activated sizes in the network with power-law activating patterns and infer some statistical characteristics. To better understand the effect of power-law activating patterns, we study the two-person-two-strategy evolutionary game dynamics, demonstrate the absorbability of strategies, and obtain the critical cooperation conditions for prisoner's dilemmas in homogeneous networks without mutation. The evolutionary dynamics in real networks are also discussed. Our results provide a new perspective to analyze and understand social physics in time-evolving network systems.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
533,589
2303.03919
Data Portraits: Recording Foundation Model Training Data
Foundation models are trained on increasingly immense and opaque datasets. Even while these models are now key in AI system building, it can be difficult to answer the straightforward question: has the model already encountered a given example during training? We therefore propose a widespread adoption of Data Portraits: artifacts that record training data and allow for downstream inspection. First we outline the properties of such an artifact and discuss how existing solutions can be used to increase transparency. We then propose and implement a solution based on data sketching, stressing fast and space efficient querying. Using our tools, we document a popular language modeling corpus (The Pile) and a recently released code modeling dataset (The Stack). We show that our solution enables answering questions about test set leakage and model plagiarism. Our tool is lightweight and fast, costing only 3% of the dataset size in overhead. We release a live interface of our tools at https://dataportraits.org/ and call on dataset and model creators to release Data Portraits as a complement to current documentation practices.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
349,893
2001.10719
Query-Sequence Optimization on a Reconfigurable Hardware-Accelerated System
Hardware acceleration of database query processing can be done with the help of FPGAs. In particular, they are partially reconfigurable at runtime, which allows for the runtime adaptation of the hardware to a variety of queries. Reconfiguration itself, however, takes some time. As the affected area of the FPGA is not available for computations during the reconfiguration, avoiding some of the reconfigurations can improve overall performance. This paper presents optimizations based on query sequences, which reduce the impact of the reconfigurations. Knowledge of upcoming queries is used to (I) speculatively start reconfiguration while a query is still running and (II) avoid overwriting of reconfigurable regions that will be used again in subsequent queries. We evaluate our optimizations with a calibrated model and measurements for various parameter values. Improvements in execution time of up to 21% can be obtained even with sequences of only two queries.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
161,890
2407.16309
A new visual quality metric for Evaluating the performance of multidimensional projections
Multidimensional projections (MP) are among the most essential approaches in the visual analysis of multidimensional data. They transform multidimensional data into two-dimensional representations that may be shown as scatter plots while preserving their similarity with the original data. Human visual perception is frequently used to evaluate the quality of MP. In this work, we propose to study and improve on a well-known method called Local Affine Multidimensional Projection (LAMP), which takes a multidimensional instance and embeds it in Cartesian space via moving least squares deformation. We propose a new visual quality metric based on human perception. The new metric combines three previously used metrics: silhouette coefficient, neighborhood preservation, and silhouette ratio. We show that the proposed metric produces more precise results in analyzing the quality of MP than other previously used metrics. Finally, we describe an algorithm that attempts to overcome a limitation of the LAMP method, which requires a similar scale for control points and their counterparts in the Cartesian space.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
475,552
2009.11937
daVinciNet: Joint Prediction of Motion and Surgical State in Robot-Assisted Surgery
This paper presents a technique to concurrently and jointly predict the future trajectories of surgical instruments and the future state(s) of surgical subtasks in robot-assisted surgeries (RAS) using multiple input sources. Such predictions are a necessary first step towards shared control and supervised autonomy of surgical subtasks. Minute-long surgical subtasks, such as suturing or ultrasound scanning, often have distinguishable tool kinematics and visual features, and can be described as a series of fine-grained states with transition schematics. We propose daVinciNet - an end-to-end dual-task model for robot motion and surgical state predictions. daVinciNet performs concurrent end-effector trajectory and surgical state predictions using features extracted from multiple data streams, including robot kinematics, endoscopic vision, and system events. We evaluate our proposed model on an extended Robotic Intra-Operative Ultrasound (RIOUS+) imaging dataset collected on a da Vinci Xi surgical system and the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our model achieves up to 93.85% short-term (0.5s) and 82.11% long-term (2s) state prediction accuracy, as well as 1.07mm short-term and 5.62mm long-term trajectory prediction error.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
197,283
1508.03664
Rethinking the Intercept Probability of Random Linear Network Coding
This letter considers a network comprising a transmitter, which employs random linear network coding to encode a message, a legitimate receiver, which can recover the message if it gathers a sufficient number of linearly independent coded packets, and an eavesdropper. Closed-form expressions for the probability of the eavesdropper intercepting enough coded packets to recover the message are derived. Transmission with and without feedback is studied. Furthermore, an optimization model that minimizes the intercept probability under delay and reliability constraints is presented. Results validate the proposed analysis and quantify the secrecy gain offered by a feedback link from the legitimate receiver.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
true
46,024
2111.02545
Multi-task Learning of Order-Consistent Causal Graphs
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose a $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs, by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer is the same as the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
264,900
2405.19712
HINT: Learning Complete Human Neural Representations from Limited Viewpoints
No augmented reality application is possible without animated humanoid avatars. At the same time, generating human replicas from real-world monocular hand-held or robotic sensor setups is challenging due to the limited availability of views. Previous work showed the feasibility of virtual avatars but required the presence of 360-degree views of the targeted subject. To address this issue, we propose HINT, a NeRF-based algorithm able to learn a detailed and complete human model from limited viewing angles. We achieve this by introducing a symmetry prior, regularization constraints, and training cues from large human datasets. In particular, we introduce a sagittal-plane symmetry prior on the appearance of the human, directly supervise the density function of the human model using explicit 3D body modeling, and leverage a co-learned human digitization network as additional supervision for the unseen angles. As a result, our method can reconstruct complete humans even from a few viewing angles, increasing performance by more than 15% PSNR compared to previous state-of-the-art algorithms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
459,025
1703.05452
Efficient Online Learning for Optimizing Value of Information: Theory and Application to Interactive Troubleshooting
We consider the optimal value of information (VoI) problem, where the goal is to sequentially select a set of tests with a minimal cost, so that one can efficiently make the best decision based on the observed outcomes. Existing algorithms are either heuristics with no guarantees, or scale poorly (with exponential run time in terms of the number of available tests). Moreover, these methods assume a known distribution over the test outcomes, which is often not the case in practice. We propose an efficient sampling-based online learning framework to address the above issues. First, assuming the distribution over hypotheses is known, we propose a dynamic hypothesis enumeration strategy, which allows efficient information gathering with strong theoretical guarantees. We show that with a sufficient number of samples, one can identify a near-optimal decision with high probability. Second, when the parameters of the hypotheses distribution are unknown, we propose an algorithm which learns the parameters progressively via posterior sampling in an online fashion. We further establish a rigorous bound on the expected regret. We demonstrate the effectiveness of our approach on a real-world interactive troubleshooting application and show that one can efficiently make high-quality decisions with low cost.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
70,083
2501.06505
Online Algorithm for Aggregating Experts' Predictions with Unbounded Quadratic Loss
We consider the problem of online aggregation of expert predictions with the quadratic loss function. We propose an algorithm for aggregating expert predictions which does not require prior knowledge of the upper bound on the losses. The algorithm is based on exponential reweighting of expert losses.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
524,012
2407.00478
Beyond Scaleup: Knowledge-aware Parsimony Learning from Deep Networks
The brute-force scaleup of training datasets, learnable parameters, and computation power has become a prevalent strategy for developing more robust learning models. However, due to bottlenecks in data, computation, and trust, the sustainability of this strategy is a serious concern. In this paper, we attempt to address this issue in a parsimonious manner (i.e., achieving greater potential with simpler models). The key is to drive models using domain-specific knowledge, such as symbols, logic, and formulas, instead of relying purely on scaleup. This approach allows us to build a framework that uses this knowledge as "building blocks" to achieve parsimony in model design, training, and interpretation. Empirical results show that our methods surpass those that typically follow the scaling law. We also demonstrate our framework in AI for science, specifically in the problem of drug-drug interaction prediction. We hope our research can foster more diverse technical roadmaps in the era of foundation models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
468,876
2307.13427
Comprehensive Review on Semantic Information Retrieval and Ontology Engineering
Situation awareness is a crucial cognitive skill that enables individuals to perceive, comprehend, and project the current state of their environment accurately. It involves being conscious of relevant information, understanding its meaning, and using that understanding to make well-informed decisions. Awareness systems often need to integrate new knowledge and adapt to changing environments. Ontology reasoning facilitates knowledge integration and evolution, allowing for seamless updates and expansions of the ontology. With this in mind, we provide a brief review of semantic information retrieval and ontology engineering to identify the emerging challenges and future research directions. In this review, we find that ontology reasoning addresses the limitations of traditional systems by providing a formal, flexible, and scalable framework for knowledge representation, reasoning, and inference.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
381,585
1308.5133
Performance Measurement Under Increasing Environmental Uncertainty In The Context of Interval Type-2 Fuzzy Logic Based Robotic Sailing
Performance measurement of robotic controllers based on fuzzy logic, operating under uncertainty, is a subject area which has been somewhat ignored in the current literature. In this paper, standard measures such as RMSE are shown to be inappropriate for use under conditions where the environmental uncertainty changes significantly between experiments. An overview of current methods which have been applied by other authors is presented, followed by the design of a more sophisticated method of comparison. This method is then applied to a robotic control problem to observe its outcome compared with a single measure. Results show that the technique described provides a more robust method of performance comparison than less complex methods, allowing better comparisons to be drawn.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
true
false
false
26,600