id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2109.02369 | Point-Based Neural Rendering with Per-View Optimization | There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation, but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis. A key element of our approach is our new differentiable point-based pipeline, based on bi-directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods in both quality and speed in almost all scenes we tested. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel-view synthesis. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 253,724 |
1802.00373 | EMG Pattern Classification to Control a Hand Orthosis for Functional Grasp Assistance after Stroke | Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 89,402 |
1812.10457 | Optimal Secure GDoF of Symmetric Gaussian Wiretap Channel with a Helper | We study a symmetric Gaussian wiretap channel with a helper, where a confidential message is sent from a transmitter to a legitimate receiver, in the presence of a helper and an eavesdropper, under a weak notion of secrecy constraint. For this setting, we characterize the optimal secure generalized degrees-of-freedom (GDoF). The result reveals that adding a helper can significantly increase the secure GDoF of the wiretap channel. The result is supported by a new converse and a new scheme. In the proposed scheme, the helper sends a cooperative jamming signal at a specific power level and direction. In this way, it minimizes the penalty in GDoF incurred by the secrecy constraint. In the secure rate analysis, the techniques of noise removal and signal separation are used. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 117,369 |
2411.10500 | Edge-Only Universal Adversarial Attacks in Distributed Learning | Distributed learning frameworks, which partition neural network models across multiple computing nodes, enhance efficiency in collaborative edge-cloud systems but may also introduce new vulnerabilities. In this work, we explore the feasibility of generating universal adversarial attacks when an attacker has access to the edge part of the model only, which consists of the first network layers. Unlike traditional universal adversarial perturbations (UAPs) that require full model knowledge, our approach shows that adversaries can induce effective mispredictions in the unknown cloud part by leveraging key features on the edge side. Specifically, we train lightweight classifiers from intermediate features available at the edge, i.e., before the split point, and use them in a novel targeted optimization to craft effective UAPs. Our results on ImageNet demonstrate strong attack transferability to the unknown cloud part. Additionally, we analyze the capability of an attacker to achieve a targeted adversarial effect with edge-only knowledge, revealing intriguing behaviors. By introducing the first adversarial attacks with edge-only knowledge in split inference, this work underscores the importance of addressing partial model access in adversarial robustness, encouraging further research in this area. | false | false | false | false | true | false | false | false | false | false | false | true | true | false | false | false | false | false | 508,670 |
1204.2114 | Image-based Vehicle Classification System | Electronic toll collection (ETC) systems have become a common trend for toll collection on toll roads nowadays. The implementation of electronic toll collection allows vehicles to travel at low or full speed during the toll payment, which helps to avoid traffic delays at the toll road. One of the major components of an electronic toll collection is the automatic vehicle detection and classification (AVDC) system, which is important to classify the vehicle so that the toll is charged according to the vehicle classes. A vision-based vehicle classification system is one type of vehicle classification system which adopts a camera as the input sensing device. This type of system has an advantage over the rest as it is cost-efficient: a low-cost camera is used. The implementation of a vision-based vehicle classification system requires a lower initial investment cost and is very suitable for the toll collection trend migration in Malaysia from a single ETC system to full-scale multi-lane free flow (MLFF). This project includes the development of an image-based vehicle classification system as an effort to seek a robust vision-based vehicle classification system. The techniques used in the system include the scale-invariant feature transform (SIFT) technique, Canny's edge detector, K-means clustering, as well as Euclidean distance matching. In this project, a unique way of using image description as the matching medium is proposed. The distinctiveness of this method is analogous to the human DNA concept, which is highly unique. The system is evaluated on open datasets and returns promising results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 15,385 |
1908.05657 | Non-coherent Detection and Bit Error Rate for an Ambient Backscatter Link in Time-Selective Fading | This paper focuses on the non-coherent detection in ambient backscatter communication, which is highly appealing for systems where the trade-off between signaling overhead and the actual data transmission is very critical. Modeling the time-selective fading channel as a first-order autoregressive (AR) process, we propose a new receiver architecture based on the direct averaging of the received signal samples for detection, which departs significantly from the energy averaging-based receivers considered in the literature. For the proposed setup, we characterize the exact asymptotic bit error rate (BER) for both single-antenna (SA) and multi-antenna (MA) receivers, and demonstrate the robustness of the new architecture to timing errors. Our results demonstrate that while the direct-link (DL) interference from the ambient power source leads to a BER floor in the SA receiver, the MA receiver can remove this interference by estimating the angle of arrival (AoA) of the DL. The analysis further quantifies the effect of improved angular resolution on the BER as a function of the number of receive antennas. A key intermediate result of our analysis is the derivation of a new concentration result for a general sum sequence that is central to the derivation of the conditional distributions of the received signal. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 141,774 |
2103.03932 | Prosumer Behavior: Decision Making with Bounded Horizon | Most studies of prosumer decision making in the smart grid have focused on single, temporally discrete decisions within the framework of expected utility theory (EUT) and behavioral theories such as prospect theory. In this work, we study prosumer decision making in a more natural, ongoing market situation in which a prosumer has to decide every day whether to sell any surplus energy units generated by the solar panels on her roof or hold (store) the energy units in anticipation of a future sale at a better price. Within this context, we propose a new behavioral model that extends EUT to take into account the notion of a bounded temporal horizon over which various decision parameters are considered. Specifically, we introduce the notion of a bounded time window (the number of upcoming days over which a prosumer evaluates the probability that each possible price will be the highest) that prosumers implicitly impose on their decision making in arriving at hold or sell decisions. The new behavioral model assumes that humans make decisions that will affect their lives within a bounded time window regardless of how far into the future their units may be sold. Modeling the utility of the prosumer using parameters such as the offered price on a day, the number of energy units the prosumer has available for sale on a day, and the probabilities of the forecast prices, we fit both traditional EUT and the proposed behavioral model with bounded time windows to data collected from 57 homeowners over 68 days in a simulated energy market. Each prosumer generated surplus units of solar power and had the opportunity to sell those units to the local utility at the price set that day by the utility or hold the units for sale in the future. For most participants, a bounded horizon in the range of 4-5 days provided a much better fit to their responses than was found for the traditional (unbounded) EUT model. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 223,457 |
2305.05845 | Sketching the Future (STF): Applying Conditional Control Techniques to Text-to-Video Models | The proliferation of video content demands efficient and flexible neural network based approaches for generating new video content. In this paper, we propose a novel approach that combines zero-shot text-to-video generation with ControlNet to improve the output of these models. Our method takes multiple sketched frames as input and generates video output that matches the flow of these frames, building upon the Text-to-Video Zero architecture and incorporating ControlNet to enable additional input conditions. By first interpolating frames between the inputted sketches and then running Text-to-Video Zero using the new interpolated frames video as the control technique, we leverage the benefits of both zero-shot text-to-video generation and the robust control provided by ControlNet. Experiments demonstrate that our method excels at producing high-quality and remarkably consistent video content that more accurately aligns with the user's intended motion for the subject within the video. We provide a comprehensive resource package, including a demo video, project website, open-source GitHub repository, and a Colab playground to foster further research and application of our proposed method. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 363,308 |
1703.07475 | PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action Understanding | Although many 3D human activity benchmarks have been proposed, most existing action datasets focus on action recognition tasks for segmented videos. There is a lack of standard large-scale benchmarks, especially for current popular data-hungry deep learning based methods. In this paper, we introduce a new large scale benchmark (PKU-MMD) for continuous multi-modality 3D human action understanding that covers a wide range of complex human activities with well annotated information. PKU-MMD contains 1076 long video sequences in 51 action categories, performed by 66 subjects in three camera views. It contains almost 20,000 action instances and 5.4 million frames in total. Our dataset also provides multi-modality data sources, including RGB, depth, Infrared Radiation and Skeleton. With different modalities, we conduct extensive experiments on our dataset in terms of two scenarios and evaluate different methods by various metrics, including a newly proposed evaluation protocol, 2D-AP. We believe this large-scale dataset will benefit future research on action detection for the community. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 70,400 |
2408.12772 | Symmetric masking strategy enhances the performance of Masked Image Modeling | Masked Image Modeling (MIM) is a technique in self-supervised learning that focuses on acquiring detailed visual representations from unlabeled images by estimating the missing pixels in randomly masked sections. It has proven to be a powerful tool for the preliminary training of Vision Transformers (ViTs), yielding impressive results across various tasks. Nevertheless, most MIM methods heavily depend on the random masking strategy to formulate the pretext task. This strategy necessitates numerous trials to ascertain the optimal dropping ratio, which can be resource-intensive, requiring the model to be pre-trained for anywhere between 800 to 1600 epochs. Furthermore, this approach may not be suitable for all datasets. In this work, we propose a new masking strategy that effectively helps the model capture global and local features. Based on this masking strategy, we introduce SymMIM, our proposed training pipeline for MIM. SymMIM achieves a new SOTA accuracy of 85.9\% on ImageNet using ViT-Large and surpasses previous SOTA across downstream tasks such as image classification, semantic segmentation, object detection, instance segmentation tasks, and so on. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,870 |
2407.08148 | SCPNet: Unsupervised Cross-modal Homography Estimation via Intra-modal Self-supervised Learning | We propose a novel unsupervised cross-modal homography estimation framework based on intra-modal Self-supervised learning, Correlation, and consistent feature map Projection, namely SCPNet. The concept of intra-modal self-supervised learning is first presented to facilitate the unsupervised cross-modal homography estimation. The correlation-based homography estimation network and the consistent feature map projection are combined to form the learnable architecture of SCPNet, boosting the unsupervised learning framework. SCPNet is the first to achieve effective unsupervised homography estimation on the satellite-map image pair cross-modal dataset, GoogleMap, under [-32,+32] offset on a 128x128 image, leading the supervised approach MHN by 14.0% of mean average corner error (MACE). We further conduct extensive experiments on several cross-modal/spectral and manually-made inconsistent datasets, on which SCPNet achieves the state-of-the-art (SOTA) performance among unsupervised approaches, and owns 49.0%, 25.2%, 36.4%, and 10.7% lower MACEs than the supervised approach MHN. Source code is available at https://github.com/RM-Zhang/SCPNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 472,032 |
1803.02728 | Towards the Creation of a Large Corpus of Synthetically-Identified Clinical Notes | Clinical notes often describe the most important aspects of a patient's physiology and are therefore critical to medical research. However, these notes are typically inaccessible to researchers without prior removal of sensitive protected health information (PHI), a natural language processing (NLP) task referred to as deidentification. Tools to automatically de-identify clinical notes are needed but are difficult to create without access to those very same notes containing PHI. This work presents a first step toward creating a large synthetically-identified corpus of clinical notes and corresponding PHI annotations in order to facilitate the development of de-identification tools. Further, one such tool is evaluated against this corpus in order to understand the advantages and shortcomings of this approach. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 92,115 |
1510.05569 | Estimating the Causal Impact of Recommendation Systems from Observational Data | Recommendation systems are an increasingly prominent part of the web, accounting for up to a third of all traffic on several of the world's most popular sites. Nevertheless, little is known about how much activity such systems actually cause over and above activity that would have occurred via other means (e.g., search) if recommendations were absent. Although the ideal way to estimate the causal impact of recommendations is via randomized experiments, such experiments are costly and may inconvenience users. In this paper, therefore, we present a method for estimating causal effects from purely observational data. Specifically, we show that causal identification through an instrumental variable is possible when a product experiences an instantaneous shock in direct traffic and the products recommended next to it do not. We then apply our method to browsing logs containing anonymized activity for 2.1 million users on Amazon.com over a 9 month period and analyze over 4,000 unique products that experience such shocks. We find that although recommendation click-throughs do account for a large fraction of traffic among these products, at least 75% of this activity would likely occur in the absence of recommendations. We conclude with a discussion about the assumptions under which the method is appropriate and caveats around extrapolating results to other products, sites, or settings. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 48,037 |
2204.03687 | Statistical QoS Analysis of Reconfigurable Intelligent Surface-assisted D2D Communication | This work performs the statistical QoS analysis of a Rician block-fading reconfigurable intelligent surface (RIS)-assisted D2D link in which the transmit node operates under delay QoS constraints. First, we perform mode selection for the D2D link, in which the D2D pair can either communicate directly by relaying data from RISs or through a base station (BS). Next, we provide closed-form expressions for the effective capacity (EC) of the RIS-assisted D2D link. When channel state information at the transmitter (CSIT) is available, the transmit D2D node communicates with the variable rate $r_t(n)$ (adjustable according to the channel conditions); otherwise, it uses a fixed rate $r_t$. It allows us to model the RIS-assisted D2D link as a Markov system in both cases. We also extend our analysis to overlay and underlay D2D settings. To improve the throughput of the RIS-assisted D2D link when CSIT is unknown, we use the HARQ retransmission scheme and provide the EC analysis of the HARQ-enabled RIS-assisted D2D link. Finally, simulation results demonstrate that: i) the EC increases with an increase in RIS elements, ii) the EC decreases when strict QoS constraints are imposed at the transmit node, iii) the EC decreases with an increase in the variance of the path loss estimation error, iv) the EC increases with an increase in the probability of ON states, v) EC increases by using HARQ when CSIT is unknown, and it can reach up to $5\times$ the usual EC (with no HARQ and without CSIT) by using the optimal number of retransmissions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 290,395 |
2104.03226 | Evaluation of Time Series Forecasting Models for Estimation of PM2.5 Levels in Air | Air contamination in urban areas has risen consistently over the past few years. Due to expanding industrialization and an increasing concentration of toxic gases in the climate, the air is getting more poisonous day by day at an alarming rate. Since the arrival of the Coronavirus pandemic, it is getting more critical to lessen air contamination to reduce its impact. Specialists and environmentalists are making a valiant effort to gauge air contamination levels. However, it is genuinely difficult to mimic subatomic communication in the air, which brings about inaccurate outcomes. There has been a rise in the use of machine learning and deep learning models to forecast results on time series data. This study adopts ARIMA, FBProphet, and deep learning models such as LSTM and 1D CNN to estimate the concentration of PM2.5 in the environment. Our predicted results convey that all adopted methods give comparable outcomes in terms of average root mean squared error. However, the LSTM outperforms all other models with reference to mean absolute percentage error. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 229,016 |
2206.00960 | SparseDet: Towards End-to-End 3D Object Detection | In this paper, we propose SparseDet for end-to-end 3D object detection from point cloud. Existing works on 3D object detection rely on dense object candidates over all locations in a 3D or 2D grid, following the mainstream methods for object detection in 2D images. However, this dense paradigm requires expertise in data to fulfill the gap between label and detection. As a new detection paradigm, SparseDet maintains a fixed set of learnable proposals to represent latent candidates and directly performs classification and localization for 3D objects through stacked transformers. It demonstrates that effective 3D object detection can be achieved without post-processing such as redundancy removal and non-maximum suppression. With a properly designed network, SparseDet achieves highly competitive detection accuracy while running at a more efficient speed of 34.5 FPS. We believe this end-to-end paradigm of SparseDet will inspire new thinking on the sparsity of 3D object detection. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 300,314 |
0907.1099 | Multi-User Diversity vs. Accurate Channel State Information in MIMO Downlink Channels | In a multiple transmit antenna, single antenna per receiver downlink channel with limited channel state feedback, we consider the following question: given a constraint on the total system-wide feedback load, is it preferable to get low-rate/coarse channel feedback from a large number of receivers or high-rate/high-quality feedback from a smaller number of receivers? Acquiring feedback from many receivers allows multi-user diversity to be exploited, while high-rate feedback allows for very precise selection of beamforming directions. We show that there is a strong preference for obtaining high-quality feedback, and that obtaining near-perfect channel information from as many receivers as possible provides a significantly larger sum rate than collecting a few feedback bits from a large number of users. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,055 |
2108.13101 | Densely Semantic Enhancement for Domain Adaptive Region-free Detectors | Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data. Previous works focus on improving the domain adaptability of region-based detectors, e.g., Faster-RCNN, through matching cross-domain instance-level features that are explicitly extracted from a region proposal network (RPN). However, this is unsuitable for region-free detectors such as single shot detector (SSD), which perform a dense prediction from all possible locations in an image and do not have the RPN to encode such instance-level features. As a result, they fail to align important image regions and crucial instance-level features between the domains of region-free detectors. In this work, we propose a densely semantic enhancement module (DSEM) to strengthen the cross-domain matching of instance-level features for region-free detectors. Firstly, to emphasize the important regions of an image, the DSEM learns to predict a transferable foreground enhancement mask that can be utilized to suppress the background disturbance in an image. Secondly, considering that region-free detectors recognize objects of different scales using multi-scale feature maps, the DSEM encodes both multi-level semantic representations and multi-instance spatial-contextual relationships across different domains. Finally, the DSEM is pluggable into different region-free detectors, ultimately achieving densely semantic feature matching via adversarial learning. Extensive experiments have been conducted on the PASCAL VOC, Clipart, Comic, Watercolor, and FoggyCityscape benchmarks, and their results well demonstrate that the proposed approach not only improves the domain adaptability of region-free detectors but also outperforms existing domain adaptive region-based detectors under various domain shift settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 252,712 |
2406.18967 | Structural Attention: Rethinking Transformer for Unpaired Medical Image Synthesis | Unpaired medical image synthesis aims to provide complementary information for accurate clinical diagnostics and to address challenges in obtaining aligned multi-modal medical scans. Transformer-based models excel in imaging translation tasks thanks to their ability to capture long-range dependencies. Although effective in supervised training settings, their performance falters in unpaired image synthesis, particularly in synthesizing structural details. This paper empirically demonstrates that, lacking strong inductive biases, Transformers can converge to non-optimal solutions in the absence of paired data. To address this, we introduce UNet Structured Transformer (UNest), a novel architecture incorporating structural inductive biases for unpaired medical image synthesis. We leverage the foundational Segment-Anything Model to precisely extract the foreground structure and perform structural attention within the main anatomy. This guides the model to learn key anatomical regions, thus improving structural synthesis under the lack of supervision in unpaired training. Evaluated on two public datasets, spanning three modalities, i.e., MR, CT, and PET, UNest improves recent methods by up to 19.30% across six medical image synthesis tasks. Our code is released at https://github.com/HieuPhan33/MICCAI2024-UNest. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 468,255 |
1911.09781 | Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels | Performing controlled experiments on noisy data is essential in understanding deep learning across noise levels. Due to the lack of suitable datasets, previous research has only examined deep learning on controlled synthetic label noise, and real-world label noise has never been studied in a controlled setting. This paper makes three contributions. First, we establish the first benchmark of controlled real-world label noise from the web. This new benchmark enables us to study the web label noise in a controlled setting for the first time. The second contribution is a simple but effective method to overcome both synthetic and real noisy labels. We show that our method achieves the best result on our dataset as well as on two public benchmarks (CIFAR and WebVision). Third, we conduct the largest study by far into understanding deep neural networks trained on noisy labels across different noise levels, noise types, network architectures, and training settings. The data and code are released at the following link: http://www.lujiang.info/cnlw.html | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 154,607 |
2001.01347 | Elastic Bulk Synchronous Parallel Model for Distributed Deep Learning | The bulk synchronous parallel (BSP) is a celebrated synchronization model for general-purpose parallel computing that has successfully been employed for distributed training of machine learning models. A prevalent shortcoming of the BSP is that it requires workers to wait for the straggler at every iteration. To ameliorate this shortcoming of classic BSP, we propose ELASTICBSP, a model that aims to relax its strict synchronization requirement. The proposed model offers more flexibility and adaptability during the training phase, without sacrificing the accuracy of the trained model. We also propose an efficient method that materializes the model, named ZIPLINE. The algorithm is tunable and can effectively balance the trade-off between quality of convergence and iteration throughput, in order to accommodate different environments or applications. A thorough experimental evaluation demonstrates that our proposed ELASTICBSP model converges faster and to a higher accuracy than the classic BSP. It also achieves comparable (if not higher) accuracy than the other sensible synchronization models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 159,472 |
2409.08527 | EHC-MM: Embodied Holistic Control for Mobile Manipulation | Mobile manipulation typically entails the base for mobility, the arm for accurate manipulation, and the camera for perception. It is necessary to follow the principle of Distant Mobility, Close Grasping (DMCG) in holistic control. We propose Embodied Holistic Control for Mobile Manipulation (EHC-MM) with the embodied function of sig(w): by formulating the DMCG principle as a Quadratic Programming (QP) problem, sig(w) dynamically balances the robot's emphasis between movement and manipulation with consideration of the robot's state and environment. In addition, we propose Monitor-Position-Based Servoing (MPBS) with sig(w), enabling tracking of the target during operation. This approach allows coordinated control between the robot's base, arm, and camera. Through extensive simulations and real-world experiments, our approach significantly improves both the success rate and efficiency of mobile manipulation tasks, achieving a 95.6% success rate in real-world scenarios and a 52.8% increase in time efficiency. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 487,947 |
2212.12011 | A Method for Crash Prediction and Avoidance Using Hidden Markov Models | In recent years, automotive technology has made steady progress. In particular, Advanced Driver Assistance Systems (ADAS) have enabled many safety features in commercial vehicles, such as pedestrian detection, lane keeping assist, and emergency automatic braking. Although these features provide drivers with a safer operational environment, crashes still happen occasionally due to complex road conditions and the unpredictable movement of road users, including vehicles, pedestrians, bicyclists, and non-motorized vehicles. In this paper, we aim at predicting the possibility of crashes between vehicles on the highway and implementing an appropriate active safety system to prevent them. In particular, hidden Markov models are developed for the traffic lanes and speed changes of vehicles on the highway. Algorithms are developed for the prediction of crash probabilities. Simulation experiments are conducted using Matlab; the results illustrate the effectiveness of the proposed research. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 337,935 |
2408.13185 | Dual Grid-Forming Converter | This letter proposes a dual model for grid-forming (GFM) controlled converters. The model is inspired by the observation that the structures of the active and reactive power equations of lossy synchronous machine models are almost symmetrical in terms of armature resistance and transient reactance. The proposed device is able to compensate for grid power unbalance without requiring a frequency signal. In fact, the active power control is based on the rate of change of the voltage magnitude. On the other hand, synchronization and frequency control are obtained through the reactive power support. The letter shows that the proposed dual-GFM control is robust and capable of recovering a normal operating condition following large contingencies, such as load outages and three-phase faults. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 483,042 |
1111.0508 | Geometric Graph Properties of the Spatial Preferred Attachment model | The spatial preferred attachment (SPA) model is a model for networked information spaces such as domains of the World Wide Web, citation graphs, and on-line social networks. It uses a metric space to model the hidden attributes of the vertices. Thus, vertices are elements of a metric space, and link formation depends on the metric distance between vertices. We show, through theoretical analysis and simulation, that for graphs formed according to the SPA model it is possible to infer the metric distance between vertices from the link structure of the graph. Precisely, the estimate is based on the number of common neighbours of a pair of vertices, a measure known as {\sl co-citation}. To be able to calculate this estimate, we derive a precise relation between the number of common neighbours and metric distance. We also analyze the distribution of {\sl edge lengths}, where the length of an edge is the metric distance between its end points. We show that this distribution has three different regimes, and that the tail of this distribution follows a power law. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 12,879 |
1507.01839 | Dependency-based Convolutional Neural Networks for Sentence Embedding | In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies. To exploit both deep learning and linguistic structures, we propose a tree-based convolutional neural network model which exploits various long-distance relationships between words. Our model improves on the sequential baselines on all three sentiment and question classification tasks, and achieves the highest published accuracy on TREC. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 44,912 |
2305.09777 | BSGAN: A Novel Oversampling Technique for Imbalanced Pattern Recognitions | Class imbalance problems (CIP) are one of the potential challenges in developing unbiased Machine Learning (ML) models for prediction. CIP occurs when data samples are not equally distributed between two or more classes. The Borderline-Synthetic Minority Oversampling Technique (SMOTE) is one of the approaches that has been used to balance imbalanced data by oversampling the minority (limited) samples. One potential drawback of existing Borderline-SMOTE is that it focuses on the data samples that lie at the border and gives more attention to extreme observations, ultimately limiting the creation of more diverse data after oversampling; this scenario holds for most borderline-SMOTE-based oversampling strategies. As an effect, marginalization occurs after oversampling. To address these issues, in this work we propose a hybrid oversampling technique that combines the power of borderline SMOTE and a Generative Adversarial Network to generate more diverse data that follow a Gaussian distribution. We named it BSGAN and tested it on four highly imbalanced datasets: Ecoli, Wine quality, Yeast, and Abalone. Our preliminary computational results reveal that BSGAN outperformed existing borderline-SMOTE and GAN-based oversampling techniques and created a more diverse dataset that follows a normal distribution after oversampling. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 364,765 |
2406.10200 | SSTFB: Leveraging self-supervised pretext learning and temporal self-attention with feature branching for real-time video polyp segmentation | Polyps are early cancer indicators, so assessing occurrences of polyps and their removal is critical. They are observed through a colonoscopy screening procedure that generates a stream of video frames. Segmenting polyps in their natural video screening procedure has several challenges, such as the co-existence of imaging artefacts, motion blur, and floating debris. Most existing polyp segmentation algorithms are developed on curated still image datasets that do not represent real-world colonoscopy. Their performance often degrades on video data. We propose a video polyp segmentation method that performs self-supervised learning as an auxiliary task and a spatial-temporal self-attention mechanism for improved representation learning. Our end-to-end configuration and joint optimisation of losses enable the network to learn more discriminative contextual features in videos. Our experimental results demonstrate an improvement with respect to several state-of-the-art (SOTA) methods. Our ablation study also confirms that the choice of the proposed joint end-to-end training improves network accuracy by over 3% and nearly 10% on both the Dice similarity coefficient and intersection-over-union compared to the recently proposed method PNS+ and Polyp-PVT, respectively. Results on previously unseen video data indicate that the proposed method generalises. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 464,271 |
2202.00102 | Real-Time Facial Expression Recognition using Facial Landmarks and Neural Networks | This paper presents a lightweight algorithm for feature extraction, classification of seven different emotions, and facial expression recognition in a real-time manner based on static images of the human face. In this regard, a Multi-Layer Perceptron (MLP) neural network is trained based on the foregoing algorithm. In order to classify human faces, first, some pre-processing is applied to the input image, which can localize and cut out faces from it. In the next step, a facial landmark detection library is used, which can detect the landmarks of each face. Then, the human face is split into upper and lower faces, which enables the extraction of the desired features from each part. In the proposed model, both geometric and texture-based feature types are taken into account. After the feature extraction phase, a normalized vector of features is created. A 3-layer MLP is trained using these feature vectors, leading to 96% accuracy on the test set. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 278,026 |
2111.12903 | Perturbed and Strict Mean Teachers for Semi-supervised Semantic Segmentation | Consistency learning using input image, feature, or network perturbations has shown remarkable results in semi-supervised semantic segmentation, but this approach can be seriously affected by inaccurate predictions of unlabelled training images. There are two consequences of these inaccurate predictions: 1) the training based on the "strict" cross-entropy (CE) loss can easily overfit prediction mistakes, leading to confirmation bias; and 2) the perturbations applied to these inaccurate predictions will use potentially erroneous predictions as training signals, degrading consistency learning. In this paper, we address the prediction accuracy problem of consistency learning methods with novel extensions of the mean-teacher (MT) model, which include a new auxiliary teacher, and the replacement of MT's mean square error (MSE) by a stricter confidence-weighted cross-entropy (Conf-CE) loss. The accurate prediction by this model allows us to use a challenging combination of network, input data and feature perturbations to improve the consistency learning generalisation, where the feature perturbations consist of a new adversarial perturbation. Results on public benchmarks show that our approach achieves remarkable improvements over the previous SOTA methods in the field. Our code is available at https://github.com/yyliu01/PS-MT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 268,116 |
2210.10176 | Entity-Focused Dense Passage Retrieval for Outside-Knowledge Visual Question Answering | Most Outside-Knowledge Visual Question Answering (OK-VQA) systems employ a two-stage framework that first retrieves external knowledge given the visual question and then predicts the answer based on the retrieved content. However, the retrieved knowledge is often inadequate. Retrievals are frequently too general and fail to cover specific knowledge needed to answer the question. Also, the naturally available supervision (whether the passage contains the correct answer) is weak and does not guarantee question relevancy. To address these issues, we propose an Entity-Focused Retrieval (EnFoRe) model that provides stronger supervision during training and recognizes question-relevant entities to help retrieve more specific knowledge. Experiments show that our EnFoRe model achieves superior retrieval performance on OK-VQA, the currently largest outside-knowledge VQA dataset. We also combine the retrieved knowledge with state-of-the-art VQA models, and achieve a new state-of-the-art performance on OK-VQA. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 324,809 |
2109.05444 | RIS and Cell-Free Massive MIMO: A Marriage For Harsh Propagation Environments | This paper considers Cell-Free Massive Multiple Input Multiple Output (MIMO) systems with the assistance of a Reconfigurable Intelligent Surface (RIS) for enhancing the system performance. Distributed maximum-ratio combining (MRC) is considered at the access points (APs). We introduce an aggregated channel estimation method that provides sufficient information for data processing. The considered system is studied by using asymptotic analysis which lets the number of APs and/or the number of RIS elements grow large. A lower bound for the channel capacity is obtained for a finite number of APs and engineered scattering elements of the RIS, and a closed-form expression for the uplink ergodic net throughput is formulated. In addition, a simple scheme for controlling the configuration of the RIS scattering elements is proposed. Numerical results verify the effectiveness of the proposed system design, and the benefits of using RISs in Cell-Free Massive MIMO systems are quantified. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 254,793 |
2204.07520 | Resource-Aware Distributed Submodular Maximization: A Paradigm for Multi-Robot Decision-Making | Multi-robot decision-making is the process where multiple robots coordinate actions. In this paper, we aim for efficient and effective multi-robot decision-making despite the robots' limited on-board resources and the often resource-demanding complexity of their tasks. We introduce the first algorithm enabling the robots to choose with which few other robots to coordinate and provably balance the trade-off of centralized vs. decentralized coordination. Particularly, centralization favors globally near-optimal decision-making but at the cost of increased on-board resource requirements; whereas, decentralization favors minimal resource requirements but at a global suboptimality cost. All robots can thus afford our algorithm, irrespective of their resources. We are motivated by the future of autonomy that involves multiple robots coordinating actions to complete resource-demanding tasks, such as target tracking, area coverage, and monitoring. To provide closed-form guarantees, we focus on maximization problems involving monotone and "doubly" submodular functions. To capture the cost of decentralization, we introduce the notion of Centralization Of Information among non-Neighbors (COIN). We validate our algorithm in simulated scenarios of image covering. | false | false | false | false | true | false | false | true | false | false | true | false | false | false | true | false | false | false | 291,736 |
2110.12175 | Analysis of Thompson Sampling for Partially Observable Contextual Multi-Armed Bandits | Contextual multi-armed bandits are classical models in reinforcement learning for sequential decision-making associated with individual information. A widely-used policy for bandits is Thompson Sampling, where samples from a data-driven probabilistic belief about unknown parameters are used to select the control actions. For this computationally fast algorithm, performance analyses are available under full context-observations. However, little is known for problems in which contexts are not fully observed. We propose a Thompson Sampling algorithm for partially observable contextual multi-armed bandits, and establish theoretical performance guarantees. Technically, we show that the regret of the presented policy scales logarithmically with time and the number of arms, and linearly with the dimension. Further, we establish rates of learning unknown parameters, and provide illustrative numerical analyses. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 262,739 |
2303.09012 | Exploring the Power of Generative Deep Learning for Image-to-Image Translation and MRI Reconstruction: A Cross-Domain Review | Deep learning has become a prominent computational modeling tool in the areas of computer vision and image processing in recent years. This research comprehensively analyzes the different deep-learning methods used for image-to-image translation and reconstruction in the natural and medical imaging domains. We examine the famous deep learning frameworks, such as convolutional neural networks and generative adversarial networks, and their variants, delving into the fundamental principles and difficulties of each. In the field of natural computer vision, we investigate the development and extension of various deep-learning generative models. In comparison, we investigate the possible applications of deep learning to generative medical imaging problems, including medical image translation, MRI reconstruction, and multi-contrast MRI synthesis. This thorough review provides scholars and practitioners in the areas of generative computer vision and medical imaging with useful insights for summarizing past works and getting insight into future research paths. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,863 |
1504.02762 | Image patch analysis of sunspots and active regions. II. Clustering via matrix factorization | Separating active regions that are quiet from potentially eruptive ones is a key issue in Space Weather applications. Traditional classification schemes such as Mount Wilson and McIntosh have been effective in relating an active region's large scale magnetic configuration to its ability to produce eruptive events. However, their qualitative nature prevents, for example, systematic studies of an active region's evolution. We introduce a new clustering of active regions that is based on the local geometry observed in Line of Sight magnetogram and continuum images. We use a reduced-dimension representation of an active region that is obtained by factoring the corresponding data matrix comprised of local image patches. Two factorizations can be compared via the definition of appropriate metrics on the resulting factors. The distances obtained from these metrics are then used to cluster the active regions. We find that these metrics result in natural clusterings of active regions. The clusterings are related to large scale descriptors of an active region such as its size, its local magnetic field distribution, and its complexity as measured by the Mount Wilson classification scheme. We also find that including data focused on the neutral line of an active region can result in an increased correspondence between our clustering results and other active region descriptors such as the Mount Wilson classifications and the $R$ value. We provide some recommendations for which metrics, matrix factorization techniques, and regions of interest to use to study active regions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 41,954 |
2106.06083 | Analyzing Neural Jacobian Methods in Applications of Visual Servoing and Kinematic Control | Designing adaptable control laws that can transfer between different robots is a challenge because of kinematic and dynamic differences, as well as in scenarios where external sensors are used. In this work, we empirically investigate a neural network's ability to approximate the Jacobian matrix for an application in Cartesian control schemes. Specifically, we are interested in approximating the kinematic Jacobian, which arises from kinematic equations mapping a manipulator's joint angles to the end-effector's location. We propose two different approaches to learn the kinematic Jacobian. The first method arises from visual servoing, where we learn the kinematic Jacobian as an approximate linear system of equations from the k-nearest neighbors for a desired joint configuration. The second, motivated by forward models in machine learning, learns the kinematic behavior directly and calculates the Jacobian by differentiating the learned neural kinematics model. Simulation experimental results show that both methods achieve better performance than alternative data-driven methods for control, provide closer approximations to the proper kinematic Jacobian matrix, and on average produce better-conditioned Jacobian matrices. Real-world experiments were conducted on a Kinova Gen-3 lightweight robotic manipulator, which include an uncalibrated visual servoing experiment, a practical application of our methods, as well as a 7-DOF point-to-point task highlighting that our methods are applicable on real robotic manipulators. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 240,346 |
2011.06822 | SHAD3S: A model to Sketch, Shade and Shadow | Hatching is a common method used by artists to accentuate the third dimension of a sketch, and to illuminate the scene. Our system SHAD3S attempts to compete with a human at hatching generic three-dimensional (3D) shapes, and also tries to assist her in a form exploration exercise. The novelty of our approach lies in the fact that we make no assumptions about the input other than that it represents a 3D shape, and yet, given contextual information about illumination and texture, we synthesise an accurate hatch pattern over the sketch, without access to 3D or pseudo 3D. In the process, we contribute towards a) a cheap yet effective method to synthesise a sufficiently large high fidelity dataset, pertinent to the task; b) creating a pipeline with a conditional generative adversarial network (CGAN); and c) creating an interactive GIMP utility that artists can use to engage with automated hatching or a form-exploration exercise. User evaluation of the tool suggests that the model performance does generalise satisfactorily over diverse input, both in terms of style as well as shape. A simple comparison of inception scores suggests that the generated distribution is as diverse as the ground truth. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 206,354 |
1709.00348 | Inferring Networked Device Categories from Low-Level Activity Indicators | We study the problem of inferring the type of a networked device in a home network by leveraging low level traffic activity indicators seen at commodity home gateways. We analyze a dataset of detailed device network activity obtained from 240 subscriber homes of a large European ISP and extract a number of traffic and spatial fingerprints for individual devices. We develop a two level taxonomy to describe devices onto which we map individual devices using a number of heuristics. We leverage the heuristically derived labels to train classifiers that distinguish device classes based on the traffic and spatial fingerprints of a device. Our results show an accuracy level up to 91% for the coarse level category and up to 84% for the fine grained category. By incorporating information from other sources (e.g., MAC OUI), we are able to further improve accuracy to above 97% and 92%, respectively. Finally, we also extract a set of simple and human-readable rules that concisely capture the behaviour of these distinct device categories. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 79,888 |
2011.04872 | An Efficient Closed-Form Method for Optimal Hybrid Force-Velocity Control | This paper derives a closed-form method for computing hybrid force-velocity control. The key idea is to maximize the kinematic conditioning of the mechanical system, which includes a robot, free objects, a rigid environment and contact constraints. The method is complete, in that it always produces an optimal/near optimal solution when a solution exists. It is efficient, since it is in closed form, avoiding the iterative search of previous work. We test the method on 78,000 randomly generated test cases. The method outperforms our previous search-based technique by being from 7 to 40 times faster, while consistently producing better solutions in the sense of robustness to kinematic singularity. We also test the method in several representative manipulation experiments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 205,728 |
1512.05665 | Probabilistic Programming with Gaussian Process Memoization | Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation. However, despite their popularity, they can be difficult to apply; all but the simplest classification or regression applications require specification and inference over complex covariance functions that do not admit simple analytical posteriors. This paper shows how to embed Gaussian processes in any higher-order probabilistic programming language, using an idiom based on memoization, and demonstrates its utility by implementing and extending classic and state-of-the-art GP applications. The interface to Gaussian processes, called gpmem, takes an arbitrary real-valued computational process as input and returns a statistical emulator that automatically improves as the original process is invoked and its input-output behavior is recorded. The flexibility of gpmem is illustrated via three applications: (i) robust GP regression with hierarchical hyper-parameter learning, (ii) discovering symbolic expressions from time-series data by fully Bayesian structure learning over kernels generated by a stochastic grammar, and (iii) a bandit formulation of Bayesian optimization with automatic inference and action selection. All applications share a single 50-line Python library and require fewer than 20 lines of probabilistic code each. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 50,248 |
1605.06190 | Modularity in Complex Multilayer Networks with Multiple Aspects: A Static Perspective | Complex systems are usually illustrated by networks which capture the topology of the interactions between the entities. To better understand the roles played by the entities in the system, one needs to uncover the underlying community structure of the system. In recent years, systems with interactions that have various types or can change over time between the entities have attracted increasing research attention. However, algorithms aiming to solve the key problem - community detection - in multilayer networks are still limited. In this work, we first introduce the multilayer network model representation with multiple aspects, which is flexible to a variety of networks. Then based on this model, we naturally derive the multilayer modularity - a widely adopted objective function of community detection in networks - from a static perspective as an evaluation metric to evaluate the quality of the communities detected in multilayer networks. It enables us to better understand the essence of the modularity by pointing out the specific kind of communities that will lead to a high modularity score. We also propose a spectral method called mSpec for the optimization of the proposed modularity function based on the supra-adjacency representation of the multilayer networks. Experiments on the electroencephalograph network and the comparison results on several empirical multilayer networks demonstrate the feasibility and reliable performance of the proposed method. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 56,096 |
2405.03851 | Querying in Constant Expected Time with Learned Indexes | Learned indexes leverage machine learning models to accelerate query answering in databases, showing impressive practical performance. However, theoretical understanding of these methods remains incomplete. Existing research suggests that learned indexes have superior asymptotic complexity compared to their non-learned counterparts, but these findings have been established under restrictive probabilistic assumptions. Specifically, for a sorted array with $n$ elements, it has been shown that learned indexes can find a key in $O(\log(\log n))$ expected time using at most linear space, compared with $O(\log n)$ for non-learned methods. In this work, we prove $O(1)$ expected time can be achieved with at most linear space, thereby establishing the tightest upper bound so far for the time complexity of an asymptotically optimal learned index. Notably, we use weaker probabilistic assumptions than prior research, meaning our work generalizes previous results. Furthermore, we introduce a new measure of statistical complexity for data. This metric exhibits an information-theoretical interpretation and can be estimated in practice. This characterization provides further theoretical understanding of learned indexes, by helping to explain why some datasets seem to be particularly challenging for these methods. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 452,333 |
2212.04624 | The Hybridization of Branch and Bound with Metaheuristics for Nonconvex Multiobjective Optimization | A hybrid framework combining the branch and bound method with multiobjective evolutionary algorithms is proposed for nonconvex multiobjective optimization. The hybridization exploits the complementary character of the two optimization strategies. A multiobjective evolutionary algorithm is intended for inducing tight lower and upper bounds during the branch and bound procedure. Tight bounds such as the ones derived in this way can reduce the number of subproblems that have to be solved. The branch and bound method guarantees the global convergence of the framework and improves the search capability of the multiobjective evolutionary algorithm. An implementation of the hybrid framework considering NSGA-II and MOEA/D-DE as multiobjective evolutionary algorithms is presented. Numerical experiments verify that the hybrid algorithms benefit from the synergy of the branch and bound method and multiobjective evolutionary algorithms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 335,510 |
2010.02650 | If beam search is the answer, what was the question? | Quite surprisingly, exact maximum a posteriori (MAP) decoding of neural language generators frequently leads to low-quality results. Rather, most state-of-the-art results on language generation tasks are attained using beam search despite its overwhelmingly high search error rate. This implies that the MAP objective alone does not express the properties we desire in text, which merits the question: if beam search is the answer, what was the question? We frame beam search as the exact solution to a different decoding objective in order to gain insights into why high probability under a model alone may not indicate adequacy. We find that beam search enforces uniform information density in text, a property motivated by cognitive science. We suggest a set of decoding objectives that explicitly enforce this property and find that exact decoding with these objectives alleviates the problems encountered when decoding poorly calibrated language generation models. Additionally, we analyze the text produced using various decoding strategies and see that, in our neural machine translation experiments, the extent to which this property is adhered to strongly correlates with BLEU. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 199,121 |
2112.01914 | SGM3D: Stereo Guided Monocular 3D Object Detection | Monocular 3D object detection aims to predict the object location, dimension and orientation in 3D space alongside the object category given only a monocular image. It poses a great challenge due to its ill-posed nature, as depth information is critically lacking in the 2D image plane. While there exist approaches leveraging off-the-shelf depth estimation or relying on LiDAR sensors to mitigate this problem, the dependence on the additional depth model or expensive equipment severely limits their scalability to generic 3D perception. In this paper, we propose a stereo-guided monocular 3D object detection framework, dubbed SGM3D, adapting the robust 3D features learned from stereo inputs to enhance the features for monocular detection. We innovatively present a multi-granularity domain adaptation (MG-DA) mechanism to exploit the network's ability to generate stereo-mimicking features given only monocular cues. Coarse BEV feature-level, as well as the fine anchor-level domain adaptation, are both leveraged for guidance in the monocular domain. In addition, we introduce an IoU matching-based alignment (IoU-MA) method for object-level domain adaptation between the stereo and monocular predictions to alleviate the mismatches while adopting the MG-DA. Extensive experiments demonstrate state-of-the-art results on KITTI and Lyft datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 269,670 |
2108.09637 | Graph-Convolutional Deep Learning to Identify Optimized Molecular Configurations | Tackling molecular optimization problems using conventional computational methods is challenging, because the determination of the optimized configuration is known to be an NP-hard problem. Recently, there has been increasing interest in applying different deep-learning techniques to benchmark molecular optimization tasks. In this work, we implement a graph-convolutional method to classify molecular structures using the equilibrium and non-equilibrium configurations provided in the QM7-X data set. Atomic forces are encoded in graph vertices and the substantial suppression in the total force magnitude on the atoms in the optimized structure is learned for the graph classification task. We demonstrate the results using two different graph pooling layers and compare their respective performances. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 251,666 |
2008.03324 | Fisher Information Field: an Efficient and Differentiable Map for Perception-aware Planning | Considering visual localization accuracy at the planning time gives preference to robot motion that can be better localized and thus has the potential of improving vision-based navigation, especially in visually degraded environments. To integrate the knowledge about localization accuracy in motion planning algorithms, a central task is to quantify the amount of information that an image taken at a 6 degree-of-freedom pose brings for localization, which is often represented by the Fisher information. However, computing the Fisher information from a set of sparse landmarks (i.e., a point cloud), which is the most common map for visual localization, is inefficient. This approach scales linearly with the number of landmarks in the environment and does not allow the reuse of the computed Fisher information. To overcome these drawbacks, we propose the first dedicated map representation for evaluating the Fisher information of 6 degree-of-freedom visual localization for perception-aware motion planning. By formulating the Fisher information and sensor visibility carefully, we are able to separate the rotational invariant component from the Fisher information and store it in a voxel grid, namely the Fisher information field. This step only needs to be performed once for a known environment. The Fisher information for arbitrary poses can then be computed from the field in constant time, eliminating the need of costly iterating all the 3D landmarks at the planning time. Experimental results show that the proposed Fisher information field can be applied to different motion planning algorithms and is at least one order-of-magnitude faster than using the point cloud directly. Moreover, the proposed map representation is differentiable, resulting in better performance than the point cloud when used in trajectory optimization algorithms. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 190,861 |
2305.04213 | Robust Image Ordinal Regression with Controllable Image Generation | Image ordinal regression has been mainly studied along the line of exploiting the order of categories. However, the issues of class imbalance and category overlap that are very common in ordinal regression were largely overlooked. As a result, the performance on minority categories is often unsatisfactory. In this paper, we propose a novel framework called CIG based on controllable image generation to directly tackle these two issues. Our main idea is to generate extra training samples with specific labels near category boundaries, and the sample generation is biased toward the less-represented categories. To achieve controllable image generation, we seek to separate structural and categorical information of images based on structural similarity, categorical similarity, and reconstruction constraints. We evaluate the effectiveness of our new CIG approach in three different image ordinal regression scenarios. The results demonstrate that CIG can be flexibly integrated with off-the-shelf image encoders or ordinal regression models to achieve improvement, and further, the improvement is more significant for minority categories. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 362,681 |
2304.12951 | Neural Implicit Shape Editing using Boundary Sensitivity | Neural fields are receiving increased attention as a geometric representation due to their ability to compactly store detailed and smooth shapes and easily undergo topological changes. Compared to classic geometry representations, however, neural representations do not allow the user to exert intuitive control over the shape. Motivated by this, we leverage boundary sensitivity to express how perturbations in parameters move the shape boundary. This allows us to interpret the effect of each learnable parameter and study achievable deformations. With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation. Prescribing the deformation only locally allows the rest of the shape to change according to some prior, such as semantics or deformation rigidity. Our method is agnostic to the model and its training, and updates the NN in-place. Furthermore, we show how boundary sensitivity helps to optimize and constrain objectives (such as surface area and volume), which are difficult to compute without first converting to another representation, such as a mesh. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 360,400 |
2308.11417 | ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes | We present ScanNet++, a large-scale dataset that couples together capture of high-quality and commodity-level geometry and color of indoor scenes. Each scene is captured with a high-end laser scanner at sub-millimeter resolution, along with registered 33-megapixel images from a DSLR camera, and RGB-D streams from an iPhone. Scene reconstructions are further annotated with an open vocabulary of semantics, with label-ambiguous scenarios explicitly annotated for comprehensive semantic understanding. ScanNet++ enables a new real-world benchmark for novel view synthesis, both from high-quality RGB capture, and importantly also from commodity-level images, in addition to a new benchmark for 3D semantic scene understanding that comprehensively encapsulates diverse and ambiguous semantic labeling scenarios. Currently, ScanNet++ contains 460 scenes, 280,000 captured DSLR images, and over 3.7M iPhone RGBD frames. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 387,128 |
1802.03001 | Statistical Learnability of Generalized Additive Models based on Total Variation Regularization | A generalized additive model (GAM, Hastie and Tibshirani (1987)) is a nonparametric model expressed as the sum of univariate functions with respect to each explanatory variable, i.e., $f({\mathbf x}) = \sum f_j(x_j)$, where $x_j\in\mathbb{R}$ is the $j$-th component of a sample ${\mathbf x}\in \mathbb{R}^p$. In this paper, we introduce the total variation (TV) of a function as a measure of the complexity of functions in $L^1_{\rm c}(\mathbb{R})$-space. Our analysis shows that a GAM based on TV-regularization exhibits a Rademacher complexity of $O(\sqrt{\frac{\log p}{m}})$, which is tight in terms of both $m$ and $p$ in the agnostic case of the classification problem. As a result, we obtain generalization error bounds for finite samples according to work by Bartlett and Mandelson (2002). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 89,876 |
2208.01136 | Exploring the GLIDE model for Human Action-effect Prediction | We address the following action-effect prediction task. Given an image depicting an initial state of the world and an action expressed in text, predict an image depicting the state of the world following the action. The prediction should have the same scene context as the input image. We explore the use of the recently proposed GLIDE model for performing this task. GLIDE is a generative neural network that can synthesize (inpaint) masked areas of an image, conditioned on a short piece of text. Our idea is to mask-out a region of the input image where the effect of the action is expected to occur. GLIDE is then used to inpaint the masked region conditioned on the required action. In this way, the resulting image has the same background context as the input image, updated to show the effect of the action. We give qualitative results from experiments using the EPIC dataset of ego-centric videos labelled with actions. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,072 |
1912.01413 | Spatial images from temporal data | Traditional paradigms for imaging rely on the use of a spatial structure, either in the detector (pixels arrays) or in the illumination (patterned light). Removal of the spatial structure in the detector or illumination, i.e., imaging with just a single-point sensor, would require solving a very strongly ill-posed inverse retrieval problem that to date has not been solved. Here, we demonstrate a data-driven approach in which full 3D information is obtained with just a single-point, single-photon avalanche diode that records the arrival time of photons reflected from a scene that is illuminated with short pulses of light. Imaging with single-point time-of-flight (temporal) data opens new routes in terms of speed, size, and functionality. As an example, we show how the training based on an optical time-of-flight camera enables a compact radio-frequency impulse radio detection and ranging transceiver to provide 3D images. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 156,080 |
1804.06489 | Simplex Queues for Hot-Data Download | In cloud storage systems, hot data is usually replicated over multiple nodes in order to accommodate simultaneous access by multiple users as well as increase the fault tolerance of the system. Recent cloud storage research has proposed using availability codes, which is a special class of erasure codes, as a more storage efficient way to store hot data. These codes enable data recovery from multiple, small disjoint groups of servers. The number of the recovery groups is referred to as the availability and the size of each group as the locality of the code. Until now, we have very limited knowledge on how code locality and availability affect data access time. Data download from these systems involves multiple fork-join queues operating in parallel, making the analysis of access time a very challenging problem. In this paper, we present an approximate analysis of data access time in storage systems that employ simplex codes, which are an important and in a certain sense optimal class of availability codes. We consider and compare three strategies in assigning download requests to servers; the first aggressively exploits the storage availability for faster download, the second implements only load balancing, and the last employs storage availability only for hot data download without incurring any negative impact on the cold data download. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 95,310 |
2304.09818 | What Should Be Balanced in a "Balanced" Face Recognition Dataset? | The issue of demographic disparities in face recognition accuracy has attracted increasing attention in recent years. Various face image datasets have been proposed as 'fair' or 'balanced' to assess the accuracy of face recognition algorithms across demographics. These datasets typically balance the number of identities and images across demographics. It is important to note that the number of identities and images in an evaluation dataset are {\em not} driving factors for 1-to-1 face matching accuracy. Moreover, balancing the number of identities and images does not ensure balance in other factors known to impact accuracy, such as head pose, brightness, and image quality. We demonstrate these issues using several recently proposed datasets. To improve the ability to perform less biased evaluations, we propose a bias-aware toolkit that facilitates creation of cross-demographic evaluation datasets balanced on factors mentioned in this paper. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 359,181 |
2301.02818 | App Review Driven Collaborative Bug Finding | Software development teams generally welcome any effort to expose bugs in their code base. In this work, we build on the hypothesis that mobile apps from the same category (e.g., two web browser apps) may be affected by similar bugs in their evolution process. It is therefore possible to transfer the experience of one historical app to quickly find bugs in its new counterparts. This has been referred to as collaborative bug finding in the literature. Our novelty is that we guide the bug finding process by considering that existing bugs have been hinted within app reviews. Concretely, we design the BugRMSys approach to recommend bug reports for a target app by matching historical bug reports from apps in the same category with user app reviews of the target app. We experimentally show that this approach enables us to quickly expose and report dozens of bugs for targeted apps such as Brave (web browser app). BugRMSys's implementation relies on DistilBERT to produce natural language text embeddings. Our pipeline considers similarities between bug reports and app reviews to identify relevant bugs. We then focus on the app review as well as potential reproduction steps in the historical bug report (from a same-category app) to reproduce the bugs. Overall, after applying BugRMSys to six popular apps, we were able to identify, reproduce and report 20 new bugs: among these, 9 reports have been already triaged, 6 were confirmed, and 4 have been fixed by official development teams, respectively. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 339,606 |
2003.14291 | Hurricanes and hashtags: Characterizing online collective attention for natural disasters | We study collective attention paid towards hurricanes through the lens of $n$-grams on Twitter, a social media platform with global reach. Using hurricane name mentions as a proxy for awareness, we find that the exogenous temporal dynamics are remarkably similar across storms, but that overall collective attention varies widely even among storms causing comparable deaths and damage. We construct `hurricane attention maps' and observe that hurricanes causing deaths on (or economic damage to) the continental United States generate substantially more attention in English language tweets than those that do not. We find that a hurricane's Saffir-Simpson wind scale category assignment is strongly associated with the amount of attention it receives. Higher category storms receive higher proportional increases of attention per proportional increases in number of deaths or dollars of damage, than lower category storms. The most damaging and deadly storms of the 2010s, Hurricanes Harvey and Maria, generated the most attention and were remembered the longest, respectively. On average, a category 5 storm receives 4.6 times more attention than a category 1 storm causing the same number of deaths and economic damage. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 170,457 |
2409.05070 | Lepskii Principle for Distributed Kernel Ridge Regression | Parameter selection without communicating local data is quite challenging in distributed learning, exhibiting an inconsistency between its theoretical analysis and its practical application in tackling distributively stored data. Motivated by the recently developed Lepskii principle and non-privacy communication protocol for kernel learning, we propose a Lepskii principle to equip distributed kernel ridge regression (DKRR) and consequently develop an adaptive DKRR with Lepskii principle (Lep-AdaDKRR for short) by using a double weighted averaging synthesization scheme. We deduce optimal learning rates for Lep-AdaDKRR and theoretically show that Lep-AdaDKRR succeeds in adapting to the regularity of regression functions, the effective dimension decaying rate of kernels and different metrics of generalization, which fills the gap of the mentioned inconsistency between theory and application. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 486,626 |
2102.12362 | Detecting Compliance of Privacy Policies with Data Protection Laws | Privacy policies are the legal documents that describe the practices that an organization or company has adopted in the handling of the personal data of its users. But as policies are legal documents, they are often written in extensive legal jargon that is difficult to understand. Though work has been done on privacy policies, none of it caters to the problem of verifying if a given privacy policy adheres to the data protection laws of a given country or state. We aim to bridge that gap by providing a framework that analyzes privacy policies in light of various data protection laws, such as the General Data Protection Regulation (GDPR). To achieve that, firstly we labeled both the privacy policies and laws. Then a correlation scheme is developed to map the contents of a privacy policy to the appropriate segments of law that a policy must conform to. Then we check the compliance of the privacy policy's text with the corresponding text of the law using NLP techniques. By using such a tool, users would be better equipped to understand how their personal data is managed. For now, we have provided a mapping for the GDPR and PDPA, but other laws can easily be incorporated in the already built pipeline. | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | 221,711 |
2209.10166 | Chaotic Hedging with Iterated Integrals and Neural Networks | In this paper, we extend the Wiener-Ito chaos decomposition to the class of continuous semimartingales that are exponentially integrable, which includes in particular affine and some polynomial diffusion processes. By omitting the orthogonality in the expansion, we are able to show that every $p$-integrable functional of the semimartingale, for $p \in [1,\infty)$, can be represented as a sum of iterated integrals thereof. Using finitely many terms of this expansion and (possibly random) neural networks for the integrands, whose parameters are learned in a machine learning setting, we show that every financial derivative can be approximated arbitrarily well in the $L^p$-sense. In particular, for $p = 2$, we recover the optimal hedging strategy in the sense of quadratic hedging. Moreover, since the hedging strategy of the approximating option can be computed in closed form, we obtain an efficient algorithm to approximately replicate any sufficiently integrable financial derivative within short runtime. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 318,774 |
2210.08197 | DyFEn: Agent-Based Fee Setting in Payment Channel Networks | In recent years, with the development of easy-to-use learning environments, the implementation and reproducible benchmarking of reinforcement learning algorithms have been greatly accelerated by utilizing these frameworks. In this article, we introduce the Dynamic Fee learning Environment (DyFEn), an open-source real-world financial network model. It can provide a testbed for evaluating different reinforcement learning techniques. To illustrate the promise of DyFEn, we present a challenging problem which is a simultaneous multi-channel dynamic fee setting for off-chain payment channels. This problem is well-known in the Bitcoin Lightning Network and has no effective solutions. Specifically, we report the empirical results of several commonly used deep reinforcement learning methods on this dynamic fee setting task as a baseline for further experiments. To the best of our knowledge, this work proposes the first virtual learning environment based on a simulation of blockchain and distributed ledger technologies, unlike many others which are based on physics simulations or game platforms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 324,036 |
2205.11255 | A Template-based Method for Constrained Neural Machine Translation | Machine translation systems are expected to cope with various types of constraints in many practical scenarios. While neural machine translation (NMT) has achieved strong performance in unconstrained cases, it is non-trivial to impose pre-specified constraints into the translation process of NMT models. Although many approaches have been proposed to address this issue, most existing methods can not satisfy the following three desiderata at the same time: (1) high translation quality, (2) high match accuracy, and (3) low latency. In this work, we propose a template-based method that can yield results with high translation quality and match accuracy and the inference speed of our method is comparable with unconstrained NMT models. Our basic idea is to rearrange the generation of constrained and unconstrained tokens through a template. Our method does not require any changes in the model architecture and the decoding algorithm. Experimental results show that the proposed template-based approach can outperform several representative baselines in both lexically and structurally constrained translation tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 298,073 |
2312.04819 | Attention-Guided Contrastive Role Representations for Multi-Agent Reinforcement Learning | Real-world multi-agent tasks usually involve dynamic team composition with the emergence of roles, which should also be a key to efficient cooperation in multi-agent reinforcement learning (MARL). Drawing inspiration from the correlation between roles and agent's behavior patterns, we propose a novel framework of **A**ttention-guided **CO**ntrastive **R**ole representation learning for **M**ARL (**ACORM**) to promote behavior heterogeneity, knowledge transfer, and skillful coordination across agents. First, we introduce mutual information maximization to formalize role representation learning, derive a contrastive learning objective, and concisely approximate the distribution of negative pairs. Second, we leverage an attention mechanism to prompt the global state to attend to learned role representations in value decomposition, implicitly guiding agent coordination in a skillful role space to yield more expressive credit assignment. Experiments on challenging StarCraft II micromanagement and Google research football tasks demonstrate the state-of-the-art performance of our method and its advantages over existing approaches. Our code is available at [https://github.com/NJU-RL/ACORM](https://github.com/NJU-RL/ACORM). | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 413,841 |
2310.06585 | A Black-Box Physics-Informed Estimator based on Gaussian Process Regression for Robot Inverse Dynamics Identification | Learning the inverse dynamics of robots directly from data, adopting a black-box approach, is interesting for several real-world scenarios where limited knowledge about the system is available. In this paper, we propose a black-box model based on Gaussian Process (GP) Regression for the identification of the inverse dynamics of robotic manipulators. The proposed model relies on a novel multidimensional kernel, called \textit{Lagrangian Inspired Polynomial} (\kernelInitials{}) kernel. The \kernelInitials{} kernel is based on two main ideas. First, instead of directly modeling the inverse dynamics components, we model as GPs the kinetic and potential energy of the system. The GP prior on the inverse dynamics components is derived from those on the energies by applying the properties of GPs under linear operators. Second, as regards the energy prior definition, we prove a polynomial structure of the kinetic and potential energy, and we derive a polynomial kernel that encodes this property. As a consequence, the proposed model also allows estimating the kinetic and potential energy without requiring any label on these quantities. Results on simulation and on two real robotic manipulators, namely a 7 DOF Franka Emika Panda, and a 6 DOF MELFA RV4FL, show that the proposed model outperforms state-of-the-art black-box estimators based both on Gaussian Processes and Neural Networks in terms of accuracy, generality and data efficiency. The experiments on the MELFA robot also demonstrate that our approach achieves performance comparable to fine-tuned model-based estimators, despite requiring less prior information. | false | false | false | false | true | false | true | true | false | false | true | false | false | false | false | false | false | false | 398,640 |
1112.0054 | Improving the User Query for the Boolean Model Using Genetic Algorithms | The use of genetic algorithms (GA) in the information retrieval (IR) area, especially in optimizing a user query over Arabic data collections, is presented in this paper. Very little research has been carried out on Arabic text collections. The Boolean model has been used in this research. To optimize the query using a GA, we used different fitness functions and different mutation strategies to find the best strategy and fitness function to use with the Boolean model when the data collection is in Arabic. Our results show that the best GA strategy for the Boolean model is the GA (M2, Precision) method. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 13,265 |
2001.03224 | Identifying Distinct, Effective Treatments for Acute Hypotension with SODA-RL: Safely Optimized Diverse Accurate Reinforcement Learning | Hypotension in critical care settings is a life-threatening emergency that must be recognized and treated early. While fluid bolus therapy and vasopressors are common treatments, it is often unclear which interventions to give, in what amounts, and for how long. Observational data in the form of electronic health records can provide a source for helping inform these choices from past events, but often it is not possible to identify a single best strategy from observational data alone. In such situations, we argue it is important to expose the collection of plausible options to a provider. To this end, we develop SODA-RL: Safely Optimized, Diverse, and Accurate Reinforcement Learning, to identify distinct treatment options that are supported in the data. We demonstrate SODA-RL on a cohort of 10,142 ICU stays where hypotension presented. Our learned policies perform comparably to the observed physician behaviors, while providing different, plausible alternatives for treatment decisions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 159,912 |
2306.07272 | Zero-shot Composed Text-Image Retrieval | In this paper, we consider the problem of composed image retrieval (CIR), which aims to train a model that can fuse multi-modal information, e.g., text and images, to accurately retrieve images that match the query, extending the user's expression ability. We make the following contributions: (i) we initiate a scalable pipeline to automatically construct datasets for training CIR models, by simply exploiting a large-scale dataset of image-text pairs, e.g., a subset of LAION-5B; (ii) we introduce a transformer-based adaptive aggregation model, TransAgg, which employs a simple yet efficient fusion mechanism, to adaptively combine information from diverse modalities; (iii) we conduct extensive ablation studies to investigate the usefulness of our proposed data construction procedure, and the effectiveness of core components in TransAgg; (iv) when evaluating on the publicly available benchmarks under the zero-shot scenario, i.e., training on the automatically constructed datasets, then directly conducting inference on target downstream datasets, e.g., CIRR and FashionIQ, our proposed approach either performs on par with or significantly outperforms the existing state-of-the-art (SOTA) models. Project page: https://code-kunkun.github.io/ZS-CIR/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 372,949 |
2005.13421 | Thirty Musts for Meaning Banking | Meaning banking--creating a semantically annotated corpus for the purpose of semantic parsing or generation--is a challenging task. It is quite simple to come up with a complex meaning representation, but it is hard to design a simple meaning representation that captures many nuances of meaning. This paper lists some lessons learned in nearly ten years of meaning annotation during the development of the Groningen Meaning Bank (Bos et al., 2017) and the Parallel Meaning Bank (Abzianidze et al., 2017). The paper's format is rather unconventional: there is no explicit related work, no methodology section, no results, and no discussion (and the current snippet is not an abstract but actually an introductory preface). Instead, its structure is inspired by work of Traum (2000) and Bender (2013). The list starts with a brief overview of the existing meaning banks (Section 1) and the rest of the items are roughly divided into three groups: corpus collection (Section 2 and 3, annotation methods (Section 4-11), and design of meaning representations (Section 12-30). We hope this overview will give inspiration and guidance in creating improved meaning banks in the future. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 179,006 |
2011.04482 | DynaVSR: Dynamic Adaptive Blind Video Super-Resolution | Most conventional supervised super-resolution (SR) algorithms assume that low-resolution (LR) data is obtained by downscaling high-resolution (HR) data with a fixed known kernel, but such an assumption often does not hold in real scenarios. Some recent blind SR algorithms have been proposed to estimate different downscaling kernels for each input LR image. However, they suffer from heavy computational overhead, making them infeasible for direct application to videos. In this work, we present DynaVSR, a novel meta-learning-based framework for real-world video SR that enables efficient downscaling model estimation and adaptation to the current input. Specifically, we train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation. Experimental results show that DynaVSR consistently improves the performance of the state-of-the-art video SR models by a large margin, with an order of magnitude faster inference time compared to the existing blind SR approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 205,603 |
2311.15685 | The Battleship Approach to the Low Resource Entity Matching Problem | Entity matching, a core data integration problem, is the task of deciding whether two data tuples refer to the same real-world entity. Recent advances in deep learning methods, using pre-trained language models, were proposed for resolving entity matching. Although demonstrating unprecedented results, these solutions suffer from a major drawback as they require large amounts of labeled data for training, and, as such, are inadequate to be applied to low resource entity matching problems. To overcome the challenge of obtaining sufficient labeled data we offer a new active learning approach, focusing on a selection mechanism that exploits unique properties of entity matching. We argue that a distributed representation of a tuple pair indicates its informativeness when considered among other pairs. This is used consequently in our approach that iteratively utilizes space-aware considerations. Bringing it all together, we treat the low resource entity matching problem as a Battleship game, hunting indicative samples, focusing on positive ones, through awareness of the latent space along with careful planning of next sampling iterations. An extensive experimental analysis shows that the proposed algorithm outperforms state-of-the-art active learning solutions to low resource entity matching, and although using fewer samples, can be as successful as fully trained state-of-the-art algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 410,609
1911.09723 | Fast Sparse ConvNets | Historically, the pursuit of efficient inference has been one of the driving forces behind research into new deep learning architectures and building blocks. Some recent examples include: the squeeze-and-excitation module, depthwise separable convolutions in Xception, and the inverted bottleneck in MobileNet v2. Notably, in all of these cases, the resulting building blocks enabled not only higher efficiency, but also higher accuracy, and found wide adoption in the field. In this work, we further expand the arsenal of efficient building blocks for neural network architectures; but instead of combining standard primitives (such as convolution), we advocate for the replacement of these dense primitives with their sparse counterparts. While the idea of using sparsity to decrease the parameter count is not new, the conventional wisdom is that this reduction in theoretical FLOPs does not translate into real-world efficiency gains. We aim to correct this misconception by introducing a family of efficient sparse kernels for ARM and WebAssembly, which we open-source for the benefit of the community as part of the XNNPACK library. Equipped with our efficient implementation of sparse primitives, we show that sparse versions of MobileNet v1, MobileNet v2 and EfficientNet architectures substantially outperform strong dense baselines on the efficiency-accuracy curve. On Snapdragon 835 our sparse networks outperform their dense equivalents by $1.3-2.4\times$ -- equivalent to approximately one entire generation of MobileNet-family improvement. We hope that our findings will facilitate wider adoption of sparsity as a tool for creating efficient and accurate deep learning architectures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 154,591 |
2407.11735 | ProSub: Probabilistic Open-Set Semi-Supervised Learning with
Subspace-Based Out-of-Distribution Detection | In open-set semi-supervised learning (OSSL), we consider unlabeled datasets that may contain unknown classes. Existing OSSL methods often use the softmax confidence for classifying data as in-distribution (ID) or out-of-distribution (OOD). Additionally, many works for OSSL rely on ad-hoc thresholds for ID/OOD classification, without considering the statistics of the problem. We propose a new score for ID/OOD classification based on angles in feature space between data and an ID subspace. Moreover, we propose an approach to estimate the conditional distributions of scores given ID or OOD data, enabling probabilistic predictions of data being ID or OOD. These components are put together in a framework for OSSL, termed \emph{ProSub}, that is experimentally shown to reach SOTA performance on several benchmark problems. Our code is available at https://github.com/walline/prosub. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 473,594 |
2309.04453 | WiSARD: A Labeled Visual and Thermal Image Dataset for Wilderness Search
and Rescue | Sensor-equipped unoccupied aerial vehicles (UAVs) have the potential to help reduce search times and alleviate safety risks for first responders carrying out Wilderness Search and Rescue (WiSAR) operations, the process of finding and rescuing person(s) lost in wilderness areas. Unfortunately, visual sensors alone do not address the need for robustness across all the possible terrains, weather, and lighting conditions that WiSAR operations can be conducted in. The use of multi-modal sensors, specifically visual-thermal cameras, is critical in enabling WiSAR UAVs to perform in diverse operating conditions. However, due to the unique challenges posed by the wilderness context, existing dataset benchmarks are inadequate for developing vision-based algorithms for autonomous WiSAR UAVs. To this end, we present WiSARD, a dataset with roughly 56,000 labeled visual and thermal images collected from UAV flights in various terrains, seasons, weather, and lighting conditions. To the best of our knowledge, WiSARD is the first large-scale dataset collected with multi-modal sensors for autonomous WiSAR operations. We envision that our dataset will provide researchers with a diverse and challenging benchmark that can test the robustness of their algorithms when applied to real-world (life-saving) applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 390,732 |
2309.09358 | An Automatic Tuning MPC with Application to Ecological Cruise Control | Model predictive control (MPC) is a powerful tool for planning and controlling dynamical systems due to its capacity for handling constraints and taking advantage of preview information. Nevertheless, MPC performance is highly dependent on the choice of cost function tuning parameters. In this work, we demonstrate an approach for online automatic tuning of an MPC controller with an example application to an ecological cruise control system that saves fuel by using a preview of road grade. We solve the global fuel consumption minimization problem offline using dynamic programming and find the corresponding MPC cost function by solving the inverse optimization problem. A neural network fitted to these offline results is used to generate the desired MPC cost function weight during online operation. The effectiveness of the proposed approach is verified in simulation for different road geometries. | false | true | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 392,575 |
2102.05998 | A Survey on Synchronous Augmented, Virtual and Mixed Reality Remote
Collaboration Systems | Remote collaboration systems have become increasingly important in today's society, especially during times where physical distancing is advised. Industry, research and individuals face the challenging task of collaborating and networking over long distances. While video and teleconferencing are already widespread, collaboration systems in augmented, virtual, and mixed reality are still a niche technology. We provide an overview of recent developments of synchronous remote collaboration systems and create a taxonomy by dividing them into three main components that form such systems: Environment, Avatars, and Interaction. A thorough overview of existing systems is given, categorising their main contributions in order to help researchers working in different fields by providing concise information about specific topics such as avatars, virtual environment, visualisation styles and interaction. The focus of this work is clearly on synchronised collaboration from a distance. A total of 82 unique systems for remote collaboration are discussed, including more than 100 publications and 25 commercial systems. | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 219,606 |
1811.00498 | Multilingual NMT with a language-independent attention bridge | In this paper, we propose a multilingual encoder-decoder architecture capable of obtaining multilingual sentence representations by means of incorporating an intermediate {\em attention bridge} that is shared across all languages. That is, we train the model with language-specific encoders and decoders that are connected via self-attention with a shared layer that we call attention bridge. This layer exploits the semantics from each language for performing translation and develops into a language-independent meaning representation that can efficiently be used for transfer learning. We present a new framework for the efficient development of multilingual NMT using this model and scheduled training. We have tested the approach in a systematic way with a multi-parallel data set. We show that the model achieves substantial improvements over strong bilingual models and that it also works well for zero-shot translation, which demonstrates its capacity for abstraction and transfer learning. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 112,113
2305.11464 | A Real-Time Limit Order Book as a Market Mechanism for Transactive
Energy Systems | This paper presents a limit order book (LOB) market mechanism design for transactive energy systems. The proposed design is planned for deployment in New Hampshire and Maine under a US Department of Energy Connected Communities project. The new LOB mechanism is intended to replace or work in conjunction with the conventional transactive energy double auction mechanisms designed for retail real-time electricity price discovery, and will facilitate significant scaling of transactive energy systems. The paper provides LOB market rules, clearing algorithm, and illustrative examples and discusses clearing algorithm performance and reliability. The proposed LOB design includes support for discovering prices arising from wholesale electricity markets, distribution system asset constraints, distributed energy resource constraints, and consumer willingness to consume or produce at a reservation price. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 365,543 |
1506.01743 | Socially Driven News Recommendation | The participatory Web has enabled the ubiquitous and pervasive access to information, accompanied by an increase in the speed and reach of information sharing. Data dissemination services such as news aggregators are expected to provide up-to-date, real-time information to the end users. News aggregators are in essence recommendation systems that filter and rank news stories in order to select the few that will appear on the user's front screen at any time. One of the main challenges in such systems is to address the recency and latency problems, that is, to identify as soon as possible how important a news story is. In this work we propose an integrated framework that aims at predicting the importance of news items upon their publication with a focus on recent and highly popular news, employing resampling strategies, and at translating the result into concrete news rankings. We perform an extensive experimental evaluation of the proposed framework using real-life datasets, as both a stand-alone system and when applied to news recommendations from Google News. Additionally, we propose and evaluate a combinatorial solution to the augmentation of official media recommendations with social information. Results show that the proposed approach complements and enhances the news rankings generated by state-of-the-art systems. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 43,822
0710.0564 | TP Decoding | `Tree pruning' (TP) is an algorithm for probabilistic inference on binary Markov random fields. It has been recently derived by Dror Weitz and used to construct the first fully polynomial approximation scheme for counting independent sets up to the `tree uniqueness threshold.' It can be regarded as a clever method for pruning the belief propagation computation tree in such a way as to exactly account for the effect of loops. In this paper we generalize the original algorithm to make it suitable for decoding linear codes, and discuss various schemes for pruning the computation tree. Further, we present the outcomes of numerical simulations on several linear codes, showing that tree pruning makes it possible to interpolate continuously between belief propagation and maximum a posteriori decoding. Finally, we discuss theoretical implications of the new method. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 727
2201.09635 | State-Conditioned Adversarial Subgoal Generation | Hierarchical reinforcement learning (HRL) proposes to solve difficult tasks by performing decision-making and control at successively higher levels of temporal abstraction. However, off-policy HRL often suffers from the problem of a non-stationary high-level policy since the low-level policy is constantly changing. In this paper, we propose a novel HRL approach for mitigating the non-stationarity by adversarially enforcing the high-level policy to generate subgoals compatible with the current instantiation of the low-level policy. In practice, the adversarial learning is implemented by training a simple state-conditioned discriminator network concurrently with the high-level policy which determines the compatibility level of subgoals. Comparison to state-of-the-art algorithms shows that our approach improves both learning efficiency and performance in challenging continuous control tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 276,735 |
2410.15642 | Resource-Efficient Medical Report Generation using Large Language Models | Medical report generation is the task of automatically writing radiology reports for chest X-ray images. Manually composing these reports is a time-consuming process that is also prone to human errors. Generating medical reports can therefore help reduce the burden on radiologists. In other words, we can promote greater clinical automation in the medical domain. In this work, we propose a new framework leveraging vision-enabled Large Language Models (LLM) for the task of medical report generation. We introduce a lightweight solution that achieves better or comparative performance as compared to previous solutions on the task of medical report generation. We conduct extensive experiments exploring different model sizes and enhancement approaches, such as prefix tuning to improve the text generation abilities of the LLMs. We evaluate our approach on a prominent large-scale radiology report dataset - MIMIC-CXR. Our results demonstrate the capability of our resource-efficient framework to generate patient-specific reports with strong medical contextual understanding and high precision. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 500,653 |
1606.04155 | Rationalizing Neural Predictions | Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications -- rationales -- that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms an attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | 57,196
1801.07395 | On the Computation of Optimal Control Problems with Terminal Inequality
Constraint via Variation Evolution | Studies regarding the computation of Optimal Control Problems (OCPs) with terminal inequality constraint, under the frame of the Variation Evolving Method (VEM), are carried out. The attributes of equality constraints and inequality constraints in the generalized optimization problem are examined, and the intrinsic relations to the multipliers are uncovered. Building on these preliminaries, the right Evolution Partial Differential Equation (EPDE) is derived, and the costate-free optimality conditions are established. In addition to the analytic expressions for the costates in the classic treatment, these conditions also reveal the analytic relations between the states, the controls and the (Lagrange and KKT) multipliers, which adjoin the terminal (equality and inequality) constraints. Moreover, in solving the transformed Initial-value Problems (IVPs) with common Ordinary Differential Equation (ODE) integration methods, a numerical soft barrier is proposed to eliminate the numerical error resulting from the suddenly triggered inequality constraint, and it is shown to be effective. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 88,780
1612.06835 | Box constrained $\ell_1$ optimization in random linear systems --
asymptotics | In this paper we consider box constrained adaptations of $\ell_1$ optimization heuristic when applied for solving random linear systems. These are typically employed when on top of being sparse the systems' solutions are also known to be confined in a specific way to an interval on the real axis. Two particular $\ell_1$ adaptations (to which we will refer as the \emph{binary} $\ell_1$ and \emph{box} $\ell_1$) will be discussed in great detail. Many of their properties will be addressed with a special emphasis on the so-called phase transitions (PT) phenomena and the large deviation principles (LDP). We will fully characterize these through two different mathematical approaches, the first one that is purely probabilistic in nature and the second one that connects to high-dimensional geometry. Of particular interest we will find that for many fairly hard mathematical problems a collection of pretty elegant characterizations of their final solutions will turn out to exist. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 65,867 |
2411.02045 | Conversations with Data: How Data Journalism Affects Online Comments in
the New York Times | Users in the data age have access to more data than ever before, but little is known about how they interact with it. Using transparency and multimedia, data journalism (DJ) lets users explore and interpret data on their own. This study examines how DJ affects online comments as a case study of user interactions with data. The corpus comprises 6,400 stories and their comment sections from the DJ and other sections of the New York Times, from 2014-2022. Results indicate that DJ is positively associated with a higher level of interactivity between users. This relationship is mediated by statistical information, information sources, and static visualizations. However, there is a low level of interactivity with the content itself; consequently, only some users engage with it. The results demonstrate how data accessibility through DJ engages users in conversation. According to deliberation theory, this creates a conducive environment for democratic processes. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 505,329
2209.06612 | Distribution Calibration for Out-of-Domain Detection with Bayesian
Approximation | Out-of-Domain (OOD) detection is a key component in a task-oriented dialog system, which aims to identify whether a query falls outside the predefined supported intent set. Previous softmax-based detection algorithms have been shown to be overconfident for OOD samples. In this paper, we analyze how overconfident OOD predictions arise from distribution uncertainty due to the mismatch between the training and test distributions, which prevents the model from making confident predictions and can thus produce abnormal softmax scores. We propose a Bayesian OOD detection framework to calibrate distribution uncertainty using Monte-Carlo Dropout. Our method is flexible and easily pluggable into existing softmax-based baselines, and gains a 33.33\% OOD F1 improvement while increasing inference time by only 0.41\% compared to MSP. Further analyses show the effectiveness of Bayesian learning for OOD detection. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 317,455
2405.00747 | Soft Preference Optimization: Aligning Language Models to Expert
Distributions | We propose Soft Preference Optimization (SPO), a method for aligning generative models, such as Large Language Models (LLMs), with human preferences, without the need for a reward model. SPO optimizes model outputs directly over a preference dataset through a natural loss function that integrates preference loss with a regularization term across the model's entire output distribution rather than limiting it to the preference dataset. Although SPO does not require the assumption of an existing underlying reward model, we demonstrate that, under the Bradley-Terry (BT) model assumption, it converges to a softmax of scaled rewards, with the distribution's "softness" adjustable via the softmax exponent, an algorithm parameter. We showcase SPO's methodology, its theoretical foundation, and its comparative advantages in simplicity, computational efficiency, and alignment precision. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 451,067 |
2211.00497 | Modelling black-box audio effects with time-varying feature modulation | Deep learning approaches for black-box modelling of audio effects have shown promise; however, the majority of existing work focuses on nonlinear effects with behaviour on relatively short time-scales, such as guitar amplifiers and distortion. While recurrent and convolutional architectures can theoretically be extended to capture behaviour at longer time scales, we show that simply scaling the width, depth, or dilation factor of existing architectures does not result in satisfactory performance when modelling audio effects such as fuzz and dynamic range compression. To address this, we propose the integration of time-varying feature-wise linear modulation into existing temporal convolutional backbones, an approach that enables learnable adaptation of the intermediate activations. We demonstrate that our approach more accurately captures long-range dependencies for a range of fuzz and compressor implementations across both time and frequency domain metrics. We provide sound examples, source code, and pretrained models to facilitate reproducibility. | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 327,891
2004.11726 | A Two-Stage Multiple Instance Learning Framework for the Detection of
Breast Cancer in Mammograms | Mammograms are commonly employed in the large-scale screening of breast cancer, which is primarily characterized by the presence of malignant masses. However, automated image-level detection of malignancy is a challenging task given the small size of the mass regions and the difficulty of discriminating between malignant masses, benign masses, and healthy dense fibro-glandular tissue. To address these issues, we explore a two-stage Multiple Instance Learning (MIL) framework. A Convolutional Neural Network (CNN) is trained in the first stage to extract local candidate patches in the mammograms that may contain either a benign or malignant mass. The second stage employs a MIL strategy for an image-level benign vs. malignant classification. A global image-level feature is computed as a weighted average of patch-level features learned using a CNN. Our method performed well on the task of localization of masses with an average Precision/Recall of 0.76/0.80 and achieved an average AUC of 0.91 on the image-level classification task using a five-fold cross-validation on the INbreast dataset. Restricting the MIL only to the candidate patches extracted in Stage 1 led to a significant improvement in classification performance in comparison to a dense extraction of patches from the entire mammogram. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 174,001
2107.11246 | Chance Constrained Economic Dispatch Considering the Capability of
Network Flexibility Against Renewable Uncertainties | This paper incorporates a continuous-type network flexibility into chance constrained economic dispatch (CCED). In the proposed model, both power generations and line susceptances are continuous variables to minimize the expected generation cost and guarantee a low probability of constraint violation in terms of generations and line flows under renewable uncertainties. From the analytical form of CCED, we figure out the mechanism of network flexibility against uncertainties -- while renewable uncertainties shrink the usable line capacities and aggravate transmission congestion, network flexibility mitigates congestion by re-routing the base-case line flows and reducing the line capacity shrinkage caused by uncertainties. Further, we propose an alternate iteration solver for this problem. By duality theory, we set up a master problem in the form of second-order cone programming to optimize generation dispatch scheme and a subproblem in the form of linear programming to optimize line susceptances. A satisfactory solution can be obtained efficiently by alternately solving these two problems. The proposed method applies to both Gaussian uncertainty and non-Gaussian uncertainty by means of Gaussian mixture model. The case studies on the IEEE 14-bus system and IEEE 118-bus system suggest that network flexibility can significantly improve operational economy while ensuring security under uncertainties. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 247,541 |
2501.16138 | Quantifying the Self-Interest Level of Markov Social Dilemmas | This paper introduces a novel method for estimating the self-interest level of computationally intractable Markov social dilemmas. We extend the concept of self-interest level from normal-form games to Markov games, providing a quantitative measure of the minimum reward exchange required to incentivize cooperation by aligning individual and collective interests. We demonstrate our method on three environments from the Melting Pot suite, which represent either common-pool resources or public goods. Our results show that the proposed method successfully identifies a threshold at which learning agents transition from selfish to cooperative equilibria in a Markov social dilemma. This work contributes to the fields of Cooperative AI and multiagent reinforcement learning by providing a practical tool for analysing complex, multistep social dilemmas. Our findings offer insights into how reward structures can promote or hinder cooperation in challenging multiagent scenarios, with potential applications in areas such as mechanism design. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 527,831
2310.19263 | A Metadata-Driven Approach to Understand Graph Neural Networks | Graph Neural Networks (GNNs) have achieved remarkable success in various applications, but their performance can be sensitive to specific data properties of the graph datasets they operate on. Current literature on understanding the limitations of GNNs has primarily employed a $\textit{model-driven}$ approach that leverages heuristics and domain knowledge from network science or graph theory to model GNN behaviors, which is time-consuming and highly subjective. In this work, we propose a $\textit{metadata-driven}$ approach to analyze the sensitivity of GNNs to graph data properties, motivated by the increasing availability of graph learning benchmarks. We perform a multivariate sparse regression analysis on the metadata derived from benchmarking GNN performance across diverse datasets, yielding a set of salient data properties. To validate the effectiveness of our data-driven approach, we focus on one identified data property, the degree distribution, and investigate how this property influences GNN performance through theoretical analysis and controlled experiments. Our theoretical findings reveal that datasets with a more balanced degree distribution exhibit better linear separability of node representations, thus leading to better GNN performance. We also conduct controlled experiments using synthetic datasets with varying degree distributions, and the results align well with our theoretical findings. Collectively, both the theoretical analysis and controlled experiments verify that the proposed metadata-driven approach is effective in identifying critical data properties for GNNs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 403,923
2201.05405 | The Implicit Regularization of Momentum Gradient Descent with Early
Stopping | The study of the implicit regularization induced by gradient-based optimization is a longstanding pursuit. In the present paper, we characterize the implicit regularization of momentum gradient descent (MGD) with early stopping by comparing it with explicit $\ell_2$-regularization (ridge). In detail, we study MGD in the continuous-time view, the so-called momentum gradient flow (MGF), and show that its tendency is closer to ridge than that of gradient descent (GD) [Ali et al., 2019] for least squares regression. Moreover, we prove that, under the calibration $t=\sqrt{2/\lambda}$, where $t$ is the time parameter in MGF and $\lambda$ is the tuning parameter in ridge regression, the risk of MGF is no more than 1.54 times that of ridge. In particular, the relative Bayes risk of MGF to ridge is between 1 and 1.035 under the optimal tuning. The numerical experiments strongly support our theoretical results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 275,381
2310.05149 | Retrieval-Generation Synergy Augmented Large Language Models | Large language models augmented with task-relevant documents have demonstrated impressive performance on knowledge-intensive tasks. However, regarding how to obtain effective documents, the existing methods are mainly divided into two categories. One is to retrieve from an external knowledge base, and the other is to utilize large language models to generate documents. We propose an iterative retrieval-generation collaborative framework. It is not only able to leverage both parametric and non-parametric knowledge, but also helps to find the correct reasoning path through retrieval-generation interactions, which is very important for tasks that require multi-step reasoning. We conduct experiments on four question answering datasets, including single-hop QA and multi-hop QA tasks. Empirical results show that our method significantly improves the reasoning ability of large language models and outperforms previous baselines. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 398,008 |
1809.05142 | A Deep Learning and Gamification Approach to Energy Conservation at Nanyang Technological University | The implementation of smart building technology in the form of smart infrastructure applications has great potential to improve sustainability and energy efficiency by leveraging a humans-in-the-loop strategy. However, human preference in regard to living conditions is usually unknown and heterogeneous in its manifestation as control inputs to a building. Furthermore, the occupants of a building typically lack the independent motivation necessary to contribute to and play a key role in the control of smart building infrastructure. Moreover, true human actions and their integration with sensing/actuation platforms remain unknown to the decision maker tasked with improving operational efficiency. By modeling user interaction as a sequential discrete game between non-cooperative players, we introduce a gamification approach for supporting user engagement and integration in a human-centric cyber-physical system. We propose the design and implementation of a large-scale network game with the goal of improving the energy efficiency of a building through the utilization of cutting-edge Internet of Things (IoT) sensors and cyber-physical systems sensing/actuation platforms. A benchmark utility learning framework that employs robust estimations for classical discrete choice models is provided for the derived high-dimensional imbalanced data. To improve forecasting performance, we extend the benchmark utility learning scheme by leveraging Deep Learning end-to-end training with Deep bi-directional Recurrent Neural Networks. We apply the proposed methods to high-dimensional data from a social game experiment designed to encourage energy-efficient behavior among smart building occupants in Nanyang Technological University (NTU) residential housing. Using occupant-retrieved actions for resources such as lighting and A/C, we simulate the game defined by the estimated utility functions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 107,724 |
2411.08019 | Language Models as Causal Effect Generators | We present a framework for large language model (LLM) based data generation with controllable causal structure. In particular, we define a procedure for turning any language model and any directed acyclic graph (DAG) into a sequence-driven structural causal model (SD-SCM). Broadly speaking, an SD-SCM is a causal model with user-defined structure and LLM-defined structural equations. We characterize how an SD-SCM allows sampling from observational, interventional, and counterfactual distributions according to the desired causal structure. We then leverage this procedure to propose a new type of benchmark for causal inference methods, generating individual-level counterfactual data without needing to manually specify functional relationships between variables. We create an example benchmark consisting of thousands of datasets, and test a suite of popular estimation methods on these datasets for average, conditional average, and individual treatment effect estimation, both with and without hidden confounding. Apart from generating data, the same procedure also allows us to test for the presence of a causal effect that might be encoded in an LLM. This procedure can underpin auditing LLMs for misinformation, discrimination, or otherwise undesirable behavior. We believe SD-SCMs can serve as a useful tool in any application that would benefit from sequential data with controllable causal structure. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 507,750 |
2502.06788 | EVEv2: Improved Baselines for Encoder-Free Vision-Language Models | Existing encoder-free vision-language models (VLMs) are rapidly narrowing the performance gap with their encoder-based counterparts, highlighting the promising potential for unified multimodal systems with structural simplicity and efficient deployment. We systematically clarify the performance gap between VLMs using pre-trained vision encoders, discrete tokenizers, and minimalist visual layers from scratch, deeply excavating the under-examined characteristics of encoder-free VLMs. We develop efficient strategies for encoder-free VLMs that rival mainstream encoder-based ones. After an in-depth investigation, we launch EVEv2.0, a new and improved family of encoder-free VLMs. We show that: (i) Properly decomposing and hierarchically associating vision and language within a unified model reduces interference between modalities. (ii) A well-designed training strategy enables effective optimization for encoder-free VLMs. Through extensive evaluation, our EVEv2.0 represents a thorough study for developing a decoder-only architecture across modalities, demonstrating superior data efficiency and strong vision-reasoning capability. Code is publicly available at: https://github.com/baaivision/EVE. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 532,245 |
2304.01094 | Data-Efficient Policy Selection for Navigation in Partial Maps via Subgoal-Based Abstraction | We present a novel approach for fast and reliable policy selection for navigation in partial maps. Leveraging the recent learning-augmented model-based Learning over Subgoals Planning (LSP) abstraction to plan, our robot reuses data collected during navigation to evaluate how well other alternative policies could have performed via a procedure we call offline alt-policy replay. Costs from offline alt-policy replay constrain policy selection among the LSP-based policies during deployment, allowing for improvements in convergence speed, cumulative regret and average navigation cost. With only limited prior knowledge about the nature of unseen environments, we achieve at least 67% and as much as 96% improvements on cumulative regret over the baseline bandit approach in our experiments in simulated maze and office-like environments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 355,933 |
2105.08721 | A LightGBM based Forecasting of Dominant Wave Periods in Oceanic Waters | In this paper, we propose a Light Gradient Boosting Machine (LightGBM) model to forecast dominant wave periods in oceanic waters. First, we use the data collected from CDIP buoys and apply various data filtering methods. The data filtering methods allow us to obtain a high-quality dataset for training and validation purposes. We then extract various wave-based features like wave heights, periods, skewness, kurtosis, etc., and atmospheric features like humidity, pressure, and air temperature for the buoys. Afterward, we train algorithms that use LightGBM and Extra Trees through a hv-block cross-validation scheme to forecast dominant wave periods for up to 30 days ahead. LightGBM has an R2 score of 0.94, 0.94, and 0.94 for 1-day ahead, 15-day ahead, and 30-day ahead prediction. Similarly, Extra Trees (ET) has an R2 score of 0.88, 0.86, and 0.85 for 1-day ahead, 15-day ahead, and 30-day ahead prediction. In case of the test dataset, LightGBM has an R2 score of 0.94, 0.94, and 0.94 for 1-day ahead, 15-day ahead, and 30-day ahead prediction. ET has an R2 score of 0.88, 0.86, and 0.85 for 1-day ahead, 15-day ahead, and 30-day ahead prediction. A similar R2 score for both training and the test dataset suggests that the machine learning models developed in this paper are robust. Since the LightGBM algorithm outperforms ET for all the windows tested, it is taken as the final algorithm. Note that the performance of both methods does not decrease significantly as the forecast horizon increases. Likewise, the proposed method outperforms the numerical approaches included in this paper in the test dataset. For 1-day ahead prediction, the proposed algorithm has SI, Bias, CC, and RMSE of 0.09, 0.00, 0.97, and 1.78 compared to 0.268, 0.40, 0.63, and 2.18 for the European Centre for Medium-range Weather Forecasts (ECMWF) model, which outperforms all the other methods in the test dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 235,851 |