Dataset schema. Each record below is rendered as five fields: "id:", "title:", the abstract text, "categories:" listing the boolean category columns whose value is true for that record (all flags not listed are false), and "__index_level_0__:", the original row index.

    id                 string   (length 9 to 16)
    title              string   (length 4 to 278)
    abstract           string   (length 3 to 4.08k)
    cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT,
    cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                       bool     (2 classes each; one-vs-rest category flags)
    __index_level_0__  int64    (0 to 541k)
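For reference, here is a minimal sketch of how a table with this schema can be collapsed into the per-record "categories:" form used below. It assumes only that the rows are available to pandas; the file name "arxiv_multilabel.parquet" is a hypothetical placeholder, not part of the original dataset.

import pandas as pd

# The 18 boolean flag columns, in the order given by the schema above.
FLAG_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

# Hypothetical file name; substitute whichever export of the table you have.
df = pd.read_parquet("arxiv_multilabel.parquet")

# Collapse the one-vs-rest flags into a list of category names per row.
df["categories"] = df[FLAG_COLUMNS].apply(
    lambda row: [name for name, flag in row.items() if flag], axis=1
)

print(df[["id", "title", "categories"]].head())

Keeping the flags as one list per record makes the multi-label structure explicit: several records below carry two or three true flags at once.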
id: 2003.12673
title: Semantic Implicit Neural Scene Representations With Semi-Supervised Training
The recent success of implicit neural scene representations has presented a viable new method for how we capture and store 3D scenes. Unlike conventional 3D representations, such as point clouds, which explicitly store scene properties in discrete, localized units, these implicit representations encode a scene in the weights of a neural network which can be queried at any coordinate to produce these same scene properties. Thus far, implicit representations have primarily been optimized to estimate only the appearance and/or 3D geometry information in a scene. We take the next step and demonstrate that an existing implicit representation (SRNs) is actually multi-modal; it can be further leveraged to perform per-point semantic segmentation while retaining its ability to represent appearance and geometry. To achieve this multi-modal behavior, we utilize a semi-supervised learning strategy atop the existing pre-trained scene representation. Our method is simple, general, and only requires a few tens of labeled 2D segmentation masks in order to achieve dense 3D semantic segmentation. We explore two novel applications for this semantically aware implicit neural scene representation: 3D novel view and semantic label synthesis given only a single input RGB image or 2D label mask, as well as 3D interpolation of appearance and semantics.
categories: cs.CV
__index_level_0__: 169,977
id: 2205.05989
title: Towards Answering Open-ended Ethical Quandary Questions
Considerable advancements have been made in various NLP tasks based on the impressive power of large language models (LLMs) and many NLP applications are deployed in our daily lives. In this work, we challenge the capability of LLMs with the new task of Ethical Quandary Generative Question Answering. Ethical quandary questions are more challenging to address because multiple conflicting answers may exist to a single quandary. We explore the current capability of LLMs in providing an answer with a deliberative exchange of different perspectives to an ethical quandary, in the approach of Socratic philosophy, instead of providing a closed answer like an oracle. We propose a model that searches for different ethical principles applicable to the ethical quandary and generates an answer conditioned on the chosen principles through prompt-based few-shot learning. We also discuss the remaining challenges and ethical issues involved in this task and suggest the direction toward developing responsible NLP systems by incorporating human values explicitly.
categories: cs.AI, cs.LG, cs.CL
__index_level_0__: 296,112
id: 2206.10802
title: SVoRT: Iterative Transformer for Slice-to-Volume Registration in Fetal Brain MRI
Volumetric reconstruction of fetal brains from multiple stacks of MR slices, acquired in the presence of almost unpredictable and often severe subject motion, is a challenging task that is highly sensitive to the initialization of slice-to-volume transformations. We propose a novel slice-to-volume registration method using Transformers trained on synthetically transformed data, which model multiple stacks of MR slices as a sequence. With the attention mechanism, our model automatically detects the relevance between slices and predicts the transformation of one slice using information from other slices. We also estimate the underlying 3D volume to assist slice-to-volume registration and update the volume and transformations alternately to improve accuracy. Results on synthetic data show that our method achieves lower registration error and better reconstruction quality compared with existing state-of-the-art methods. Experiments with real-world MRI data are also performed to demonstrate the ability of the proposed model to improve the quality of 3D reconstruction under severe fetal motion.
categories: cs.CV
__index_level_0__: 304,039
id: 2012.03930
title: Identity-Driven DeepFake Detection
DeepFake detection has so far been dominated by ``artifact-driven'' methods and the detection performance significantly degrades when either the type of image artifacts is unknown or the artifacts are simply too hard to find. In this work, we present an alternative approach: Identity-Driven DeepFake Detection. Our approach takes as input the suspect image/video as well as the target identity information (a reference image or video). We output a decision on whether the identity in the suspect image/video is the same as the target identity. Our motivation is to prevent the most common and harmful DeepFakes that spread false information about a targeted person. The identity-based approach is fundamentally different in that it does not attempt to detect image artifacts. Instead, it focuses on whether the identity in the suspect image/video is true. To facilitate research on identity-based detection, we present a new large-scale dataset ``Vox-DeepFake'', in which each suspect content is associated with multiple reference images collected from videos of a target identity. We also present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research. Even trained without fake videos, the OuterFace algorithm achieves superior detection accuracy and generalizes well to different DeepFake methods, and is robust with respect to video degradation techniques -- a performance not achievable with existing detection algorithms.
categories: cs.CV
__index_level_0__: 210,305
id: 2201.11916
title: Constraint-based Formation of Drone Swarms
Drone swarms are required for the simultaneous delivery of multiple packages. We demonstrate a multi-stop drone swarm-based delivery in a smart city. We leverage formation flying to conserve energy and increase the flight range of a drone swarm. An adaptive formation is presented in which a swarm adjusts to extrinsic constraints and changes the formation pattern in-flight. We utilize the existing building rooftops in a city and build a line-of-sight skyway network to safely operate the swarms. We use a heuristic-based A* algorithm to route a drone swarm in a skyway network.
categories: cs.RO
__index_level_0__: 277,445
id: 2106.12733
title: Feature Completion for Occluded Person Re-Identification
Person re-identification (reID) plays an important role in computer vision. However, existing methods suffer from performance degradation in occluded scenes. In this work, we propose an occlusion-robust block, Region Feature Completion (RFC), for occluded reID. Different from most previous works that discard the occluded regions, the RFC block can recover the semantics of occluded regions in feature space. Firstly, a Spatial RFC (SRFC) module is developed. SRFC exploits the long-range spatial contexts from non-occluded regions to predict the features of occluded regions. The unit-wise prediction task leads to an encoder/decoder architecture, where the region-encoder models the correlation between non-occluded and occluded regions, and the region-decoder utilizes the spatial correlation to recover occluded region features. Secondly, we introduce a Temporal RFC (TRFC) module which captures the long-term temporal contexts to refine the prediction of SRFC. The RFC block is lightweight, end-to-end trainable and can be easily plugged into existing CNNs to form RFCnet. Extensive experiments are conducted on occluded and commonly used holistic reID benchmarks. Our method significantly outperforms existing methods on the occlusion datasets, while retaining top (even superior) performance on holistic datasets. The source code is available at https://github.com/blue-blue272/OccludedReID-RFCnet.
categories: cs.CV
__index_level_0__: 242,820
id: 2207.03776
title: Towards Intrinsic Common Discriminative Features Learning for Face Forgery Detection using Adversarial Learning
Existing face forgery detection methods usually treat face forgery detection as a binary classification problem and adopt deep convolutional neural networks to learn discriminative features. The ideal discriminative features should be only related to the real/fake labels of facial images. However, we observe that the features learned by vanilla classification networks are correlated to unnecessary properties, such as forgery methods and facial identities. Such a phenomenon would limit forgery detection performance, especially the generalization ability. Motivated by this, we propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities, helping the classification network to learn intrinsic common discriminative features for face forgery detection. To leverage data lacking ground-truth labels of facial identities, we design a special identity discriminator based on similarity information derived from an off-the-shelf face recognition model. With the help of adversarial learning, our face forgery detection model learns to extract common discriminative features by eliminating the effect of forgery methods and facial identities. Extensive experiments demonstrate the effectiveness of the proposed method under both intra-dataset and cross-dataset evaluation settings.
categories: cs.CV
__index_level_0__: 306,972
id: 2006.07906
title: Fair Influence Maximization: A Welfare Optimization Approach
Several behavioral, social, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, leverage social network information to maximize outreach. Algorithmic influence maximization techniques have been proposed to aid with the choice of "peer leaders" or "influencers" in such interventions. Yet, traditional algorithms for influence maximization have not been designed with these interventions in mind. As a result, they may disproportionately exclude minority communities from the benefits of the intervention. This has motivated research on fair influence maximization. Existing techniques come with two major drawbacks. First, they require committing to a single fairness measure. Second, these measures are typically imposed as strict constraints leading to undesirable properties such as wastage of resources. To address these shortcomings, we provide a principled characterization of the properties that a fair influence maximization algorithm should satisfy. In particular, we propose a framework based on social welfare theory, wherein the cardinal utilities derived by each community are aggregated using the isoelastic social welfare functions. Under this framework, the trade-off between fairness and efficiency can be controlled by a single inequality aversion design parameter. We then show under what circumstances our proposed principles can be satisfied by a welfare function. The resulting optimization problem is monotone and submodular and can be solved efficiently with optimality guarantees. Our framework encompasses as special cases leximin and proportional fairness. Extensive experiments on synthetic and real world datasets including a case study on landslide risk management demonstrate the efficacy of the proposed framework.
categories: cs.SI, cs.AI, cs.LG
__index_level_0__: 182,000
id: 1510.08103
title: From Acquaintances to Friends: Homophily and Learning in Networks
This paper considers the evolution of a network in a discrete time, stochastic setting in which agents learn about each other through repeated interactions and maintain/break links on the basis of what they learn from these interactions. Agents have homophilous preferences and limited capacity, so they maintain links with others who are learned to be similar to themselves and cut links to others who are learned to be dissimilar to themselves. Thus learning influences the evolution of the network, but learning is imperfect so the evolution is stochastic. Homophily matters. Higher levels of homophily decrease the (average) number of links that agents form. However, the effect of homophily is anomalous: mutually beneficial links may be dropped before learning is completed, thereby resulting in sparser networks and less clustering than under complete information. There may be big differences between the networks that emerge under complete and incomplete information. Homophily matters here as well: initially, greater levels of homophily increase the difference between the complete and incomplete information networks, but sufficiently high levels of homophily eventually decrease the difference. Complete and incomplete information networks differ the most when the degree of homophily is intermediate. With multiple stages of life, the effects of incomplete information are large initially but fade somewhat over time.
categories: cs.SI
__index_level_0__: 48,252
id: 1902.08691
title: Enhancing Clinical Concept Extraction with Contextual Embeddings
Neural network-based representations ("embeddings") have dramatically advanced natural language processing (NLP) tasks, including clinical NLP tasks such as concept extraction. Recently, however, more advanced embedding methods and representations (e.g., ELMo, BERT) have further pushed the state-of-the-art in NLP, yet there are no common best practices for how to integrate these representations into clinical tasks. The purpose of this study, then, is to explore the space of possible options in utilizing these new models for clinical concept extraction, including comparing these to traditional word embedding methods (word2vec, GloVe, fastText). Both off-the-shelf open-domain embeddings and pre-trained clinical embeddings from MIMIC-III are evaluated. We explore a battery of embedding methods consisting of traditional word embeddings and contextual embeddings, and compare these on four concept extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015. We also analyze the impact of the pre-training time of a large language model like ELMo or BERT on the extraction performance. Last, we present an intuitive way to understand the semantic information encoded by contextual embeddings. Contextual embeddings pre-trained on a large clinical corpus achieve new state-of-the-art performance across all concept extraction tasks. The best-performing model outperforms all state-of-the-art methods with respective F1-measures of 90.25, 93.18 (partial), 80.74, and 81.65. We demonstrate the potential of contextual embeddings through the state-of-the-art performance these methods achieve on clinical concept extraction. Additionally, we demonstrate that contextual embeddings encode valuable semantic information not accounted for in traditional word representations.
categories: cs.CL
__index_level_0__: 122,249
id: 1504.01316
title: Progmosis: Evaluating Risky Individual Behavior During Epidemics Using Mobile Network Data
The possibility to analyze, quantify and forecast epidemic outbreaks is fundamental when devising effective disease containment strategies. Policy makers are faced with the intricate task of drafting realistically implementable policies that strike a balance between risk management and cost. Two major techniques policy makers have at their disposal are: epidemic modeling and contact tracing. Models are used to forecast the evolution of the epidemic both globally and regionally, while contact tracing is used to reconstruct the chain of people who have been potentially infected, so that they can be tested, isolated and treated immediately. However, both techniques might provide limited information, especially during an already advanced crisis when the need for action is urgent. In this paper we propose an alternative approach that goes beyond epidemic modeling and contact tracing, and leverages behavioral data generated by mobile carrier networks to evaluate contagion risk on a per-user basis. The individual risk represents the loss incurred by not isolating or treating a specific person, both in terms of how likely it is for this person to spread the disease and how many secondary infections they will cause. To this aim, we develop a model, named Progmosis, which quantifies this risk based on movement and regional aggregated statistics about infection rates. We develop and release an open-source tool that calculates this risk based on cellular network events. We simulate a realistic epidemic scenario, based on an Ebola virus outbreak; we find that gradually restricting the mobility of a subset of individuals reduces the number of infected people after 30 days by 24%.
categories: cs.SI
__index_level_0__: 41,796
id: 2406.04928
title: AGBD: A Global-scale Biomass Dataset
Accurate estimates of Above Ground Biomass (AGB) are essential in addressing two of humanity's biggest challenges, climate change and biodiversity loss. Existing datasets for AGB estimation from satellite imagery are limited. Either they focus on specific, local regions at high resolution, or they offer global coverage at low resolution. There is a need for a machine learning-ready, globally representative, high-resolution benchmark. Our findings indicate significant variability in biomass estimates across different vegetation types, emphasizing the necessity for a dataset that accurately captures global diversity. To address these gaps, we introduce a comprehensive new dataset that is globally distributed, covers a range of vegetation types, and spans several years. This dataset combines AGB reference data from the GEDI mission with data from Sentinel-2 and PALSAR-2 imagery. Additionally, it includes pre-processed high-level features such as a dense canopy height map, an elevation map, and a land-cover classification map. We also produce a dense, high-resolution (10m) map of AGB predictions for the entire area covered by the dataset. Rigorously tested, our dataset is accompanied by several benchmark models and is publicly available. It can be easily accessed using a single line of code, offering a solid basis for efforts towards global AGB estimation. The GitHub repository github.com/ghjuliasialelli/AGBD serves as a one-stop shop for all code and data.
categories: cs.LG, cs.CV
__index_level_0__: 461,914
id: 1911.05833
title: Emotion and Theme Recognition in Music with Frequency-Aware RF-Regularized CNNs
We present the CP-JKU submission to MediaEval 2019: a Receptive-Field (RF)-regularized and Frequency-Aware CNN approach for tagging music with emotion/mood labels. We perform an investigation regarding the impact of the RF of the CNNs on their performance on this dataset. We observe that ResNets with smaller receptive fields -- originally adapted for acoustic scene classification -- also perform well in the emotion tagging task. We improve the performance of such architectures using techniques such as Frequency Awareness and Shake-Shake regularization, which were used in previous work on general acoustic recognition tasks.
categories: cs.SD, cs.LG, Other
__index_level_0__: 153,376
id: 2402.13109
title: CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models
The advancement of large language models (LLMs) has enhanced the ability to generalize across a wide range of unseen natural language processing (NLP) tasks through instruction-following. Yet, their effectiveness often diminishes in low-resource languages like Chinese, exacerbated by biased evaluations from data leakage, casting doubt on their true generalizability to new linguistic territories. In response, we introduce the Chinese Instruction-Following Benchmark (CIF-Bench), designed to evaluate the zero-shot generalizability of LLMs to the Chinese language. CIF-Bench comprises 150 tasks and 15,000 input-output pairs, developed by native speakers to test complex reasoning and Chinese cultural nuances across 20 categories. To mitigate data contamination, we release only half of the dataset publicly, with the remainder kept private, and introduce diversified instructions to minimize score variance, totaling 45,000 data instances. Our evaluation of 28 selected LLMs reveals a noticeable performance gap, with the best model scoring only 52.9%, highlighting the limitations of LLMs in less familiar language and task contexts. This work not only uncovers the current limitations of LLMs in handling Chinese language tasks but also sets a new standard for future LLM generalizability research, pushing towards the development of more adaptable, culturally informed, and linguistically diverse models.
categories: cs.AI, cs.CL
__index_level_0__: 431,117
id: 2204.00209
title: Green Routing Game: Strategic Logistical Planning using Mixed Fleets of ICEVs and EVs
This paper introduces a "green" routing game between multiple logistic operators (players), each owning a mixed fleet of internal combustion engine vehicle (ICEV) and electric vehicle (EV) trucks. Each player faces the cost of delayed delivery (due to charging requirements of EVs) and a pollution cost levied on the ICEVs. This cost structure models: 1) the limited battery capacity of EVs and their charging requirement; 2) the shared nature of charging facilities; 3) the pollution cost levied by a regulatory agency on the use of ICEVs. We characterize Nash equilibria of this game and derive a condition for its uniqueness. We also use the gradient projection method to compute this equilibrium in a distributed manner. Our equilibrium analysis is useful to analyze the trade-off faced by players in incurring higher delay due to congestion at charging locations when the share of EVs increases versus a higher pollution cost when the share of ICEVs increases. A numerical example suggests that increasing the marginal pollution cost can dramatically reduce the inefficiency of equilibria.
categories: cs.SY, Other
__index_level_0__: 289,182
id: 2012.13300
title: Hierarchical Planning for Resource Allocation in Emergency Response Systems
A classical problem in city-scale cyber-physical systems (CPS) is resource allocation under uncertainty. Typically, such problems are modeled as Markov (or semi-Markov) decision processes. While online, offline, and decentralized approaches have been applied to such problems, they have difficulty scaling to large decision problems. We present a general approach to hierarchical planning that leverages structure in city-level CPS problems for resource allocation under uncertainty. We use emergency response as a case study and show how a large resource allocation problem can be split into smaller problems. We then create a principled framework for solving the smaller problems and tackling the interaction between them. Finally, we use real-world data from Nashville, Tennessee, a major metropolitan area in the United States, to validate our approach. Our experiments show that the proposed approach outperforms state-of-the-art approaches used in the field of emergency response.
categories: cs.AI
__index_level_0__: 213,178
id: 2012.06550
title: Characterizing Twitter users behaviour during the Spanish Covid-19 first wave
People use Online Social Media to make sense of crisis events. A pandemic crisis like the Covid-19 outbreak is a complex event, involving numerous aspects of social life on multiple temporal scales. Focusing on the Spanish Twittersphere, we characterized users' activity behaviour across the different phases of the Covid-19 first wave. Firstly, we analyzed a sample of timelines of different classes of users from the Spanish Twittersphere in terms of their propensity to produce new information or to amplify information produced by others. Secondly, by performing stepwise segmented regression analysis and Bayesian switchpoint analysis, we looked for a possible behavioral footprint of the crisis in the statistics of users' activity. We observed that generic Spanish Twitter users and journalists experienced an abrupt increment of their tweeting activity between March 9 and March 14, coinciding with control measures being announced by regional and State level authorities. However, they displayed a stable proportion of retweets before and after the switching point. On the contrary, politicians represented an exception, being the only class of users not experiencing this abrupt change and instead following a completely endogenous dynamics determined by the institutional agenda. On the one hand, they did not increment their overall activity, displaying instead a slight decrease. On the other hand, in times of crisis, politicians tended to strengthen their propensity to amplify information rather than produce it.
categories: cs.SI
__index_level_0__: 211,152
id: 2310.08104
title: Voice Conversion for Stuttered Speech, Instruments, Unseen Languages and Textually Described Voices
Voice conversion aims to convert source speech into a target voice using recordings of the target speaker as a reference. Newer models are producing increasingly realistic output. But what happens when models are fed with non-standard data, such as speech from a user with a speech impairment? We investigate how a recent voice conversion model performs on non-standard downstream voice conversion tasks. We use a simple but robust approach called k-nearest neighbors voice conversion (kNN-VC). We look at four non-standard applications: stuttered voice conversion, cross-lingual voice conversion, musical instrument conversion, and text-to-voice conversion. The latter involves converting to a target voice specified through a text description, e.g. "a young man with a high-pitched voice". Compared to an established baseline, we find that kNN-VC retains high performance in stuttered and cross-lingual voice conversion. Results are more mixed for the musical instrument and text-to-voice conversion tasks. E.g., kNN-VC works well on some instruments like drums but not on others. Nevertheless, this shows that voice conversion models - and kNN-VC in particular - are increasingly applicable in a range of non-standard downstream tasks. But there are still limitations when samples are very far from the training distribution. Code, samples, trained models: https://rf5.github.io/sacair2023-knnvc-demo/.
categories: cs.SD, cs.CL
__index_level_0__: 399,270
id: 2501.08507
title: A Framework for Dynamic Situational Awareness in Human Robot Teams: An Interview Study
In human-robot teams, human situational awareness is the operator's conscious knowledge of the team's states, actions, plans and their environment. Appropriate human situational awareness is critical to successful human-robot collaboration. In human-robot teaming, it is often assumed that the best and required level of situational awareness is knowing everything at all times. This view is problematic, because what a human needs to know for optimal team performance varies given the dynamic environmental conditions, task context and roles and capabilities of team members. We explore this topic by interviewing 16 participants with active and repeated experience in diverse human-robot teaming applications. Based on analysis of these interviews, we derive a framework explaining the dynamic nature of required situational awareness in human-robot teaming. In addition, we identify a range of factors affecting the dynamic nature of required and actual levels of situational awareness (i.e., dynamic situational awareness), types of situational awareness inefficiencies resulting from gaps between actual and required situational awareness, and their main consequences. We also reveal various strategies, initiated by humans and robots, that assist in maintaining the required situational awareness. Our findings inform the implementation of accurate estimates of dynamic situational awareness and the design of user-adaptive human-robot interfaces. Therefore, this work contributes to the future design of more collaborative and effective human-robot teams.
categories: cs.HC, cs.RO
__index_level_0__: 524,790
id: cmp-lg/9506012
title: Presenting Punctuation
Until recently, punctuation has received very little attention in the linguistics and computational linguistics literature. Since the publication of Nunberg's (1990) monograph on the topic, however, punctuation has seen its stock begin to rise: spurred in part by Nunberg's ground-breaking work, a number of valuable inquiries have been subsequently undertaken, including Hovy and Arens (1991), Dale (1991), Pascual (1993), Jones (1994), and Briscoe (1994). Continuing this line of research, I investigate in this paper how Nunberg's approach to presenting punctuation (and other formatting devices) might be incorporated into NLG systems. Insofar as the present paper focuses on the proper syntactic treatment of punctuation, it differs from these other subsequent works in that it is the first to examine this issue from the generation perspective.
categories: cs.CL
__index_level_0__: 536,416
id: 2411.17044
title: 4D Scaffold Gaussian Splatting for Memory Efficient Dynamic Scene Reconstruction
Existing 4D Gaussian methods for dynamic scene reconstruction offer high visual fidelity and fast rendering. However, these methods suffer from excessive memory and storage demands, which limits their practical deployment. This paper proposes a 4D anchor-based framework that retains the visual quality and rendering speed of 4D Gaussians while significantly reducing storage costs. Our method extends 3D scaffolding to 4D space, and leverages sparse 4D grid-aligned anchors with compressed feature vectors. Each anchor models a set of neural 4D Gaussians, each of which represents a local spatiotemporal region. In addition, we introduce a temporal coverage-aware anchor growing strategy to effectively assign additional anchors to under-reconstructed dynamic regions. Our method adjusts the accumulated gradients based on Gaussians' temporal coverage, improving reconstruction quality in dynamic regions. To reduce the number of anchors, we further present enhanced formulations of neural 4D Gaussians. These include a neural velocity and a temporal opacity derived from a generalized Gaussian distribution. Experimental results demonstrate that our method achieves state-of-the-art visual quality and a 97.8% storage reduction over 4DGS.
categories: cs.CV, Other
__index_level_0__: 511,281
id: 2406.05428
title: Information-Theoretic Thresholds for the Alignments of Partially Correlated Graphs
This paper studies the problem of recovering the hidden vertex correspondence between two correlated random graphs. We propose the partially correlated Erd\H{o}s-R\'enyi graphs model, wherein a pair of induced subgraphs with a certain number of vertices are correlated. We investigate the information-theoretic thresholds for recovering the latent correlated subgraphs and the hidden vertex correspondence. We prove that there exists an optimal rate for partial recovery for the number of correlated nodes, above which one can correctly match a fraction of vertices and below which correctly matching any positive fraction is impossible, and we also derive an optimal rate for exact recovery. In the proof of the possibility results, we propose correlated functional digraphs, which partition the edges of the intersection graph into two types of components, and bound the error probability by lower-order cumulant generating functions. The proof of the impossibility results builds upon the generalized Fano's inequality and the recovery thresholds settled in the correlated Erd\H{o}s-R\'enyi graphs model.
categories: cs.IT
__index_level_0__: 462,132
id: 2502.01458
title: Understanding the Capabilities and Limitations of Weak-to-Strong Generalization
Weak-to-strong generalization, where weakly supervised strong models outperform their weaker teachers, offers a promising approach to aligning superhuman models with human values. To deepen the understanding of this approach, we provide theoretical insights into its capabilities and limitations. First, in the classification setting, we establish upper and lower generalization error bounds for the strong model, identifying the primary limitations as stemming from the weak model's generalization error and the optimization objective itself. Additionally, we derive lower and upper bounds on the calibration error of the strong model. These theoretical bounds reveal two critical insights: (1) the weak model should demonstrate strong generalization performance and maintain well-calibrated predictions, and (2) the strong model's training process must strike a careful balance, as excessive optimization could undermine its generalization capability by over-relying on the weak supervision signals. Finally, in the regression setting, we extend the work of Charikar et al. (2024) to a loss function based on Kullback-Leibler (KL) divergence, offering guarantees that the strong student can outperform its weak teacher by at least the magnitude of their disagreement. We conduct sufficient experiments to validate our theory.
categories: cs.LG
__index_level_0__: 529,863
id: 2201.09965
title: Decentralized EM to Learn Gaussian Mixtures from Datasets Distributed by Features
Expectation Maximization (EM) is the standard method to learn Gaussian mixtures. Yet its classic, centralized form is often infeasible, due to privacy concerns and computational and communication bottlenecks. Prior work dealt with data distributed by examples (horizontal partitioning), but we lack a counterpart for data scattered by features, an increasingly common scheme (e.g. user profiling with data from multiple entities). To fill this gap, we provide an EM-based algorithm to fit Gaussian mixtures to Vertically Partitioned data (VP-EM). In federated learning setups, our algorithm matches the centralized EM fitting of Gaussian mixtures constrained to a subspace. In arbitrary communication graphs, consensus averaging allows VP-EM to run on large peer-to-peer networks as an approximation of EM; the resulting mismatch comes from consensus error only, which vanishes exponentially fast with the number of consensus rounds. We demonstrate VP-EM on various topologies for both synthetic and real data, evaluating its approximation of centralized EM and showing that it outperforms the available benchmark.
categories: cs.LG
__index_level_0__: 276,827
id: 2303.09164
title: Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers
In this paper, we present our advanced solutions to the two sub-challenges of Affective Behavior Analysis in the wild (ABAW) 2023: the Emotional Reaction Intensity (ERI) Estimation Challenge and the Expression (Expr) Classification Challenge. ABAW 2023 aims to tackle the challenge of affective behavior analysis in natural contexts, with the ultimate goal of creating intelligent machines and robots that possess the ability to comprehend human emotions, feelings, and behaviors. For the Expression Classification Challenge, we propose a streamlined approach that handles the challenges of classification effectively. However, our main contribution lies in our use of diverse models and tools to extract multimodal features such as audio and video cues from the Hume-Reaction dataset. By studying, analyzing, and combining these features, we significantly enhance the model's accuracy for sentiment prediction in a multimodal context. Furthermore, our method achieves outstanding results on the Emotional Reaction Intensity (ERI) Estimation Challenge, surpassing the baseline method by an impressive 84% increase, as measured by the Pearson Coefficient, on the validation dataset.
categories: cs.CV
__index_level_0__: 351,936
id: 2207.11742
title: From Multi-label Learning to Cross-Domain Transfer: A Model-Agnostic Approach
In multi-label learning, a particular case of multi-task learning where a single data point is associated with multiple target labels, it was widely assumed in the literature that, to obtain best accuracy, the dependence among the labels should be explicitly modeled. This premise led to a proliferation of methods offering techniques to learn and predict labels together, for example where the prediction for one label influences predictions for other labels. Even though it is now acknowledged that in many contexts a model of dependence is not required for optimal performance, such models continue to outperform independent models in some of those very contexts, suggesting alternative explanations for their performance beyond label dependence, which the literature is only recently beginning to unravel. Leveraging and extending recent discoveries, we turn the original premise of multi-label learning on its head, and approach the problem of joint-modeling specifically under the absence of any measurable dependence among task labels; for example, when task labels come from separate problem domains. We shift insights from this study towards building an approach for transfer learning that challenges the long-held assumption that transferability of tasks comes from measurements of similarity between the source and target domains or models. This allows us to design and test a method for transfer learning, which is model driven rather than purely data driven, and furthermore it is black box and model-agnostic (any base model class can be considered). We show that essentially we can create task-dependence based on source-model capacity. The results we obtain have important implications and provide clear directions for future work, both in the areas of multi-label and transfer learning.
categories: cs.LG
__index_level_0__: 309,762
id: 2410.11560
title: PSVMA+: Exploring Multi-granularity Semantic-visual Adaption for Generalized Zero-shot Learning
Generalized zero-shot learning (GZSL) endeavors to identify the unseen categories using knowledge from the seen domain, necessitating the intrinsic interactions between the visual features and attribute semantic features. However, GZSL suffers from insufficient visual-semantic correspondences due to the attribute diversity and instance diversity. Attribute diversity refers to varying semantic granularity in attribute descriptions, ranging from low-level (specific, directly observable) to high-level (abstract, highly generic) characteristics. This diversity challenges the collection of adequate visual cues for attributes under a uni-granularity. Additionally, diverse visual instances corresponding to the same sharing attributes introduce semantic ambiguity, leading to vague visual patterns. To tackle these problems, we propose a multi-granularity progressive semantic-visual mutual adaption (PSVMA+) network, where sufficient visual elements across granularity levels can be gathered to remedy the granularity inconsistency. PSVMA+ explores semantic-visual interactions at different granularity levels, enabling awareness of multi-granularity in both visual and semantic elements. At each granularity level, the dual semantic-visual transformer module (DSVTM) recasts the sharing attributes into instance-centric attributes and aggregates the semantic-related visual regions, thereby learning unambiguous visual features to accommodate various instances. Given the diverse contributions of different granularities, PSVMA+ employs selective cross-granularity learning to leverage knowledge from reliable granularities and adaptively fuses multi-granularity features for comprehensive representations. Experimental results demonstrate that PSVMA+ consistently outperforms state-of-the-art methods.
categories: cs.CV
__index_level_0__: 498,619
id: 2101.03785
title: Predictive Analysis of Chikungunya
Chikungunya is an emerging threat to health security all over the world, and it is spreading very fast. Research on properly forecasting the incidence rate of chikungunya has been going on in many places; DARPA has produced a very extensive summarized result from 2014 to 2017 with data on suspected cases, confirmed cases, deaths, population and incidence rate in different countries. In this project, we have analysed the dataset from DARPA and extended it to predict the incidence rate using different weather features such as temperature, humidity, dewiness, wind and pressure, along with the latitude and longitude of every country. We had to use different APIs to obtain these extra features for 2014-2016. After creating a clean dataset, we used Linear Regression to predict the incidence rate and calculated the accuracy and error rate.
categories: cs.LG
__index_level_0__: 214,992
id: 2107.02168
title: DPPIN: A Biological Repository of Dynamic Protein-Protein Interaction Network Data
In the big data era, the relationships between entities become more and more complex. Many graph (or network) algorithms have already paid attention to dynamic networks, which are more suitable than static ones for fitting the complex real-world scenarios with evolving structures and features. To contribute to dynamic network representation learning and mining research, we provide a new collection of label-adequate, dynamics-meaningful, and attribute-sufficient dynamic networks from the health domain. To be specific, in our proposed repository DPPIN, we have 12 individual dynamic network datasets at different scales in total, and each dataset is a dynamic protein-protein interaction network describing protein-level interactions of yeast cells. We hope these domain-specific node features, structure evolution patterns, and node and graph labels could inspire regularization techniques to increase the performance of graph machine learning algorithms in a more complex setting. Also, we link potential applications with our DPPIN by designing various dynamic graph experiments, where DPPIN could indicate future research opportunities for some tasks by presenting challenges on state-of-the-art baseline algorithms. Finally, we identify future directions to improve the utility of this repository and welcome constructive inputs from the community. All resources (e.g., data and code) of this work are deployed and publicly available at https://github.com/DongqiFu/DPPIN.
categories: cs.LG
__index_level_0__: 244,717
id: 1710.03287
title: One-bit compressed sensing with partial Gaussian circulant matrices
In this paper we consider memoryless one-bit compressed sensing with randomly subsampled Gaussian circulant matrices. We show that in a small sparsity regime and for small enough accuracy $\delta$, $m\sim \delta^{-4} s\log(N/s\delta)$ measurements suffice to reconstruct the direction of any $s$-sparse vector up to accuracy $\delta$ via an efficient program. We derive this result by proving that partial Gaussian circulant matrices satisfy an $\ell_1/\ell_2$ RIP-property. Under a slightly worse dependence on $\delta$, we establish stability with respect to approximate sparsity, as well as full vector recovery results.
categories: cs.IT
__index_level_0__: 82,301
id: 2312.13868
title: Data-driven path collective variables
Identifying optimal collective variables to model transformations, using atomic-scale simulations, is a long-standing challenge. We propose a new method for the generation, optimization, and comparison of collective variables, which can be thought of as a data-driven generalization of the path collective variable concept. It consists of a kernel ridge regression of the committor probability, which encodes a transformation's progress. The resulting collective variable is one-dimensional, interpretable, and differentiable, making it appropriate for enhanced sampling simulations requiring biasing. We demonstrate the validity of the method on two different applications: a precipitation model, and the association of Li$^+$ and F$^-$ in water. For the former, we show that global descriptors such as the permutation invariant vector allow one to reach an accuracy far from the one achieved via simpler, more intuitive variables. For the latter, we show that information correlated with the transformation mechanism is contained in the first solvation shell only, and that inertial effects prevent the derivation of optimal collective variables from the atomic positions only.
categories: cs.LG
__index_level_0__: 417,441
id: 2410.14787
title: Privacy for Free in the Over-Parameterized Regime
Differentially private gradient descent (DP-GD) is a popular algorithm to train deep learning models with provable guarantees on the privacy of the training data. In the last decade, the problem of understanding its performance cost with respect to standard GD has received remarkable attention from the research community, which formally derived upper bounds on the excess population risk $R_{P}$ in different learning settings. However, existing bounds typically degrade with over-parameterization, i.e., as the number of parameters $p$ gets larger than the number of training samples $n$ -- a regime which is ubiquitous in current deep-learning practice. As a result, the lack of theoretical insights leaves practitioners without clear guidance, leading some to reduce the effective number of trainable parameters to improve performance, while others use larger models to achieve better results through scale. In this work, we show that in the popular random features model with quadratic loss, for any sufficiently large $p$, privacy can be obtained for free, i.e., $\left|R_{P} \right| = o(1)$, not only when the privacy parameter $\varepsilon$ has constant order, but also in the strongly private setting $\varepsilon = o(1)$. This challenges the common wisdom that over-parameterization inherently hinders performance in private learning.
categories: cs.LG, cs.CR
__index_level_0__: 500,210
id: 2003.14304
title: New Perspectives on the Use of Online Learning for Congestion Level Prediction over Traffic Data
This work focuses on classification over time series data. When a time series is generated by non-stationary phenomena, the pattern relating the series with the class to be predicted may evolve over time (concept drift). Consequently, predictive models aimed at learning this pattern may eventually become obsolete, hence failing to sustain performance levels of practical use. To overcome this model degradation, online learning methods incrementally learn from new data samples arriving over time, and accommodate eventual changes along the data stream by implementing assorted concept drift strategies. In this manuscript we elaborate on the suitability of online learning methods to predict the road congestion level based on traffic speed time series data. We draw interesting insights on the performance degradation when the forecasting horizon is increased. As opposed to what is done in most of the literature, we provide evidence of the importance of assessing the distribution of classes over time before designing and tuning the learning model. This previous exercise may give a hint of the predictability of the different congestion levels under target. Experimental results are discussed over real traffic speed data captured by inductive loops deployed over Seattle (USA). Several online learning methods are analyzed, from traditional incremental learning algorithms to more elaborate deep learning models. As shown by the reported results, when increasing the prediction horizon, the performance of all models degrades severely due to the distribution of classes along time, which supports our claim about the importance of analyzing this distribution prior to the design of the model.
categories: cs.LG
__index_level_0__: 170,462
id: 2311.10437
title: DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection
Though feature-alignment based Domain Adaptive Object Detection (DAOD) methods have achieved remarkable progress, they ignore the source bias issue, i.e., the detector tends to acquire more source-specific knowledge, impeding its generalization capabilities in the target domain. Furthermore, these methods face a more formidable challenge in achieving consistent classification and localization in the target domain compared to the source domain. To overcome these challenges, we propose a novel Distillation-based Source Debiasing (DSD) framework for DAOD, which can distill domain-agnostic knowledge from a pre-trained teacher model, improving the detector's performance on both domains. In addition, we design a Target-Relevant Object Localization Network (TROLN), which can mine target-related localization information from source and target-style mixed data. Accordingly, we present a Domain-aware Consistency Enhancing (DCE) strategy, in which this information is formulated into a new localization representation to further refine classification scores in the testing stage, achieving a harmonization between classification and localization. Extensive experiments have been conducted to demonstrate the effectiveness of this method, which consistently improves the strong baseline by large margins, outperforming existing alignment-based works.
categories: cs.CV
__index_level_0__: 408,532
id: 1903.04523
title: The Iterated Local Model for Social Networks
On-line social networks, such as Facebook and Twitter, are often studied from the perspective of friendship ties between agents in the network. Adversarial ties, however, also play an important role in the structure and function of social networks, but are often hidden. Underlying generative mechanisms of social networks are predicted by structural balance theory, which postulates that triads of agents prefer to be transitive, where friends of friends are more likely friends, or anti-transitive, where adversaries of adversaries become friends. The previously proposed Iterated Local Transitivity (ILT) and Iterated Local Anti-Transitivity (ILAT) models incorporated transitivity and anti-transitivity, respectively, as evolutionary mechanisms. These models resulted in graphs with many observable properties of social networks, such as low diameter, high clustering, and densification. We propose a new generative model, referred to as the Iterated Local Model (ILM) for social networks, synthesizing both transitive and anti-transitive triads over time. In ILM, we are given a countably infinite binary sequence as input, and that sequence determines whether we apply a transitive or an anti-transitive step. The resulting model exhibits many properties of complex networks observed in the ILT and ILAT models. In particular, for any input binary sequence, we show that asymptotically the model generates finite graphs that densify, have clustering coefficient bounded away from 0, have diameter at most 3, and exhibit bad spectral expansion. We also give a thorough analysis of the chromatic number, domination number, Hamiltonicity, and isomorphism types of induced subgraphs of ILM graphs.
categories: cs.SI, Other
__index_level_0__: 123,991
id: 2311.15937
title: Optimal Transport Aggregation for Visual Place Recognition
The task of Visual Place Recognition (VPR) aims to match a query image against references from an extensive database of images from different places, relying solely on visual cues. State-of-the-art pipelines focus on the aggregation of features extracted from a deep backbone, in order to form a global descriptor for each image. In this context, we introduce SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors), which reformulates NetVLAD's soft-assignment of local features to clusters as an optimal transport problem. In SALAD, we consider both feature-to-cluster and cluster-to-feature relations and we also introduce a 'dustbin' cluster, designed to selectively discard features deemed non-informative, enhancing the overall descriptor quality. Additionally, we leverage and fine-tune DINOv2 as a backbone, which provides enhanced description power for the local features, and dramatically reduces the required training time. As a result, our single-stage method not only surpasses single-stage baselines in public VPR datasets, but also surpasses two-stage methods that add a re-ranking with significantly higher cost. Code and models are available at https://github.com/serizba/salad.
categories: cs.CV
__index_level_0__: 410,690
id: 1901.09531
title: Secure multi-party linear regression at plaintext speed
We detail distributed algorithms for scalable, secure multiparty linear regression and feature selection at essentially the same speed as plaintext regression. While the core geometric ideas are simple, the recognition of their broad utility when combined is novel. Our scheme opens the door to efficient and secure genome-wide association studies across multiple biobanks.
categories: cs.LG, cs.CR
__index_level_0__: 119,777
id: 1902.03748
title: Peeking into the Future: Predicting Future Person Activities and Locations in Videos
Deciphering human behaviors to predict their future paths/trajectories and what they would do from videos is important in many applications. Motivated by this idea, this paper studies predicting a pedestrian's future path jointly with future activities. We propose an end-to-end, multi-task learning system utilizing rich visual features about human behavioral information and interaction with their surroundings. To facilitate the training, the network is learned with an auxiliary task of predicting future location in which the activity will happen. Experimental results demonstrate our state-of-the-art performance over two public benchmarks on future trajectory prediction. Moreover, our method is able to produce meaningful future activity prediction in addition to the path. The result provides the first empirical evidence that joint modeling of paths and activities benefits future path prediction.
categories: cs.CV
__index_level_0__: 121,191
id: 1503.01890
title: Coupling models of cattle and farms with models of badgers for predicting the dynamics of bovine tuberculosis (TB)
Bovine TB is a major problem for the agricultural industry in several countries. TB can be contracted and spread by species other than cattle, and this can cause a problem for disease control. In the UK and Ireland, badgers are a recognised reservoir of infection and there has been substantial discussion about potential control strategies. We present a coupling of individual based models of bovine TB in badgers and cattle, which aims to capture the key details of the natural history of the disease and of both species at approximately county scale. The model is spatially explicit: it follows a very large number of cattle and badgers on a different grid size for each species, and it also includes winter housing. We show that the model can replicate the reported dynamics of both cattle and badger populations as well as the increasing prevalence of the disease in cattle. The parameter space used as input in simulations was swept out using Latin hypercube sampling, and sensitivity analysis of model outputs was conducted using mixed-effect models. By exploring a large and computationally intensive parameter space, we show that of the available control strategies it is the frequency of TB testing and whether or not winter housing is practised that have the most significant effects on the number of infected cattle, with the effect of winter housing becoming stronger as farm size increases. Whether badgers were culled or not explained about 5% of the variance, while the accuracy of the test employed to detect infected cattle explained less than 3% of the variance in the number of infected cattle.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
40,876
1603.00502
Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network Features
Although recent advances in regional Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their response time is still too slow for real-time performance. To address this issue, we propose a method for region proposal as an alternative to selective search, which is used in current state-of-the-art object detection algorithms. We evaluate our Keypoint Density-based Region Proposal (KDRP) approach and show that it speeds up detection and classification on fine-grained tasks by 100% versus the existing selective search region proposal technique without compromising classification accuracy. KDRP makes the application of CNNs to real-time detection and classification feasible.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
52,778
2407.14153
ESP-MedSAM: Efficient Self-Prompting SAM for Universal Domain-Generalized Medical Image Segmentation
The universality of deep neural networks across different modalities and their generalization capabilities to unseen domains play an essential role in medical image segmentation. The recent Segment Anything Model (SAM) has demonstrated its potential in both settings. However, the huge computational costs, demand for manual annotations as prompts, and conflict-prone decoding process of SAM degrade its generalizability and applicability in clinical scenarios. To address these issues, we propose an efficient self-prompting SAM for universal domain-generalized medical image segmentation, named ESP-MedSAM. Specifically, we first devise the Multi-Modal Decoupled Knowledge Distillation (MMDKD) strategy to construct a lightweight semi-parameter sharing image encoder that produces discriminative visual features for diverse modalities. Further, we introduce the Self-Patch Prompt Generator (SPPG) to automatically generate high-quality dense prompt embeddings for guiding segmentation decoding. Finally, we design the Query-Decoupled Modality Decoder (QDMD) that leverages a one-to-one strategy to provide an independent decoding channel for every modality. Extensive experiments indicate that ESP-MedSAM outperforms state-of-the-art methods in diverse medical imaging segmentation tasks, displaying superior modality universality and generalization capabilities. In particular, ESP-MedSAM uses only 4.5\% of the parameters of SAM-H. The source code is available at https://github.com/xq141839/ESP-MedSAM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
474,673
1402.6787
Learning multifractal structure in large networks
Generating random graphs to model networks has a rich history. In this paper, we analyze and improve upon the multifractal network generator (MFNG) introduced by Palla et al. We provide a new result on the probability of subgraphs existing in graphs generated with MFNG. From this result it follows that we can quickly compute moments of an important set of graph properties, such as the expected number of edges, stars, and cliques. Specifically, we show how to compute these moments in time complexity independent of the size of the graph and the number of recursive levels in the generative model. We leverage this theory to a new method of moments algorithm for fitting large networks to MFNG. Empirically, this new approach effectively simulates properties of several social and information networks. In terms of matching subgraph counts, our method outperforms similar algorithms used with the Stochastic Kronecker Graph model. Furthermore, we present a fast approximation algorithm to generate graph instances following the multifractal structure. The approximation scheme is an improvement over previous methods, which ran in time complexity quadratic in the number of vertices. Combined, our method of moments and fast sampling scheme provide the first scalable framework for effectively modeling large networks with MFNG.
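For intuition, the expected edge count under a multifractal generator can be written in closed form: with category probabilities l, symmetric link probabilities P, and k recursive levels, the marginal edge probability factorizes across levels. The sketch below reflects our reading of that moment computation and is not code from the paper.

import numpy as np
from math import comb

def expected_edges(n, lengths, P, k):
    # lengths: category probabilities (sum to 1); P: symmetric link
    # probabilities between categories; k: number of recursive levels.
    # Each node draws k independent categories, so the marginal edge
    # probability is the one-level expectation raised to the k-th power.
    l = np.asarray(lengths)
    per_level = l @ np.asarray(P) @ l
    return comb(n, 2) * per_level ** k

print(expected_edges(n=1000, lengths=[0.6, 0.4],
                     P=[[0.9, 0.2], [0.2, 0.7]], k=3))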
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
31,206
2201.06173
SunCast: Solar Irradiance Nowcasting from Geosynchronous Satellite Data
When cloud layers cover photovoltaic (PV) panels, the amount of power the panels produce fluctuates rapidly. Therefore, to maintain enough energy on a power grid to match demand, utilities companies rely on reserve power sources that typically come from fossil fuels and therefore pollute the environment. Accurate short-term PV power prediction enables operators to maximize the amount of power obtained from PV panels and safely reduce the reserve energy needed from fossil fuel sources. While several studies have developed machine learning models to predict solar irradiance at specific PV generation facilities, little work has been done to model short-term solar irradiance on a global scale. Furthermore, the models that have been developed are proprietary, have architectures that are not publicly available, or rely on computationally demanding Numerical Weather Prediction (NWP) models. Here, we propose a Convolutional Long Short-Term Memory Network model that treats solar nowcasting as a next-frame prediction problem, is more efficient than NWP models, and has a straightforward, reproducible architecture. Our model can predict solar irradiance for all of North America up to 3 hours ahead in under 60 seconds on a single machine without a GPU, and achieves an RMSE of 120 W/m2 when evaluated on 2 months of data.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
275,635
2006.15617
Shadow Removal by a Lightness-Guided Network with Training on Unpaired Data
Shadow removal can significantly improve image visual quality and has many applications in computer vision. Deep learning methods based on CNNs have become the most effective approach for shadow removal by training on either paired data, where both the shadow and underlying shadow-free versions of an image are known, or unpaired data, where shadow and shadow-free training images are totally different with no correspondence. In practice, CNN training on unpaired data is preferred given the ease of training data collection. In this paper, we present a new Lightness-Guided Shadow Removal Network (LG-ShadowNet) for shadow removal by training on unpaired data. In this method, we first train a CNN module to compensate for the lightness and then train a second CNN module with the guidance of lightness information from the first CNN module for final shadow removal. We also introduce a loss function to further utilise the colour prior of existing data. Extensive experiments on widely used ISTD, adjusted ISTD and USR datasets demonstrate that the proposed method outperforms the state-of-the-art methods with training on unpaired data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
184,566
1811.00956
A Fast Algorithm for Clustering High Dimensional Feature Vectors
We propose an algorithm for clustering high dimensional data. If $P$ features for $N$ objects are represented in an $N\times P$ matrix ${\bf X}$, where $N\ll P$, the method is based on exploiting the cluster-dependent structure of the $N\times N$ matrix ${\bf XX}^T$. Computational burden thus depends primarily on $N$, the number of objects to be clustered, rather than $P$, the number of features that are measured. This makes the method particularly useful in high dimensional settings, where it is substantially faster than a number of other popular clustering algorithms. Aside from an upper bound on the number of potential clusters, the method is independent of tuning parameters. When compared to $16$ other clustering algorithms on $32$ genomic datasets with gold standards, we show that it provides the most accurate cluster configuration more than twice as often as its closest competitors. We illustrate the method on data taken from highly cited genomic studies.
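A minimal sketch of the core idea: cluster the rows of X via the spectrum of the small N x N Gram matrix XX^T rather than in the P-dimensional feature space. This stands in for the paper's algorithm, whose exact spectral step and tuning-free handling of the cluster number may differ.

import numpy as np
from sklearn.cluster import KMeans

def cluster_by_gram(X, n_clusters):
    # Exploit the cluster-dependent structure of the N x N matrix X X^T
    # instead of working in the P-dimensional feature space (N << P).
    G = X @ X.T
    vals, vecs = np.linalg.eigh(G)        # eigh sorts eigenvalues ascending
    top = vecs[:, -n_clusters:]           # leading eigenvectors carry the signal
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(top)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1, size=(30, 2000)) for m in (0.0, 0.6)])
labels = cluster_by_gram(X, n_clusters=2)   # cost scales with N, not P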
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
112,232
1910.11713
ALET (Automated Labeling of Equipment and Tools): A Dataset, a Baseline and a Usecase for Tool Detection in the Wild
Robots collaborating with humans in realistic environments will need to be able to detect the tools that can be used and manipulated. However, there is no available dataset or study that addresses this challenge in real settings. In this paper, we fill this gap by providing an extensive dataset (METU-ALET) for detecting farming, gardening, office, stonemasonry, vehicle, woodworking and workshop tools. The scenes correspond to sophisticated environments with or without humans using the tools. The scenes we consider introduce several challenges for object detection, including the small scale of the tools, their articulated nature, occlusion, inter-class invariance, etc. Moreover, we train and compare several state-of-the-art deep object detectors (including Faster R-CNN, Cascade R-CNN, RepPoint and RetinaNet) on our dataset. We observe that the detectors have difficulty detecting especially small-scale tools or tools that are visually similar to parts of other tools. This further underlines the importance of our dataset and paper. With the dataset, the code and the trained models, our work provides a basis for further research into tools and their use in robotics applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
150,866
1303.5743
Handling Uncertainty during Plan Recognition in Task-Oriented Consultation Systems
During interactions with human consultants, people are used to providing partial and/or inaccurate information and still being understood and assisted. We attempt to emulate this capability of human consultants in computer consultation systems. In this paper, we present a mechanism for handling uncertainty in plan recognition during task-oriented consultations. The uncertainty arises while choosing an appropriate interpretation of a user's statements among many possible interpretations. Our mechanism handles this uncertainty by using probability theory to assess the probabilities of the interpretations, and complements this assessment by taking into account the information content of the interpretations. The information content of an interpretation is a measure of how well defined an interpretation is in terms of the actions to be performed on the basis of the interpretation. This measure is used to guide the inference process towards interpretations with a higher information content. The information content for an interpretation depends on the specificity and the strength of the inferences in it, where the strength of an inference depends on the reliability of the information on which the inference is based. Our mechanism has been developed for use in task-oriented consultation systems. The domain that we have chosen for exploration is that of a travel agency.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
23,191
2405.02229
On the Utility of External Agent Intention Predictor for Human-AI Coordination
Reaching a consensus on team plans is vital to human-AI coordination. Although previous studies provide approaches through communications in various ways, it can still be hard to coordinate when the AI has no explainable plan to communicate. To bridge this gap, we suggest incorporating external models to assist humans in understanding the intentions of AI agents. In this paper, we propose a two-stage paradigm that first trains a Theory of Mind (ToM) model from collected offline trajectories of the target agent, and then utilizes the model during human-AI collaboration by displaying the target agent's predicted future actions in real time. Such a paradigm leaves the AI agent as a black box and is thus applicable to improving any agent. To test our paradigm, we further implement a transformer-based predictor as the ToM model and develop an extended online human-AI collaboration platform for experiments. The comprehensive experimental results verify that human-AI teams can achieve better performance with the help of our model. A user assessment attached to the experiment further demonstrates that our paradigm can significantly enhance the situational awareness of humans. Our study presents the potential to augment the ability of humans via external assistance in human-AI collaboration, which may further inspire future research.
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
451,675
2012.11260
Unsupervised Domain Adaptation with Temporal-Consistent Self-Training for 3D Hand-Object Joint Reconstruction
Deep learning solutions for hand-object 3D pose and shape estimation are now very effective when an annotated dataset is available to train them to handle the scenarios and lighting conditions they will encounter at test time. Unfortunately, this is not always the case, and one often has to resort to training them on synthetic data, which does not guarantee that they will work well in real situations. In this paper, we introduce an effective approach to addressing this challenge by exploiting 3D geometric constraints within a cycle generative adversarial network (CycleGAN) to perform domain adaptation. Furthermore, in contrast to most existing works, which fail to leverage the rich temporal information available in unlabeled real videos as a source of supervision, we propose to enforce short- and long-term temporal consistency to fine-tune the domain-adapted model in a self-supervised fashion. We will demonstrate that our approach outperforms state-of-the-art 3D hand-object joint reconstruction methods on three widely-used benchmarks and will make our code publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
212,588
2309.12481
HANS, are you clever? Clever Hans Effect Analysis of Neural Systems
Instruction-tuned Large Language Models (It-LLMs) have been exhibiting outstanding abilities to reason around cognitive states, intentions, and reactions of all people involved, letting humans guide and comprehend day-to-day social interactions effectively. In fact, several multiple-choice question (MCQ) benchmarks have been proposed to construct solid assessments of the models' abilities. However, earlier works demonstrate the presence of inherent "order bias" in It-LLMs, posing challenges to appropriate evaluation. In this paper, we investigate It-LLMs' resilience towards a series of probing tests using four MCQ benchmarks. Introducing adversarial examples, we show a significant performance gap, mainly when varying the order of the choices, which reveals a selection bias and calls reasoning abilities into question. Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the It-LLMs, strengthened by including significant examples in few-shot scenarios. Finally, by using the Chain-of-Thought (CoT) technique, we elicit the model to reason and mitigate the bias, obtaining more robust models.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
393,809
2405.11410
Characterizing the Complexity of Social Robot Navigation Scenarios
Social robot navigation algorithms are often demonstrated in overly simplified scenarios, prohibiting the extraction of practical insights about their relevance to real-world domains. Our key insight is that an understanding of the inherent complexity of a social robot navigation scenario could help characterize the limitations of existing navigation algorithms and provide actionable directions for improvement. Through an exploration of recent literature, we identify a series of factors contributing to the complexity of a scenario, disambiguating between contextual and robot-related ones. We then conduct a simulation study investigating how manipulations of contextual factors impact the performance of a variety of navigation algorithms. We find that dense and narrow environments correlate most strongly with performance drops, while the heterogeneity of agent policies and directionality of interactions have a less pronounced effect. Our findings motivate a shift towards developing and testing algorithms under higher-complexity settings.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
455,124
2409.00921
Statically Contextualizing Large Language Models with Typed Holes
Large language models (LLMs) have reshaped the landscape of program synthesis. However, contemporary LLM-based code completion systems often hallucinate broken code because they lack appropriate context, particularly when working with definitions not in the training data nor near the cursor. This paper demonstrates that tight integration with the type and binding structure of a language, as exposed by its language server, can address this contextualization problem in a token-efficient manner. In short, we contend that AIs need IDEs, too! In particular, we integrate LLM code generation into the Hazel live program sketching environment. The Hazel Language Server identifies the type and typing context of the hole being filled, even in the presence of errors, ensuring that a meaningful program sketch is always available. This allows prompting with codebase-wide contextual information not lexically local to the cursor, nor necessarily in the same file, but that is likely to be semantically local to the developer's goal. Completions synthesized by the LLM are then iteratively refined via further dialog with the language server. To evaluate these techniques, we introduce MVUBench, a dataset of model-view-update (MVU) web applications. These applications serve as challenge problems due to their reliance on application-specific data structures. We find that contextualization with type definitions is particularly impactful. After introducing our ideas in the context of Hazel we duplicate our techniques and port MVUBench to TypeScript in order to validate the applicability of these methods to higher-resource languages. Finally, we outline ChatLSP, a conservative extension to the Language Server Protocol (LSP) that language servers can implement to expose capabilities that AI code completion systems of various designs can use to incorporate static context when generating prompts for an LLM.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
485,134
2108.05670
Communication Optimization in Large Scale Federated Learning using Autoencoder Compressed Weight Updates
Federated Learning (FL) solves many of this decade's concerns regarding data privacy and computation challenges. FL ensures no data leaves its source, as the model is trained where the data resides. However, FL comes with its own set of challenges. The communication of model weight updates in this distributed environment incurs significant network bandwidth costs. In this context, we propose a mechanism for compressing the weight updates using Autoencoders (AE), which learn the data features of the weight updates and subsequently perform compression. The encoder is set up on each of the nodes where the training is performed, while the decoder is set up on the node where the weights are aggregated. This setup achieves compression through the encoder and recreates the weights at the end of every communication round using the decoder. This paper shows that the dynamic and orthogonal AE-based weight compression technique could serve as an advantageous alternative (or an add-on) in large-scale FL, as it not only achieves compression ratios ranging from 500x to 1720x and beyond, but can also be modified based on the accuracy requirements, computational capacity, and other requirements of the given FL setup.
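A toy version of the split encoder/decoder setup is easy to sketch in PyTorch. The layer sizes, the code dimension, and the flattening of updates into a single vector are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class WeightUpdateAE(nn.Module):
    # Toy autoencoder: the encoder runs on each client to compress a
    # flattened weight-update vector; the decoder runs on the aggregator.
    def __init__(self, dim, code=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, code))
        self.dec = nn.Sequential(nn.Linear(code, 256), nn.ReLU(),
                                 nn.Linear(256, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

dim = 10_000
ae = WeightUpdateAE(dim)
update = torch.randn(1, dim)       # stand-in for a real weight delta
code = ae.enc(update)              # only this crosses the network
print(f"compression ratio: {dim / code.numel():.0f}x")
recon = ae.dec(code)               # aggregator reconstructs the update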
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
250,374
1910.08906
Self-Adaptive Network Pruning
Deep convolutional neural networks have proven successful on a wide range of tasks, yet they are still hindered by their large computation cost in many industrial scenarios. In this paper, we propose to reduce such cost for CNNs through a self-adaptive network pruning method (SANP). Our method introduces a general Saliency-and-Pruning Module (SPM) for each convolutional layer, which learns to predict saliency scores and applies pruning to each channel. Given a total computation budget, SANP adaptively determines the pruning strategy with respect to each layer and each sample, such that the average computation cost meets the budget. This design allows SANP to be more efficient in computation, as well as more robust to datasets and backbones. Extensive experiments on 2 datasets and 3 backbones show that SANP surpasses state-of-the-art methods in both classification accuracy and pruning rate.
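A rough sketch of a saliency-and-pruning gate follows: it predicts per-channel saliency from pooled activations and hard-masks channels below a budget-dependent cutoff. The real SPM is trained end-to-end (a hard top-k mask like this would need a differentiable relaxation), so treat this only as an interface illustration with assumed shapes.

import torch
import torch.nn as nn

class SaliencyGate(nn.Module):
    # Predict a saliency score per channel and zero out channels below a
    # per-sample, budget-dependent threshold (hypothetical interface).
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x, keep_ratio=0.5):
        s = self.score(x.mean(dim=(2, 3)))        # (B, C) channel saliency
        k = max(1, int(keep_ratio * s.shape[1]))
        thresh = s.topk(k, dim=1).values[:, -1:]  # per-sample cutoff
        mask = (s >= thresh).float()              # hard mask; real SANP
        return x * mask[:, :, None, None], s      # needs a soft relaxation

gate = SaliencyGate(32)
out, saliency = gate(torch.randn(4, 32, 16, 16), keep_ratio=0.25)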
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
150,011
2311.12433
Constellation Shaping under Phase Noise Impairment for Sub-THz Communications
The large untapped spectrum in the sub-THz band allows for ultra-high-throughput communication to realize many seemingly impossible applications in 6G. One of the challenges in sub-THz radio communications is hardware impairments. Specifically, phase noise is one key hardware impairment, which is accentuated as we increase the frequency and bandwidth. Furthermore, the moderate output power of sub-THz power amplifiers demands signal designs with a limited peak-to-average power ratio (PAPR). Single carrier frequency domain equalization (SC-FDE) has been identified as a suitable candidate for sub-THz, although some challenges such as phase noise and PAPR still remain to be tackled. In this work, we design a phase-noise-robust, modest-PAPR SC waveform by geometrically shaping the constellation under practical conditions. We formulate the waveform optimization problem in its augmented Lagrangian form and use a back-propagation-inspired technique to obtain a constellation design that is numerically robust to phase noise, while maintaining a relatively low PAPR compared to conventional waveforms.
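The PAPR figure of merit driving this design is straightforward to compute. A tiny sketch comparing unshaped 16-QAM symbols with a Gaussian-like signal; note that pulse shaping and oversampling, omitted here, would change the numbers.

import numpy as np

def papr_db(x):
    # Peak-to-average power ratio of a complex baseband signal, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

qam = (np.random.choice([-3, -1, 1, 3], 4096)
       + 1j * np.random.choice([-3, -1, 1, 3], 4096))
gauss = np.random.randn(4096) + 1j * np.random.randn(4096)
print(f"16-QAM PAPR:    {papr_db(qam):.1f} dB")
print(f"Gaussian PAPR:  {papr_db(gauss):.1f} dB")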
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
409,333
1712.05247
Intrinsic Point of Interest Discovery from Trajectory Data
This paper presents a framework for intrinsic point of interest discovery from trajectory databases. Intrinsic points of interest are regions of a geospatial area innately defined by the spatial and temporal aspects of trajectory data, and can be of varying size, shape, and resolution. Any trajectory database exhibits such points of interest, and hence are intrinsic, as compared to most other point of interest definitions which are said to be extrinsic, as they require trajectory metadata, external knowledge about the region the trajectories are observed, or other application-specific information. Spatial and temporal aspects are qualities of any trajectory database, making the framework applicable to data from any domain and of any resolution. The framework is developed under recent developments on the consistency of nonparametric hierarchical density estimators and enables the possibility of formal statistical inference and evaluation over such intrinsic points of interest. Comparisons of the POIs uncovered by the framework in synthetic truth data to thousands of parameter settings for common POI discovery methods show a marked improvement in fidelity without the need to tune any parameters by hand.
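As a stand-in for the paper's nonparametric hierarchical density estimators, the sketch below discovers dense regions of pooled trajectory points with HDBSCAN (sklearn.cluster.HDBSCAN, available in scikit-learn >= 1.3); the data are synthetic and the framework's statistical inference layer is omitted.

import numpy as np
from sklearn.cluster import HDBSCAN   # scikit-learn >= 1.3

# Pool the (x, y) points of all trajectories; dense regions isolated by
# the hierarchical density estimator become candidate POIs.
rng = np.random.default_rng(0)
poi_a = rng.normal([0, 0], 0.05, size=(300, 2))
poi_b = rng.normal([1, 1], 0.10, size=(300, 2))
transit = rng.uniform(-0.5, 1.5, size=(200, 2))   # background movement
points = np.vstack([poi_a, poi_b, transit])

labels = HDBSCAN(min_cluster_size=50).fit_predict(points)
for poi in set(labels) - {-1}:                    # -1 marks noise
    print(f"POI {poi}: centroid {points[labels == poi].mean(axis=0)}")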
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
86,713
1209.2515
Wavelet Based Image Coding Schemes : A Recent Survey
A variety of new and powerful algorithms have been developed for image compression over the years. Among them, wavelet-based image compression schemes have gained much popularity due to their overlapping nature, which reduces the blocking artifacts common in JPEG compression, and their multiresolution character, which leads to superior energy compaction with high-quality reconstructed images. This paper provides a detailed survey of some of the popular wavelet coding techniques, such as Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Trees (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques, like the Wavelet Difference Reduction (WDR) and Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding, are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
18,515
2003.13002
Divergence Conditions for Investigation and Control of Nonautonomous Dynamical Systems
The paper describes a novel method for studying the stability of nonautonomous dynamical systems. The method is based on the flow and divergence of the vector field, coupled with the method of Lyapunov functions. The necessary and sufficient stability conditions are formulated. It is shown that the necessary stability conditions are related to the integral and differential forms of continuity equations with sources (the flux is directed inward) located at the equilibrium points of the dynamical system. The sufficient stability conditions are applied to design state feedback control laws. The proposed control law is found as the solution of a partial differential inequality, whereas the control law based on the Lyapunov technique is found from the solution of an algebraic inequality. The examples illustrate the effectiveness of the proposed method compared with some existing ones.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
170,073
1905.02108
Are commercially implemented adaptive cruise control systems string stable?
In this article, we assess the string stability of seven 2018 model year adaptive cruise control (ACC) equipped vehicles that are widely available in the US market. Seven distinct vehicle models from two different vehicle makes are analyzed using data collected from more than 1,200 miles of driving in car-following experiments with ACC engaged by the follower vehicle. The resulting dataset is used to identify the parameters of a linear second order delay differential equation model that approximates the behavior of the black box ACC systems. The string stability of the data-fitted model associated with each vehicle is assessed, and the main finding is that all seven vehicle models have string unstable ACC systems. For one commonly available vehicle model that offers ACC as a standard feature on all trim levels, we validate the string stability finding with a multi-vehicle platoon experiment in which all vehicles are the same year, make, and model. In this test, an initial disturbance of 6 mph is amplified to a 25 mph disturbance, at which point the last vehicle in the platoon is observed to disengage the ACC. The data collected in the driving experiments is made available, representing the largest publicly available comparative driving dataset on ACC equipped vehicles.
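The flavor of a string-stability test can be conveyed with a toy linear car-following simulation. The sketch below omits the delay term of the paper's fitted second-order delay differential model, and the gains are made-up values chosen to be string unstable; a lead-vehicle speed dip should then typically deepen down the platoon.

import numpy as np

# Constant-time-gap linear car-following toy (delay-free; assumed gains).
k_s, k_v, tau, dt = 0.08, 0.15, 1.2, 0.05
n, steps = 8, 6000
v = np.full(n + 1, 25.0)            # index 0 is the lead vehicle (m/s)
x = -np.arange(n + 1) * tau * 25.0  # start at the equilibrium spacing
dip = np.zeros(n + 1)

for t in range(steps):
    v[0] = 25.0 - (2.0 if 200 < t < 400 else 0.0)   # lead speed dip
    spacing = x[:-1] - x[1:]
    a = k_s * (spacing - tau * v[1:]) + k_v * (v[:-1] - v[1:])
    v[1:] += a * dt
    x += v * dt
    dip = np.maximum(dip, 25.0 - v)  # deepest slowdown seen by each car

print(np.round(dip[1:], 2))  # dips growing along the platoon => string unstable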
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
129,896
2009.10799
My Health Sensor, my Classifier: Adapting a Trained Classifier to Unlabeled End-User Data
In this work, we present an approach for unsupervised domain adaptation (DA) under the constraint that the labeled source data are not directly available, and instead only access to a classifier trained on the source data is provided. Our solution iteratively labels only high-confidence sub-regions of the target data distribution, based on the belief of the classifier. It then iteratively learns new classifiers from the expanding high-confidence dataset. The goal is to apply the proposed approach to DA for the task of sleep apnea detection and achieve personalization based on the needs of the patient. In a series of experiments with both open and closed sleep monitoring datasets, the proposed approach is applied to data from different sensors, for DA between the different datasets. The proposed approach outperforms in all experiments the classifier trained in the source domain, with an improvement of the kappa coefficient that varies from 0.012 to 0.242. Additionally, our solution is applied to digit classification DA between three well-established digit datasets, to investigate the generalizability of the approach and to allow for comparison with related work. Even without direct access to the source data, it achieves good results and outperforms several well-established unsupervised DA methods.
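A minimal sketch of the iterative high-confidence pseudo-labeling loop, using scikit-learn; the confidence schedule, thresholds, and classifier choice are illustrative assumptions, not the paper's exact settings.

import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt(clf, X_target, thresh=0.9, rounds=5):
    # Iteratively pseudo-label only high-confidence target regions and
    # retrain; the raw source data are never accessed.
    for _ in range(rounds):
        proba = clf.predict_proba(X_target)
        keep = proba.max(axis=1) >= thresh
        y_pseudo = proba.argmax(axis=1)
        if keep.sum() < 10 or len(np.unique(y_pseudo[keep])) < 2:
            break
        clf = LogisticRegression(max_iter=1000).fit(X_target[keep], y_pseudo[keep])
        thresh = max(0.6, thresh - 0.05)   # gradually expand the labeled region
    return clf

rng = np.random.default_rng(0)
X_src = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y_src = np.repeat([0, 1], 200)
X_tgt = X_src + 0.8                        # covariate-shifted target domain
source_clf = LogisticRegression(max_iter=1000).fit(X_src, y_src)
adapted_clf = adapt(source_clf, X_tgt)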
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
196,985
2201.07286
Conservative Distributional Reinforcement Learning with Safety Constraints
Safety exploration can be regarded as a constrained Markov decision problem where the expected long-term cost is constrained. Previous off-policy algorithms convert the constrained optimization problem into the corresponding unconstrained dual problem by introducing the Lagrangian relaxation technique. However, the cost functions of the above algorithms provide inaccurate estimations and cause instability in the Lagrange multiplier learning. In this paper, we present a novel off-policy reinforcement learning algorithm called Conservative Distributional Maximum a Posteriori Policy Optimization (CDMPO). First, to accurately judge whether the current situation satisfies the constraints, CDMPO adopts a distributional reinforcement learning method to estimate the Q-function and C-function. Then, CDMPO uses a conservative value function loss to reduce the number of constraint violations during the exploration process. In addition, we utilize Weighted Average Proportional Integral Derivative (WAPID) to update the Lagrange multiplier stably. Empirical results show that the proposed method has fewer violations of constraints in the early exploration process. The final test results also illustrate that our method has better risk control.
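The WAPID multiplier update can be sketched as a PID controller on the constraint violation, smoothed by weighted averaging. The gains, anti-windup clamp, and smoothing weight below are hypothetical; the paper's exact formulation may differ.

class PIDLagrangeMultiplier:
    # Sketch: the "error" is the constraint violation (estimated cost
    # minus budget); a PID controller drives the multiplier.
    def __init__(self, kp=0.1, ki=0.01, kd=0.05, smooth=0.9):
        self.kp, self.ki, self.kd, self.smooth = kp, ki, kd, smooth
        self.integral = 0.0
        self.prev = 0.0
        self.lmbda = 0.0

    def update(self, est_cost, budget):
        err = est_cost - budget
        self.integral = max(0.0, self.integral + err)   # anti-windup clamp
        deriv = err - self.prev
        self.prev = err
        raw = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Weighted averaging stabilizes the multiplier between updates.
        self.lmbda = max(0.0, self.smooth * self.lmbda + (1 - self.smooth) * raw)
        return self.lmbda

mult = PIDLagrangeMultiplier()
lam = mult.update(est_cost=1.3, budget=1.0)  # penalty weight for the actor loss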
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
275,980
1704.06217
Reinforcement Learning with External Knowledge and Two-Stage Q-functions for Predicting Popular Reddit Threads
This paper addresses the problem of predicting popularity of comments in an online discussion forum using reinforcement learning, particularly addressing two challenges that arise from having natural language state and action spaces. First, the state representation, which characterizes the history of comments tracked in a discussion at a particular point, is augmented to incorporate the global context represented by discussions on world events available in an external knowledge source. Second, a two-stage Q-learning framework is introduced, making it feasible to search the combinatorial action space while also accounting for redundancy among sub-actions. We experiment with five Reddit communities, showing that the two methods improve over previous reported results on this task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
72,141
2001.00127
Reinforcement Learning with Goal-Distance Gradient
Reinforcement learning usually uses environmental feedback rewards to train agents. But rewards in real environments are sparse, and some environments provide no rewards at all. Most current methods struggle to achieve good performance in sparse-reward or reward-free environments. Although using shaped rewards is effective when solving sparse-reward tasks, it is limited to specific problems and learning is also susceptible to local optima. We propose a model-free method that does not rely on environmental rewards to solve the problem of sparse rewards in general environments. Our method uses the minimum number of transitions between states as a distance to replace environmental rewards, and proposes a goal-distance gradient to achieve policy improvement. We also introduce a bridge-point planning method, based on the characteristics of our method, to improve exploration efficiency and thereby solve more complex tasks. Experiments show that our method performs better than previous work on sparse-reward and local-optimum problems in complex environments.
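On a small discrete environment, the goal-distance signal reduces to a breadth-first-search distance over state transitions. A minimal sketch follows; the paper learns this distance with function approximation rather than computing it exactly.

from collections import deque

def goal_distances(grid, goal):
    # Minimum number of transitions from every free cell to the goal --
    # the reward-free distance signal used in place of environment rewards.
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in dist):
                dist[nxt] = dist[(r, c)] + 1
                q.append(nxt)
    return dist

grid = [[0, 0, 0],
        [1, 1, 0],     # 1 = wall
        [0, 0, 0]]
print(goal_distances(grid, goal=(2, 0)))
# A policy improves by stepping to the neighbor with the smallest distance.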
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
159,138
2404.01547
Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining
How to effectively explore multi-scale representations of rain streaks is important for image deraining. In contrast to existing Transformer-based methods that depend mostly on single-scale rain appearance, we develop an end-to-end multi-scale Transformer that leverages the potentially useful features in various scales to facilitate high-quality image reconstruction. To better explore the common degradation representations from spatially-varying rain streaks, we incorporate intra-scale implicit neural representations based on pixel coordinates with the degraded inputs in a closed-loop design, enabling the learned features to facilitate rain removal and improve the robustness of the model in complex scenarios. To ensure richer collaborative representation from different scales, we embed a simple yet effective inter-scale bidirectional feedback operation into our multi-scale Transformer by performing coarse-to-fine and fine-to-coarse information communication. Extensive experiments demonstrate that our approach, named as NeRD-Rain, performs favorably against the state-of-the-art ones on both synthetic and real-world benchmark datasets. The source code and trained models are available at https://github.com/cschenxiang/NeRD-Rain.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
443,460
2401.06518
Transitional Grid Maps: Joint Modeling of Static and Dynamic Occupancy
Autonomous agents rely on sensor data to construct representations of their environments, essential for predicting future events and planning their actions. However, sensor measurements suffer from limited range, occlusions, and sensor noise. These challenges become more evident in highly dynamic environments. This work proposes a probabilistic framework to jointly infer which parts of an environment are statically and which parts are dynamically occupied. We formulate the problem as a Bayesian network and introduce minimal assumptions that significantly reduce the complexity of the problem. Based on those, we derive Transitional Grid Maps (TGMs), an efficient analytical solution. Using real data, we demonstrate how this approach produces better maps by keeping track of both static and dynamic elements and, as a side effect, can help improve existing SLAM algorithms.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
421,187
1810.07924
Explaining Machine Learning Models using Entropic Variable Projection
In this paper, we present a new explainability formalism designed to shed light on how each input variable of a test set impacts the predictions of machine learning models. Hence, we propose a group explainability formalism for trained machine learning decision rules, based on their response to the variability of the input variables distribution. In order to emphasize the impact of each input variable, this formalism uses an information theory framework that quantifies the influence of all input-output observations based on entropic projections. This is thus the first unified and model-agnostic formalism enabling data scientists to interpret the dependence between the input variables, their impact on the prediction errors, and their influence on the output predictions. Convergence rates of the entropic projections are provided in the large sample case. Most importantly, we prove that computing an explanation in our framework has a low algorithmic complexity, making it scalable to real-life large datasets. We illustrate our strategy by explaining complex decision rules learned by using XGBoost, Random Forest or Deep Neural Network classifiers on various datasets such as Adult Income, MNIST, CelebA, Boston Housing, Iris, as well as synthetic ones. We finally make clear its differences with the explainability strategies LIME and SHAP, that are based on single observations. Results can be reproduced by using the freely distributed Python toolbox https://gems-ai.aniti.fr/.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
110,724
2007.05307
TIMELY: Improving Labeling Consistency in Medical Imaging for Cell Type Classification
Diagnosing diseases such as leukemia or anemia requires reliable counts of blood cells. Hematologists usually label and count microscopy images of blood cells manually. In many cases, however, cells in different maturity states are difficult to distinguish, and in combination with image noise and subjectivity, humans are prone to make labeling mistakes. This results in labels that are often not reproducible, which can directly affect the diagnoses. We introduce TIMELY, a probabilistic model that combines pseudotime inference methods with inhomogeneous hidden Markov trees, which addresses this challenge of label inconsistency. We show first on simulation data that TIMELY is able to identify and correct wrong labels with higher precision and recall than baseline methods for labeling correction. We then apply our method to two real-world datasets of blood cell data and show that TIMELY successfully finds inconsistent labels, thereby improving the quality of human-generated labels.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
186,637
2206.03149
Self-Training of Handwritten Word Recognition for Synthetic-to-Real Adaptation
The performance of Handwritten Text Recognition (HTR) models is largely determined by the availability of labeled and representative training samples. However, in many application scenarios labeled samples are scarce or costly to obtain. In this work, we propose a self-training approach to train an HTR model solely on synthetic samples and unlabeled data. The proposed training scheme uses an initial model trained on synthetic data to make predictions for the unlabeled target dataset. Starting from this initial model with rather poor performance, we show that considerable adaptation is possible by training against the predicted pseudo-labels. Moreover, the investigated self-training strategy does not require any manually annotated training samples. We evaluate the proposed method on four widely used benchmark datasets and show its effectiveness in closing the gap to a model trained in a fully-supervised manner.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
301,161
2305.02897
An automatically discovered chain-of-thought prompt generalizes to novel models and datasets
Emergent chain-of-thought (CoT) reasoning capabilities promise to improve performance and explainability of large language models (LLMs). However, uncertainties remain about how reasoning strategies formulated for previous model generations generalize to new model generations and different datasets. In this small-scale study, we compare different reasoning strategies induced by zero-shot prompting across six recently released LLMs (davinci-002, davinci-003, GPT-3.5-turbo, GPT-4, Flan-T5-xxl and Cohere command-xlarge) on a mixture of six question-answering datasets, including datasets from scientific and medical domains. Our findings demonstrate that while some variations in effectiveness occur, gains from CoT reasoning strategies remain robust across different models and datasets. GPT-4 has the most benefit from current state-of-the-art reasoning strategies and exhibits the best performance by applying a prompt previously discovered through automated discovery.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
362,207
2411.19942
Free-form Generation Enhances Challenging Clothed Human Modeling
Achieving realistic animated human avatars requires accurate modeling of pose-dependent clothing deformations. Existing learning-based methods heavily rely on the Linear Blend Skinning (LBS) of minimally-clothed human models like SMPL to model deformation. However, these methods struggle to handle loose clothing, such as long dresses, where the canonicalization process becomes ill-defined when the clothing is far from the body, leading to disjointed and fragmented results. To overcome this limitation, we propose a novel hybrid framework to model challenging clothed humans. Our core idea is to use dedicated strategies to model different regions, depending on whether they are close to or distant from the body. Specifically, we segment the human body into three categories: unclothed, deformed, and generated. We simply replicate unclothed regions that require no deformation. For deformed regions close to the body, we leverage LBS to handle the deformation. As for the generated regions, which correspond to loose clothing areas, we introduce a novel free-form, part-aware generator to model them, as they are less affected by movements. This free-form generation paradigm brings enhanced flexibility and expressiveness to our hybrid framework, enabling it to capture the intricate geometric details of challenging loose clothing, such as skirts and dresses. Experimental results on the benchmark dataset featuring loose clothing demonstrate that our method achieves state-of-the-art performance with superior visual fidelity and realism, particularly in the most challenging cases.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
512,437
1502.05241
NEFI: Network Extraction From Images
Networks and network-like structures are amongst the central building blocks of many technological and biological systems. Given a mathematical graph representation of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to graphs describing large scale networks such as social networks, protein-interaction networks, etc. In these applications, graph acquisition, i.e., the extraction of a mathematical graph from a network, is relatively simple. However, for many network-like structures, e.g. leaf venations, slime molds and mud cracks, data collection relies on images where graph extraction requires domain-specific solutions or even manual processing. Here we introduce Network Extraction From Images, NEFI, a software tool that automatically extracts accurate graphs from images of a wide range of networks originating in various domains. While there is previous work on graph extraction from images, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners from many disciplines to easily extract graph representations from images by supplying flexible tools from image processing, computer vision and graph theory bundled in a convenient package. Thus, NEFI constitutes a scalable alternative to tedious and error-prone manual graph extraction and special purpose tools. We anticipate NEFI to enable the collection of larger datasets by reducing the time spent on graph extraction. The analysis of these new datasets may open up the possibility to gain new insights into the structure and function of various types of networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
40,351
1812.07056
OCTID: Optical Coherence Tomography Image Database
Optical coherence tomography (OCT) is a non-invasive imaging modality which is widely used in clinical ophthalmology. OCT images are capable of visualizing deep retinal layers, which is crucial for the early diagnosis of retinal diseases. In this paper, we describe a comprehensive open-access database containing more than 500 high-resolution images categorized into different pathological conditions. The image classes include Normal (NO), Macular Hole (MH), Age-related Macular Degeneration (AMD), Central Serous Retinopathy (CSR), and Diabetic Retinopathy (DR). The images were obtained from a raster scan protocol with a 2mm scan length and 512x1024 pixel resolution. We have also included 25 normal OCT images with their corresponding ground truth delineations, which can be used for an accurate evaluation of OCT image segmentation. In addition, we have provided a user-friendly GUI which can be used by clinicians for manual (and semi-automated) segmentation.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
116,733
2311.15421
Wired Perspectives: Multi-View Wire Art Embraces Generative AI
Creating multi-view wire art (MVWA), a static 3D sculpture with diverse interpretations from different viewpoints, is a complex task even for skilled artists. In response, we present DreamWire, an AI system enabling everyone to craft MVWA easily. Users express their vision through text prompts or scribbles, freeing them from intricate 3D wire organisation. Our approach synergises 3D B\'ezier curves, Prim's algorithm, and knowledge distillation from diffusion models or their variants (e.g., ControlNet). This blend enables the system to represent 3D wire art, ensuring spatial continuity and overcoming data scarcity. Extensive evaluation and analysis are conducted to shed insight on the inner workings of the proposed system, including the trade-off between connectivity and visual aesthetics.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
410,506
2412.14374
Scaling Deep Learning Training with MPMD Pipeline Parallelism
We present JaxPP, a system for efficiently scaling the training of large deep learning models with flexible pipeline parallelism. We introduce a seamless programming model that allows implementing user-defined pipeline schedules for gradient accumulation. JaxPP automatically distributes tasks, corresponding to pipeline stages, over a cluster of nodes and automatically infers the communication among them. We implement an MPMD runtime for asynchronous execution of SPMD tasks. The pipeline parallelism implementation of JaxPP improves hardware utilization by up to $1.11\times$ with respect to the best performing SPMD configuration.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
518,672
0808.0876
Capacity Regions and Bounds for a Class of Z-interference Channels
We define a class of Z-interference channels for which we obtain a new upper bound on the capacity region. The bound exploits a technique first introduced by Korner and Marton. A channel in this class has the property that, for the transmitter-receiver pair that suffers from interference, the conditional output entropy at the receiver is invariant with respect to the transmitted codewords. We compare the new capacity region upper bound with the Han/Kobayashi achievable rate region for interference channels. This comparison shows that our bound is tight in some cases, thereby yielding specific points on the capacity region as well as sum capacity for certain Z-interference channels. In particular, this result can be used as an alternate method to obtain sum capacity of Gaussian Z-interference channels. We then apply an additional restriction on our channel class: the transmitter-receiver pair that suffers from interference achieves its maximum output entropy with a single input distribution irrespective of the interference distribution. For these channels we show that our new capacity region upper bound coincides with the Han/Kobayashi achievable rate region, which is therefore capacity-achieving. In particular, for these channels superposition encoding with partial decoding is shown to be optimal and a single-letter characterization for the capacity region is obtained.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,170
2004.06496
Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Deep Neural Network-based systems are now the state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane. In light of these dangers, numerous algorithms have been developed as defensive mechanisms against these adversarial inputs, some of which provide formal robustness guarantees or certificates. This work leverages research on certified adversarial robustness to develop an online certifiably robust defense for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose a robust action under a worst-case deviation in input space due to possible adversaries or noise. Moreover, the resulting policy comes with a certificate of solution quality, even though the true state and optimal action are unknown to the certifier due to the perturbations. The approach is demonstrated on a Deep Q-Network policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios and a classic control task. This work extends one of our prior works with new performance guarantees, extensions to other RL algorithms, expanded results aggregated across more scenarios, an extension into scenarios with adversarial behavior, comparisons with a more computationally expensive method, and visualizations that provide intuition about the robustness algorithm.
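Guaranteed lower bounds on Q-values under a bounded input perturbation can be sketched with interval bound propagation, a cruder but sound relative of the certificates used in such work; the robust action is then the argmax of the worst-case lower bounds.

import numpy as np

def ibp_bounds(layers, x, eps):
    # Propagate the l-infinity input interval [x - eps, x + eps] through
    # ReLU layers to get guaranteed output bounds (interval bound
    # propagation: loose, but sound).
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:            # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(3, 8)), np.zeros(3))]   # 3 actions
q_lo, q_hi = ibp_bounds(layers, x=rng.normal(size=4), eps=0.05)
robust_action = int(np.argmax(q_lo))   # act on worst-case Q lower bounds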
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
172,521
2112.05410
Multimedia Datasets for Anomaly Detection: A Review
Multimedia anomaly datasets play a crucial role in automated surveillance. They have a wide range of applications, expanding from outlier object/situation detection to the detection of life-threatening events. For more than a decade and a half, this field has attracted a lot of research attention, and as a result, more and more datasets dedicated to anomalous actions and object detection have been developed. Tapping these public anomaly datasets enables researchers to generate and compare various anomaly detection frameworks with the same input data. This paper presents a comprehensive survey on a variety of video, audio, as well as audio-visual datasets based on the application of anomaly detection. This survey aims to address the lack of a comprehensive comparison and analysis of multimedia public datasets based on anomaly detection. Also, it can assist researchers in selecting the best available dataset for benchmarking frameworks. Additionally, we discuss gaps in the existing datasets and insights for future directions towards developing multimodal anomaly detection datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
270,837
1908.10645
Machine-learning techniques for the optimal design of acoustic metamaterials
Recently, an increasing research effort has been dedicated to analyse the transmission and dispersion properties of periodic acoustic metamaterials, characterized by the presence of local resonators. Within this context, particular attention has been paid to the optimization of the amplitudes and center frequencies of selected stop and pass bands inside the Floquet-Bloch spectra of the acoustic metamaterials featured by a chiral or antichiral microstructure. Novel functional applications of such research are expected in the optimal parametric design of smart tunable mechanical filters and directional waveguides. The present paper deals with the maximization of the amplitude of low-frequency band gaps, by proposing suitable numerical techniques to solve the associated optimization problems. Specifically, the feasibility and effectiveness of Radial Basis Function networks and Quasi-Monte Carlo methods for the interpolation of the objective functions of such optimization problems are discussed, and their numerical application to a specific acoustic metamaterial with tetrachiral microstructure is presented. The discussion is motivated theoretically by the high computational effort often needed for an exact evaluation of the objective functions arising in band gap optimization problems, when iterative algorithms are used for their approximate solution. By replacing such functions with suitable surrogate objective functions constructed applying machine-learning techniques, well performing suboptimal solutions can be obtained with a smaller computational effort. Numerical results demonstrate the effective potential of the proposed approach. Current directions of research involving the use of additional machine-learning techniques are also presented.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
143,171
2211.10157
UMFuse: Unified Multi View Fusion for Human Editing applications
Numerous pose-guided human editing methods have been explored by the vision community due to their extensive practical applications. However, most of these methods still use an image-to-image formulation in which a single image is given as input to produce an edited image as output. This objective becomes ill-defined in cases when the target pose differs significantly from the input pose. Existing methods then resort to in-painting or style transfer to handle occlusions and preserve content. In this paper, we explore the utilization of multiple views to minimize the issue of missing information and generate an accurate representation of the underlying human model. To fuse knowledge from multiple viewpoints, we design a multi-view fusion network that takes the pose key points and texture from multiple source images and generates an explainable per-pixel appearance retrieval map. Thereafter, the encodings from a separate network (trained on a single-view human reposing task) are merged in the latent space. This enables us to generate accurate, precise, and visually coherent images for different editing tasks. We show the application of our network on two newly proposed tasks - Multi-view human reposing and Mix&Match Human Image generation. Additionally, we study the limitations of single-view editing and scenarios in which multi-view provides a better alternative.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
331,228
2402.04097
Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction
The ability of deep image prior (DIP) to recover high-quality images from incomplete or corrupted measurements has made it popular in inverse problems in image restoration and medical imaging including magnetic resonance imaging (MRI). However, conventional DIP suffers from severe overfitting and spectral bias effects. In this work, we first provide an analysis of how DIP recovers information from undersampled imaging measurements by analyzing the training dynamics of the underlying networks in the kernel regime for different architectures. This study sheds light on important underlying properties for DIP-based recovery. Current research suggests that incorporating a reference image as network input can enhance DIP's performance in image reconstruction compared to using random inputs. However, obtaining suitable reference images requires supervision, and raises practical difficulties. In an attempt to overcome this obstacle, we further introduce a self-driven reconstruction process that concurrently optimizes both the network weights and the input while eliminating the need for training data. Our method incorporates a novel denoiser regularization term which enables robust and stable joint estimation of both the network input and reconstructed image. We demonstrate that our self-guided method surpasses both the original DIP and modern supervised methods in terms of MR image reconstruction performance and outperforms previous DIP-based schemes for image inpainting.
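The vanilla DIP loop that this analysis starts from fits a small network, fed a fixed random input, to the corrupted measurement, with early stopping to curb the overfitting the paper studies. A minimal sketch; the self-guided variant would additionally treat the input z as a trainable parameter and add a denoiser regularization term, both omitted here.

import torch
import torch.nn as nn

# Minimal deep image prior loop on a stand-in corrupted image.
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, 64, 64)            # fixed random network input
noisy = torch.rand(1, 1, 64, 64)         # stand-in corrupted measurement
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                  # stop early: DIP overfits noise
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()
    loss.backward()
    opt.step()
restored = net(z).detach()               # the recovered image estimate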
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
427,329
2112.08836
Imbalanced Sample Generation and Evaluation for Power System Transient Stability Using CTGAN
Although deep learning has achieved impressive advances in transient stability assessment of power systems, insufficient and imbalanced samples still limit the training effectiveness of data-driven methods. This paper proposes a controllable sample generation framework based on Conditional Tabular Generative Adversarial Network (CTGAN) to generate specified transient stability samples. To fit the complex feature distribution of the transient stability samples, the proposed framework first models the samples as tabular data and uses Gaussian mixture models to normalize the tabular data. Then we transform multiple conditions into a single conditional vector to enable multi-conditional generation. Furthermore, this paper introduces three evaluation metrics to verify the quality of samples generated by the proposed framework. Experimental results on the IEEE 39-bus system show that the proposed framework effectively balances the transient stability samples and significantly improves the performance of transient stability assessment models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
271,955
2107.07200
Real-Time Grasping Strategies Using Event Camera
Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, suffering from motion blur and a low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Compared with standard frame-based vision, neuromorphic vision has the advantages of a microsecond-level sampling rate and no motion blur. Building on that, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. For the model-based approach, an event-based multi-view method is used to localize the objects in the scene, and point cloud processing then allows for the clustering and registering of objects. In contrast, the proposed model-free approach uses event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. The proposed approaches are experimentally validated with objects of different sizes, using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper. Moreover, the robustness of the two proposed event-based grasping approaches is validated in a low-light environment. This low-light operating ability shows a great advantage over grasping with standard frame-based vision. Furthermore, the model-free approach demonstrates the advantage of dealing with unknown objects without prior knowledge, compared to the model-based approach.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
246,346
2001.04243
Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models
A weakly-supervised learning framework named complementary-label learning has been proposed recently, where each sample is equipped with a single complementary label that denotes one of the classes the sample does not belong to. However, existing complementary-label learning methods cannot learn from the easily accessible unlabeled samples or from samples with multiple complementary labels, which are more informative. In this paper, to remove these limitations, we propose a novel multi-complementary and unlabeled learning framework that allows unbiased estimation of the classification risk from samples with any number of complementary labels and from unlabeled samples, for arbitrary loss functions and models. We first give an unbiased estimator of the classification risk from samples with multiple complementary labels, and then further improve the estimator by incorporating unlabeled samples into the risk formulation. The estimation error bounds show that the proposed methods achieve the optimal parametric convergence rate. Finally, experiments on both linear and deep models show the effectiveness of our methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
160,193
2208.05537
Decoding quantum Tanner codes
We introduce sequential and parallel decoders for quantum Tanner codes. When the Tanner code construction is applied to a sufficiently expanding square complex with robust local codes, we obtain a family of asymptotically good quantum low-density parity-check codes. In this case, our decoders provably correct arbitrary errors of weight linear in the code length, respectively in linear or logarithmic time. The same decoders are easily adapted to the expander lifted product codes of Panteleev and Kalachev. Along the way, we exploit recently established bounds on the robustness of random tensor codes to give a tighter bound on the minimum distance of quantum Tanner codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
312,424
2411.00612
How to Bridge Spatial and Temporal Heterogeneity in Link Prediction? A Contrastive Method
Temporal Heterogeneous Networks play a crucial role in capturing the dynamics and heterogeneity inherent in various real-world complex systems, rendering them a noteworthy research avenue for link prediction. However, existing methods fail to capture the fine-grained differential distribution patterns and temporal dynamic characteristics, which we refer to as spatial heterogeneity and temporal heterogeneity. To overcome these limitations, we propose a novel \textbf{C}ontrastive Learning-based \textbf{L}ink \textbf{P}rediction model, \textbf{CLP}, which employs a multi-view hierarchical self-supervised architecture to encode spatial and temporal heterogeneity. Specifically, to address spatial heterogeneity, we develop a spatial feature modeling layer to capture the fine-grained topological distribution patterns from node- and edge-level representations, respectively. Furthermore, to address temporal heterogeneity, we devise a temporal information modeling layer to perceive the evolutionary dependencies of dynamic graph topologies from time-level representations. Finally, we encode the spatial and temporal distribution heterogeneity from a contrastive learning perspective, enabling comprehensive self-supervised hierarchical relation modeling for the link prediction task. Extensive experiments conducted on four real-world dynamic heterogeneous network datasets verify that CLP consistently outperforms the state-of-the-art models, demonstrating average improvements of 10.10\% and 13.44\% in terms of AUC and AP, respectively.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
false
504,667
2003.07797
Hyperplane Arrangements of Trained ConvNets Are Biased
We investigate the geometric properties of the functions learned by trained ConvNets in the preactivation space of their convolutional layers, by performing an empirical study of hyperplane arrangements induced by a convolutional layer. We introduce statistics over the weights of a trained network to study local arrangements and relate them to the training dynamics. We observe that trained ConvNets show a significant statistical bias towards regular hyperplane configurations. Furthermore, we find that layers showing biased configurations are critical to validation performance for the architectures considered, trained on CIFAR10, CIFAR100 and ImageNet.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
168,546
1605.02559
Robust imaging of hippocampal inner structure at 7T: in vivo acquisition protocol and methodological choices
OBJECTIVE: Motion-robust multi-slab imaging of hippocampal inner structure in vivo at 7T. MATERIALS AND METHODS: Motion is a crucial issue for ultra-high resolution imaging, such as can be achieved with 7T MRI. An acquisition protocol was designed for imaging hippocampal inner structure at 7T. It relies on a compromise between the visibility of anatomical details and robustness to motion. In order to reduce acquisition time and motion artifacts, the full slab covering the hippocampus was split into separate slabs with lower acquisition time. A robust registration approach was implemented to combine the acquired slabs within a final 3D-consistent high-resolution slab covering the whole hippocampus. Evaluation was performed on 50 subjects overall, made up of three groups acquired using three acquisition settings; it focused on three issues: visibility of hippocampal inner structure, robustness to motion artifacts, and registration procedure performance. RESULTS: Overall, T2-weighted acquisitions with interleaved slabs proved robust. Multi-slab registration yielded high-quality datasets in 96% of the subjects, thus compatible with further analyses of hippocampal inner structure. CONCLUSION: The multi-slab acquisition and registration setting is efficient for reducing acquisition time, and consequently motion artifacts, for ultra-high resolution imaging of the inner structure of the hippocampus.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
55,643
2408.09120
Time Series Analysis by State Space Learning
Time series analysis by state-space models is widely used in forecasting and extracting unobservable components like level, slope, and seasonality, along with explanatory variables. However, their reliance on traditional Kalman filtering frequently hampers their effectiveness, primarily due to Gaussian assumptions and the absence of efficient subset selection methods to accommodate the multitude of potential explanatory variables in today's big-data applications. Our research introduces State Space Learning (SSL), a novel framework and paradigm that leverages the capabilities of statistical learning to construct a comprehensive framework for time series modeling and forecasting. By utilizing a regularized high-dimensional regression framework, our approach jointly extracts typical time series unobservable components, detects and addresses outliers, and selects the influence of exogenous variables within a high-dimensional space, in polynomial time and with global optimality guarantees. Through a controlled numerical experiment, we demonstrate the superiority of our approach in terms of subset selection accuracy for explanatory variables compared to relevant benchmarks. We also present an intuitive forecasting scheme and showcase superior performance relative to traditional time series models using a dataset of 48,000 monthly time series from the M4 competition. We extend the applicability of our approach by reformulating any linear state space formulation featuring time-varying coefficients into high-dimensional regularized regressions, expanding the impact of our research to other engineering applications beyond time series analysis. Finally, our proposed methodology is implemented within the Julia open-source package "StateSpaceLearning.jl".
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
481,287
2501.14073
LLMs are Vulnerable to Malicious Prompts Disguised as Scientific Language
As large language models (LLMs) have been deployed in various real-world settings, concerns about the harm they may propagate have grown. Various jailbreaking techniques have been developed to expose the vulnerabilities of these models and improve their safety. This work reveals that many state-of-the-art LLMs are vulnerable to malicious requests hidden behind scientific language. Specifically, our experiments with GPT4o, GPT4o-mini, GPT-4, Llama3-405B-Instruct, Llama3-70B-Instruct, Cohere, and Gemini models demonstrate that the models' biases and toxicity substantially increase when prompted with requests that deliberately misinterpret social science and psychological studies as evidence supporting the benefits of stereotypical biases. Alarmingly, these models can also be manipulated to generate fabricated scientific arguments claiming that biases are beneficial, which can be used by ill-intended actors to systematically jailbreak these strong LLMs. Our analysis studies various factors that contribute to the models' vulnerabilities to malicious requests in academic language: mentioning author names and venues enhances the persuasiveness of the prompts, and the bias scores increase as dialogues progress. Our findings call for a more careful investigation into the use of scientific data for training LLMs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
526,966
2006.05963
Balancing Fairness and Efficiency in an Optimization Model
Optimization models generally aim for efficiency by maximizing total benefit or minimizing cost. Yet a trade-off between fairness and efficiency is an important element of many practical decisions. We propose a principled and practical method for balancing these two criteria in an optimization model. Following a critical assessment of existing schemes, we define a set of social welfare functions (SWFs) that combine Rawlsian leximax fairness and utilitarianism and overcome some of the weaknesses of previous approaches. In particular, we regulate the equity/efficiency trade-off with a single parameter that has a meaningful interpretation in practical contexts. We formulate the SWFs using mixed integer constraints and sequentially maximize them subject to constraints that define the problem at hand. After providing practical step-by-step instructions for implementation, we demonstrate the method on problems of realistic size involving healthcare resource allocation and disaster preparation. The solution times are modest, ranging from a fraction of a second to 18 seconds for a given value of the trade-off parameter.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
181,266
1605.03764
Direct Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi-Step-Ahead Predictions
This paper is dedicated to the long-term, or multi-step-ahead, time series prediction problem. We propose a novel method for training feed-forward neural networks, such as multilayer perceptrons, with tapped delay lines. Special batch calculation of derivatives called Forecasted Propagation Through Time and batch modification of the Extended Kalman Filter are introduced. Experiments were carried out on well-known time series benchmarks, the Mackey-Glass chaotic process and the Santa Fe Laser Data Series. Recurrent and feed-forward neural networks were evaluated.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
55,789
2209.08490
EMA-VIO: Deep Visual-Inertial Odometry with External Memory Attention
Accurate and robust localization is a fundamental need for mobile agents. Visual-inertial odometry (VIO) algorithms exploit information from cameras and inertial sensors to estimate position and orientation. Recent deep learning based VIO models have attracted attention as they provide pose information in a data-driven way, without the need to design hand-crafted algorithms. Existing learning based VIO models rely on recurrent models to fuse multimodal data and process sensor signals, which are hard to train and not efficient enough. We propose a novel learning based VIO framework with external memory attention that effectively and efficiently combines visual and inertial features for state estimation. Our proposed model is able to estimate pose accurately and robustly, even in challenging scenarios, e.g., on overcast days and on water-filled ground, which are difficult for traditional VIO algorithms to extract visual features from. Experiments validate that it outperforms both traditional and learning based VIO baselines in different scenes.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
318,147
2406.07054
CoEvol: Constructing Better Responses for Instruction Finetuning through Multi-Agent Cooperation
In recent years, instruction fine-tuning (IFT) on large language models (LLMs) has garnered considerable attention as a means to enhance model performance on unseen tasks. Attempts have been made at the automatic construction and effective selection of IFT data. However, we posit that previous methods have not fully harnessed the potential of LLMs for enhancing data quality. The responses within IFT data could be further enhanced by leveraging the capabilities of LLMs themselves. In this paper, we propose CoEvol, an LLM-based multi-agent cooperation framework for the improvement of responses to instructions. To effectively refine the responses, we develop an iterative framework following a debate-advise-edit-judge paradigm. A two-stage multi-agent debate strategy is further devised to ensure the diversity and reliability of editing suggestions within the framework. Empirically, models equipped with CoEvol outperform competitive baselines as evaluated by MT-Bench and AlpacaEval, demonstrating its effectiveness in enhancing instruction-following capabilities for LLMs.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
462,879
1802.06757
Deep Inference of Personality Traits by Integrating Image and Word Use in Social Networks
Social media, as a major platform for communication and information exchange, is a rich repository of the opinions and sentiments of 2.3 billion users about a vast spectrum of topics. To sense the whys behind social users' demands and culture-driven interests, however, the knowledge embedded in the 1.8 billion pictures uploaded daily to public profiles has only just started to be exploited, since this process has typically been text-based. Following this trend toward visual-based social analysis, we present a novel methodology based on deep learning to build a combined image-and-text based personality trait model, trained with images posted together with words found to be highly correlated with specific personality traits. The key contribution here is to explore whether OCEAN personality trait modeling can be addressed based on images, here called \emph{MindPics}, appearing with certain tags with psychological insights. We found that there is a correlation between those posted images and their accompanying texts, which can be successfully modeled using deep neural networks for personality estimation. The experimental results are consistent with previous cyber-psychology results based on texts or images. In addition, classification results on some traits show that certain patterns emerge in the set of images corresponding to a specific text, in essence those representing an abstract concept. These results open new avenues of research for further refining the proposed personality model under the supervision of psychology experts.
false
false
false
false
false
false
false
false
true
false
false
true
false
true
false
false
false
false
90,735
0908.4310
Co-occurrence Matrix and Fractal Dimension for Image Segmentation
One of the most important tasks in image processing and machine vision is object recognition, and the success of many proposed methods relies on a suitable choice of algorithm for the segmentation of an image. This paper focuses on how to apply texture operators based on the concepts of fractal dimension and co-occurrence matrix to the problem of object recognition, and a new method based on fractal dimension is introduced. Several images, on which the results of the segmentation can be shown, are used to illustrate the use of each method, and a comparative study of each operator is made.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
4,361
1004.5421
Interference Mitigation through Limited Transmitter Cooperation
Interference limits performance in wireless networks, and cooperation among receivers or transmitters can help mitigate interference by forming distributed MIMO systems. Earlier work shows how limited receiver cooperation helps mitigate interference. The scenario with transmitter cooperation, however, is more difficult to tackle. In this paper we study the two-user Gaussian interference channel with conferencing transmitters to make progress towards this direction. We characterize the capacity region to within 6.5 bits/s/Hz, regardless of channel parameters. Based on the constant-to-optimality result, we show that there is an interesting reciprocity between the scenario with conferencing transmitters and the scenario with conferencing receivers, and their capacity regions are within a constant gap to each other. Hence in the interference-limited regime, the behavior of the benefit brought by transmitter cooperation is the same as that by receiver cooperation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,344
1801.10193
DeepDTA: Deep Drug-Target Binding Affinity Prediction
The identification of novel drug-target (DT) interactions is a substantial part of the drug discovery process. Most of the computational methods that have been proposed to predict DT interactions have focused on binary classification, where the goal is to determine whether a DT pair interacts or not. However, protein-ligand interactions assume a continuum of binding strength values, also called binding affinity and predicting this value still remains a challenge. The increase in the affinity data available in DT knowledge-bases allows the use of advanced learning techniques such as deep learning architectures in the prediction of binding affinities. In this study, we propose a deep-learning based model that uses only sequence information of both targets and drugs to predict DT interaction binding affinities. The few studies that focus on DT binding affinity prediction use either 3D structures of protein-ligand complexes or 2D features of compounds. One novel approach used in this work is the modeling of protein sequences and compound 1D representations with convolutional neural networks (CNNs). The results show that the proposed deep learning based model that uses the 1D representations of targets and drugs is an effective approach for drug target binding affinity prediction. The model in which high-level representations of a drug and a target are constructed via CNNs achieved the best Concordance Index (CI) performance in one of our larger benchmark data sets, outperforming the KronRLS algorithm and SimBoost, a state-of-the-art method for DT binding affinity prediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
89,243
2412.19384
The Internet of Value: Integrating Blockchain and Lightning Network Micropayments for Knowledge Markets
Q&A websites rely on user-generated responses, with incentives such as reputation scores or monetary rewards often offered. While some users may find it intrinsically rewarding to assist others, studies indicate that payment can improve the quality and speed of answers. However, traditional payment processors impose minimum thresholds that many Q&A inquiries fall below. The introduction of Bitcoin enabled direct digital value transfer, yet frequent micropayments remain challenging. Recent advancements like the Lightning Network now allow frictionless micropayments by reducing costs and minimising reliance on intermediaries. This development fosters an "Internet of Value," where transferring even small amounts of money is as simple as sharing data. This study investigates integrating Lightning Network-based micropayment strategies into Q&A platforms, aiming to create a knowledge market free of minimum payment barriers. A survey was conducted to address the gap below the $2 payment level identified in prior research. Responses confirmed that incentives for asking and answering weaken as payments decrease. Findings reveal that even minimal payments, such as £0.01, significantly encourage higher quality and effort in responses. The study recommends micropayment incentives for service-oriented applications, particularly Q&A platforms. By leveraging the Lightning Network to remove barriers, a more open marketplace can emerge, improving engagement and outcomes. Further research is needed to confirm whether users follow through on reported intentions when spending funds.
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
520,816
2501.01057
HPC Application Parameter Autotuning on Edge Devices: A Bandit Learning Approach
The growing necessity for enhanced processing capabilities in edge devices with limited resources has led us to develop effective methods for improving high-performance computing (HPC) applications. In this paper, we introduce LASP (Lightweight Autotuning of Scientific Application Parameters), a novel strategy designed to address the parameter search space challenge in edge devices. Our strategy employs a multi-armed bandit (MAB) technique focused on online exploration and exploitation. Notably, LASP takes a dynamic approach, adapting seamlessly to changing environments. We tested LASP with four HPC applications: Lulesh, Kripke, Clomp, and Hypre. Its lightweight nature makes it particularly well-suited for resource-constrained edge devices. By employing the MAB framework to efficiently navigate the search space, we achieved significant performance improvements while adhering to the stringent computational limits of edge devices. Our experimental results demonstrate the effectiveness of LASP in optimizing parameter search on edge devices.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
true
521,908
1301.0610
A New Class of Upper Bounds on the Log Partition Function
Bounds on the log partition function are important in a variety of contexts, including approximate inference, model fitting, decision theory, and large deviations analysis. We introduce a new class of upper bounds on the log partition function, based on convex combinations of distributions in the exponential domain, that is applicable to an arbitrary undirected graphical model. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe free energy, but distinguished by the following desirable properties: (i) they are convex and have a unique global minimum; and (ii) the global minimum gives an upper bound on the log partition function. The global minimum is defined by stationary conditions very similar to those defining fixed points of belief propagation or tree-based reparameterization (Wainwright et al., 2001). As with BP fixed points, the elements of the minimizing argument can be used as approximations to the marginals of the original model. The analysis described here can be extended to structures of higher treewidth (e.g., hypertrees), thereby making connections with more advanced approximations, e.g., Kikuchi and variants (Yedidia et al., 2001; Minka, 2001).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
20,790