id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1901.09681 | Network Lens: Node Classification in Topologically Heterogeneous Networks | We study the problem of identifying different behaviors occurring in different parts of a large heterogeneous network. We zoom in to the network using lenses of different sizes to capture the local structure of the network. These network signatures are then weighted to provide a set of predicted labels for every node. We achieve a peak accuracy of $\sim42\%$ (random=$11\%$) on two networks with $\sim100,000$ and $\sim1,000,000$ nodes each. Further, we perform better than random even when the given node is connected to up to 5 different types of networks. Finally, we perform this analysis on homogeneous networks and show that highly structured networks have high homogeneity. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 119,815 |
2210.01292 | Data-Efficient Characterization of the Global Dynamics of Robot Controllers with Confidence Guarantees | This paper proposes an integration of surrogate modeling and topology to significantly reduce the amount of data required to describe the underlying global dynamics of robot controllers, including closed-box ones. A Gaussian Process (GP), trained with randomized short trajectories over the state-space, acts as a surrogate model for the underlying dynamical system. Then, a combinatorial representation is built and used to describe the dynamics in the form of a directed acyclic graph, known as {\it Morse graph}. The Morse graph is able to describe the system's attractors and their corresponding regions of attraction (\roa). Furthermore, a pointwise confidence level of the global dynamics estimation over the entire state space is provided. In contrast to alternatives, the framework does not require estimation of Lyapunov functions, alleviating the need for high prediction accuracy of the GP. The framework is suitable for data-driven controllers that do not expose an analytical model as long as Lipschitz-continuity is satisfied. The method is compared against established analytical and recent machine learning alternatives for estimating \roa s, outperforming them in data efficiency without sacrificing accuracy. Link to code: https://go.rutgers.edu/49hy35en | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 321,199 |
1409.0758 | Comparing Stochastic Differential Equations and Agent-Based Modelling and Simulation for Early-stage Cancer | There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, representation of individual behaviours rather than aggregates and individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based models outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 35,757 |
2502.13668 | PeerQA: A Scientific Question Answering Dataset from Peer Reviews | We present PeerQA, a real-world, scientific, document-level Question Answering (QA) dataset. PeerQA questions have been sourced from peer reviews, which contain questions that reviewers raised while thoroughly examining the scientific article. Answers have been annotated by the original authors of each paper. The dataset contains 579 QA pairs from 208 academic articles, with a majority from ML and NLP, as well as a subset of other scientific communities like Geoscience and Public Health. PeerQA supports three critical tasks for developing practical QA systems: Evidence retrieval, unanswerable question classification, and answer generation. We provide a detailed analysis of the collected dataset and conduct experiments establishing baseline systems for all three tasks. Our experiments and analyses reveal the need for decontextualization in document-level retrieval, where we find that even simple decontextualization approaches consistently improve retrieval performance across architectures. On answer generation, PeerQA serves as a challenging benchmark for long-context modeling, as the papers have an average size of 12k tokens. Our code and data is available at https://github.com/UKPLab/peerqa. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 535,462 |
2307.13762 | Implementing and Benchmarking the Locally Competitive Algorithm on the Loihi 2 Neuromorphic Processor | Neuromorphic processors have garnered considerable interest in recent years for their potential in energy-efficient and high-speed computing. The Locally Competitive Algorithm (LCA) has been utilized for power efficient sparse coding on neuromorphic processors, including the first Loihi processor. With the Loihi 2 processor enabling custom neuron models and graded spike communication, more complex implementations of LCA are possible. We present a new implementation of LCA designed for the Loihi 2 processor and perform an initial set of benchmarks comparing it to LCA on CPU and GPU devices. In these experiments LCA on Loihi 2 is orders of magnitude more efficient and faster for large sparsity penalties, while maintaining similar reconstruction quality. We find this performance improvement increases as the LCA parameters are tuned towards greater representation sparsity. Our study highlights the potential of neuromorphic processors, particularly Loihi 2, in enabling intelligent, autonomous, real-time processing on small robots and satellites, where there are strict SWaP (small, lightweight, and low power) requirements. By demonstrating the superior performance of LCA on Loihi 2 compared to conventional computing devices, our study suggests that Loihi 2 could be a valuable tool in advancing these types of applications. Overall, our study highlights the potential of neuromorphic processors for efficient and accurate data processing on resource-constrained devices. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 381,691 |
2310.11614 | Learning a Hierarchical Planner from Humans in Multiple Generations | A typical way in which a machine acquires knowledge from humans is by programming. Compared to learning from demonstrations or experiences, programmatic learning allows the machine to acquire a novel skill as soon as the program is written, and, by building a library of programs, a machine can quickly learn how to perform complex tasks. However, as programs often take their execution contexts for granted, they are brittle when the contexts change, making it difficult to adapt complex programs to new contexts. We present natural programming, a library learning system that combines programmatic learning with a hierarchical planner. Natural programming maintains a library of decompositions, consisting of a goal, a linguistic description of how this goal decomposes into sub-goals, and a concrete instance of its decomposition into sub-goals. A user teaches the system via curriculum building, by identifying a challenging yet not impossible goal along with linguistic hints on how this goal may be decomposed into sub-goals. The system solves for the goal via hierarchical planning, using the linguistic hints to guide its probability distribution in proposing the right plans. The system learns from this interaction by adding newly found decompositions in the successful search into its library. Simulated studies and a human experiment (n=360) on a controlled environment demonstrate that natural programming can robustly compose programs learned from different users and contexts, adapting faster and solving more complex tasks when compared to programmatic baselines. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 400,706 |
2311.17664 | On the Convergence Rate of Linear Datalogo over Stable Semirings | Datalogo is an extension of Datalog, where instead of a program being a collection of union of conjunctive queries over the standard Boolean semiring, a program may now be a collection of sum-sum-product queries over an arbitrary commutative partially ordered pre-semiring. Datalogo is more powerful than Datalog in that its additional algebraic structure allows for supporting recursion with aggregation. At the same time, Datalogo retains the syntactic and semantic simplicity of Datalog: Datalogo has declarative least fixpoint semantics. The least fixpoint can be found via the na\"ive evaluation algorithm that repeatedly applies the immediate consequence operator until no further change is possible. It was shown that, when the underlying semiring is $p$-stable, then the naive evaluation of any Datalogo program over the semiring converges in a finite number of steps. However, the upper bounds on the rate of convergence were exponential in the number of ground IDB atoms. This paper establishes polynomial upper bounds on the convergence rate of the na\"ive algorithm on {\bf linear} Datalogo programs, which is quite common in practice. In particular, the main result of this paper is that the convergence rate of linear Datalogo programs under any $p$-stable semiring is $O(pn^3)$. Furthermore, we show a matching lower bound by constructing a $p$-stable semiring and a linear Datalogo program that requires $\Omega(pn^3)$ iterations for the na\"ive iteration algorithm to converge. Next, we study the convergence rate in terms of the number of elements in the semiring for linear Datalogo programs. When $L$ is the number of elements, the convergence rate is bounded by $O(pn \log L)$. This significantly improves the convergence rate for small $L$. We show a nearly matching lower bound as well. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 411,377 |
2312.13175 | Nonlinear moving horizon estimation for robust state and parameter estimation -- extended version | We propose a moving horizon estimation scheme to estimate the states and the unknown constant parameters of general nonlinear uncertain discrete-time systems. The proposed framework and analysis explicitly do not involve the a priori verification of a particular excitation condition for the parameters. Instead, we use online information about the actual excitation of the parameters at any time during operation and ensure that the regularization term in the cost function is always automatically selected appropriately. This ensures that the state and parameter estimation error is bounded for all times, even if the parameters are never (or only rarely) excited during operation. Robust exponential stability of the state and parameter estimation error emerges under an additional uniform condition on the maximum duration of insufficient excitation. The theoretical results are illustrated by a numerical example. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 417,230 |
2006.06480 | Adaptation Strategies for Automated Machine Learning on Evolving Data | Automated Machine Learning (AutoML) systems have been shown to efficiently build good models for new datasets. However, it is often not clear how well they can adapt when the data evolves over time. The main goal of this study is to understand the effect of data stream challenges such as concept drift on the performance of AutoML methods, and which adaptation strategies can be employed to make them more robust. To that end, we propose 6 concept drift adaptation strategies and evaluate their effectiveness on different AutoML approaches. We do this for a variety of AutoML approaches for building machine learning pipelines, including those that leverage Bayesian optimization, genetic programming, and random search with automated stacking. These are evaluated empirically on real-world and synthetic data streams with different types of concept drift. Based on this analysis, we propose ways to develop more sophisticated and robust AutoML techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 181,446 |
2211.05189 | Deterministic Random Walk Model in NetLogo and the Identification of Asymmetric Saturation Time in Random Graph | Interactive programming environments are powerful tools for promoting innovative network thinking, teaching science of complexity, and exploring emergent phenomena. This paper reports on our recent development of the deterministic random walk model in NetLogo, a leading platform for computational thinking, eco-system thinking, and multi-agent cross-platform programming environment. The deterministic random walk is foundational to modeling dynamical processes on complex networks. Inspired by the temporal visualizations offered in NetLogo, we investigated the relationship between network topology and diffusion saturation time for the deterministic random walk model. Our analysis uncovers that in Erd\H{o}s-R\'{e}nyi graphs, the saturation time exhibits an asymmetric pattern with a considerable probability of occurrence. This behavior occurs when the hubs, defined as nodes with relatively higher number of connections, emerge in Erd\H{o}s-R\'{e}nyi graphs. Yet, our analysis yields that the hubs in Barab\'{a}si-Albert model stabilize the convergence time of the deterministic random walk model. These findings strongly suggest that depending on the dynamical process running on complex networks, complementing characteristics other than the degree need to be taken into account for considering a node as a hub. We have made our development open-source, available to the public at no cost at https://github.com/bravandi/NetLogo-Dynamical-Processes. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 329,457 |
1905.04433 | Learning an Unknown Network State in Routing Games | We study learning dynamics induced by myopic travelers who repeatedly play a routing game on a transportation network with an unknown state. The state impacts cost functions of one or more edges of the network. In each stage, travelers choose their routes according to Wardrop equilibrium based on public belief of the state. This belief is broadcast by an information system that observes the edge loads and realized costs on the used edges, and performs a Bayesian update to the prior stage's belief. We show that the sequence of public beliefs and edge load vectors generated by the repeated play converge almost surely. In any rest point, travelers have no incentive to deviate from the chosen routes and accurately learn the true costs on the used edges. However, the costs on edges that are not used may not be accurately learned. Thus, learning can be incomplete in that the edge load vectors at rest point and complete information equilibrium can be different. We present some conditions for complete learning and illustrate situations when such an outcome is not guaranteed. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 130,466 |
2203.09831 | DTA: Physical Camouflage Attacks using Differentiable Transformation Network | To perform adversarial attacks in the physical world, many studies have proposed adversarial camouflage, a method to hide a target object by applying camouflage patterns on 3D object surfaces. For obtaining optimal physical adversarial camouflage, previous studies have utilized the so-called neural renderer, as it supports differentiability. However, existing neural renderers cannot fully represent various real-world transformations due to a lack of control of scene parameters compared to the legacy photo-realistic renderers. In this paper, we propose the Differentiable Transformation Attack (DTA), a framework for generating a robust physical adversarial pattern on a target object to camouflage it against object detection models with a wide range of transformations. It utilizes our novel Differentiable Transformation Network (DTN), which learns the expected transformation of a rendered object when the texture is changed while preserving the original properties of the target object. Using our attack framework, an adversary can gain both the advantages of the legacy photo-realistic renderers including various physical-world transformations and the benefit of white-box access by offering differentiability. Our experiments show that our camouflaged 3D vehicles can successfully evade state-of-the-art object detection models in the photo-realistic environment (i.e., CARLA on Unreal Engine). Furthermore, our demonstration on a scaled Tesla Model 3 proves the applicability and transferability of our method to the real world. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 286,305 |
2002.03742 | Dynamic Error-bounded Lossy Compression (EBLC) to Reduce the Bandwidth Requirement for Real-time Vision-based Pedestrian Safety Applications | As camera quality improves and their deployment moves to areas with limited bandwidth, communication bottlenecks can impair real-time constraints of an ITS application, such as video-based real-time pedestrian detection. Video compression reduces the bandwidth requirement to transmit the video but degrades the video quality. As the quality level of the video decreases, it results in the corresponding decreases in the accuracy of the vision-based pedestrian detection model. Furthermore, environmental conditions (e.g., rain and darkness) alter the compression ratio and can make maintaining a high pedestrian detection accuracy more difficult. The objective of this study is to develop a real-time error-bounded lossy compression (EBLC) strategy to dynamically change the video compression level depending on different environmental conditions in order to maintain a high pedestrian detection accuracy. We conduct a case study to show the efficacy of our dynamic EBLC strategy for real-time vision-based pedestrian detection under adverse environmental conditions. Our strategy selects the error tolerances dynamically for lossy compression that can maintain a high detection accuracy across a representative set of environmental conditions. Analyses reveal that our strategy increases pedestrian detection accuracy up to 14% and reduces the communication bandwidth up to 14x for adverse environmental conditions compared to the same conditions but without our dynamic EBLC strategy. Our dynamic EBLC strategy is independent of detection models and environmental conditions allowing other detection models and environmental conditions to be easily incorporated in our strategy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 163,384 |
2201.11307 | Dissecting the impact of different loss functions with gradient surgery | Pair-wise loss is an approach to metric learning that learns a semantic embedding by optimizing a loss function that encourages images from the same semantic class to be mapped closer than images from different classes. The literature reports a large and growing set of variations of the pair-wise loss strategies. Here we decompose the gradient of these loss functions into components that relate to how they push the relative feature positions of the anchor-positive and anchor-negative pairs. This decomposition allows the unification of a large collection of current pair-wise loss functions. Additionally, explicitly constructing pair-wise gradient updates to separate out these effects gives insights into which have the biggest impact, and leads to a simple algorithm that beats the state of the art for image retrieval on the CAR, CUB and Stanford Online products datasets. | false | false | false | false | true | true | true | false | false | false | false | true | false | false | false | false | false | false | 277,255 |
2010.14489 | Distributed Constraint-Coupled Optimization via Primal Decomposition over Random Time-Varying Graphs | The paper addresses large-scale, convex optimization problems that need to be solved in a distributed way by agents communicating according to a random time-varying graph. Specifically, the goal of the network is to minimize the sum of local costs, while satisfying local and coupling constraints. Agents communicate according to a time-varying model in which edges of an underlying connected graph are active at each iteration with certain non-uniform probabilities. By relying on a primal decomposition scheme applied to an equivalent problem reformulation, we propose a novel distributed algorithm in which agents negotiate a local allocation of the total resource only with neighbors with active communication links. The algorithm is studied as a subgradient method with block-wise updates, in which blocks correspond to the graph edges that are active at each iteration. Thanks to this analysis approach, we show almost sure convergence to the optimal cost of the original problem and almost sure asymptotic primal recovery without resorting to averaging mechanisms typically employed in dual decomposition schemes. Explicit sublinear convergence rates are provided under the assumption of diminishing and constant step-sizes. Finally, an extensive numerical study on a plug-in electric vehicle charging problem corroborates the theoretical results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 203,460 |
1701.07810 | Intelligent Topic Selection for Low-Cost Information Retrieval Evaluation: A New Perspective on Deep vs. Shallow Judging | While test collections provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling techniques to construct test collections at the scale of today's massive document collections. In this paper, we propose a new intelligent topic selection method which reduces the number of search topics needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and deep vs. shallow judging. While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or should one simply perform shallow judging over many topics? In seeking a rigorous answer to this over-arching question, we conduct a comprehensive investigation over a set of relevant factors never previously studied together: 1) topic selection method, 2) the effect of topic familiarity on human judging speed, and 3) how different topic generation processes impact (i) budget utilization and (ii) the resultant quality of judgments. Experiments on NIST TREC Robust 2003 and Robust 2004 test collections show that not only can we reliably evaluate IR systems with fewer topics, but also that 1) when topics are intelligently selected, deep judging is often more cost-effective than shallow judging in evaluation reliability and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 67,359 |
2405.00749 | More is Better: Deep Domain Adaptation with Multiple Sources | In many practical applications, it is often difficult and expensive to obtain large-scale labeled data to train state-of-the-art deep neural networks. Therefore, transferring the learned knowledge from a separate, labeled source domain to an unlabeled or sparsely labeled target domain becomes an appealing alternative. However, direct transfer often results in significant performance decay due to domain shift. Domain adaptation (DA) aims to address this problem by aligning the distributions between the source and target domains. Multi-source domain adaptation (MDA) is a powerful and practical extension in which the labeled data may be collected from multiple sources with different distributions. In this survey, we first define various MDA strategies. Then we systematically summarize and compare modern MDA methods in the deep learning era from different perspectives, followed by commonly used datasets and a brief benchmark. Finally, we discuss future research directions for MDA that are worth investigating. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 451,069 |
2312.03151 | Multitask Learning Can Improve Worst-Group Outcomes | In order to create machine learning systems that serve a variety of users well, it is vital to not only achieve high average performance but also ensure equitable outcomes across diverse groups. However, most machine learning methods are designed to improve a model's average performance on a chosen end task without consideration for their impact on worst group error. Multitask learning (MTL) is one such widely used technique. In this paper, we seek not only to understand the impact of MTL on worst-group accuracy but also to explore its potential as a tool to address the challenge of group-wise fairness. We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work \citep{gururangan2020don, dery2023aang}, we multitask the end task with the pre-training objective constructed from the end task data itself. In settings with few or no group annotations, we find that multitasking often, but not consistently, achieves better worst-group accuracy than Just-Train-Twice (JTT; \citet{pmlr-v139-liu21f}) -- a representative distributionally robust optimization (DRO) method. Leveraging insights from synthetic data experiments, we propose to modify standard MTL by regularizing the joint multitask representation space. We run a large number of fine-tuning experiments across computer vision and natural language processing datasets and find that our regularized MTL approach \emph{consistently} outperforms JTT on both average and worst-group outcomes. Our official code can be found here: \href{https://github.com/atharvajk98/MTL-group-robustness.git}{\url{https://github.com/atharvajk98/MTL-group-robustness}}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 413,151 |
2411.11758 | The Power of Many: Multi-Agent Multimodal Models for Cultural Image Captioning | Large Multimodal Models (LMMs) exhibit impressive performance across various multimodal tasks. However, their effectiveness in cross-cultural contexts remains limited due to the predominantly Western-centric nature of most data and models. Conversely, multi-agent models have shown significant capability in solving complex tasks. Our study evaluates the collective performance of LMMs in a multi-agent interaction setting for the novel task of cultural image captioning. Our contributions are as follows: (1) We introduce MosAIC, a Multi-Agent framework to enhance cross-cultural Image Captioning using LMMs with distinct cultural personas; (2) We provide a dataset of culturally enriched image captions in English for images from China, India, and Romania across three datasets: GeoDE, GD-VCR, CVQA; (3) We propose a culture-adaptable metric for evaluating cultural information within image captions; and (4) We show that the multi-agent interaction outperforms single-agent models across different metrics, and offer valuable insights for future research. Our dataset and models can be accessed at https://github.com/MichiganNLP/MosAIC. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 509,163 |
2404.16556 | Conditional Distribution Modelling for Few-Shot Image Synthesis with Diffusion Models | Few-shot image synthesis entails generating diverse and realistic images of novel categories using only a few example images. While multiple recent efforts in this direction have achieved impressive results, the existing approaches are dependent only upon the few novel samples available at test time in order to generate new images, which restricts the diversity of the generated images. To overcome this limitation, we propose Conditional Distribution Modelling (CDM) -- a framework which effectively utilizes Diffusion models for few-shot image generation. By modelling the distribution of the latent space used to condition a Diffusion process, CDM leverages the learnt statistics of the training data to get a better approximation of the unseen class distribution, thereby removing the bias arising due to limited number of few shot samples. Simultaneously, we devise a novel inversion based optimization strategy that further improves the approximated unseen class distribution, and ensures the fidelity of the generated samples to the unseen class. The experimental results on four benchmark datasets demonstrate the effectiveness of our proposed CDM for few-shot generation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 449,541 |
2404.03429 | Scaffolding Language Learning via Multi-modal Tutoring Systems with
Pedagogical Instructions | Intelligent tutoring systems (ITSs) that imitate human tutors and aim to provide immediate and customized instructions or feedback to learners have shown their effectiveness in education. With the emergence of generative artificial intelligence, large language models (LLMs) further equip these systems for complex and coherent conversational interactions. Such systems would be of great help in language education, as it involves developing communication skills, yet this area has drawn relatively little attention. Additionally, due to the complexity of cognitive development at younger ages, more effort is needed for practical use. Scaffolding refers to a teaching technique where teachers provide support and guidance to students for learning and developing new concepts or skills. It is an effective way to support diverse learning needs, goals, processes, and outcomes. In this work, we investigate how pedagogical instructions facilitate scaffolding in ITSs, by conducting a case study on guiding children to describe images for language learning. We construct different types of scaffolding tutoring systems grounded in four fundamental learning theories: knowledge construction, inquiry-based learning, dialogic teaching, and the zone of proximal development. For qualitative and quantitative analyses, we build and refine a seven-dimension rubric to evaluate the scaffolding process. In our experiment on GPT-4V, we observe that LLMs demonstrate strong potential to follow pedagogical instructions and achieve self-paced learning in different student groups. Moreover, we extend our evaluation framework from a manual to an automated approach, paving the way to benchmark various conversational tutoring systems. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 444,256
2201.10247 | Identification of System Vulnerability under a Smart Sensor Attack via
Attack Model Reduction | In this work, we investigate how to make use of model reduction techniques to identify the vulnerability of a closed-loop system, consisting of a plant and a supervisor, that might invite attacks. Here, the system vulnerability refers to the existence of key observation sequences that could be exploited by a specific smart sensor attack to inflict damage. We consider a nondeterministic smart attack, i.e., there might exist more than one attack choice over each received observation, and adopt our previously proposed modeling framework, where such an attack is captured by a standard finite-state automaton. For a given supervisor S and a smart sensor attack model A, another smart attack model A' is called attack equivalent to A with respect to S, if the resulting compromised supervisor, defined as the composition of the supervisor S and attack model A', is control equivalent to the original compromised supervisor, defined as the composition of S and A. Following the spirit of supervisor reduction that relies on the concept of control congruence, we will show that this problem of synthesizing a reduced smart attack model A' that is attack equivalent to A with respect to S can be transformed to a classical supervisor reduction problem, making all existing synthesis tools available for supervisor reduction directly applicable to our problem. A simplified and ideally minimum-state attack model can reveal all necessary observation sequences for the attacker to be successful, thus reminding system designers to take necessary precautions in advance, which may improve system resilience significantly. An example is presented to show the effectiveness of our proposed attack model reduction technique to identify the system vulnerability. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 276,923
2407.14700 | Composer's Assistant 2: Interactive Multi-Track MIDI Infilling with
Fine-Grained User Control | We introduce Composer's Assistant 2, a system for interactive human-computer composition in the REAPER digital audio workstation. Our work upgrades the Composer's Assistant system (which performs multi-track infilling of symbolic music at the track-measure level) with a wide range of new controls to give users fine-grained control over the system's outputs. Controls introduced in this work include two types of rhythmic conditioning controls, horizontal and vertical note onset density controls, several types of pitch controls, and a rhythmic interest control. We train a T5-like transformer model to implement these controls and to serve as the backbone of our system. With these controls, we achieve a dramatic improvement in objective metrics over the original system. We also study how well our model understands the meaning of our controls, and we conduct a listening study that does not find a significant difference between real music and music composed in a co-creative fashion with our system. We release our complete system, consisting of source code, pretrained models, and REAPER scripts. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 474,869 |
2102.04060 | OV$^{2}$SLAM : A Fully Online and Versatile Visual SLAM for Real-Time
Applications | Many applications of Visual SLAM, such as augmented reality, virtual reality, robotics or autonomous driving, require versatile, robust and precise solutions, most often with real-time capability. In this work, we describe OV$^{2}$SLAM, a fully online algorithm, handling both monocular and stereo camera setups, various map scales and frame-rates ranging from a few Hertz up to several hundreds. It combines numerous recent contributions in visual localization within an efficient multi-threaded architecture. Extensive comparisons with competing algorithms show the state-of-the-art accuracy and real-time performance of the resulting algorithm. For the benefit of the community, we release the source code: \url{https://github.com/ov2slam/ov2slam}. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 218,980
2009.12145 | Direct computation of nonlinear mapping via normal form for
reduced-order models of finite element nonlinear structures | The direct computation of the third-order normal form for a geometrically nonlinear structure discretised with the finite element (FE) method is detailed. The procedure allows one to define a nonlinear mapping in order to derive accurate reduced-order models (ROM) relying on invariant manifold theory. The proposed reduction strategy is direct and simulation free, in the sense that it allows one to pass from physical coordinates (FE nodes) to normal coordinates, describing the dynamics in an invariant-based span of the phase space. The number of master modes for the ROM is not a priori limited since a complete change of coordinates is proposed. The underlying theory ensures the quality of the predictions thanks to the invariance property of the reduced subspaces, together with their curvatures in phase space that account for the nonresonant nonlinear couplings. The method is applied to a beam discretised with 3D elements and shows its ability to recover internal resonance at high energy. Then a fan blade model is investigated and the predictions given by the ROMs are assessed and discussed. A method is proposed to approximate an aggregate value for the damping that takes into account the damping coefficients of all the slave modes, using the Rayleigh damping model as input. Frequency-response curves for the beam and the blades are then exhibited, showing the accuracy of the proposed method. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 197,345
2309.04800 | VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable
Human Image Synthesis | Unsupervised learning of 3D-aware generative adversarial networks has lately made much progress. Some recent work demonstrates promising results of learning human generative models using neural articulated radiance fields, yet their generalization ability and controllability lag behind parametric human models, i.e., they do not perform well when generalizing to novel pose/shape and are not part controllable. To solve these problems, we propose VeRi3D, a generative human vertex-based radiance field parameterized by vertices of the parametric human template, SMPL. We map each 3D point to the local coordinate system defined on its neighboring vertices, and use the corresponding vertex feature and local coordinates for mapping it to color and density values. We demonstrate that our simple approach allows for generating photorealistic human images with free control over camera pose, human pose, shape, as well as enabling part-level editing. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 390,857 |
2311.13665 | A Joint Gradient and Loss Based Clustered Federated Learning Design | In this paper, a novel clustered federated learning (FL) framework that enables distributed edge devices with non-IID data to independently form several clusters in a distributed manner and implement FL training within each cluster is proposed. In particular, our designed clustered FL algorithm must overcome two challenges associated with FL training. First, the server has limited FL training information (i.e., the parameter server can only obtain the FL model information of each device) and limited computational power for finding the differences among a large number of devices. Second, each device does not have the data information of other devices for device clustering and can only use global FL model parameters received from the server and its data information to determine its cluster identity, which will increase the difficulty of device clustering. To overcome these two challenges, we propose a joint gradient and loss based distributed clustering method in which each device determines its cluster identity considering the gradient similarity and training loss. The proposed clustering method considers not only how a local FL model of one device contributes to each cluster but also the direction of gradient descent, thus improving clustering speed. By delegating clustering decisions to edge devices, each device can fully leverage its private data information to determine its own cluster identity, thereby reducing clustering overhead and improving overall clustering performance. Simulation results demonstrate that our proposed clustered FL algorithm can reduce clustering iterations by up to 99% compared to the existing baseline. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 409,824
2411.01683 | ROAD-Waymo: Action Awareness at Scale for Autonomous Driving | Autonomous Vehicle (AV) perception systems require more than simply seeing, via e.g., object detection or scene segmentation. They need a holistic understanding of what is happening within the scene for safe interaction with other road users. Few datasets exist for the purpose of developing and training algorithms to comprehend the actions of other road users. This paper presents ROAD-Waymo, an extensive dataset for the development and benchmarking of techniques for agent, action, location and event detection in road scenes, provided as a layer upon the (US) Waymo Open dataset. Considerably larger and more challenging than any existing dataset (and encompassing multiple cities), it comes with 198k annotated video frames, 54k agent tubes, 3.9M bounding boxes and a total of 12.4M labels. The integrity of the dataset has been confirmed and enhanced via a novel annotation pipeline designed for automatically identifying violations of requirements specifically designed for this dataset. As ROAD-Waymo is compatible with the original (UK) ROAD dataset, it provides the opportunity to tackle domain adaptation between real-world road scenarios in different countries within a novel benchmark: ROAD++. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 505,180 |
2410.21258 | Quantum computing and persistence in topological data analysis | Topological data analysis (TDA) aims to extract noise-robust features from a data set by examining the number and persistence of holes in its topology. We show that a computational problem closely related to a core task in TDA -- determining whether a given hole persists across different length scales -- is $\mathsf{BQP}_1$-hard and contained in $\mathsf{BQP}$. This result implies an exponential quantum speedup for this problem under standard complexity-theoretic assumptions. Our approach relies on encoding the persistence of a hole in a variant of the guided sparse Hamiltonian problem, where the guiding state is constructed from a harmonic representative of the hole. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 503,150 |
2005.00675 | Opportunistic Decoding with Timely Correction for Simultaneous
Translation | Simultaneous translation has many important application scenarios and has recently attracted much attention from both academia and industry. Most existing frameworks, however, have difficulties in balancing translation quality and latency, i.e., the decoding policy is usually either too aggressive or too conservative. We propose an opportunistic decoding technique with timely correction ability, which always (over-)generates a certain amount of extra words at each step to keep the audience on track with the latest information. At the same time, it also corrects, in a timely fashion, the mistakes in the former overgenerated words when observing more source context to ensure high translation quality. Experiments show our technique achieves substantial reduction in latency and up to +3.1 increase in BLEU, with revision rate under 8% in Chinese-to-English and English-to-Chinese translation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 175,324
2401.13961 | TriSAM: Tri-Plane SAM for zero-shot cortical blood vessel segmentation
in VEM images | While imaging techniques at macro and mesoscales have garnered substantial attention and resources, microscale Volume Electron Microscopy (vEM) imaging, capable of revealing intricate vascular details, has lacked the necessary benchmarking infrastructure. In this paper, we address a significant gap in this field of neuroimaging by introducing the first-in-class public benchmark, BvEM, designed specifically for cortical blood vessel segmentation in vEM images. Our BvEM benchmark is based on vEM image volumes from three mammals: adult mouse, macaque, and human. We standardized the resolution, addressed imaging variations, and meticulously annotated blood vessels through semi-automatic, manual, and quality control processes, ensuring high-quality 3D segmentation. Furthermore, we developed a zero-shot cortical blood vessel segmentation method named TriSAM, which leverages the powerful segmentation model SAM for 3D segmentation. To extend SAM from 2D to 3D volume segmentation, TriSAM employs a multi-seed tracking framework, leveraging the reliability of certain image planes for tracking while using others to identify potential turning points. This approach effectively achieves long-term 3D blood vessel segmentation without model training or fine-tuning. Experimental results show that TriSAM achieved superior performances on the BvEM benchmark across three species. Our dataset, code, and model are available online at \url{https://jia-wan.github.io/bvem}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 423,915 |
2306.11737 | Neural ShDF: Reviving an Efficient and Consistent Mesh Segmentation
Method | Partitioning a polygonal mesh into meaningful parts can be challenging. Many applications require decomposing such structures for further processing in computer graphics. In the last decade, several methods were proposed to tackle this problem, at the cost of intensive computational times. Recently, machine learning has proven to be effective for the segmentation task on 3D structures. Nevertheless, these state-of-the-art methods are often hardly generalizable and require dividing the learned model into several specific classes of objects to avoid overfitting. We present a data-driven approach leveraging deep learning to encode a mapping function prior to mesh segmentation for multiple applications. Our network reproduces a neighborhood map using our knowledge of the \textsl{Shape Diameter Function} (SDF) method using similarities among vertex neighborhoods. Our approach is resolution-agnostic as we downsample the input meshes and query the full-resolution structure solely for neighborhood contributions. Using our predicted SDF values, we can inject the resulting structure into a graph-cut algorithm to generate an efficient and robust mesh segmentation while considerably reducing the required computation times. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 374,695 |
1906.05061 | Probing Multilingual Sentence Representations With X-Probe | This paper extends the task of probing sentence representations for linguistic insight in a multilingual domain. In doing so, we make two contributions: first, we provide datasets for multilingual probing, derived from Wikipedia, in five languages, viz. English, French, German, Spanish and Russian. Second, we evaluate six sentence encoders for each language, each trained by mapping sentence representations to English sentence representations, using sentences in a parallel corpus. We discover that cross-lingually mapped representations are often better at retaining certain linguistic information than representations derived from English encoders trained on natural language inference (NLI) as a downstream task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 134,917 |
2207.12280 | ArtFID: Quantitative Evaluation of Neural Style Transfer | The field of neural style transfer has experienced a surge of research exploring different avenues ranging from optimization-based approaches and feed-forward models to meta-learning methods. The developed techniques have not just progressed the field of style transfer, but also led to breakthroughs in other areas of computer vision, such as visual synthesis more broadly. However, whereas quantitative evaluation and benchmarking have become pillars of computer vision research, the reproducible, quantitative assessment of style transfer models is still lacking. Even in comparison to other fields of visual synthesis, where widely used metrics exist, the quantitative evaluation of style transfer is still lagging behind. To support the automatic comparison of different style transfer approaches and to study their respective strengths and weaknesses, the field would greatly benefit from a quantitative measurement of stylization performance. Therefore, we propose a method to complement the currently mostly qualitative evaluation schemes. We provide extensive evaluations and a large-scale user study to show that the proposed metric strongly coincides with human judgment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 309,963
1412.4067 | Monotonicity of quantum relative entropy and recoverability | The relative entropy is a principal measure of distinguishability in quantum information theory, with its most important property being that it is non-increasing with respect to noisy quantum operations. Here, we establish a remainder term for this inequality that quantifies how well one can recover from a loss of information by employing a rotated Petz recovery map. The main approach for proving this refinement is to combine the methods of [Fawzi and Renner, arXiv:1410.0664] with the notion of a relative typical subspace from [Bjelakovic and Siegmund-Schultze, arXiv:quant-ph/0307170]. Our paper constitutes partial progress towards a remainder term which features just the Petz recovery map (not a rotated Petz map), a conjecture which would have many consequences in quantum information theory. A well known result states that the monotonicity of relative entropy with respect to quantum operations is equivalent to each of the following inequalities: strong subadditivity of entropy, concavity of conditional entropy, joint convexity of relative entropy, and monotonicity of relative entropy with respect to partial trace. We show that this equivalence holds true for refinements of all these inequalities in terms of the Petz recovery map. So either all of these refinements are true or all are false. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 38,346 |
2411.13154 | DMQR-RAG: Diverse Multi-Query Rewriting for RAG | Large language models often encounter challenges with static knowledge and hallucinations, which undermine their reliability. Retrieval-augmented generation (RAG) mitigates these issues by incorporating external information. However, user queries frequently contain noise and intent deviations, necessitating query rewriting to improve the relevance of retrieved documents. In this paper, we introduce DMQR-RAG, a Diverse Multi-Query Rewriting framework designed to improve the performance of both document retrieval and final responses in RAG. Specifically, we investigate how queries with varying information quantities can retrieve a diverse array of documents, presenting four rewriting strategies that operate at different levels of information to enhance the performance of baseline approaches. Additionally, we propose an adaptive strategy selection method that minimizes the number of rewrites while optimizing overall performance. Our methods have been rigorously validated through extensive experiments conducted in both academic and industry settings. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 509,698 |
2308.12069 | Identifying Reaction-Aware Driving Styles of Stochastic Model Predictive
Controlled Vehicles by Inverse Reinforcement Learning | The driving style of an Autonomous Vehicle (AV) refers to how it behaves and interacts with other AVs. In a multi-vehicle autonomous driving system, an AV capable of identifying the driving styles of its nearby AVs can reliably evaluate the risk of collisions and make more reasonable driving decisions. However, there has not been a consistent definition of driving styles for an AV in the literature, although it is considered that the driving style is encoded in the AV's trajectories and can be identified using Maximum Entropy Inverse Reinforcement Learning (ME-IRL) methods as a cost function. Nevertheless, an important indicator of the driving style, i.e., how an AV reacts to its nearby AVs, is not fully incorporated in the feature design of previous ME-IRL methods. In this paper, we describe the driving style as a cost function of a series of weighted features. We design additional novel features to capture the AV's reaction-aware characteristics. Then, we identify the driving styles from the demonstration trajectories generated by the Stochastic Model Predictive Control (SMPC) using a modified ME-IRL method with our newly proposed features. The proposed method is validated using MATLAB simulation and an off-the-shelf experiment. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 387,410 |
2412.11653 | Self-Adaptive Paraphrasing and Preference Learning for Improved Claim
Verifiability | In fact-checking, structure and phrasing of claims critically influence a model's ability to predict verdicts accurately. Social media content in particular rarely serves as optimal input for verification systems, which necessitates pre-processing to extract the claim from noisy context before fact checking. Prior work suggests extracting a claim representation that humans find to be checkworthy and verifiable. This has two limitations: (1) the format may not be optimal for a fact-checking model, and (2), it requires annotated data to learn the extraction task from. We address both issues and propose a method to extract claims that is not reliant on labeled training data. Instead, our self-adaptive approach only requires a black-box fact checking model and a generative language model (LM). Given a tweet, we iteratively optimize the LM to generate a claim paraphrase that increases the performance of a fact checking model. By learning from preference pairs, we align the LM to the fact checker using direct preference optimization. We show that this novel setup extracts a claim paraphrase that is more verifiable than their original social media formulations, and is on par with competitive baselines. For refuted claims, our method consistently outperforms all baselines. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 517,513 |
1508.01648 | Predicting academic major of students using bayesian networks to the
case of iran | This study took place this year in the city of Maragheh, Iran. The numbers of high school students in the fields of study (mathematics, experimental sciences, humanities, vocational, business, and science) were examined and compared. The purpose of this research is to predict the academic major of high school students using Bayesian networks. The factors that influence academic major selection are used for the first time as indicators in the Bayesian networks. The evaluation of the indicators' effects on one another, the discretization of the data, and their processing were performed with GeNIe. A suitable course of study can then be recommended to students continuing their education. | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | 45,807
1811.01314 | Modeling Traffic Networks Using Integrated Route and Link Data | Real-time navigation services, such as Google Maps and Waze, are widely used in daily life. These services provide rich data resources in real-time traffic conditions and travel time predictions; however, they have not been fully applied in transportation modeling. This paper aims to use traffic data from Google Maps and applying cutting-edge technologies in maximum likelihood estimation to model traffic networks and travel time reliability. This paper integrates Google Maps travel time data for routes and traffic condition data for links to model the complexities of traffic networks. We then formulate the Fisher information matrix and apply the asymptotic normality to obtain the probability distribution of the travel time estimates for a random route within the network of interest. We also derive the travel time reliability by considering two levels of uncertainties, i.e., the uncertainty of the route's travel time and the uncertainty of its travel time estimates. The proposed method could provide a more realistic and precise travel time reliability estimate. The methodology is applied to a small network in the downtown Baltimore area, where we propose a link data collection strategy and provide empirical evidence to show data independence by following this strategy. We also show results for maximum likelihood estimates and travel time reliability measures for different routes within the network. Furthermore, we use the historical data from a different network to validate this approach, showing our method provides a more accurate and precise estimate compared to the sample mean of the empirical data. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 112,333 |
2407.05550 | MEEG and AT-DGNN: Improving EEG Emotion Recognition with Music
Introducing and Graph-based Learning | We present the MEEG dataset, a multi-modal collection of music-induced electroencephalogram (EEG) recordings designed to capture emotional responses to various musical stimuli across different valence and arousal levels. This public dataset facilitates an in-depth examination of brainwave patterns within musical contexts, providing a robust foundation for studying brain network topology during emotional processing. Leveraging the MEEG dataset, we introduce the Attention-based Temporal Learner with Dynamic Graph Neural Network (AT-DGNN), a novel framework for EEG-based emotion recognition. This model combines an attention mechanism with a dynamic graph neural network (DGNN) to capture intricate EEG dynamics. The AT-DGNN achieves state-of-the-art (SOTA) performance with an accuracy of 83.74% in arousal recognition and 86.01% in valence recognition, outperforming existing SOTA methods. Comparative analysis with traditional datasets, such as DEAP, further validates the model's effectiveness and underscores the potency of music as an emotional stimulus. This study advances graph-based learning methodology in brain-computer interfaces (BCI), significantly improving the accuracy of EEG-based emotion recognition. The MEEG dataset and source code are publicly available at https://github.com/xmh1011/AT-DGNN. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 471,022 |
1701.06770 | Analysis of Breakdown Probability of Wireless Sensor Networks with
Unreliable Relay Nodes | In the present paper, we derive an upper bound on the average network breakdown probability of packet networks with unreliable relay nodes. We here assume that relay nodes fail independently with a given node breakdown probability. A survivor graph is the induced subgraph obtained by removing the broken relay nodes and their connecting edges from the original graph. If the survivor graph is disconnected, we say that a network breakdown has occurred. The primary contribution of the paper is to derive an upper bound on the average network breakdown probability, where the expectation is taken over a regular graph ensemble. The proof of the bound is based on a natural one-to-one correspondence between a regular graph and a regular bipartite graph, and also on enumeration of bipartite graphs satisfying certain conditions. This proof argument is inspired by the analysis of weight distribution for low-density parity-check codes. Compared with estimates of the average network breakdown probability obtained by computer experiments, it is observed that the upper bound provides values that are not only upper bounds but also precise estimates of the network breakdown probability when the node breakdown probability is small. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 67,194
2412.09150 | Evaluating Adversarial Attacks on Traffic Sign Classifiers beyond
Standard Baselines | Adversarial attacks on traffic sign classification models were among the first successfully tried in the real world. Since then, the research in this area has been mainly restricted to repeating baseline models, such as LISA-CNN or GTSRB-CNN, and similar experiment settings, including white and black patches on traffic signs. In this work, we decouple model architectures from the datasets and evaluate on further generic models to make a fair comparison. Furthermore, we compare two attack settings, inconspicuous and visible, which are usually regarded without direct comparison. Our results show that standard baselines like LISA-CNN or GTSRB-CNN are significantly more susceptible than the generic ones. We, therefore, suggest evaluating new attacks on a broader spectrum of baselines in the future. Our code is available at \url{https://github.com/KASTEL-MobilityLab/attacks-on-traffic-sign-recognition/}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 516,379 |
2408.11822 | State-of-the-art in Robot Learning for Multi-Robot Collaboration: A
Comprehensive Survey | With the continuous breakthroughs in core technology, the dawn of large-scale integration of robotic systems into daily human life is on the horizon. Multi-robot systems (MRS) built on this foundation are undergoing drastic evolution. The fusion of artificial intelligence technology with robot hardware is opening up broad application possibilities for MRS. This article surveys the recent state-of-the-art of robot learning in the context of Multi-Robot Cooperation (MRC). Commonly adopted robot learning methods (or frameworks) that are inspired by humans and animals are reviewed, and their advantages and disadvantages are discussed along with the associated technical challenges. The potential trends of robot learning and MRS integration, exploiting the merging of these methods with real-world applications, are also discussed at length. Specifically, statistical methods are used to quantitatively corroborate the ideas elaborated in the article. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 482,450 |
2207.13307 | Marker and source-marker reprogramming of Most Permissive Boolean
networks and ensembles with BoNesis | Boolean networks (BNs) are discrete dynamical systems with applications to the modeling of cellular behaviors. In this paper, we demonstrate how the software BoNesis can be employed to exhaustively identify combinations of perturbations which enforce properties on their fixed points and attractors. We consider marker properties, which specify that some components are fixed to a specific value. We study 4 variants of the marker reprogramming problem: the reprogramming of fixed points, of minimal trap spaces, and of fixed points and minimal trap spaces reachable from a given initial configuration with the most permissive update mode. The perturbations consist of fixing a set of components to a fixed value. They can destroy and create new attractors. In each case, we give an upper bound on their theoretical computational complexity, and give an implementation of the resolution using the BoNesis Python framework. Finally, we lift the reprogramming problems to ensembles of BNs, as supported by BoNesis, bringing insight on possible and universal reprogramming strategies. This paper can be executed and modified interactively. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 310,258 |
1610.09543 | FEAST: An Automated Feature Selection Framework for Compilation Tasks | The success of the application of machine-learning techniques to compilation tasks can be largely attributed to the recent development and advancement of program characterization, a process that numerically or structurally quantifies a target program. While great achievements have been made in identifying key features to characterize programs, choosing a correct set of features for a specific compiler task remains an ad hoc procedure. In order to guarantee a comprehensive coverage of features, compiler engineers usually need to select an excessive number of features. This, unfortunately, would potentially lead to a selection of multiple similar features, which in turn could create a new problem of bias that emphasizes certain aspects of a program's characteristics, hence reducing the accuracy and performance of the target compiler task. In this paper, we propose FEAture Selection for compilation Tasks (FEAST), an efficient and automated framework for determining the most relevant and representative features from a feature pool. Specifically, FEAST utilizes widely used statistics and machine-learning tools, including LASSO and sequential forward and backward selection, for automatic feature selection, and can in general be applied to any numerical feature set. This paper further proposes an automated approach to compiler parameter assignment for assessing the performance of FEAST. Extensive experimental results demonstrate that, under the compiler parameter assignment task, FEAST can achieve comparable results with about 18% of features that are automatically selected from the entire feature pool. We also inspect these selected features and discuss their roles in program execution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 63,079 |
2407.14102 | MSSP: A Versatile Multi-Scenario Adaptable Intelligent Robot Simulation
Platform Based on LIDAR-Inertial Fusion | This letter presents a multi-scenario adaptable intelligent robot simulation platform based on LIDAR-inertial fusion, with three main features: (1) The platform includes a versatile robot model that can be freely controlled through manual control or autonomous tracking. This model is equipped with various types of LIDAR and Inertial Measurement Unit (IMU), providing ground truth information with absolute accuracy. (2) The platform provides a collection of simulation environments with diverse characteristic information and supports developers in customizing and modifying environments according to their needs. (3) The platform supports evaluation of localization performance for SLAM frameworks. Ground truth with absolute accuracy eliminates the inherent errors of global positioning sensors present in real experiments, facilitating detailed analysis and evaluation of the algorithms. By utilizing the simulation platform, developers can overcome the limitations of real environments and datasets, enabling fine-grained analysis and evaluation of mainstream SLAM algorithms in various environments. Experiments conducted in different environments and with different LIDARs demonstrate the wide applicability and practicality of our simulation platform. The implementation of the simulation platform is open-sourced on GitHub. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 474,650 |
2210.00393 | (Non)-Coherent MU-MIMO Block Fading Channels with Finite Blocklength and
Linear Processing | This paper studies the coherent and non-coherent multiuser multiple-input multiple-output (MU-MIMO) uplink system in the finite blocklength regime. The i.i.d. Gaussian codebook is assumed for each user. To be more specific, the BS first uses two popular linear processing schemes to combine the signals transmitted from all users, namely, MRC and ZF. Following it, the matched maximum-likelihood (ML) and mismatched nearest-neighbour (NN) decoding metrics for the coherent and non-coherent cases are respectively employed at the BS. Under these conditions, the refined third-order achievable coding rate, expressed as a function of the blocklength, average error probability, and the third-order term of the information density (called the channel perturbation), is derived. With this result in hand, a detailed performance analysis is then pursued, through which we derive the asymptotic results of the channel perturbation, achievable coding rate, channel capacity, and the channel dispersion. These theoretical results enable us to obtain a number of interesting insights related to the impact of the finite blocklength: i) in our system setting, massive MIMO helps to reduce the channel perturbation of the achievable coding rate, which can even be discarded without affecting the performance with just a small-to-moderate number of BS antennas and number of blocks; ii) under the non-coherent case, even with massive MIMO, the channel estimation errors cannot be eliminated unless the transmit powers in both the channel estimation and data transmission phases for each user are made inversely proportional to the square root of the number of BS antennas; iii) in the non-coherent case and for fixed total blocklength, the scenarios with longer coherence intervals and a smaller number of blocks offer a higher achievable coding rate. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 320,847 |
2105.15064 | Using Pareto Simulated Annealing to Address Algorithmic Bias in Machine
Learning | Algorithmic Bias can be due to bias in the training data or issues with the algorithm itself. These algorithmic issues typically relate to problems with model capacity and regularisation. Such underestimation bias may arise because the model has been optimised for good generalisation accuracy without any explicit consideration of bias or fairness. In a sense, we should not be surprised that a model might be biased when it hasn't been "asked" not to be. In this paper, we consider including bias (underestimation) as an additional criterion in model training. We present a multi-objective optimisation strategy using Pareto Simulated Annealing that optimises for both balanced accuracy and underestimation. We demonstrate the effectiveness of this strategy on one synthetic and two real-world datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 237,902 |
1901.00536 | Visualizing Deep Similarity Networks | For convolutional neural network models that optimize an image embedding, we propose a method to highlight the regions of images that contribute most to pairwise similarity. This work is a corollary to the visualization tools developed for classification networks, but applicable to the problem domains better suited to similarity learning. The visualization shows how similarity networks that are fine-tuned learn to focus on different features. We also generalize our approach to embedding networks that use different pooling strategies and provide a simple mechanism to support image similarity searches on objects or sub-regions in the query image. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 117,798 |
2411.02275 | Breaking the Reclustering Barrier in Centroid-based Deep Clustering | This work investigates an important phenomenon in centroid-based deep clustering (DC) algorithms: Performance quickly saturates after a period of rapid early gains. Practitioners commonly address early saturation with periodic reclustering, which we demonstrate to be insufficient to address performance plateaus. We call this phenomenon the "reclustering barrier" and empirically show when the reclustering barrier occurs, what its underlying mechanisms are, and how it is possible to Break the Reclustering Barrier with our algorithm BRB. BRB avoids early over-commitment to initial clusterings and enables continuous adaptation to reinitialized clustering targets while remaining conceptually simple. Applying our algorithm to widely-used centroid-based DC algorithms, we show that (1) BRB consistently improves performance across a wide range of clustering benchmarks, (2) BRB enables training from scratch, and (3) BRB performs competitively against state-of-the-art DC algorithms when combined with a contrastive loss. We release our code and pre-trained models at https://github.com/Probabilistic-and-Interactive-ML/breaking-the-reclustering-barrier . | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 505,421 |
2104.06703 | Deep Permutation Equivariant Structure from Motion | Existing deep methods produce highly accurate 3D reconstructions in stereo and multiview stereo settings, i.e., when cameras are both internally and externally calibrated. Nevertheless, the challenge of simultaneous recovery of camera poses and 3D scene structure in multiview settings with deep networks is still outstanding. Inspired by projective factorization for Structure from Motion (SFM) and by deep matrix completion techniques, we propose a neural network architecture that, given a set of point tracks in multiple images of a static scene, recovers both the camera parameters and a (sparse) scene structure by minimizing an unsupervised reprojection loss. Our network architecture is designed to respect the structure of the problem: the sought output is equivariant to permutations of both cameras and scene points. Notably, our method does not require initialization of camera parameters or 3D point locations. We test our architecture in two setups: (1) single scene reconstruction and (2) learning from multiple scenes. Our experiments, conducted on a variety of datasets in both internally calibrated and uncalibrated settings, indicate that our method accurately recovers pose and structure, on par with classical state of the art methods. Additionally, we show that a pre-trained network can be used to reconstruct novel scenes using inexpensive fine-tuning with no loss of accuracy. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 230,168 |
2107.06129 | Bidirectional Regression for Arbitrary-Shaped Text Detection | Arbitrary-shaped text detection has recently attracted increasing interest and witnessed rapid development with the popularity of deep learning algorithms. Nevertheless, existing approaches often obtain inaccurate detection results, mainly due to the relatively weak ability to utilize context information and the inappropriate choice of offset references. This paper presents a novel text instance expression which integrates both foreground and background information into the pipeline, and naturally uses the pixels near text boundaries as the offset starts. Besides, a corresponding post-processing algorithm is also designed to sequentially combine the four prediction results and reconstruct the text instance accurately. We evaluate our method on several challenging scene text benchmarks, including both curved and multi-oriented text datasets. Experimental results demonstrate that the proposed approach obtains superior or competitive performance compared to other state-of-the-art methods, e.g., 83.4% F-score for Total-Text, 82.4% F-score for MSRA-TD500, etc. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 245,993 |
1812.07941 | Automatic Detection of Reflective Thinking in Mathematical Problem
Solving based on Unconstrained Bodily Exploration | For technology (like serious games) that aims to deliver interactive learning, it is important to address relevant mental experiences such as reflective thinking during problem solving. To facilitate research in this direction, we present the weDraw-1 Movement Dataset of body movement sensor data and reflective thinking labels for 26 children solving mathematical problems in unconstrained settings where the body (full or parts) was required to explore these problems. Further, we provide qualitative analysis of behaviours that observers used in identifying reflective thinking moments in these sessions. The body movement cues from our compilation informed features that led to an average F1 score of 0.73 for automatic detection of reflective thinking based on Long Short-Term Memory neural networks. We further obtained an average F1 score of 0.79 for end-to-end detection of reflective thinking periods, i.e. based on raw sensor data. Finally, the algorithms resulted in an average F1 score of 0.64 for period subsegments as short as 4 seconds. Overall, our results show the possibility of detecting reflective thinking moments from body movement behaviours of a child exploring mathematical concepts bodily, such as within serious game play. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,908 |
2003.08494 | Progress Extrapolating Algorithmic Learning to Arbitrary Sequence
Lengths | Recent neural network models for algorithmic tasks have led to significant improvements in extrapolation to sequences much longer than training, but it remains an outstanding problem that the performance still degrades for very long or adversarial sequences. We present alternative architectures and loss-terms to address these issues, and our testing of these approaches has not detected any remaining extrapolation errors within memory constraints. We focus on linear time algorithmic tasks including copy, parentheses parsing, and binary addition. First, activation binning was used to discretize the trained network in order to avoid computational drift from continuous operations, and a binning-based digital loss term was added to encourage discretizable representations. In addition, a localized differentiable memory (LDM) architecture, in contrast to distributed memory access, addressed remaining extrapolation errors and avoided unbounded growth of internal computational states. Previous work has found that algorithmic extrapolation issues can also be alleviated with approaches relying on program traces, but the current effort does not rely on such traces. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 168,751 |
1607.02748 | Adversarial Training For Sketch Retrieval | Generative Adversarial Networks (GAN) are able to learn excellent representations for unlabelled data which can be applied to image generation and scene classification. Representations learned by GANs have not yet been applied to retrieval. In this paper, we show that the representations learned by GANs can indeed be used for retrieval. We consider heritage documents that contain unlabelled Merchant Marks, sketch-like symbols that are similar to hieroglyphs. We introduce a novel GAN architecture with design features that make it suitable for sketch retrieval. The performance of this sketch-GAN is compared to a modified version of the original GAN architecture with respect to simple invariance properties. Experiments suggest that sketch-GANs learn representations that are suitable for retrieval and which also have increased stability to rotation, scale and translation compared to the standard GAN architecture. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 58,408 |
1811.11977 | DuLa-Net: A Dual-Projection Network for Estimating Room Layouts from a
Single RGB Panorama | We present a deep learning framework, called DuLa-Net, to predict Manhattan-world 3D room layouts from a single RGB panorama. To achieve better prediction accuracy, our method leverages two projections of the panorama at once, namely the equirectangular panorama-view and the perspective ceiling-view, that each contains different clues about the room layouts. Our network architecture consists of two encoder-decoder branches for analyzing each of the two views. In addition, a novel feature fusion structure is proposed to connect the two branches, which are then jointly trained to predict the 2D floor plans and layout heights. To learn more complex room layouts, we introduce the Realtor360 dataset that contains panoramas of Manhattan-world room layouts with different numbers of corners. Experimental results show that our work outperforms recent state-of-the-art in prediction accuracy and performance, especially in the rooms with non-cuboid layouts. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 114,897 |
2410.16033 | TreeBoN: Enhancing Inference-Time Alignment with Speculative Tree-Search
and Best-of-N Sampling | Inference-time alignment enhances the performance of large language models without requiring additional training or fine-tuning but presents challenges due to balancing computational efficiency with high-quality output. Best-of-N (BoN) sampling, as a simple yet powerful approach, generates multiple responses and selects the best one, achieving improved performance but with a high computational cost. We propose TreeBoN, a novel framework that integrates a speculative tree-search strategy into Best-of-N (BoN) Sampling. TreeBoN maintains a set of parent nodes, iteratively branching and pruning low-quality responses, thereby reducing computational overhead while maintaining high output quality. Our approach also leverages token-level rewards from Direct Preference Optimization (DPO) to guide tree expansion and prune low-quality paths. We evaluate TreeBoN using AlpacaFarm, HH-RLHF, UltraFeedback, GSM8K, and TutorEval datasets, demonstrating consistent improvements. Specifically, TreeBoN achieves the highest win rate of 65% on TutorEval and around 60% win rates across other different datasets, outperforming standard BoN with the same computational cost and showcasing its scalability and alignment efficacy. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 500,850 |
2309.03631 | Insights Into the Inner Workings of Transformer Models for Protein
Function Prediction | Motivation: We explored how explainable artificial intelligence (XAI) can help to shed light into the inner workings of neural networks for protein function prediction, by extending the widely used XAI method of integrated gradients such that latent representations inside of transformer models, which were finetuned to Gene Ontology term and Enzyme Commission number prediction, can be inspected too. Results: The approach enabled us to identify amino acids in the sequences that the transformers pay particular attention to, and to show that these relevant sequence parts reflect expectations from biology and chemistry, both in the embedding layer and inside of the model, where we identified transformer heads with a statistically significant correspondence of attribution maps with ground truth sequence annotations (e.g. transmembrane regions, active sites) across many proteins. Availability and Implementation: Source code can be accessed at https://github.com/markuswenzel/xai-proteins . | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 390,447 |
2010.02814 | Anomaly Detection Approach to Identify Early Cases in a Pandemic using
Chest X-rays | The current COVID-19 pandemic is now getting contained, albeit at the cost of more than 2.3 million human lives. A critical phase in any pandemic is the early detection of cases to develop preventive treatments and strategies. In the case of COVID-19, several studies have indicated that chest radiography images of the infected patients show characteristic abnormalities. However, at the onset of a given pandemic, such as COVID-19, there may not be sufficient data for the affected cases to train models for their robust detection. Hence, supervised classification is ill-posed for this problem because the time spent in collecting large amounts of data from infected persons could lead to the loss of human lives and delays in preventive interventions. Therefore, we formulate the problem of identifying early cases in a pandemic as an anomaly detection problem, in which the data for healthy patients is abundantly available, whereas no training data is present for the class of interest (COVID-19 in our case). To solve this problem, we present several unsupervised deep learning approaches, including convolutional and adversarially trained autoencoders. We tested two settings on a publicly available dataset (COVIDx) by training the model on chest X-rays from (i) only healthy adults, and (ii) healthy and other non-COVID-19 pneumonia, and detected COVID-19 as an anomaly. After performing 3-fold cross validation, we obtain a ROC-AUC of 0.765. These results are very encouraging and pave the way towards research for ensuring emergency preparedness in future pandemics, especially the ones that could be detected from chest X-rays. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 199,176 |
1905.13568 | Quantization Loss Re-Learning Method | In order to quantize the gate parameters of the LSTM (Long Short-Term Memory) neural network model with almost no degradation in recognition performance, a new quantization method named the Quantization Loss Re-Learning Method is proposed in this paper. The method performs lossy quantization on the gate parameters during training iterations, and the weight parameters learn to offset the loss from gate parameter quantization by adjusting the gradient in back propagation during weight parameter optimization. We proved the effectiveness of this method through theoretical derivation and experiments. The gate parameters were quantized to the three values 0, 0.5, and 1, and on the Named Entity Recognition dataset, the F1 score of the model with the new quantization method on gate parameters decreased by only 0.7% compared to the baseline model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,177 |
2502.10243 | Safety Blind Spot in Remote Driving: Considerations for Risk Assessment
of Connection Loss Fallback Strategies | As part of the overall goal of driverless road vehicles, remote driving is a major emerging field of research of its own. Current remote driving concepts for public road traffic often establish a fallback strategy of immediate braking to a standstill in the event of a connection loss. This may seem like the most logical option when human control of the vehicle is lost. However, our simulation results from hundreds of scenarios based on naturalistic traffic scenes indicate high collision rates for any immediate substantial deceleration to a standstill in urban settings. We show that such a fallback strategy can result in a SOTIF relevant hazard, making it questionable whether such a design decision can be considered acceptable. Therefore, from a safety perspective, we would call this problem a safety blind spot, as safety analyses in this regard seem to be very rare. In this article, we first present a simulation on a naturalistic dataset that shows a high probability of collision in the described case. Second, we discuss the severity of the resulting potential rear-end collisions and provide an even more severe example by including a large commercial vehicle in the potential collision. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 533,793 |
2501.13462 | Generalized graph codes and their minimum distances | A graph code is a linear code obtained from a linear code $C$ and a certain bipartite graph $G$. In this paper, I propose an extension of the definition of graph codes to general $l$-partite graphs, and give a lower bound on the minimum distance. I also give an example of a generalized graph code and calculate its parameters $[n, k, d]$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 526,701 |
2410.00147 | Modeling Turbulence in the Atmospheric Boundary Layer with Spectral
Element and Finite Volume Methods | We present large-eddy-simulation (LES) modeling approaches for the simulation of atmospheric boundary layer turbulence that are of direct relevance to wind energy production. In this paper, we study a GABLS benchmark problem using high-order spectral element code Nek5000/RS and a block-structured second-order finite-volume code AMR-Wind which are supported under the DOE's Exascale Computing Project (ECP) Center for Efficient Exascale Discretizations (CEED) and ExaWind projects, respectively, targeting application simulations on various acceleration-device based exascale computing platforms. As for Nek5000/RS we demonstrate our newly developed subgrid-scale (SGS) models based on mean-field eddy viscosity (MFEV), high-pass filter (HPF), and Smagorinsky (SMG) with traction boundary conditions. For the traction boundary conditions, a novel analytical approach is presented that solves for the surface friction velocity and surface kinematic temperature flux. For AMR-Wind, standard SMG is used and discussed in detail the traction boundary conditions for convergence. We provide low-order statistics, convergence and turbulent structure analysis. Verification and convergence studies were performed for both codes at various resolutions and it was found that Nek5000/RS demonstrate convergence with resolution for all ABL bulk parameters, including boundary layer and low level jet (LLJ) height. Extensive comparisons are presented with simulation data from the literature. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 493,244 |
2109.05633 | Generating Datasets of 3D Garments with Sewing Patterns | Garments are ubiquitous in both real and many of the virtual worlds. They are highly deformable objects, exhibit an immense variety of designs and shapes, and yet, most garments are created from a set of regularly shaped flat pieces. Exploration of garment structure presents a peculiar case for an object structure estimation task and might prove useful for downstream tasks of neural 3D garment modeling and reconstruction by providing strong prior on garment shapes. To facilitate research in these directions, we propose a method for generating large synthetic datasets of 3D garment designs and their sewing patterns. Our method consists of a flexible description structure for specifying parametric sewing pattern templates and the automatic generation pipeline to produce garment 3D models with little-to-none manual intervention. To add realism, the pipeline additionally creates corrupted versions of the final meshes that imitate artifacts of 3D scanning. With this pipeline, we created the first large-scale synthetic dataset of 3D garment models with their sewing patterns. The dataset contains more than 20000 garment design variations produced from 19 different base types. Seven of these garment types are specifically designed to target evaluation of the generalization across garment sewing pattern topologies. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 254,870 |
2205.00742 | FirmTruss Community Search in Multilayer Networks | In applications such as biological, social, and transportation networks, interactions between objects span multiple aspects. For accurately modeling such applications, multilayer networks have been proposed. Community search allows for personalized community discovery and has a wide range of applications in large real-world networks. While community search has been widely explored for single-layer graphs, the problem for multilayer graphs has just recently attracted attention. Existing community models in multilayer graphs have several limitations, including disconnectivity, free-rider effect, resolution limits, and inefficiency. To address these limitations, we study the problem of community search over large multilayer graphs. We first introduce FirmTruss, a novel dense structure in multilayer networks, which extends the notion of truss to multilayer graphs. We show that FirmTrusses possess nice structural and computational properties and bring many advantages compared to the existing models. Building on this, we present a new community model based on FirmTruss, called FTCS, and show that finding an FTCS community is NP-hard. We propose two efficient 2-approximation algorithms, and show that no polynomial-time algorithm can have a better approximation guarantee unless P = NP. We propose an index-based method to further improve the efficiency of the algorithms. We then consider attributed multilayer networks and propose a new community model based on network homophily. We show that community search in attributed multilayer graphs is NP-hard and present an effective and efficient approximation algorithm. Experimental studies on real-world graphs with ground-truth communities validate the quality of the solutions we obtain and the efficiency of the proposed algorithms. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 294,358 |
2501.09372 | Image Segmentation with transformers: An Overview, Challenges and Future | Image segmentation, a key task in computer vision, has traditionally relied on convolutional neural networks (CNNs), yet these models struggle with capturing complex spatial dependencies, objects with varying scales, the need for manually crafted architecture components, and contextual information. This paper explores the shortcomings of CNN-based models and the shift towards transformer architectures to overcome those limitations. This work reviews state-of-the-art transformer-based segmentation models, addressing segmentation-specific challenges and their solutions. The paper discusses current challenges in transformer-based segmentation and outlines promising future trends, such as lightweight architectures and enhanced data efficiency. This survey serves as a guide for understanding the impact of transformers in advancing segmentation capabilities and overcoming the limitations of traditional models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 525,122
2011.04749 | Longitudinal modeling of MS patient trajectories improves predictions of disability progression | Research in Multiple Sclerosis (MS) has recently focused on extracting knowledge from real-world clinical data sources. This type of data is more abundant than data produced during clinical trials and potentially more informative about real-world clinical practice. However, this comes at the cost of less curated and controlled data sets. In this work, we address the task of optimally extracting information from longitudinal patient data in the real-world setting with a special focus on the sporadic sampling problem. Using the MSBase registry, we show that with machine learning methods suited for patient trajectories modeling, such as recurrent neural networks and tensor factorization, we can predict disability progression of patients in a two-year horizon with an ROC-AUC of 0.86, which represents a 33% decrease in the ranking pair error (1-AUC) compared to reference methods using static clinical features. Compared to the models available in the literature, this work uses the most complete patient history for MS disease progression prediction. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 205,671
2308.05765 | Unleashing the Power of Extra-Tree Feature Selection and Random Forest Classifier for Improved Survival Prediction in Heart Failure Patients | Heart failure is a life-threatening condition that affects millions of people worldwide. The ability to accurately predict patient survival can aid in early intervention and improve patient outcomes. In this study, we explore the potential of utilizing data pre-processing techniques and the Extra-Tree (ET) feature selection method in conjunction with the Random Forest (RF) classifier to improve survival prediction in heart failure patients. By leveraging the strengths of ET feature selection, we aim to identify the most significant predictors associated with heart failure survival. Using the public UCL Heart failure (HF) survival dataset, we employ the ET feature selection algorithm to identify the most informative features. These features are then used as input for a grid search of the RF. Finally, the tuned RF model was trained and evaluated using different metrics. The approach achieved 98.33% accuracy, which is the highest among existing work. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 384,906
2005.06869 | Lower bounds for invariant statistical models with applications to principal component analysis | This paper develops nonasymptotic information inequalities for the estimation of the eigenspaces of a covariance operator. These results generalize previous lower bounds for the spiked covariance model, and they show that recent upper bounds for models with decaying eigenvalues are sharp. The proof relies on lower bound techniques based on group invariance arguments which can also deal with a variety of other statistical models. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 177,129
2104.07916 | Augmenting Deep Classifiers with Polynomial Neural Networks | Deep neural networks have been the driving force behind the success in classification tasks, e.g., object and audio recognition. Impressive results and generalization have been achieved by a variety of recently proposed architectures, the majority of which are seemingly disconnected. In this work, we cast the study of deep classifiers under a unifying framework. In particular, we express state-of-the-art architectures (e.g., residual and non-local networks) in the form of different degree polynomials of the input. Our framework provides insights on the inductive biases of each model and enables natural extensions building upon their polynomial nature. The efficacy of the proposed models is evaluated on standard image and audio classification benchmarks. The expressivity of the proposed models is highlighted both in terms of increased model performance as well as model compression. Lastly, the extensions allowed by this taxonomy showcase benefits in the presence of limited data and long-tailed data distributions. We expect this taxonomy to provide links between existing domain-specific architectures. The source code is available at \url{https://github.com/grigorisg9gr/polynomials-for-augmenting-NNs}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 230,596 |
2410.17481 | AI, Global Governance, and Digital Sovereignty | This essay examines how Artificial Intelligence (AI) systems are becoming more integral to international affairs by affecting how global governors exert power and pursue digital sovereignty. We first introduce a taxonomy of multifaceted AI payoffs for governments and corporations related to instrumental, structural, and discursive power in the domains of violence, markets, and rights. We next leverage different institutional and practice perspectives on sovereignty to assess how digital sovereignty is variously implicated in AI-empowered global governance. States both seek sovereign control over AI infrastructures in the institutional approach, while establishing sovereign competence through AI infrastructures in the practice approach. Overall, we present the digital sovereignty stakes of AI as related to entanglements of public and private power. Rather than foreseeing technology companies as replacing states, we argue that AI systems will embed in global governance to create dueling dynamics of public/private cooperation and contestation. We conclude with sketching future directions for IR research on AI and global governance. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 501,477 |
1303.3636 | Low-Complexity Adaptive Set-Membership Reduced-rank LCMV Beamforming | This paper proposes a new adaptive algorithm for the implementation of the linearly constrained minimum variance (LCMV) beamformer. The proposed algorithm utilizes the set-membership filtering (SMF) framework and the reduced-rank joint iterative optimization (JIO) scheme. We develop a stochastic gradient (SG) based algorithm for the beamformer design. An effective time-varying bound is employed in the proposed method to adjust the step sizes, avoid the misadjustment and the risk of overbounding or underbounding. Simulations are performed to show the improved performance of the proposed algorithm in comparison with existing full-rank and reduced-rank methods. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 22,936 |
2403.08491 | Compliant Hierarchical Control for Arbitrary Equality and Inequality Tasks with Strict and Soft Priorities | When a robotic system is redundant with respect to a given task, the remaining degrees of freedom can be used to satisfy additional objectives. With current robotic systems having more and more degrees of freedom, this can lead to an entire hierarchy of tasks that need to be solved according to given priorities. In this paper, the first compliant control strategy is presented that allows to consider an arbitrary number of equality and inequality tasks, while still preserving the natural inertia of the robot. The approach is therefore a generalization of a passivity-based controller to the case of an arbitrary number of equality and inequality tasks. The key idea of the method is to use a Weighted Hierarchical Quadratic Problem to extract the set of active tasks and use the latter to perform a coordinate transformation that inertially decouples the tasks. Thereby unifying the line of research focusing on optimization-based and passivity-based multi-task controllers. The method is validated in simulation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 437,359
2310.07093 | Argumentative Stance Prediction: An Exploratory Study on Multimodality and Few-Shot Learning | To advance argumentative stance prediction as a multimodal problem, the First Shared Task in Multimodal Argument Mining hosted stance prediction in crucial social topics of gun control and abortion. Our exploratory study attempts to evaluate the necessity of images for stance prediction in tweets and compare out-of-the-box text-based large-language models (LLM) in few-shot settings against fine-tuned unimodal and multimodal models. Our work suggests an ensemble of fine-tuned text-based language models (0.817 F1-score) outperforms both the multimodal (0.677 F1-score) and text-based few-shot prediction using a recent state-of-the-art LLM (0.550 F1-score). In addition to the differences in performance, our findings suggest that the multimodal models tend to perform better when image content is summarized as natural language over their native pixel structure and, using in-context examples improves few-shot performance of LLMs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 398,825
2311.01660 | Maximum Likelihood Estimation of Flexible Survival Densities with Importance Sampling | Survival analysis is a widely-used technique for analyzing time-to-event data in the presence of censoring. In recent years, numerous survival analysis methods have emerged which scale to large datasets and relax traditional assumptions such as proportional hazards. These models, while being performant, are very sensitive to model hyperparameters including: (1) number of bins and bin size for discrete models and (2) number of cluster assignments for mixture-based models. Each of these choices requires extensive tuning by practitioners to achieve optimal performance. In addition, we demonstrate in empirical studies that: (1) optimal bin size may drastically differ based on the metric of interest (e.g., concordance vs brier score), and (2) mixture models may suffer from mode collapse and numerical instability. We propose a survival analysis approach which eliminates the need to tune hyperparameters such as mixture assignments and bin sizes, reducing the burden on practitioners. We show that the proposed approach matches or outperforms baselines on several real-world datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 405,123
2310.04743 | Resprompt: Residual Connection Prompting Advances Multi-Step Reasoning in Large Language Models | Chain-of-thought (CoT) prompting, which offers step-by-step problem-solving rationales, has impressively unlocked the reasoning potential of large language models (LLMs). Yet, the standard CoT is less effective in problems demanding multiple reasoning steps. This limitation arises from the complex reasoning process in multi-step problems: later stages often depend on the results of several steps earlier, not just the results of the immediately preceding step. Such complexities suggest the reasoning process is naturally represented as a graph. The almost linear and straightforward structure of CoT prompting, however, struggles to capture this complex reasoning graph. To address this challenge, we propose Residual Connection Prompting (RESPROMPT), a new prompting strategy that advances multi-step reasoning in LLMs. Our key idea is to reconstruct the reasoning graph within prompts. We achieve this by integrating necessary connections-links present in the reasoning graph but missing in the linear CoT flow-into the prompts. Termed "residual connections", these links are pivotal in morphing the linear CoT structure into a graph representation, effectively capturing the complex reasoning graphs inherent in multi-step problems. We evaluate RESPROMPT on six benchmarks across three diverse domains: math, sequential, and commonsense reasoning. For the open-sourced LLaMA family of models, RESPROMPT yields a significant average reasoning accuracy improvement of 12.5% on LLaMA-65B and 6.8% on LLaMA2-70B. Breakdown analysis further highlights RESPROMPT particularly excels in complex multi-step reasoning: for questions demanding at least five reasoning steps, RESPROMPT outperforms the best CoT based benchmarks by a remarkable average improvement of 21.1% on LLaMA-65B and 14.3% on LLaMA2-70B. Through extensive ablation studies and analyses, we pinpoint how to most effectively build residual connections. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 397,797
1909.09142 | Using Quantifier Elimination to Enhance the Safety Assurance of Deep Neural Networks | Advances in the field of Machine Learning and Deep Neural Networks (DNNs) have enabled rapid development of sophisticated and autonomous systems. However, the inherent complexity to rigorously assure the safe operation of such systems hinders their real-world adoption in safety-critical domains such as aerospace and medical devices. Hence, there is a surge in interest to explore the use of advanced mathematical techniques such as formal methods to address this challenge. In fact, the initial results of such efforts are promising. Along these lines, we propose the use of quantifier elimination (QE) - a formal method technique, as a complementary technique to the state-of-the-art static analysis and verification procedures. Using an airborne collision avoidance DNN as a case example, we illustrate the use of QE to formulate the precise range forward propagation through a network as well as analyze its robustness. We discuss the initial results of this ongoing work and explore the future possibilities of extending this approach and/or integrating it with other approaches to perform advanced safety assurance of DNNs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 146,161
1908.01381 | On Flying Backwards: Preventing Run-away of Small, Low-speed, Fixed-wing UAVs in Strong Winds | Small, low-speed fixed-wing Unmanned Aerial Vehicles (UAVs) operating autonomously, beyond-visual-line-of-sight (BVLOS) will inevitably encounter winds rising to levels near or exceeding the vehicles' nominal airspeed. In this paper, we develop a nonlinear lateral-directional path following guidance law with explicit consideration of online wind estimates. Energy efficient airspeed reference compensation logic is developed for excess wind scenarios (i.e. when the wind speed rises above the airspeed), enabling either mitigation, prevention, or over-powering of excess wind induced run-away from a given path. The developed guidance law is demonstrated on a representative small, low-speed test UAV in two flight experiments conducted in mountainous regions of Switzerland with strong, turbulent wind conditions, gusts reaching up to 13 meters per second. We demonstrate track-keeping errors of less than 1 meter consistently maintained during a representative duration of gusting, excess winds and a mean ground speed undershoot of 0.5 meters per second from the commanded minimum forward ground speed demonstrated in over 5 minutes of the showcased flight results. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 140,749
2405.08577 | Intelligent Control in 6G Open RAN: Security Risk or Opportunity? | The Open Radio Access Network (Open RAN) framework, emerging as the cornerstone for Artificial Intelligence (AI)-enabled Sixth-Generation (6G) mobile networks, heralds a transformative shift in radio access network architecture. As the adoption of Open RAN accelerates, ensuring its security becomes critical. The RAN Intelligent Controller (RIC) plays a central role in Open RAN by improving network efficiency and flexibility. Nevertheless, it also brings about potential security risks that need careful scrutiny. Therefore, it is imperative to evaluate the current state of RIC security comprehensively. This assessment is essential to gain a profound understanding of the security considerations associated with RIC. This survey combines a comprehensive analysis of RAN security, tracing its evolution from 2G to 5G, with an in-depth exploration of RIC security, marking the first comprehensive examination of its kind in the literature. Real-world security incidents involving RIC are vividly illustrated, providing practical insights. The study evaluates the security implications of the RIC within the 6G Open RAN context, addressing security vulnerabilities, mitigation strategies, and potential enhancements. It aims to guide stakeholders in the telecom industry toward a secure and dependable telecommunications infrastructure. The article serves as a valuable reference, shedding light on the RIC's crucial role within the broader network infrastructure and emphasizing security's paramount importance. This survey also explores the promising security opportunities that the RIC presents for enhancing network security and resilience in the context of 6G mobile networks. It outlines open issues, lessons learned, and future research directions in the domain of intelligent control in 6G open RAN, facilitating a comprehensive understanding of this dynamic landscape. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | true | 454,142
2309.17341 | MixQuant: Mixed Precision Quantization with a Bit-width Optimization Search | Quantization is a technique for creating efficient Deep Neural Networks (DNNs), which involves performing computations and storing tensors at lower bit-widths than f32 floating point precision. Quantization reduces model size and inference latency, and therefore allows for DNNs to be deployed on platforms with constrained computational resources and real-time systems. However, quantization can lead to numerical instability caused by roundoff error which leads to inaccurate computations and therefore, a decrease in quantized model accuracy. Similarly to prior works, which have shown that both biases and activations are more sensitive to quantization and are best kept in full precision or quantized with higher bit-widths, we show that some weights are more sensitive than others which should be reflected on their quantization bit-width. To that end we propose MixQuant, a search algorithm that finds the optimal custom quantization bit-width for each layer weight based on roundoff error and can be combined with any quantization method as a form of pre-processing optimization. We show that combining MixQuant with BRECQ, a state-of-the-art quantization method, yields better quantized model accuracy than BRECQ alone. Additionally, we combine MixQuant with vanilla asymmetric quantization to show that MixQuant has the potential to optimize the performance of any quantization technique. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 395,739
1805.05633 | A Deeply-Recursive Convolutional Network for Crowd Counting | The estimation of crowd count in images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning. Recently, the convolutional neural network (CNN) based approaches have been shown to be more effective in crowd counting than traditional methods that use handcrafted features. However, the existing CNN-based methods still suffer from a large number of parameters and large storage space, which require high storage and computing resources and thus limit the real-world application. Consequently, we propose a deeply-recursive network (DR-ResNet) based on ResNet blocks for crowd counting. The recursive structure makes the network deeper while keeping the number of parameters unchanged, which enhances network capability to capture statistical regularities in the context of the crowd. Besides, we generate a new dataset from the video-monitoring data of Beijing bus station. Experimental results have demonstrated that the proposed method outperforms most state-of-the-art methods with far fewer parameters. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 97,469
2007.04192 | Agent-Based Modelling: An Overview with Application to Disease Dynamics | Modelling and computational methods have been essential in advancing quantitative science, especially in the past two decades with the availability of vast amount of complex, voluminous, and heterogeneous data. In particular, there has been a surge of interest in agent-based modelling, largely due to its capabilities to exploit such data and make significant projections. However, any well-established quantitative method relies on theoretical frameworks for both construction and analysis. While the computational aspects of agent-based modelling have been detailed in existing literature, the underlying theoretical basis has rarely been used in its construction. In this exposition, we provide an overview of the theoretical foundation of agent-based modelling and establish a relationship with its computational implementation. In addition to detailing the main characteristics of this computational methodology, we illustrate its application to simulating the spread of an infectious disease in a simple, dynamical process. As the use of agent-based models expands to various disciplines, our review highlights the need for directed research efforts to develop theoretical methods and analytical tools for the analysis of such models. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 186,278 |
1910.10597 | Sample Complexity of Reinforcement Learning using Linearly Combined Model Ensembles | Reinforcement learning (RL) methods have been shown to be capable of learning intelligent behavior in rich domains. However, this has largely been done in simulated domains without adequate focus on the process of building the simulator. In this paper, we consider a setting where we have access to an ensemble of pre-trained and possibly inaccurate simulators (models). We approximate the real environment using a state-dependent linear combination of the ensemble, where the coefficients are determined by the given state features and some unknown parameters. Our proposed algorithm provably learns a near-optimal policy with a sample complexity polynomial in the number of unknown parameters, and incurs no dependence on the size of the state (or action) space. As an extension, we also consider the more challenging problem of model selection, where the state features are unknown and can be chosen from a large candidate set. We provide exponential lower bounds that illustrate the fundamental hardness of this problem, and develop a provably efficient algorithm under additional natural assumptions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 150,534
2409.09067 | SLiCK: Exploiting Subsequences for Length-Constrained Keyword Spotting | User-defined keyword spotting on a resource-constrained edge device is challenging. However, keywords are often bounded by a maximum keyword length, which has been largely under-leveraged in prior works. Our analysis of keyword-length distribution shows that user-defined keyword spotting can be treated as a length-constrained problem, eliminating the need for aggregation over variable text length. This leads to our proposed method for efficient keyword spotting, SLiCK (exploiting Subsequences for Length-Constrained Keyword spotting). We further introduce a subsequence-level matching scheme to learn audio-text relations at a finer granularity, thus distinguishing similar-sounding keywords more effectively through enhanced context. In SLiCK, the model is trained with a multi-task learning approach using two modules: Matcher (utterance-level matching task, novel subsequence-level matching task) and Encoder (phoneme recognition task). The proposed method improves the baseline results on Libriphrase hard dataset, increasing AUC from $88.52$ to $94.9$ and reducing EER from $18.82$ to $11.1$. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 488,161 |
2105.11834 | Improving THz Coverage for 6G URLLC Services via Exploiting Mobile Computing | Terahertz (THz) communication (0.1-10 THz) is regarded as a promising technology, which provides rich available bandwidth and high data rates of terabits per second (Tbps). However, THz signals suffer from high path loss, which profoundly decreases the transmission distance. To improve THz coverage, we consider the aid of mobile computing. Specifically, job offloading decision in mobile computing and frequency allocation in communication are co-designed to maximize distance and concurrently support ultra-reliable low-latency communications (URLLC) services for the sixth-generation (6G) mobile communication. Further, the above optimization problem is non-convex, then an effective and low-complexity method is proposed via exploiting the special structure of this problem. Finally, numerical results verify the effectiveness of our work. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 236,836
2007.02561 | Learning from Failure: Training Debiased Classifier from Biased Classifier | Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased. While previous work tackles this issue by using explicit labeling on the spuriously correlated attributes or presuming a particular bias type, we instead utilize a cheaper, yet generic form of human knowledge, which can be widely applicable to various types of bias. We first observe that neural networks learn to rely on the spurious correlation only when it is "easier" to learn than the desired knowledge, and such reliance is most prominent during the early phase of training. Based on the observations, we propose a failure-based debiasing scheme by training a pair of neural networks simultaneously. Our main idea is twofold; (a) we intentionally train the first network to be biased by repeatedly amplifying its "prejudice", and (b) we debias the training of the second network by focusing on samples that go against the prejudice of the biased network in (a). Extensive experiments demonstrate that our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets. Surprisingly, our framework even occasionally outperforms the debiasing methods requiring explicit supervision of the spuriously correlated attributes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 185,784
2111.09131 | An efficient two-dimensional heat transfer model for building envelopes | A two-dimensional model is proposed for energy efficiency assessment through the simulation of heat transfer in building envelopes, considering the influence of the surrounding environment. The model is based on the \DF ~approach that provides an explicit scheme with a relaxed stability condition. The model is first validated using an analytical solution and then compared to three other standard schemes. Results show that the proposed model offers a good compromise in terms of high accuracy and reduced computational efforts. Then, a more complex case study is investigated, considering non-uniform shading effects due to the neighboring buildings. In addition, the surface heat transfer coefficient varies with wind velocity and height, which imposes an addition non-uniform boundary condition. After showing the reliability of the model prediction, a comparison over almost $120$ cities in France is carried out between the two- and the one-dimensional approaches of the current building simulation programs. Important discrepancies are observed for regions with high magnitudes of solar radiation and wind velocity. Last, a sensitivity analysis is carried out using a derivative-based approach. It enables to assess the variability of the solution according to the modeling of the two-dimensional boundary conditions. Moreover, the proposed model computes efficiently the solution and its sensitivity to the modeling of the urban environment. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 266,926 |
2204.10817 | Reward Reports for Reinforcement Learning | Building systems that are good for society in the face of complex societal effects requires a dynamic approach. Recent approaches to machine learning (ML) documentation have demonstrated the promise of discursive frameworks for deliberation about these complexities. However, these developments have been grounded in a static ML paradigm, leaving the role of feedback and post-deployment performance unexamined. Meanwhile, recent work in reinforcement learning has shown that the effects of feedback and optimization objectives on system behavior can be wide-ranging and unpredictable. In this paper we sketch a framework for documenting deployed and iteratively updated learning systems, which we call Reward Reports. Taking inspiration from various contributions to the technical literature on reinforcement learning, we outline Reward Reports as living documents that track updates to design choices and assumptions behind what a particular automated system is optimizing for. They are intended to track dynamic phenomena arising from system deployment, rather than merely static properties of models or data. After presenting the elements of a Reward Report, we discuss a concrete example: Meta's BlenderBot 3 chatbot. Several others for game-playing (DeepMind's MuZero), content recommendation (MovieLens), and traffic control (Project Flow) are included in the appendix. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 292,926 |
2312.02224 | Tracing Hyperparameter Dependencies for Model Parsing via Learnable
Graph Pooling Network | Model Parsing defines the research task of predicting hyperparameters of the generative model (GM), given a generated image as input. Since a diverse set of hyperparameters is jointly employed by the generative model, and dependencies often exist among them, it is crucial to learn these hyperparameter dependencies for improved model parsing performance. To explore such important dependencies, we propose a novel model parsing method called Learnable Graph Pooling Network (LGPN). Specifically, we transform model parsing into a graph node classification task, using graph nodes and edges to represent hyperparameters and their dependencies, respectively. Furthermore, LGPN incorporates a learnable pooling-unpooling mechanism tailored to model parsing, which adaptively learns hyperparameter dependencies of GMs used to generate the input image. We also extend our proposed method to CNN-generated image detection and coordinate attacks detection. Empirically, we achieve state-of-the-art results in model parsing and its extended applications, showing the effectiveness of our method. Our source code is available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 412,774 |
2310.04295 | Identifying Representations for Intervention Extrapolation | The premise of identifiable and causal representation learning is to improve the current representation learning paradigm in terms of generalizability or robustness. Despite recent progress in questions of identifiability, more theoretical results demonstrating concrete advantages of these methods for downstream tasks are needed. In this paper, we consider the task of intervention extrapolation: predicting how interventions affect an outcome, even when those interventions are not observed at training time, and show that identifiable representations can provide an effective solution to this task even if the interventions affect the outcome non-linearly. Our setup includes an outcome Y, observed features X, which are generated as a non-linear transformation of latent features Z, and exogenous action variables A, which influence Z. The objective of intervention extrapolation is to predict how interventions on A that lie outside the training support of A affect Y. Here, extrapolation becomes possible if the effect of A on Z is linear and the residual when regressing Z on A has full support. As Z is latent, we combine the task of intervention extrapolation with identifiable representation learning, which we call Rep4Ex: we aim to map the observed features X into a subspace that allows for non-linear extrapolation in A. We show that the hidden representation is identifiable up to an affine transformation in Z-space, which is sufficient for intervention extrapolation. The identifiability is characterized by a novel constraint describing the linearity assumption of A on Z. Based on this insight, we propose a method that enforces the linear invariance constraint and can be combined with any type of autoencoder. We validate our theoretical findings through synthetic experiments and show that our approach succeeds in predicting the effects of unseen interventions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 397,597 |
2412.11555 | TS-SatFire: A Multi-Task Satellite Image Time-Series Dataset for
Wildfire Detection and Prediction | Wildfire monitoring and prediction are essential for understanding wildfire behaviour. With extensive Earth observation data, these tasks can be integrated and enhanced through multi-task deep learning models. We present a comprehensive multi-temporal remote sensing dataset for active fire detection, daily wildfire monitoring, and next-day wildfire prediction. Covering wildfire events in the contiguous U.S. from January 2017 to October 2021, the dataset includes 3552 surface reflectance images and auxiliary data such as weather, topography, land cover, and fuel information, totalling 71 GB. The lifecycle of each wildfire is documented, with labels for active fires (AF) and burned areas (BA), supported by manual quality assurance of AF and BA test labels. The dataset supports three tasks: a) active fire detection, b) daily burned area mapping, and c) wildfire progression prediction. Detection tasks use pixel-wise classification of multi-spectral, multi-temporal images, while prediction tasks integrate satellite and auxiliary data to model fire dynamics. This dataset and its benchmarks provide a foundation for advancing wildfire research using deep learning. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 517,473 |
2402.03353 | Tweet Influence on Market Trends: Analyzing the Impact of Social Media
Sentiment on Biotech Stocks | This study investigates the relationship between tweet sentiment across diverse categories (news, company opinions, CEO opinions, and competitor opinions) and stock market behavior in the biotechnology sector, with a focus on understanding the impact of social media discourse on investor sentiment and decision-making processes. We analyzed historical stock market data for ten of the largest and most influential pharmaceutical companies alongside Twitter data related to COVID-19, vaccines, the companies, and their respective CEOs. Using VADER sentiment analysis, we examined the sentiment scores of tweets and assessed their relationships with stock market performance. We employed ARIMA (AutoRegressive Integrated Moving Average) and VAR (Vector AutoRegression) models to forecast stock market performance, incorporating sentiment covariates to improve predictions. Our findings revealed a complex interplay between tweet sentiment, news, biotech companies, their CEOs, and stock market performance, emphasizing the importance of considering diverse factors when modeling and predicting stock prices. This study provides valuable insights into the influence of social media on the financial sector and lays a foundation for future research aimed at refining stock price prediction models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 426,976 |
1709.00781 | Non-Uniform Wavelet Sampling for RF Analog-to-Information Conversion | Feature extraction, such as spectral occupancy, interferer energy and type, or direction-of-arrival, from wideband radio-frequency~(RF) signals finds use in a growing number of applications as it enhances RF transceivers with cognitive abilities and enables parameter tuning of traditional RF chains. In power and cost limited applications, e.g., for sensor nodes in the Internet of Things, wideband RF feature extraction with conventional, Nyquist-rate analog-to-digital converters is infeasible. However, the structure of many RF features (such as signal sparsity) enables the use of compressive sensing (CS) techniques that acquire such signals at sub-Nyquist rates. While such CS-based analog-to-information (A2I) converters have the potential to enable low-cost and energy-efficient wideband RF sensing, they suffer from a variety of real-world limitations, such as noise folding, low sensitivity, aliasing, and limited flexibility. This paper proposes a novel CS-based A2I architecture called non-uniform wavelet sampling (NUWS). Our solution extracts a carefully-selected subset of wavelet coefficients directly in the RF domain, which mitigates the main issues of existing A2I converter architectures. For multi-band RF signals, we propose a specialized variant called non-uniform wavelet bandpass sampling (NUWBS), which further improves sensitivity and reduces hardware complexity by leveraging the multi-band signal structure. We use simulations to demonstrate that NUWBS approaches the theoretical performance limits of $\ell_1$-norm-based sparse signal recovery. We investigate hardware-design aspects and show ASIC measurement results for the wavelet generation stage, which highlight the efficacy of NUWBS for a broad range of RF feature extraction tasks in cost- and power-limited applications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 79,971 |
2502.06116 | Event Vision Sensor: A Review | By monitoring temporal contrast, event-based vision sensors can provide high temporal resolution and low latency while maintaining low power consumption and simplicity in circuit structure. These characteristics have garnered significant attention in both academia and industry. In recent years, the application of back-illuminated (BSI) technology, wafer stacking techniques, and industrial interfaces has brought new opportunities for enhancing the performance of event-based vision sensors. This is evident in the substantial advancements made in reducing noise, improving resolution, and increasing readout rates. Additionally, the integration of these technologies has enhanced the compatibility of event-based vision sensors with current and edge vision systems, providing greater possibilities for their practical applications. This paper will review the progression from neuromorphic engineering to state-of-the-art event-based vision sensor technologies, including their development trends, operating principles, and key features. Moreover, we will delve into the sensitivity of event-based vision sensors and the opportunities and challenges they face in the realm of infrared imaging, providing references for future research and applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 531,930 |
2412.08160 | DG-Mamba: Robust and Efficient Dynamic Graph Structure Learning with
Selective State Space Models | Dynamic graphs exhibit intertwined spatio-temporal evolutionary patterns, widely existing in the real world. Nevertheless, the structure incompleteness, noise, and redundancy result in poor robustness for Dynamic Graph Neural Networks (DGNNs). Dynamic Graph Structure Learning (DGSL) offers a promising way to optimize graph structures. However, aside from encountering unacceptable quadratic complexity, it overly relies on heuristic priors, making it hard to discover underlying predictive patterns. How to efficiently refine the dynamic structures, capture intrinsic dependencies, and learn robust representations remains under-explored. In this work, we propose the novel DG-Mamba, a robust and efficient Dynamic Graph structure learning framework with the Selective State Space Models (Mamba). To accelerate the spatio-temporal structure learning, we propose a kernelized dynamic message-passing operator that reduces the quadratic time complexity to linear. To capture global intrinsic dynamics, we establish the dynamic graph as a self-contained system with State Space Model. By discretizing the system states with the cross-snapshot graph adjacency, we enable the long-distance dependencies capturing with the selective snapshot scan. To make the learned dynamic structures more expressive and informative, we propose the self-supervised Principle of Relevant Information for DGSL to regularize the most relevant yet least redundant information, enhancing global robustness. Extensive experiments demonstrate that DG-Mamba is more robust and efficient than state-of-the-art baselines under adversarial attacks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 515,971 |
2309.09517 | FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural
Networks | Federated training of Graph Neural Networks (GNN) has become popular in recent years due to its ability to perform graph-related tasks under data isolation scenarios while preserving data privacy. However, graph heterogeneity issues in federated GNN systems continue to pose challenges. Existing frameworks address the problem by representing local tasks using different statistics and relating them through a simple aggregation mechanism. However, these approaches are limited in two respects: low-quality task-relatedness quantification and ineffective exploitation of the collaboration structure. To address these issues, we propose FedGKD, a novel federated GNN framework that utilizes a novel client-side graph dataset distillation method to extract task features that better describe task-relatedness, and introduces a novel server-side aggregation mechanism that is aware of the global collaboration structure. We conduct extensive experiments on six real-world datasets of different scales, demonstrating that our framework outperforms existing approaches. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 392,647 |
1901.01536 | Exploring applications of deep reinforcement learning for real-world
autonomous driving systems | Deep Reinforcement Learning (DRL) has become increasingly powerful in recent years, with notable achievements such as Deepmind's AlphaGo. It has been successfully deployed in commercial vehicles like Mobileye's path planning system. However, the vast majority of work on DRL is focused on toy examples in controlled synthetic car simulator environments such as TORCS and CARLA. In general, DRL is still in its infancy in terms of usability in real-world applications. Our goal in this paper is to encourage real-world deployment of DRL in various autonomous driving (AD) applications. We first provide an overview of the tasks in autonomous driving systems, reinforcement learning algorithms and applications of DRL to AD systems. We then discuss the challenges which must be addressed to enable further progress towards real-world deployment. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 117,996 |
2302.08917 | Massively Multilingual Shallow Fusion with Large Language Models | While large language models (LLM) have made impressive progress in natural language processing, it remains unclear how to utilize them in improving automatic speech recognition (ASR). In this work, we propose to train a single multilingual language model (LM) for shallow fusion in multiple languages. We push the limits of the multilingual LM to cover up to 84 languages by scaling up using a mixture-of-experts LLM, i.e., generalist language model (GLaM). When the number of experts increases, GLaM dynamically selects only two at each decoding step to keep the inference computation roughly constant. We then apply GLaM to a multilingual shallow fusion task based on a state-of-the-art end-to-end model. Compared to a dense LM of similar computation during inference, GLaM reduces the WER of an English long-tail test set by 4.4% relative. In a multilingual shallow fusion task, GLaM improves 41 out of 50 languages with an average relative WER reduction of 3.85%, and a maximum reduction of 10%. Compared to the baseline model, GLaM achieves an average WER reduction of 5.53% over 43 languages. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 346,229 |
2502.05547 | Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in
Federated Learning | Federated learning (FL) is inherently susceptible to privacy breaches and poisoning attacks. To tackle these challenges, researchers have separately devised secure aggregation mechanisms to protect data privacy and robust aggregation methods that withstand poisoning attacks. However, simultaneously addressing both concerns is challenging; secure aggregation facilitates poisoning attacks as most anomaly detection techniques require access to unencrypted local model updates, which are obscured by secure aggregation. The few recent efforts to tackle both challenges simultaneously often depend on the impractical assumption of non-colluding two-server setups that disrupt FL's topology, or on three-party computation, which introduces scalability issues, complicating deployment and application. To overcome this dilemma, this paper introduces a Dual Defense Federated learning (DDFed) framework. DDFed simultaneously boosts privacy protection and mitigates poisoning attacks, without introducing new participant roles or disrupting the existing FL topology. DDFed initially leverages cutting-edge fully homomorphic encryption (FHE) to securely aggregate model updates, without the impractical requirement for non-colluding two-server setups, and ensures strong privacy protection. Additionally, we propose a unique two-phase anomaly detection mechanism for encrypted model updates, featuring secure similarity computation and feedback-driven collaborative selection, with additional measures to prevent potential privacy breaches from Byzantine clients incorporated into the detection process. We conducted extensive experiments on various model poisoning attacks and FL scenarios, including both cross-device and cross-silo FL. Experiments on publicly available datasets demonstrate that DDFed successfully protects model privacy and effectively defends against model poisoning threats. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 531,661 |