id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1508.07435 | Subdifferential-based implicit return-mapping operators in Mohr-Coulomb plasticity | The paper is devoted to a constitutive solution, limit load analysis and Newton-like methods in elastoplastic problems containing the Mohr-Coulomb yield criterion. Within the constitutive problem, we introduce a self-contained derivation of the implicit return-mapping solution scheme using a recent subdifferential-based treatment. Unlike conventional techniques based on Koiter's rules, the presented scheme a priori detects the position of the unknown stress tensor on the yield surface even if the constitutive solution cannot be found in closed form. This fact eliminates blind guesswork from the scheme, makes it possible to analyze properties of the constitutive operator, and simplifies construction of the consistent tangent operator, which is important for the semismooth Newton method applied to the incremental boundary value elastoplastic problem. The incremental problem in Mohr-Coulomb plasticity is combined with limit load analysis. Besides a conventional direct method of incremental limit analysis, a recent indirect one is introduced and its advantages are described. The paper contains 2D and 3D numerical experiments on slope stability with publicly available Matlab implementations. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 46,404 |
2203.03516 | A Large Force Haptic Interface with Modular Linear Actuators | This paper presents a haptic interface with modular linear actuators which can address limitations of conventional devices based on rotatory joints. The proposed haptic interface is composed of parallel linear actuators that provide high backdrivability and small inertia. The performance of the haptic interface is compared with conventional mechanisms in terms of force capability, reflected inertia, and structural stiffness. High stiffness and a large range of motion with high force capability are achieved with the proposed mechanism; these properties are in a trade-off relationship in traditional haptic interfaces. The device can apply up to 83 N continuously, which is three times larger than most haptic devices. The theoretical minimum haptic force density and the stiffness of the proposed mechanism were 1.3 to 1.9 times and 37 times those of conventional mechanisms under similar conditions, respectively. The system is also scalable because its structural stiffness only depends on the timing belt stiffness, while that of conventional haptic interfaces is inversely proportional to the cube of structural lengths. The modular actuator design enables changing the degrees of freedom (DOFs) for different applications. The proposed haptic interface was tested in an interaction experiment with a virtual environment with rigid walls. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 284,113 |
1708.09596 | SINR-Threshold Scheduling with Binary Power Control for D2D Networks | In this paper, we consider a device-to-device communication network in which $K$ transmitter-receiver pairs are sharing spectrum with each other. We propose a novel but simple binary scheduling scheme for this network to maximize the average sum rate of the pairs. According to the scheme, each receiver predicts its Signal-to-Interference-plus-Noise Ratio (SINR), assuming \emph{all} other user pairs are active, and compares it to a preassigned threshold to decide whether its corresponding transmitter should be activated or not. For our proposed scheme, the optimal threshold that maximizes the expected sum rate is obtained analytically for the two user-pair case and empirically in the general $K$ user-pair case. Simulation results reveal that our proposed SINR-threshold scheduling scheme outperforms ITLinQ \cite{navid}, FlashLinQ \cite{flash} and the method presented in \cite{G} in terms of the expected sum rate (network throughput). In addition, the computational complexity of the proposed scheme is $O(K)$, outperforming both ITLinQ and FlashLinQ, which have $O(K^2)$ complexity requirements. Moreover, we also discuss the application of our proposed scheme to an operator-assisted cellular D2D heterogeneous network. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 79,808 |
cs/0405005 | Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard | Maximum-likelihood decoding is one of the central algorithmic problems in coding theory. It has been known for over 25 years that maximum-likelihood decoding of general linear codes is NP-hard. Nevertheless, it was so far unknown whether maximum-likelihood decoding remains hard for any specific family of codes with nontrivial algebraic structure. In this paper, we prove that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes remains hard even with unlimited preprocessing, thereby strengthening a result of Bruck and Naor. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 538,169 |
2110.08377 | Starkit: RoboCup Humanoid KidSize 2021 Worldwide Champion Team Paper | This article is devoted to the features that were under development between RoboCup 2019 Sydney and RoboCup 2021 Worldwide. These features include vision-related matters, such as detection and localization, as well as mechanical and algorithmic novelties. Since the competition was held virtually, simulation-specific features are also considered in the article. We give an overview of the approaches that were tried out, along with an analysis of their preconditions and perspectives and an evaluation of their performance. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 261,364 |
2203.16230 | Evaluation of semantic relations impact in query expansion-based retrieval systems | With the increasing demand for intelligent systems capable of operating in different contexts (e.g. users on the move), the correct interpretation of the user need by such systems has become crucial to giving consistent answers to user questions. The most effective applications addressing this task are in the fields of natural language processing and semantic expansion of terms. These techniques are aimed at estimating the goal of an input query by reformulating it as an intent, commonly relying on textual resources built by exploiting different semantic relations like \emph{synonymy}, \emph{antonymy} and many others. The aim of this paper is to generate such resources using the labels of a given taxonomy as a source of information. The obtained resources are integrated into a plain classifier for reformulating a set of input queries as intents and tracking the effect of each relation, in order to quantify the impact of each semantic relation on the classification. As an extension to this, the best tradeoff between improvement and noise introduction when combining such relations is evaluated. The assessment is made by generating the resources and their combinations and using them for tuning the classifier, which is used to reformulate the user questions as labels. The evaluation employs a wide and varied taxonomy as a use case, exploiting its labels as the basis for the semantic expansion and producing several corpora with the purpose of enhancing the pseudo-query estimation. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 288,706 |
1706.08033 | Decomposing Motion and Content for Natural Video Sequence Prediction | We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 75,934 |
2109.11801 | SIM2REALVIZ: Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation | The Robotics community has started to heavily rely on increasingly realistic 3D simulators for large-scale training of robots on massive amounts of data. But once robots are deployed in the real world, the simulation gap, as well as changes in the real world (e.g. lights, objects displacements) lead to errors. In this paper, we introduce Sim2RealViz, a visual analytics tool to assist experts in understanding and reducing this gap for robot ego-pose estimation tasks, i.e. the estimation of a robot's position using trained models. Sim2RealViz displays details of a given model and the performance of its instances in both simulation and the real world. Experts can identify environment differences that impact model predictions at a given location and explore through direct interactions with the model hypothesis to fix it. We detail the design of the tool, along with case studies related to the exploitation of the regression-to-the-mean bias and how it can be addressed, and to how models are perturbed by the disappearance of landmarks such as bikes. | true | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 257,065 |
1804.05083 | Incentive design for learning in user-recommendation systems with time-varying states | We consider the problem of how strategic users with asymmetric information can learn an underlying time-varying state in a user-recommendation system. Users who observe private signals about the state sequentially make a decision about buying a product whose value varies with time in an ergodic manner. We formulate the team problem as an instance of a decentralized stochastic control problem and characterize its optimal policies. With strategic users, we design incentives such that users reveal their true private signals, so that the gap between the strategic and team objectives is small and the overall expected incentive payments are also small. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 94,990 |
2202.07050 | Artificial Intelligence-Based Smart Grid Vulnerabilities and Potential Solutions for Fake-Normal Attacks: A Short Review | Smart grid systems are critical to the power industry; however, their sophisticated architectural design and operations expose them to a number of cybersecurity threats, such as data tampering, data eavesdropping, and Denial of Service, among others. Artificial Intelligence (AI)-based technologies are becoming increasingly popular for detecting cyber attacks in a variety of computer settings, and several efforts have been made to secure various systems. Present AI systems are being exposed and defeated by the recent emergence of sophisticated adversarial systems such as Generative Adversarial Networks (GANs). The purpose of this short review is to outline some of the initiatives to protect smart grid systems, their obstacles, and what might be a potential future AI research direction. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 280,409 |
2406.04834 | Annotating FrameNet via Structure-Conditioned Language Generation | Despite the remarkable generative capabilities of language models in producing naturalistic language, their effectiveness on explicit manipulation and generation of linguistic structures remains understudied. In this paper, we investigate the task of generating new sentences preserving a given semantic structure, following the FrameNet formalism. We propose a framework to produce novel frame-semantically annotated sentences following an overgenerate-and-filter approach. Our results show that conditioning on rich, explicit semantic information tends to produce generations with high human acceptance, under both prompting and finetuning. Our generated frame-semantic structured annotations are effective at training data augmentation for frame-semantic role labeling in low-resource settings; however, we do not see benefits under higher resource settings. Our study concludes that while generating high-quality, semantically rich data might be within reach, the downstream utility of such generations remains to be seen, highlighting the outstanding challenges with automating linguistic annotation tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 461,865 |
1812.10588 | Synthesizing Robust Domains of Attraction for State-Constrained Perturbed Polynomial Systems | In this paper we propose a novel semi-definite programming based method to compute robust domains of attraction for state-constrained perturbed polynomial systems. A robust domain of attraction is a set of states such that every trajectory starting from it will approach an equilibrium while never violating a specified state constraint, regardless of the actual perturbation. The semi-definite program is constructed by relaxing a generalized Zubov's equation. The existence of solutions to the constructed semi-definite program is guaranteed and there exists a sequence of solutions such that their strict one sub-level sets inner-approximate the interior of the maximal robust domain of attraction in measure under appropriate assumptions. Some illustrative examples demonstrate the performance of our method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 117,403 |
1102.1691 | Schema Redescription in Cellular Automata: Revisiting Emergence in Complex Systems | We present a method to eliminate redundancy in the transition tables of Boolean automata: schema redescription with two symbols. One symbol is used to capture redundancy of individual input variables, and another to capture permutability in sets of input variables: fully characterizing the canalization present in Boolean functions. Two-symbol schemata explain aspects of the behaviour of automata networks that the characterization of their emergent patterns does not capture. We use our method to compare two well-known cellular automata for the density classification task: the human engineered CA GKL, and another obtained via genetic programming (GP). We show that despite having very different collective behaviour, these rules are very similar. Indeed, GKL is a special case of GP. Therefore, we demonstrate that it is more feasible to compare cellular automata via schema redescriptions of their rules, than by looking at their emergent behaviour, leading us to question the tendency in complexity research to pay much more attention to emergent patterns than to local interactions. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | true | 9,083 |
1102.4137 | Using Distributed Rotations for a Low-Complexity Dynamic Decode-and-Forward Relay Protocol | In this paper, we propose to implement the dynamic decode-and-forward (DDF) protocol with distributed rotations. In addition to being the first minimum-delay implementation of the DDF protocol proposed for any number of relays, this technique makes it possible to exploit cooperative diversity without incurring the high decoding complexity of a space-time code. The analysis of outage probabilities for different numbers of relays and rotations shows that the performance of this technique is close to optimal. Moreover, a lower bound on the diversity-multiplexing gain tradeoff (DMT) is provided in the case of a single relay and two rotations. This lower bound reaches the optimal DDF's DMT when the frame length grows to infinity, which shows that even a small number of rotations is enough to obtain good performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 9,298 |
2204.08118 | On the Differential Properties of the Power Mapping $x^{p^m+2}$ | Let $m$ be a positive integer and $p$ a prime. In this paper, we investigate the differential properties of the power mapping $x^{p^m+2}$ over $\mathbb{F}_{p^n}$, where $n=2m$ or $n=2m-1$. For the case $n=2m$, by transforming the derivative equation of $x^{p^m+2}$ and studying some related equations, we completely determine the differential spectrum of this power mapping. For the case $n=2m-1$, the derivative equation can be transformed to a polynomial of degree $p+3$. The problem is more difficult and we obtain partial results about the differential spectrum of $x^{p^m+2}$. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 291,967 |
2501.00990 | Cyber-physical Defense for Heterogeneous Multi-agent Systems Against Exponentially Unbounded Attacks on Signed Digraphs | Cyber-physical systems (CPSs) are subject to attacks on both cyber and physical spaces. In reality, attackers could launch exponentially unbounded false data injection (EU-FDI) attacks, which are more destructive and could lead to the system's collapse or instability. Existing literature generally addresses bounded attack signals and/or bounded-first-order-derivative attack signals, which exposes the CPSs to significant threats. In contrast, this paper proposes a fully-distributed attack-resilient bi-layer defense framework to address the bipartite output containment problem for heterogeneous multi-agent systems on signed digraphs, in the presence of EU-FDI attacks on both the cyber-physical layer (CPL) and the observer layer (OL). First, we design attack-resilient dynamic compensators that utilize data communicated on the OL to estimate the convex combinations of the states and negative states of the leaders. The attack-resilient compensators address the EU-FDI attacks on the OL and guarantee uniformly ultimately bounded (UUB) estimation of the leaders' states. Then, by using the compensators' states, fully-distributed attack-resilient controllers are designed on the CPL to further address the EU-FDI attacks on the actuators. Rigorous mathematical proof based on Lyapunov stability analysis is provided, establishing the theoretical soundness of the proposed bi-layer resilient defense framework by preserving UUB consensus and stability against EU-FDI attacks on both the CPL and the OL. Finally, a comparative case study for heterogeneous multi-agent systems validates the enhanced resilience of the proposed defense strategies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 521,878 |
2010.15322 | Improvement of EAST Data Acquisition Configuration Management | The data acquisition console is an important component of the EAST data acquisition system, which provides unified data acquisition and long-term data storage for diagnostics. The data acquisition console is used to manage the data acquisition configuration information and control the data acquisition workflow. The data acquisition console has been developed over many years, and with the increasing number of data acquisition nodes and the emergence of new control nodes, its configuration management function has become inadequate. The configuration management function of the data acquisition console is therefore being updated. The upgraded data acquisition console, based on LabVIEW, is oriented to the data acquisition administrator, with functions for managing data acquisition nodes, managing control nodes, setting and publishing configuration parameters, batch management, database backup, monitoring the status of data acquisition nodes, controlling the data acquisition workflow, and shot simulation data acquisition tests. The upgraded data acquisition console has been designed and is currently under testing. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 203,731 |
2501.15833 | Mode Switching-Induced Instability of Multi-source Feed DC Microgrid | In DC microgrids (DCMGs), DC-bus signaling based control strategy is extensively used for power management, where mode switching plays a crucial role in achieving multi-source coordination. However, few studies have noticed the impact of mode switching and switching strategies on system voltage stability. To fill this gap, this paper aims to provide a general analysis framework for mode switching-induced instability in multi-source DCMGs. First, manifold theory is employed to analyze the stability of the DCMG switched system. Subsequently, the instability mechanism and its physical interpretation are explored. The positive feedback activated by the decreasing DC bus voltage during the switching process leads to instability, and the switching strategy may inadvertently contribute to this instability. To improve stability, a novel control method based on mode scheduling is proposed, which adjusts the switching strategy and thereby corrects the system trajectory. Finally, both real-time simulations and experimental tests on a DCMG system verify the correctness and effectiveness of the theoretical analysis results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 527,726 |
1709.09662 | Image Space Potential Fields: Constant Size Environment Representation for Vision-based Subsumption Control Architectures | This technical report presents an environment representation for use in vision-based navigation. The representation has two useful properties: 1) it has constant size, which can enable strong run-time guarantees to be made for control algorithms using it, and 2) it is structurally similar to a camera image space, which effectively allows control to operate in the sensor space rather than employing difficult, and often inaccurate, projections into a structurally different control space (e.g. Euclidean). The presented representation is intended to form the basis of a vision-based subsumption control architecture. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 81,667 |
2103.06535 | Calibrated and Partially Calibrated Semi-Generalized Homographies | In this paper, we propose the first minimal solutions for estimating the semi-generalized homography given a perspective and a generalized camera. The proposed solvers use five 2D-2D image point correspondences induced by a scene plane. One of them assumes the perspective camera to be fully calibrated, while the other solver estimates the unknown focal length together with the absolute pose parameters. This setup is particularly important in structure-from-motion and image-based localization pipelines, where a new camera is localized in each step with respect to a set of known cameras and 2D-3D correspondences might not be available. As a consequence of a clever parametrization and the elimination ideal method, our approach only needs to solve a univariate polynomial of degree five or three. The proposed solvers are stable and efficient as demonstrated by a number of synthetic and real-world experiments. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 224,340 |
2109.06838 | ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding | While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surface-level reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 255,295 |
0909.1623 | Two channel paraunitary filter banks based on linear canonical transform | In this paper a two-channel paraunitary filter bank is proposed, which is based on the linear canonical transform instead of the discrete Fourier transform. Input-output relations for such a filter bank are derived in terms of polyphase matrices and modulation matrices. It is shown that, like conventional filter banks, LCT based paraunitary filter banks need only one filter to be designed, and the rest of the filters can be obtained from it. It is also shown that LCT based paraunitary filter banks can be designed by using conventional power-symmetric filter design in the Fourier domain. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,435 |
1907.07951 | Automatic vocal tract landmark localization from midsagittal MRI data | The various speech sounds of a language are obtained by varying the shape and position of the articulators surrounding the vocal tract. Analyzing their variations is crucial for understanding speech production, diagnosing speech disorders and planning therapy. Identifying key anatomical landmarks of these structures on medical images is a pre-requisite for any quantitative analysis, and the rising amount of data generated in the field calls for an automatic solution. The challenge lies in the high inter- and intra-speaker variability, the mutual interaction between the articulators and the moderate quality of the images. This study addresses this issue for the first time and tackles it by means of Deep Learning. It proposes a dedicated network architecture named Flat-net, whose performance is evaluated and compared with eleven state-of-the-art methods from the literature. The dataset contains midsagittal anatomical Magnetic Resonance Images for 9 speakers sustaining 62 articulations with 21 annotated anatomical landmarks per image. Results show that the Flat-net approach outperforms the former methods, leading to an overall Root Mean Square Error of 3.6 pixels/0.36 cm obtained in a leave-one-out procedure over the speakers. The implementation codes are also shared publicly on GitHub. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 139,004 |
1712.05957 | Degrees of Freedom of Interference Networks with Transmitter-Side Caches | This paper studies cache-aided interference networks with arbitrary number of transmitters and receivers, whereby each transmitter has a cache memory of finite size. Each transmitter fills its cache memory from a content library of files in the placement phase. In the subsequent delivery phase, each receiver requests one of the library files, and the transmitters are responsible for delivering the requested files from their caches to the receivers. The objective is to design schemes for the placement and delivery phases to maximize the sum degrees of freedom (sum-DoF) which expresses the capacity of the interference network at the high signal-to-noise ratio regime. Our work mainly focuses on a commonly used uncoded placement strategy. We provide an information-theoretic bound on the sum-DoF for this placement strategy. We demonstrate by an example that the derived bound is tighter than the bounds existing in the literature for small cache sizes. We propose a novel delivery scheme with a higher achievable sum-DoF than those previously given in the literature. The results reveal that the reciprocal of sum-DoF decreases linearly as the transmitter cache size increases. Therefore, increasing cache sizes at transmitters translates to increasing the sum-DoF and, hence, the capacity of the interference networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 86,802 |
1811.03862 | Targeting Solutions in Bayesian Multi-Objective Optimization: Sequential and Batch Versions | Multi-objective optimization aims at finding trade-off solutions to conflicting objectives. These constitute the Pareto optimal set. In the context of expensive-to-evaluate functions, it is impossible and often non-informative to look for the entire set. As an end-user would typically prefer a certain part of the objective space, we modify the Bayesian multi-objective optimization algorithm which uses Gaussian Processes to maximize the Expected Hypervolume Improvement, to focus the search in the preferred region. The cumulated effects of the Gaussian Processes and the targeting strategy lead to a particularly efficient convergence to the desired part of the Pareto set. To take advantage of parallel computing, a multi-point extension of the targeting criterion is proposed and analyzed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 112,948 |
1503.05314 | On the Performance of Turbo Signal Recovery with Partial DFT Sensing Matrices | This letter is on the performance of the turbo signal recovery (TSR) algorithm for partial discrete Fourier transform (DFT) matrices based compressed sensing. Based on state evolution analysis, we prove that TSR with a partial DFT sensing matrix outperforms the well-known approximate message passing (AMP) algorithm with an independent identically distributed (IID) sensing matrix. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 41,234 |
2104.02849 | Relay-Reconfigurable Intelligent Surface Cooperation for Energy-Efficient Multiuser Systems | Reconfigurable intelligent surfaces (RIS) have drawn considerable attention recently due to their controllable scattering elements that are able to direct electromagnetic waves into desirable directions. Although RISs share some similarities with relays, the two have fundamental differences impacting their performance. To harness the benefits of both relaying and RISs, a multi-user communication system is proposed in this paper wherein a relay and an RIS cooperate to improve performance in terms of energy efficiency. To utilize the RIS efficiently, the discrete phase shifts of the RIS elements are optimized along with the beamforming matrices at the transmitter and the relay, targeting the minimization of the total transmit power subject to a quality-of-service (QoS) constraint. Then, two suboptimal efficient solutions are proposed for the resulting discrete and non-convex problem, one based on singular value decomposition (SVD) and uplink-downlink duality and the other based on SVD combined with zero-forcing. Simulations show that the proposed solutions outperform a system with either a relay or an RIS only, especially when both are closer to the users than to the base-station. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 228,873
2203.06451 | Bringing Rolling Shutter Images Alive with Dual Reversed Distortion | Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time during the exposure of the RS camera. This means that the information of each instant GS frame is partially, yet sequentially, embedded into the row-dependent distortion. Inspired by this fact, we address the challenging task of reversing this process, i.e., extracting undistorted GS frames from images suffering from RS distortion. However, since RS distortion is coupled with other factors such as readout settings and the relative velocity of scene elements to the camera, models that only exploit the geometric correlation between temporally adjacent images suffer from poor generality in processing data with different readout settings and dynamic scenes with both camera motion and object motion. In this paper, instead of two consecutive frames, we propose to exploit a pair of images captured by dual RS cameras with reversed RS directions for this highly challenging task. Grounded on the symmetric and complementary nature of dual reversed distortion, we develop a novel end-to-end model, IFED, to generate dual optical flow sequence through iterative learning of the velocity field during the RS time. Extensive experimental results demonstrate that IFED is superior to naive cascade schemes, as well as the state-of-the-art which utilizes adjacent RS images. Most importantly, although it is trained on a synthetic dataset, IFED is shown to be effective at retrieving GS frame sequences from real-world RS distorted images of dynamic scenes. Code is available at https://github.com/zzh-tech/Dual-Reversed-RS. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 285,120 |
2212.06064 | Reinforcement Learning Applied to Trading Systems: A Survey | Financial domain tasks, such as trading in market exchanges, are challenging and have long attracted researchers. The recent achievements and the consequent notoriety of Reinforcement Learning (RL) have also increased its adoption in trading tasks. RL uses a framework with well-established formal concepts, which raises its attractiveness in learning profitable trading strategies. However, using RL without due attention in the financial area can lead new researchers to stray from standards or fail to adopt relevant conceptual guidelines. In this work, we embrace the seminal RL technical fundamentals, concepts, and recommendations to perform a unified, theoretically-grounded examination and comparison of previous research that could serve as a structuring guide for the field of study. A selection of twenty-nine articles was reviewed under our classification that considers RL's most common formulations and design patterns from a large volume of available studies. This classification allowed for precise inspection of the most relevant aspects regarding data input, preprocessing, state and action composition, adopted RL techniques, evaluation setups, and overall results. Our analysis approach organized around fundamental RL concepts allowed for a clear identification of current system design best practices, gaps that require further investigation, and promising research opportunities. Finally, this review attempts to promote the development of this field of study by facilitating researchers' commitment to standards adherence and helping them to avoid straying away from the RL constructs' firm ground. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 335,996
2303.06710 | Decision Making for Human-in-the-loop Robotic Agents via Uncertainty-Aware Reinforcement Learning | In a Human-in-the-Loop paradigm, a robotic agent is able to act mostly autonomously in solving a task, but can request help from an external expert when needed. However, knowing when to request such assistance is critical: too few requests can lead to the robot making mistakes, but too many requests can overload the expert. In this paper, we present a Reinforcement Learning based approach to this problem, where a semi-autonomous agent asks for external assistance when it has low confidence in the eventual success of the task. The confidence level is computed by estimating the variance of the return from the current state. We show that this estimate can be iteratively improved during training using a Bellman-like recursion. On discrete navigation problems with both fully- and partially-observable state information, we show that our method makes effective use of a limited budget of expert calls at run-time, despite having no access to the expert at training time. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 350,961
1909.13868 | Deep learning tools for the measurement of animal behavior in neuroscience | Recent advances in computer vision have made accurate, fast and robust measurement of animal behavior a reality. In the past years powerful tools specifically designed to aid the measurement of behavior have come to fruition. Here we discuss how capturing the postures of animals - pose estimation - has been rapidly advancing with new deep learning methods. While challenges still remain, we envision that the fast-paced development of new deep learning tools will rapidly change the landscape of realizable real-world neuroscience. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 147,539
2405.15037 | "This really lets us see the entire world:" Designing a conversational telepresence robot for homebound older adults | In this paper, we explore the design and use of conversational telepresence robots to help homebound older adults interact with the external world. An initial needfinding study (N=8) using video vignettes revealed older adults' experiential needs for robot-mediated remote experiences such as exploration, reminiscence and social participation. We then designed a prototype system to support these goals and conducted a technology probe study (N=11) to garner a deeper understanding of user preferences for remote experiences. The study revealed user interactive patterns in each desired experience, highlighting the need for robot guidance and social engagement with the robot and the remote bystanders. Our work identifies a novel design space where conversational telepresence robots can be used to foster meaningful interactions in the remote physical environment. We offer design insights into the robot's proactive role in providing guidance and using dialogue to create personalized, contextualized and meaningful experiences. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 456,711
1709.02707 | Learning Populations of Parameters | Consider the following estimation problem: there are $n$ entities, each with an unknown parameter $p_i \in [0,1]$, and we observe $n$ independent random variables, $X_1,\ldots,X_n$, with $X_i \sim $ Binomial$(t, p_i)$. How accurately can one recover the "histogram" (i.e. cumulative density function) of the $p_i$'s? While the empirical estimates would recover the histogram to earth mover distance $\Theta(\frac{1}{\sqrt{t}})$ (equivalently, $\ell_1$ distance between the CDFs), we show that, provided $n$ is sufficiently large, we can achieve error $O(\frac{1}{t})$ which is information theoretically optimal. We also extend our results to the multi-dimensional parameter case, capturing settings where each member of the population has multiple associated parameters. Beyond the theoretical results, we demonstrate that the recovery algorithm performs well in practice on a variety of datasets, providing illuminating insights into several domains, including politics, sports analytics, and variation in the gender ratio of offspring. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 80,321 |
1602.01716 | Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization | We develop algorithms that find and track the optimal solution trajectory of time-varying convex optimization problems which consist of local and network-related objectives. The algorithms are derived from the prediction-correction methodology, which corresponds to a strategy where the time-varying problem is sampled at discrete time instances and then a sequence is generated via alternatively executing predictions on how the optimizers at the next time sample are changing and corrections on how they actually have changed. Prediction is based on how the optimality conditions evolve in time, while correction is based on a gradient or Newton method, leading to Decentralized Prediction-Correction Gradient (DPC-G) and Decentralized Prediction-Correction Newton (DPC-N). We extend these methods to cases where the knowledge on how the optimization programs are changing in time is only approximate and propose Decentralized Approximate Prediction-Correction Gradient (DAPC-G) and Decentralized Approximate Prediction-Correction Newton (DAPC-N). Convergence properties of all the proposed methods are studied and empirical performance is shown on an application of a resource allocation problem in a wireless network. We observe that the proposed methods outperform existing running algorithms by orders of magnitude. The numerical results showcase a trade-off between convergence accuracy, sampling period, and network communications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 51,741
2101.09647 | Does Dialog Length matter for Next Response Selection task? An Empirical Study | In the last few years, the release of BERT, a multilingual transformer based model, has taken the NLP community by storm. BERT-based models have achieved state-of-the-art results on various NLP tasks, including dialog tasks. One of the limitations of BERT is the lack of ability to handle long text sequences. By default, BERT has a maximum wordpiece token sequence length of 512. Recently, there has been renewed interest to tackle the BERT limitation to handle long text sequences with the addition of new self-attention based architectures. However, there has been little to no research on the impact of this limitation with respect to dialog tasks. Dialog tasks are inherently different from other NLP tasks due to: a) the presence of multiple utterances from multiple speakers, which may be interlinked to each other across different turns and b) longer length of dialogs. In this work, we empirically evaluate the impact of dialog length on the performance of BERT model for the Next Response Selection dialog task on four publicly available and one internal multi-turn dialog datasets. We observe that there is little impact on performance with long dialogs and even the simplest approach of truncating input works really well. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 216,669
2406.06487 | When is Multicalibration Post-Processing Necessary? | Calibration is a well-studied property of predictors which guarantees meaningful uncertainty estimates. Multicalibration is a related notion -- originating in algorithmic fairness -- which requires predictors to be simultaneously calibrated over a potentially complex and overlapping collection of protected subpopulations (such as groups defined by ethnicity, race, or income). We conduct the first comprehensive study evaluating the usefulness of multicalibration post-processing across a broad set of tabular, image, and language datasets for models spanning from simple decision trees to 90 million parameter fine-tuned LLMs. Our findings can be summarized as follows: (1) models which are calibrated out of the box tend to be relatively multicalibrated without any additional post-processing; (2) multicalibration post-processing can help inherently uncalibrated models and large vision and language models; and (3) traditional calibration measures may sometimes provide multicalibration implicitly. More generally, we also distill many independent observations which may be useful for practical and effective applications of multicalibration post-processing in real-world contexts. We also release a python package implementing multicalibration algorithms, available via `pip install multicalibration'. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 462,603 |
1904.08513 | MorphIC: A 65-nm 738k-Synapse/mm$^2$ Quad-Core Binary-Weight Digital Neuromorphic Processor with Stochastic Spike-Driven Online Learning | Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to have a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient spiking neural networks still requires the development of specific techniques in order to leverage on-chip online learning on binary weights without compromising the synapse density. In this work, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses for an active silicon area of 2.86mm$^2$ in 65nm CMOS, achieving a high density of 738k synapses/mm$^2$. MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously-proposed SNNs, while having no penalty in the energy-accuracy tradeoff. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 128,093
2405.01060 | A text-based, generative deep learning model for soil reflectance spectrum simulation in the VIS-NIR (400-2499 nm) bands | Simulating soil reflectance spectra is invaluable for soil-plant radiative modeling and training machine learning models, yet it is difficult due to the intricate relationships between soil structure and its constituents. To address this, a fully data-driven soil optics generative model (SOGM) for simulation of soil reflectance spectra based on soil property inputs was developed. The model is trained on an extensive dataset comprising nearly 180,000 soil spectra-property pairs from 17 datasets. It generates soil reflectance spectra from text-based inputs describing soil properties and their values rather than only numerical values and labels in binary vector format. The generative model can simulate output spectra based on an incomplete set of input properties. SOGM is based on the denoising diffusion probabilistic model (DDPM). Two additional sub-models were also built to complement the SOGM: a spectral padding model that can fill in the gaps for spectra shorter than the full visible-near-infrared range (VIS-NIR; 400 to 2499 nm), and a wet soil spectra model that can estimate the effects of water content on soil reflectance spectra given the dry spectrum predicted by the SOGM. The SOGM was up-scaled by coupling with the Helios 3D plant modeling software, which allowed for generation of synthetic aerial images of simulated soil and plant scenes. It can also be easily integrated with soil-plant radiation models used for remote sensing research, such as PROSAIL. The testing results of the SOGM on new datasets not included in model training proved that the model can generate reasonable soil reflectance spectra based on available property inputs. The presented models are openly accessible at: https://github.com/GEMINI-Breeding/SOGM_soil_spectra_simulation. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 451,202
1706.09925 | Harmonic State Space Modeling of a Three-Phase Modular Multilevel Converter | This paper presents the harmonic state space (HSS) modeling of a three-phase modular multilevel converter (MMC). MMC is a converter system with a typical multi-frequency response due to its significant harmonics in the arm currents, capacitor voltages, and control signals. These internal harmonic dynamics can have a great influence on the operation characteristics of MMC. However, the conventional modeling methods commonly used in two-level voltage-source converters (VSCs), where only the fundamental-frequency dynamic is considered, will lead to an inaccurate model that cannot accurately reflect the real dynamic characteristics of MMC. Therefore, the HSS modeling method, in which harmonics of state variables, inputs, and outputs are posed separately in a state-space form, is introduced in this paper to model the MMC in order to capture all the harmonics and the frequency couplings. The steady-state and small-signal dynamic HSS models of a three-phase MMC are developed, respectively. The validity of the developed HSS model of a three-phase MMC has been verified by the results from both the nonlinear time domain simulation model in MATLAB/Simulink and the laboratory prototype with 12 submodules per arm. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 76,223
2203.05968 | Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction | Objective. Dual-energy computed tomography (DECT) has the potential to improve contrast, reduce artifacts and the ability to perform material decomposition in advanced imaging applications. The increased number of measurements results in a higher radiation dose, and it is therefore essential to reduce either the number of projections per energy or the source X-ray intensity, but this makes tomographic reconstruction more ill-posed. Approach. We developed the multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies and we propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features obtained by pre-trained convolutional filters through the convolutional analysis operator learning (CAOL) algorithm. Main results. Extensive experiments with simulated and real computed tomography (CT) data were performed to validate the effectiveness of the proposed methods and we reported increased reconstruction accuracy compared to CAOL and iterative methods with single and joint total-variation (TV) regularization. Significance. Qualitative and quantitative results on sparse-views and low-dose DECT demonstrate that the proposed MCAOL method outperforms both CAOL applied on each energy independently and several existing state-of-the-art model-based iterative reconstruction (MBIR) techniques, thus paving the way for dose reduction. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 284,977
2501.12025 | Low-Cost 3D printed, Biocompatible Ionic Polymer Membranes for Soft Actuators | Ionic polymer actuators, in essence, consist of ion exchange polymers sandwiched between layers of electrodes. They have recently gained recognition as promising candidates for soft actuators due to their lightweight nature, noise-free operation, and low-driving voltages. However, the materials traditionally utilized to develop them are often not human/environmentally friendly. Thus, to address this issue, researchers have been focusing on developing biocompatible versions of this actuator. Despite this, such actuators still face challenges in achieving high performance, in payload capacity, bending capabilities, and response time. In this paper, we present a biocompatible ionic polymer actuator whose membrane is fully 3D printed utilizing a direct ink writing method. The structure of the printed membranes consists of biodegradable ionic fluid encapsulated within layers of activated carbon polymers. From the microscopic observations of its structure, we confirmed that the ionic polymer is well encapsulated. The actuators can achieve a bending performance of up to 124$^\circ$ (curvature of 0.82 $\text{cm}^{-1}$), which, to our knowledge, is the highest curvature attained by any bending ionic polymer actuator to date. It can operate comfortably up to a 2 Hz driving frequency and can achieve blocked forces of up to 0.76 mN. Our results showcase a promising, high-performing biocompatible ionic polymer actuator, whose membrane can be easily manufactured in a single step using a standard FDM 3D printer. This approach paves the way for creating customized designs for functional soft robotic applications, including human-interactive devices, in the near future. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 526,135
2106.08903 | GemNet: Universal Directional Graph Neural Networks for Molecules | Effectively predicting molecular interactions has the potential to accelerate molecular dynamics by multiple orders of magnitude and thus revolutionize chemical simulations. Graph neural networks (GNNs) have recently shown great successes for this task, overtaking classical methods based on fixed molecular kernels. However, they still appear very limited from a theoretical perspective, since regular GNNs cannot distinguish certain types of graphs. In this work we close this gap between theory and practice. We show that GNNs with spherical representations are indeed universal approximators for predictions that are invariant to translation, and equivariant to permutation and rotation. We then discretize such GNNs via directed edge embeddings and two-hop message passing, and incorporate multiple structural improvements to arrive at the geometric message passing neural network (GemNet). We demonstrate the benefits of the proposed changes in multiple ablation studies. GemNet outperforms previous models on the COLL, MD17, and OC20 datasets by 34%, 41%, and 20%, respectively, and performs especially well on the most challenging molecules. Our implementation is available online. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 241,473 |
2502.09779 | Automated Muscle and Fat Segmentation in Computed Tomography for Comprehensive Body Composition Analysis | Body composition assessment using CT images can potentially be used for a number of clinical applications, including the prognostication of cardiovascular outcomes, evaluation of metabolic health, monitoring of disease progression, assessment of nutritional status, prediction of treatment response in oncology, and risk stratification for surgical and critical care outcomes. While multiple groups have developed in-house segmentation tools for this analysis, there are very limited publicly available tools that could be consistently used across different applications. To mitigate this gap, we present a publicly accessible, end-to-end segmentation and feature calculation model specifically for CT body composition analysis. Our model performs segmentation of skeletal muscle, subcutaneous adipose tissue (SAT), and visceral adipose tissue (VAT) across the chest, abdomen, and pelvis area in axial CT images. It also provides various body composition metrics, including muscle density, visceral-to-subcutaneous fat (VAT/SAT) ratio, muscle area/volume, and skeletal muscle index (SMI), supporting both 2D and 3D assessments. The model is shared for public use. To evaluate the model, the segmentation was applied to both internal and external datasets, with body composition metrics analyzed across different age, sex, and race groups. The model achieved high dice coefficients on both internal and external datasets, exceeding 89% for skeletal muscle, SAT, and VAT segmentation. The model outperforms the benchmark by 2.40% on skeletal muscle and 10.26% on SAT compared to the manual annotations given by the publicly available dataset. Body composition metrics show mean relative absolute errors (MRAEs) under 10% for all measures. Furthermore, the model provided muscular fat segmentation with a Dice coefficient of 56.27%, which can be utilized for additional analyses as needed. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 533,593
1912.11171 | Geometry-Aware Generation of Adversarial Point Clouds | Machine learning models have been shown to be vulnerable to adversarial examples. While most of the existing methods for adversarial attack and defense work on the 2D image domain, a few recent attempts have been made to extend them to 3D point cloud data. However, adversarial results obtained by these methods typically contain point outliers, which are both noticeable and easy to defend against using the simple techniques of outlier removal. Motivated by the different mechanisms by which humans perceive 2D images and 3D shapes, in this paper we propose the new design of \emph{geometry-aware objectives}, whose solutions favor (the discrete versions of) the desired surface properties of smoothness and fairness. To generate adversarial point clouds, we use a targeted attack misclassification loss that supports continuous pursuit of increasingly malicious signals. Regularizing the targeted attack loss with our proposed geometry-aware objectives results in our proposed method, Geometry-Aware Adversarial Attack ($GeoA^3$). The results of $GeoA^3$ tend to be more harmful, arguably harder to defend against, and of the key adversarial characterization of being imperceptible to humans. While the main focus of this paper is to learn to generate adversarial point clouds, we also present a simple but effective algorithm termed $Geo_{+}A^3$-IterNormPro, with Iterative Normal Projection (IterNorPro) that solves a new objective function $Geo_{+}A^3$, towards surface-level adversarial attacks via generation of adversarial point clouds. We quantitatively evaluate our methods on both synthetic and physical objects in terms of attack success rate and geometric regularity. For a qualitative evaluation, we conduct subjective studies by collecting human preferences from Amazon Mechanical Turk. Comparative results in comprehensive experiments confirm the advantages of our proposed methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 158,495
2302.14678 | Graph Reinforcement Learning for Operator Selection in the ALNS Metaheuristic | ALNS is a popular metaheuristic with renowned efficiency in solving combinatorial optimisation problems. However, despite 16 years of intensive research into ALNS, whether the embedded adaptive layer can efficiently select operators to improve the incumbent remains an open question. In this work, we formulate the choice of operators as a Markov Decision Process, and propose a practical approach based on Deep Reinforcement Learning and Graph Neural Networks. The results show that our proposed method achieves better performance than the classic ALNS adaptive layer due to the choice of operator being conditioned on the current solution. We also discuss important considerations such as the size of the operator portfolio and the impact of the choice of operator scales. Notably, our approach can also save significant time and labour costs for handcrafting problem-specific operator portfolios. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 348,388
2106.04975 | The dilemma of quantum neural networks | The core of quantum machine learning is to devise quantum models with better trainability and a lower generalization error bound than their classical counterparts to ensure better reliability and interpretability. Recent studies confirmed that quantum neural networks (QNNs) have the ability to achieve this goal on specific datasets. In this regard, it is of great importance to understand whether these advantages are still preserved on real-world tasks. Through systematic numerical experiments, we empirically observe that current QNNs fail to provide any benefit over classical learning models. Concretely, our results deliver two key messages. First, QNNs suffer from the severely limited effective model capacity, which incurs poor generalization on real-world datasets. Second, the trainability of QNNs is insensitive to regularization techniques, which sharply contrasts with the classical scenario. These empirical results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 239,926
2502.14662 | InstructAgent: Building User Controllable Recommender via LLM Agent | Traditional recommender systems usually take the user-platform paradigm, where users are directly exposed under the control of the platform's recommendation algorithms. However, the defects of recommendation algorithms may put users in very vulnerable positions under this paradigm. First, many sophisticated models are often designed with commercial objectives in mind, focusing on the platform's benefits, which may hinder their ability to protect and capture users' true interests. Second, these models are typically optimized using data from all users, which may overlook individual users' preferences. Due to these shortcomings, users may experience several disadvantages under the traditional user-platform direct exposure paradigm, such as lack of control over the recommender system, potential manipulation by the platform, echo chamber effects, or lack of personalization for less active users due to the dominance of active users during collaborative learning. Therefore, there is an urgent need to develop a new paradigm to protect user interests and alleviate these issues. While some researchers have recently introduced LLM agents to simulate user behaviors, these approaches primarily aim to optimize platform-side performance, leaving core issues in recommender systems unresolved. To address these limitations, we propose a new user-agent-platform paradigm, where the agent serves as a protective shield between the user and the recommender system that enables indirect exposure. To this end, we first construct four recommendation datasets, denoted as $\dataset$, along with user instructions for each record. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 535,917
2410.03894 | A Machine Learning-Based Reference Governor for Nonlinear Systems With Application to Automotive Fuel Cells | The prediction-based nonlinear reference governor (PRG) is an add-on algorithm to enforce constraints on pre-stabilized nonlinear systems by modifying, whenever necessary, the reference signal. The implementation of PRG carries a heavy computational burden, as it may require multiple numerical simulations of the plant model at each sample time. To address this, this paper proposes an alternative approach based on machine learning, where we first use a regression neural network (NN) to approximate the input-output map of the PRG from a set of training data. During real-time operation, at each sample time, we use the trained NN to compute a nominal reference command, which may not be constraint admissible due to training errors and limited data. We adopt a novel sensitivity-based approach to minimally adjust the nominal reference while ensuring constraint enforcement. We thus refer to the resulting control strategy as the modified neural network reference governor (MNN-RG), which is significantly more computationally efficient than the PRG. The computational and theoretical properties of the MNN-RG are presented. Finally, the effectiveness and limitations of the proposed method are studied by applying it as a load governor for constraint management in automotive fuel cell systems through simulation-based case studies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 495,029
cs/0610025 | Low Correlation Sequences over the QAM Constellation | This paper presents the first concerted look at low correlation sequence families over QAM constellations of size M^2=4^m and their potential applicability as spreading sequences in a CDMA setting. Five constructions are presented, and it is shown how such sequence families have the ability to transport a larger amount of data as well as enable variable-rate signalling on the reverse link. Canonical family CQ has period N, normalized maximum-correlation parameter theta_max bounded above by A sqrt(N), where 'A' ranges from 1.8 in the 16-QAM case to 3.0 for large M. In a CDMA setting, each user is enabled to transfer 2m bits of data per period of the spreading sequence which can be increased to 3m bits of data by halving the size of the sequence family. The technique used to construct CQ is easily extended to produce larger sequence families and an example is provided. Selected family SQ has a lower value of theta_max but permits only (m+1)-bit data modulation. The interleaved 16-QAM sequence family IQ has theta_max <= sqrt(2) sqrt(N) and supports 3-bit data modulation. The remaining two families are over a quadrature-PAM (Q-PAM) subset of size 2M of the M^2-QAM constellation. Family P has a lower value of theta_max in comparison with Family SQ, while still permitting (m+1)-bit data modulation. Interleaved family IP, over the 8-ary Q-PAM constellation, permits 3-bit data modulation and interestingly, achieves the Welch lower bound on theta_max. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 539,764 |
2411.09441 | A ROS 2-based Navigation and Simulation Stack for the Robotino | The Robotino, developed by Festo Didactic, serves as a versatile platform in education and research for mobile robotics tasks. However, there is currently no ROS 2 integration available for the Robotino. In this paper, we describe our work on a Webots simulation environment for a Robotino platform extended by LIDAR sensors. A ROS 2 integration and a pre-configured setup for localization and navigation using existing ROS packages from the Nav2 suite are provided. We validate our setup by comparing simulations with real-world experiments conducted by three Robotinos in a logistics environment in our lab. Additionally, we tested the setup using a ROS 2 hardware driver for the Robotino developed by team GRIPS of the RoboCup Logistics League. The results demonstrate the feasibility of using ROS 2 and Nav2 for navigation tasks on the Robotino platform, showing great consistency between simulation and real-world performance. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 508,250
2305.11195 | DClEVerNet: Deep Combinatorial Learning for Efficient EV Charging Scheduling in Large-scale Networked Facilities | With the electrification of transportation, the rising uptake of electric vehicles (EVs) might stress distribution networks significantly, leaving their performance degraded and stability jeopardized. To accommodate these new loads cost-effectively, modern power grids require coordinated or ``smart'' charging strategies capable of optimizing EV charging scheduling in a scalable and efficient fashion. With this in view, the present work focuses on reservation management programs for large-scale, networked EV charging stations. We formulate a time-coupled binary optimization problem that maximizes EV users' total welfare gain while accounting for the network's available power capacity and stations' occupancy limits. To tackle the problem at scale while retaining high solution quality, a data-driven optimization framework combining techniques from the fields of Deep Learning and Approximation Algorithms is introduced. The framework's key ingredient is a novel input-output processing scheme for neural networks that allows direct extrapolation to problem sizes substantially larger than those included in the training set. Extensive numerical simulations based on synthetic and real-world data traces verify the effectiveness and superiority of the presented approach over two representative scheduling algorithms. Lastly, we round up the contributions by listing several immediate extensions to the proposed framework and outlining the prospects for further exploration. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 365,428
1912.02233 | Large-Scale Semi-Supervised Learning via Graph Structure Learning over High-Dense Points | We focus on developing a novel scalable graph-based semi-supervised learning (SSL) method for a small number of labeled data and a large amount of unlabeled data. Due to the lack of labeled data and the availability of large-scale unlabeled data, existing SSL methods usually encounter either suboptimal performance because of an improper graph or the high computational complexity of the large-scale optimization problem. In this paper, we propose to address both challenging problems by constructing a proper graph for graph-based SSL methods. Different from existing approaches, we simultaneously learn a small set of vertexes to characterize the high-dense regions of the input data and a graph to depict the relationships among these vertexes. A novel approach is then proposed to construct the graph of the input data from the learned graph of a small number of vertexes with some preferred properties. Without explicitly calculating the constructed graph of inputs, two transductive graph-based SSL approaches are presented with computational complexity linear in the number of input data. Extensive experiments on synthetic data and real datasets of varied sizes demonstrate that the proposed method is not only scalable for large-scale data, but also achieves good classification performance, especially for an extremely small number of labels. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 156,287
1809.02188 | Differentially Private Bayesian Inference for Exponential Families | The study of private inference has been sparked by growing concern regarding the analysis of data when it stems from sensitive sources. We present the first method for private Bayesian inference in exponential families that properly accounts for noise introduced by the privacy mechanism. It is efficient because it works only with sufficient statistics and not individual data. Unlike other methods, it gives properly calibrated posterior beliefs in the non-asymptotic data regime. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 106,983 |
2109.11087 | BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles | A riddle is a question or statement with double or veiled meanings, followed by an unexpected answer. Solving riddles is a challenging task for both machines and humans, testing the capability of understanding figurative, creative natural language and reasoning with commonsense knowledge. We introduce BiRdQA, a bilingual multiple-choice question answering dataset with 6614 English riddles and 8751 Chinese riddles. For each riddle-answer pair, we provide four distractors with additional information from Wikipedia. The distractors are automatically generated at scale with minimal bias. Existing monolingual and multilingual QA models fail to perform well on our dataset, indicating that there is a long way to go before machines can beat humans at solving tricky riddles. The dataset has been released to the community. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 256,841
2306.10091 | Acoustic Identification of Ae. aegypti Mosquitoes using Smartphone Apps and Residual Convolutional Neural Networks | In this paper, we advocate in favor of smartphone apps as a low-cost, easy-to-deploy solution for raising awareness among the population on the proliferation of Aedes aegypti mosquitoes. Nevertheless, devising such a smartphone app is challenging for many reasons, including the required maturity level of techniques for identifying mosquitoes based on features that can be captured using smartphone resources. In this paper, we identify a set of (non-exhaustive) requirements that smartphone apps must meet to become an effective tool in the fight against Ae. aegypti, and advance the state-of-the-art with (i) a residual convolutional neural network for classifying Ae. aegypti mosquitoes from their wingbeat sound, (ii) a methodology for reducing the influence of background noise in the classification process, and (iii) a dataset for benchmarking solutions for detecting Ae. aegypti mosquitoes from wingbeat sound recordings. From the analysis of accuracy and recall, we provide evidence that convolutional neural networks have potential as a cornerstone for mosquito-tracking apps for smartphones. | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 374,104
2311.11668 | AIaaS for ORAN-based 6G Networks: Multi-time Scale Slice Resource Management with DRL | This paper addresses how to handle slice resources for 6G networks at different time scales in an architecture based on an open radio access network (ORAN). The proposed solution includes artificial intelligence (AI) at the edge of the network and applies two control-level loops to obtain optimal performance compared to other techniques. The ORAN facilitates programmable network architectures to support such multi-time scale management using AI approaches. The proposed algorithms analyze the maximum utilization of resources from slice performance to take decisions at the inter-slice level. Inter-slice intelligent agents work at a non-real-time level to reconfigure resources within various slices. Beyond meeting the slice requirements, the intra-slice objective must also include the minimization of maximum resource utilization. This enables smart utilization of the resources within each slice without affecting slice performance. Here, each xApp that is an intra-slice agent aims at meeting the optimal quality of service (QoS) of the users, but at the same time, some inter-slice objectives should be included to coordinate intra- and inter-slice agents. This is done without penalizing the main intra-slice objective. All intelligent agents use deep reinforcement learning (DRL) algorithms to meet their objectives. We present results for the enhanced mobile broadband (eMBB), ultra-reliable low latency (URLLC), and massive machine type communication (mMTC) slice categories. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 409,037
2203.06108 | Active Token Mixer | The three existing dominant network families, i.e., CNNs, Transformers, and MLPs, differ from each other mainly in the ways of fusing spatial contextual information, leaving designing more effective token-mixing mechanisms at the core of backbone architecture development. In this work, we propose an innovative token-mixer, dubbed Active Token Mixer (ATM), to actively incorporate flexible contextual information distributed across different channels from other tokens into the given query token. This fundamental operator actively predicts where to capture useful contexts and learns how to fuse the captured contexts with the query token at channel level. In this way, the spatial range of token-mixing can be expanded to a global scope with limited computational complexity, where the way of token-mixing is reformed. We take ATM as the primary operator and assemble ATMs into a cascade architecture, dubbed ATMNet. Extensive experiments demonstrate that ATMNet is generally applicable and comprehensively surpasses different families of SOTA vision backbones by a clear margin on a broad range of vision tasks, including visual recognition and dense prediction tasks. Code is available at https://github.com/microsoft/ActiveMLP. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 285,014 |
2006.14512 | Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability | Knowledge transferability, or transfer learning, has been widely adopted to allow a pre-trained model in the source domain to be effectively adapted to downstream tasks in the target domain. It is thus important to explore and understand the factors affecting knowledge transferability. In this paper, as the first work, we analyze and demonstrate the connections between knowledge transferability and another important phenomenon--adversarial transferability, \emph{i.e.}, adversarial examples generated against one model can be transferred to attack other models. Our theoretical studies show that adversarial transferability indicates knowledge transferability and vice versa. Moreover, based on the theoretical insights, we propose two practical adversarial transferability metrics to characterize this process, serving as bidirectional indicators between adversarial and knowledge transferability. We conduct extensive experiments for different scenarios on diverse datasets, showing a positive correlation between adversarial transferability and knowledge transferability. Our findings will shed light on future research about effective knowledge transfer learning and adversarial transferability analyses. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 184,244
1708.09585 | ICDAR2017 Competition on Reading Chinese Text in the Wild (RCTW-17) | Chinese is the most widely used language in the world. Algorithms that read Chinese text in natural images facilitate applications of various kinds. Despite the large potential value, datasets and competitions in the past primarily focused on English, which bears very different characteristics from Chinese. This report introduces RCTW, a new competition that focuses on Chinese text reading. The competition features a large-scale dataset with 12,263 annotated images. Two tasks, namely text localization and end-to-end recognition, are set up. The competition took place from January 20 to May 31, 2017. 23 valid submissions were received from 19 teams. This report includes dataset description, task definitions, evaluation protocols, and results summaries and analysis. Through this competition, we call for more future research on the Chinese text reading problem. The official website for the competition is http://rctw.vlrlab.net | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 79,807
1509.02861 | Preconditioning for continuation model predictive control | Model predictive control (MPC) anticipates future events to take appropriate control actions. Nonlinear MPC (NMPC) deals with nonlinear models and/or constraints. A Continuation/GMRES Method for NMPC, suggested by T. Ohtsuka in 2004, uses the GMRES iterative algorithm to solve a forward difference approximation $Ax=b$ of the original NMPC equations on every time step. We have previously proposed accelerating the GMRES and MINRES convergence by preconditioning the coefficient matrix $A$. We now suggest simplifying the construction of the preconditioner, by approximately solving a forward recursion for the state and a backward recursion for the costate, or simply reusing previously computed solutions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 46,774 |
1911.11952 | Label Dependent Deep Variational Paraphrase Generation | Generating paraphrases that are lexically similar but semantically different is a challenging task. Paraphrases of this form can be used to augment data sets for various NLP tasks such as machine reading comprehension and question answering with non-trivial negative examples. In this article, we propose a deep variational model to generate paraphrases conditioned on a label that specifies whether the paraphrases are semantically related or not. We also present new training recipes and KL regularization techniques that improve the performance of variational paraphrasing models. Our proposed model demonstrates promising results in enhancing the generative power of the model by employing label-dependent generation on paraphrasing datasets. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 155,273 |
1609.06420 | Asymptotically Optimal Regenerating Codes Over Any Field | The study of regenerating codes has advanced tremendously in recent years. However, most known constructions require large field size, and hence may be hard to implement in practice. By using notions from the theory of extension fields, we obtain two explicit constructions of regenerating codes. These codes approach the cut-set bound as the reconstruction degree increases, and may be realized over any given field if the file size is large enough. Since distributed storage systems are the main purpose of regenerating codes, this file size restriction is trivially satisfied in most conceivable scenarios. The first construction attains the cut-set bound at the MBR point asymptotically for all parameters, whereas the second one attains the cut-set bound at the MSR point asymptotically for low-rate parameters. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 61,287 |
1908.04389 | NeuroMask: Explaining Predictions of Deep Neural Networks through Mask Learning | Deep Neural Networks (DNNs) deliver state-of-the-art performance in many image recognition and understanding applications. However, despite their outstanding performance, these models are black-boxes and it is hard to understand how they make their decisions. Over the past few years, researchers have studied the problem of providing explanations of why DNNs predicted their results. However, existing techniques are either obtrusive, requiring changes in model training, or suffer from low output quality. In this paper, we present a novel method, NeuroMask, for generating an interpretable explanation of classification model results. When applied to image classification models, NeuroMask identifies the image parts that are most important to classifier results by applying a mask that hides/reveals different parts of the image, before feeding it back into the model. The mask values are tuned by minimizing a properly designed cost function that preserves the classification result and encourages producing an interpretable mask. Experiments using state-of-the-art Convolutional Neural Networks for image recognition on different datasets (CIFAR-10 and ImageNet) show that NeuroMask successfully localizes the parts of the input image which are most relevant to the DNN decision. By showing a visual quality comparison between NeuroMask explanations and those of other methods, we find NeuroMask to be both accurate and interpretable. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 141,466
2312.08901 | Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning | Large Language Models (LLMs) have shown impressive capabilities, yet they still struggle with math reasoning. In this work, we propose CoT-Influx, a novel approach that pushes the boundary of few-shot Chain-of-Thoughts (CoT) learning to improve LLM mathematical reasoning. Motivated by the observation that adding more concise CoT examples in the prompt can improve LLM reasoning performance, CoT-Influx employs a coarse-to-fine pruner to maximize the input of effective and concise CoT examples. The pruner first selects as many crucial CoT examples as possible and then prunes unimportant tokens to fit the context window. A math reasoning dataset with diverse difficulty levels and reasoning steps is used to train the pruner, along with a math-specialized reinforcement learning approach. As a result, by enabling more CoT examples with double the context window size in tokens, CoT-Influx significantly outperforms various prompting baselines across various LLMs (LLaMA2-7B, 13B, 70B) and 5 math datasets, achieving up to 4.55% absolute improvements. Remarkably, without any fine-tuning, LLaMA2-70B with CoT-Influx surpasses GPT-3.5 and a wide range of larger LLMs (PaLM, Minerva 540B, etc.) on the GSM8K. CoT-Influx serves as a plug-and-play module for LLMs and is compatible with most existing reasoning prompting techniques, such as self-consistency and self-verification. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 415,512 |
2502.01521 | Toward Task Generalization via Memory Augmentation in Meta-Reinforcement Learning | In reinforcement learning (RL), agents often struggle to perform well on tasks that differ from those encountered during training. This limitation presents a challenge to the broader deployment of RL in diverse and dynamic task settings. In this work, we introduce memory augmentation, a memory-based RL approach to improve task generalization. Our approach leverages task-structured augmentations to simulate plausible out-of-distribution scenarios and incorporates memory mechanisms to enable context-aware policy adaptation. Trained on a predefined set of tasks, our policy demonstrates the ability to generalize to unseen tasks through memory augmentation without requiring additional interactions with the environment. Through extensive simulation experiments and real-world hardware evaluations on legged locomotion tasks, we demonstrate that our approach achieves zero-shot generalization to unseen tasks while maintaining robust in-distribution performance and high sample efficiency. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 529,891
1806.00890 | Soccer on Your Tabletop | We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 99,424 |
2307.13581 | Comparing Forward and Inverse Design Paradigms: A Case Study on Refractory High-Entropy Alloys | The rapid design of advanced materials is a topic of great scientific interest. The conventional, ``forward'' paradigm of materials design involves evaluating multiple candidates to determine the best candidate that matches the target properties. However, recent advances in the field of deep learning have given rise to the possibility of an ``inverse'' design paradigm for advanced materials, wherein a model provided with the target properties is able to find the best candidate. Being a relatively new concept, there remains a need to systematically evaluate how these two paradigms perform in practical applications. Therefore, the objective of this study is to directly, quantitatively compare the forward and inverse design modeling paradigms. We do so by considering two case studies of refractory high-entropy alloy design with different objectives and constraints, and comparing the inverse design method to other forward schemes like localized forward search, high-throughput screening, and multi-objective optimization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 381,635
2205.04315 | Integrating Social Media into the Design Process | Social media captures examples of people's behaviors, actions, beliefs, and sentiments. As a result, it can be a valuable source of information and inspiration for HCI research and design. Social media technologies can improve, inform, and strengthen insights to better understand and represent user populations. To understand the position of social media research and analysis in the design process, this paper seeks to highlight shortcomings of using traditional research methods (e.g., interviews, focus groups) that ignore or don't adequately reflect relevant social media, and this paper speculates about the importance and benefits of leveraging social media for establishing context in supplement with these methods. We present examples that guide our thinking and introduce discussion around concerns related to using social media data. | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 295,602 |
2306.06268 | Attention-stacked Generative Adversarial Network (AS-GAN)-empowered Sensor Data Augmentation for Online Monitoring of Manufacturing System | Machine learning (ML) has been extensively adopted for online sensing-based monitoring in advanced manufacturing systems. However, the sensor data collected under abnormal states are usually insufficient, leading to a significant data imbalance issue for supervised machine learning. A common solution is to incorporate data augmentation techniques, i.e., augmenting the available abnormal-state data (i.e., minority samples) via synthetic generation. To generate high-quality minority samples, it is vital to learn the underlying distribution of the abnormal-state data. In recent years, generative adversarial network (GAN)-based approaches have become popular for learning data distributions as well as performing data augmentation. However, in practice, the quality of generated samples from GAN-based data augmentation may vary drastically. In addition, the sensor signals are collected sequentially over time from manufacturing systems, which means sequential information is also very important in data augmentation. To address these limitations, inspired by the multi-head attention mechanism, this paper proposes an attention-stacked GAN (AS-GAN) architecture for sensor data augmentation for online monitoring in manufacturing systems. It incorporates a new attention-stacked framework to strengthen the generator in GAN with the capability of capturing sequential information, which greatly helps to improve the quality of the generated sensor signals. Afterwards, the generated high-quality sensor signals for abnormal states can be applied to train classifiers more accurately, further improving the online monitoring performance of manufacturing systems. The case study conducted in additive manufacturing also successfully validated the effectiveness of the proposed AS-GAN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 372,538
2411.11935 | Calibrated and Efficient Sampling-Free Confidence Estimation for LiDAR Scene Semantic Segmentation | Reliable deep learning models require not only accurate predictions but also well-calibrated confidence estimates to ensure dependable uncertainty estimation. This is crucial in safety-critical applications like autonomous driving, which depend on rapid and precise semantic segmentation of LiDAR point clouds for real-time 3D scene understanding. In this work, we introduce a sampling-free approach for estimating well-calibrated confidence values for classification tasks, achieving alignment with true classification accuracy and significantly reducing inference time compared to sampling-based methods. Our evaluation using the Adaptive Calibration Error (ACE) metric for LiDAR semantic segmentation shows that our approach maintains well-calibrated confidence values while achieving increased processing speed compared to a sampling baseline. Additionally, reliability diagrams reveal that our method produces underconfident rather than overconfident predictions, an advantage for safety-critical applications. Our sampling-free approach offers well-calibrated and time-efficient predictions for LiDAR scene semantic segmentation. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 509,243
2103.10892 | Deep Label Fusion: A 3D End-to-End Hybrid Multi-Atlas Segmentation and Deep Learning Pipeline | Deep learning (DL) is the state-of-the-art methodology in various medical image segmentation tasks. However, it requires relatively large amounts of manually labeled training data, which may be infeasible to generate in some applications. In addition, DL methods have relatively poor generalizability to out-of-sample data. Multi-atlas segmentation (MAS), on the other hand, has promising performance using limited amounts of training data and good generalizability. A hybrid method that integrates the high accuracy of DL and good generalizability of MAS is highly desired and could play an important role in segmentation problems where manually labeled data is hard to generate. Most of the prior work focuses on improving single components of MAS using DL rather than directly optimizing the final segmentation accuracy via an end-to-end pipeline. Only one study explored this idea in binary segmentation of 2D images, but it remains unknown whether it generalizes well to multi-class 3D segmentation problems. In this study, we propose a 3D end-to-end hybrid pipeline, named deep label fusion (DLF), that takes advantage of the strengths of MAS and DL. Experimental results demonstrate that DLF yields significant improvements over conventional label fusion methods and U-Net, a direct DL approach, in the context of segmenting medial temporal lobe subregions using 3T T1-weighted and T2-weighted MRI. Further, when applied to an unseen similar dataset acquired in 7T, DLF maintains its superior performance, which demonstrates its good generalizability. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 225,596
2401.00611 | A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry | Bayesian neural networks (BNNs) are a principled approach to modeling predictive uncertainties in deep learning, which are important in safety-critical applications. Since exact Bayesian inference over the weights in a BNN is intractable, various approximate inference methods exist, among which sampling methods such as Hamiltonian Monte Carlo (HMC) are often considered the gold standard. While HMC provides high-quality samples, it lacks interpretable summary statistics because its sample mean and variance are meaningless in neural networks due to permutation symmetry. In this paper, we first show that the role of permutations can be meaningfully quantified by a number-of-transpositions metric. We then show that the recently proposed rebasin method allows us to summarize HMC samples into a compact representation that provides a meaningful explicit uncertainty estimate for each weight in a neural network, thus unifying sampling methods with variational inference. We show that this compact representation allows us to compare trained BNNs directly in weight space across sampling methods and variational inference, and to efficiently prune neural networks trained without explicit Bayesian frameworks by exploiting uncertainty estimates from HMC. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 419,043
2404.04745 | Collaborative Feedback Discriminative Propagation for Video Super-Resolution | The key success of existing video super-resolution (VSR) methods stems mainly from exploring spatial and temporal information, which is usually achieved by a recurrent propagation module with an alignment module. However, inaccurate alignment usually leads to aligned features with significant artifacts, which will be accumulated during propagation and thus affect video restoration. Moreover, propagation modules only propagate the same timestep features forward or backward that may fail in case of complex motion or occlusion, limiting their performance for high-quality frame restoration. To address these issues, we propose a collaborative feedback discriminative (CFD) method to correct inaccurate aligned features and model long-range spatial and temporal information for better video reconstruction. In detail, we develop a discriminative alignment correction (DAC) method to adaptively explore information and reduce the influences of the artifacts caused by inaccurate alignment. Then, we propose a collaborative feedback propagation (CFP) module that employs feedback and gating mechanisms to better explore spatial and temporal information of different timestep features from forward and backward propagation simultaneously. Finally, we embed the proposed DAC and CFP into commonly used VSR networks to verify the effectiveness of our method. Quantitative and qualitative experiments on several benchmarks demonstrate that our method can improve the performance of existing VSR models while maintaining a lower model complexity. The source code and pre-trained models will be available at \url{https://github.com/House-Leo/CFDVSR}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 444,778
1309.5676 | Implementation of a language driven Backpropagation algorithm | Inspired by the importance of both communication and feedback on errors in human learning, our main goal was to implement a similar mechanism in supervised learning of artificial neural networks. The starting point in our study was the observation that words should accompany the input vectors included in the training set, thus extending the ANN input space. This in turn required a modified sigmoid activation function for neurons in the first hidden layer (in agreement with a specific MLP architecture), and also a modified version of the Backpropagation algorithm, which allows the use of unspecified (null) desired output components. Following the belief that basic concepts should be tested on simple examples, the previously mentioned mechanism was applied to both the XOR problem and a didactic color case study. In this context, we noticed the interesting fact that the ANN was capable of categorizing all desired input vectors in the absence of their corresponding words, even though the training set included only word-accompanied inputs, in both positive and negative examples. Further analysis, along with applying this approach to more complex scenarios, is currently in progress, as we consider the proposed language-driven algorithm might contribute to a better understanding of learning in humans, opening as well the possibility to create a specific category of artificial neural networks, with abstraction capabilities. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 27,189
2412.06786 | Retrieving Semantics from the Deep: an RAG Solution for Gesture Synthesis | Non-verbal communication often comprises semantically rich gestures that help convey the meaning of an utterance. Producing such semantic co-speech gestures has been a major challenge for the existing neural systems that can generate rhythmic beat gestures, but struggle to produce semantically meaningful gestures. Therefore, we present RAG-Gesture, a diffusion-based gesture generation approach that leverages Retrieval Augmented Generation (RAG) to produce natural-looking and semantically rich gestures. Our neuro-explicit gesture generation approach is designed to produce semantic gestures grounded in interpretable linguistic knowledge. We achieve this by using explicit domain knowledge to retrieve exemplar motions from a database of co-speech gestures. Once retrieved, we then inject these semantic exemplar gestures into our diffusion-based gesture generation pipeline using DDIM inversion and retrieval guidance at inference time without any need for training. Further, we propose a control paradigm for guidance, which allows the users to modulate the amount of influence each retrieval insertion has over the generated sequence. Our comparative evaluations demonstrate the validity of our approach against recent gesture generation approaches. The reader is urged to explore the results on our project page. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 515,377
2410.14789 | Differentially Private Covariate Balancing Causal Inference | Differential privacy is the leading mathematical framework for privacy protection, providing a probabilistic guarantee that safeguards individuals' private information when publishing statistics from a dataset. This guarantee is achieved by applying a randomized algorithm to the original data, which introduces unique challenges in data analysis by distorting inherent patterns. In particular, causal inference using observational data in privacy-sensitive contexts is challenging because it requires covariate balance between treatment groups, yet checking the true covariates is prohibited to prevent leakage of sensitive information. In this article, we present a differentially private two-stage covariate balancing weighting estimator to infer causal effects from observational data. Our algorithm produces both point and interval estimators with statistical guarantees, such as consistency and rate optimality, under a given privacy budget. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 500,212 |
2007.00897 | Deep brain state classification of MEG data | Neuroimaging techniques have been shown to be useful for studying the brain's activity. This paper uses Magnetoencephalography (MEG) data, provided by the Human Connectome Project (HCP), in combination with various deep artificial neural network models to perform brain decoding. More specifically, here we investigate to what extent we can infer the task performed by a subject based on its MEG data. Three models based on compact convolution, combined convolutional and long short-term architecture as well as a model based on multi-view learning that aims at fusing the outputs of the two stream networks are proposed and examined. These models exploit the spatio-temporal MEG data for learning new representations that are used to decode the relevant tasks across subjects. In order to realize the most relevant features of the input signals, two attention mechanisms, i.e. self and global attention, are incorporated in all the models. The experimental results of cross subject multi-class classification on the studied MEG dataset show that the inclusion of attention improves the generalization of the models across subjects. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 185,261
2410.06287 | Non-Halting Queries: Exploiting Fixed Points in LLMs | We introduce a new vulnerability that exploits fixed points in autoregressive models and use it to craft queries that never halt, i.e. an LLM output that does not terminate. More precisely, for what we call non-halting queries, the LLM never samples the end-of-string token (<eos>). We rigorously analyze the conditions under which the non-halting anomaly presents itself. In particular, at temperature zero, we prove that if a repeating (cyclic) sequence of tokens is observed at the output beyond the context size, then the LLM does not halt. We demonstrate the non-halting anomaly in a number of experiments performed in base (unaligned) models where repeating tokens immediately lead to a non-halting cyclic behavior as predicted by the analysis. Further, we develop a simple recipe that takes the same fixed points observed in the base model and creates a prompt structure to target aligned models. We study the recipe behavior in bypassing alignment in a number of LLMs including GPT-4o, llama-3-8b-instruct, and gemma-2-9b-it where all models are forced into a non-halting state. Further, we demonstrate the recipe's success in sending most major models released over the past year into a non-halting state with the same simple prompt even at higher temperatures. Further, we study direct inversion based techniques to craft new short prompts to induce the non-halting state. Our experiments with the gradient search based inversion technique ARCA show that non-halting is prevalent across models and may be easily induced with a few input tokens. While its impact on the reliability of hosted systems can be mitigated by configuring a hard maximum token limit in the sampler, the non-halting anomaly still manages to break alignment. This underlines the need for further studies and stronger forms of alignment against non-halting anomalies. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 496,126
2402.17892 | SWTrack: Multiple Hypothesis Sliding Window 3D Multi-Object Tracking | Modern robotic systems are required to operate in dense dynamic environments, requiring highly accurate real-time track identification and estimation. For 3D multi-object tracking, recent approaches process a single measurement frame recursively with greedy association and are prone to errors in ambiguous association decisions. Our method, Sliding Window Tracker (SWTrack), yields more accurate association and state estimation by batch processing many frames of sensor data while being capable of running online in real-time. The most probable track associations are identified by evaluating all possible track hypotheses across the temporal sliding window. A novel graph optimization approach is formulated to solve the multidimensional assignment problem with lifted graph edges introduced to account for missed detections and graph sparsity enforced to retain real-time efficiency. We evaluate our SWTrack implementation on the NuScenes autonomous driving dataset to demonstrate improved tracking performance. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 433,185
2502.07202 | Monte Carlo Tree Diffusion for System 2 Planning | Diffusion models have recently emerged as a powerful tool for planning. However, unlike Monte Carlo Tree Search (MCTS), whose performance naturally improves with additional test-time computation (TTC), standard diffusion-based planners offer only limited avenues for TTC scalability. In this paper, we introduce Monte Carlo Tree Diffusion (MCTD), a novel framework that integrates the generative strength of diffusion models with the adaptive search capabilities of MCTS. Our method reconceptualizes denoising as a tree-structured process, allowing partially denoised plans to be iteratively evaluated, pruned, and refined. By selectively expanding promising trajectories while retaining the flexibility to revisit and improve suboptimal branches, MCTD achieves the benefits of MCTS such as controlling exploration-exploitation trade-offs within the diffusion framework. Empirical results on challenging long-horizon tasks show that MCTD outperforms diffusion baselines, yielding higher-quality solutions as TTC increases. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 532,481
2312.14987 | Deformable Image Registration with Stochastically Regularized Biomechanical Equilibrium | Numerous regularization methods for deformable image registration aim at enforcing smooth transformations, but are difficult to tune a priori and lack a clear physical basis. Physically inspired strategies have emerged, offering a sound theoretical basis, but still necessitating complex discretization and resolution schemes. This study introduces a regularization strategy that does not require discretization, making it compatible with current registration frameworks, while retaining the benefits of physically motivated regularization for medical image registration. The proposed method performs favorably in both synthetic and real datasets, exhibiting an accuracy comparable to current state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 417,828
2408.14249 | Beyond Few-shot Object Detection: A Detailed Survey | Object detection is a critical field in computer vision focusing on accurately identifying and locating specific objects in images or videos. Traditional methods for object detection rely on large labeled training datasets for each object category, which can be time-consuming and expensive to collect and annotate. To address this issue, researchers have introduced few-shot object detection (FSOD) approaches that merge few-shot learning and object detection principles. These approaches allow models to quickly adapt to new object categories with only a few annotated samples. While traditional FSOD methods have been studied before, this survey paper comprehensively reviews FSOD research with a specific focus on covering different FSOD settings such as standard FSOD, generalized FSOD, incremental FSOD, open-set FSOD, and domain adaptive FSOD. These approaches play a vital role in reducing the reliance on extensive labeled datasets, particularly as the need for efficient machine learning models continues to rise. This survey paper aims to provide a comprehensive understanding of the above-mentioned few-shot settings and explore the methodologies for each FSOD task. It thoroughly compares state-of-the-art methods across different FSOD settings, analyzing them in detail based on their evaluation protocols. Additionally, it offers insights into their applications, challenges, and potential future directions in the evolving field of object detection with limited data. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 483,471 |
2311.09178 | RBPGAN: Recurrent Back-Projection GAN for Video Super Resolution | Recently, video super resolution (VSR) has become a very impactful task in the area of Computer Vision due to its various applications. In this paper, we propose Recurrent Back-Projection Generative Adversarial Network (RBPGAN) for VSR in an attempt to generate temporally coherent solutions while preserving spatial details. RBPGAN integrates two state-of-the-art models to get the best in both worlds without compromising the accuracy of produced video. The generator of the model is inspired by RBPN system, while the discriminator is inspired by TecoGAN. We also utilize Ping-Pong loss to increase temporal consistency over time. Our contribution together results in a model that outperforms earlier work in terms of temporally consistent details, as we will demonstrate qualitatively and quantitatively using different datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 408,019 |
2502.12210 | Enhancing Frame Detection with Retrieval Augmented Generation | Recent advancements in Natural Language Processing have significantly improved the extraction of structured semantic representations from unstructured text, especially through Frame Semantic Role Labeling (FSRL). Despite this progress, the potential of Retrieval-Augmented Generation (RAG) models for frame detection remains under-explored. In this paper, we present the first RAG-based approach for frame detection called RCIF (Retrieve Candidates and Identify Frames). RCIF is also the first approach to operate without the need for explicit target span and comprises three main stages: (1) generation of frame embeddings from various representations; (2) retrieval of candidate frames given an input text; and (3) identification of the most suitable frames. We conducted extensive experiments across multiple configurations, including zero-shot, few-shot, and fine-tuning settings. Our results show that our retrieval component significantly reduces the complexity of the task by narrowing the search space, thus allowing the frame identifier to refine and complete the set of candidates. Our approach achieves state-of-the-art performance on FrameNet 1.5 and 1.7, demonstrating its robustness in scenarios where only raw text is provided. Furthermore, we leverage the structured representation obtained through this method as a proxy to enhance generalization across lexical variations in the task of translating natural language questions into SPARQL queries. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 534,758
2304.14475 | ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger | Textual backdoor attacks pose a practical threat to existing systems, as they can compromise the model by inserting imperceptible triggers into inputs and manipulating labels in the training dataset. With cutting-edge generative models such as GPT-4 pushing rewriting to extraordinary levels, such attacks are becoming even harder to detect. We conduct a comprehensive investigation of the role of black-box generative models as a backdoor attack tool, highlighting the importance of researching relative defense strategies. In this paper, we reveal that the proposed generative model-based attack, BGMAttack, could effectively deceive textual classifiers. Compared with the traditional attack methods, BGMAttack makes the backdoor trigger less conspicuous by leveraging state-of-the-art generative models. Our extensive evaluation of attack effectiveness across five datasets, complemented by three distinct human cognition assessments, reveals that BGMAttack achieves comparable attack performance while maintaining superior stealthiness relative to baseline methods. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 360,975
2002.10625 | A Node Embedding Framework for Integration of Similarity-based Drug Combination Prediction | Motivation: Drug combination is a sensible strategy for disease treatment by improving the efficacy and reducing concomitant side effects. Due to the large number of possible combinations among candidate compounds, exhaustive screening is prohibitive. Currently, plenty of studies have focused on predicting potential drug combinations. However, these methods are not entirely satisfactory in performance and scalability. Results: In this paper, we proposed a Network Embedding framework in Multiplex Networks (NEMN) to predict synthetic drug combinations. Based on a multiplex drug similarity network, we offered alternative methods to integrate useful information from different aspects and to decide the quantitative importance of each network. To explain the feasibility of NEMN, we applied our framework to the data of drug-drug interactions, on which it showed better performance in terms of AUPR and ROC. For drug combination prediction, we found seven novel drug combinations which have been validated by external sources among the top-ranked predictions of our model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 165,458
1703.05160 | A New Unbiased and Efficient Class of LSH-Based Samplers and Estimators for Partition Function Computation in Log-Linear Models | Log-linear models are arguably the most successful class of graphical models for large-scale applications because of their simplicity and tractability. Learning and inference with these models require calculating the partition function, which is a major bottleneck and intractable for large state spaces. Importance Sampling (IS) and MCMC-based approaches are lucrative. However, the condition of having a "good" proposal distribution is often not satisfied in practice. In this paper, we add a new dimension to efficient estimation via sampling. We propose a new sampling scheme and an unbiased estimator that estimates the partition function accurately in sub-linear time. Our samples are generated in near-constant time using locality sensitive hashing (LSH), and so are correlated and unnormalized. We demonstrate the effectiveness of our proposed approach by comparing the accuracy and speed of estimating the partition function against other state-of-the-art estimation techniques including IS and the efficient variant of Gumbel-Max sampling. With our efficient sampling scheme, we accurately train real-world language models using only 1-2% of computations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | 70,032
2304.13375 | Streamlined Global and Local Features Combinator (SGLC) for High Resolution Image Dehazing | Image Dehazing aims to remove atmospheric fog or haze from an image. Although the Dehazing models have evolved a lot in recent years, few have precisely tackled the problem of High-Resolution hazy images. For this kind of image, the model needs to work on a downscaled version of the image or on cropped patches from it. In both cases, the accuracy will drop. This is primarily due to the inherent failure to combine global and local features when the image size increases. The Dehazing model requires global features to understand the general scene peculiarities and the local features to work better with fine and pixel details. In this study, we propose the Streamlined Global and Local Features Combinator (SGLC) to solve these issues and to optimize the application of any Dehazing model to High-Resolution images. The SGLC contains two successive blocks. The first is the Global Features Generator (GFG) which generates the first version of the Dehazed image containing strong global features. The second block is the Local Features Enhancer (LFE) which improves the local feature details inside the previously generated image. When tested on the Uformer architecture for Dehazing, SGLC increased the PSNR metric by a significant margin. Any other model can be incorporated inside the SGLC process to improve its efficiency on High-Resolution input data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 360,553
1812.02618 | Automatic hyperparameter selection in Autodock | Autodock is a widely used molecular modeling tool which predicts how small molecules bind to a receptor of known 3D structure. The current version of AutoDock uses meta-heuristic algorithms in combination with local search methods for doing the conformation search. Appropriate settings of hyperparameters in these algorithms are important, particularly for novice users who often find it hard to identify the best configuration. In this work, we design a surrogate based multi-objective algorithm to help such users by automatically tuning hyperparameter settings. The proposed method iteratively uses a radial basis function model and non-dominated sorting to evaluate the sampled configurations during the search phase. Our experimental results using Autodock show that the introduced component is practical and effective. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 115,809 |
1910.08506 | Adaptive Partitioning for Template Functions on Persistence Diagrams | As the field of Topological Data Analysis continues to show success in theory and in applications, there has been increasing interest in using tools from this field with methods for machine learning. Using persistent homology, specifically persistence diagrams, as inputs to machine learning techniques requires some mathematical creativity. The space of persistence diagrams does not have the desirable properties for machine learning, thus methods such as kernel methods and vectorization methods have been developed. One such featurization of persistence diagrams by Perea, Munch and Khasawneh uses continuous, compactly supported functions, referred to as "template functions," which results in a stable vector representation of the persistence diagram. In this paper, we provide a method of adaptively partitioning persistence diagrams to improve these featurizations based on localized information in the diagrams. Additionally, we provide a framework to adaptively select parameters required for the template functions in order to best utilize the partitioning method. We present results for application to example data sets comparing classification results between template function featurizations with and without partitioning, in addition to other methods from the literature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 149,884 |
1905.10579 | Solutions of $x^{q^k}+\cdots+x^{q}+x=a$ in $GF(2^n)$ | Though it is well known that the roots of any affine polynomial over a finite field can be computed by a system of linear equations by using a normal base of the field, such a solving approach appears to be difficult to apply when the field is fairly large. Thus, it may be of great interest to find an explicit representation of the solutions independently of the field base. This was previously done only for quadratic equations over a binary finite field. This paper gives an explicit representation of solutions for a much wider class of affine polynomials over a binary prime field. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 132,109
2412.01277 | Streamlining the Action Dependency Graph Framework: Two Key Enhancements | Multi Agent Path Finding (MAPF) is critical for coordinating multiple robots in shared environments, yet robust execution of generated plans remains challenging due to operational uncertainties. The Action Dependency Graph (ADG) framework offers a way to ensure correct action execution by establishing precedence-based dependencies between wait and move actions retrieved from a MAPF planning result. The original construction algorithm is not only inefficient, with a quadratic worst-case time complexity, but also results in a network with many redundant dependencies between actions. This paper introduces two key improvements to the ADG framework. First, we prove that wait actions are generally redundant and show that removing them can lead to faster overall plan execution on real robot systems. Second, we propose an optimized ADG construction algorithm, termed Sparse Candidate Partitioning (SCP), which skips unnecessary dependencies and lowers the time complexity to quasi-linear, thereby significantly improving construction speed. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | 513,045
2403.16042 | Force Controlled Printing for Material Extrusion Additive Manufacturing | In material extrusion additive manufacturing, the extrusion process is commonly controlled in a feed-forward fashion. The amount of material to be extruded at each printing location is pre-computed by a planning software. This approach is inherently unable to adapt the extrusion to external and unexpected disturbances, and the quality of the results strongly depends on a number of modeling and tuning parameters. To overcome these limitations, we propose the first framework for Force Controlled Printing for material extrusion additive manufacturing. We utilize a custom-built extruder to measure the extrusion force in real time, and use this quantity as feedback to continuously control the material flow in closed-loop. We demonstrate the existence of a strong correlation between extrusion force and line width, which we exploit to deposit lines of desired width in a width range of 33 % up to 233 % of the nozzle diameter. We also show how Force Controlled Printing outperforms conventional feed-forward extrusion in print quality and disturbance rejection, while requiring little tuning and automatically adapting to changes in the hardware settings. With no adaptation, Force Controlled Printing can deposit lines of desired width under severe disturbances in bed leveling, such as at layer heights ranging between 20 % and 200 % of the nominal height. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 440,846 |
1907.09918 | A Simple Design of IRS-NOMA Transmission | This letter proposes a simple design of intelligent reflecting surface (IRS) assisted non-orthogonal multiple access (NOMA) transmission, which can ensure that more users are served on each orthogonal spatial direction than spatial division multiple access (SDMA). In particular, by employing IRS, the directions of users' channel vectors can be effectively aligned, which facilitates the implementation of NOMA. Both analytical and simulation results are provided to demonstrate the performance of the proposed IRS-NOMA scheme and also study the impact of hardware impairments on IRS-NOMA. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 139,502 |
2311.08085 | Optimizing Electric Vehicle Efficiency with Real-Time Telemetry using Machine Learning | In the contemporary world with degrading natural resources, the urgency of energy efficiency has become imperative due to the conservation and environmental safeguarding. Therefore, it's crucial to look for advanced technology to minimize energy consumption. This research focuses on the optimization of battery-electric city style vehicles through the use of a real-time in-car telemetry system that communicates between components through the robust Controller Area Network (CAN) protocol. By harnessing real-time data from various sensors embedded within vehicles, our driving assistance system provides the driver with visual and haptic actionable feedback that guides the driver on using the optimum driving style to minimize power consumed by the vehicle. To develop the pace feedback mechanism for the driver, real-time data is collected through a Shell Eco Marathon Urban Concept vehicle platform and after pre-processing, it is analyzed using the novel machine learning algorithm TEMSL, that outperforms the existing baseline approaches across various performance metrics. This innovative method after numerous experimentation has proven effective in enhancing energy efficiency, guiding the driver along the track, and reducing human errors. The driving-assistance system offers a range of utilities, from cost savings and extended vehicle lifespan to significant contributions to environmental conservation and sustainable driving practices. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 407,580 |
2401.15866 | Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution | Many tasks in explainable machine learning, such as data valuation and feature attribution, perform expensive computation for each data point and are intractable for large datasets. These methods require efficient approximations, and although amortizing the process by learning a network to directly predict the desired output is a promising solution, training such models with exact labels is often infeasible. We therefore explore training amortized models with noisy labels, and we find that this is inexpensive and surprisingly effective. Through theoretical analysis of the label noise and experiments with various models and datasets, we show that this approach tolerates high noise levels and significantly accelerates several feature attribution and data valuation methods, often yielding an order of magnitude speedup over existing approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 424,620 |
1504.05776 | Combining local regularity estimation and total variation optimization for scale-free texture segmentation | Texture segmentation constitutes a standard image processing task, crucial to many applications. The present contribution focuses on the particular subset of scale-free textures and its originality resides in the combination of three key ingredients: First, texture characterization relies on the concept of local regularity; Second, estimation of local regularity is based on new multiscale quantities referred to as wavelet leaders; Third, segmentation from local regularity faces a fundamental bias-variance trade-off: In nature, local regularity estimation shows high variability that impairs the detection of changes, while a posteriori smoothing of regularity estimates precludes correctly locating changes. Instead, the present contribution proposes several variational problem formulations based on total variation and proximal resolutions that effectively circumvent this trade-off. Estimation and segmentation performance for the proposed procedures are quantified and compared on synthetic as well as on real-world textures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 42,319 |
2208.14133 | Deep Generative Modeling on Limited Data with Regularization by Nontransferable Pre-trained Models | Deep generative models (DGMs) are data-eager because learning a complex model on limited data suffers from a large variance and easily overfits. Inspired by the classical perspective of the bias-variance tradeoff, we propose regularized deep generative model (Reg-DGM), which leverages a nontransferable pre-trained model to reduce the variance of generative modeling with limited data. Formally, Reg-DGM optimizes a weighted sum of a certain divergence and the expectation of an energy function, where the divergence is between the data and the model distributions, and the energy function is defined by the pre-trained model w.r.t. the model distribution. We analyze a simple yet representative Gaussian-fitting case to demonstrate how the weighting hyperparameter trades off the bias and the variance. Theoretically, we characterize the existence and the uniqueness of the global minimum of Reg-DGM in a non-parametric setting and prove its convergence with neural networks trained by gradient-based methods. Empirically, with various pre-trained feature extractors and a data-dependent energy function, Reg-DGM consistently improves the generation performance of strong DGMs with limited data and achieves competitive results to the state-of-the-art methods. Our implementation is available at https://github.com/ML-GSAI/Reg-ADA-APA. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 315,229 |
2308.02435 | Designing Fiduciary Artificial Intelligence | A fiduciary is a trusted agent that has the legal duty to act with loyalty and care towards a principal that employs them. When fiduciary organizations interact with users through a digital interface, or otherwise automate their operations with artificial intelligence, they will need to design these AI systems to be compliant with their duties. This article synthesizes recent work in computer science and law to develop a procedure for designing and auditing Fiduciary AI. The designer of a Fiduciary AI should understand the context of the system, identify its principals, and assess the best interests of those principals. Then the designer must be loyal with respect to those interests, and careful in a contextually appropriate way. We connect the steps in this procedure to dimensions of Trustworthy AI, such as privacy and alignment. Fiduciary AI is a promising means to address the incompleteness of data subject's consent when interacting with complex technical systems. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 383,621 |
2202.05833 | Active Privacy-Utility Trade-off Against Inference in Time-Series Data Sharing | Internet of things (IoT) devices, such as smart meters, smart speakers and activity monitors, have become highly popular thanks to the services they offer. However, in addition to their many benefits, they raise privacy concerns since they share fine-grained time-series user data with untrusted third parties. In this work, we consider a user releasing her data containing personal information in return of a service from an honest-but-curious service provider (SP). We model user's personal information as two correlated random variables (r.v.'s), one of them, called the secret variable, is to be kept private, while the other, called the useful variable, is to be disclosed for utility. We consider active sequential data release, where at each time step the user chooses from among a finite set of release mechanisms, each revealing some information about the user's personal information, i.e., the true values of the r.v.'s, albeit with different statistics. The user manages data release in an online fashion such that the maximum amount of information is revealed about the latent useful variable as quickly as possible, while the confidence for the sensitive variable is kept below a predefined level. For privacy measure, we consider both the probability of correctly detecting the true value of the secret and the mutual information (MI) between the secret and the released data. We formulate both problems as partially observable Markov decision processes (POMDPs), and numerically solve them by advantage actor-critic (A2C) deep reinforcement learning (DRL). We evaluate the privacy-utility trade-off (PUT) of the proposed policies on both the synthetic data and smoking activity dataset, and show their validity by testing the activity detection accuracy of the SP modeled by a long short-term memory (LSTM) neural network. | false | false | false | false | false | false | true | false | false | true | false | false | true | false | false | false | false | false | 280,007 |