id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2412.08639 | Fast Prompt Alignment for Text-to-Image Generation | Text-to-image generation has advanced rapidly, yet aligning complex textual prompts with generated visuals remains challenging, especially with intricate object relationships and fine-grained details. This paper introduces Fast Prompt Alignment (FPA), a prompt optimization framework that leverages a one-pass approach, enhancing text-to-image alignment efficiency without the iterative overhead typical of current methods like OPT2I. FPA uses large language models (LLMs) for single-iteration prompt paraphrasing, followed by fine-tuning or in-context learning with optimized prompts to enable real-time inference, reducing computational demands while preserving alignment fidelity. Extensive evaluations on the COCO Captions and PartiPrompts datasets demonstrate that FPA achieves competitive text-image alignment scores at a fraction of the processing time, as validated through both automated metrics (TIFA, VQA) and human evaluation. A human study with expert annotators further reveals a strong correlation between human alignment judgments and automated scores, underscoring the robustness of FPA's improvements. The proposed method showcases a scalable, efficient alternative to iterative prompt optimization, enabling broader applicability in real-time, high-demand settings. The codebase is provided to facilitate further research: https://github.com/tiktok/fast_prompt_alignment | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 516,179 |
2312.12290 | Toward enriched Cognitive Learning with XAI | As computational systems supported by artificial intelligence (AI) techniques continue to play an increasingly pivotal role in making high-stakes recommendations and decisions across various domains, the demand for explainable AI (XAI) has grown significantly, extending its impact into cognitive learning research. Providing explanations for novel concepts is recognised as a fundamental aid in the learning process, particularly when addressing challenges stemming from knowledge deficiencies and skill application. Addressing these difficulties involves timely explanations and guidance throughout the learning process, prompting the interest of AI experts in developing explainer models. In this paper, we introduce an intelligent system (CL-XAI) for Cognitive Learning which is supported by XAI, focusing on two key research objectives: exploring how human learners comprehend the internal mechanisms of AI models using XAI tools and evaluating the effectiveness of such tools through human feedback. The use of CL-XAI is illustrated with a game-inspired virtual use case where learners tackle combinatorial problems to enhance problem-solving skills and deepen their understanding of complex concepts, highlighting the potential for transformative advances in cognitive learning and co-learning. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 416,905 |
1405.5197 | Optimization of Vehicle Dynamics based on Multibody Models using Adjoint Sensitivity Analysis | Multibody dynamics simulations have become widely used tools for vehicle systems analysis and design. As this approach evolves, it becomes able to provide additional information for various types of analyses. One very important direction is the optimization of multibody systems. Sensitivity analysis of multibody system dynamics is essential for design optimization. Dynamic sensitivities, when needed, are often calculated by means of finite differences. However, depending on the number of parameters involved, this procedure can be computationally expensive. Moreover, in many cases the results suffer from low accuracy when real perturbations are used. This paper develops the adjoint sensitivity analysis of multibody systems in the context of penalty formulations. The resulting sensitivities are applied to perform dynamical optimization of a full vehicle system. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 33,250 |
2306.15559 | RansomAI: AI-powered Ransomware for Stealthy Encryption | Cybersecurity solutions have shown promising performance when detecting ransomware samples that use fixed algorithms and encryption rates. However, due to the current explosion of Artificial Intelligence (AI), sooner rather than later, ransomware (and malware in general) will incorporate AI techniques to intelligently and dynamically adapt its encryption behavior so as to remain undetected. This might render cybersecurity solutions ineffective and obsolete, but the literature lacks AI-powered ransomware to verify it. Thus, this work proposes RansomAI, a Reinforcement Learning-based framework that can be integrated into existing ransomware samples to adapt their encryption behavior and stay stealthy while encrypting files. RansomAI presents an agent that learns the best encryption algorithm, rate, and duration that minimize its detection (using a reward mechanism and a fingerprinting intelligent detection system) while maximizing its damage function. The proposed framework was validated with a ransomware sample, Ransomware-PoC, that infected a Raspberry Pi 4 acting as a crowdsensor. A pool of experiments with Deep Q-Learning and Isolation Forest (deployed on the agent and detection system, respectively) has demonstrated that RansomAI evades the detection of Ransomware-PoC affecting the Raspberry Pi 4 in a few minutes with >90% accuracy. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 376,062 |
2502.11387 | RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following | Role-playing is important for Large Language Models (LLMs) to follow diverse instructions while maintaining role identity and the role's pre-defined ability limits. Existing role-playing datasets mostly contribute to controlling role style and knowledge boundaries, but overlook role-playing in instruction-following scenarios. We introduce a fine-grained role-playing and instruction-following composite benchmark, named RoleMRC, including: (1) Multi-turn dialogues between ideal roles and humans, including free chats or discussions upon given passages; (2) Role-playing machine reading comprehension, involving response, refusal, and attempts according to passage answerability and role ability; (3) More complex scenarios with nested, multi-turn and prioritized instructions. The final RoleMRC features a 10.2k role profile meta-pool, 37.9k well-synthesized role-playing instructions, and 1.4k testing samples. We develop a pipeline to quantitatively evaluate the fine-grained role-playing and instruction-following capabilities of several mainstream LLMs, as well as models that are fine-tuned on our data. Moreover, cross-evaluation on external role-playing datasets confirms that models fine-tuned on RoleMRC enhance instruction-following without compromising general role-playing and reasoning capabilities. We also probe the neural-level activation maps of different capabilities over post-tuned LLMs. Access to our RoleMRC, RoleMRC-mix and Codes: https://github.com/LuJunru/RoleMRC. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 534,338 |
2305.00432 | Synthetic Data-based Detection of Zebras in Drone Imagery | Nowadays, there is a wide availability of datasets that enable the training of common object detectors or human detectors. These come in the form of labelled real-world images and require either a significant amount of human effort, with a high probability of errors such as missing labels, or very constrained scenarios, e.g. VICON systems. On the other hand, uncommon scenarios, like aerial views, animals, like wild zebras, or difficult-to-obtain information, such as human shapes, are hardly available. To overcome this, synthetic data generation with realistic rendering technologies has recently gained traction and advanced research areas such as target tracking and human pose estimation. However, subjects such as wild animals are still usually not well represented in such datasets. In this work, we first show that a pre-trained YOLO detector cannot identify zebras in real images recorded from aerial viewpoints. To solve this, we present an approach for training an animal detector using only synthetic data. We start by generating a novel synthetic zebra dataset using GRADE, a state-of-the-art framework for data generation. The dataset includes RGB, depth, skeletal joint locations, pose, shape and instance segmentations for each subject. We use this to train a YOLO detector from scratch. Through extensive evaluations of our model with real-world data from i) limited datasets available on the internet and ii) a new one collected and manually labelled by us, we show that we can detect zebras by using only synthetic data during training. The code, results, trained models, and both the generated and training data are provided as open-source at https://eliabntt.github.io/grade-rr. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 361,337 |
2502.06526 | Convex Split Lemma without Inequalities | We introduce a refinement to the convex split lemma by replacing the max mutual information with the collision mutual information, transforming the inequality into an equality. This refinement yields tighter achievability bounds for quantum source coding tasks, including state merging and state splitting. Furthermore, we derive a universal upper bound on the smoothed max mutual information, where "universal" signifies that the bound depends exclusively on Rényi entropies and is independent of the system's dimensions. This result has significant implications for quantum information processing, particularly in applications such as the reverse quantum Shannon theorem. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 532,112 |
2401.17404 | ROAMER: Robust Offroad Autonomy using Multimodal State Estimation with Radar Velocity Integration | Reliable offroad autonomy requires low-latency, high-accuracy state estimates of pose as well as velocity, which remain viable throughout environments with sub-optimal operating conditions for the utilized perception modalities. As state estimation remains a single point of failure in the majority of aspiring autonomous systems, failing to address the environmental degradation the perception sensors could potentially experience under the given operating conditions can be a mission-critical shortcoming. In this work, a method for integration of radar velocity information into a LiDAR-inertial odometry solution is proposed, enabling consistent estimation performance even with degraded LiDAR-inertial odometry. The proposed method utilizes the direct velocity-measuring capabilities of a Frequency Modulated Continuous Wave (FMCW) radar sensor to enhance the LiDAR-inertial smoother solution onboard the vehicle through integration of the forward velocity measurement into the graph-based smoother. This leads to increased robustness in the overall estimation solution, even in the absence of LiDAR data. This method was validated by hardware experiments conducted onboard an all-terrain vehicle traveling at high speed, ~12 m/s, in demanding offroad environments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 425,185 |
2003.09995 | Gravitational Wave Detection and Information Extraction via Neural Networks | Laser Interferometer Gravitational-Wave Observatory (LIGO) was the first laboratory to measure gravitational waves. An exceptional experimental design was needed to measure distance changes much smaller than the radius of a proton. In the same way, the data analysis needed to confirm detections and extract information is a tremendously hard task. Here, a computational procedure based on artificial neural networks is shown to detect a gravitational wave event and extract knowledge of its ring-down time from the LIGO data. With this proposal, it is possible to make a probabilistic thermometer for gravitational wave detection and obtain physical information about the astronomical body system that created the phenomenon. Here, the ring-down time is determined with a direct data measure, without the need to use numerical relativity techniques and high computational power. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 169,205 |
2106.10022 | Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Optimization | Large scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and training of generative adversarial networks. Despite their wide applicability, solving such problems efficiently and effectively is challenging in the presence of large amounts of data using existing stochastic minimax methods. We study a class of stochastic minimax methods and develop a communication-efficient distributed stochastic extragradient algorithm, LocalAdaSEG, with an adaptive learning rate suitable for solving convex-concave minimax problems in the Parameter-Server model. LocalAdaSEG has three main features: (i) a periodic communication strategy that reduces the communication cost between workers and the server; (ii) an adaptive learning rate that is computed locally and allows for tuning-free implementation; and (iii) theoretically, a nearly linear speed-up with respect to the dominant variance term, arising from the estimation of the stochastic gradient, is proven in both the smooth and nonsmooth convex-concave settings. LocalAdaSEG is used to solve a stochastic bilinear game, and train a generative adversarial network. We compare LocalAdaSEG against several existing optimizers for minimax problems and demonstrate its efficacy through several experiments in both homogeneous and heterogeneous settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 241,874 |
2107.02381 | An Inverse QSAR Method Based on Linear Regression and Integer Programming | Recently a novel framework has been proposed for designing the molecular structure of chemical compounds using both artificial neural networks (ANNs) and mixed integer linear programming (MILP). In the framework, we first define a feature vector $f(C)$ of a chemical graph $C$ and construct an ANN that maps $x=f(C)$ to a predicted value $\eta(x)$ of a chemical property $\pi$ for $C$. After this, we formulate an MILP that simulates the computation process of $f(C)$ from $C$ and that of $\eta(x)$ from $x$. Given a target value $y^*$ of the chemical property $\pi$, we infer a chemical graph $C^\dagger$ such that $\eta(f(C^\dagger))=y^*$ by solving the MILP. In this paper, we use linear regression to construct a prediction function $\eta$ instead of ANNs. For this, we derive an MILP formulation that simulates the computation process of a prediction function by linear regression. The results of computational experiments suggest that our method can infer chemical graphs with up to around 50 non-hydrogen atoms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,793 |
2412.01236 | Confinement Specific Design of SOI Rib Waveguides with Submicron Dimensions and Single Mode Operation | A full-vectorial finite difference method with perfectly matched layer boundaries is used to identify the single mode operation region of submicron rib waveguides fabricated using the silicon-on-insulator material system. Achieving high mode power confinement factors is emphasized while maintaining single mode operation. As opposed to the case of large cross-section rib waveguides, theoretical single mode conditions have been demonstrated to hold for submicron waveguides with accuracy approaching 100%. Both the deeply and the shallowly etched rib waveguides have been considered, and the single mode condition for the entire sub-micrometer range is presented while adhering to design-specific mode confinement requirements. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 513,020 |
2111.14467 | What Drives Readership? An Online Study on User Interface Types and Popularity Bias Mitigation in News Article Recommendations | Personalized news recommender systems support readers in finding the right and relevant articles in online news platforms. In this paper, we discuss the introduction of personalized, content-based news recommendations on DiePresse, a popular Austrian online news platform, focusing on two specific aspects: (i) user interface type, and (ii) popularity bias mitigation. To this end, we conducted a two-week online study that started in October 2020, in which we analyzed the impact of recommendations on two user groups, i.e., anonymous and subscribed users, and three user interface types, i.e., on a desktop, mobile and tablet device. With respect to user interface types, we find that the probability of a recommendation being seen is the highest for desktop devices, while the probability of interacting with recommendations is the highest for mobile devices. With respect to popularity bias mitigation, we find that personalized, content-based news recommendations can lead to a more balanced distribution of news articles' readership popularity in the case of anonymous users. Apart from that, we find that significant events (e.g., the COVID-19 lockdown announcement in Austria and the Vienna terror attack) influence the general consumption behavior of popular articles for both anonymous and subscribed users. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 268,611 |
2408.06787 | Unlock the Power of Frozen LLMs in Knowledge Graph Completion | Traditional knowledge graph completion (KGC) methods rely solely on structural information, struggling with the inherent sparsity of knowledge graphs (KGs). Large Language Models (LLMs) learn extensive knowledge from large corpora with powerful context modeling, making them promising for mitigating the limitations of previous methods. Directly fine-tuning LLMs offers great capability but comes at the cost of huge time and memory consumption, while utilizing frozen LLMs yields suboptimal results. In this work, we aim to leverage LLMs for KGC effectively and efficiently. We capture the context-aware hidden states of knowledge triples by employing prompts to stimulate the intermediate layers of LLMs. We then train a data-efficient classifier on these hidden states to harness the inherent capabilities of frozen LLMs in KGC. Additionally, to reduce ambiguity and enrich knowledge representation, we generate detailed entity descriptions through subgraph sampling on KGs. Extensive experiments on standard benchmarks demonstrate the efficiency and effectiveness of our approach. We outperform traditional KGC methods across most datasets and, notably, achieve classification performance comparable to fine-tuned LLMs while enhancing GPU memory efficiency by $188\times$ and accelerating training and inference by $13.48\times$. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 480,344 |
2110.07143 | bert2BERT: Towards Reusable Pretrained Language Models | In recent years, researchers tend to pre-train ever-larger language models to explore the upper limit of deep models. However, large language model pre-training requires intensive computational resources, and most of the models are trained from scratch without reusing the existing pre-trained models, which is wasteful. In this paper, we propose bert2BERT, which can effectively transfer the knowledge of an existing smaller pre-trained model (e.g., BERT_BASE) to a large model (e.g., BERT_LARGE) through parameter initialization and significantly improve the pre-training efficiency of the large model. Specifically, we extend the previous function-preserving method to Transformer-based language models, and further improve it by proposing advanced knowledge for the large model's initialization. In addition, a two-stage pre-training method is proposed to further accelerate the training process. We conducted extensive experiments on representative PLMs (e.g., BERT and GPT) and demonstrate that (1) our method can save a significant amount of training cost compared with baselines including learning from scratch, StackBERT and MSLT; (2) our method is generic and applicable to different types of pre-trained models. In particular, bert2BERT saves about 45% and 47% of the computational cost of pre-training BERT_BASE and GPT_BASE by reusing models of almost half their sizes. The source code will be publicly available upon publication. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 260,876 |
1903.10830 | Large-scale interactive object segmentation with human annotators | Manually annotating object segmentation masks is very time consuming. Interactive object segmentation methods offer a more efficient alternative where a human annotator and a machine segmentation model collaborate. In this paper we make several contributions to interactive segmentation: (1) we systematically explore in simulation the design space of deep interactive segmentation models and report new insights and caveats; (2) we execute a large-scale annotation campaign with real human annotators, producing masks for 2.5M instances on the OpenImages dataset. We plan to release this data publicly, forming the largest existing dataset for instance segmentation. Moreover, by re-annotating part of the COCO dataset, we show that we can produce instance masks 3 times faster than traditional polygon drawing tools while also providing better quality. (3) we present a technique for automatically estimating the quality of the produced masks which exploits indirect signals from the annotation process. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 125,369 |
1612.09313 | The emergence of pseudo-stable states in network dynamics | In the context of network dynamics, the complexity of systems increases the number of possible evolutionary paths, which often are not deterministic. Occasionally, some map routes form over the course of time which guide systems towards particular states. The main intention of this study is to discover an indicator that can help predict these pseudo-deterministic paths in advance. Here we investigate the dynamics of networks based on Heider balance theory, which states the tendency of systems towards decreasing tension. This inclination leads systems to some local and global minimum tension states called "jammed states" and "balanced states", respectively. We show that not only are paths towards jammed states not completely random, but there also exist hidden pseudo-deterministic paths that bind the system to end up in these special states. Our results show that the Inverse Participation Ratio (IPR) method can be a suitable indicator that exhibits collective behaviors of systems. According to this method, these specific paths are those that host the most participation of the constituents in the system. A direct proportionality exists between the distance and the selectable paths towards local minimums; when getting close to the final steps there is no other way but the one to the jammed states. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 66,176 |
2309.04819 | Detecting Violations of Differential Privacy for Quantum Algorithms | Quantum algorithms for solving a wide range of practical problems have been proposed in the last ten years, such as data search and analysis, product recommendation, and credit scoring. Concern about privacy and other ethical issues in quantum computing naturally arises. In this paper, we define a formal framework for detecting violations of differential privacy for quantum algorithms. A detection algorithm is developed to verify whether a (noisy) quantum algorithm is differentially private and automatically generate debugging information when a violation of differential privacy is reported. The information consists of a pair of quantum states that violate the privacy, to illustrate the cause of the violation. Our algorithm is equipped with Tensor Networks, a highly efficient data structure, and executed both on TensorFlow Quantum and TorchQuantum, which are the quantum extensions of the famous machine learning platforms TensorFlow and PyTorch, respectively. The effectiveness and efficiency of our algorithm are confirmed by the experimental results of almost all types of quantum algorithms already implemented on realistic quantum computers, including quantum supremacy algorithms (beyond the capability of classical algorithms), quantum machine learning models, quantum approximate optimization algorithms, and variational quantum eigensolvers with up to 21 quantum bits. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 390,864 |
2112.07232 | Structure-Exploiting Newton-Type Method for Optimal Control of Switched Systems | This study proposes an efficient Newton-type method for the optimal control of switched systems under a given mode sequence. A mesh-refinement-based approach is utilized to discretize continuous-time optimal control problems (OCPs) and formulate a nonlinear program (NLP), which guarantees the local convergence of a Newton-type method. A dedicated structure-exploiting algorithm (Riccati recursion) is proposed to perform a Newton-type method for the NLP efficiently, because its sparsity structure is different from a standard OCP. The proposed method computes each Newton step with linear time complexity in the total number of discretization grids, as does the standard Riccati recursion algorithm. Additionally, the computation is always successful if the solution is sufficiently close to a local minimum. Conversely, general quadratic programming (QP) solvers cannot accomplish this because the Hessian matrix is inherently indefinite. Moreover, a modification to the reduced Hessian matrix is proposed, using the nature of the Riccati recursion algorithm as dynamic programming for a QP subproblem, to enhance convergence. A numerical comparison is conducted with off-the-shelf NLP solvers, which demonstrates that the proposed method is up to two orders of magnitude faster. Whole-body optimal control of quadrupedal gaits is also demonstrated and shows that the proposed method can achieve whole-body model predictive control (MPC) of robotic systems with rigid contacts. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 271,419 |
1810.08393 | DGC-Net: Dense Geometric Correspondence Network | This paper addresses the challenge of dense pixel correspondence estimation between two images. This problem is closely related to the optical flow estimation task, where ConvNets (CNNs) have recently achieved significant progress. While optical flow methods produce very accurate results for small pixel translation and limited appearance variation scenarios, they hardly deal with the strong geometric transformations that we consider in this work. In this paper, we propose a coarse-to-fine CNN-based framework that can leverage the advantages of optical flow approaches and extend them to the case of large transformations, providing dense and subpixel accurate estimates. It is trained on synthetic transformations and demonstrates very good performance on unseen, realistic data. Further, we apply our method to the problem of relative camera pose estimation and demonstrate that the model outperforms existing dense approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 110,815 |
2012.04525 | GMM-Based Generative Adversarial Encoder Learning | While GAN is a powerful model for generating images, its inability to infer a latent space directly limits its use in applications requiring an encoder. Our paper presents a simple architectural setup that combines the generative capabilities of GAN with an encoder. We accomplish this by combining the encoder with the discriminator using shared weights, then training them simultaneously using a new loss term. We model the output of the encoder latent space via a GMM, which leads to both good clustering using this latent space and improved image generation by the GAN. Our framework is generic and can be easily plugged into any GAN strategy. In particular, we demonstrate it both with Vanilla GAN and Wasserstein GAN, where in both it leads to an improvement in the generated images in terms of both the IS and FID scores. Moreover, we show that our encoder learns a meaningful representation as its clustering results are competitive with the current GAN-based state-of-the-art in clustering. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 210,481 |
2007.07415 | Automatic Image Labelling at Pixel Level | The performance of deep networks for semantic image segmentation largely depends on the availability of large-scale training images which are labelled at the pixel level. Typically, such pixel-level image labellings are obtained manually by a labour-intensive process. To alleviate the burden of manual image labelling, we propose an interesting learning approach to generate pixel-level image labellings automatically. A Guided Filter Network (GFN) is first developed to learn the segmentation knowledge from a source domain, and such GFN then transfers such segmentation knowledge to generate coarse object masks in the target domain. Such coarse object masks are treated as pseudo labels and they are further integrated to optimize/refine the GFN iteratively in the target domain. Our experiments on six image sets have demonstrated that our proposed approach can generate fine-grained object masks (i.e., pixel-level object labellings), whose quality is very comparable to the manually-labelled ones. Our proposed approach can also achieve better performance on semantic image segmentation than most existing weakly-supervised approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 187,323 |
2310.01537 | Adversarial Client Detection via Non-parametric Subspace Monitoring in the Internet of Federated Things | The Internet of Federated Things (IoFT) represents a network of interconnected systems with federated learning as the backbone, facilitating collaborative knowledge acquisition while ensuring data privacy for individual systems. The wide adoption of IoFT, however, is hindered by security concerns, particularly the susceptibility of federated learning networks to adversarial attacks. In this paper, we propose an effective non-parametric approach FedRR, which leverages the low-rank features of the transmitted parameter updates generated by federated learning to address the adversarial attack problem. Besides, our proposed method is capable of accurately detecting adversarial clients and controlling the false alarm rate under the scenario with no attack occurring. Experiments based on digit recognition using the MNIST dataset validated the advantages of our approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 396,461
2106.05728 | Face mask detection using convolution neural network | In recent times, the coronaviruses, a large family of different viruses, have become very common, contagious, and dangerous to all of humankind. The virus spreads from human to human through exhaled breath, which leaves droplets of the virus on different surfaces that are then inhaled by another person, who catches the infection too. So it has become very important to protect ourselves and the people around us from this situation. We can take precautions such as social distancing, washing hands every two hours, using sanitizer, and, most importantly, wearing a mask. Public wearing of masks has now become very common everywhere in the world. India is among the most affected and devastated countries due to its extreme population density. This paper proposes a method to detect whether a face mask is worn or not, for offices or any other workplace with many people coming to work. We have used a convolutional neural network for the same. The model is trained on a real-world dataset and tested with live video streaming with good accuracy. Further, the accuracy of the model with different hyperparameters and with multiple people at different distances and locations in the frame is evaluated. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 240,204
2204.03573 | An optimized hybrid solution for IoT based lifestyle disease classification using stress data | Stress, anxiety, and nervousness are all high-risk health states in everyday life. Previously, stress levels were determined by speaking with people and gaining insight into what they had experienced recently or in the past. Typically, stress is caused by an incident that occurred a long time ago, but sometimes it is triggered by unknown factors. This is a challenging and complex task, but recent research advances have provided numerous opportunities to automate it. The fundamental features of most of these techniques are electrodermal activity (EDA) and heart rate values (HRV). We utilized an accelerometer to measure body motions to solve this challenge. The proposed novel method employs a test that measures a subject's electrocardiogram (ECG), galvanic skin values (GSV), HRV values, and body movements in order to provide a low-cost and time-saving solution for detecting stress lifestyle disease in modern times using cyber-physical systems. This study provides a new hybrid model for lifestyle disease classification that decreases execution time while picking the best collection of characteristics and increases classification accuracy. The developed approach is capable of dealing with the class imbalance problem by using the WESAD (wearable stress and affect dataset) dataset. The new model uses the Grid search (GS) method to select an optimized set of hyperparameters, and it uses a combination of the Correlation coefficient based Recursive feature elimination (CoC-RFE) method for optimal feature selection and gradient boosting as an estimator to classify the dataset, which achieves high accuracy and helps to provide smart, accurate, and high-quality healthcare systems. To demonstrate the validity and utility of the proposed methodology, its performance is compared to those of other well-established machine learning models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 290,358
2201.08702 | Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation | Contrastive learning has achieved remarkable success in representation learning via self-supervision in unsupervised settings. However, effectively adapting contrastive learning to supervised learning tasks remains a challenge in practice. In this work, we introduce a dual contrastive learning (DualCL) framework that simultaneously learns the features of input samples and the parameters of classifiers in the same space. Specifically, DualCL regards the parameters of the classifiers as augmented samples associated with different labels and then exploits the contrastive learning between the input samples and the augmented samples. Empirical studies on five benchmark text classification datasets and their low-resource versions demonstrate the improvement in classification accuracy and confirm the capability of learning discriminative representations of DualCL. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 276,424
cs/0306081 | An on-line Integrated Bookkeeping: electronic run log book and Meta-Data Repository for ATLAS | In the context of the ATLAS experiment there is growing evidence of the importance of different kinds of Meta-data including all the important details of the detector and data acquisition that are vital for the analysis of the acquired data. The Online BookKeeper (OBK) is a component of ATLAS online software that stores all information collected while running the experiment, including the Meta-data associated with the event acquisition, triggering and storage. The facilities for acquisition of control data within the on-line software framework, together with a full functional Web interface, make the OBK a powerful tool containing all information needed for event analysis, including an electronic log book. In this paper we explain how OBK plays a role as one of the main collectors and managers of Meta-data produced on-line, and we also focus on the Web facilities already available. The usage of the web interface as an electronic run logbook is also explained, together with the future extensions. We describe the technology used in OBK development and how we arrived at the present level, explaining the previous experience with various DBMS technologies. The extensive performance evaluations that have been performed and the usage in the production environment of the ATLAS test beams are also analysed. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 537,883
1602.08680 | Measuring and Predicting Tag Importance for Image Retrieval | Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This will further lead to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model to jointly exploit visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, the Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 52,677
2407.05346 | Wastewater Treatment Plant Data for Nutrient Removal System | This paper introduces the Agtrup (BlueKolding) dataset, collected from Denmark's Agtrup wastewater treatment plant, specifically designed to enhance phosphorus removal via chemical and biological methods. This rich dataset is assembled through a high-frequency Supervisory Control and Data Acquisition (SCADA) system data collection process, which captures a wide range of variables related to the operational dynamics of nutrient removal. It comprises time-series data featuring measurements sampled at two-minute intervals across various control, process, and environmental variables. The comprehensive dataset aims to foster significant advancements in wastewater management by supporting the development of sophisticated predictive models and optimizing operational strategies. By providing detailed insights into the interactions and efficiencies of chemical and biological phosphorus removal processes, the dataset serves as a vital resource for environmental researchers and engineers focused on improving the sustainability and effectiveness of wastewater treatment operations. The ultimate goal of this dataset is to facilitate the creation of digital twins and the application of machine learning techniques, such as deep reinforcement learning, to predict and enhance system performance under varying operational conditions. | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 470,936
2207.02250 | Array Camera Image Fusion using Physics-Aware Transformers | We demonstrate a physics-aware transformer for feature-based data fusion from cameras with diverse resolution, color spaces, focal planes, focal lengths, and exposure. We also demonstrate a scalable solution for synthetic training data generation for the transformer using open-source computer graphics software. We demonstrate image synthesis on arrays with diverse spectral responses, instantaneous field of view and frame rate. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 306,450 |
1303.0070 | Entropy Distance | Motivated by the approach of random linear codes, a new distance in the vector space over a finite field is defined as the logarithm of the "surface area" of a Hamming ball with radius being the corresponding Hamming distance. It is named entropy distance because of its close relation with entropy function. It is shown that entropy distance is a metric for a non-binary field and a pseudometric for the binary field. The entropy distance of a linear code is defined to be the smallest entropy distance between distinct codewords of the code. Analogues of the Gilbert bound, the Hamming bound, and the Singleton bound are derived for the largest size of a linear code given the length and entropy distance of the code. Furthermore, as an important property related to lossless joint source-channel coding, the entropy distance of a linear encoder is defined. Very tight upper and lower bounds are obtained for the largest entropy distance of a linear encoder with given dimensions of input and output vector spaces. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 22,527 |
1711.05401 | Revisiting Simple Neural Networks for Learning Representations of Knowledge Graphs | We address the problem of learning vector representations for entities and relations in Knowledge Graphs (KGs) for Knowledge Base Completion (KBC). This problem has received significant attention in the past few years and multiple methods have been proposed. Most of the existing methods in the literature use a predefined characteristic scoring function for evaluating the correctness of KG triples. These scoring functions distinguish correct triples (high score) from incorrect ones (low score). However, their performance varies across different datasets. In this work, we demonstrate that a simple neural network based score function can consistently achieve near state-of-the-art performance on multiple datasets. We also quantitatively demonstrate biases in standard benchmark datasets, and highlight the need to perform evaluation spanning various datasets. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 84,556
2111.09666 | CCSL: A Causal Structure Learning Method from Multiple Unknown Environments | Most existing causal structure learning methods assume data collected from one environment and independent and identically distributed (i.i.d.). In some cases, data are collected from different subjects from multiple environments, which provides more information but might make the data non-identically or non-independently distributed. Some previous efforts try to learn causal structure from this type of data in two independent stages, i.e., first discovering i.i.d. groups from non-i.i.d. samples, then learning the causal structures from different groups. This straightforward solution ignores the intrinsic connections between the two stages, that is, both the clustering stage and the learning stage should be guided by the same causal mechanism. Towards this end, we propose a unified Causal Cluster Structures Learning (named CCSL) method for causal discovery from non-i.i.d. data. This method simultaneously integrates the following two tasks: 1) clustering samples of the subjects with the same causal mechanism into different groups; 2) learning causal structures from the samples within the group. Specifically, for the former, we provide a Causality-related Chinese Restaurant Process to cluster samples based on the similarity of the causal structure; for the latter, we introduce a variational-inference-based approach to learn the causal structures. Theoretical results provide identification of the causal model and the clustering model under the linear non-Gaussian assumption. Experimental results on both simulated and real-world data further validate the correctness and effectiveness of the proposed method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 267,077
2206.11349 | Prompt Injection: Parameterization of Fixed Inputs | Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LM) to perform specific tasks. However, prompts are always included in the input text during inference, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We propose Prompt Injection (PI), a novel formulation of injecting the prompt into the parameters of an LM to be an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 304,234 |
2501.13212 | Covert Communication via Action-Dependent States | This paper studies covert communication over channels with ADSI when the state is available either non-causally or causally at the transmitter. Covert communication refers to reliable communication between a transmitter and a receiver while ensuring a low probability of detection by an adversary, which we refer to as `warden'. It is well known that in a point-to-point DMC, it is possible to communicate on the order of $\sqrt{N}$ bits reliably and covertly over $N$ channel uses while the transmitter and the receiver are required to share a secret key on the order of $\sqrt{N}$ bits. This paper studies achieving reliable and covert communication of positive rate, i.e., reliable and covert communication on the order of N bits in N channel uses, over a channel with ADSI while the transmitter has non-causal or causal access to the ADSI, and the transmitter and the receiver share a secret key of negligible rate. We derive achievable rates for both the non-causal and causal scenarios by using block-Markov encoding and secret key generation from the ADSI, which subsumes the best achievable rates for channels with random states. We also derive upper bounds, for both non-causal and causal scenarios, that meet our achievable rates for some special cases. As an application of our problem setup, we study covert communication over channels with rewrite options, which are closely related to recording covert information on memory, and show that a positive covert rate can be achieved in such channels. As a special case of our problem, we study the AWGN channels and provide lower and upper bounds on the covert capacity that meet when the transmitter and the receiver share a secret key of sufficient rate and when the warden's channel is noisier than the legitimate receiver channel. As another application of our problem setup, we show that cooperation can lead to a positive covert rate in Gaussian channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 526,593
2301.05781 | Analysis of November 21, 2021, Kaua`i Island Power System 18-20 Hz Oscillations | This letter discusses the 18-20 Hz oscillation event at 05:30 am on November 21, 2021, in Kaua`i's power system following the trip of an oil power plant. As far as the authors are aware, this is the first report of a transmission system-wide subsynchronous oscillation driven by inverter-based resources (though the system in question is relatively small). In this letter, we leverage two data-based methods, the dissipating energy flow method and the sub/super-synchronous power flow method, to locate the sources of the oscillation. Also, we build an electromagnetic transient model of the Kaua`i power system and replay the 18-20 Hz oscillation. Finally, we propose two mitigation methods and validate their effectiveness via numerical simulation. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 340,442
2005.08625 | JointsGait: A model-based Gait Recognition Method based on Gait Graph Convolutional Networks and Joints Relationship Pyramid Mapping | Gait, as one of the unique biometric features, has the advantage of being recognizable from a long distance away and can be widely used in public security. Considering that 3D pose estimation is more challenging than 2D pose estimation in practice, we research using 2D joints to recognize gait in this paper, and a new model-based gait recognition method, JointsGait, is put forward to extract gait information from 2D human body joints. Appearance-based gait recognition algorithms were prevalent before. However, appearance features suffer from external factors which can cause drastic appearance variations, e.g. clothing. Unlike previous approaches, JointsGait first extracts spatio-temporal features from 2D joints using gait graph convolutional networks, which are less interfered with by external factors. Secondly, Joints Relationship Pyramid Mapping (JRPM) is proposed to map spatio-temporal gait features into a discriminative feature space with biological advantages according to the relationship of human joints when people are walking at various scales. Finally, we design a fusion loss strategy to help the joint features be insensitive to cross-view variations. Our method is evaluated on two large datasets, the Kinect Gait Biometry Dataset and CASIA-B. On the Kinect Gait Biometry Dataset, JointsGait only uses the corresponding 2D coordinates of joints, but achieves satisfactory recognition accuracy compared with model-based algorithms using 3D joints. On the CASIA-B database, the proposed method greatly outperforms advanced model-based methods in all walking conditions, and even performs better than state-of-the-art appearance-based methods when clothing seriously affects people's appearance. The experimental results demonstrate that JointsGait achieves state-of-the-art performance despite the low-dimensional features (2D body joints) and is less affected by view and clothing variations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 177,677
2303.13542 | OntoMath${}^{\mathbf{PRO}}$ 2.0 Ontology: Updates of the Formal Model | This paper is devoted to the problems of ontology-based mathematical knowledge management and representation. The main attention is paid to the development of a formal model for the representation of mathematical statements in the Open Linked Data cloud. The proposed model is intended for applications that extract mathematical facts from natural language mathematical texts and represent these facts as Linked Open Data. The model is used in the development of a new version of the OntoMath${}^{\mathrm{PRO}}$ ontology of professional mathematics, which is described here. OntoMath${}^{\mathrm{PRO}}$ underlies a semantic publishing platform that takes as input a collection of mathematical papers in LaTeX format and builds their ontology-based Linked Open Data representation. The semantic publishing platform, in turn, is a central component of the OntoMath digital ecosystem, an ecosystem of ontologies, text analytics tools, and applications for mathematical knowledge management, including semantic search for mathematical formulas and a recommender system for mathematical papers. According to the new model, the ontology is organized into three layers: a foundational ontology layer, a domain ontology layer and a linguistic layer. The domain ontology layer contains language-independent math concepts. The linguistic layer provides linguistic grounding for these concepts, and the foundational ontology layer provides them with meta-ontological annotations. The concepts are organized in two main hierarchies: the hierarchy of objects and the hierarchy of reified relationships. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 353,720
2107.09652 | Towards Privacy-preserving Explanations in Medical Image Analysis | The use of Deep Learning in the medical field is hindered by the lack of interpretability. Case-based interpretability strategies can provide intuitive explanations for deep learning models' decisions, thus, enhancing trust. However, the resulting explanations threaten patient privacy, motivating the development of privacy-preserving methods compatible with the specifics of medical data. In this work, we analyze existing privacy-preserving methods and their respective capacity to anonymize medical data while preserving disease-related semantic features. We find that the PPRL-VGAN deep learning method was the best at preserving the disease-related semantic features while guaranteeing a high level of privacy among the compared state-of-the-art methods. Nevertheless, we emphasize the need to improve privacy-preserving methods for medical imaging, as we identified relevant drawbacks in all existing privacy-preserving approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 247,092 |
1911.04192 | Keep it Consistent: Topic-Aware Storytelling from an Image Stream via Iterative Multi-agent Communication | Visual storytelling aims to generate a narrative paragraph from a sequence of images automatically. Existing approaches construct a text description independently for each image and roughly concatenate them as a story, which leads to the problem of generating semantically incoherent content. In this paper, we propose a new way for visual storytelling by introducing a topic description task to detect the global semantic context of an image stream. A story is then constructed with the guidance of the topic description. In order to combine the two generation tasks, we propose a multi-agent communication framework that regards the topic description generator and the story generator as two agents and learns them simultaneously via an iterative updating mechanism. We validate our approach on the VIST dataset, where quantitative results, ablations, and human evaluation demonstrate our method's ability to generate stories of higher quality compared to state-of-the-art methods. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 152,918
1209.1557 | Learning Model-Based Sparsity via Projected Gradient Descent | Several convex formulation methods have been proposed previously for statistical estimation with structured sparsity as the prior. These methods often require a carefully tuned regularization parameter, the tuning of which is often a cumbersome or heuristic exercise. Furthermore, the estimate that these methods produce might not belong to the desired sparsity model, albeit accurately approximating the true parameter. Therefore, greedy-type algorithms could often be more desirable in estimating structured-sparse parameters. So far, these greedy methods have mostly focused on linear statistical models. In this paper we study projected gradient descent with a non-convex structured-sparse parameter model as the constraint set. Should the cost function have a Stable Model-Restricted Hessian, the algorithm produces an approximation for the desired minimizer. As an example we elaborate on the application of the main results to estimation in Generalized Linear Models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 18,453
2404.08916 | Meply: A Large-scale Dataset and Baseline Evaluations for Metastatic Perirectal Lymph Node Detection and Segmentation | Accurate segmentation of metastatic lymph nodes in rectal cancer is crucial for the staging and treatment of rectal cancer. However, existing segmentation approaches face challenges due to the absence of pixel-level annotated datasets tailored for lymph nodes around the rectum. Additionally, metastatic lymph nodes are characterized by their relatively small size, irregular shapes, and lower contrast compared to the background, further complicating the segmentation task. To address these challenges, we present the first large-scale perirectal metastatic lymph node CT image dataset called Meply, which encompasses pixel-level annotations of 269 patients diagnosed with rectal cancer. Furthermore, we introduce a novel lymph-node segmentation model named CoSAM. The CoSAM utilizes sequence-based detection to guide the segmentation of metastatic lymph nodes in rectal cancer, contributing to improved localization performance for the segmentation model. It comprises three key components: sequence-based detection module, segmentation module, and collaborative convergence unit. To evaluate the effectiveness of CoSAM, we systematically compare its performance with several popular segmentation methods using the Meply dataset. Our code and dataset will be publicly available at: https://github.com/kanydao/CoSAM. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 446,458
2204.13710 | A Unified and Modular Model Predictive Control Framework for Soft Continuum Manipulators under Internal and External Constraints | Fluidically actuated soft robots have promising capabilities such as inherent compliance and user safety. The control of soft robots needs to properly handle nonlinear actuation dynamics, motion constraints, workspace limitations, and variable shape stiffness, so having a unique algorithm for all these issues would be extremely beneficial. In this work, we adapt Model Predictive Control (MPC), popular for rigid robots, to a soft robotic arm called SoPrA. We address the challenges that current control methods are facing by proposing a framework that handles them in a modular manner. While previous work focused on Joint-Space formulations, we show through simulation and experimental results that Task-Space MPC can be successfully implemented for dynamic soft robotic control. We provide a way to couple the Piece-wise Constant Curvature and Augmented Rigid Body Model assumptions with internal and external constraints and actuation dynamics, delivering an algorithm that unites these aspects and optimizes over them. We believe that an MPC implementation based on our approach could be the way to address most model-based soft robotics control issues within a unified and modular framework, while allowing the inclusion of improvements that usually belong to other control domains, such as machine learning techniques. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 293,913
1302.1561 | Structure and Parameter Learning for Causal Independence and Causal Interaction Models | This paper discusses causal independence models and a generalization of these models called causal interaction models. Causal interaction models are models that have independent mechanisms where a mechanism can have several causes. In addition to introducing several particular types of causal interaction models, we show how we can apply the Bayesian approach to learning causal interaction models obtaining approximate posterior distributions for the models and obtain MAP and ML estimates for the parameters. We illustrate the approach with a simulation study of learning model posteriors. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 21,862
2205.00334 | Engineering flexible machine learning systems by traversing functionally-invariant paths | Transformers have emerged as the state-of-the-art neural network architecture for natural language processing and computer vision. In the foundation model paradigm, large transformer models (BERT, GPT3/4, Bloom, ViT) are pre-trained on self-supervised tasks such as word or image masking, and then adapted through fine-tuning for downstream user applications including instruction following and Question Answering. While many approaches have been developed for model fine-tuning, including low-rank weight update strategies (e.g. LoRA), the underlying mathematical principles that enable network adaptation without knowledge loss remain poorly understood. Here, we introduce a differential geometry framework, functionally invariant paths (FIP), that provides flexible and continuous adaptation of neural networks for a range of machine learning goals and network sparsification objectives. We conceptualize the weight space of a neural network as a curved Riemannian manifold equipped with a metric tensor whose spectrum defines low-rank subspaces in weight space that accommodate network adaptation without loss of prior knowledge. We formalize adaptation as movement along a geodesic path in weight space while searching for networks that accommodate secondary objectives. With modest computational resources, the FIP algorithm achieves performance comparable to the state of the art on continual learning and sparsification tasks for language models (BERT), vision transformers (ViT, DeIT), and CNNs. Broadly, we conceptualize a neural network as a mathematical object that can be iteratively transformed into distinct configurations by the path-sampling algorithm to define a sub-manifold of weight space that can be harnessed to achieve user goals. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 294,217
2404.03368 | Graph Neural Networks for Electric and Hydraulic Data Fusion to Enhance Short-term Forecasting of Pumped-storage Hydroelectricity | Pumped-storage hydropower plants (PSH) actively participate in grid power-frequency control and therefore often operate under dynamic conditions, which results in rapidly varying system states. Predicting these dynamically changing states is essential for comprehending the underlying sensor and machine conditions. This understanding aids in detecting anomalies and faults, ensuring the reliable operation of the connected power grid, and in identifying faulty and miscalibrated sensors. PSH are complex, highly interconnected systems encompassing electrical and hydraulic subsystems, each characterized by their respective underlying networks that can individually be represented as graphs. To take advantage of this relational inductive bias, graph neural networks (GNNs) have been separately applied to state forecasting tasks in the individual subsystems, but without considering their interdependencies. In PSH, however, these subsystems depend on the same control input, making their operations highly interdependent and interconnected. Consequently, hydraulic and electrical sensor data should be fused across PSH subsystems to improve state forecasting accuracy. This approach has not been explored in GNN literature yet because many available PSH graphs are limited to their respective subsystem boundaries, which makes the method unsuitable to be applied directly. In this work, we introduce the application of spectral-temporal graph neural networks, which leverage self-attention mechanisms to concurrently capture and learn meaningful subsystem interdependencies and the dynamic patterns observed in electric and hydraulic sensors. Our method effectively fuses data from the PSH's subsystems by operating on a unified, system-wide graph, learned directly from the data. This approach leads to demonstrably improved state forecasting performance and enhanced generalizability. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 444,230 |
1512.01872 | Driverseat: Crowdstrapping Learning Tasks for Autonomous Driving | While emerging deep-learning systems have outclassed knowledge-based approaches in many tasks, their application to detection tasks for autonomous technologies remains an open field for scientific exploration. Broadly, there are two major developmental bottlenecks: the unavailability of comprehensively labeled datasets and of expressive evaluation strategies. Approaches for labeling datasets have relied on intensive hand-engineering, and strategies for evaluating learning systems have been unable to identify failure-case scenarios. Human intelligence offers an untapped approach for breaking through these bottlenecks. This paper introduces Driverseat, a technology for embedding crowds around learning systems for autonomous driving. Driverseat utilizes crowd contributions for (a) collecting complex 3D labels and (b) tagging diverse scenarios for ready evaluation of learning systems. We demonstrate how Driverseat can crowdstrap a convolutional neural network on the lane-detection task. More generally, crowdstrapping introduces a valuable paradigm for any technology that can benefit from leveraging the powerful combination of human and computer intelligence. | true | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 49,876 |
2012.01019 | CORRIDRONE: Corridors for Drones, An Adaptive On-Demand Multi-Lane Design and Testbed | In this article, a novel drone skyway framework called CORRIDRONE is proposed. As the name suggests, this represents virtual air corridors for point-to-point safe passage of multiple drones. The corridors are not permanent but can be set up on demand. A few such scenarios could be those in warehouse/factory floors, package delivery, shore-to-ship delivery, border patrol, etc. Several factors play major roles in the planning and design of such aerial passages. The proposed framework includes many novel features which aid safe and efficient integration of UAVs into the airspace with already available technologies. A several kilometres long test bed is proposed to be set-up at the 1500 acres Challekere campus of Indian Institute of Science, in the state of Karnataka, to design and test the infrastructure required for CORRIDRONE. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 209,305 |
2301.06187 | CNN-Based Action Recognition and Pose Estimation for Classifying Animal Behavior from Videos: A Survey | Classifying the behavior of humans or animals from videos is important in biomedical fields for understanding brain function and response to stimuli. Action recognition, classifying activities performed by one or more subjects in a trimmed video, forms the basis of many of these techniques. Deep learning models for human action recognition have progressed significantly over the last decade. Recently, there is an increased interest in research that incorporates deep learning-based action recognition for animal behavior classification. However, human action recognition methods are more developed. This survey presents an overview of human action recognition and pose estimation methods that are based on convolutional neural network (CNN) architectures and have been adapted for animal behavior classification in neuroscience. Pose estimation, estimating joint positions from an image frame, is included because it is often applied before classifying animal behavior. First, we provide foundational information on algorithms that learn spatiotemporal features through 2D, two-stream, and 3D CNNs. We explore motivating factors that determine optimizers, loss functions and training procedures, and compare their performance on benchmark datasets. Next, we review animal behavior frameworks that use or build upon these methods, organized by the level of supervision they require. Our discussion is uniquely focused on the technical evolution of the underlying CNN models and their architectural adaptations (which we illustrate), rather than their usability in a neuroscience lab. We conclude by discussing open research problems, and possible research directions. Our survey is designed to be a resource for researchers developing fully unsupervised animal behavior classification systems of which there are only a few examples in the literature. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 340,575 |
1111.1648 | Sentiment Analysis of Document Based on Annotation | I present a tool which assesses the quality or usefulness of a document based on its annotations. Annotations may include comments, notes, observations, highlights, underlines, explanations, questions, help, etc. Comments are used for evaluative purposes, while the others are used for summarization or expansion. Further, these comments may themselves be on another annotation; such annotations are referred to as meta-annotations. Not all annotations get equal weightage. My tool considers highlights and underlines as well as comments to infer the collective sentiment of annotators. Collective sentiments of annotators are classified as positive, negative, or objective. My tool computes the collective sentiment of annotations in two manners: it counts all the annotations present on the document, and it also computes sentiment scores of all annotations, including comments, to obtain the collective sentiment about the document or to judge the quality of the document. I demonstrate the use of the tool on a research paper. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 12,944 |
2105.06599 | TriPose: A Weakly-Supervised 3D Human Pose Estimation via Triangulation from Video | Estimating 3D human poses from video is a challenging problem. The lack of 3D human pose annotations is a major obstacle for supervised training and for generalization to unseen datasets. In this work, we address this problem by proposing a weakly-supervised training scheme that does not require 3D annotations or calibrated cameras. The proposed method relies on temporal information and triangulation. Using 2D poses from multiple views as the input, we first estimate the relative camera orientations and then generate 3D poses via triangulation. The triangulation is only applied to the views with high 2D human joint confidence. The generated 3D poses are then used to train a recurrent lifting network (RLN) that estimates 3D poses from 2D poses. We further apply a multi-view re-projection loss to the estimated 3D poses and enforce the 3D poses estimated from multi-views to be consistent. Therefore, our method relaxes the constraints in practice, only multi-view videos are required for training, and is thus convenient for in-the-wild settings. At inference, RLN merely requires single-view videos. The proposed method outperforms previous works on two challenging datasets, Human3.6M and MPI-INF-3DHP. Codes and pretrained models will be publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 235,175 |
2308.12043 | IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning | With the increasing size of pre-trained language models (PLMs), fine-tuning all the parameters in the model is not efficient, especially when there are a large number of downstream tasks, which incur significant training and storage costs. Many parameter-efficient fine-tuning (PEFT) approaches have been proposed, among which, Low-Rank Adaptation (LoRA) is a representative approach that injects trainable rank decomposition matrices into every target module. Yet LoRA ignores the importance of parameters in different modules. To address this problem, many works have been proposed to prune the parameters of LoRA. However, under limited training conditions, the upper bound of the rank of the pruned parameter matrix is still affected by the preset values. We, therefore, propose IncreLoRA, an incremental parameter allocation method that adaptively adds trainable parameters during training based on the importance scores of each module. This approach is different from the pruning method as it is not limited by the initial number of training parameters, and each parameter matrix has a higher rank upper bound for the same training overhead. We conduct extensive experiments on GLUE to demonstrate the effectiveness of IncreLoRA. The results show that our method owns higher parameter efficiency, especially when under the low-resource settings where our method significantly outperforms the baselines. Our code is publicly available. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 387,393 |
2501.11855 | A New Construction Structure on Coded Caching with Linear Subpacketization: Non-Half-Sum Disjoint Packing | Coded caching is a promising technique to effectively reduce peak traffic by using local caches and the multicast gains generated by these local caches. We prefer to design a coded caching scheme with the subpacketization $F$ and transmission load $R$ as small as possible since these are the key metrics for evaluating the implementation complexity and transmission efficiency of the scheme, respectively. However, most of the existing coded caching schemes have large subpacketizations which grow exponentially with the number of users $K$, and there are a few schemes with linear subpacketizations which have large transmission loads. In this paper, we focus on studying the linear subpacketization, i.e., $K=F$, coded caching scheme with low transmission load. Specifically, we first introduce a new combinatorial structure called non-half-sum disjoint packing (NHSDP) which can be used to generate a coded caching scheme with $K=F$. Then a class of new schemes is obtained by constructing NHSDP. Theoretical and numerical comparisons show that (i) compared to the existing schemes with linear subpacketization (to the number of users), the proposed scheme achieves a lower load; (ii) compared to some existing schemes with polynomial subpacketization, the proposed scheme can also achieve a lower load in some cases; (iii) compared to some existing schemes with exponential subpacketization, the proposed scheme has loads close to those of these schemes in some cases. Moreover, the new concept of NHSDP is closely related to the classical combinatorial structures such as cyclic difference packing (CDP), non-three-term arithmetic progressions (NTAP), and perfect hash family (PHF). These connections indicate that NHSDP is an important combinatorial structure in the field of combinatorial design. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 526,066 |
1908.05817 | An Analytical Probabilistic Expression for Modeling Sum of Spatial-dependent Wind Power Output | Applying probability-related knowledge to accurately explore and exploit the inherent uncertainty of wind power output is one of the key issues that need to be solved urgently in the development of smart grid. This letter develops an analytical probabilistic expression for modeling sum of spatial-dependent wind farm power output through introducing unit impulse function, copulas, and Gaussian mixture model. A comparative Monte Carlo sampling study is given to illustrate the validity of the proposed model. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 141,819 |
1008.1427 | Optimal Feedback Systems with Analogue Adaptive Transmitters | The paper presents an original approach to concurrent optimization of the transmitting and receiving parts of adaptive communication systems (CS) with feedback channels. The results of the research show a possibility and the way of designing systems transmitting signals with a bit rate equal to the capacity of the forward channel under a given bit-error rate (BER). The results of this work can be used for the design of different classes of high-efficient low energy/size/cost CS, as well as allow further development and extension. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,227 |
2007.09595 | Risk-aware Path and Motion Planning for a Tethered Aerial Visual Assistant in Unstructured or Confined Environments | This research aims at developing path and motion planning algorithms for a tethered Unmanned Aerial Vehicle (UAV) to visually assist a teleoperated primary robot in unstructured or confined environments. The emerging state of the practice for nuclear operations, bomb squad, disaster robots, and other domains with novel tasks or highly occluded environments is to use two robots, a primary and a secondary that acts as a visual assistant to overcome the perceptual limitations of the sensors by providing an external viewpoint. However, the benefits of using an assistant have been limited for at least three reasons: (1) users tend to choose suboptimal viewpoints, (2) only ground robot assistants are considered, ignoring the rapid evolution of small unmanned aerial systems for indoor flying, (3) introducing a whole crew for the second teleoperated robot is not cost effective, may introduce further teamwork demands, and therefore could lead to miscommunication. This dissertation proposes to use an autonomous tethered aerial visual assistant to replace the secondary robot and its operating crew. Along with a pre-established theory of viewpoint quality based on affordances, this dissertation aims at defining and representing robot motion risk in unstructured or confined environments. Based on those theories, a novel high level path planning algorithm is developed to enable risk-aware planning, which balances the tradeoff between viewpoint quality and motion risk in order to provide safe and trustworthy visual assistance flight. The planned flight trajectory is then realized on a tethered UAV platform. The perception and actuation are tailored to fit the tethered agent in the form of a low level motion suite, including a novel tether-based localization model with negligible computational overhead, motion primitives for the tethered airframe based on position and velocity control, and two different | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 188,012 |
2403.02786 | Semi-Supervised Graph Representation Learning with Human-centric Explanation for Predicting Fatty Liver Disease | Addressing the challenge of limited labeled data in clinical settings, particularly in the prediction of fatty liver disease, this study explores the potential of graph representation learning within a semi-supervised learning framework. Leveraging graph neural networks (GNNs), our approach constructs a subject similarity graph to identify risk patterns from health checkup data. The effectiveness of various GNN approaches in this context is demonstrated, even with minimal labeled samples. Central to our methodology is the inclusion of human-centric explanations through explainable GNNs, providing personalized feature importance scores for enhanced interpretability and clinical relevance, thereby underscoring the potential of our approach in advancing healthcare practices with a keen focus on graph representation learning and human-centric explanation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 434,949 |
1806.09618 | Learning-based Feedback Controller for Deformable Object Manipulation | In this paper, we present a general learning-based framework to automatically visual-servo control the position and shape of a deformable object with unknown deformation parameters. The servo-control is accomplished by learning a feedback controller that determines the robotic end-effector's movement according to the deformable object's current status. This status encodes the object's deformation behavior by using a set of observed visual features, which are either manually designed or automatically extracted from the robot's sensor stream. A feedback control policy is then optimized to push the object toward a desired featured status efficiently. The feedback policy can be learned either online or offline. Our online policy learning is based on the Gaussian Process Regression (GPR), which can achieve fast and accurate manipulation and is robust to small perturbations. An offline imitation learning framework is also proposed to achieve a control policy that is robust to large perturbations in the human-robot interaction. We validate the performance of our controller on a set of deformable object manipulation tasks and demonstrate that our method can achieve effective and accurate servo-control for general deformable objects with a wide variety of goal settings. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 101,387 |
2408.15198 | Automatic 8-tissue Segmentation for 6-month Infant Brains | Numerous studies have highlighted that atypical brain development, particularly during infancy and toddlerhood, is linked to an increased likelihood of being diagnosed with a neurodevelopmental condition, such as autism. Accurate brain tissue segmentations for morphological analysis are essential in numerous infant studies. However, due to ongoing white matter (WM) myelination changing tissue contrast in T1- and T2-weighted images, automatic tissue segmentation in 6-month infants is particularly difficult. On the other hand, manual labelling by experts is time-consuming and labor-intensive. In this study, we propose the first 8-tissue segmentation pipeline for six-month-old infant brains. This pipeline utilizes domain adaptation (DA) techniques to leverage our longitudinal data, including neonatal images segmented with the neonatal Developing Human Connectome Project structural pipeline. Our pipeline takes raw 6-month images as inputs and generates the 8-tissue segmentation as outputs, forming an end-to-end segmentation pipeline. The segmented tissues include WM, gray matter (GM), cerebrospinal fluid (CSF), ventricles, cerebellum, basal ganglia, brainstem, and hippocampus/amygdala. Cycle-Consistent Generative Adversarial Network (CycleGAN) and Attention U-Net were employed to achieve the image contrast transformation between neonatal and 6-month images and perform tissue segmentation on the synthesized 6-month images (neonatal images with 6-month intensity contrast), respectively. Moreover, we incorporated the segmentation outputs from Infant Brain Extraction and Analysis Toolbox (iBEAT) and another Attention U-Net to further enhance the performance and construct the end-to-end segmentation pipeline. Our evaluation with real 6-month images achieved a DICE score of 0.92, an HD95 of 1.6, and an ASSD of 0.42. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 483,833 |
2412.04882 | Nonmyopic Global Optimisation via Approximate Dynamic Programming | Unconstrained global optimisation aims to optimise expensive-to-evaluate black-box functions without gradient information. Bayesian optimisation, one of the most well-known techniques, typically employs Gaussian processes as surrogate models, leveraging their probabilistic nature to balance exploration and exploitation. However, Gaussian processes become computationally prohibitive in high-dimensional spaces. Recent alternatives, based on inverse distance weighting (IDW) and radial basis functions (RBFs), offer competitive, computationally lighter solutions. Despite their efficiency, both traditional global and Bayesian optimisation strategies suffer from the myopic nature of their acquisition functions, which focus solely on immediate improvement neglecting future implications of the sequential decision making process. Nonmyopic acquisition functions devised for the Bayesian setting have shown promise in improving long-term performance. Yet, their use in deterministic strategies with IDW and RBF remains unexplored. In this work, we introduce novel nonmyopic acquisition strategies tailored to IDW- and RBF-based global optimisation. Specifically, we develop dynamic programming-based paradigms, including rollout and multi-step scenario-based optimisation schemes, to enable lookahead acquisition. These methods optimise a sequence of query points over a horizon (instead of only at the next step) by predicting the evolution of the surrogate model, inherently managing the exploration-exploitation trade-off in a systematic way via optimisation techniques. The proposed approach represents a significant advance in extending nonmyopic acquisition principles, previously confined to Bayesian optimisation, to the deterministic framework. Empirical results on synthetic and hyperparameter tuning benchmark problems demonstrate that these nonmyopic methods outperform conventional myopic approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 514,613 |
2310.00935 | Resolving Knowledge Conflicts in Large Language Models | Large language models (LLMs) often encounter knowledge conflicts, scenarios where discrepancy arises between the internal parametric knowledge of LLMs and non-parametric information provided in the prompt context. In this work we ask what are the desiderata for LLMs when a knowledge conflict arises and whether existing LLMs fulfill them. We posit that LLMs should 1) identify knowledge conflicts, 2) pinpoint conflicting information segments, and 3) provide distinct answers or viewpoints in conflicting scenarios. To this end, we introduce an evaluation framework for simulating contextual knowledge conflicts and quantitatively evaluating to what extent LLMs achieve these goals. It includes diverse and complex situations of knowledge conflict, knowledge from diverse entities and domains, two synthetic conflict creation methods, and settings with progressively increasing difficulty to reflect realistic knowledge conflicts. Extensive experiments with the framework reveal that while LLMs perform well in identifying the existence of knowledge conflicts, they struggle to determine the specific conflicting knowledge and produce a response with distinct answers amidst conflicting information. To address these challenges, we propose new instruction-based approaches that augment LLMs to better achieve the three goals. Further analysis shows that abilities to tackle knowledge conflicts are greatly impacted by factors such as knowledge domain, while generating robust responses to knowledge conflict scenarios remains an open research question. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 396,215 |
2403.13409 | Influence of concentration-dependent material properties on the fracture and debonding of electrode particles with core-shell structure | Core-shell electrode particle designs offer a route to improved lithium-ion battery performance. However, they are susceptible to mechanical damage such as fracture and debonding, which can significantly reduce their lifetime. Using a coupled finite element model, we explore the impacts of diffusion-induced stresses on the failure mechanisms of an exemplar system with an NMC811 core and an NMC111 shell. In particular, we systematically compare the implications of assuming constant material properties against using Li concentration-dependent diffusion coefficient and partial molar volume. With constant material properties, our results show that smaller cores with thinner shells avoid debonding and fracture regimes. When factoring in a concentration-dependent partial molar volume, the maximum values of tensile hoop stress in the shell are found to be significantly lower than those predicted with constant properties, reducing the likelihood of fracture. Furthermore, with a concentration-dependent diffusion coefficient, significant barriers to full electrode utilisation are observed due to reduced lithium mobility at high states of lithiation. This provides a possible explanation for the reduced accessible capacity observed in experiments. Shell thickness is found to be the dominant factor in precluding structural integrity once the concentration dependency is accounted for. These findings shed new light on the performance and effective design of core-shell electrode particles. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 439,626 |
2402.09247 | Momentum Approximation in Asynchronous Private Federated Learning | Asynchronous protocols have been shown to improve the scalability of federated learning (FL) with a massive number of clients. Meanwhile, momentum-based methods can achieve the best model quality in synchronous FL. However, naively applying momentum in asynchronous FL algorithms leads to slower convergence and degraded model performance. It is still unclear how to effectively combine these two techniques to achieve a win-win. In this paper, we find that asynchrony introduces implicit bias to momentum updates. In order to address this problem, we propose momentum approximation that minimizes the bias by finding an optimal weighted average of all historical model updates. Momentum approximation is compatible with secure aggregation as well as differential privacy, and can be easily integrated in production FL systems with a minor communication and storage cost. We empirically demonstrate that on benchmark FL datasets, momentum approximation can achieve $1.15 \textrm{--}4\times$ speed up in convergence compared to naively combining asynchronous FL with momentum. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 429,434 |
2405.12589 | An Improved Robust Total Logistic Distance Metric algorithm for Generalized Gaussian Noise and Noisy Input | Although the known maximum total generalized correntropy (MTGC) and generalized maximum Blake-Zisserman total correntropy (GMBZTC) algorithms can maintain good performance under the errors-in-variables (EIV) model disrupted by generalized Gaussian noise, their requirement for manual adjustment of parameters is excessive, greatly increasing the practical difficulty of use. To solve this problem, the total arctangent based on logistic distance metric (TACLDM) algorithm is proposed by utilizing the advantage of few parameters in logistic distance metric (LDM) theory, and the convergence behavior is improved by the arctangent function. Compared with other competing algorithms, the TACLDM algorithm not only has fewer parameters, but also has better robustness to generalized Gaussian noise and significantly reduces the steady-state error. Furthermore, the behavior of the algorithm in the generalized Gaussian noise environment is analyzed in detail in this paper. Finally, computer simulations demonstrate the outstanding performance of the TACLDM algorithm and the rigorous theoretical deduction in this paper. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 455,583 |
1412.2391 | Multihop Caching-Aided Coded Multicasting for the Next Generation of Cellular Networks | Next generation of cellular networks deploying wireless distributed femtocaching infrastructure proposed by Golrezaei et al. are studied. By taking advantage of multihop communications in each cell, the number of required femtocaching helpers is significantly reduced. This reduction of femtocaches is achieved by using the underutilized storage and communication capabilities in the User Terminals (UTs), which results in reducing the deployment costs of distributed femtocaches. A multihop index coding technique is proposed to code the cached contents in helpers to achieve order optimal capacity gains. This can serve as an efficient content delivery algorithm for the solution provided by Golrezaei et al. As an example, we consider a wireless cellular system in which contents have a popularity distribution. It has been shown that if the contents follow a high content reuse popularity distribution, our approach can replace many unicast communications with multicast communication. We will prove that simple linear index codes found by heuristics based on graph coloring algorithms can achieve order optimal capacity under Zipfian content popularity distribution. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 38,201 |
2402.01965 | Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization | Diffusion models are gaining widespread use in cutting-edge image, video, and audio generation. Score-based diffusion models stand out among these methods, necessitating the estimation of score function of the input data distribution. In this study, we present a theoretical framework to analyze two-layer neural network-based diffusion models by reframing score matching and denoising score matching as convex optimization. We prove that training shallow neural networks for score prediction can be done by solving a single convex program. Although most analyses of diffusion models operate in the asymptotic setting or rely on approximations, we characterize the exact predicted score function and establish convergence results for neural network-based diffusion models with finite data. Our results provide a precise characterization of what neural network-based diffusion models learn in non-asymptotic settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 426,306 |
2212.04022 | RLSEP: Learning Label Ranks for Multi-label Classification | Multi-label ranking maps instances to a ranked set of predicted labels from multiple possible classes. The ranking approach for multi-label learning problems received attention for its success in multi-label classification, with one of the well-known approaches being pairwise label ranking. However, most existing methods assume that only partial information about the preference relation is known, which is inferred from the partition of labels into a positive and negative set, then treat labels with equal importance. In this paper, we focus on the unique challenge of ranking when the order of the true label set is provided. We propose a novel dedicated loss function to optimize models by incorporating penalties for incorrectly ranked pairs, and make use of the ranking information present in the input. Our method achieves the best reported performance measures on both synthetic and real world ranked datasets and shows improvements on overall ranking of labels. Our experimental results demonstrate that our approach is generalizable to a variety of multi-label classification and ranking tasks, while revealing a calibration towards a certain ranking ordering. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 335,294 |
2303.09870 | TeSLA: Test-Time Self-Learning With Automatic Adversarial Augmentation | Most recent test-time adaptation methods focus on only classification tasks, use specialized network architectures, destroy model calibration or rely on lightweight information from the source domain. To tackle these issues, this paper proposes a novel Test-time Self-Learning method with automatic Adversarial augmentation dubbed TeSLA for adapting a pre-trained source model to the unlabeled streaming test data. In contrast to conventional self-learning methods based on cross-entropy, we introduce a new test-time loss function through an implicitly tight connection with the mutual information and online knowledge distillation. Furthermore, we propose a learnable efficient adversarial augmentation module that further enhances online knowledge distillation by simulating high entropy augmented images. Our method achieves state-of-the-art classification and segmentation results on several benchmarks and types of domain shifts, particularly on challenging measurement shifts of medical images. TeSLA also benefits from several desirable properties compared to competing methods in terms of calibration, uncertainty metrics, insensitivity to model architectures, and source training strategies, all supported by extensive ablations. Our code and models are available on GitHub. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 352,225 |
2208.12853 | Domain Adaptation with Adversarial Training on Penultimate Activations | Enhancing model prediction confidence on target data is an important objective in Unsupervised Domain Adaptation (UDA). In this paper, we explore adversarial training on penultimate activations, i.e., input features of the final linear classification layer. We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features, as used in previous works. Furthermore, with activation normalization commonly used in domain adaptation to reduce domain gap, we derive two variants and systematically analyze the effects of normalization on our adversarial training. This is illustrated both in theory and through empirical analysis on real adaptation tasks. Extensive experiments are conducted on popular UDA benchmarks under both standard setting and source-data free setting. The results validate that our method achieves the best scores against previous arts. Code is available at https://github.com/tsun/APA. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 314,872 |
2502.09104 | One-shot Federated Learning Methods: A Practical Guide | One-shot Federated Learning (OFL) is a distributed machine learning paradigm that constrains client-server communication to a single round, addressing privacy and communication overhead issues associated with multiple rounds of data exchange in traditional Federated Learning (FL). OFL demonstrates the practical potential for integration with future approaches that require collaborative training models, such as large language models (LLMs). However, current OFL methods face two major challenges: data heterogeneity and model heterogeneity, which result in subpar performance compared to conventional FL methods. Worse still, despite numerous studies addressing these limitations, a comprehensive summary is still lacking. To address these gaps, this paper presents a systematic analysis of the challenges faced by OFL and thoroughly reviews the current methods. We also offer an innovative categorization method and analyze the trade-offs of various techniques. Additionally, we discuss the most promising future directions and the technologies that should be integrated into the OFL field. This work aims to provide guidance and insights for future research. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 533,310 |
2412.18735 | Adaptive Self-supervised Learning for Social Recommendations | In recent years, researchers have attempted to exploit social relations to improve the performance in recommendation systems. Generally, most existing social recommendation methods heavily depend on substantial domain knowledge and expertise in primary recommendation tasks for designing useful auxiliary tasks. Meanwhile, Self-Supervised Learning (SSL) recently has received considerable attention in the field of recommendation, since it can provide self-supervision signals in assisting the improvement of target recommendation systems by constructing self-supervised auxiliary tasks from raw data without human-annotated labels. Despite the great success, these SSL-based social recommendations are insufficient to adaptively balance various self-supervised auxiliary tasks, since assigning equal weights on various auxiliary tasks can result in sub-optimal recommendation performance, where different self-supervised auxiliary tasks may contribute differently to improving the primary social recommendation across different datasets. To address this issue, in this work, we propose Adaptive Self-supervised Learning for Social Recommendations (AdasRec) by taking advantage of various self-supervised auxiliary tasks. More specifically, an adaptive weighting mechanism is proposed to learn adaptive weights for various self-supervised auxiliary tasks, so as to balance the contribution of such self-supervised auxiliary tasks for enhancing representation learning in social recommendations. The adaptive weighting mechanism is used to assign different weights on auxiliary tasks to achieve an overall weighting of the entire auxiliary tasks and ultimately assist the primary recommendation task, achieved by a meta learning optimization problem with an adaptive weighting network. Comprehensive experiments on various real-world datasets are constructed to verify the effectiveness of our proposed method. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 520,549
1806.04533 | Cross-dataset Person Re-Identification Using Similarity Preserved Generative Adversarial Networks | Person re-identification (Re-ID) aims to match the image frames which contain the same person in the surveillance videos. Most of the Re-ID algorithms conduct supervised training in some small labeled datasets, so directly deploying these trained models to the real-world large camera networks may lead to a poor performance due to underfitting. The significant difference between the source training dataset and the target testing dataset makes it challenging to incrementally optimize the model. To address this challenge, we propose a novel solution by transforming the unlabeled images in the target domain to fit the original classifier by using our proposed similarity preserved generative adversarial networks model, SimPGAN. Specifically, SimPGAN adopts the generative adversarial networks with the cycle consistency constraint to transform the unlabeled images in the target domain to the style of the source domain. Meanwhile, SimPGAN uses the similarity consistency loss, which is measured by a siamese deep convolutional neural network, to preserve the similarity of the transformed images of the same person. Comprehensive experiments based on multiple real surveillance datasets are conducted, and the results show that our algorithm is better than the state-of-the-art cross-dataset unsupervised person Re-ID algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 100,258
2111.02038 | Fair-SSL: Building fair ML Software with less data | Ethical bias in machine learning models has become a matter of concern in the software engineering community. Most of the prior software engineering works concentrated on finding ethical bias in models rather than fixing it. After finding bias, the next step is mitigation. Prior researchers mainly tried to use supervised approaches to achieve fairness. However, in the real world, getting data with trustworthy ground truth is challenging and also ground truth can contain human bias. Semi-supervised learning is a machine learning technique where, incrementally, labeled data is used to generate pseudo-labels for the rest of the data (and then all that data is used for model training). In this work, we apply four popular semi-supervised techniques as pseudo-labelers to create fair classification models. Our framework, Fair-SSL, takes a very small amount (10%) of labeled data as input and generates pseudo-labels for the unlabeled data. We then synthetically generate new data points to balance the training data based on class and protected attribute as proposed by Chakraborty et al. in FSE 2021. Finally, the classification model is trained on the balanced pseudo-labeled data and validated on test data. After experimenting on ten datasets and three learners, we find that Fair-SSL achieves similar performance as three state-of-the-art bias mitigation algorithms. That said, the clear advantage of Fair-SSL is that it requires only 10% of the labeled training data. To the best of our knowledge, this is the first SE work where semi-supervised techniques are used to fight against ethical bias in SE ML models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 264,742 |
2207.04631 | Partial Resampling of Imbalanced Data | Imbalanced data is a frequently encountered problem in machine learning. Despite a vast amount of literature on sampling techniques for imbalanced data, there is a limited number of studies that address the issue of the optimal sampling ratio. In this paper, we attempt to fill the gap in the literature by conducting a large scale study of the effects of sampling ratio on classification accuracy. We consider 10 popular sampling methods and evaluate their performance over a range of ratios based on 20 datasets. The results of the numerical experiments suggest that the optimal sampling ratio is between 0.7 and 0.8 albeit the exact ratio varies depending on the dataset. Furthermore, we find that while factors such as the original imbalance ratio or the number of features do not play a discernible role in determining the optimal ratio, the number of samples in the dataset may have a tangible effect. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 307,260
2108.13987 | OARnet: Automated organs-at-risk delineation in Head and Neck CT images | A 3D deep learning model (OARnet) is developed and used to delineate 28 H&N OARs on CT images. OARnet utilizes a densely connected network to detect the OAR bounding-box, then delineates the OAR within the box. It reuses information from any layer to subsequent layers and uses skip connections to combine information from different dense block levels to progressively improve delineation accuracy. Training uses up to 28 expert manual delineated (MD) OARs from 165 CTs. Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95) with respect to MD are assessed for 70 other CTs. Mean, maximum, and root-mean-square dose differences with respect to MD are assessed for 56 of the 70 CTs. OARnet is compared with UaNet, AnatomyNet, and Multi-Atlas Segmentation (MAS). Wilcoxon signed-rank tests using 95% confidence intervals are used to assess significance. Wilcoxon signed-rank tests show that, compared with UaNet, OARnet improves (p<0.05) the DSC (23/28 OARs) and HD95 (17/28). OARnet outperforms both AnatomyNet and MAS for DSC (28/28) and HD95 (27/28). Compared with UaNet, OARnet improves median DSC up to 0.05 and HD95 up to 1.5mm. Compared with AnatomyNet and MAS, OARnet improves median (DSC, HD95) by up to (0.08, 2.7mm) and (0.17, 6.3mm). Dosimetrically, OARnet outperforms UaNet (Dmax 7/28; Dmean 10/28), AnatomyNet (Dmax 21/28; Dmean 24/28), and MAS (Dmax 22/28; Dmean 21/28). The DenseNet architecture is optimized using a hybrid approach that performs OAR-specific bounding box detection followed by feature recognition. Compared with other auto-delineation methods, OARnet is better than or equal to UaNet for all but one geometric (Temporal Lobe L, HD95) and one dosimetric (Eye L, mean dose) endpoint for the 28 H&N OARs, and is better than or equal to both AnatomyNet and MAS for all OARs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 252,969
2302.14031 | Proof-of-Contribution-Based Design for Collaborative Machine Learning on Blockchain | We consider a project (model) owner that would like to train a model by utilizing the local private data and compute power of interested data owners, i.e., trainers. Our goal is to design a data marketplace for such decentralized collaborative/federated learning applications that simultaneously provides i) proof-of-contribution based reward allocation so that the trainers are compensated based on their contributions to the trained model; ii) privacy-preserving decentralized model training by avoiding any data movement from data owners; iii) robustness against malicious parties (e.g., trainers aiming to poison the model); iv) verifiability in the sense that the integrity, i.e., correctness, of all computations in the data market protocol including contribution assessment and outlier detection are verifiable through zero-knowledge proofs; and v) efficient and universal design. We propose a blockchain-based marketplace design to achieve all five objectives mentioned above. In our design, we utilize a distributed storage infrastructure and an aggregator aside from the project owner and the trainers. The aggregator is a processing node that performs certain computations, including assessing trainer contributions, removing outliers, and updating hyper-parameters. We execute the proposed data market through a blockchain smart contract. The deployed smart contract ensures that the project owner cannot evade payment, and honest trainers are rewarded based on their contributions at the end of training. Finally, we implement the building blocks of the proposed data market and demonstrate their applicability in practical scenarios through extensive experiments. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 348,132
2203.00553 | Global-Local Regularization Via Distributional Robustness | Despite superior performance in many situations, deep neural networks are often vulnerable to adversarial examples and distribution shifts, limiting model generalization ability in real-world applications. To alleviate these problems, recent approaches leverage distributional robustness optimization (DRO) to find the most challenging distribution, and then minimize loss function over this most challenging distribution. Regardless of achieving some improvements, these DRO approaches have some obvious limitations. First, they purely focus on local regularization to strengthen model robustness, missing a global regularization effect which is useful in many real-world applications (e.g., domain adaptation, domain generalization, and adversarial machine learning). Second, the loss functions in the existing DRO approaches operate in only the most challenging distribution, hence decouple with the original distribution, leading to a restrictive modeling capability. In this paper, we propose a novel regularization technique, following the veins of Wasserstein-based DRO framework. Specifically, we define a particular joint distribution and Wasserstein-based uncertainty, allowing us to couple the original and most challenging distributions for enhancing modeling capability and applying both local and global regularizations. Empirical studies on different learning problems demonstrate that our proposed approach significantly outperforms the existing regularization approaches in various domains: semi-supervised learning, domain adaptation, domain generalization, and adversarial machine learning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 283,047 |
1710.03190 | Estimating Heterogeneous Treatment Effects in Residential Demand Response | We evaluate the causal effect of hour-ahead price interventions on the reduction in residential electricity consumption using a data set from a large-scale experiment on 7,000 households in California. By estimating user-level counterfactuals using time-series prediction, we estimate an average treatment effect of ~0.10 kWh (11%) per intervention and household. Next, we leverage causal decision trees to detect treatment effect heterogeneity across users by incorporating census data. These decision trees depart from classification and regression trees, as we intend to estimate a causal effect between treated and control units rather than perform outcome regression. We compare the performance of causal decision trees with a simpler, yet more inaccurate k-means clustering approach that naively detects heterogeneity in the feature space, confirming the superiority of causal decision trees. Lastly, we comment on how our methods to detect heterogeneity can be used for targeting households to improve cost efficiency. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 82,290
2106.10034 | Synergetic UAV-RIS Communication with Highly Directional Transmission | The effective integration of unmanned aerial vehicles (UAVs) in future wireless communication systems depends on the conscious use of their limited energy, which constrains their flight time. Reconfigurable intelligent surfaces (RISs) can be used in combination with UAVs with the aim to improve the communication performance without increasing complexity at the UAV side. In this paper, we propose a synergetic UAV-RIS communication system, utilizing a UAV with a highly directional antenna aimed at the RIS. The proposed scenario can be applied in all air-to-ground RIS-assisted networks, and numerical results illustrate that it is superior to the cases where the UAV utilizes either an omnidirectional antenna or a highly directional antenna aiming towards the ground node. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 241,877
2407.15321 | Hierarchical Homogeneity-Based Superpixel Segmentation: Application to Hyperspectral Image Analysis | Hyperspectral image (HI) analysis approaches have recently become increasingly complex and sophisticated. Recently, the combination of spectral-spatial information and superpixel techniques has addressed some hyperspectral data issues, such as the higher spatial variability of spectral signatures and dimensionality of the data. However, most existing superpixel approaches do not account for specific HI characteristics resulting from its high spectral dimension. In this work, we propose a multiscale superpixel method that is computationally efficient for processing hyperspectral data. The Simple Linear Iterative Clustering (SLIC) oversegmentation algorithm, on which the technique is based, has been extended hierarchically. Using a novel robust homogeneity testing, the proposed hierarchical approach leads to superpixels of variable sizes but with higher spectral homogeneity when compared to the classical SLIC segmentation. For validation, the proposed homogeneity-based hierarchical method was applied as a preprocessing step in the spectral unmixing and classification tasks carried out using, respectively, the Multiscale sparse Unmixing Algorithm (MUA) and the CNN-Enhanced Graph Convolutional Network (CEGCN) methods. Simulation results with both synthetic and real data show that the technique is competitive with state-of-the-art solutions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 475,131
1809.08881 | Vision-based Control of a Quadrotor in User Proximity: Mediated vs End-to-End Learning Approaches | We consider the task of controlling a quadrotor to hover in front of a freely moving user, using input data from an onboard camera. On this specific task we compare two widespread learning paradigms: a mediated approach, which learns a high-level state from the input and then uses it for deriving control signals; and an end-to-end approach, which skips high-level state estimation altogether. We show that despite their fundamental difference, both approaches yield equivalent performance on this task. We finally qualitatively analyze the behavior of a quadrotor implementing such approaches. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 108,612
1710.05111 | Reconfigurable Antennas in mmWave MIMO Systems | The key obstacle to achieving the full potential of the millimeter wave (mmWave) band has been the poor propagation characteristics of wireless signals in this band. One approach to overcome this issue is to use antennas that can support higher gains while providing beam adaptability and diversity, i.e., reconfigurable antennas. In this article, we present a new architecture for mmWave multiple-input multiple-output (MIMO) communications that uses a new class of reconfigurable antennas. More specifically, the proposed lens-based antennas can support multiple radiation patterns while using a single radio frequency chain. Moreover, by using a beam selection network, each antenna beam can be steered in the desired direction. Further, using the proposed reconfigurable antenna in a MIMO architecture, we propose a new signal processing algorithm that uses the additional degrees of freedom provided by the antennas to overcome propagation issues at mmWave frequencies. Our simulation results show that the proposed reconfigurable antenna MIMO architecture significantly enhances the performance of mmWave communication systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 82,583 |
2204.04780 | A Fully Polynomial Time Approximation Scheme for Constrained MDPs and Stochastic Shortest Path under Local Transitions | The fixed-horizon constrained Markov Decision Process (C-MDP) is a well-known model for planning in stochastic environments under operating constraints. Chance-Constrained MDP (CC-MDP) is a variant that allows bounding the probability of constraint violation, which is desired in many safety-critical applications. CC-MDP can also model a class of MDPs, called Stochastic Shortest Path (SSP), under dead-ends, where there is a trade-off between the probability-to-goal and cost-to-goal. This work studies the structure of (C)C-MDP, particularly an important variant that involves local transition. In this variant, the state reachability exhibits a certain degree of locality and independence from the remaining states. More precisely, the number of states, at a given time, that share some reachable future states is always constant. (C)C-MDP under local transition is NP-Hard even for a planning horizon of two. In this work, we propose a fully polynomial-time approximation scheme for (C)C-MDP that computes (near) optimal deterministic policies. Such an algorithm is among the best approximation algorithms attainable in theory and gives insights into the approximability of constrained MDP and its variants. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 290,786
1911.09086 | Shapelets for earthquake detection | This paper introduces EQShapelets (EarthQuake Shapelets), a time-series shape-based approach embedded in machine learning to autonomously detect earthquakes. It promises to overcome the challenges in the field of seismology related to automated detection and cataloging of earthquakes. EQShapelets are amplitude and phase-independent, i.e., their detection sensitivity is irrespective of the magnitude of the earthquake and the time of occurrence. They are also robust to noise and other spurious signals. The detection capability of EQShapelets is tested on one week of continuous seismic data provided by the Northern California Seismic Network (NCSN) obtained from a station in central California near the Calaveras Fault. EQShapelets combined with a Random Forest classifier detected all of the cataloged earthquakes and 281 uncataloged events with a lower false detection rate, thus offering a better performance than autocorrelation and FAST algorithms. The primary advantage of EQShapelets over competing methods is the interpretability and insight it offers. Shape-based approaches are intuitive, visually meaningful and offer immediate insight into the problem domain that goes beyond their use in accurate detection. EQShapelets, if implemented at a large scale, can significantly reduce catalog completeness magnitudes and can serve as an effective tool for near real-time earthquake monitoring and cataloging. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 154,399
2104.01646 | SOLO: Search Online, Learn Offline for Combinatorial Optimization Problems | We study combinatorial problems with real world applications such as machine scheduling, routing, and assignment. We propose a method that combines Reinforcement Learning (RL) and planning. This method can equally be applied to both the offline, as well as online, variants of the combinatorial problem, in which the problem components (e.g., jobs in scheduling problems) are not known in advance, but rather arrive during the decision-making process. Our solution is quite generic, scalable, and leverages distributional knowledge of the problem parameters. We frame the solution process as an MDP, and take a Deep Q-Learning approach wherein states are represented as graphs, thereby allowing our trained policies to deal with arbitrary changes in a principled manner. Though learned policies work well in expectation, small deviations can have substantial negative effects in combinatorial settings. We mitigate these drawbacks by employing our graph-convolutional policies as non-optimal heuristics in a compatible search algorithm, Monte Carlo Tree Search, to significantly improve overall performance. We demonstrate our method on two problems: Machine Scheduling and Capacitated Vehicle Routing. We show that our method outperforms custom-tailored mathematical solvers, state of the art learning-based algorithms, and common heuristics, both in computation time and performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 228,422
2310.19413 | CARPE-ID: Continuously Adaptable Re-identification for Personalized Robot Assistance | In today's Human-Robot Interaction (HRI) scenarios, a prevailing tendency exists to assume that the robot shall cooperate with the closest individual or that the scene involves merely a singular human actor. However, in realistic scenarios, such as shop floor operations, such an assumption may not hold and personalized target recognition by the robot in crowded environments is required. To fulfil this requirement, in this work, we propose a person re-identification module based on continual visual adaptation techniques that ensure the robot's seamless cooperation with the appropriate individual even subject to varying visual appearances or partial or complete occlusions. We test the framework singularly using recorded videos in a laboratory environment and an HRI scenario, i.e., a person-following task by a mobile robot. The targets are asked to change their appearance during tracking and to disappear from the camera field of view to test the challenging cases of occlusion and outfit variations. We compare our framework with one of the state-of-the-art Multi-Object Tracking (MOT) methods and the results show that the CARPE-ID can accurately track each selected target throughout the experiments in all the cases (except two limit cases). At the same time, the s-o-t-a MOT has a mean of 4 tracking errors for each video. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 403,983
2004.08101 | A stochastic approach to handle knapsack problems in the creation of ensembles | Ensemble-based methods are highly popular approaches that increase the accuracy of a decision by aggregating the opinions of individual voters. The common point is to maximize accuracy; however, a natural limitation occurs if incremental costs are also assigned to the individual voters. Consequently, we investigate creating ensembles under an additional constraint on the total cost of the members. This task can be formulated as a knapsack problem, where the energy is the ensemble accuracy formed by some aggregation rules. However, the generally applied aggregation rules lead to a nonseparable energy function, which takes the common solution tools -- such as dynamic programming -- out of action. We introduce a novel stochastic approach that considers the energy as the joint probability function of the member accuracies. This type of knowledge can be efficiently incorporated in a stochastic search process as a stopping rule, since we have the information on the expected accuracy or, alternatively, the probability of finding more accurate ensembles. Experimental analyses of the created ensembles of pattern classifiers and object detectors confirm the efficiency of our approach. Moreover, we propose a novel stochastic search strategy that better fits the energy, compared with general approaches such as simulated annealing. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 172,966
2404.07413 | JetMoE: Reaching Llama2 Performance with 0.1M Dollars | Large Language Models (LLMs) have achieved remarkable results, but their increasing resource demand has become a major obstacle to the development of powerful and accessible super-human intelligence. This report introduces JetMoE-8B, a new LLM trained with less than $0.1 million, using 1.25T tokens from carefully mixed open-source corpora and 30,000 H100 GPU hours. Despite its low cost, the JetMoE-8B demonstrates impressive performance, with JetMoE-8B outperforming the Llama2-7B model and JetMoE-8B-Chat surpassing the Llama2-13B-Chat model. These results suggest that LLM training can be much more cost-effective than generally thought. JetMoE-8B is based on an efficient Sparsely-gated Mixture-of-Experts (SMoE) architecture, composed of attention and feedforward experts. Both layers are sparsely activated, allowing JetMoE-8B to have 8B parameters while only activating 2B for each input token, reducing inference computation by about 70% compared to Llama2-7B. Moreover, JetMoE-8B is highly open and academia-friendly, using only public datasets and training code. All training parameters and data mixtures have been detailed in this report to facilitate future efforts in the development of open foundation models. This transparency aims to encourage collaboration and further advancements in the field of accessible and efficient LLMs. The model weights are publicly available at https://github.com/myshell-ai/JetMoE. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 445,820 |
2401.17057 | What can Information Guess? Guessing Advantage vs. R\'enyi Entropy for Small Leakages | We leverage the Gibbs inequality and its natural generalization to R\'enyi entropies to derive closed-form parametric expressions of the optimal lower bounds of $\rho$th-order guessing entropy (guessing moment) of a secret taking values on a finite set, in terms of the R\'enyi-Arimoto $\alpha$-entropy. This is carried out in an non-asymptotic regime when side information may be available. The resulting bounds yield a theoretical solution to a fundamental problem in side-channel analysis: Ensure that an adversary will not gain much guessing advantage when the leakage information is sufficiently weakened by proper countermeasures in a given cryptographic implementation. Practical evaluation for classical leakage models show that the proposed bounds greatly improve previous ones for analyzing the capability of an adversary to perform side-channel attacks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 425,081
2207.11934 | Optimal Boxes: Boosting End-to-End Scene Text Recognition by Adjusting Annotated Bounding Boxes via Reinforcement Learning | Text detection and recognition are essential components of a modern OCR system. Most OCR approaches attempt to obtain accurate bounding boxes of text at the detection stage, which is used as the input of the text recognition stage. We observe that when using tight text bounding boxes as input, a text recognizer frequently fails to achieve optimal performance due to the inconsistency between bounding boxes and deep representations of text recognition. In this paper, we propose Box Adjuster, a reinforcement learning-based method for adjusting the shape of each text bounding box to make it more compatible with text recognition models. Additionally, when dealing with cross-domain problems such as synthetic-to-real, the proposed method significantly reduces mismatches in domain distribution between the source and target domains. Experiments demonstrate that the performance of end-to-end text recognition systems can be improved when using the adjusted bounding boxes as the ground truths for training. Specifically, on several benchmark datasets for scene text understanding, the proposed method outperforms state-of-the-art text spotters by an average of 2.0% F-Score on end-to-end text recognition tasks and 4.6% F-Score on domain adaptation tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 309,832
1701.07388 | Extracting and Analyzing Hidden Graphs from Relational Databases | Analyzing interconnection structures among underlying entities or objects in a dataset through the use of graph analytics has been shown to provide tremendous value in many application domains. However, graphs are not the primary representation choice for storing most data today, and in order to have access to these analyses, users are forced to extract data from their data stores, construct the requisite graphs, and then load them into some graph engine in order to execute their graph analysis task. Moreover, these graphs can be significantly larger than the initial input stored in the database, making it infeasible to construct or analyze such graphs in memory. In this paper we address both of these challenges by building a system that enables users to declaratively specify graph extraction tasks over a relational database schema and then execute graph algorithms on the extracted graphs. We propose a declarative domain-specific language for this purpose, and pair it up with a novel condensed, in-memory representation that significantly reduces the memory footprint of these graphs, permitting analysis of larger-than-memory graphs. We present a general algorithm for creating this condensed representation for a large class of graph extraction queries against arbitrary schemas. We observe that the condensed representation suffers from a duplication issue, that results in inaccuracies for most graph algorithms. We then present a suite of in-memory representations that handle this duplication in different ways and allow trading off the memory required and the computational cost for executing different graph algorithms. We introduce novel deduplication algorithms for removing this duplication in the graph, which are of independent interest for graph compression, and provide a comprehensive experimental evaluation over several real-world and synthetic datasets illustrating these trade-offs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 67,277
2112.06752 | Adaptation through prediction: multisensory active inference torque control | Adaptation to external and internal changes is major for robotic systems in uncertain environments. Here we present a novel multisensory active inference torque controller for industrial arms that shows how prediction can be used to resolve adaptation. Our controller, inspired by the predictive brain hypothesis, improves the capabilities of current active inference approaches by incorporating learning and multimodal integration of low and high-dimensional sensor inputs (e.g., raw images) while simplifying the architecture. We performed a systematic evaluation of our model on a 7DoF Franka Emika Panda robot arm by comparing its behavior with previous active inference baselines and classic controllers, analyzing both qualitatively and quantitatively adaptation capabilities and control accuracy. Results showed improved control accuracy in goal-directed reaching with high noise rejection due to multimodal filtering, and adaptability to dynamical inertial changes, elasticity constraints and human disturbances without the need to relearn the model nor parameter retuning. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 271,288
2209.04224 | Modelling Patient Trajectories Using Multimodal Information | Electronic Health Records (EHRs) aggregate diverse information at the patient level, holding a trajectory representative of the evolution of the patient health status throughout time. Although this information provides context and can be leveraged by physicians to monitor patient health and make more accurate prognoses/diagnoses, patient records can contain information from very long time spans, which combined with the rapid generation rate of medical data makes clinical decision making more complex. Patient trajectory modelling can assist by exploring existing information in a scalable manner, and can contribute in augmenting health care quality by fostering preventive medicine practices. We propose a solution to model patient trajectories that combines different types of information and considers the temporal aspect of clinical data. This solution leverages two different architectures: one supporting flexible sets of input features, to convert patient admissions into dense representations; and a second exploring extracted admission representations in a recurrent-based architecture, where patient trajectories are processed in sub-sequences using a sliding window mechanism. The developed solution was evaluated on two different clinical outcomes, unexpected patient readmission and disease progression, using the publicly available MIMIC-III clinical database. The results obtained demonstrate the potential of the first architecture to model readmission and diagnoses prediction using single patient admissions. While information from clinical text did not show the discriminative power observed in other existing works, this may be explained by the need to fine-tune the clinicalBERT model. Finally, we demonstrate the potential of the sequence-based architecture using a sliding window mechanism to represent the input data, attaining comparable performances to other existing solutions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 316,727
2405.08237 | A predictive learning model can simulate temporal dynamics and context effects found in neural representations of continuous speech | Speech perception involves storing and integrating sequentially presented items. Recent work in cognitive neuroscience has identified temporal and contextual characteristics in humans' neural encoding of speech that may facilitate this temporal processing. In this study, we simulated similar analyses with representations extracted from a computational model that was trained on unlabelled speech with the learning objective of predicting upcoming acoustics. Our simulations revealed temporal dynamics similar to those in brain signals, implying that these properties can arise without linguistic knowledge. Another property shared between brains and the model is that the encoding patterns of phonemes support some degree of cross-context generalization. However, we found evidence that the effectiveness of these generalizations depends on the specific contexts, which suggests that this analysis alone is insufficient to support the presence of context-invariant encoding. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 454,027
1706.07903 | Joint and Competitive Caching Designs in Large-Scale Multi-Tier Wireless Multicasting Networks | Caching and multicasting are two promising methods to support massive content delivery in multi-tier wireless networks. In this paper, we consider a random caching and multicasting scheme with caching distributions in the two tiers as design parameters, to achieve efficient content dissemination in a two-tier large-scale cache-enabled wireless multicasting network. First, we derive tractable expressions for the successful transmission probabilities in the general region as well as the high SNR and high user density region, respectively, utilizing tools from stochastic geometry. Then, for the case of a single operator for the two tiers, we formulate the optimal joint caching design problem to maximize the successful transmission probability in the asymptotic region, which is nonconvex in general. By using the block successive approximate optimization technique, we develop an iterative algorithm, which is shown to converge to a stationary point. Next, for the case of two different operators, one for each tier, we formulate the competitive caching design game where each tier maximizes its successful transmission probability in the asymptotic region. We show that the game has a unique Nash equilibrium (NE) and develop an iterative algorithm, which is shown to converge to the NE under a mild condition. Finally, by numerical simulations, we show that the proposed designs achieve significant gains over existing schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 75,915
2007.08922 | Can Learned Frame-Prediction Compete with Block-Motion Compensation for Video Coding? | Given recent advances in learned video prediction, we investigate whether a simple video codec using a pre-trained deep model for next frame prediction based on previously encoded/decoded frames without sending any motion side information can compete with standard video codecs based on block-motion compensation. Frame differences given learned frame predictions are encoded by a standard still-image (intra) codec. Experimental results show that the rate-distortion performance of the simple codec with symmetric complexity is on average better than that of x264 codec on 10 MPEG test videos, but does not yet reach the level of x265 codec. This result demonstrates the power of learned frame prediction (LFP), since unlike motion compensation, LFP does not use information from the current picture. The implications of training with L1, L2, or combined L2 and adversarial loss on prediction performance and compression efficiency are analyzed. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 187,786
2501.04067 | Explainable Time Series Prediction of Tyre Energy in Formula One Race Strategy | Formula One (F1) race strategy takes place in a high-pressure and fast-paced environment where split-second decisions can drastically affect race results. Two of the core decisions of race strategy are when to make pit stops (i.e. replace the cars' tyres) and which tyre compounds (hard, medium or soft, in normal conditions) to select. The optimal pit stop decisions can be determined by estimating the tyre degradation of these compounds, which in turn can be computed from the energy applied to each tyre, i.e. the tyre energy. In this work, we trained deep learning models, using the Mercedes-AMG PETRONAS F1 team's historic race data consisting of telemetry, to forecast tyre energies during races. Additionally, we fitted XGBoost, a decision tree-based machine learning algorithm, to the same dataset and compared the results, with both giving impressive performance. Furthermore, we incorporated two different explainable AI methods, namely feature importance and counterfactual explanations, to gain insights into the reasoning behind the forecasts. Our contributions thus result in an explainable, automated method which could assist F1 teams in optimising their race strategy. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 523,089
2404.04960 | PairAug: What Can Augmented Image-Text Pairs Do for Radiology? | Current vision-language pre-training (VLP) methodologies predominantly depend on paired image-text datasets, a resource that is challenging to acquire in radiology due to privacy considerations and labelling complexities. Data augmentation provides a practical solution to overcome the issue of data scarcity, however, most augmentation methods exhibit a limited focus, prioritising either image or text augmentation exclusively. Acknowledging this limitation, our objective is to devise a framework capable of concurrently augmenting medical image and text data. We design a Pairwise Augmentation (PairAug) approach that contains an Inter-patient Augmentation (InterAug) branch and an Intra-patient Augmentation (IntraAug) branch. Specifically, the InterAug branch of our approach generates radiology images using synthesised yet plausible reports derived from a Large Language Model (LLM). The generated pairs can be considered a collection of new patient cases since they are artificially created and may not exist in the original dataset. In contrast, the IntraAug branch uses newly generated reports to manipulate images. This process allows us to create new paired data for each individual with diverse medical conditions. Our extensive experiments on various downstream tasks covering medical image classification zero-shot and fine-tuning analysis demonstrate that our PairAug, concurrently expanding both image and text data, substantially outperforms image-/text-only expansion baselines and advanced medical VLP baselines. Our code is released at \url{https://github.com/YtongXie/PairAug}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 444,882 |
1310.0365 | The complex-valued encoding for decision-making based on aliasing data | It is proposed a complex valued channel encoding for multidimensional data. The basic approach contains overlapping of complex nonlinear mappings. Its development leads to sparse representation of multi-channel data, increasing their dimensions and the distance between the images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 27,485
1810.10158 | Randomized Gradient Boosting Machine | Gradient Boosting Machine (GBM) introduced by Friedman is a powerful supervised learning algorithm that is very widely used in practice---it routinely features as a leading algorithm in machine learning competitions such as Kaggle and the KDDCup. In spite of the usefulness of GBM in practice, our current theoretical understanding of this method is rather limited. In this work, we propose Randomized Gradient Boosting Machine (RGBM) which leads to substantial computational gains compared to GBM, by using a randomization scheme to reduce search in the space of weak-learners. We derive novel computational guarantees for RGBM. We also provide a principled guideline towards better step-size selection in RGBM that does not require a line search. Our proposed framework is inspired by a special variant of coordinate descent that combines the benefits of randomized coordinate descent and greedy coordinate descent; and may be of independent interest as an optimization algorithm. As a special case, our results for RGBM lead to superior computational guarantees for GBM. Our computational guarantees depend upon a curious geometric quantity that we call Minimal Cosine Angle, which relates to the density of weak-learners in the prediction space. On a series of numerical experiments on real datasets, we demonstrate the effectiveness of RGBM over GBM in terms of obtaining a model with good training and/or testing data fidelity with a fraction of the computational cost. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 111,217 |