In-context learning enables large language models (LLMs) to perform a variety of tasks, including solving reinforcement learning (RL) problems. Given their potential use as (autonomous) decision-making agents, it is important to understand how these models behave in RL tasks and the extent to which they are susceptible to biases. Motivated by the fact that, in humans, it has been widely documented that the value of a choice outcome depends on how it compares to other local outcomes, the present study focuses on whether similar value encoding biases apply to LLMs. Results from experiments with multiple bandit tasks and models show that LLMs exhibit behavioral signatures of relative value encoding. Adding explicit outcome comparisons to the prompt magnifies the bias, impairing the ability of LLMs to generalize from the outcomes presented in-context to new choice problems, similar to effects observed in humans. Computational cognitive modeling reveals that LLM behavior is well-described by a simple RL algorithm that incorporates relative values at the outcome encoding stage. Lastly, we present preliminary evidence that the observed biases are not limited to fine-tuned LLMs, and that relative value processing is detectable in the final hidden layer activations of a raw, pretrained model. These findings have important implications for the use of LLMs in decision-making applications.
Relative Value Encoding in Large Language Models: A Multi-Task, Multi-Model Investigation.
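To make the relative-encoding idea concrete, below is a minimal sketch of a delta-rule bandit learner whose outcomes are range-normalized against the best and worst outcomes experienced so far, one simple way to place relative values at the outcome encoding stage. The payoffs and parameters are illustrative, not the paper's model; note how the learner ends up with nearly identical values in a low-stakes and a high-stakes context, which is exactly why generalization across contexts suffers.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(payoffs, n_trials=500, alpha=0.1, beta=5.0, relative=True):
    """Delta-rule bandit learner with optional relative outcome encoding."""
    q = np.zeros(len(payoffs))
    seen_min, seen_max = np.inf, -np.inf
    for _ in range(n_trials):
        z = beta * q - (beta * q).max()          # numerically stable softmax
        p = np.exp(z) / np.exp(z).sum()
        a = rng.choice(len(q), p=p)
        r = payoffs[a]()                          # sample an outcome
        if relative:
            # range normalization: encode the outcome relative to the
            # best and worst outcomes experienced so far in this context
            seen_min, seen_max = min(seen_min, r), max(seen_max, r)
            r = (r - seen_min) / max(seen_max - seen_min, 1e-9)
        q[a] += alpha * (r - q[a])                # delta-rule update
    return q

low = [lambda: rng.normal(1.0, 0.1), lambda: rng.normal(2.0, 0.1)]
high = [lambda: rng.normal(9.0, 0.1), lambda: rng.normal(10.0, 0.1)]
print("low-stakes context :", np.round(run_bandit(low), 2))
print("high-stakes context:", np.round(run_bandit(high), 2))
```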
Sixth-generation wireless communication has emerged, stimulating the rapid growth of numerous types of real-time applications that are characterized by their high data computing demands and formation of massive data traffic. Cybertwin-enabled edge computing has become a logical way to satisfy the enormous user demands. However, there are drawbacks to this advancement as well. The effective distribution of resources while balancing the demands for computing, communication, and caching is a major problem in edge networks. The resource allocation problem in dynamic edge computing systems is too complex to address with traditional statistical optimization techniques. Therefore, a joint resource allocation method using Self-Organized Map (SOM)-based Deep Reinforcement Learning (DRL) is proposed for cybertwin-enabled 6G wired + wireless (hybrid) networks. This approach leverages the clustering capabilities of SOM to organize the state space, followed by the decision-making strength of RL to select optimal actions for resource allocation in dynamic and real-time environments. The objective is to minimize overall latency and energy consumption. The results show that, using SOM-DRL, the hybrid network model outperforms the wireless-only model in latency and energy consumption, and improves on the existing MATD3 method by 3.34% in energy consumption, 3.17% in latency, and 7.30% in completion time.
Efficient joint resource allocation using self-organized map-based deep reinforcement learning for cybertwin-enabled 6G networks.
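A minimal toy sketch of the SOM-then-RL idea: a one-dimensional SOM quantizes a continuous network state, and tabular Q-learning selects a resource-allocation level per SOM node. The environment stub, state features, and all parameters are hypothetical, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_nodes, n_actions = 16, 4
som = rng.uniform(0, 1, size=(n_nodes, 2))   # 2-D state: (queue load, channel quality)
Q = np.zeros((n_nodes, n_actions))

def bmu(x):
    """Index of the best-matching SOM unit for state x."""
    return int(((som - x) ** 2).sum(axis=1).argmin())

def som_update(x, lr=0.2, sigma=2.0):
    """Pull the winner and its neighbors toward the observed state."""
    dist = np.abs(np.arange(n_nodes) - bmu(x))
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))  # neighborhood kernel
    som[:] += lr * h[:, None] * (x - som)

def step(state, action):
    """Toy environment stub: cost combines latency and energy proxies."""
    latency = state[0] / (action + 1)
    energy = 0.1 * action
    return -(latency + energy), rng.uniform(0, 1, size=2)

state = rng.uniform(0, 1, size=2)
for t in range(5000):
    som_update(state)                        # SOM organizes the state space
    s = bmu(state)
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmax())
    r, next_state = step(state, a)
    Q[s, a] += 0.1 * (r + 0.95 * Q[bmu(next_state)].max() - Q[s, a])
    state = next_state

print("greedy allocation level per SOM node:", Q.argmax(axis=1))
```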
Animals make decisions based on the value of potential outcomes. This perceived value is not fixed; it changes depending on internal needs, such as hunger or thirst, and past experiences. The basolateral amygdala (BLA) is known to be crucial for updating predicted reward values. However, it has been unclear how the BLA represents the specific value of different rewards. Two-photon calcium imaging in male mice showed that population response magnitude scaled with subjective value, and different rewards recruited distinct neuronal subpopulations. Value representations quickly re-scaled when a novel, higher-value reward appeared, and internal state shaped them: thirst selectively boosted responses to water, whereas aversive experience dampened sucrose responses. Thus, BLA circuits carry flexible, stimulus-specific value signals that integrate relative value and current affective or homeostatic conditions, providing a neural basis for adaptive decision making and learning. Our findings reveal that the BLA maintains adaptable, reward-specific value signals, essential for guiding choices according to current needs and changing circumstances.
Stimulus-specific and adaptive value representations in the basolateral amygdala in male mice.
Despite significant advances in cancer immunotherapy, current approaches face critical limitations, including systemic toxicity, inadequate tumor penetration, and insufficient therapeutic efficacy against immunosuppressive tumor microenvironments. A significant research gap exists in developing platforms that can simultaneously address these challenges while providing real-time monitoring capabilities. We critically analyze recent innovations in scaffold-based delivery systems that enable the controlled release of immunomodulatory agents directly within tumor microenvironments, thereby minimizing systemic exposure and associated toxicities. This comprehensive review examines engineered protein scaffolds as multifunctional platforms for cancer immunotherapy, emphasizing their design principles, synthetic methodologies, and therapeutic applications. Successful protein scaffolds demonstrate crucial performance characteristics including thermal stability (Tm >70 °C), high target specificity with sub-nanomolar binding affinities, and minimal immunogenicity (>95 % human-like sequence identity). We evaluate scaffold effectiveness through quantitative metrics including tumor-to-background ratios exceeding 3:1 for imaging applications, circulation half-lives >12 h for therapeutic delivery, and production yields above 100 mg/L in recombinant expression systems. Clinical benchmarks for these platforms include comparison with conventional antibody performance, demonstrating improved tissue penetration due to their compact size (<50 kDa), enhanced proteolytic resistance through rational engineering, and reduced off-target effects confirmed through multi-organoid models. Advanced technologies including AI-driven generative models and reinforcement learning algorithms are revolutionizing scaffold design, while cell-free protein synthesis systems and orthogonal translation machinery enable on-demand production of complex architectures. By integrating insights from structural biology, computational modeling, and synthetic biology, this review highlights remarkable progress in protein scaffold engineering for dual-modality cancer applications while addressing ongoing challenges in translating these promising platforms to clinical practice.
Transformative protein scaffold designs for dual-modality cancer applications: Advances in therapeutic delivery and molecular imaging of tumor microenvironments.
In both males and females, linking rewards with salient audiovisual cues in simulated gambling games increases risky choice in humans and rats. However, the prevalence and severity of gambling problems differ in men and women. In previous work, reinforcement learning (RL) models were applied to data from male rats performing the rat gambling task (rGT) to investigate the computational processes promoting risky choice. In the rGT, the optimal strategy is to favor options paired with smaller per-trial gains but shorter and less frequent time-out penalties. Rewards are either delivered with (cued) or without (uncued) concurrent audiovisual cues. Previous work showed these cues drive risky decision making by causing male rats to underweight the relative cost of time-out punishments, specifically for one of the highly risky options. Here, we applied the same methodology to a large dataset from female rats performing the cued and uncued rGT to investigate whether the same cognitive mechanism drives risky decision making across sexes. Cues decreased the learning rate from all time-out penalties in female rats, rather than specifically from those paired with a risky option. Although females were less sensitive to the shortest time-outs associated with the one-pellet option (P1), this computation failed to promote choice of this comparatively safe option due to the overall lower learning rate from penalties. Differences revealed by computational modeling in the way risky choice develops across sexes may help us understand the divergent trajectory of gambling disorder in men and women.
Divergent effects of win-paired cues on learning from timeout penalties in female and male rats.
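A minimal sketch of the model family described above: a delta-rule learner with separate learning rates for gains and time-out penalties, where a cue_scale factor below 1 mimics the cue-induced under-weighting of punishments. The option payoffs, time-out durations, and probabilities are illustrative, not the actual rGT task parameters; the simulation shows how shrinking the punishment learning rate shifts choice toward the risky options.

```python
import numpy as np

rng = np.random.default_rng(2)

# rGT-like options P1-P4: per-trial pellets, time-out seconds, punish prob.
gains = np.array([1, 2, 3, 4])
timeouts = np.array([5, 10, 30, 40])
p_punish = np.array([0.1, 0.2, 0.5, 0.6])

def simulate(alpha_gain=0.2, alpha_loss=0.2, cue_scale=1.0,
             beta=0.3, n_trials=3000):
    q = np.zeros(4)
    counts = np.zeros(4)
    for _ in range(n_trials):
        z = beta * q - (beta * q).max()          # softmax choice
        p = np.exp(z) / np.exp(z).sum()
        a = rng.choice(4, p=p)
        counts[a] += 1
        if rng.random() < p_punish[a]:
            # punished trial: learn from the negative time-out value,
            # with the punishment learning rate scaled down under cues
            q[a] += cue_scale * alpha_loss * (-timeouts[a] - q[a])
        else:
            q[a] += alpha_gain * (gains[a] - q[a])
    return counts / counts.sum()

print("uncued (cue_scale=1.0):", np.round(simulate(cue_scale=1.0), 2))
print("cued   (cue_scale=0.3):", np.round(simulate(cue_scale=0.3), 2))
```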
In this paper, we propose a robust control method for the automatic treatment of targeted anti-angiogenic molecular therapy based on multi-input multi-output (MIMO) nonlinear fractional and non-fractional models using the backstepping (BS) approach. This protocol aims to eradicate tumour cells while preserving high levels of the body's natural effector cells and maintaining drug dosage within safe limits. The exponential stability of the controlled system is mathematically demonstrated using the Lyapunov theorem. Consequently, the tumour volume's convergence rate can be precisely controlled, a critical factor in cancer treatment. To fine-tune the controller gains, a soft actor-critic (SAC) algorithm within the framework of deep reinforcement learning (DRL) is employed, with a reward function designed based on the specific requirements of the system. Additionally, the Lyapunov theorem is used to mathematically verify the system's robustness against parametric uncertainty. Compared to state-of-the-art approaches, the proposed scheme demonstrates superior long-term performance, achieving complete tumour eradication and drug delivery convergence to zero within 50 days while preserving high effector cell levels.
Designing a Resilient Controller for Cancer Immunotherapy: Application to a Fractional-Order Tumour-Immune Model.
Generalization from past experience is an important feature of intelligent systems. When faced with a new task, one efficient computational approach is to evaluate solutions to earlier tasks as candidates for reuse. Consistent with this idea, we found that human participants (n = 38) learned optimal solutions to a set of training tasks and generalized them to novel test tasks in a reward-selective manner. This behavior was consistent with a computational process based on the successor representation known as successor features and generalized policy improvement (SF&GPI). Neither model-free perseveration nor model-based control using a complete model of the environment could explain choice behavior. Decoding from functional magnetic resonance imaging data revealed that solutions from the SF&GPI algorithm were activated on test tasks in visual and prefrontal cortex. This activation had a functional connection to behavior in that stronger activation of SF&GPI solutions in visual areas was associated with increased behavioral reuse. These findings point to a possible neural implementation of an adaptive algorithm for generalization across tasks.
Neural evidence that humans reuse strategies to solve new tasks.
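The SF&GPI computation itself is compact: each stored policy carries successor features psi(s, a), a new task is just a reward-weight vector w, and generalized policy improvement evaluates every stored policy under w and acts greedily across all of them. A minimal numpy sketch with made-up numbers (not the study's tasks):

```python
import numpy as np

# Successor features psi_i(s, a): expected discounted feature counts when
# following stored policy pi_i after taking action a.
# Toy setting: 1 state, 3 actions, 3 reward features (e.g., object types).
psi = np.array([
    # policy trained on task w1 = [1, 0, 0]
    [[4.0, 0.5, 0.2], [0.8, 3.0, 0.3], [0.5, 0.4, 2.5]],
    # policy trained on task w2 = [0, 1, 0]
    [[3.5, 1.0, 0.2], [0.5, 4.0, 0.1], [0.2, 0.8, 2.0]],
])  # shape: (n_policies, n_actions, n_features)

def gpi_action(w_new):
    """Generalized policy improvement: evaluate every stored policy's
    successor features under the new reward weights, act greedily."""
    q = psi @ w_new                 # (n_policies, n_actions)
    return np.unravel_index(q.argmax(), q.shape)[1]

# New test task rewards a mixture of features 1 and 2.
w_new = np.array([0.2, 1.0, 0.1])
print("GPI chooses action:", gpi_action(w_new))
```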
Small-molecule drugs play a critical role in cancer therapy by selectively targeting key signaling pathways that drive tumor growth. While deep learning models have advanced drug discovery, there remains a lack of generative frameworks for de novo covalent molecule design using a fragment-based approach. To address this, we propose MOFF (MOlecule generation with Functional Fragments), a reinforcement learning framework for molecule generation. MOFF is specifically designed to generate both covalent and noncovalent compounds based on functional fragments. The model leverages docking scores as reward functions and is trained using the Soft Actor-Critic algorithm. We evaluate MOFF through case studies targeting Bruton's tyrosine kinase (BTK) and the epidermal growth factor receptor (EGFR), demonstrating that MOFF can generate ligand-like molecules with favorable docking scores and drug-like properties, compared to baseline models and ChEMBL compounds. As a computational validation, molecular dynamics (MD) simulations were conducted on selected top-scoring molecules to assess potential binding stability. These results highlight MOFF as a flexible and extensible framework for fragment-based molecule generation, with the potential to support downstream applications.
Improving Covalent and Noncovalent Molecule Generation via Reinforcement Learning with Functional Fragments.
Radiology: Cardiothoracic Imaging publishes research, technical developments, and reviews related to cardiac, vascular, and thoracic imaging. The current review article, led by the Radiology: Cardiothoracic Imaging trainee editorial board, highlights the most impactful articles published in the journal between November 2023 and October 2024. The review encompasses various aspects of cardiac, vascular, and thoracic imaging related to coronary artery disease, cardiac MRI, valvular imaging, congenital and inherited heart diseases, thoracic imaging, lung cancer, artificial intelligence, and health services research. Key highlights include the role of CT fractional flow reserve analysis to guide patient management, the role of MRI elastography in identifying age-related myocardial stiffness associated with increased risk of heart failure, review of MRI in patients with cardiovascular implantable electronic devices and fractured or abandoned leads, imaging of mitral annular disjunction, specificity of the Lung Imaging Reporting and Data System version 2022 for detecting malignant airway nodules, and a radiomics-based reinforcement learning model to analyze serial low-dose CT scans in lung cancer screening. Ongoing research and future directions include artificial intelligence tools for applications such as plaque quantification using coronary CT angiography and growing understanding of the interconnectedness of environmental sustainability and cardiovascular imaging. Keywords: CT, MRI, CT-Coronary Angiography, Cardiac, Pulmonary, Coronary Arteries, Heart, Lung, Mediastinum, Mitral Valve, Aortic Valve, Artificial Intelligence © RSNA, 2025.
Radiology: Cardiothoracic Imaging Highlights 2024.
Neurological tremors, prevalent among a large population, are one of the most common movement disorders. Biomechanical loading and exoskeletons show promise in enhancing patient well-being, but traditional control algorithms limit their efficacy in dynamic movements and personalized interventions. Furthermore, a pressing need exists for more comprehensive and robust validation methods to ensure the effectiveness and generalizability of proposed solutions. This paper proposes a physical simulation approach modeling multiple arm joints and tremor propagation. This study also introduces a novel adaptable reinforcement learning environment tailored for disorders with tremors. We present a deep reinforcement learning-based encoder-actor controller for Parkinson's tremors expressed in various shoulder and elbow joint axes during dynamic movements. Our findings suggest that such a control strategy offers a viable solution for tremor suppression in real-world scenarios. By overcoming the limitations of traditional control algorithms, this work takes a new step in adapting biomechanical loading into the everyday life of patients. This work also opens avenues for more adaptive and personalized interventions in managing movement disorders.
Learning to suppress tremors: a deep reinforcement learning-enabled soft exoskeleton for Parkinson's patients.
Lecture-based teaching is widely used in preclinical medical education, offering a systematic way to deliver complex information efficiently. However, its effectiveness heavily relies on the instructional behaviors of lecturers. Despite its importance, limited research has explored the specific differences between effective and ineffective teaching behaviors perceived by students. This study aims to analyze these behaviors systematically to provide actionable insights for enhancing teaching competencies. This study surveyed 92 first-year medical students to evaluate effective and ineffective teaching behaviors. A 30-item questionnaire was developed based on existing literature. Data analysis included descriptive statistics to rank teaching behaviors and chi-square tests to examine their correlations. Effective behaviors included appropriate voice volume, clear pronunciation, error-free lecture materials, clear explanations of learning objectives, and humor. Ineffective behaviors were poor voice clarity, insufficient summarization, lack of follow-up session introductions, absence of online resources, and poor interaction. Significant relationships between effective and ineffective behaviors were observed in some items. The study highlights that effective behaviors, such as recalling prior learning, utilizing materials, and engaging students, enhance learning outcomes. Faculty development should focus on avoiding ineffective behaviors for novice faculty and reinforcing effective ones for mid-career faculty to improve teaching quality in medical education.
Medical students' perspectives on effective and ineffective teaching behaviors in lectures.
Deep generative models provide a powerful solution for the de novo design of molecules. However, the majority of existing methods only generate molecules for a single target. Generating molecules with biological activities against multiple specific targets and desired properties remains an extremely difficult challenge. In this study, we propose a novel 3D molecule generation framework based on reinforcement learning and a diffusion model to generate molecules with predefined properties for multiple given targets. The proposed framework, MDRL, uses a diffusion model to understand the 3D chemical structure of molecules and employs Kolmogorov-Arnold Networks instead of multilayer perceptrons to enhance model performance. Through reinforcement learning, the framework is able to generate molecules that simultaneously target two targets and further optimizes multiple molecular properties. Experimental results show that our model exhibits comparable performance to various state-of-the-art molecular generation models, and MDRL can effectively navigate chemical space to design polypharmacological compounds and control multiple molecular properties. In multiple case studies, we verify that the generated molecules can simultaneously target two targets through molecular docking and assess the model's ability to control multiple molecular properties. The results in this study highlight the advantages and practicalities of our model in generating polypharmacological compounds with desired properties.
A 3D generation framework using diffusion model and reinforcement learning to generate multi-target compounds with desired properties.
Autistic adolescents are more likely to experience depression than their non-autistic peers, yet risk factors for depression in autistic adolescents are not well understood. Better mechanistic knowledge of depression in autistic adolescents is critical to understanding higher prevalence rates and developing targeted interventions. Altered reward responsiveness and social processes, as assessed by clinical and neural measures [i.e., electroencephalography (EEG)], are important risk factors for depression in non-autistic adolescents that remain largely unexplored in autistic adolescents, even though autistic people have higher rates of depression, exhibit reward differences, and often experience difficulties in social interactions. Therefore, a multimethod investigation of social and nonsocial reward responsivity and their associations with depression symptoms in autistic adolescents, particularly over time, is needed. The current project will employ clinical and neural measures (i.e., interviews, EEG tasks) of social and nonsocial reward responsivity and depression to test associations between these constructs in autistic adolescents for the first time. A clinical sample of 100 autistic adolescents (14-17 years old) without intellectual disability and with varying severity of depression symptoms (at least 50% with current depression) will be recruited. Clinical and neural measures will be administered at two timepoints one year apart. Planned analyses will test cross-sectional and longitudinal relations between clinical and neural measures of reward responsivity and depression symptoms. This systematic study of reward responsivity and depression in autistic adolescents is likely to advance our collective understanding of depression in this population by informing risk stratification models and identifying potential intervention targets. Findings may also establish the reliability of several clinical and neural measures of reward responsivity in this population that can eventually be used to measure treatment outcome and identify predictors of treatment response.
Study protocol for a multimethod investigation of the development of social and nonsocial reward responsivity and depression in autistic adolescents: Reward and Depression in Autism (RDA).
Midbrain dopamine neurons (DANs) signal reward-prediction errors that teach recipient circuits about expected rewards [1]. However, DANs are thought to provide a substrate for temporal difference (TD) reinforcement learning (RL), an algorithm that learns the mean of temporally discounted expected future rewards, discarding useful information about experienced distributions of reward amounts and delays [2]. Here we present time-magnitude RL (TMRL), a multidimensional variant of distributional RL that learns the joint distribution of future rewards over time and magnitude. We also uncover signatures of TMRL-like computations in the activity of optogenetically identified DANs in mice during behaviour. Specifically, we show that there is significant diversity in both temporal discounting and tuning for the reward magnitude across DANs. These features allow the computation of a two-dimensional, probabilistic map of future rewards from just 450 ms of the DAN population response to a reward-predictive cue. Furthermore, reward-time predictions derived from this code correlate with anticipatory behaviour, suggesting that similar information is used to guide decisions about when to act. Finally, by simulating behaviour in a foraging environment, we highlight the benefits of a joint probability distribution of reward over time and magnitude in the face of dynamic reward landscapes and internal states. These findings show that rich probabilistic reward information is learnt and communicated to DANs, and suggest a simple, local-in-time extension of TD algorithms that explains how such information might be acquired and computed.
A multidimensional distributional map of future reward in dopamine neurons.
To thrive in complex environments, animals and artificial agents must learn to act adaptively to maximize fitness and rewards. Such adaptive behaviour can be learned through reinforcement learning [1], a class of algorithms that has been successful at training artificial agents [2-5] and at characterizing the firing of dopaminergic neurons in the midbrain [6-8]. In classical reinforcement learning, agents discount future rewards exponentially according to a single timescale, known as the discount factor. Here we explore the presence of multiple timescales in biological reinforcement learning. We first show that reinforcement agents learning at a multitude of timescales possess distinct computational benefits. Next, we report that dopaminergic neurons in mice performing two behavioural tasks encode reward prediction error with a diversity of discount time constants. Our model explains the heterogeneity of temporal discounting in both cue-evoked transient responses and slower timescale fluctuations known as dopamine ramps. Crucially, the measured discount factor of individual neurons is correlated across the two tasks, suggesting that it is a cell-specific property. Together, our results provide a new paradigm for understanding functional heterogeneity in dopaminergic neurons and a mechanistic basis for the empirical observation that humans and animals use non-exponential discounts in many situations [9-12], and open new avenues for the design of more-efficient reinforcement learning algorithms.
Multi-timescale reinforcement learning in the brain.
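A quick way to see the computational appeal of multiple timescales: averaging a population of exponential discounters with different discount factors yields a curve that decays steeply at short delays and slowly at long delays, roughly tracking the hyperbolic discounting observed empirically. The gamma range and hyperbolic constant below are illustrative, not fitted to the paper's data.

```python
import numpy as np

# A population of agents, each discounting exponentially with its own gamma.
gammas = np.linspace(0.55, 0.99, 20)
delays = np.arange(0, 60)

# Each unit's discounted value of a unit reward at each delay.
exp_curves = gammas[:, None] ** delays[None, :]      # (n_units, n_delays)

# The population average is no longer a single exponential: it approximates
# the hyperbolic form 1 / (1 + k*t) reported in humans and animals.
population = exp_curves.mean(axis=0)
hyperbolic = 1.0 / (1.0 + 0.3 * delays)

for t in (1, 5, 20, 50):
    print(f"t={t:2d}  population={population[t]:.3f}  hyperbolic={hyperbolic[t]:.3f}")
```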
One-fourth of Indians are hypertensive, and the majority relies on groundwater for drinking. However, the role of groundwater physicochemical properties and contamination in hypertension remains understudied. The study investigates the association between physicochemical groundwater characteristics and contaminants and hypertension risk in India. This study used data from the fifth round of the National Family Health Survey (NFHS-5, collected 2019-2021), including health, socio-demographics, and food and dietary information (n = 712,666 individuals). The physicochemical characteristics of groundwater data were derived from the Central Groundwater Board (CGWB, 2019-2021). Groundwater data from raster maps were linked to NFHS-5 records using cluster shapefiles and merged with individual records via cluster IDs. Bivariate and multivariable regressions were used to identify factors associated with hypertension at the individual level. Moran's I statistics, Local Indicator of Spatial Association (LISA) cluster maps, and the Spatial Error Model (SEM) were used at district levels to investigate the spatial association. Machine learning models, including Artificial Neural Networks (ANN), Random Forest, and Extreme Gradient Boosting (XGBoost), were used to predict hypertension risk zones. Physicochemical drinking water composition is a key factor in hypertension risk. Elevated groundwater pH (>8.5, Adjusted Odds Ratio (AOR): 2.12), electrical conductivity (>300 μS/cm, AOR: 1.06), sulphate (>200 mg/L, AOR: 1.16), arsenic (>0.01 mg/L, AOR: 1.09), nitrate (>45 mg/L, AOR: 1.07), and magnesium (>30 mg/L, AOR: 1.03) are associated with higher odds of hypertension. The Random Forest model demonstrated the highest predictive performance, with a coefficient of determination (R²) of 0.9970, mean absolute error (MAE) of 0.0012, and mean squared error (MSE) of 0.0077. It effectively identified high-risk zones in the northwestern (Delhi, Punjab, Haryana, and Rajasthan) and eastern (West Bengal and Bihar) regions of India. This study highlights how important groundwater quality is in determining the incidence of hypertension, pointing to groundwater physicochemical properties and contaminants such as electrical conductivity, sulphate, arsenic, nitrate, and magnesium as essential factors. Our research is the first of its kind to comprehensively map hypertension risk zones using machine learning models and geospatial analysis. The findings highlight that water quality is a modifiable risk factor, reinforcing the need for improved drinking water supply systems, regular water quality testing, and targeted interventions in high-risk regions. This study emphasizes the importance of intersectoral collaborations to enhance public health outcomes.
Investigating the association between groundwater contaminants and hypertension risk in India: a machine learning-based analysis.
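For readers unfamiliar with how the AORs above are obtained: they are the exponentiated coefficients of a multivariable logistic regression. A self-contained sketch on synthetic data; the exposure flags, confounder, and seeded effect sizes (chosen to mimic two of the reported AORs) are hypothetical stand-ins, not the NFHS-5 data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: binary exposure flags plus one confounder, with
# hypertension as the binary outcome.
n = 20000
high_ph = rng.random(n) < 0.2          # e.g., pH > 8.5
high_sulphate = rng.random(n) < 0.3    # e.g., sulphate > 200 mg/L
age_std = rng.normal(size=n)           # standardized confounder

logit = (-2.0 + np.log(2.1) * high_ph + np.log(1.16) * high_sulphate
         + 0.5 * age_std)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([high_ph, high_sulphate, age_std]).astype(float)
model = LogisticRegression().fit(X, y)

# Adjusted odds ratios = exponentiated coefficients of the multivariable model.
for name, coef in zip(["pH>8.5", "sulphate>200", "age (z)"], model.coef_[0]):
    print(f"AOR {name}: {np.exp(coef):.2f}")
```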
Concrete compressive strength (CS) is crucial for ensuring the safety, durability, and performance of structures, so its precise simulation helps anticipate material behavior under various conditions. Despite a comprehensive experimental investigation of the impact of silica (SiO2) on the CS of fiber-reinforced concrete, its mathematical aspects have not been well studied. This study therefore integrates the ANFIS (adaptive neuro-fuzzy inference system) and ELM (extreme learning machine) machine learning models with three optimization algorithms, i.e., WCA (water cycle algorithm), PSO (particle swarm optimization), and GWO (grey wolf optimizer), to precisely estimate the CS of fiber-reinforced concrete (FRC) containing SiO2. An experimental database comprising 228 datasets is used to develop the models, compare their accuracy, and select the best one. The database contains information on the volumetric percentage of fibers, sample age, amount of coarse/fine aggregates, water, cement, nano silica, and binder as independent features, while the compressive strength is the target variable. The sensitivity assessment confirms that the training and generalization abilities of the ELM and ANFIS models for the CS prediction of FRC are improved by their integration with the GWO algorithm. The best model (i.e., ELM-GWO) predicts the testing datasets with the R² (coefficient of determination), RMSE (root mean square error), SI (scatter index), RPD (relative percent deviation), and PMARE (percent mean absolute relative error) values of 0.9510, 3.985 MPa, 0.061, 0.8, and 5.421, respectively.
Prediction of compressive strength of fiber-reinforced concrete containing silica (SiO2) based on metaheuristic optimization algorithms and machine learning techniques.
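Of the two base learners, the ELM is especially compact: hidden-layer weights are drawn at random and only the output weights are solved, in closed form, by least squares. Below is a minimal numpy sketch on synthetic data standing in for the 228-sample FRC database; in the study, metaheuristics such as GWO additionally tune the model, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random input weights, sigmoid hidden
    layer, output weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Synthetic stand-in for the FRC database (8 mix features -> CS in MPa).
X = rng.uniform(0, 1, size=(228, 8))
y = (30 + 20 * X[:, 0] - 10 * X[:, 1] ** 2 + 5 * np.sin(3 * X[:, 2])
     + rng.normal(0, 1, 228))
train, test = slice(0, 180), slice(180, 228)

W, b, beta = elm_fit(X[train], y[train])
pred = elm_predict(X[test], W, b, beta)
ss_res = ((y[test] - pred) ** 2).sum()
ss_tot = ((y[test] - y[test].mean()) ** 2).sum()
print(f"R2={1 - ss_res/ss_tot:.3f}  RMSE={np.sqrt(ss_res/len(pred)):.2f} MPa")
```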
The distributed multiple-input multiple-output (MIMO) radar system exhibits superior target localization capability by jointly processing target information from multiple radars under different observation angles. To improve the resource utilization of the distributed MIMO radar system, this paper proposes a hybrid action space reinforcement learning (HAS-RL) method, aiming to maximize the target localization performance under the radar resource constraints. Specifically, the Cramér-Rao Lower Bound (CRLB) incorporating the transmit radar power and receive radar selection is first derived and employed as the target localization performance metric of the distributed MIMO radar system. Subsequently, the radar resource allocation problem is modeled as a constrained optimization problem with continuous and discrete variables, and a hybrid action space reinforcement learning algorithm is proposed to solve the above optimization problem. Simulation results demonstrate that the proposed HAS-RL method can obtain better target localization performance under the given radar resource constraints.
Resource allocation of distributed MIMO radar based on the hybrid action space reinforcement learning.
The increasing demand for wind turbines and cost pressures in the wind energy industry have made the Wind Turbine Pultruded Panels Production Scheduling Problem (WTPP-PSP) a critical challenge. To address the production scheduling requirements of WTPP-PSP, an intelligent platform is proposed for wind turbine pultruded panel production systems, leveraging intelligent decision-making to tackle the problem. A multi-objective model based on mixed-integer linear programming is developed, considering sequence-dependent completion and setup time constraints. The model aims to maximize customer satisfaction, minimize total setup time, and reduce deviations in workshop machine loads. To solve this problem, an Adaptive Crayfish Optimization Algorithm (ACOA) is introduced. This algorithm incorporates crossover and mutation operators, making it effective for discrete optimization problems. Furthermore, an improved crowding distance calculation enhances the algorithm's performance in multi-objective optimization by improving solution distribution. Reinforcement learning is employed to dynamically adjust temperature parameters, improving both exploration and exploitation capabilities and thus enhancing the convergence of the algorithm. The performance comparison using multi-objective metrics such as HV, IGD, GD, and NR demonstrates that ACOA significantly outperforms COA, WOA, and NSGA-II, with average improvements of 76%, 80%, 28%, and 220%, respectively. These results highlight ACOA's consistent advantages in coverage, convergence, and solution diversity. In the application to WTPP-PSP, the proposed algorithm outperforms COA by approximately 13%, 10%, and 8% in the three objectives.
Adaptive crayfish optimization algorithm for multi-objective scheduling optimization in distributed production workshops.
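For reference, the standard NSGA-II crowding distance that the improved variant above builds on can be computed in a few lines; boundary solutions get infinite distance, interior ones the normalized side lengths of the cuboid spanned by their nearest neighbors in each objective. The front below is made up for illustration.

```python
import numpy as np

def crowding_distance(F):
    """Standard NSGA-II crowding distance for one non-dominated front.

    F: (n_solutions, n_objectives) objective matrix.
    """
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j]
        d[order[0]] = d[order[-1]] = np.inf     # boundary solutions kept
        if span == 0:
            continue
        # each interior solution: gap between its two neighbors, normalized
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

front = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 4.5], [5.0, 4.0], [8.0, 1.0]])
print(crowding_distance(front))
```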
Striatal acetylcholine (ACh) signaling is thought to counteract reinforcement signals, promoting extinction and behavioral flexibility. Changes in striatal ACh signals have been reported during learning, but how ACh signals for learning and extinction are spatially organized to enable region-specific plasticity is unclear. We used array photometry in mice to reveal a topography of opposing changes in ACh release across distinct striatal regions. Reward prediction error encoding was localized to specific phases of ACh dynamics in anterior dorsal striatum (aDS): positive and negative prediction errors were expressed in dips and elevations, respectively. Silencing ACh release in aDS impaired extinction, suggesting a role for ACh elevations in down-regulating cue-reward associations. Dopamine release in aDS dipped for cues during extinction, inverse to ACh, while glutamate input onto cholinergic interneurons was unchanged. These findings pinpoint where ACh dynamics shape region-specific plasticity and suggest an intrastriatal mechanism for how they gate learning and promote extinction.
Distinct spatially organized striatum-wide acetylcholine dynamics for the learning and extinction of Pavlovian associations.
Substance use disorder (SUD) shares common clinical features, including impulsive and compulsive behaviors, which are associated with dysfunctions in the brain's reward circuit. Resting-state functional magnetic resonance imaging (rs-fMRI) studies have shown inconsistent results due to variability in the substances and stages of addiction. Identifying common neurobiological patterns in SUD could improve both our understanding of the disorder and the development of treatment strategies. We conducted a comprehensive meta-analysis of 53 whole-brain rs-fMRI studies involving SUD patients. The Seed-based d Mapping toolkit was used to analyze connectivity patterns of key brain regions in the reward circuit: anterior cingulate cortex (ACC), prefrontal cortex (PFC), striatum, thalamus, and amygdala. Additionally, we explored correlations between resting-state functional connectivity (rsFC) patterns and impulsivity scores. The meta-analysis included 1700 SUD patients and 1792 healthy controls (HCs). Compared with HCs, SUD patients exhibited significant dysfunctions in the cortical-striatal-thalamic-cortical circuit. The ACC exhibited increased connectivity with the inferior frontal gyrus (IFG), lentiform nucleus, and putamen. The PFC demonstrated hyperconnectivity with the superior frontal gyrus (SFG) and striatum, as well as hypoconnectivity with the IFG. The striatum showed hyperconnectivity with the SFG and hypoconnectivity with the median cingulate gyrus (MCG). Thalamic connectivity with the SFG, dorsal ACC, and caudate nucleus was reduced. The amygdala exhibited hypoconnectivity with the SFG and ACC. Alterations in connectivity were also observed between several seed regions and the parahippocampal gyrus. Notably, the total score of the Barratt Impulsiveness Scale (BIS-11) in SUD patients was significantly negatively correlated with reduced rsFC between the striatum and MCG. After family-wise error (FWE) correction, dysfunctions in the cortical-striatal-cortical circuit persisted. Our findings revealed specific network abnormalities in SUD patients, highlighting disrupted connectivity within the brain's reward circuit. These abnormalities were associated with impulsivity and may provide a theoretical basis for effective interventions to restore normal connectivity patterns.
Common neural patterns of substance use disorder: a seed-based resting-state functional connectivity meta-analysis.
Controlling electromagnetic (EM) waves at will is fundamentally important for diverse applications, ranging from optical microcavities and super-resolution imaging to quantum information processing. Decades ago, forays into metamaterials and transformation optics ignited unprecedented interest in creating an invisibility cloak, a closed space in which any object is invisible. However, all features of the scattering waves become stochastic and uncontrollable when EM waves interact with an open and disordered environment, making an open invisible space almost impossible. Counterintuitively, here we present, for the first time, an open, cluttered, and dynamic yet invisible space, wherein any freely moving object remains invisible. To adapt to the disordered environment, we randomly organize a swarm of reconfigurable metasurfaces and govern them with MetaSeeker, a population-based reinforcement learning (RL) framework. MetaSeeker constructs a narcissistic internal world to mirror the stochastic physical world, capable of autonomous improvement, evolution, and adaptation. In the perception-decision-execution experiment, multiple RL agents automatically interact with the ever-changing environments and integrate post-hoc explainability to visualize the decision-making process. The hidden objects, such as a vehicle cluster and an experimenter, can freely scale, race, and track within the invisible space, with an environmental similarity of 99.5%. Our results constitute a major stride in reshaping the evolutionary landscape of metasurfaces from individual to swarm intelligence and usher in the remote management of the entire EM space.
MetaSeeker: sketching an open invisible space with self-play reinforcement learning.
The search for cost-effective population-based physical activity interventions continues. Therefore, we developed a novel just-in-time adaptive digital assistant supported by machine learning (ie, MoveMentor). Beta-testing is essential to evaluate both technical performance and user acceptance. The aim of this study was to assess app usability, acceptability, and technical performance through iterative rounds of beta-testing. Insufficiently active people (mean age 39.8 [SD 10.2] years; 86% female) participated in 2 rounds of beta-testing (round 1, n = 112; round 2, n = 41). Participants downloaded the digital assistant app onto their phone to use during the study period (round 1: 12 wk, round 2: 4 wk). Participants were asked to complete at least 4 educational and 5 chat conversations, rate over 50 notifications, and complete an online follow-up survey at week 4 examining aspects of app usability and acceptability. Descriptive statistics and t tests were used to analyze outcomes. Across both rounds, the app demonstrated good overall usability scores (System Usability Scale: 75.3 out of 100) but lower usefulness ratings. Round 2 participants showed increased engagement with features including action plans (P < .001), educational conversations (P < .001), and personalization features (P < .001), and they appreciated the educational conversations more (P < .05). Technical issues including data syncing problems and chat limitations persisted across both rounds. The notification system received mixed feedback, though customization options in round 2 reduced complaints (from 12.2% to 7.3%). The app demonstrated good acceptability and usability but low usefulness. The iterative beta-testing successfully identified areas for improvement and enabled meaningful enhancements to content and user engagement features. While some technical challenges persisted, the beta-testing provided clear direction for ongoing improvements.
Impact of Iterative Development and Beta-Testing on the Usability and Acceptability of a Novel Just-in-Time Adaptive Digital Physical Activity Intervention.
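For context on the usability figure above: the System Usability Scale is scored with a fixed 0-100 rule over 10 Likert items, and scores around 68 are conventionally considered average, so 75.3 indicates good usability. A minimal sketch (the example responses are hypothetical):

```python
def sus_score(responses):
    """Standard System Usability Scale scoring.

    responses: 10 Likert ratings from 1 (strongly disagree) to 5
    (strongly agree). Odd-numbered items are positively worded
    (contribute rating - 1), even-numbered items negatively worded
    (contribute 5 - rating); the sum is scaled by 2.5 to give 0-100.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```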
The prevalence of dementia is increasing in Canada and in many countries internationally. People living with dementia are highly dependent on family and friend care partners, who may have little knowledge of the disorder. Web-based interventions in dementia have been shown to improve care partner mental health and reduce burden, but few have been widely implemented or rigorously studied. We developed a web- and email-based dementia education platform for care partners, iGeriCare, which includes 12 asynchronous, multimedia, e-learning lessons and email-based content to reinforce the learning. The primary objective of this pilot study is to evaluate the feasibility and care partner acceptance of the intervention, including study methods. Secondary objectives will examine the effectiveness of the educational resources on family care partners' knowledge, self-efficacy, and sense of burden. This study is a 2-arm, pilot, feasibility, randomized controlled trial. A total of 125 family or friend care partners for a person living with dementia-who are residing in Canada, aged 16+ years, and comfortable using the internet and email-will be recruited using coinvestigator networks and Facebook digital marketing advertisements. Participants will be randomly assigned to either the intervention group (receiving the dementia web- and email-based educational intervention) or the control group (receiving an alternate topic e-learning lesson and emails) and will have 8 weeks to complete baseline surveys and the assigned e-learning. After 8 weeks, participants will have 2 weeks to complete poststudy surveys. This protocol will be repeated with a second cohort of 100 care partners recruited from a paid panel service based on learnings from initial feasibility results. Initial recruitment began on September 6, 2022, and concluded on October 2, 2022. A total of 125 participants were randomly assigned to the intervention (n=61) or control (n=64) group. Data collection concluded in January 2023. Preliminary feasibility results showed a substantial number of participants who did not engage with the protocol as intended. A decision was made to recruit a second cohort of participants to address these protocol deviations. Secondary recruitment began on June 12, 2023, and concluded on June 27, 2023. A total of 100 participants were randomly assigned to the intervention (n=53) or control (n=47) group. Data collection concluded in September 2023. Further results will be published in peer-reviewed journals and presented at conferences. This study is investigating the feasibility, acceptability, and effectiveness of a web- and email-based dementia care partner educational intervention. The results of this study will contribute to the planning of a larger randomized controlled trial in the future, as well as the evaluation of innovative, cost-effective, and efficient dementia care partner resources that can complement traditional approaches. ClinicalTrials.gov NCT05114187; https://clinicaltrials.gov/study/NCT05114187. DERR1-10.2196/67048.
Web-Based Education Program for Care Partners of People Living With Dementia (iGeriCare): Protocol for a Pilot Randomized Controlled Trial.
Nurse scheduling is a complex challenge in health care, impacting both patient care quality and nurse well-being. Traditional scheduling methods often fail to consider individual preferences, leading to dissatisfaction, burnout, and high turnover. Inadequate scheduling practices, including restricted autonomy and lack of transparency, can further reduce nurse morale and negatively affect patient outcomes. Research suggests that participative scheduling approaches incorporating nurse preferences can improve job satisfaction. Artificial intelligence (AI) and mathematical optimization methods, such as mixed-integer programming (MIP), constraint programming (CP), genetic programming (GP), and reinforcement learning (RL), offer potential solutions to optimize scheduling and address these challenges. This study aims to develop a framework for integrating nurses' preferences into AI-supported scheduling methods by gathering qualitative insights from nurses and supervisors and mapping these to mathematical and AI-based scheduling techniques. Focus group interviews were conducted with 21 participants (nurses, supervisors, and temporary staff) from Swiss health care institutions to understand experiences and preferences related to staff scheduling. Qualitative data were analyzed using open and axial coding to extract key themes. These themes were then mapped to AI methodologies, including MIP, CP, GP, and RL, based on their suitability to address identified scheduling challenges. The study revealed key priorities in nurse scheduling. Fairness and participation were highlighted by 85% (18/21) of interview participants, emphasizing the need for transparent and inclusive scheduling. Flexibility and autonomy were preferred by 76% (16/21), favoring shift swaps and self-scheduling. AI expectations were mixed: 62% (13/21) saw potential for improved efficiency and fairness, while 38% (8/21) expressed concerns over reliability and loss of human oversight. Mapping to AI methods showed MIP as effective for fair shift allocation, CP for complex rule-based conditions, GP for handling unforeseen absences, and RL for dynamic schedule adaptation in hospital environments. A preliminary AI implementation of MIP in a training hospital unit (35 staff members) showed how to design a system from a mathematical perspective. AI-supported scheduling systems can significantly enhance fairness, transparency, and efficiency in nurse scheduling. However, concerns regarding AI reliability, adaptability to individual needs, and human oversight must be addressed. A hybrid approach integrating AI recommendations with human decision-making may be optimal. Future research should explore the broader implementation of AI-driven scheduling models and assess their impact on nurse satisfaction and patient outcomes over time.
Integrating Nurse Preferences Into AI-Based Scheduling Systems: Qualitative Study.
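The mapping of MIP to fair, preference-aware shift allocation can be made concrete with a toy model. Below is a minimal sketch using the PuLP solver; the nurses, horizon, coverage level, and preference scores are all hypothetical, not the study's hospital system.

```python
import random
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

nurses = ["A", "B", "C", "D"]
days = range(7)
shifts = ["early", "late"]
need = 1  # nurses required per shift

# Hypothetical preference scores (higher = more preferred), e.g. survey-derived.
random.seed(0)
pref = {(n, d, s): random.randint(0, 5) for n in nurses for d in days for s in shifts}

prob = LpProblem("nurse_scheduling", LpMaximize)
x = {(n, d, s): LpVariable(f"x_{n}_{d}_{s}", cat=LpBinary)
     for n in nurses for d in days for s in shifts}

# Objective: maximize total satisfied preference.
prob += lpSum(pref[k] * x[k] for k in x)

# Coverage: each shift staffed by exactly `need` nurses.
for d in days:
    for s in shifts:
        prob += lpSum(x[n, d, s] for n in nurses) == need

# Workload: at most one shift per nurse per day, at most 5 shifts per week.
for n in nurses:
    for d in days:
        prob += lpSum(x[n, d, s] for s in shifts) <= 1
    prob += lpSum(x[n, d, s] for d in days for s in shifts) <= 5

prob.solve()
for (n, d, s), var in x.items():
    if var.value() == 1:
        print(f"day {d}: {n} works {s} (preference {pref[n, d, s]})")
```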
The bipartite consensus (BC) issue for nonlinear multiagent systems (NMASs) with unknown system dynamics information is investigated in this article. Initially, the dynamics of NMASs are represented using the Takagi-Sugeno (T-S) fuzzy model. Subsequently, to achieve distributed control, a minimax game policy is introduced, where each agent aims to minimize its performance index while its neighbors attempt to maximize it. Consequently, the BC problem for NMASs is reformulated as a zero-sum game, transforming the controller design into solving a set of game algebraic Riccati equations (GAREs). To solve such equations, a novel scaling off-policy iteration (PI) algorithm is proposed. The key features of the proposed learning algorithm can be outlined in three main aspects: 1) during the learning process, the reliance on system dynamics is relaxed; 2) compared with the PI method, the requirement for initial admissible control policies is eliminated; and 3) a more rapid convergence speed is achieved than traditional value iteration. Finally, the effectiveness and advantages of the proposed method are validated through a simulation example and a series of comparative experiments.
Reinforcement-Learning-Based Fuzzy Bipartite Consensus for Multiagent Systems: A Novel Scaling Off-Policy Learning Scheme.
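The GAREs above generalize the classical algebraic Riccati equation. As background, here is the classical, model-based Kleinman policy iteration for standard LQR, which illustrates the evaluate-then-improve loop that off-policy schemes build on, including the admissible-initial-policy requirement that the proposed algorithm removes. The system matrices are arbitrary toy values, not the paper's example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Continuous-time LQR: x' = Ax + Bu, cost integral of x'Qx + u'Ru.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Kleinman policy iteration: given a stabilizing gain K, policy evaluation
# solves a Lyapunov equation for P; policy improvement updates K from P.
K = np.array([[0.0, 0.0]])   # A itself is stable here, so K = 0 is admissible
for _ in range(10):
    Ak = A - B @ K
    # Solve (A - BK)^T P + P (A - BK) = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)   # improved gain K = R^{-1} B^T P

# Compare against the direct Riccati solution.
P_star = solve_continuous_are(A, B, Q, R)
print("max |P - P*| =", np.abs(P - P_star).max())
```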
Within veterinary education, there is an increasing shift toward a distributed teaching model, requiring clinicians to assume roles as novice educators. To support their development, the University of Surrey pioneered a training program focused on promoting educational theory and feedback delivery skills. This study investigates the reflections of 79 novice clinical educators on their experiences with structured feedback training, analyzed using inductive thematic analysis. Five key themes emerged: adopting a structured feedback approach, fostering self-assessment and reflection, providing specific and constructive feedback, creating a supportive learning environment, and overcoming challenges in delivering negative feedback. Findings revealed that 99% (n = 79) of educators recognised the importance of structured feedback, advocating for established models to guide delivery. Additionally, 87% (n = 69) highlighted the value of self-reflection, viewing feedback as a two-way dialogue. Specific and constructive feedback was deemed critical by 76% (n = 60), emphasizing the balance between positive reinforcement and areas for improvement. Creating a supportive learning environment was seen as essential by 66% (n = 52) of educators, while 37% (n = 29) acknowledged challenges in delivering negative feedback due to concerns about student demotivation. Training helped reframe negative feedback as a growth opportunity, promoting actionable and constructive guidance. The study suggests redefining "feedback sessions" as "reflective teaching sessions" to better capture the interactive and developmental nature of the process. These findings underscore the necessity of structured training for novice clinical educators, advocating for clear frameworks, reflective dialogue, and a reframed approach to feedback delivery to enhance student learning.
Reflections of Novice Veterinary Clinical Educators on Feedback Training: Insights from a UK Training Programme.
Human brain fiber tractography using diffusion magnetic resonance imaging is a crucial stage in mapping brain white matter structures, pre-surgical planning, and extracting connectivity patterns. Accurate and reliable tractography, by providing detailed geometric information about the position of neural pathways, minimizes the risk of damage during neurosurgical procedures. Both tractography itself and post-processing steps such as bundle segmentation are commonly used in these contexts. Many approaches have been put forward in the past decades, and recently, multiple data-driven tractography algorithms and automatic segmentation pipelines have been proposed to address the limitations of traditional methods. Several of these recent methods are based on learning algorithms that have demonstrated promising results. In this study, in addition to introducing diffusion MRI datasets, we review learning-based algorithms such as conventional machine learning, deep learning, reinforcement learning, and dictionary learning methods that have been used for white matter tract, nerve, and pathway recognition as well as whole-brain streamline or whole-brain tractogram creation. Our contributions are to discuss both tractography and tract recognition methods, to extend previous related reviews with the most recent methods (covering architectures as well as network details), to assess the efficiency of learning-based methods through a comprehensive comparison in this field, and finally to demonstrate the important role of learning-based methods in tractography.
A review on learning-based algorithms for tractography and human brain white matter tracts recognition.
Brain oscillations, or rhythms, coordinate communication across distributed brain networks. These rhythms provide a foundation for the brain network interactions required for cognition. Oscillations coexist with non-rhythmic background aperiodic activity that forms a characteristic 1/f pattern in power spectra. Aperiodic brain activity is associated with cognition and can confound the detection of oscillations. In this Registered Report, we applied time-resolved spectral parameterization to EEG recordings during two common cognitive tasks. Neural dynamics recorded during many cognitive paradigms show similar patterns, including synchronization of mediofrontal theta (4-8 Hz) and desynchronization of posterior alpha (9-13 Hz) and central beta (15-30 Hz). Our results indicate that common task time-frequency (TF) signatures, including mediofrontal theta synchronization and parietal alpha desynchronization, can be attributed primarily to neural oscillatory phenomena. Intriguingly, we uncover evidence of stimulus-locked aperiodic power changes, which are responsive to the need for cognitive control and to reinforcement processing. Furthermore, aperiodic power correlated strongly with non-baseline-corrected total power estimates, whereas oscillatory power correlated strongly with some portions of baseline-corrected power estimates but not with others. Finally, after baseline correction, aperiodic correlations with TF power remained high. These results indicate two primary outcomes. First, task TF signatures in theta and alpha bands reflect primarily parameterized oscillations. Second, aperiodic activity is time-dependent during cognitive processing, and these dynamics are not accounted for by baseline correction.
Oscillatory and Aperiodic Contributions to EEG Event-Related Time-Frequency Metrics During Cognitive Control and Reinforcement Processing: A Registered Report.
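A minimal sketch of the spectral parameterization idea on a synthetic spectrum: the aperiodic 1/f component appears as a line in log-log space and can be fit by linear regression, with the oscillatory peak read from the residual above that fit. Real analyses (e.g., the specparam/FOOOF approach) fit peaks and the aperiodic component jointly; this toy version only illustrates the decomposition.

```python
import numpy as np

# Synthetic power spectrum: 1/f aperiodic component plus an alpha peak.
freqs = np.linspace(2, 40, 200)
offset, exponent = 1.0, 1.2
aperiodic = 10 ** offset / freqs ** exponent
alpha_peak = 0.8 * np.exp(-((freqs - 10) ** 2) / (2 * 1.5 ** 2))
power = aperiodic * 10 ** alpha_peak      # peak is additive in log10 space

# Parameterization: fit the aperiodic 1/f as a line in log-log space,
# then read oscillatory power from the residual above that fit.
logf, logp = np.log10(freqs), np.log10(power)
slope, intercept = np.polyfit(logf, logp, 1)
residual = logp - (intercept + slope * logf)

print(f"fitted aperiodic exponent: {-slope:.2f} (true {exponent})")
print(f"oscillatory peak frequency: {freqs[residual.argmax()]:.1f} Hz (true 10 Hz)")
```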
Supervisory control theory (SCT) is widely used as a safeguard mechanism in the control of discrete event systems (DESs). In complex continuous systems, preventing behavior that violates specifications poses a quite different supervisory control problem. High-dimensional continuous state and action spaces make automaton languages unsuitable for describing specifications, which remains challenging for the control of real physical systems. Reinforcement learning (RL) automatically learns complex decisions through trial and error, but it requires the design of precise reward functions combined with domain knowledge. For complex scenarios where the reward function cannot be specified or provides only sparse rewards, we propose a novel supervised optimal control framework based on trajectory imitation (TI) and reinforcement learning (RL) in this paper. Firstly, behavior cloning (BC) is adopted to pre-train the policy model on a small number of human demonstrations. Secondly, a generative adversarial imitation learning (GAIL) method is carried out to obtain the implicit characteristics of the demonstration data. Furthermore, after the primary and implicit features are extracted by the above steps, a demonstration-based RL algorithm is designed by adding the demonstration data to the RL replay buffer with an augmented loss function to enhance system performance to its maximum potential. Finally, the proposed method is validated through multiple simulation experiments on object relocation and tool use tasks with dexterous multifingered hands. In handling the more complex tool use task, the proposed approach achieves a 19.7% decrease in convergence time compared with the latest method. For both tasks, the proposed method yields policies that display natural movements and show higher robustness than the baseline model.
Supervised optimal control in complex continuous systems with trajectory imitation and reinforcement learning.
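The first stage, behavior cloning, is just supervised regression of expert actions on states. A minimal numpy sketch with a linear policy and synthetic demonstrations (not the dexterous-hand setup):

```python
import numpy as np

rng = np.random.default_rng(3)

# Demonstrations: states and expert actions (here from a known linear expert).
expert_W = np.array([[0.5, -1.0], [2.0, 0.3]])
states = rng.normal(size=(200, 2))
actions = states @ expert_W.T + 0.05 * rng.normal(size=(200, 2))

# Behavior cloning = minimize mean squared error between the policy's
# actions and the demonstrated actions, by gradient descent.
W = np.zeros((2, 2))
lr = 0.05
for epoch in range(500):
    pred = states @ W.T
    grad = (pred - actions).T @ states / len(states)   # d(MSE)/dW
    W -= lr * grad

print("BC policy error:", np.abs(W - expert_W).max())
```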
A large number of in-service reinforced concrete structures are now entering the mid-to-late stages of their service life. Efficient detection of damage characteristics and accurate prediction of material performance degradation have become essential for ensuring the safety of these structures. Traditional damage detection methods, which primarily rely on manual inspections and sensor monitoring, are inefficient and lack accuracy. Similarly, performance prediction models for reinforced concrete materials, which are often based on limited experimental data and polynomial fitting, oversimplify the influencing factors. In contrast, partial differential equation models that account for degradation mechanisms are computationally intensive and difficult to solve. Recent advancements in deep learning and machine learning, as part of artificial intelligence, have introduced innovative approaches for both damage detection and material performance prediction in reinforced concrete structures. This paper provides a comprehensive overview of machine learning and deep learning theories and models, and reviews the current research on their application to the durability of reinforced concrete structures, focusing on two main areas: intelligent damage detection and predictive modeling of material durability. Finally, the article discusses future trends and offers insights into the intelligent innovation of concrete structure durability.
Data-intelligence driven methods for durability, damage diagnosis and performance prediction of concrete structures.
Rigorous study design and analytical standards are required to generate reliable findings in healthcare from artificial intelligence (AI) research. One crucial but often overlooked aspect is the determination of appropriate sample sizes for studies developing AI-based prediction models for individual diagnosis or prognosis. Specifically, the number of participants and outcome events required in datasets for model training and evaluation remains inadequately addressed. Most AI studies do not provide a rationale for their chosen sample sizes and frequently rely on datasets that are inadequate for training or evaluating a clinical prediction model. Among the ten principles of Good Machine Learning Practice established by the US Food and Drug Administration, the UK Medicines and Healthcare products Regulatory Agency, and Health Canada, guidance on sample size is directly relevant to at least three principles. To reinforce this recommendation, we outline seven reasons why inadequate sample size negatively affects model training, evaluation, and performance. Using a range of examples, we illustrate these issues and discuss the potentially harmful consequences for patient care and clinical adoption. Additionally, we address challenges associated with increasing sample sizes in AI research and highlight existing approaches and software for calculating the minimum sample sizes required for model training and evaluation.
Importance of sample size on the quality and utility of AI-based prediction models for healthcare.
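The abstract above points to dedicated approaches and software for minimum sample size calculation, and those should be used in practice. Purely to illustrate why required sample size scales with model complexity and outcome rarity, here is a crude events-per-candidate-parameter heuristic (the 20-events default is an assumption for illustration, not the authors' recommendation):

```python
import math

def min_sample_size_epp(n_predictor_params: int, outcome_prevalence: float,
                        events_per_param: float = 20.0) -> int:
    """Rough events-per-predictor-parameter heuristic (illustrative only;
    formal criteria implemented in dedicated sample-size software are more
    appropriate for real studies)."""
    required_events = n_predictor_params * events_per_param
    return math.ceil(required_events / outcome_prevalence)

# e.g., 15 candidate parameters, 10% outcome prevalence, 20 events per parameter
print(min_sample_size_epp(15, 0.10))  # -> 3000 participants
```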
Healthcare is inherently human-centered, and professionalism is crucial for improving healthcare systems. Traditionally developed through role modeling, professionalism now necessitates explicit teaching. Despite the inclusion of professionalism-related competencies by Indian regulatory bodies in medicine and nursing, its structured teaching remains limited. To address this, we introduced concepts of Professionalism and Professional Identity Formation (PIF) to newly joined faculty at our university, following established frameworks. This report details the workshop process and key lessons learned. A six-member core faculty team with expertise in health professions education conducted a daylong workshop. The morning session focused on contemporary perspectives on professionalism, while the afternoon session addressed PIF. Anonymous post-session feedback was collected via Google Forms, with quantitative responses rated on a five-point Likert scale and qualitative feedback analyzed using Braun and Clarke's six-phase thematic analysis. Three workshops were held in June 2024 for faculty who had joined over the past 2 years. Attendance across the three sessions was 25, 23, and 24 participants, respectively, from medicine, physiotherapy, nursing, and allied health disciplines. Response rates for feedback were 80% (n = 20), 43% (n = 10), and 45% (n = 11). Fifty percent (n = 21) reported high satisfaction, 41% (n = 16) moderate, and 7% (n = 3) low. Participants noted shifts in their perceptions, recognizing professionalism as a learned skill rather than an inherent trait. Key takeaways included the significance of effective communication, empathy, and resilience in shaping professional identity. In terms of educational impact, participants intended to model professionalism, reinforce positive behaviors, and explicitly integrate PIF into their teaching. Proposed strategies included experiential learning, structured orientation, and active learning methods. The workshop was appreciated for its interactive elements, such as group discussions, case-based learning, and open forums. However, areas for improvement included better time management, concise delivery, enhanced multimedia use, and more structured engagement. Participants also suggested extending the workshop to resident doctors and a mixed audience of residents and consultants. The workshop effectively conveyed professionalism and PIF concepts. Future iterations could benefit from refinements in structure and delivery, as well as audience expansion. Our experience may guide similar faculty development initiatives in other healthcare institutions.
Workshop on professionalism and professional identity formation for newly recruited faculty at a healthcare university: lessons learnt.
Recent progress in computational biology has driven the development of machine learning models for predicting protein post-translational modification sites. However, challenges such as data imbalance and limited sequence-context representation continue to hinder prediction accuracy, particularly for less frequent modifications like succinylation. In this study, we propose RLSuccSite, a reinforcement learning-based framework specifically designed to predict succinylation sites by addressing the class imbalance issue via a dynamic, balanced reward mechanism. To enhance sequence feature representation, this study also introduces the Three-Peaks Enhanced Method for Physicochemical Property Scores (TPEM-PPS), a physicochemical property-driven feature extraction method that incorporates position-aware scoring to reflect amino acid contributions more effectively. The code and data of RLSuccSite can be obtained from the website: https://github.com/Zhangqingchao-Ch/RLSuccSite.git . Scientific contribution: This study applies reinforcement learning to protein succinylation site prediction, introducing a dynamic, balanced reward mechanism that effectively addresses dataset imbalance. Additionally, this study proposes a novel Three-Peaks Enhanced Method for Physicochemical Scoring, which captures residue contributions with higher precision than traditional feature extraction techniques.
RLSuccSite: succinylation sites prediction based on reinforcement learning dynamic with balanced reward mechanism and three-peaks enhanced method for physicochemical property scores.
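One plausible reading of a dynamic, class-balanced reward for imbalanced site prediction is to scale per-decision rewards inversely with class frequency, so that the rare succinylated class is not drowned out during learning. A minimal sketch of that idea (not the RLSuccSite mechanism itself, whose details live in the linked repository):

```python
import numpy as np

def balanced_reward(true_label: int, predicted_label: int,
                    class_counts: np.ndarray) -> float:
    """Reward inversely proportional to class frequency: correct calls on the
    rare (positive) class earn more, wrong calls on it cost more.
    Illustrative sketch, not the RLSuccSite reward."""
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    w = weights[true_label]
    return w if predicted_label == true_label else -w

counts = np.array([9000, 1000])        # 9:1 negative:positive imbalance
print(balanced_reward(1, 1, counts))   # correct minority call -> +5.0
print(balanced_reward(0, 0, counts))   # correct majority call -> ~+0.56
```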
Body-part-centered response fields are pervasive in single neurons, functional magnetic resonance imaging, electroencephalography and behavior, but there is no unifying formal explanation of their origins and role. In the present study, we used reinforcement learning and artificial neural networks to demonstrate that body-part-centered fields do not simply reflect stimulus configuration, but rather action value: they naturally arise from the basic assumption that agents often experience positive or negative reward after contacting environmental objects. This perspective successfully reproduces experimental findings that are foundational in the peripersonal space literature. It also suggests that peripersonal fields provide building blocks that create a modular model of the world near the agent: an egocentric value map. This concept is strongly supported by the emergent modularity that we observed in our artificial networks. The short-term, close-range, egocentric map is analogous to the long-term, long-range, allocentric hippocampal map. This perspective fits empirical data from multiple experiments, provides testable predictions and accommodates existing explanations of peripersonal fields.
Egocentric value maps of the near-body environment.
Diabetes Mellitus is a chronic metabolic disorder affecting a substantial global population, leading to complications such as retinopathy, nephropathy, neuropathy, foot problems, heart attacks, and strokes if left unchecked. Prompt detection and diagnosis are crucial in managing and averting these complications. This study compares the effectiveness of a Decision Tree Classifier and an Artificial Neural Network (ANN) in predicting Diabetes Mellitus. The Decision Tree Classifier demonstrated superior performance, achieving a 97.7% accuracy rate compared to the ANN's 94.7%. The Decision Tree Classifier also achieved higher precision (96.9% vs. 88.8%) and recall (96.5% vs. 90.2%) than the ANN, along with a balanced F1 score of 96.5% versus 90.2%. The Matthews Correlation Coefficient (MCC) confirmed a stronger correlation between predictions and actual labels for the Decision Tree Classifier (87.4%) compared to the ANN (78%). Furthermore, the Area Under Curve (AUC) score of 96% for the Decision Tree Classifier was higher than that of the ANN (78%). Feature importance analysis clearly established glycated hemoglobin (HbA1c) as the paramount factor in predicting diabetes mellitus. Diabetic patients showed markedly higher cholesterol and triglycerides, increasing cardiovascular risk, while High Density Lipoprotein (HDL) and Low-Density Lipoprotein (LDL) levels showed no significant difference between diabetics and non-diabetics. However, Very Low-Density Lipoprotein (VLDL) was significantly elevated, suggesting altered lipid transport in diabetes. Body Mass Index (BMI) was also notably higher in diabetics, reinforcing the link between obesity and diabetes risk. Principal component analysis further highlighted five clusters of health-related variables, identifying age-related metabolic indicators (AGE, HbA1c, BMI), kidney function markers (creatinine (Cr), Urea), cardiovascular lipid profiles (Cholesterol, LDL), lipid transport (VLDL), and a protective cardiovascular indicator (HDL). The study highlights the superiority of the Decision Tree Classifier in predicting Diabetes Mellitus, suggesting its potential for significant clinical applications in diagnosis and management.
Data-driven diabetes mellitus prediction and management: a comparative evaluation of decision tree classifier and artificial neural network models along with statistical analysis.
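As a self-contained illustration of the kind of comparison reported above, the following sketch fits a decision tree and a small ANN on synthetic data and prints the same metrics (accuracy, F1, MCC, AUC); the dataset, tree depth, and layer sizes are placeholders, not the study's settings:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef, roc_auc_score

# Synthetic stand-in for a clinical table (features ~ HbA1c, BMI, lipids, ...).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [("Decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
          ("ANN (MLP)", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))]
for name, model in models:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"F1={f1_score(y_te, pred):.3f}",
          f"MCC={matthews_corrcoef(y_te, pred):.3f}",
          f"AUC={roc_auc_score(y_te, proba):.3f}")
```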
The rapid evolution of smart grids, driven by rising global energy demand and renewable energy integration, calls for intelligent, adaptive, and energy-efficient resource allocation strategies. Traditional energy management methods, based on static models or heuristic algorithms, often fail to handle real-time grid dynamics, leading to suboptimal energy distribution, high operational costs, and significant energy wastage. To overcome these challenges, this paper presents ORA-DL (Optimized Resource Allocation using Deep Learning), an advanced framework that integrates deep learning, Internet of Things (IoT)-based sensing, and real-time adaptive control to optimize smart grid energy management. ORA-DL employs deep neural networks, reinforcement learning, and multi-agent decision-making to accurately predict energy demand, allocate resources efficiently, and enhance grid stability. The framework leverages both historical and real-time data for proactive power flow management, while IoT-enabled sensors ensure continuous monitoring and low-latency response through edge and cloud computing infrastructure. Experimental results validate the effectiveness of ORA-DL, achieving 93.38% energy demand prediction accuracy, improving grid stability to 96.25%, and reducing energy wastage to 12.96%. Furthermore, ORA-DL enhances resource distribution efficiency by 15.22% and reduces operational costs by 22.96%, significantly outperforming conventional techniques. These performance gains are driven by real-time analytics, predictive modelling, and adaptive resource modulation. By combining AI-driven decision-making, IoT sensing, and adaptive learning, ORA-DL establishes a scalable, resilient, and sustainable energy management solution. The framework also provides a foundation for future advancements, including integration with edge computing, cybersecurity measures, and reinforcement learning enhancements, marking a significant step forward in smart grid optimization.
A deep learning and IoT-driven framework for real-time adaptive resource allocation and grid optimization in smart energy systems.
Occupational stress is a major concern for employers and organizations as it compromises decision-making and overall safety of workers. Studies indicate that work-stress contributes to severe mental strain, increased accident rates, and in extreme cases, even suicides. This study aims to enhance early detection of occupational stress through machine learning (ML) methods, providing stakeholders with better insights into the underlying causes of stress to improve occupational safety. Utilizing a newly published workplace survey dataset, we developed a novel feature selection pipeline identifying 39 key indicators of work-stress. An ensemble of three ML models achieved a state-of-the-art accuracy of 90.32%, surpassing existing studies. The framework's generalizability was confirmed through a three-step validation technique: holdout-validation, 10-fold cross-validation, and external-validation with synthetic data generation, achieving an accuracy of 89% on unseen data. We also introduced a 1D-CNN to enable hierarchical and temporal learning from the data. Additionally, we created an algorithm to convert tabular data into texts with 100% information retention, facilitating domain analysis with large language models, revealing that occupational stress is more closely related to the biomedical domain than clinical or generalist domains. Ablation studies reinforced our feature selection pipeline, and revealed sociodemographic features as the most important. Explainable AI techniques identified excessive workload and ambiguity (27%), poor communication (17%), and a positive work environment (16%) as key stress factors. Unlike previous studies relying on clinical settings or biomarkers, our approach streamlines stress detection from simple survey questions, offering a real-time, deployable tool for periodic stress assessment in workplaces.
Early detection of occupational stress: Enhancing workplace safety with machine learning and large language models.
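The tabular-to-text conversion with full information retention can be as simple as serializing every field-value pair into a sentence, since no field or value is dropped. A hypothetical sketch (the field names are invented; the paper's actual algorithm is not reproduced here):

```python
def row_to_text(row: dict) -> str:
    """Serialize one survey row into a lossless natural-language sentence
    (hypothetical field names; illustrative of the idea, not the paper's code)."""
    clauses = [f"{field.replace('_', ' ')} is {value}" for field, value in row.items()]
    return "The respondent's " + "; ".join(clauses) + "."

row = {"age": 34, "weekly_hours": 52, "role_ambiguity": "high", "supervisor_support": "low"}
print(row_to_text(row))
# -> "The respondent's age is 34; weekly hours is 52; role ambiguity is high;
#     supervisor support is low."
```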
Indirect punishment traditionally sustains cooperation in social systems through reputation or norms, often by reducing defectors' payoffs indirectly. In this study, we redefine indirect punishment for structured populations as a spatially explicit mechanism, where individuals on a square lattice target second-order defectors-those harming their neighbors-rather than their own immediate defectors, guided by the principle: "I help you by punishing those who defect against you". Using evolutionary simulations, we compare this adapted indirect punishment to direct punishment, where individuals punish immediate defectors. Results show that within a narrow range of low punishment costs and fines, adapted indirect punishment outperforms direct punishment in promoting cooperation. However, outside this cost-fine region, outcomes vary: direct punishment may excel, both may be equally effective, or neither improves cooperation, depending on the parameter values. These findings hold even when network reciprocity alone does not support cooperation. Notably, when adapted indirect punishment outperforms direct punishment in promoting cooperation, defectors face stricter penalties without appreciably increasing punishers' costs, making it more efficient than direct punishment. Overall, our findings provide insights into the role of indirect punishment in structured populations and highlight its importance in understanding the evolution of cooperation.
Indirect punishment can outperform direct punishment in promoting cooperation in structured populations.
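A minimal simulation of the adapted indirect punishment rule ("I help you by punishing those who defect against you") on a square lattice might look like the following; the donation-game payoffs, fine, punishment cost, and Fermi update temperature are illustrative choices, not the paper's exact parameterization:

```python
import numpy as np

# Sketch: adapted indirect punishment on a periodic square lattice.
L, b, c = 50, 1.0, 0.3            # lattice size, benefit and cost of cooperation
fine, pun_cost, K = 0.4, 0.1, 0.1  # punishment fine, punisher's cost, selection noise
rng = np.random.default_rng(0)
coop = rng.integers(0, 2, size=(L, L))   # 1 = cooperator (and punisher), 0 = defector

def neighbors(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    p = 0.0
    for nx, ny in neighbors(x, y):
        p += b * coop[nx, ny] - c * coop[x, y]       # donation game with each neighbor
        if coop[x, y]:                               # punish second-order defectors:
            for mx, my in neighbors(nx, ny):         # those harming (x, y)'s neighbors
                if (mx, my) != (x, y) and not coop[mx, my]:
                    p -= pun_cost                    # punisher pays a cost per act
    if not coop[x, y]:                               # defector is fined by each punishing
        for nx, ny in neighbors(x, y):               # second-order neighbor
            for mx, my in neighbors(nx, ny):
                if (mx, my) != (x, y) and coop[mx, my]:
                    p -= fine
    return p

for step in range(20000):                            # asynchronous Fermi imitation updates
    x, y = rng.integers(L), rng.integers(L)
    nx, ny = neighbors(x, y)[rng.integers(4)]
    if coop[x, y] != coop[nx, ny]:
        prob = 1.0 / (1.0 + np.exp((payoff(x, y) - payoff(nx, ny)) / K))
        if rng.random() < prob:
            coop[x, y] = coop[nx, ny]

print("final cooperation level:", coop.mean())
```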
Homeopathy and Yoga are rooted in holistic paradigms emphasizing the unity of body, mind, and spirit. In recent years, Yoga has been increasingly integrated into the medical education system. Integrating yoga with homeopathic education presents a unique opportunity to reinforce shared healing philosophies while enhancing learner outcomes. This narrative review investigates the scope of systematic integration of yoga into homeopathic medical education, examining its potential to improve student well-being and holistic care delivery. Literature published from 2012 onwards was systematically searched in the Scopus, Web of Science, PubMed, and Embase databases using keywords related to yoga, homeopathy, and medical education. Professional guidelines, policy documents, and regulatory frameworks were also analysed. Data synthesis employed thematic analysis to identify key patterns and recommendations. Evidence suggests that structured yoga practices can enhance stress management, self-awareness, and empathetic patient interactions among medical students. The alignment between yoga's principles and homeopathy's holistic approach can strengthen students' understanding of integrative healthcare. Successful implementation requires coordinated efforts in curriculum design, faculty development, institutional support, and formal assessment strategies, along with the availability of resources. Educational institutions can utilize phased implementation approaches, elective modules, and digital learning platforms to facilitate integration. Incorporating yoga into homeopathic medical education demonstrates promise in reinforcing holistic healing principles and enhancing clinical competencies. Future research should prioritize longitudinal studies and controlled trials to measure specific educational and clinical outcomes. Keywords: homeopathy; yoga; medical education; complementary medicine; holistic care; curriculum integration.
Scope and Role of Integrating Yoga in Homeopathic Medical Education: An Explorative Review.
The sense of agency refers to the subjective experience of controlling one's own actions and their outcomes. While agency is often thought to increase with better performance, it remains unclear how it evolves during learning. In this study, we investigated how the sense of agency changes as individuals learn when to act through reinforcement-based adaptation. We used intentional binding (IB)-a widely used, though debated, proxy measure for agency-related processes-to track temporal compression between actions and outcomes during a time-based learning task. Across four experiments, we found that IB decreased with learning, but only when feedback was imprecise yet stable, and when the outcome used to probe IB was irrelevant to the learning task. These results suggest that agency-related processes, as indexed by IB, may diminish when adaptation guides action selection, and when the outcome becomes less epistemically relevant. We discuss the possible implications of these changes in IB with learning for the sense of agency.
Intentional binding decreases during learning: implications for sense of agency.
The Iowa gambling task (IGT) is widely used to study risky decision-making and learning from rewards and punishments. Although numerous cognitive models have been developed using reinforcement learning frameworks to investigate the processes underlying the IGT, no single model has consistently been identified as superior, largely due to the overlooked importance of model flexibility in capturing choice patterns. This study examines whether human reinforcement learning models adequately capture key experimental choice patterns observed in IGT data. Using simulation and parameter space partitioning (PSP) methods, we explored the parameter space of two recently introduced models-Outcome-Representation Learning and Value plus Sequential Exploration-alongside four traditional models. PSP, a global analysis method, maps which choice patterns a model can generate across its entire parameter space, thereby providing insight into model flexibility. The PSP study revealed varying potential among the candidate models to generate the relevant choice patterns in the IGT, suggesting that model selection may depend on the specific choice patterns present in a given dataset. We investigated central choice patterns and fitted all models by analyzing a comprehensive data pool (N = 1428) comprising 45 behavioral datasets from both healthy and clinical populations. Applying Akaike and Bayesian information criteria, we found that the Value plus Sequential Exploration model outperformed the others due to its balanced potential to generate all experimentally observed choice patterns. These findings suggest that the search for a suitable IGT model may have reached its conclusion, emphasizing the importance of aligning a model's parameter space with experimentally observed choice patterns for achieving high accuracy in cognitive modeling.
Do Human Reinforcement Learning Models Account for Key Experimental Choice Patterns in the Iowa Gambling Task?
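To make the model-flexibility question concrete, a basic delta-rule learner with softmax choice, of the family the compared models extend, can be simulated on a simplified IGT payoff scheme as follows (deck payoffs and parameter values are rough stand-ins for the canonical task, for illustration only):

```python
import numpy as np

# Minimal delta-rule learner choosing among the four IGT decks via softmax.
rng = np.random.default_rng(1)

def draw_outcome(deck):
    # Simplified deck generators (wins minus losses, scaled down): decks 0-1
    # are "bad" (high reward, larger losses), decks 2-3 are "good".
    win = [100, 100, 50, 50][deck]
    loss = [250, 1250, 50, 250][deck]
    p_loss = [0.5, 0.1, 0.5, 0.1][deck]
    return (win - (loss if rng.random() < p_loss else 0)) / 100.0

alpha, beta, Q = 0.2, 2.0, np.zeros(4)       # learning rate, inverse temperature, deck values
choices = []
for t in range(100):
    p = np.exp(beta * Q); p /= p.sum()       # softmax choice rule
    a = rng.choice(4, p=p)
    Q[a] += alpha * (draw_outcome(a) - Q[a])  # delta-rule value update
    choices.append(a)

print("good-deck choice rate:", np.mean([c >= 2 for c in choices]))
```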
Dental disease is a prevalent chronic condition associated with substantial financial burden, personal suffering, and increased risk of systemic diseases. Despite widespread recommendations for twice-daily tooth brushing, adherence to recommended oral self-care behaviors remains sub-optimal due to factors such as forgetfulness and disengagement. To address this, we developed Oralytics, an mHealth intervention system designed to complement clinician-delivered preventative care for marginalized individuals at risk for dental disease. Oralytics incorporates an online reinforcement learning algorithm to determine optimal times to deliver intervention prompts that encourage oral self-care behaviors. We have deployed Oralytics in a registered clinical trial. The deployment required careful design to manage challenges specific to the clinical trial setting in the U.S. In this paper, we (1) highlight key design decisions of the RL algorithm that address these challenges and (2) conduct a re-sampling analysis to evaluate algorithm design decisions. A second phase (randomized controlled trial) of Oralytics is planned to start in spring 2025.
A Deployed Online Reinforcement Learning Algorithm In An Oral Health Clinical Trial.
Elevated ovarian hormones during fear extinction can enhance fear extinction memory retention and reduce fear renewal, but the mechanisms remain unknown. High levels of ovarian hormones are associated with heightened dopamine (DA) transmission, a key player in fear extinction. In males, stimulation of substantia nigra (SN) DA neurons during fear extinction reduces renewal, an effect mimicked by DA D1 receptor agonist administration into the dorsolateral striatum (DLS), a primary target of the SN. The current studies tested the role of the SN-DLS pathway in estrous cycle modulation of fear extinction and relapse. Male and female Long-Evans rats were used to investigate the effects of sex and ovarian hormone levels during fear extinction on later fear relapse and underlying mechanisms. Fear extinction-induced cFos in SN DA neurons was quantified with double-label immunohistochemistry. An intersectional chemogenetic approach was used to determine whether SN-DLS pathway activity during fear extinction is necessary and sufficient for observed effects of ovarian hormones on fear relapse. Finally, fast scan cyclic voltammetry revealed the effects of sex and ovarian hormones on electrically-evoked DA release in the DLS and verified the effectiveness of chemogenetic approaches. Female rats exposed to fear extinction during proestrus or estrus (Pro/Est; high hormones) had less relapse (renewal and spontaneous recovery) compared to males or females exposed to fear extinction during metestrus or diestrus (Met/Di; low hormones). Fear extinction-induced cFos within SN DA neurons and electrically-evoked DA release in the DLS were highest in female rats during Pro/Est. The behavioral and neurochemical effects of Pro/Est were mimicked by estradiol administration to ovariectomized female rats. Inhibition of the SN-DLS pathway suppressed electrically-evoked DA release in the DLS and restored fear renewal in females exposed to simultaneous fear extinction and SN-DLS inhibition during Pro/Est. Conversely, stimulation of the SN-DLS pathway during extinction reduced fear renewal in males. Results indicate that ovarian hormones present during fear extinction reduce later fear relapse through a SN-DLS dopamine pathway. The data suggest the SN-DLS DA pathway is a novel target for the reduction of fear relapse in both sexes.
High ovarian hormones present during fear extinction reduce fear relapse through a nigrostriatal dopamine pathway.
This study evaluated the educational impact of dental seminars involving practicing pharmacists and dentists on pharmacy students. Its primary objective was to assess changes in learning attitudes and improvements in oral care knowledge through lectures and group discussions. The dental seminars were conducted during October-November 2024, and participants included 14 second-year pharmacy students. Participants completed surveys before and after the seminars that addressed topics such as "antibiotics in dental care" and "pharmacist-dentist collaboration." Survey results indicated that the proportion of students recognizing the relationship between pharmacy and dentistry increased from 71.5 to 100%. Furthermore, the percentage of students who expressed a desire to learn more about dentistry increased from 42.9 to 85.7%. Knowledge assessments revealed significant improvements in the understanding of appropriate use of antibiotics for tooth extraction and oral care during cancer therapy. However, no improvement was observed in foundational knowledge of dental anatomy, underscoring the necessity of reinforcing basic education. This study provided preliminary insights into the effectiveness of incorporating dental education into the pharmacy curricula. These findings can contribute to the development and refinement of educational programs at other universities and institutions.
[Educational Effects of Introducing a Dental Seminar for Pharmacy Students: Questionnaire on Changes in Learning Attitudes and Knowledge Improvement].
Despite empirical support for goal-directed behavior models of dependence, the role of mood in substance use is unclear. The Reinforcer Pathology (RP) model may be useful for describing its specific effects on substance-related variables. This study aims to test the effect of mood induction on tobacco demand and to integrate the results into the RP model. Sixty-two participants from the general population, aged 18-34, who smoked at least five cigarettes daily and presented no severe mental health conditions completed the study using a two-group design (between-subject factor: pleasant vs unpleasant mood induction; within-subject factor: pre- vs post-induction). They completed measures of mood status, tobacco reinforcing efficacy, delay discounting, depressive, anxiety and stress symptoms, environmental reinforcement, negative/positive urgency and tobacco-related/free reinforcement. Before mood induction, all participants were sated with nicotine after being asked to smoke freely. While pleasant mood reduced intensity, Omax and breakpoint and increased elasticity, unpleasant mood produced the opposite pattern. This effect was dose dependent and effect sizes were large (f = 0.39-0.50). Mood induction did not significantly affect delay discounting. The association between classical RP variables and new candidates (emotional symptoms, negative/positive urgency, tobacco-related/free reinforcement) was differently influenced by mood valence (r = |.359-.532|). Results support the goal-directed behavior model of dependence and extend the RP model by integrating the role of mood induction. The effect of mood appears particularly large on intensity, Omax, and elasticity, and this effect may depend on emotional regulation skills and contextual variables, such as substance-free reinforcement and environmental reward.
Unpleasant mood reverses satiety's effect on tobacco reinforcement.
Achieving highly efficient treatment planning in intensity-modulated radiotherapy (IMRT) is challenging due to the complex interactions between radiation beams and the human body. The introduction of artificial intelligence (AI) has automated treatment planning, significantly improving efficiency. However, existing automatic treatment planning agents often rely on supervised or unsupervised AI models that require large datasets of high-quality patient data for training. Additionally, these networks are generally not universally applicable across patient cases from different institutions and can be vulnerable to adversarial attacks. Deep reinforcement learning (DRL), which mimics the trial-and-error process used by human planners, offers a promising new approach to address these challenges. This work aims to develop a stochastic policy-based DRL agent for automatic treatment planning that facilitates effective training with limited datasets, universal applicability across diverse patient datasets, and robust performance under adversarial attacks. We employ an actor-critic with experience replay (ACER) architecture to develop the automatic treatment planning agent. This agent operates the treatment planning system (TPS) for inverse treatment planning by automatically tuning treatment planning parameters (TPPs). We use prostate cancer IMRT patient cases as our testbed, which includes one target and two organs at risk (OARs), along with 18 discrete TPP tuning actions. The network takes dose-volume histograms (DVHs) as input and outputs a policy for effective TPP tuning, accompanied by an evaluation function for that policy. Training utilizes DVHs from treatment plans generated by an in-house TPS under randomized TPPs for a single patient case, with validation conducted on two other independent cases. Both online asynchronous learning and offline, sample-efficient experience replay methods are employed to update the network parameters. After training, six groups, comprising more than 300 initial treatment plans drawn from three datasets, were used for testing. These groups have beam and anatomical configurations distinct from those of the training case. The ProKnow scoring system for prostate cancer IMRT, with a maximum score of 9, is used to evaluate plan quality. The robustness of the network is further assessed through adversarial attacks using the fast gradient sign method (FGSM). Despite being trained on treatment plans from a single patient case, the network converges efficiently when validated on two independent cases. For testing performance, the mean ± standard deviation of the plan scores across all test cases before ACER-based treatment planning is 6.17 ± 1.90. After implementing ACER-based treatment planning, 92.29% of the cases achieve a perfect score of 9, with only 6.65% scoring between 8 and 9, and no cases below 7. The corresponding mean ± standard deviation is 8.92 ± 0.29. This performance highlights the ACER agent's high generality across patient data from various sources. Further analysis indicates that the ACER agent effectively prioritizes reasonable TPP tuning actions over obviously unsuitable ones by several orders of magnitude, showing its efficacy. Additionally, results from FGSM attacks demonstrate that the ACER-based agent remains comparatively robust against various levels of perturbation. We successfully trained a DRL agent using the ACER technique for high-quality treatment planning in prostate cancer IMRT. It achieves high generality across diverse patient datasets and exhibits high robustness against adversarial attacks.
Actor critic with experience replay-based automatic treatment planning for prostate cancer intensity modulated radiotherapy.
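The actor-critic interface described above, DVH in, TPP-tuning policy and value estimate out, can be sketched as below. Layer sizes, the number of dose bins, and the DVH encoding are assumptions for illustration; the ACER training machinery (off-policy correction, trust region, replay) is omitted:

```python
import torch
import torch.nn as nn

# Minimal actor-critic interface: map a DVH summary to a policy over 18
# discrete TPP-tuning actions plus a value estimate (shapes are illustrative).
class TPPTuningAgent(nn.Module):
    def __init__(self, dvh_bins=100, n_structures=3, n_actions=18):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(dvh_bins * n_structures, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU())
        self.policy_head = nn.Linear(128, n_actions)   # actor: TPP tuning policy
        self.value_head = nn.Linear(128, 1)            # critic: policy evaluation

    def forward(self, dvh):
        h = self.backbone(dvh.flatten(start_dim=1))
        return torch.softmax(self.policy_head(h), dim=-1), self.value_head(h)

agent = TPPTuningAgent()
dvh = torch.rand(1, 3, 100)                 # one target + two OARs, 100 dose bins each
policy, value = agent(dvh)
action = torch.multinomial(policy, 1)       # stochastic policy: sample a TPP adjustment
print(action.item(), value.item())
```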
Integrating human support with chatbot-based behavior change interventions raises three challenges: (1) attuning the support to an individual's state (e.g., motivation) for enhanced engagement, (2) limiting the use of the human resources involved for enhanced efficiency, and (3) optimizing outcomes with respect to ethical aspects (e.g., fairness). Therefore, we conducted a study in which 679 smokers and vapers had a 20% chance of receiving human feedback between five chatbot sessions. We find that having received feedback increases retention and effort spent on preparatory activities. However, analyzing a reinforcement learning (RL) model fit on the data shows there are also states where not providing feedback is better. Even this "standard" benefit-maximizing RL model is value-laden: it not only prioritizes people who would benefit most, but also those who are already doing well and want feedback. We show how four other ethical principles can be incorporated to favor other smoker subgroups; yet, interdependencies exist.
Psychological, economic, and ethical factors in human feedback for a chatbot-based smoking cessation intervention.
Obstructive sleep apnea (OSA) may impact outcomes in acute coronary syndrome (ACS) patients. The Global Registry of Acute Coronary Events (GRACE) score assesses cardiovascular risk post-ACS. This study evaluated whether incorporating the STOP-BANG score (a surrogate for OSA) enhances GRACE's predictive ability. A total of 227 myocardial infarction (MI) patients were included, with 66 (29.07%) experiencing in-hospital cardiovascular events. Patients with events were older, predominantly male, and had worse clinical markers, including lower hemoglobin and ejection fraction and higher RDW, creatinine, CRP, and GRACE scores (p < 0.001). While STOP-BANG was higher in event patients, risk group classification was non-significant (p = 0.3). Three models were trained: (1) all selected features, (2) GRACE alone, and (3) GRACE + STOP-BANG. The Extra Trees Classifier performed best (ROC-AUC = 0.82). Adding STOP-BANG improved the F1-score, accuracy, and precision but had a non-significant effect on ROC-AUC. The decision curve analysis showed an increased net benefit when STOP-BANG was incorporated. Feature importance analysis ranked STOP-BANG highest in models, reinforcing its relevance. While this study showed that STOP-BANG improved risk stratification, further multicenter validation is needed to confirm its clinical utility in ACS risk models.
Incorporating the STOP-BANG questionnaire improves prediction of cardiovascular events during hospitalization after myocardial infarction.
Event-related desynchronization (ERD) and event-related synchronization (ERS) are critical neurophysiological phenomena associated with motor execution and inhibitory processes. Their utility spans neurophysiological biomarker research and Brain-Computer Interface (BCI) development. However, standardized frameworks for analyzing ERD and ERS oscillations across motor tasks and frequency ranges remain scarce. This study conducted a cross-sectional analysis of 76 healthy participants from the DEFINE cohort to explore ERD and ERS variations across four motor-related tasks (Motor Execution, Motor Imagery, Active Observation, and Passive Observation) and six frequency bands (Delta, Theta, Low Alpha, High Alpha, Low Beta, and High Beta) using C3 electrode activity. Repeated measures ANOVA revealed task-sensitive ERD and ERS power modulations, with oscillatory responses spanning the 1-30 Hz spectrum. Beta activity exhibited pronounced differences between tasks, highlighting its relevance in motor control, while other bands showed distinct task-dependent variations. These findings underscore the variability in ERD/ERS patterns across different tasks and frequency bands, reinforcing the importance of further research into standardized analytical frameworks. By refining ERD/ERS analyses, our study contributes to developing reference frameworks that can enhance clinical and Brain-Computer Interface (BCI) applications.
Dynamics of sensorimotor-related brain oscillations: EEG insights from healthy individuals in varied upper limb movement conditions.
Defensive reactivity (startle) is increased during anticipation of temporally unpredictable (versus predictable) threat. Startle also seems to be potentiated during reward anticipation, yet how this is affected by temporal unpredictability had not previously been examined. In addition to unpredictability, between-subject differences in how people prepare for and attempt to regulate their response to motivationally salient events might affect defensive reactivity in response to reward. Specifically, contrast avoidance is the self-reported tendency to avoid shifts in emotion, and although typically studied in relation to negative events, it is theorized to apply to positive events, which can set the stage for greater downward shifts in emotion. We used a novel paradigm-the no (N) reward, predictable (P) reward, and unpredictable (U) reward task-to examine the effects of temporal unpredictability and individual differences in contrast avoidance on the startle eyeblink and an EEG component, the reward positivity (RewP), during anticipation and receipt of rewarding feedback. Sixty-five participants performed the NPU-reward task during EEG and EMG data collection and completed the Contrast Avoidance Questionnaire (Worry version). Startle eyeblinks were potentiated during P versus N reward cues only (i.e., not U > N). By contrast, the RewP was larger for both P and U compared to N reward feedback. In addition, individuals with greater contrast avoidance had larger startle eyeblinks during P compared to U reward inter-stimulus intervals. Therefore, the timing of reward delivery may be important in modulating anticipatory defensive reflexes, and contrast avoidance may interfere with reductions in defensive reactivity following rewarding feedback.
Wait for It: Defensive Reactivity and Individual Differences in Contrast Avoidance in the NPU-Reward Task.
Deep learning (DL) has been used to differentiate papilledema from healthy eyes and optic disc elevation on fundus photos. As we described optic nerve head (ONH) and peripapillary retina (PPR) optical coherence tomography (OCT) features that distinguish non-arteritic anterior ischemic optic neuropathy (NAION) from papilledema, we hypothesized that a DL approach using the full 3D OCT volume could reliably differentiate NAION, papilledema and healthy eyes. This retrospective review analyzed OCT scans from eyes with acute NAION, papilledema, and healthy eyes from randomized and non-randomized clinical trials. We investigated a total of 4619 raw spectral domain ONH volume scans from 1539 eyes, including 1138 from eyes with idiopathic intracranial hypertension (IIH, Frisén grade ≥ 1), 648 from eyes with acute NAION, and 2833 scans from healthy eyes. We performed external validation on an additional 1663 scans from 742 eyes across these groups. We fine-tuned three ResNet 3D-18 models: one with the entire OCT volume, one with the PPR, and one with the optic nerve head excluding the PPR. We then evaluated the models on an external validation set. The primary outcome measures were accuracy, area under the Receiver Operating Characteristic curve (AUC-ROC), and weighted precision, recall, and F1 scores. Our model classified the three conditions using the entire scan with an internal validation accuracy of 94.9%, macro-average AUC-ROC of 0.986 with weighted F1 scores ranging from 0.93-0.95. In external validation, the entire scan model had an accuracy of 90.1% with a macro-average AUC-ROC of 0.977 and weighted F1-score range of 0.89-0.94. The PPR alone model attained an accuracy of 94.2%, with a macro-average AUC-ROC of 0.966 and weighted F1-score range of 0.81-0.88. The ONH alone model reached an accuracy of 85.0% with an AUC-ROC of 0.965 and weighted F1-score range of 0.84-0.89. Our findings demonstrate that the model using the whole ONH OCT scan is a robust diagnostic tool for differentiating causes of swollen ONH. Changes in the PPR due to ONH swelling as well as ONH alone can also differentiate the disorders. The results reinforce the potential of automated approaches in assisting in the diagnosis of acquired optic disc swelling.
Deep Learning Differentiates Papilledema, NAION, and Healthy Eyes with Unsegmented 3D OCT Volumes.
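A sketch of the 3D ResNet-18 fine-tuning setup for three-way classification, using the torchvision video variant of that architecture; the OCT-volume preprocessing into a (channels, depth, height, width) tensor and the full training loop are assumed, not taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# Sketch: fine-tune a 3D ResNet-18 for three classes (NAION, papilledema,
# healthy). The paper fine-tunes pretrained models; weights are omitted here.
model = r3d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 3)    # 3-way diagnostic head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for preprocessed OCT volumes:
# (batch, channels, depth, height, width)
volumes = torch.randn(2, 3, 32, 112, 112)
labels = torch.tensor([0, 2])                    # e.g., 0=NAION, 1=papilledema, 2=healthy
logits = model(volumes)
loss = criterion(logits, labels)
loss.backward(); optimizer.step()
print(logits.softmax(dim=-1))
```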
Anhedonia, the loss of pleasure, is prevalent and impairing. Parsing its computational basis promises to explain its transdiagnostic character. One manifestation of anhedonia-reward insensitivity-may be linked to limited memory. Further, the need to economize on limited memory engenders a perseverative bias towards frequently chosen actions. Anhedonia may also be linked with deviations from optimal perseveration for a given memory capacity, a pattern that causes inefficiency because it results in less reward for the same memory cost. To test these hypotheses, we apply a theory of optimal decision-making under memory constraints that decomposes behavior into a memory component and an efficiency component. We apply this theory to behavior on the Probabilistic Reward Task, a reward learning paradigm validated in anhedonia, and perform secondary analysis of a randomized controlled trial testing κ-opioid receptor (KOR) antagonism for anhedonia (N=24 KOR; N=31 placebo), as well as analyses of three other datasets (N=100, 66, 24 respectively). We fit a resource-bounded reinforcement-learning model to behavior. Across clinical and nonclinical populations, anhedonia is associated with deficits in efficiency but not memory. The reinforcement learning models demonstrate that deficits in efficiency arise from the inability to perseverate optimally. KOR antagonism, which likely elevates tonic dopamine, increases both memory and efficiency, and the model demonstrates that this arises from increased reward sensitivity and perseveration. KOR antagonism therefore has distinct cognitive effects, only one related to anhedonia. These findings have potential implications for the applications of KOR antagonists.
Computationally-informed insights into anhedonia and treatment by k-opioid receptor antagonism.
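Reward sensitivity and perseveration, the two ingredients the modeling results above turn on, appear in many simple RL models as a reward-scaling parameter and a stickiness bonus for the previous choice. A generic sketch on a two-armed probabilistic reward task (illustrative only, not the authors' resource-bounded model):

```python
import numpy as np

# Generic RL model with reward sensitivity (rho) and perseveration (kappa)
# on a rich/lean two-armed task of the Probabilistic Reward Task family.
rng = np.random.default_rng(0)
alpha, rho, beta, kappa = 0.3, 0.8, 3.0, 0.5
Q, prev, choices = np.zeros(2), None, []
for t in range(200):
    stick = np.array([kappa if a == prev else 0.0 for a in (0, 1)])
    p = np.exp(beta * Q + stick); p /= p.sum()            # softmax with stickiness bonus
    a = rng.choice(2, p=p)
    reward = float(rng.random() < (0.7 if a == 0 else 0.3))  # rich vs lean arm
    Q[a] += alpha * (rho * reward - Q[a])                 # update scaled by reward sensitivity
    prev = a; choices.append(a)

print("rich-arm choice rate:", np.mean(np.array(choices) == 0))
```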
Cognitive control is a fundamental ability that enables the detection and resolution of conflict. However, this ability is not encapsulated but is susceptible to learning and motivational influences. Among these, previous studies have shown that the contingency created between conflict and performance by means of feedback, as well as its actual motivational value, influences the behavioral manifestations of cognitive control. In this EEG study, we sought to shed light on the brain mechanisms underlying this modulation. To this end, fifty-eight participants performed a confound-minimized Stroop task wherein either congruent (i.e., no-conflict) or incongruent (i.e., conflict) trials were selectively reinforced by performance feedback at the block level. Moreover, this feedback was either negative or neutral. Behaviorally, we replicated previous results showing that conflict adaptation slightly improved when congruent trials were reinforced, whereas the reinforcement of incongruent trials led to a reduction of the congruency effect instead. Interestingly, at the EEG level, we found that this dissociation was captured by different event-related potentials (ERPs, as well as frontal alpha), but not by mid-frontal theta (MFT), which was increased by conflict and performance feedback throughout. When incongruent trials were reinforced by the feedback, mostly the stimulus-locked N450 and the preceding occipital P1 component changed. In comparison, when congruent trials were selectively reinforced, the feedback-locked P3 component was altered. These findings suggest that, depending on the specific contingency created between conflict and performance feedback, either stimulus-locked or feedback-locked brain processes guide the implementation of cognitive control.
Electrophysiological evidence for flexible adjustments in cognitive control depending on feedback's contingency.
The rapid advancement of Artificial Intelligence (AI)-driven recommendation systems in healthcare presents significant economic implications, particularly in the context of neurological disorders. These systems offer opportunities to enhance diagnostic accuracy, optimize resource allocation, and improve patient outcomes. However, conventional economic models fail to address the dynamic complexities of AI integration in healthcare, including market inefficiencies and stakeholder behaviors. To bridge this gap, we propose a Dynamic Equilibrium Model for Health Economics (DEHE), incorporating reinforcement learning and stochastic optimization. This model captures uncertainty in healthcare decision-making and includes dynamic pricing, behavioral incentives, and adaptive insurance premium mechanisms. Our experimental results demonstrate that DEHE improves economic efficiency by optimizing AI-driven recommendations while balancing healthcare cost and accessibility. Through multi-agent simulations, the model shows strong real-world applicability and stability. It effectively addresses asymmetric information, moral hazard, and market dynamics. This study offers a novel economic framework for integrating AI-driven systems in neurological healthcare. We recommend the adoption of adaptive policy mechanisms and stakeholder-specific incentives to enhance cost-effectiveness and equitable access. These insights contribute to the development of more sustainable and inclusive AI-based healthcare policies.
Economic implications of artificial intelligence-driven recommended systems in healthcare: a focus on neurological disorders.
Online question-and-answer (Q&A) systems based on Large Language Models (LLMs) have progressively expanded from recreational to professional use. However, beginners in programming often struggle to correct code errors independently, which limits their learning efficiency. This paper proposes a Multi-Agent framework with environmental reinforcement learning (E-RL) for code correction, called the Code Learning (Co-Learning) community, to help beginners correct code errors independently. The framework evaluates the performance of multiple LLMs on an original dataset of 702 error codes and uses the outcome as the reward or punishment criterion for E-RL; it analyzes each input error code with the current agent and selects the most appropriate LLM-based agent to maximize error correction accuracy and reduce correction time. Experimental results showed a 3% improvement in precision and a 15% reduction in time cost compared with the no-E-RL method. The results indicate that integrating E-RL with a multi-agent selection strategy can effectively enhance both the accuracy and efficiency of LLM-based code correction systems, making them more practical for educational and professional programming support scenarios.
Co-Learning: code learning for multi-agent reinforcement collaborative framework with conversational natural language interfaces.
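The agent-selection layer can be viewed as a bandit problem: each LLM-based corrector is an arm, and the reward (or punishment) is whether its corrected code passes evaluation. An epsilon-greedy sketch with simulated pass rates (the agent names and success probabilities are hypothetical stand-ins, not the paper's setup):

```python
import random

# Epsilon-greedy selection over multiple LLM-based correction agents,
# rewarded by whether the corrected code passes evaluation (simulated here).
agents = ["llm_a", "llm_b", "llm_c"]
value = {a: 0.0 for a in agents}      # running estimate of correction success
count = {a: 0 for a in agents}

def select_agent(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(agents)               # explore
    return max(agents, key=lambda a: value[a])     # exploit best corrector so far

def update(agent, reward):
    count[agent] += 1
    value[agent] += (reward - value[agent]) / count[agent]   # incremental mean

for error_code in range(100):                      # stream of error-code submissions
    agent = select_agent()
    passed = random.random() < {"llm_a": 0.5, "llm_b": 0.7, "llm_c": 0.6}[agent]
    update(agent, 1.0 if passed else 0.0)          # reward = pass, punishment = fail

print(max(value, key=value.get), value)
```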
Objectives and importance of the study: Applications of artificial intelligence (AI) platforms and technologies to healthcare have been widely promoted as offering revolutionary improvements and efficiencies in clinical practice and health services organisation. Practical applications of AI in public health are now emerging and receiving similar attention. This paper provides an overview of the issues and examples of research that help separate the potential from the hype. Methods: Selective review and analysis of a cross-section of relevant literature. Results: Great potential exists for the use of AI in public health practice and research. This includes immediate applications in improving health education and communication directly with the public, as well as great potential for the productive use of generative AI through chatbots and virtual assistants in health communication. AI also has applications in disease surveillance and public health science, for example in improving epidemic and pandemic early warning systems, in synthetic data generation, in sequential decision-making under uncertainty (reinforcement learning) and in disease risk prediction. Most published research examining these and other applications is at a fairly early stage, making it difficult to separate the probable benefits from the hype. This research is undoubtedly demonstrating great potential but also identifying challenges, for example in the quality and relevance of health information produced by generative AI; in access, trust and use of the technology by different populations; and in the practical application of AI to support disease surveillance and public health science. There are real risks that current access and patterns of use may exacerbate existing inequities in health and that the orientation towards the personalisation of health advice may divert attention away from underlying social and economic determinants of health. Conclusions: Realising the potential of AI requires not only further research and experimentation but also careful consideration of its ethical implications and thoughtful regulation. This will ensure that advances in these technologies serve the best interests of individuals and communities worldwide and do not exacerbate existing health inequalities.
Artificial intelligence and public health: prospects, hype and challenges.
Nurses need competence and confidence to assess for Social Determinants of Health (SDOH) and meaningfully mitigate the barriers they present to health. While acute care nurses are in an ideal position to address SDOH and optimize the continuum of care, evidence suggests they lack the necessary knowledge and confidence to address SDOH in acute care. The purpose of this project was to describe the frequency of SDOH topics encountered by undergraduate nursing students during clinical learning in acute care and whether those topics were addressed by the student independently or in collaboration with another healthcare professional. Student nurses (n = 148) documented patient encounters over 2 semesters. An average of 7.53 SDOH topics per patient was identified. Access to primary health care, social support networks, and nutritious foods were the most frequent SDOH topics. The least frequently encountered SDOH topics were immigration status, proximity to crime and violence, and climate change. Nursing students encountered many SDOH topics during clinical education although they were rarely prepared to address them independently. The results of this project reinforce the pressing need to develop nursing competency with SDOH and can inform design for curricular integration of SDOH.
Preparing the Nurses of the Future to Address Health Disparities.
Despite the profound advancements that deep learning models have achieved across a multitude of domains, their propensity to learn spurious correlations significantly impedes their applicability to tasks necessitating causal and counterfactual reasoning. In this paper, we propose a Bidirectional Neural Network, which innovatively consolidates forward causal reasoning with inverse counterfactual reasoning into a cohesive framework. This integration is facilitated through the implementation of multi-stacked affine coupling layers, which ensure the network's invertibility, thereby enabling bidirectional reasoning capabilities within a singular architectural construct. To augment the network's trainability and to ensure the bidirectional differentiability of the parameters, we introduce an orthogonal weight normalization technique. Additionally, the counterfactual reasoning capacity of the Bidirectional Neural Network is embedded within the policy function of reinforcement learning, thereby effectively addressing the challenges associated with reward sparsity in the blood glucose control scenario. We evaluate our framework on two pivotal tasks: causal-based blood glucose forecasting and counterfactual-based blood glucose control. The empirical results affirm that our model not only exemplifies enhanced generalization in causal reasoning but also significantly surpasses comparative models in handling out-of-distribution data. Furthermore, in blood glucose control tasks, the integration of counterfactual reasoning markedly improves decision efficacy, sample efficiency, and convergence velocity. It is our expectation that the Bidirectional Neural Network will pave novel pathways in the exploration of causal and counterfactual reasoning, thus providing groundbreaking methods for complex decision-making processes. Code is available at https://github.com/HITshenrj/BNN.
A bidirectional reasoning approach for blood glucose control via invertible neural networks.
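The multi-stacked affine coupling layers that make the network invertible follow a standard pattern: transform half the variables conditioned on the other half, so the inverse exists in closed form. A minimal single-layer sketch (dimensions and the conditioning network are illustrative; the paper's orthogonal weight normalization is omitted):

```python
import torch
import torch.nn as nn

# Minimal affine coupling layer: invertible by construction, enabling forward
# (causal) and inverse (counterfactual) passes through one architecture.
class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * (dim - self.half)))

    def forward(self, x):                      # forward direction
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t         # affine transform of the second half
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):                      # inverse direction
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        x2 = (y2 - t) * torch.exp(-log_s)      # exact closed-form inverse
        return torch.cat([y1, x2], dim=-1)

layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)  # invertibility check
```

Stacking several such layers (permuting which half is conditioned on between layers) yields the multi-stacked construction the abstract describes.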
The integration and utilization of digital media, gamified learning strategies, and artificial intelligence (AI) are fundamentally transforming the landscape of oncology education and learning. These technologies collectively enhance knowledge dissemination, facilitate professional networking and mentoring, and enrich the overall educational experience for the learner. Digital formats, including social media, offer flexible and asynchronous learning modalities that provide access to the latest studies and research, expert commentary, and opportunities for professional development and collaboration. Gamification, through the application of game-based elements within educational frameworks, promotes critical thinking, supports the acquisition and reinforcement of key concepts, and fosters engagement. AI, increasingly recognized for its disruptive potential, introduces a new dimension of information exchange and novel methodologies for learner assessment and customization of educational pathways. These tools can also present challenges, such as the proliferation of misinformation, heightened academic and professional competition, and the imperative to critically appraise AI-generated outputs. This article examines the transformative role of these emerging technologies in oncology education, while also addressing the associated risks and underscoring the need for learners to cultivate evaluative and critical thinking skills to navigate these tools responsibly.
Leveling Up: Harnessing Cutting-Edge Technology to Enhance Oncology Education and Learning.
Patients with post-COVID-19-related symptoms require active and timely support in self-management. Just-in-time adaptive interventions (JITAI) seem promising in meeting these needs, as they aim to provide tailored interventions based on patient-centred measures. This systematic scoping review explores the suitability and examines key components of a potential JITAI in post-COVID-19 syndrome. Databases (PsycINFO, PubMed, and Scopus) were searched using terms related to post-COVID-19-related symptom clusters (fatigue and pain; respiratory problems; cognitive dysfunction; psychological problems) and to JITAI. Studies were summarised to identify potential components (interventions options, tailoring variables and decision rules), feasibility and effectiveness, and potential barriers. Out of the 341 screened records, 11 papers were included (five single-armed pilot or feasibility studies, three two-armed randomised controlled trial studies, and three observational studies). Two articles addressed fatigue or pain-related complaints, and nine addressed psychological problems. No articles about JITAI for respiratory problems or cognitive dysfunction clusters were found. Most interventions provided monitoring, education or reinforcement support, using mostly ecological momentary assessments or smartphone-based sensing. JITAIs were found to be acceptable and feasible, and seemingly effective, although evidence is limited. Given these findings, a JITAI for post-COVID-19 syndrome is promising, but needs to fit the complex, multifaceted nature of its symptoms. Future studies should assess the feasibility of machine learning to accurately predict when to execute timely interventions.
Suitability of just-in-time adaptive intervention in post-COVID-19-related symptoms: A systematic scoping review.
Fault transfer diagnosis is a key technology to ensure the reliability and safety of industrial systems, the core of which is to identify the health status of the equipment among different working conditions with multiclassification methods. However, most of them are based on a closed-set assumption that the label space among different working conditions is consistent, which is hard to satisfy in a practical industrial environment as unknown faults would inevitably occur during operation, i.e., the open-set fault transfer diagnosis (OSFTD) problem. Moreover, during the transfer process, unnecessary source-specific knowledge tends to be adapted, which brings about biased diagnostics on both domain and category. Aiming at this issue, an OSFTD framework, coined as knowledge transfer and reinforcement based on biunbiased neural network (KTR-BUNN), is proposed. First, a domain-unbiased knowledge transfer subnet is proposed, including an uncertainty-aware fault transferability evaluator (FTE) that estimates the transferability of target-domain samples unbiasedly to guide distribution alignment of known faults and a triple-tier unknown fault separator (UFS) that takes transferability as the criterion to extrapolate unknown faults. Second, a class-unbiased knowledge reinforcement subnet is designed to promote the recognition of fault semantic features at the embedding space, where fault knowledge graphs (FKGs) are constructed to describe the relationships between fault types, and they are optimized by a contrastive fault correlation loss, so that fine-grained class-level fault features can be further aligned. The knowledge transfer and knowledge reinforcement mechanisms work jointly to facilitate the performance of OSFTD. Finally, extensive experimental results conducted on diverse diagnostic tasks illustrate the superiority of the proposed KTR-BUNN.
Knowledge Transfer and Reinforcement Based on Biunbiased Neural Network: A Novel Solution for Open-Set Fault Transfer Diagnosis.
This study systematically examined the impact of three feature selection techniques (Boruta, extreme gradient boosting (XGBoost), and Lasso) on optimizing four machine learning models (random forest (RF), XGBoost, logistic regression (LR), and support vector machine (SVM)) for predicting osteoporosis prevalence. Our findings revealed that varying data partitioning ratios (training and test sets: 0.6:0.4; 0.7:0.3; 0.8:0.2; 0.9:0.1) minimally impacted prediction accuracy across all four models, a conclusion reinforced by 10-fold cross-validation. In addition, principal component analysis (PCA) led to substantial accuracy degradation (accuracies falling to the 0.6-0.8 range), suggesting incompatibility with this study's requirements due to the inherently complex decision boundaries in the original high-dimensional data. Comparative analysis demonstrated that the Boruta-XGBoost combination achieved superior performance (accuracy: 0.9083 ± 0.0146), significantly outperforming the Lasso-LR combination (0.7480 ± 0.0157) across all evaluation frameworks. Regarding model evaluation metrics, the RF model exhibited enhanced discriminative capacity, with area under the receiver operating characteristic curve (AUROC) values of 0.85, 0.81, and 0.80 under the different feature selection approaches, surpassing the SVM model (0.78, 0.76, and 0.76). This advantage likely stems from RF's native capability to capture non-linear relationships and feature interactions.
Comparative Analysis of Feature Extraction Methods and Machine Learning Models for Predicting Osteoporosis Prevalence.
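The comparison pipeline described above maps naturally onto standard tooling. A minimal sketch, assuming a generic tabular dataset and scikit-learn-style estimators; the synthetic data, the Lasso-based selector, and the two model families shown are illustrative stand-ins, not the study's exact configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a 22-feature clinical table (hypothetical).
X, y = make_classification(n_samples=500, n_features=22, random_state=0)

# Lasso-based feature selection: keep features with nonzero coefficients.
selector = SelectFromModel(LassoCV(cv=5, random_state=0)).fit(X, y)
X_sel = selector.transform(X)

# Score two of the four evaluated model families with 10-fold CV,
# mirroring the study's cross-validation check.
for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("LR", LogisticRegression(max_iter=1000))]:
    scores = cross_val_score(model, X_sel, y, cv=10)
    print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
```

Swapping the selector (e.g., Boruta) or the estimator (e.g., XGBoost) changes only the two objects constructed above, which is what makes this kind of factorial comparison cheap to run.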
Rewarded stimuli are prioritized by the attentional system. Behavioral performance is improved when the task-relevant dimension is tied to a potential reward but is impaired when the irrelevant dimension is reward-related. Within the rewarded Stroop task, the facilitation (reward responsiveness) and impairment (modulation of interference of reward association; MIRA) from reward-associated stimuli are thought to be due to different cognitive processes. In four experiments, we explored whether reward responsiveness and MIRA were influenced by reward magnitude and persisted following reward discontinuation. We manipulated how informed participants were of the stimulus-reward contingency based on whether they received stimulus-reward color instructions and whether or not the stimulus-reward contingency was certain (i.e., one color was always tied to one reward outcome). Results suggest that greater reward magnitude increased reward responsiveness, especially when participants were informed about the stimulus-reward contingency. However, greater impairment (MIRA) by a large versus small reward-related color word was only observed when participants had little knowledge of the reward contingency (i.e., no instructions and a more uncertain mapping of stimuli to rewards) or during the extinction phase, when reward-associated colors were less relevant. These findings highlight the distinction between reward responsiveness to maximize gains and the unintentional prioritization of related but irrelevant information, and suggest that reward associations that elicit greater reward responsiveness do not necessarily lead to greater impairment of conflict processing.
An examination of how reward associations facilitate and impair Stroop performance.
Infection prevention and control education has traditionally been conducted in a lecture-based manner, and simulation-based educational strategies have become increasingly prevalent in the field of medical education in recent years. This systematic review aimed to compare the effectiveness of simulation-based and traditional strategies of infection prevention and control education and to show the differences between these educational approaches. Furthermore, we identified the characteristics of simulation-based strategies for infection prevention and control education. Systematic reviews and meta-analyses were performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search was conducted using the CENTRAL, MEDLINE, and Scopus databases for articles published between January 1990 and September 2022. This study focused on students enrolled in medical and health professional courses. As such, healthcare professionals already working in clinical settings, as well as kindergarten and elementary school students, were excluded from the study. The quality of the included studies and the risk of bias in each study were assessed. A total of 254 articles were identified; 21 underwent secondary screening. Ultimately, 10 articles were selected for the final review. Both simulation- and lecture-based educational strategies showed improvements in knowledge acquisition, with no significant difference in the rate of improvement between the two. The characteristics of simulation-based educational strategies included confidence in skill performance, decision-making and problem-solving skills, emotional aspects related to infectious diseases (such as fear, empathy, self-reflection, and integration of complex information), and student satisfaction. This systematic review suggests that simulation-based education is effective in developing students' skills and attitudes, while traditional lecture-based methods are more suited for reinforcing students' knowledge. Therefore, it is essential to choose educational strategies based on specific learning objectives and outcomes. This systematic review protocol was preregistered in the Open Science Framework: https://osf.io/uj623/.
Simulation-based infection prevention and control training for medical and healthcare students: a systematic review.
Family therapy for anorexia nervosa (FT-AN) is the first-line outpatient treatment for young people with anorexia nervosa (AN) in the UK. However, some require more intensive interventions, such as day programmes (DPs), which provide structured multidisciplinary care, including nutritional rehabilitation. Despite the integral role of dietitians in DPs, their specific responsibilities remain under-researched. This study explores clinician perspectives on the role of dietitians in adolescent AN treatment to inform future research and consensus guidelines. A qualitative study using semi-structured interviews was conducted with 11 clinicians working in one DP for young people with AN. Participants were recruited from the Intensive Treatment Programme at the Maudsley Centre for Child and Adolescent Eating Disorders. Reflexive thematic analysis identified key themes regarding dietitians' contributions to treatment. Clinicians emphasised the dietitian's role in early treatment containment, reinforcing therapeutic approaches and empowering parents in meal planning and nutritional rehabilitation. Dietitians were seen as crucial in personalising treatment based on cultural and sensory needs and adapting meal plans as young people progressed. They also played a key role in guiding transitions between treatment phases, particularly from weight restoration to maintenance. However, challenges included an over-reliance on dietitians for nutritional decisions and a 'good cop, bad cop' dynamic, where therapists avoided difficult conversations about food. Findings highlight dietitians' essential role in DP treatment for AN but suggest that excessive reliance may limit therapist autonomy. Strengthening collaboration through shared decision-making and bidirectional learning is recommended. Further research should explore these dynamics across diverse settings.
The Role of the Dietitian Within a Day Programme for Adolescent Anorexia Nervosa: A Reflexive Thematic Analysis of Child and Adolescent Eating Disorder Clinician Perspectives.
This study investigated cerebellar involvement in reinforcement learning and prediction error (RL-PE) processing. Participants with pure cerebellar degeneration and demographically matched healthy controls performed a probabilistic feedback-based learning task while brain activity was recorded using electroencephalography (EEG). Structural magnetic resonance imaging was used to quantify cerebellar gray matter volume (GMV). Data from 21 cerebellar and 25 control participants were included in the analysis. We aimed to determine whether feedback-based learning was impaired in patients relative to controls, and whether single-trial RL-PEs were reflected in the FRN, P3a, and P3b components of the event-related potential (ERP) in patients and controls. Analysis of behavioral data revealed no differences in accuracy between patients and controls. Crucially, ERP analysis revealed that, while controls showed coding of RL-PEs in the FRN and P3a for positive feedback and in the P3b for both positive and negative feedback, these effects were absent in patients. Voxel-based morphometry revealed widely distributed cerebellar GMV reduction in patients, most pronounced in bilateral Crus I/II and bilateral lobules I-IV. Multiple regressions in patients revealed a negative correlation between GMV in bilateral Crus I and II and FRN amplitudes. The present study extends previous evidence for cerebellar involvement in RL-PE processing in humans and advances our understanding of the cerebellum's role in performance monitoring and adaptive control of behavior.
Impaired reinforcement learning and coding of prediction errors in patients with cerebellar degeneration - a study with EEG and voxel-based morphometry.
Human Immunodeficiency Virus (HIV) is a retrovirus that weakens the immune system, increasing vulnerability to infections and cancers. HIV spreads primarily via needle sharing, mother-to-child transmission during childbirth or breastfeeding, or unprotected sexual intercourse. Early diagnosis and treatment are therefore crucial to prevent progression of HIV to AIDS, which is associated with higher mortality. This study introduces a machine learning-based framework for the classification of HIV infections, crucial for preventing the disease's progression and transmission risk and improving long-term health outcomes. First, the challenges posed by an imbalanced dataset are addressed using the Synthetic Minority Over-sampling Technique (SMOTE), which was chosen over two alternative methods based on its superior performance. Additionally, we enhance dataset quality by removing outliers using the interquartile range (IQR) method. A comprehensive two-step feature selection process is employed, reducing 22 original features to 12 critical variables. We evaluate five machine learning models, identifying the Random Forest Classifier (RFC) and Decision Tree Classifier (DTC) as the most effective, as they demonstrate higher classification performance than the other models. By integrating these models into a voting classifier, we achieve an overall accuracy of 89%, a precision of 90.84%, a recall of 87.63%, and an F1-score of 89.21%. The model undergoes validation on multiple external datasets with varying instance counts, reinforcing its robustness. Furthermore, an analysis focusing solely on CD4 and CD8 cell counts, essential lab test data for HIV monitoring, demonstrates an accuracy of 87%, emphasizing the significance of these clinical features for the classification task. These outcomes underscore the potential of combining machine learning techniques with critical clinical data to enhance the accuracy of HIV infection classification, ultimately contributing to improved patient management and treatment strategies. The findings also highlight the scalability of the approach, showing that it can be efficiently adapted for large-scale use across various healthcare environments, including those with limited resources, making it suitable for widespread deployment in both high- and low-resource settings.
Scalable and robust machine learning framework for HIV classification using clinical and laboratory data.
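The preprocessing chain above (IQR outlier removal, SMOTE balancing, soft-voting ensemble) is straightforward to sketch with imbalanced-learn and scikit-learn. A minimal version, assuming synthetic data; the fence multiplier of 1.5 and the class weighting are illustrative, not the study's settings:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical imbalanced 22-feature dataset (~90% negative class).
X, y = make_classification(n_samples=1000, n_features=22,
                           weights=[0.9], random_state=0)

# IQR outlier removal: drop rows with any feature outside 1.5*IQR fences.
q1, q3 = np.percentile(X, 25, axis=0), np.percentile(X, 75, axis=0)
iqr = q3 - q1
mask = np.all((X >= q1 - 1.5 * iqr) & (X <= q3 + 1.5 * iqr), axis=1)
X, y = X[mask], y[mask]

# SMOTE: synthesize minority-class samples to balance the classes.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# Soft-voting ensemble of the two strongest models, RFC and DTC.
vote = VotingClassifier(
    estimators=[("rfc", RandomForestClassifier(random_state=0)),
                ("dtc", DecisionTreeClassifier(random_state=0))],
    voting="soft",
).fit(X_bal, y_bal)
print(vote.predict(X_bal[:5]))
```

In practice the resampling should be fit inside the cross-validation loop (e.g., with an imblearn Pipeline) so synthetic samples never leak into the test folds.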
Learning thrives on cognitive flexibility and exploration. Subjects with schizophrenia have impaired cognitive flexibility and maladaptive exploration patterns. The basal ganglia-dorsolateral prefrontal cortex (BG-DLPFC) network plays a significant role in learning processes. However, how this network maintains cognitive flexibility and exploration patterns, and what alters these patterns in schizophrenia, remains elusive. Using a combination of extracellular recordings, pharmacological manipulations, macro-stimulation techniques, and mathematical modeling, we show that in the nonhuman primate (NHP), the external segment of the globus pallidus (GPe, the central nucleus of the BG network) modulates cognitive flexibility and exploration patterns (experiments were done in females only). We found that chronic, low-dose administration of the N-methyl-D-aspartate receptor (NMDA-R) antagonist phencyclidine (PCP) decreases directed exploration but increases random exploration, as seen in schizophrenia. In line with adaptive working-memory reinforcement-learning models of the BG-DLPFC network, low-frequency GPe macro-stimulation restores the balance of both exploration types. Our findings suggest that the exploration-exploitation imbalance reflects abnormal BG-DLPFC activity and that GPe stimulation may restore it.
Basal ganglia deep brain stimulation restores cognitive flexibility and exploration-exploitation balance disrupted by NMDA-R antagonism.
In multi-center neuroimaging studies, technical variability caused by batch differences can hinder the ability to aggregate data across sites and negatively impact the reliability of study-level results. Recent efforts in neuroimaging harmonization have aimed to minimize these technical gaps and reduce technical variability across batches. While Generative Adversarial Networks (GANs) have been a prominent method for addressing harmonization tasks, GAN-harmonized images often suffer from artifacts or anatomical distortions. Given the advances in denoising diffusion probabilistic models, which produce high-fidelity images, we assessed the efficacy of the diffusion model for neuroimaging harmonization. Whereas GAN-based methods intrinsically transform imaging styles between two domains per model, we demonstrate the diffusion model's superior capability to harmonize images across multiple domains with a single model. Our experiments highlight that the learned domain-invariant anatomical condition drives the model to accurately preserve anatomical details while differentiating batch differences at each diffusion step. Our proposed method was tested on T1-weighted MRI images from two public neuroimaging datasets, ADNI1 and ABIDE II, yielding harmonization results with consistent anatomy preservation and a superior FID score compared to GAN-based methods. We conducted multiple analyses, including extensive quantitative and qualitative evaluations against the baseline models, an ablation study showcasing the benefits of the learned domain-invariant conditions, and improvements in the consistency of perivascular spaces segmentation analysis and volumetric analysis through harmonization.
Diffusion based multi-domain neuroimaging harmonization method with preservation of anatomical details.
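For readers unfamiliar with the generative backbone referenced above, a minimal numpy sketch of the closed-form forward (noising) step of a denoising diffusion probabilistic model follows. The linear variance schedule and the stand-in image are illustrative defaults, not the paper's configuration:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear variance schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative product, abar_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 64))      # stand-in for a T1-weighted slice
x_noisy = q_sample(x0, t=500, rng=rng)  # partially noised image at step 500
```

Training then teaches a network to predict eps from x_t; harmonization methods like the one above additionally condition each denoising step, here on a learned domain-invariant anatomical representation.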
Enhanced motivational sensitivity to reward is associated with several psychiatric conditions, including prolonged grief disorder (PGD). Although reasons for this association remain unclear, it is possible that individuals higher in reward sensitivity are more prone to yearning for a lost loved one, especially if they have difficulty reengaging in new life goals. We sought to examine this hypothesis in a cross-sectional cohort of 274 adults recruited online who reported a lifetime history of surviving at least one sudden death loss. Motivational sensitivity to reward was associated with more severe yearning, particularly among individuals who have difficulty reengaging in new life goals. This pattern of associations was specific to individuals with more severe PGD symptoms. Findings support previous research suggesting that reward sensitivity may play an important role in the pathogenesis of PGD and highlight potentially important intervention targets in at-risk bereaved populations.
Motivational and Self-Regulatory Factors Associated With Yearning and Prolonged Grief Symptoms.
The striatum plays a key role in decision-making, with its effects varying with anatomical location and with direct and indirect pathway striatal projecting neuron (d- and iSPN) populations. Using a mouse gambling task with a reinforcement-learning model, we described individual decision-making profiles as a combination of three archetypal strategies: Optimizers, Risk-averse, and Explorers. These strategies reflected stable differences in the parameters generating decisions (sensitivity to reward magnitude, to risk, or to punishment) derived from a reinforcement-learning model of animal choice. Chemogenetic manipulation showed that dorsomedial striatum (DMS) neurons substantially affect decision-making, while nucleus accumbens (NAc) and dorsolateral striatum (DLS) neurons have lesser or no effects, respectively. Specifically, DMS dSPNs decrease risk aversion by increasing the perceived value of risky choices, while DMS iSPNs emphasize large gains, affecting decisions depending on decision-making profiles. Hence, we propose that striatal populations from different subregions influence distinct decision-making parameters, leading to profile-dependent choices.
Direct and indirect striatal projecting neurons exert strategy-dependent effects on decision-making.
Coordinating the motion between lower and upper limbs and aligning limb control with perception are substantial challenges in robotics, particularly in dynamic environments. To this end, we introduce an approach for enabling legged mobile manipulators to play badminton, a task that requires precise coordination of perception, locomotion, and arm swinging. We propose a unified reinforcement learning-based control policy for whole-body visuomotor skills involving all degrees of freedom to achieve effective shuttlecock tracking and striking. This policy is informed by a perception noise model that uses real-world camera data, allowing for consistent perception error levels between simulation and deployment and encouraging learned active perception behaviors. Our method includes a shuttlecock prediction model and constrained reinforcement learning for robust motion control to enhance deployment readiness. Extensive experimental results in a variety of environments validate the robot's capability to predict shuttlecock trajectories, navigate the service area effectively, and execute precise strikes against human players, demonstrating the feasibility of using legged mobile manipulators in complex and dynamic sports scenarios.
Learning coordinated badminton skills for legged manipulators.
With the application of new-generation information technologies such as big data, artificial intelligence, and the energy Internet in Power Internet of Things (IoT) systems, a large number of IoT terminals, acquisition terminals, and transmission devices have achieved integrated interconnection and comprehensive information interaction. However, this transformation also brings new challenges: the security risk of intrusions into power IoT systems has significantly increased, making the assurance of power system information security a research hotspot. Penetration testing, as an essential means of information security protection, is critical for identifying and fixing security vulnerabilities. Given the complexity of power IoT systems and the limitations of traditional manual testing methods, this paper proposes an automated penetration testing method that combines prior knowledge with deep reinforcement learning. It aims to intelligently explore optimal attack paths under conditions where the system state is unknown. By constructing an ontology knowledge model to fully utilize prior knowledge and introducing an attention mechanism to address the issue of varying state spaces, the efficiency of penetration testing can be improved. Experimental results show that the proposed method effectively optimizes path decision-making for penetration testing, providing support for the security protection of power IoT systems.
Intelligent penetration testing method for power internet of things systems combining ontology knowledge and reinforcement learning.
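The path-decision component described above rests on standard value-based RL updates. A tabular Q-learning toy on a hypothetical attack-path graph makes the update explicit; the paper's agent is a deep network with an attention mechanism, and the states, actions, and rewards below are invented purely for illustration:

```python
import random

# Hypothetical linear attack-path graph with toy action sets per stage.
states = ["recon", "foothold", "lateral", "target"]
actions = {"recon": ["scan", "phish"], "foothold": ["escalate", "pivot"],
           "lateral": ["pivot", "exploit"], "target": []}
Q = {(s, a): 0.0 for s in states for a in actions[s]}
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(s, a):
    """Toy transition: any action advances one hop; reaching target pays 1."""
    nxt = states[states.index(s) + 1]
    return nxt, (1.0 if nxt == "target" else -0.01)

for _ in range(2000):
    s = "recon"
    while actions[s]:
        # Epsilon-greedy action selection over the current state's actions.
        a = (random.choice(actions[s]) if random.random() < eps
             else max(actions[s], key=lambda a_: Q[(s, a_)]))
        s2, r = step(s, a)
        best_next = max((Q[(s2, a_)] for a_ in actions[s2]), default=0.0)
        # Q-learning temporal-difference update.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```

The deep variant replaces the Q table with a network so that varying state spaces (the problem the paper's attention mechanism targets) no longer require a fixed enumeration of states.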
The job shop scheduling problem (JSSP) is a classic NP-hard problem. This article focuses on a realistic variant of the JSSP incorporating fuzzy processing times, with the objective of minimizing the maximum completion time. We propose a proximal policy optimization with graph transformer (GT-PPO) algorithm, which uses proximal policy optimization (PPO) as its foundational framework, to address this problem for the first time. First, because the intricate variability in states and actions often leads to suboptimal scheduling outcomes, we refine the representation of states and actions for improved performance. Second, to overcome the inherent limitations of conventional graph neural networks (GNNs), including difficulty in handling heterogeneity, over-squashing, and limited ability to capture long-range dependencies, we employ a graph transformer (GT) architecture. The GT effectively captures both the topological relationships in fuzzy disjunctive graph models and the long-range dependencies in large-scale JSSP instances. Additionally, we reduce the computational complexity of the GT to $O(n)$, enabling the agent to derive optimal scheduling solutions for large disjunctive graphs more efficiently, with reduced memory usage. Finally, testing results demonstrate the strong robustness of our model across various scales of generated instances and public datasets after a single training session. Notably, on the large-scale DMU and Taillard public datasets, the model exhibited exceptional robustness, further validating its effectiveness in addressing large-scale fuzzy JSSP.
A Reinforcement Learning Control Framework Based on Scalable Graph Transformer for Large-Scale Fuzzy Job Shop Scheduling Problems.
Recent evidence suggests that motor imagery is insufficient for updating internal models, which are essential for predicting and refining overt movement outcomes. The covert nature of motor imagery limits exposure to errors, perhaps preventing the updating of internal models. To explore this, 90 participants were exposed to a prism that shifted vision leftward, completing 20 physical pointing trials followed by either 230 more physical pointing trials [physical practice (PP)], 230 imagined pointing trials [physical practice motor imagery (PP-MI)], 230 unrelated task trials [physical practice control (PP-CTRL)], or no further trials [physical practice none (PP-None)]. We hypothesized that if exposure to errors is needed for motor imagery to update internal models, then PP-MI would exhibit aftereffects, characterized by pointing opposite to the prism shift (i.e., rightwards), similar to PP but differing from PP-CTRL and PP-None. After prism exposure, all groups showed significant aftereffects (PP: 4.73 ± 2.12°, PP-MI: 2.62 ± 1.61°, PP-CTRL: 2.58 ± 1.53°, PP-None: 3.11 ± 1.68°); however, there were no significant differences in the magnitude of aftereffects between PP-MI and PP-CTRL/PP-None. Our findings demonstrate that motor imagery alone is insufficient for updating internal models, even when participants are initially exposed to errors under a prism shift. This further reinforces that motor imagery is not a direct simulation of overt movement, as proposed by Motor Simulation Theory, the foundation for its use in rehabilitation. Deepening our understanding of how learning occurs through motor imagery is crucial for enhancing its effectiveness in practical applications like rehabilitation.
Even with exposure to errors, motor imagery cannot update internal models.
Autonomous synthesis platforms integrating machine learning with in situ diagnostics have the potential to revolutionize thin-film growth by enabling real-time process optimization and reducing the need for manual tuning. However, their application to molecular beam epitaxy (MBE) remains underdeveloped. Here, we present a machine learning-guided framework for MBE growth of GaSe films, leveraging reflection high-energy electron diffraction (RHEED) as an in situ diagnostic alongside ex situ characterization via X-ray diffraction and atomic force microscopy. Unsupervised learning on RHEED patterns reveals a well-defined boundary between high- and low-quality samples, capturing physically meaningful features. Mutual information analysis shows a strong correlation between RHEED embeddings and rocking curve full-width at half-maximum (fwhm), while the correlation with AFM root-mean-square (RMS) roughness is weak. Among key growth conditions, growth rate most strongly influences fwhm, whereas the Se/Ga flux ratio primarily affects RMS roughness and the RHEED embeddings. Supervised learning models trained to predict fwhm and RMS roughness demonstrate moderate accuracy, with significant improvement achieved by incorporating RHEED embeddings. Furthermore, anomaly detection via residual analysis in supervised learning aligns well with unsupervised classification from RHEED, reinforcing the reliability of the predictive models. This study establishes a data-driven framework for machine learning-assisted MBE, paving the way for real-time process control and accelerated optimization of thin-film synthesis.
Multimodal Machine Learning Analysis of GaSe Molecular Beam Epitaxy Growth Conditions.
Background: Eclampsia is a critical obstetric emergency associated with significant maternal and fetal mortality and morbidity. This retrospective observational study assesses the clinical characteristics, management strategies, and findings in eclamptic patients, emphasizing lessons learnt from treatment delays and therapeutic interventions. Aims: The study aimed to determine perinatal outcomes in eclamptic women and to evaluate perinatal outcomes based on the interval between the initial convulsion and delivery, as well as the duration of treatment before delivery and delivery methods. Study setting and design: Eclamptic women who met the inclusion criteria and were admitted from January 1, 2012, to December 31, 2024, to the labor ward at BLDE (DU) Shri B M Patil Medical College Hospital and Research Centre in Vijayapura, Karnataka, India, were included in this study. Investigation findings and medical data were gathered and assessed. Results: The study included 192 pregnant women with eclampsia, all beyond 28 weeks of gestation, meeting specified inclusion and exclusion criteria. A total of 192 babies were delivered, with 58 perinatal deaths (30.2%) recorded. Obstetric analysis revealed that primigravida patients constituted the majority, reinforcing their higher risk profile. Perinatal mortality was raised in individuals with systolic blood pressure (BP) of ≥160 mm Hg or diastolic BP of ≥110 mm Hg, newborns with birth weight less than 2 kg, and urine albumin levels exceeding 2+. Perinatal mortality was comparatively low when medical care was initiated within six hours of the convulsion. The cesarean section rate was high, reflecting the need for rapid stabilization. Conclusion: This study highlights that early and appropriate medical management, coupled with decisive delivery planning, results in high fetal viability and acceptable maternal outcomes. The predominance of primigravida patients and the high cesarean rates suggest that eclampsia management protocols require continuous refinement to improve response times and further enhance fetomaternal safety. Emphasis on early recognition and rapid intervention remains essential in reducing the morbidity and mortality associated with this obstetric emergency.
Lessons Learned in the Management of Eclampsia: A Retrospective Observational Study in Pregnant Women.
International medical students at I-Shou University's School of Medicine for International Students (SMIS) receive Taiwan government-funded scholarships to cultivate skilled and compassionate medical professionals from the Caribbean, Central America, and the Pacific Islands. This study examines the meaningful impact of Caribbean medical students' participation in interviews with the families of Silent Teachers, a central element of Taiwan's distinctive approach to anatomical education. Through these interviews, students were exposed to the deeply personal narratives of body donors, including their life stories, motivations for donation, and values such as altruism, family devotion, and reverence for life. These interactions offered the students a rare opportunity to bridge the gap between technical medical training and the emotional, ethical, and cultural dimensions of healthcare. Specifically, this study examines the impact of reflective practices on Caribbean medical students' development during interactions with Silent Teacher donors. Reflective narratives from 28 culturally diverse students were analyzed using thematic analysis. The experience enhanced the students' understanding of the significance of body donation in Taiwanese society, which contrasts with more anonymous approaches in Western medical education. As a result, international students reflected on key professional attributes, including cultural humility, empathy, and a stronger ethical awareness. The family interviews allowed students to engage with the human aspect of medicine, reinforcing the importance of compassionate care and emotional intelligence in their future medical practice. This program is a meaningful model for integrating humanistic and ethical learning into the curriculum, especially for international students, fostering their growth into well-rounded, culturally aware, and empathetic physicians.
Empathy and cultural humility: Caribbean medical students' experience in Taiwan's Silent Teacher family interviews.
With the widespread adoption of wireless communication technologies in modern high-speed rail systems, the Train-to-Ground (T2G) communication system for Electric/Diesel Multiple Units (EMU/DMU) has become essential for train operation monitoring and fault diagnosis. However, this system is increasingly vulnerable to various cyber-physical threats, necessitating more intelligent and adaptive security protection mechanisms. This paper presents an intelligent security defense framework that integrates intrusion detection, risk scoring, and response mechanisms to enhance the security and responsiveness of the T2G communication system. First, feature selection is performed on the TON_IoT dataset to develop a Dream Optimization Algorithm (DOA)-optimized backpropagation neural network (DOA-BPNN) model for efficient anomaly detection. A Bayesian risk scoring module then quantifies detection outcomes and classifies risk levels, improving threat detection accuracy. Finally, a Q-learning-based reinforcement learning (RL) module dynamically selects optimal defense actions based on identified risk levels and attack patterns to mitigate system threats. Experimental results demonstrate improved performance in both multi-class and binary classification tasks compared to conventional methods. The implementation of the Bayesian risk scoring and decision-making modules leads to a 63.56% reduction in system risk scores, confirming the effectiveness and robustness of the proposed approach in an experimental environment.
A Hybrid Security Framework for Train-to-Ground (T2G) Communication Using DOA-Optimized BPNN Detection, Bayesian Risk Scoring, and RL-Based Response.
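The risk-scoring stage described above amounts to Bayes' rule applied to detector output. A sketch with hypothetical detector rates; the TPR, FPR, prior attack rate, and risk-level cut-offs below are illustrative assumptions, not values from the paper:

```python
def posterior_attack(p_alert_given_attack, p_alert_given_benign, p_attack):
    """P(attack | alert) by Bayes' rule, marginalizing over attack/benign."""
    p_alert = (p_alert_given_attack * p_attack
               + p_alert_given_benign * (1.0 - p_attack))
    return p_alert_given_attack * p_attack / p_alert

# Hypothetical detector: TPR 0.95, FPR 0.05, prior attack rate 0.10.
post = posterior_attack(0.95, 0.05, 0.10)
level = "high" if post > 0.8 else "medium" if post > 0.4 else "low"
print(f"posterior={post:.3f}, risk level={level}")
# posterior ~0.679 -> medium; the RL module then maps the level to a
# defense action (e.g., throttle, isolate, re-authenticate).
```

The value of this layer is that raw detector alerts are calibrated against the base rate before the RL policy acts, which suppresses overreaction to false positives in low-prevalence traffic.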
Since the World Health Organization (WHO) issued guidelines for developing a non-sputum test for active tuberculosis (TB) diagnosis that exhibits similar performance characteristics to sputum-based diagnosis, salivary diagnostic techniques have gained prominence as potential screening tools or adjuncts to existing diagnostics. We searched online databases for studies that looked at salivary diagnostic techniques. Afterwards, duplicates were removed, titles and abstracts were screened, and full-text studies were assessed for eligibility based on inclusion and exclusion criteria. The studies chosen for final analysis underwent a rigorous quality assessment following a QUADAS-2 template, and data were extracted. The primary outcome assessed the difference in mean levels of interleukins between TB+ patients and TB-controls (Hedges' g). We then conducted two subgroup analyses: the first segregated the control group into healthy patients, and those with other respiratory diseases (ORD), and the second addressed three different interleukins separately (IL-6, IL-5, IL-17). The secondary outcome involved comparing salivary molecular diagnostic assays to WHO guidelines. This study is registered with PROSPERO, CRD42024536884. A total of 17 studies, out of an initial 1010, were chosen for the final analysis, but one was then excluded for being of poor quality. Our meta-analyses for the primary outcome revealed minimal diagnostic potential for interleukins. Our first subgroup analysis showed that interleukins were incapable of differentiating active TB patients from both healthy controls and ORD patients. Our second subgroup analysis showed that IL-17 was reduced in active TB patients. Assessment of the secondary outcome revealed that most studies relied on a GeneXpert MTB/RIF assay on saliva, but none fulfilled WHO guidelines for a non-sputum test. Individual biomarkers currently lack sufficient discriminatory power to definitively distinguish active tuberculosis from healthy individuals or those with other respiratory diseases (ORD), reinforcing the need for multi-biomarker panels. Interleukins may be alternatively used as markers for prognosis, severity, or treatment response. Our findings also suggest that assays are unable to meet WHO guidelines.
The Role of Salivary Diagnostic Techniques in Screening for Active Pulmonary Tuberculosis: A Systematic Review and Meta-Analysis.
The application of ceramic particle-reinforced metal matrix composites (CPRMMCs) in the nuclear power sector is primarily dependent on their mechanical and thermal properties. A comprehensive understanding of the structure-property (SP) linkages between microstructures and macroscopic properties is critical for optimizing material properties. However, traditional studies on SP linkages generally rely on experimental methods, theoretical analysis, and numerical simulations, which are often associated with high time and economic costs. To address this challenge, this study proposes a novel method based on Materials Informatics (MI), combining the finite element method (FEM), graph Fourier transform, principal component analysis (PCA), and machine learning models to establish the SP linkages between the microstructure and thermodynamic properties of CPRMMCs. Specifically, FEM is used to model the microstructures of CPRMMCs with varying particle volume fractions and sizes, and their elastic modulus, thermal conductivity, and coefficient of thermal expansion are computed. Next, the statistical features of the microstructure are captured using graph Fourier transform based on two-point spatial correlations, and PCA is applied to reduce dimensionality and extract key features. Finally, a polynomial kernel support vector regression (Poly-SVR) model optimized by Bayesian methods is employed to establish the nonlinear relationship between the microstructure and thermodynamic properties. The results show that this method can effectively predict FEM results using only 5-6 microstructure features, with R² values exceeding 0.91 for the prediction of thermodynamic properties. This study provides a promising approach for accelerating the innovation and design optimization of CPRMMCs.
Modeling the Structure-Property Linkages Between the Microstructure and Thermodynamic Properties of Ceramic Particle-Reinforced Metal Matrix Composites Using a Materials Informatics Approach.
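The reduction-plus-regression stage described above can be sketched with scikit-learn. The synthetic microstructure statistics, target property, and hyperparameters below are placeholders; the study additionally tunes the SVR with Bayesian optimization, which is omitted here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))      # stand-in microstructure statistics
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)  # stand-in property

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=6),                 # ~5-6 key features, as reported
    SVR(kernel="poly", degree=3, C=10.0),  # polynomial-kernel SVR
).fit(X, y)
print("R^2:", model.score(X, y))
```

In the paper's setting, X would hold the graph-Fourier statistics of two-point correlations and y an FEM-computed property such as elastic modulus; the pipeline structure is unchanged.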
This study presents the fabrication and characterization of ZnO-CNT composite-based optoelectronic synaptic devices via a sol-gel process. By incorporating various concentrations of CNTs (0-2.0 wt%) into ZnO thin films, we investigated their effects on synaptic behaviors under ultraviolet (UV) stimulation. The CNT addition enhanced the electrical and optical performance by forming a p-n heterojunction with ZnO, which promoted charge separation and suppressed recombination. As a result, the 1.5 wt% CNT device exhibited the highest excitatory postsynaptic current (EPSC), improved paired-pulse facilitation, and prolonged memory retention. Learning-forgetting cycles revealed that repeated stimulation reduced the number of pulses required for relearning while extending the forgetting time, mimicking biological memory reinforcement. Energy consumption per pulse was estimated at 16.34 nJ, suggesting potential for low-power neuromorphic applications. A 3 × 3 device array was also employed for visual memory simulation, showing spatially controllable and stable memory states depending on CNT content. To support these findings, structural and optical analyses were conducted using scanning electron microscopy (SEM), UV-visible absorption spectroscopy, photoluminescence (PL) spectroscopy, and Raman spectroscopy. These findings demonstrate that the synaptic characteristics of ZnO-based devices can be finely tuned through CNT incorporation, providing a promising pathway for the development of energy-efficient and adaptive optoelectronic neuromorphic systems.
Synaptic Plasticity and Memory Retention in ZnO-CNT Nanocomposite Optoelectronic Synaptic Devices.
Ultra-high-performance fiber-reinforced concrete (UHPFRC) exhibits exceptional tensile properties, but its tensile strength is highly dependent on fiber distribution, orientation, and count, making accurate strength estimation challenging. This study introduces a novel approach in which tensile strength estimation is achieved by analyzing fiber characteristics at predicted cracking locations using deep learning. Using X-ray computed tomography (CT) and image analysis techniques, the fiber orientation factor (μ₀) and the average efficiency factor (μ̄₁) were determined at predicted cracking locations. A deep learning model (YOLOv11) was trained to identify regions with a defective distribution, achieving a mean Average Precision (mAP@0.5) of 0.87, demonstrating its high reliability in predicting cracking locations. The overall cracking location prediction success rate was 73% for strain-hardening specimens. The estimated tensile strength was then compared with uniaxial tensile test (UTT) results, revealing an average experiment-estimation error of 5.72% and an average theory-estimation error of 3.34% for strain-hardening specimens, whereas strain-softening specimens exhibited significantly higher errors, with an average experiment-estimation error of 43.09% and an average theory-estimation error of 15.73%. These findings highlight the strong correlation between fiber count, cracking behavior, and tensile strength in UHPFRC, offering a trustworthy, non-destructive framework for estimating tensile performance in UHPFRC elements.
Tensile Strength Estimation of UHPFRC Based on Predicted Cracking Location Using Deep Learning.
Background: The transcription factor SOX9 plays a critical role in various diseases, including hepatocellular carcinoma (HCC), and has been implicated in resistance to sorafenib treatment. Accurate assessment of SOX9 expression is important for guiding personalized therapy in HCC patients; however, a reliable non-invasive method for evaluating SOX9 status remains lacking. This study aims to develop a deep learning (DL) model capable of preoperatively and non-invasively predicting SOX9 expression from CT images in HCC patients. Methods: We retrospectively analyzed a dataset comprising 4011 CT images from 101 HCC patients who underwent surgical resection followed by sorafenib therapy at West China Hospital, Sichuan University. A deep reinforcement learning (DRL) approach was proposed to enhance prediction accuracy by identifying and focusing on image regions highly correlated with SOX9 expression, thereby reducing the impact of background noise. Results: Our DRL-based model achieved an area under the curve (AUC) of 91.00% (95% confidence interval: 88.64-93.15%), outperforming conventional DL methods by over 10%. Furthermore, survival analysis revealed that patients with SOX9-positive tumors had significantly shorter recurrence-free survival (RFS) and overall survival (OS) compared to SOX9-negative patients, highlighting the prognostic value of SOX9 status. Conclusions: This study demonstrates that a DRL-enhanced DL model can accurately and non-invasively predict SOX9 expression in HCC patients using preoperative CT images. These findings support the clinical utility of imaging-based SOX9 assessment in informing treatment strategies and prognostic evaluation for patients with advanced HCC.
Deep Reinforcement Learning for CT-Based Non-Invasive Prediction of SOX9 Expression in Hepatocellular Carcinoma.
Background/Objectives: Lower-limb amputation (LLA) leads to disability, impaired mobility, and reduced quality of life, affecting 1.6 million people in the USA. Post-amputation, motor cortex reorganization occurs, contributing to phantom limb pain (PLP). Transcranial magnetic stimulation (TMS) assesses changes in cortical excitability, helping to identify compensatory mechanisms. This study investigated the association between TMS metrics and clinical and neurophysiological outcomes in LLA patients. Methods: A cross-sectional analysis of the DEFINE cohort, with 59 participants, was carried out. TMS metrics included resting motor threshold (rMT), motor-evoked potential (MEP) amplitude, short intracortical inhibition (SICI), and intracortical facilitation (ICF). Results: Multivariate analysis revealed increased ICF and rMT in the affected hemisphere of PLP patients, while SICI was reduced with the presence of PLP. A positive correlation between SICI and EEG theta oscillations in the frontal, central, and parietal regions suggested compensatory mechanisms in the unaffected hemisphere. Increased MEP was associated with reduced functional independence. Conclusions: SICI appears to be a key factor linked to the presence of PLP, but not its intensity. Reduced SICI may indicate impaired cortical compensation, contributing to PLP. Other neural mechanisms, including central sensitization and altered thalamocortical connectivity, may influence PLP's severity. Our findings align with those of prior studies, reinforcing low SICI as a marker of maladaptive neuroplasticity in amputation-related pain. Additionally, longer amputation duration was associated with disrupted SICI, suggesting an impact of long-term plasticity changes.
Defective Intracortical Inhibition as a Marker of Impaired Neural Compensation in Amputees Undergoing Rehabilitation.
Background: Antimicrobial resistance (AMR) poses a growing threat to veterinary medicine and food safety. This study examines Escherichia coli antibiotic resistance patterns in ducks, focusing on multidrug-resistant (MDR) strains. Understanding resistance patterns and predicting MDR occurrence are critical for effective intervention strategies. Methods: E. coli isolates were collected from duck samples across multiple regions. Descriptive statistics and resistance frequency analyses were conducted. A decision tree classifier and a neural network were trained to predict MDR status. Cross-resistance relationships were visualized using graph-based models, and Monte Carlo simulations estimated MDR prevalence variations. Results: Monte Carlo simulations estimated an average MDR prevalence of 79.6% (95% CI: 73.1-86.1%). Key predictors in MDR classification models were enrofloxacin, neomycin, amoxicillin, and florfenicol. Strong cross-resistance associations were detected between neomycin and spectinomycin, as well as amoxicillin and doxycycline. Conclusions: The high prevalence of MDR strains underscores the urgent need to revise antibiotic usage guidelines in veterinary settings. The effectiveness of the predictive models suggests that machine learning tools can aid in the early detection of MDR, contributing to the optimization of treatment strategies and the mitigation of resistance spread. The alarming MDR prevalence in E. coli isolates from ducks reinforces the importance of targeted surveillance and antimicrobial stewardship. Predictive models, including decision trees and neural networks, provide valuable insights into resistance trends, while Monte Carlo simulations further validate these findings, emphasizing the need for proactive antimicrobial management.
Antimicrobial Susceptibility Profiles of Escherichia coli Isolates from Clinical Cases of Ducks in Hungary Between 2022 and 2023.
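A bootstrap-style Monte Carlo estimate of MDR prevalence with a 95% interval, in the spirit of the simulations above, takes only a few lines. The isolate-level flags below are synthetic, so the printed interval will not reproduce the study's 79.6% (73.1-86.1%):

```python
import numpy as np

rng = np.random.default_rng(0)
is_mdr = rng.random(150) < 0.796        # synthetic isolate-level MDR flags

# Resample isolates with replacement and record the prevalence each time.
draws = [rng.choice(is_mdr, size=is_mdr.size, replace=True).mean()
         for _ in range(10_000)]
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"MDR prevalence: {np.mean(draws):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The width of the interval is driven almost entirely by the number of isolates, which is why prevalence CIs from modest veterinary samples tend to span 10+ percentage points.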
Immunogenic cell death (ICD) has been implicated in sepsis, a condition with high mortality, through mechanisms involving endoplasmic reticulum stress and other pathophysiological pathways. This study aimed to identify and validate ICD-related biomarkers for sepsis diagnosis and to elucidate their underlying mechanisms. Publicly available datasets (GSE65682, GSE95233, and GSE69528) and 57 ICD-related genes (ICDRGs) were collected for analysis. Candidate genes were selected using differential expression analysis and weighted gene co-expression network analysis (WGCNA). By integrating machine learning models, receiver operating characteristic (ROC) curves, and gene expression analysis, biomarkers for sepsis diagnosis were identified. Gene set enrichment analysis (GSEA) and gene set variation analysis (GSVA) were conducted to explore the potential mechanisms by which the biomarkers influence sepsis. Additionally, immune infiltration analysis, subcellular localization, and disease association analysis were carried out. Finally, reverse transcription quantitative polymerase chain reaction (RT-qPCR) was used to validate the expression of the biomarkers in clinical sepsis blood samples. The biomarkers BCL2, PRF1, CXCR3, and EIF2AK3 demonstrated robust diagnostic potential for sepsis, each exhibiting an area under the curve (AUC) exceeding 0.8 in both the GSE65682 and GSE95233 datasets. These biomarkers were significantly downregulated in sepsis and were predominantly enriched in the ribosome. GSVA identified the top three activated pathways as β-alanine metabolism, citrate cycle/TCA cycle, and glyoxylate and dicarboxylate metabolism, while the most inhibited pathways included glycosphingolipid biosynthesis (lacto and neolacto series), α-linolenic acid metabolism, and linoleic acid metabolism. Immune infiltration analysis revealed reduced infiltration in sepsis, with CD8+ T cells showing the highest positive correlation with activated NK cells and PRF1. Subcellular localization analysis indicated that all four biomarkers were situated on the organelle membrane. Disease association analysis revealed correlations between these biomarkers and conditions such as hypertension and asthma. RT-qPCR analysis confirmed that the expression patterns of the biomarkers were consistent with the dataset findings, reinforcing the reliability and validity of the bioinformatic analyses. This study identified four ICD-related biomarkers (BCL2, PRF1, CXCR3, and EIF2AK3) that may help recognize early signs of sepsis, facilitate monitoring of disease progression, and have significant potential for clinical diagnosis and therapeutic strategies in sepsis.
Immunogenic cell death biomarkers for sepsis diagnosis and mechanism via integrated bioinformatics.
This study introduces Glucose Level Understanding and Control Optimized for Safety and Efficacy (GLUCOSE), a distributional offline reinforcement learning algorithm for optimizing insulin dosing after cardiac surgery. Trained on 5228 patients, tested on 920, and externally validated on 649, GLUCOSE achieved a mean estimated reward of 0.0 [-0.07, 0.06] in internal testing and -0.63 [-0.74, -0.52] in external validation, outperforming clinician returns of -1.29 [-1.37, -1.20] and -1.02 [-1.16, -0.89]. In multi-phase human validation, GLUCOSE first showed a significantly lower mean absolute error (MAE) in insulin dosing, with 0.9 units MAE versus clinicians' 1.97 units (p < 0.001) in internal testing and 1.90 versus 2.24 units (p = 0.003) in external validation. The second and third phases found GLUCOSE's performance to be comparable to or exceeding that of senior clinicians in MAE, safety, effectiveness, and acceptability. These findings suggest that GLUCOSE is a robust tool for improving postoperative glucose management.
A distributional reinforcement learning model for optimal glucose control after cardiac surgery.
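GLUCOSE is described as distributional offline RL; one common way to learn a return distribution is quantile regression, sketched below with a stochastic pinball-loss update. Whether GLUCOSE uses this exact variant is an assumption, and the toy return targets are synthetic:

```python
import numpy as np

def quantile_loss_grad(theta, target, taus):
    """Gradient of the pinball (quantile) loss w.r.t. predicted quantiles:
    d/d theta of |tau - 1{u<0}| * |u| with u = target - theta."""
    u = target - theta
    return np.where(u < 0, 1.0 - taus, -taus)

rng = np.random.default_rng(0)
taus = (np.arange(51) + 0.5) / 51    # 51 evenly spaced quantile levels
theta = np.zeros(51)                 # predicted return quantiles
for _ in range(2000):                # toy returns drawn from N(-0.6, 0.3)
    target = rng.normal(-0.6, 0.3)
    theta -= 0.05 * quantile_loss_grad(theta, target, taus)
print(theta[[0, 25, 50]])            # low, median, and high return quantiles
```

Learning the full distribution rather than a single expected return is what allows a dosing policy to be tuned for safety (e.g., penalizing the low-return tail that corresponds to hypoglycemia) instead of average performance alone.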
The increasing adoption of machine learning (ML) in fiber-reinforced polymer (FRP) composite design has led to a reliance on black-box models, which achieve high predictive accuracy but lack interpretability. Python symbolic regression (PySR) offers a solution by deriving explicit equations that reveal the governing mechanics of composite structures. This study focuses on hybrid FRP bolted connections, which are rapidly adopted in the industry but remain insufficiently addressed in academic research. To address this gap, a framework was developed to identify key design parameters and predict damage initiation loads by integrating experimental testing, finite element modeling (FEM), and ML. Feature selection and ML models analyzed the dataset, providing insights that guided PySR in deriving interpretable equations. Hybrid L-joint specimens were fabricated and tested to determine damage initiation loads, with results validating FEM models in ABAQUS. A design of experiments approach structured the dataset, and feature selection identified key factors influencing joint performance. ML models assessed dataset quality, with Huber regression emerging as the best-performing model. Based on insights from feature analysis and ML models, PySR derived a compact, interpretable equation that provided greater accuracy and deeper physical insights than the Huber model. This equation aids hybrid L-joint design by improving the understanding of damage initiation mechanics. Beyond predictive accuracy, the findings highlight the model's scalability to different bolt sizes, equally spaced rows of bolts, and stacking sequences. This study demonstrates the potential of interpretable ML in structural engineering applications, particularly for hybrid composite-metal joints, where transparent models are essential for design optimization and predictive accuracy.
Integrating machine learning and symbolic regression for predicting damage initiation in hybrid FRP bolted connections.
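A hedged sketch of fitting an interpretable equation with the PySR library mentioned above follows. The three input features and the synthetic target are invented for illustration; the study's actual inputs come from experiments and FEM:

```python
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
# Hypothetical design parameters, e.g. bolt diameter, spacing, thickness.
X = rng.uniform(0.5, 2.0, size=(100, 3))
# Synthetic "damage initiation load" with a known closed form to recover.
y = 3.0 * X[:, 0] * X[:, 1] + np.sqrt(X[:, 2])

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["sqrt", "square"],
)
model.fit(X, y)
print(model.get_best())  # most accurate-yet-compact discovered equation
```

The operator whitelist is the main design lever: restricting it to physically plausible operations is what keeps the discovered equation compact enough to read as mechanics rather than as another black box.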
Primary care physicians often feel pressure to rush through the seemingly endless patient care and administrative work we are faced with daily. In residency, I learned how to be efficient, how to juggle multiple things at once, and how to think quickly: all valuable skills. I received positive reinforcement for taking on more responsibilities and roles. By the end of residency, I had forgotten how to slow myself down. When I started my first job, my developing relationship with a new patient showed me just how crucial slowing down can be. In this essay, I reflect on my post-residency efforts to be more deliberate, patient, and mindful. I think about why, in our current medical landscape, it can feel so hard to slow down.
The Difficulty, and Power, of Slowing Down.
Pavlovian conditioning tasks have been used to identify the neural systems involved with learning cue-outcome relationships. In delay conditioning, the conditioned stimulus (CS) overlaps or co-terminates with the unconditioned stimulus (US). Prior studies demonstrate that dopamine in the nucleus accumbens (NAc) regulates behavioral responding during delay conditioning. Furthermore, the dopamine response to the CS reflects the relative value of the upcoming reward in these tasks. In contrast to delay conditioning, trace conditioning involves a "trace" period separating the end of the CS and the US delivery. While dopamine has been implicated in trace conditioning, no studies have examined how NAc dopamine responds to reward-related stimuli in these tasks. Here, we developed a within-subject trace conditioning task where distinct CSs signaled either a short trace period (5 s) or a long trace period (55 s) prior to food reward delivery. Male rats exhibited greater conditioned responding and a faster response latency to the Short Trace CS relative to the Long Trace CS. Voltammetry recordings in the NAc found that the CS-evoked dopamine response increased on Short Trace trials but decreased on Long Trace trials. Conversely, US-evoked dopamine responses were greater on Long Trace trials relative to Short Trace trials. The CS dopamine response correlated with the response latency and not with conditioned responding. Furthermore, the relationship between CS dopamine and latency was best explained by an exponential function. Our results collectively illustrate that the trace period is encoded by the bidirectional NAc dopamine response to the CS during Pavlovian conditioning.
Nucleus Accumbens Dopamine Encodes the Trace Period during Appetitive Pavlovian Conditioning.
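The exponential relationship reported above between CS-evoked dopamine and response latency is the kind of fit scipy handles directly. A minimal sketch; the data points below are synthetic placeholders, not the study's recordings:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, a, b, c):
    """Latency as an exponentially decaying function of CS dopamine."""
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(0)
dopamine = rng.uniform(0.0, 2.0, 40)   # CS-evoked DA, arbitrary units
latency = 3.0 * np.exp(-1.5 * dopamine) + 1.0 + 0.1 * rng.standard_normal(40)

params, _ = curve_fit(exp_model, dopamine, latency, p0=(1.0, 1.0, 1.0))
print("a=%.2f b=%.2f c=%.2f" % tuple(params))
```

Comparing this fit against a linear alternative (e.g., by AIC) is the standard way to support the claim that an exponential function best explains the dopamine-latency relationship.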
In recent years, researchers have integrated the historically separate reinforcement learning (RL) and evidence-accumulation-to-bound approaches to decision modeling. A particular outcome of these efforts has been the RL-DDM, a model that combines value learning through reinforcement with a diffusion decision model (DDM). While the RL-DDM is a conceptually elegant extension of the original DDM, it faces a similar problem to the DDM in that it does not scale well to decisions with more than two options. Furthermore, in its current form, the RL-DDM lacks flexibility when it comes to adapting to rapid, context-cued changes in the reward environment. The question of how best to extend combined RL and DDM models so they can handle multiple choices remains open. Moreover, it is currently unclear how these algorithmic solutions should map onto neurophysical processes in the brain, particularly in relation to so-called go/no-go-type models of decision making in the basal ganglia. Here, we propose a solution that addresses these issues by combining a previously proposed decision model, based on the multichoice sequential probability ratio test (MSPRT), with a dual-pathway model of decision threshold learning in the basal ganglia. Our model learns decision thresholds to optimize the trade-off between time cost and the cost of errors, and so efficiently allocates the amount of time for decision deliberation. In addition, the model is context dependent and hence flexible to changes in the speed-accuracy trade-off (SAT) demanded by the environment. Furthermore, the model reproduces the magnitude effect, a phenomenon seen experimentally in value-based decisions, and is agnostic to the type of evidence, so it can be used on perceptual decisions, value-based decisions, and other types of modeled evidence. The broader significance of the model is that it contributes to the active research area of how learning systems interact by linking the previously separate RL-DDM models to dopaminergic models of motivation and risk taking in the basal ganglia, as well as scaling to multiple alternatives.
Decision Threshold Learning in the Basal Ganglia for Multiple Alternatives.
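The MSPRT at the core of the model above accumulates evidence for each alternative and stops when the leading posterior clears a threshold. A minimal numpy sketch with a fixed threshold; in the model itself the thresholds are learned by the basal ganglia pathways, and the Gaussian evidence rates here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.0, 0.2, 0.5])   # evidence rates for three alternatives
true = 2                            # alternative actually generating data
log_post = np.log(np.ones(3) / 3)   # uniform log-prior over alternatives
threshold = np.log(0.95)            # decide once a posterior exceeds 0.95

for t in range(1, 10_000):
    x = rng.normal(means[true], 1.0)             # one evidence sample
    log_post += -(x - means) ** 2 / 2            # Gaussian log-likelihoods
    log_post -= np.logaddexp.reduce(log_post)    # renormalize posterior
    if log_post.max() > threshold:
        break
print(f"chose alternative {log_post.argmax()} after {t} samples")
```

Raising the threshold trades speed for accuracy, which is exactly the quantity the dual-pathway learning rule adjusts when the context cues a different speed-accuracy regime.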
Brain tumors pose a severe health risk, often leading to fatal outcomes if not detected early. While most studies focus on improving classification accuracy, this research emphasizes prediction certainty, quantified through loss values. Traditional metrics like accuracy and precision do not capture confidence in predictions, which is critical for medical applications. This study establishes a correlation between lower loss values and higher prediction certainty, ensuring more reliable tumor classification. We evaluate CNN, ResNet50, XceptionNet, and a Proposed Model (VGG19 with customized classification layers) using accuracy, precision, recall, and loss. Results show that while accuracy remains comparable across models, the Proposed Model achieves the best performance (96.95% accuracy, 0.087 loss), outperforming others in both precision and recall. These findings demonstrate that certainty-aware AI models are essential for reliable clinical decision-making. This study highlights the potential of AI to bridge the shortage of medical professionals by integrating reliable diagnostic tools in healthcare. AI-powered systems can enhance early detection and improve patient outcomes, reinforcing the need for certainty-driven AI adoption in medical imaging.
Beyond Accuracy: Evaluating certainty of AI models for brain tumour detection.
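As a toy illustration of the loss-certainty link reported above (our own sketch, not the study's code), the snippet below computes per-sample cross-entropy loss and softmax confidence for hypothetical tumour-class logits; a near-uniform output yields high loss and low certainty even when the arg-max class happens to be correct.

```python
import numpy as np

def per_sample_certainty(logits, labels):
    """Per-sample cross-entropy loss and max softmax probability.
    Lower loss means more probability mass on the true class,
    i.e. a more certain correct prediction."""
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max(axis=1, keepdims=True)   # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(labels)), labels])
    return loss, probs.max(axis=1)

# Hypothetical 4-class logits (e.g. glioma/meningioma/pituitary/no-tumour):
logits = [[6.0, 0.5, 0.2, 0.1],   # confident prediction -> low loss
          [1.2, 1.0, 0.9, 0.8]]   # near-uniform -> high loss, low certainty
loss, conf = per_sample_certainty(logits, labels=[0, 0])
print(loss.round(3), conf.round(3))
```

Both toy samples are classified "correctly" by arg-max, yet only the first would count as a certain prediction under the loss-based criterion the abstract advocates.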
Schizophrenia presents significant treatment challenges, particularly due to medication resistance observed in some patients receiving antipsychotics. Emerging research suggests a potential link between impaired reinforcement learning, the severity of psychotic symptoms, and dopamine system abnormalities. Exploring reinforcement learning in therapeutic settings could provide critical insights into the efficacy of antipsychotic treatments. This study aimed to investigate whether neurocognitive profiles, specifically choice strategies and model-fitting parameters assessed using the Dynamic Reward Task (DRT), could provide insights into treatment response variability among patients with schizophrenia. We conducted a comprehensive neurocognitive assessment on chronic schizophrenia patients experiencing psychotic relapse, categorized by treatment response (high-response vs low-response). Participants underwent DRT, Wisconsin Card Sorting Test (WCST), and Continuous Performance Test (CPT) to evaluate reward processing, executive function, and sustained attention, respectively. We employed statistical analyses to compare task performance between groups and assess changes before and after antipsychotic treatment. We identified significant differences in treatment effects across different response groups in DRT scores, choice strategies, and model-fitting parameters. Conversely, all schizophrenia groups had consistent abnormalities on the WCST and CPT evaluations compared to controls. Our findings highlight the efficacy of DRT, WCST, and CPT in delineating neurocognitive profiles relevant to treatment response in schizophrenia. Specifically, the DRT effectively differentiated between high- and low-response patients. Distinct deficits in reward processing and executive function identified here may serve as potential indicators, informing personalized treatment strategies tailored to individual responses to antipsychotic medication.
Advancing Understanding of Treatment Response in Schizophrenia With Psychosis Using a Novel Dynamic Reward Task.
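The abstract does not specify the DRT model-fitting procedure, but a common recipe for extracting "model-fitting parameters" from such tasks is maximum-likelihood estimation of a delta-rule Q-learning model with a softmax policy. The sketch below, with made-up choice and reward vectors, shows the general shape of that analysis; it is an assumed stand-in, not the study's actual model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_log_lik(params, choices, rewards, n_options=2):
    """Negative log-likelihood of a choice sequence under delta-rule
    Q-learning with a softmax policy (alpha = learning rate,
    beta = inverse temperature)."""
    alpha, beta = params
    q = np.zeros(n_options)
    nll = 0.0
    for c, r in zip(choices, rewards):
        logp = beta * q - logsumexp(beta * q)   # softmax log-probabilities
        nll -= logp[c]
        q[c] += alpha * (r - q[c])              # prediction-error update
    return nll

# Hypothetical usage: `choices` and `rewards` would come from a task like the
# DRT; the fitted alpha/beta are the kind of parameters compared across groups.
choices = [0, 0, 1, 0, 1, 1, 1]
rewards = [1, 0, 1, 0, 1, 1, 0]
fit = minimize(neg_log_lik, x0=[0.3, 2.0], args=(choices, rewards),
               bounds=[(1e-3, 1.0), (1e-3, 20.0)])
print(fit.x)  # estimated (learning rate, inverse temperature)
```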
Although motivation is central to engaging in physical activity, predicting an individual's level of motivation during the activity remains challenging. The objective of this study was to assess the feasibility of measuring motivation through brain recording methods during physical activity, with a specific focus on cycling. The experiment employed the Effort Expenditure for Reward Task (EEfRT), a decision-making task based on effort and reward, conducted under two conditions: one involving cycling on an ergometer at moderate intensity and the other without cycling. The P300, an event-related potential linked to motivation, was recorded using electroencephalography. A total of 20 participants were recruited to complete the EEfRT, which involved making effort-based decisions of increasing difficulty in order to receive varying levels of monetary reward. The results demonstrated that the P300 amplitude was influenced by the act of cycling, exhibiting a reduction during the cycling session. This reduction may be explained by a reallocation of cognitive resources due to the exertion of physical effort, which is consistent with the transient hypofrontality theory. In terms of behaviour, participants demonstrated a tendency to make more challenging choices when the potential rewards were higher or the probability of gaining them was lower. This pattern was observed in both the cycling and non-cycling conditions. A positive correlation was identified between P300 amplitude and the proportion of difficult choices, particularly under conditions of low reward probability. This suggests that the P300 may serve as a neural marker of motivation. The study demonstrates the feasibility of using electroencephalography to monitor motivation during exercise in real time, with potential applications in rehabilitation settings. However, further research is required to refine the design and explore the effects of different exercise types on motivation.
A measure of event-related potentials (ERP) indices of motivation during cycling.
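For readers unfamiliar with the ERP pipeline, P300 amplitude is typically quantified as the mean voltage in a post-stimulus window (often roughly 250-500 ms at centro-parietal sites). The NumPy sketch below uses synthetic data and assumed window settings rather than the study's exact parameters; it only illustrates the computation.

```python
import numpy as np

def p300_mean_amplitude(epochs, times, tmin=0.25, tmax=0.50):
    """Mean amplitude in an assumed P300 window for each epoch.
    `epochs` is an (n_epochs, n_times) array for one channel (e.g. Pz),
    baseline-corrected; `times` holds sample times in seconds."""
    window = (times >= tmin) & (times <= tmax)
    return epochs[:, window].mean(axis=1)

# Synthetic example: 1 s epochs at 250 Hz with a P300-like bump near 350 ms.
times = np.arange(-0.2, 0.8, 1 / 250)   # 250 Hz sampling
rng = np.random.default_rng(3)
signal = 5e-6 * np.exp(-((times - 0.35) ** 2) / (2 * 0.05 ** 2))
epochs = signal + rng.normal(0, 2e-6, size=(40, times.size))
print(p300_mean_amplitude(epochs, times).mean())  # grand-average amplitude
```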
A zero-day vulnerability is a critical security weakness in software or hardware that has not yet been discovered, so neither the vendor nor the users are aware of it. Malicious actors may exploit such vulnerabilities to execute cyber-attacks with severe consequences for organizations and individuals, and because no one knows the weaknesses exist, they are difficult to detect and prevent. For real-time zero-day vulnerability detection, we introduce a novel reinforcement learning (RL) methodology based on Deep Q-Networks (DQN). RL, a subfield of machine learning, trains agents to take actions that maximize a cumulative reward signal; here, the agent learns to discover patterns and features in data relevant to zero-day discovery without any prior knowledge of the vulnerabilities themselves. Because it requires no knowledge of a specific vulnerability, the method can uncover hidden weak points; it is also applicable to systems with complex behaviour, adapts to new security threats in real time, copes with intricate state spaces, and offers scalability to cybersecurity personnel, surpassing traditional detection methods. A trained agent can thus detect and classify zero-day vulnerabilities in real time, making the approach a potentially powerful tool for detection and defense that should benefit security experts and researchers in cybersecurity. We evaluated the methodology using rigorous statistical metrics, training the model for 1000 episodes and forecasting with the trained model for a further 1000 episodes. The evaluation showed an Alpha of 0.10, indicating good forecast accuracy; a Beta of 0.00, indicating no bias in the forecast; and a Gamma of 0.00, indicating very high forecast precision. MASE was 3.91 and SMAPE was 1.59, indicating a very small percentage error, while an MAE of 6.34 and an RMSE of 10.22 indicate a relatively low average difference between actual and forecasted values. These results demonstrate the effectiveness of reinforcement learning models in solving complex problems and suggest that the model's accuracy improves as more training data are added.
Cybersecurity enhancements with reinforcement learning: A zero-day vulnerability identification perspective.
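The paper's state, action, and reward design is not detailed in the abstract, so the following PyTorch sketch is only a generic DQN skeleton of the kind described: a hypothetical 16-dimensional event-feature state, a binary ignore/flag action, an experience-replay buffer, and a periodically synced target network. All names and sizes here are assumptions for illustration.

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 16, 2   # hypothetical event features; actions: ignore / flag

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())   # re-sync every few hundred steps
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)                    # experience replay
gamma, eps = 0.99, 0.1

def act(state):
    """Epsilon-greedy action selection over Q-values."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size=64):
    """One DQN update: TD target computed from the frozen target network."""
    if len(buffer) < batch_size:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(buffer, batch_size)))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()

# Transitions are stored as tensors, e.g.:
# buffer.append((state, torch.tensor(action), torch.tensor(reward, dtype=torch.float32),
#                next_state, torch.tensor(float(done))))
```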
Grief is a reaction to loss that is observed across human cultures and even in other species. While the particular expressions of grief vary significantly, universal aspects include experiences of emotional pain and frequent remembering of what was lost. Despite its prevalence, and its obvious nature, considering grief from a functional perspective is puzzling: <i>Why</i> do we grieve? Why is it <i>painful</i>? And why is it sometimes prolonged enough to be clinically impairing? Using the framework of reinforcement learning with memory replay, we offer answers to these questions and suggest, counterintuitively, that grief may function to maximize future reward. That is, grieving may help to unlearn old habits so that alternative sources of reward can be found. We additionally perform a set of simulations that identify and explore optimal grieving parameters and use our model to account for empirical phenomena such as individual differences in human grief trajectories. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Adapting to loss: A computational model of grief.
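A minimal toy version of this idea (our reading of the model class, with invented values and update rules rather than the authors' simulations) can be written in a few lines: replaying memories of a lost reward source generates negative prediction errors, the computational analogue of emotional pain, that gradually unlearn the old valuation until an alternative source wins out.

```python
import numpy as np

# Two "sources of reward": the lost one (index 0) and an alternative (index 1).
q = np.array([1.0, 0.3])       # learned values before the loss
alpha, n_replays = 0.1, 60

pain = []                      # magnitude of negative prediction errors during replay
for _ in range(n_replays):
    delta = 0.0 - q[0]         # replayed memory now yields no reward
    q[0] += alpha * delta      # unlearn the old habit
    pain.append(-delta)        # "pain" is largest early in grieving

# Once q[0] decays below q[1], choice shifts to the alternative reward source.
print(q.round(2), f"peak 'pain' = {max(pain):.2f}")
```

In this sketch the replay rate and learning rate play the role of the "grieving parameters" the abstract describes optimizing: too little replay leaves the agent stuck on a worthless option, while faster unlearning shortens the painful transition.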
The desire to appear virtuous can motivate people to punish wrongdoers, a desirable outcome when punishment is clearly deserved. Yet claims that "virtue signaling" is fueling a culture of outrage suggest that reputation concerns may inspire even potentially unmerited punishment. Moreover, might reputation do <i>more</i> to drive punishment in ambiguous situations, where punishment is less clearly deserved, eroding punishers' sensitivity to moral nuance? Across eight studies focused on the U.S. political context (total <i>n</i> = 15,472 Americans from MTurk and Prolific), we show that reputation can drive ambiguously deserved punishment. In situations involving politicized moral transgressions, including those where the case for punishing the transgressor is judged to be relatively ambiguous, subjects expect punishers to be perceived positively by co-partisans, and punish at higher rates when punishing is observable to a co-partisan audience. Moreover, reputation can drive punishment in ambiguous situations even among individuals who personally question the morality of punishment, highlighting the power of reputation to push people away from their values. Yet we find no evidence that reputation erodes sensitivity to nuance by doing more to drive punishment in more ambiguous situations. Instead, subjects expect punishment to look better when more <i>unambiguously</i> deserved, and making punishment observable does as much or more to drive punishment in unambiguous as in ambiguous situations, even when the co-partisan audience is strongly ideological (and so might have been expected to encourage undiscerning punishment). We thus suggest that reputation can make people more punitive, even in ambiguous situations, but does not diminish sensitivity to nuance. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Punitive but discerning: Reputation can fuel ambiguously deserved punishment, but does not erode sensitivity to nuance.
The acquisition of clinical skills is a fundamental component of veterinary education, necessitating effective instructional methods that balance theoretical knowledge and practical application. Although this study primarily aimed to assess the effectiveness of clinical skills laboratory (CSL) training in skill development of first-year veterinary students, an emerging observation was the gender-based differences in skills acquisition and improvement. Given the limited existing research on this aspect, these findings contribute to the understanding of potential gender-related learning variations in surgical training. In this prospective, blinded, randomized clinical trial, 140 first-year veterinary students were tasked with basic suturing exercises. Performance scores demonstrated improvement across all assessed skills, with notable gains in suturing proficiency following CSL training. Students who participated in hands-on practice achieved significantly higher posttest scores compared with those who relied solely on online instruction, reinforcing the effectiveness of practical training. Notably, female students in both groups exhibited a statistically higher increase in performance scores than their male counterparts. These findings underscore the importance of practical, model-based training in CSL for fostering skills acquisition and reveal the impact of gender on skill development. This study contributes to the growing body of evidence supporting the integration of experiential learning into veterinary education and offers insights into optimizing training methods to enhance student outcomes.
Impact of Clinical Skills Laboratory Training and Online Education on Suture Skill Development in Veterinary Students: A Gender-Based Analysis.