Dataset columns (field: type, value range):
id: string, length 9 to 13
submitter: string, length 4 to 48
authors: string, length 4 to 9.62k
title: string, length 4 to 343
comments: string, length 2 to 480
journal-ref: string, length 9 to 309
doi: string, length 12 to 138
report-no: string, 277 distinct values
categories: string, length 8 to 87
license: string, 9 distinct values
orig_abstract: string, length 27 to 3.76k
versions: list, length 1 to 15
update_date: string, length 10 to 10
authors_parsed: list, length 1 to 147
abstract: string, length 24 to 3.75k
2001.09857
Ileana Jelescu
Yujian Diao, Ting Yin, Rolf Gruetter, Ileana O. Jelescu
An optimized pipeline for functional connectivity analysis in the rat brain
25 pages, 11 figures
null
10.3389/fnins.2021.602170
null
q-bio.NC physics.med-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Resting state functional MRI (rs-fMRI) is a widespread and powerful tool for investigating functional connectivity and brain disorders. However, functional connectivity analysis can be seriously affected by random and structured noise from non-neural sources such as physiology. Thus, it is essential to first reduce thermal noise and then correctly identify and remove non-neural artefacts from rs-fMRI signals through optimized data processing methods. However, existing tools that correct for these effects have been developed for the human brain and are not readily transposable to rat data. Therefore, the aim of the present study was to establish a data processing pipeline that can robustly remove random and structured noise from rat rs-fMRI data. It includes a novel denoising approach based on the Marchenko-Pastur Principal Component Analysis (MP-PCA) method, FMRIB's ICA-based Xnoiseifier (FIX) for automatic artefact classification and cleaning, and global signal regression. Our results show that: I) MP-PCA denoising substantially improves the temporal signal-to-noise ratio; II) the pre-trained FIX classifier achieves a high accuracy in artefact classification; III) both artefact cleaning and global signal regression are essential steps in minimizing the within-group variability in control animals and identifying functional connectivity changes in a rat model of sporadic Alzheimer's disease, as compared to controls.
[ { "created": "Mon, 27 Jan 2020 15:23:56 GMT", "version": "v1" }, { "created": "Fri, 4 Sep 2020 10:49:59 GMT", "version": "v2" } ]
2021-04-01
[ [ "Diao", "Yujian", "" ], [ "Yin", "Ting", "" ], [ "Gruetter", "Rolf", "" ], [ "Jelescu", "Ileana O.", "" ] ]
Resting state functional MRI (rs-fMRI) is a widespread and powerful tool for investigating functional connectivity and brain disorders. However, functional connectivity analysis can be seriously affected by random and structured noise from non-neural sources such as physiology. Thus, it is essential to first reduce thermal noise and then correctly identify and remove non-neural artefacts from rs-fMRI signals through optimized data processing methods. However, existing tools that correct for these effects have been developed for the human brain and are not readily transposable to rat data. Therefore, the aim of the present study was to establish a data processing pipeline that can robustly remove random and structured noise from rat rs-fMRI data. It includes a novel denoising approach based on the Marchenko-Pastur Principal Component Analysis (MP-PCA) method, FMRIB's ICA-based Xnoiseifier (FIX) for automatic artefact classification and cleaning, and global signal regression. Our results show that: I) MP-PCA denoising substantially improves the temporal signal-to-noise ratio; II) the pre-trained FIX classifier achieves a high accuracy in artefact classification; III) both artefact cleaning and global signal regression are essential steps in minimizing the within-group variability in control animals and identifying functional connectivity changes in a rat model of sporadic Alzheimer's disease, as compared to controls.
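A minimal sketch of the MP-PCA denoising step mentioned in the abstract above: project the voxel-by-time matrix onto its principal components and discard those whose eigenvalues fall below the Marchenko-Pastur upper bound for pure noise. This is an illustrative simplification that assumes the noise level sigma is known (the published method estimates it from the eigenvalue spectrum itself); it is not the authors' pipeline code.

```python
# Minimal illustrative sketch of Marchenko-Pastur PCA denoising.
# Assumption: the noise standard deviation `sigma` is known; the real
# MP-PCA method estimates it from the eigenvalue spectrum itself.
import numpy as np

def mp_pca_denoise(X, sigma):
    """Denoise an (n_voxels, n_timepoints) matrix X by zeroing principal
    components whose eigenvalues lie below the Marchenko-Pastur bound."""
    n, t = X.shape
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = s**2 / t                                 # sample covariance eigenvalues
    lam_plus = sigma**2 * (1 + np.sqrt(n / t))**2      # MP upper edge for pure noise
    keep = eigvals > lam_plus                          # signal-carrying components
    s_denoised = np.where(keep, s, 0.0)
    return (U * s_denoised) @ Vt + mean, int(keep.sum())

# Toy example: low-rank "signal" plus Gaussian noise.
rng = np.random.default_rng(0)
n_vox, n_tp, sigma = 500, 200, 1.0
signal = rng.normal(size=(n_vox, 3)) @ rng.normal(size=(3, n_tp)) * 2.0
noisy = signal + rng.normal(scale=sigma, size=(n_vox, n_tp))
denoised, n_kept = mp_pca_denoise(noisy, sigma)
print("components kept:", n_kept)
print("residual RMS before/after:",
      np.sqrt(((noisy - signal) ** 2).mean()),
      np.sqrt(((denoised - signal) ** 2).mean()))
```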
0707.2047
Maria Barbi
Aurelien Bancaud, Natalia Conde e Silva, Maria Barbi, Gaudeline Wagner, Jean-Francois Allemand, Julien Mozziconacci, Christophe Lavelle, Vincent Croquette, Jean-Marc Victor, Ariel Prunell and Jean-Louis Viovy
Structural plasticity of single chromatin fibers revealed by torsional manipulation
18 pages, 7 figures, Supplementary information available at http://www.nature.com/nsmb/journal/v13/n5/suppinfo/nsmb1087_S1.html
Nature Structural and Molecular Biology 13, 444-450, 2006
10.1038/nsmb1087
null
q-bio.BM physics.bio-ph q-bio.SC
null
Magnetic tweezers are used to study the mechanical response under torsion of single nucleosome arrays reconstituted on tandem repeats of 5S positioning sequences. Regular arrays are extremely resilient and can reversibly accommodate a large amount of supercoiling without much change in length. This behavior is quantitatively described by a molecular model of the chromatin 3-D architecture. In this model, we assume the existence of a dynamic equilibrium between three conformations of the nucleosome, which are determined by the crossing status of the entry/exit DNAs (positive, null or negative). Torsional strain, in displacing that equilibrium, extensively reorganizes the fiber architecture. The model explains a number of long-standing topological questions regarding DNA in chromatin, and may provide the ground to better understand the dynamic binding of most chromatin-associated proteins.
[ { "created": "Fri, 13 Jul 2007 15:56:45 GMT", "version": "v1" } ]
2007-08-01
[ [ "Bancaud", "Aurelien", "" ], [ "Silva", "Natalia Conde e", "" ], [ "Barbi", "Maria", "" ], [ "Wagner", "Gaudeline", "" ], [ "Allemand", "Jean-Francois", "" ], [ "Mozziconacci", "Julien", "" ], [ "Lavelle", "Christophe", "" ], [ "Croquette", "Vincent", "" ], [ "Victor", "Jean-Marc", "" ], [ "Prunell", "Ariel", "" ], [ "Viovy", "Jean-Louis", "" ] ]
Magnetic tweezers are used to study the mechanical response under torsion of single nucleosome arrays reconstituted on tandem repeats of 5S positioning sequences. Regular arrays are extremely resilient and can reversibly accommodate a large amount of supercoiling without much change in length. This behavior is quantitatively described by a molecular model of the chromatin 3-D architecture. In this model, we assume the existence of a dynamic equilibrium between three conformations of the nucleosome, which are determined by the crossing status of the entry/exit DNAs (positive, null or negative). Torsional strain, in displacing that equilibrium, extensively reorganizes the fiber architecture. The model explains a number of long-standing topological questions regarding DNA in chromatin, and may provide the ground to better understand the dynamic binding of most chromatin-associated proteins.
1306.1390
Sarbaz H. A. Khoshnaw
Sarbaz H. A. Khoshnaw
Reduction of Mathematical Models of Nuclear Receptor Binding to Promoter Regions
23 pages, 6 figures and 4 tables
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a kinetic model of Nuclear Receptor Binding to Promoter Regions. This model is written as a system of ordinary differential equations. Model reduction techniques have been used to simplify the chemical kinetics. In this case study, the pseudo-first-order approximation is applied to simplify the reaction rates. CellDesigner has been used to draw the structures of the chemical reactions of Nuclear Receptor Binding to Promoter Regions. After model reduction, the general analytical solution for the reduced model is given, and the numbers of species and reactions are reduced from 9 species and 6 reactions to 6 species and 5 reactions.
[ { "created": "Thu, 6 Jun 2013 12:21:19 GMT", "version": "v1" }, { "created": "Sat, 10 Aug 2013 18:51:00 GMT", "version": "v2" } ]
2013-08-13
[ [ "Khoshnaw", "Sarbaz H. A.", "" ] ]
We study a kinetic model of Nuclear Receptor Binding to Promoter Regions. This model is written as a system of ordinary differential equations. Model reduction techniques have been used to simplify the chemical kinetics. In this case study, the pseudo-first-order approximation is applied to simplify the reaction rates. CellDesigner has been used to draw the structures of the chemical reactions of Nuclear Receptor Binding to Promoter Regions. After model reduction, the general analytical solution for the reduced model is given, and the numbers of species and reactions are reduced from 9 species and 6 reactions to 6 species and 5 reactions.
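As an illustration of the pseudo-first-order idea invoked in the abstract above, the sketch below compares a full second-order binding reaction A + B -> C with its pseudo-first-order reduction when B is in large excess. The reaction and rate constants are hypothetical and not taken from the paper's nuclear-receptor model.

```python
# Illustrative sketch: pseudo-first-order approximation for A + B -> C
# when B is in large excess, so [B] ~ const and rate ~ k'[A] with k' = k*[B]0.
# The species and rate constant are hypothetical, not the paper's model.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.05            # second-order rate constant (1/(conc*time))
A0, B0 = 1.0, 50.0  # B in large excess over A

def full_model(t, y):
    A, B, C = y
    rate = k * A * B
    return [-rate, -rate, rate]

def reduced_model(t, y):
    A, C = y
    rate = (k * B0) * A   # pseudo-first-order rate constant k' = k*B0
    return [-rate, rate]

t_eval = np.linspace(0.0, 2.0, 50)
full = solve_ivp(full_model, (0.0, 2.0), [A0, B0, 0.0], t_eval=t_eval)
red = solve_ivp(reduced_model, (0.0, 2.0), [A0, 0.0], t_eval=t_eval)

# The reduced model tracks the full one closely because B changes by < 2%.
err = np.max(np.abs(full.y[0] - red.y[0]))
print(f"max |A_full - A_reduced| = {err:.4f}")
```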
2301.05307
Thomas F. Varley
Thomas F Varley, Maria Pope, Maria Grazia Puxeddu, Joshua Faskowitz, Olaf Sporns
Partial entropy decomposition reveals higher-order structures in human brain activity
null
null
null
null
q-bio.NC cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
The standard approach to modeling the human brain as a complex system is with a network, where the basic unit of interaction is a pairwise link between two brain regions. While powerful, this approach is limited by the inability to assess higher-order interactions involving three or more elements directly. In this work, we present a method for capturing higher-order dependencies in discrete data based on partial entropy decomposition (PED). Our approach decomposes the joint entropy of the whole system into a set of strictly non-negative partial entropy atoms that describe the redundant, unique, and synergistic interactions that compose the system's structure. We begin by showing how the PED can provide insights into the mathematical structure of both the FC network itself and established measures of higher-order dependency such as the O-information. When applied to resting state fMRI data, we find robust evidence of higher-order synergies that are largely invisible to standard functional connectivity analyses. This synergistic structure is distinct from the structural features based on redundancy that have previously dominated FC analyses. Our approach can also be localized in time, allowing a frame-by-frame analysis of how the distributions of redundancies and synergies change over the course of a recording. We find that different ensembles of regions can transiently change from being redundancy-dominated to synergy-dominated, and that the temporal pattern is structured in time. These results provide strong evidence that there exists a large space of unexplored structures in human brain data that have been largely missed by a focus on bivariate network connectivity models. These synergistic "shadow structures" are dynamic in time and will likely illuminate new and interesting links between brain and behavior.
[ { "created": "Thu, 12 Jan 2023 21:37:56 GMT", "version": "v1" } ]
2023-01-16
[ [ "Varley", "Thomas F", "" ], [ "Pope", "Maria", "" ], [ "Puxeddu", "Maria Grazia", "" ], [ "Faskowitz", "Joshua", "" ], [ "Sporns", "Olaf", "" ] ]
The standard approach to modeling the human brain as a complex system is with a network, where the basic unit of interaction is a pairwise link between two brain regions. While powerful, this approach is limited by the inability to assess higher-order interactions involving three or more elements directly. In this work, we present a method for capturing higher-order dependencies in discrete data based on partial entropy decomposition (PED). Our approach decomposes the joint entropy of the whole system into a set of strictly non-negative partial entropy atoms that describe the redundant, unique, and synergistic interactions that compose the system's structure. We begin by showing how the PED can provide insights into the mathematical structure of both the FC network itself and established measures of higher-order dependency such as the O-information. When applied to resting state fMRI data, we find robust evidence of higher-order synergies that are largely invisible to standard functional connectivity analyses. This synergistic structure is distinct from the structural features based on redundancy that have previously dominated FC analyses. Our approach can also be localized in time, allowing a frame-by-frame analysis of how the distributions of redundancies and synergies change over the course of a recording. We find that different ensembles of regions can transiently change from being redundancy-dominated to synergy-dominated, and that the temporal pattern is structured in time. These results provide strong evidence that there exists a large space of unexplored structures in human brain data that have been largely missed by a focus on bivariate network connectivity models. These synergistic "shadow structures" are dynamic in time and will likely illuminate new and interesting links between brain and behavior.
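The O-information cited in the abstract above, one of the higher-order dependency measures that the PED is related to, can be computed directly for small discrete systems. The sketch below implements the standard O-information formula on toy data (a redundant copy system versus a synergistic XOR system); it is not the authors' partial entropy decomposition code.

```python
# Illustrative sketch: O-information for discrete variables (not the
# authors' partial entropy decomposition code).
# Omega = (n - 2) * H(X) + sum_i [ H(X_i) - H(X without X_i) ]
# Positive values indicate redundancy-dominance, negative values synergy.
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of the rows of an (n_samples, n_vars) array."""
    counts = Counter(map(tuple, samples))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def o_information(X):
    n = X.shape[1]
    total = (n - 2) * entropy(X)
    for i in range(n):
        total += entropy(X[:, [i]]) - entropy(np.delete(X, i, axis=1))
    return total

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(20000, 1))

# Redundant system: three copies of the same bit  -> Omega > 0
redundant = np.hstack([bits, bits, bits])
# Synergistic system: two random bits and their XOR -> Omega < 0
a = rng.integers(0, 2, size=(20000, 1))
b = rng.integers(0, 2, size=(20000, 1))
synergistic = np.hstack([a, b, a ^ b])

print("O-information, redundant  :", round(o_information(redundant), 3))
print("O-information, synergistic:", round(o_information(synergistic), 3))
```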
2206.10700
Cooper Mellema
Cooper J. Mellema, Kevin P. Nguyen, Alex Treacher, Aixa Andrade Hernandez, Albert A. Montillo
Longitudinal Prognosis of Parkinsons Outcomes using Causal Connectivity
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Parkinson's disease (PD) is a movement disorder and the second most common neurodegenerative disease, but despite its relative abundance, there are no clinically accepted neuroimaging biomarkers to make prognostic predictions or differentiate between the similar atypical neurodegenerative diseases Multiple System Atrophy and Progressive Supranuclear Palsy. Abnormal connectivity in circuits including the motor circuit and basal ganglia has previously been shown to be an early marker of neurodegeneration. Therefore, we postulate that combined patterns of interregional dysconnectivity across the brain can be used to form a patient-specific predictive model of disease state and progression in PD. These models, which employ connectivity calculated from noninvasively measured functional MRI, differentially predict between PD and the atypical lookalikes, predict progression on a disease-specific scale, and predict cognitive decline. Further, we identify the connections most informative for progression and diagnosis. When predicting the one-year progression in the Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) and Montreal Cognitive Assessment (MoCA), mean absolute errors of 1.8 and 0.6 basis points in the prediction are achieved, respectively. A balanced accuracy of 0.68 is attained when distinguishing idiopathic PD versus the lookalikes and healthy controls. We additionally find network components strongly associated with the prognostic and diagnostic tasks, particularly incorporating connections within deep nuclei, motor regions, and the thalamus. These predictions, using an MRI modality readily available in most clinical settings, demonstrate the strong potential of fMRI connectivity as a prognostic biomarker in Parkinson's disease.
[ { "created": "Tue, 21 Jun 2022 19:44:49 GMT", "version": "v1" } ]
2022-06-23
[ [ "Mellema", "Cooper J.", "" ], [ "Nguyen", "Kevin P.", "" ], [ "Treacher", "Alex", "" ], [ "Hernandez", "Aixa Andrade", "" ], [ "Montillo", "Albert A.", "" ] ]
Parkinson's disease (PD) is a movement disorder and the second most common neurodegenerative disease, but despite its relative abundance, there are no clinically accepted neuroimaging biomarkers to make prognostic predictions or differentiate between the similar atypical neurodegenerative diseases Multiple System Atrophy and Progressive Supranuclear Palsy. Abnormal connectivity in circuits including the motor circuit and basal ganglia has previously been shown to be an early marker of neurodegeneration. Therefore, we postulate that combined patterns of interregional dysconnectivity across the brain can be used to form a patient-specific predictive model of disease state and progression in PD. These models, which employ connectivity calculated from noninvasively measured functional MRI, differentially predict between PD and the atypical lookalikes, predict progression on a disease-specific scale, and predict cognitive decline. Further, we identify the connections most informative for progression and diagnosis. When predicting the one-year progression in the Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) and Montreal Cognitive Assessment (MoCA), mean absolute errors of 1.8 and 0.6 basis points in the prediction are achieved, respectively. A balanced accuracy of 0.68 is attained when distinguishing idiopathic PD versus the lookalikes and healthy controls. We additionally find network components strongly associated with the prognostic and diagnostic tasks, particularly incorporating connections within deep nuclei, motor regions, and the thalamus. These predictions, using an MRI modality readily available in most clinical settings, demonstrate the strong potential of fMRI connectivity as a prognostic biomarker in Parkinson's disease.
1308.1988
Rodolfo Hermans
Rodolfo I. Hermans
Probability of observing a number of unfolding events while stretching poly-proteins
5 Pages, 5 figures
null
10.1021/la501161p
null
q-bio.BM cond-mat.soft physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mechanical stretching of single poly-proteins is an emerging tool for the study of protein (un)folding, chemical catalysis and polymer physics at the single molecule level. The observed processes, i.e. unfolding or reduction events, are typically considered to be stochastic and by their nature are susceptible to being censored by the finite duration of the experiment. Here we develop a formal analytical and experimental description of the number of observed events under various conditions of practical interest. We provide a rule of thumb to define the experiment protocol duration. Finally, we provide a methodology to accurately estimate the number of stretched molecules based on the number of observed unfolding events. Using this analysis on experimental data, we conclude for the first time that poly-ubiquitin binds at a random position both to the substrate and to the pulling probe and that observing all the existing modules is the least likely event.
[ { "created": "Thu, 8 Aug 2013 22:02:01 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2013 17:18:51 GMT", "version": "v2" } ]
2014-07-17
[ [ "Hermans", "Rodolfo I.", "" ] ]
The mechanical stretching of single poly-proteins is an emerging tool for the study of protein (un)folding, chemical catalysis and polymer physics at the single molecule level. The observed processes, i.e. unfolding or reduction events, are typically considered to be stochastic and by their nature are susceptible to being censored by the finite duration of the experiment. Here we develop a formal analytical and experimental description of the number of observed events under various conditions of practical interest. We provide a rule of thumb to define the experiment protocol duration. Finally, we provide a methodology to accurately estimate the number of stretched molecules based on the number of observed unfolding events. Using this analysis on experimental data, we conclude for the first time that poly-ubiquitin binds at a random position both to the substrate and to the pulling probe and that observing all the existing modules is the least likely event.
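A minimal sketch of the kind of censoring calculation discussed in the abstract above: if each of N folded modules unfolds independently with an exponential waiting time, the number of events observed within a finite recording window is binomial, and the probability of observing all modules follows directly. The rate, duration and module count below are arbitrary illustrative values, not the paper's data.

```python
# Illustrative sketch (arbitrary parameters, not the paper's data):
# each of N modules unfolds with rate k (exponential waiting time); within
# an experiment of duration T, each event is observed with p = 1 - exp(-k*T),
# so the observed count is Binomial(N, p) and some events are censored.
import numpy as np
from math import comb, exp

N = 9        # modules in the poly-protein (hypothetical)
k = 0.5      # unfolding rate per module, 1/s (hypothetical)
T = 5.0      # experiment duration, s

p = 1.0 - exp(-k * T)                 # P(a given module unfolds before T)
pmf = [comb(N, m) * p**m * (1 - p)**(N - m) for m in range(N + 1)]

print(f"P(observe a single event before T) = {p:.3f}")
print(f"P(observe all {N} events)          = {pmf[N]:.3f}")
print(f"expected number of observed events = {N * p:.2f}")

# Monte Carlo check of the binomial model.
rng = np.random.default_rng(2)
times = rng.exponential(scale=1.0 / k, size=(100000, N))
observed = (times < T).sum(axis=1)
print(f"simulated mean observed events      = {observed.mean():.2f}")
```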
1612.03834
Marco Alberto Javarone
Marco Alberto Javarone and Daniele Marinazzo
Evolutionary Dynamics of Group Formation
13 pages, 5 figures
null
10.1371/journal.pone.0187960
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a model, based on evolutionary game theory, for studying the dynamics of group formation. The latter constitutes a relevant phenomenon observed in different animal species, whose individuals tend to cluster together, forming groups of different size. Results of previous investigations suggest that this phenomenon might have similar reasons across different species, such as improving individual safety (e.g. from predators) and increasing the probability of getting food resources. Remarkably, the group size might strongly vary from species to species, and sometimes even within the same species. In the proposed model, an agent population tries to form homogeneous groups. The homogeneity of a group is computed according to a spin vector that characterizes each agent and represents a set of features (e.g. physical traits). We analyze the formation of groups of different size upon varying a parameter named 'individual payoff'. The latter represents the gain one agent would receive by acting individually. In particular, the agents choose whether to form a group (receiving a 'group payoff') or to play individually (receiving an 'individual payoff'). The phase diagram representing the equilibria of our population shows a sharp transition between the 'group phase' and the 'individual phase' at a critical 'individual payoff'. In addition, we found that forming (homogeneous) small groups is easier than forming big groups. To conclude, we deem that our model and the related results support the hypothesis that the phenomenon of group formation has evolutionary roots.
[ { "created": "Mon, 12 Dec 2016 18:26:52 GMT", "version": "v1" } ]
2018-02-07
[ [ "Javarone", "Marco Alberto", "" ], [ "Marinazzo", "Daniele", "" ] ]
We introduce a model, based on evolutionary game theory, for studying the dynamics of group formation. The latter constitutes a relevant phenomenon observed in different animal species, whose individuals tend to cluster together, forming groups of different size. Results of previous investigations suggest that this phenomenon might have similar reasons across different species, such as improving individual safety (e.g. from predators) and increasing the probability of getting food resources. Remarkably, the group size might strongly vary from species to species, and sometimes even within the same species. In the proposed model, an agent population tries to form homogeneous groups. The homogeneity of a group is computed according to a spin vector that characterizes each agent and represents a set of features (e.g. physical traits). We analyze the formation of groups of different size upon varying a parameter named 'individual payoff'. The latter represents the gain one agent would receive by acting individually. In particular, the agents choose whether to form a group (receiving a 'group payoff') or to play individually (receiving an 'individual payoff'). The phase diagram representing the equilibria of our population shows a sharp transition between the 'group phase' and the 'individual phase' at a critical 'individual payoff'. In addition, we found that forming (homogeneous) small groups is easier than forming big groups. To conclude, we deem that our model and the related results support the hypothesis that the phenomenon of group formation has evolutionary roots.
1402.6373
Satoru Morita
Satoru Morita and Jin Yoshimura
Disadvantages of Preferential Dispersals in Fluctuating Environments
null
J. Phys. Soc. Jpn. 84, 034801 (2015)
10.7566/JPSJ.84.034801
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has not been known whether preferential dispersal is adaptive in fluctuating environments. We investigate the effect of preferential and random dispersals in bet-hedging systems by using a discrete stochastic metapopulation model, where each site fluctuates between good and bad environments with temporal correlation. To explore the optimal migration pattern, an analytical estimation of the total growth is derived by mean field approximation. We found that the preference for fertile sites is disadvantageous when transportation among sites has a cost or the sensitivity of preference is high.
[ { "created": "Tue, 25 Feb 2014 23:27:48 GMT", "version": "v1" } ]
2015-11-12
[ [ "Morita", "Satoru", "" ], [ "Yoshimura", "Jin", "" ] ]
It has not been known whether preferential dispersal is adaptive in fluctuating environments. We investigate the effect of preferential and random dispersals in bet-hedging systems by using a discrete stochastic metapopulation model, where each site fluctuates between good and bad environments with temporal correlation. To explore the optimal migration pattern, an analytical estimation of the total growth is derived by mean field approximation. We found that the preference for fertile sites is disadvantageous when transportation among sites has a cost or the sensitivity of preference is high.
1512.09115
Peter Waddell
Peter J. Waddell
Expanded Distance-based Phylogenetic Analyses of Fossil Homo Skull Shape Evolution
null
null
null
null
q-bio.PE q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyses of a set of 47 fossil and 4 modern skulls using phylogenetic geometric morphometric methods corroborate and refine earlier results. These include evidence that the African Iwo Eleru skull, only about 12,000 years old, indeed represents a new species of near human. In contrast, the earliest known anatomically modern human skull, Qafzeh 9, the skull of Eve from Israel/Palestine, is validated as fully modern in form. Analyses clearly show evidence of archaic introgression into Gravettian, pre-Gravettian, Qafzeh, and Upper Cave (China) populations of near modern humans, and in about that order of increasing archaic content. The enigmatic Saldanha (Elandsfontein) skull emerges as a probable first representative of the lineage which, exclusive of Neanderthals, eventually led to modern humans. There is also evidence that the poorly dated Kabwe (Broken Hill) skull represents a much earlier distinct lineage. The clarity of the results bodes well for quantitative statistical phylogenetic methods making significant inroads into the stalemates of paleoanthropology.
[ { "created": "Wed, 30 Dec 2015 20:45:38 GMT", "version": "v1" } ]
2015-12-31
[ [ "Waddell", "Peter J.", "" ] ]
Analyses of a set of 47 fossil and 4 modern skulls using phylogenetic geometric morphometric methods corroborate and refine earlier results. These include evidence that the African Iwo Eleru skull, only about 12,000 years old, indeed represents a new species of near human. In contrast, the earliest known anatomically modern human skull, Qafzeh 9, the skull of Eve from Israel/Palestine, is validated as fully modern in form. Analyses clearly show evidence of archaic introgression into Gravettian, pre-Gravettian, Qafzeh, and Upper Cave (China) populations of near modern humans, and in about that order of increasing archaic content. The enigmatic Saldanha (Elandsfontein) skull emerges as a probable first representative of the lineage which, exclusive of Neanderthals, eventually led to modern humans. There is also evidence that the poorly dated Kabwe (Broken Hill) skull represents a much earlier distinct lineage. The clarity of the results bodes well for quantitative statistical phylogenetic methods making significant inroads into the stalemates of paleoanthropology.
2308.13035
Sanjana Srivastava
Shan Guleria, Benjamin Schwartz, Yash Sharma, Philip Fernandes, James Jablonski, Sodiq Adewole, Sanjana Srivastava, Fisher Rhoads, Michael Porter, Michelle Yeghyayan, Dylan Hyatt, Andrew Copland, Lubaina Ehsan, Donald Brown, Sana Syed
The intersection of video capsule endoscopy and artificial intelligence: addressing unique challenges using machine learning
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Introduction: Technical burdens and time-intensive review processes limit the practical utility of video capsule endoscopy (VCE). Artificial intelligence (AI) is poised to address these limitations, but the intersection of AI and VCE reveals challenges that must first be overcome. We identified five challenges to address. Challenge #1: VCE data are stochastic and contain significant artifact. Challenge #2: VCE interpretation is cost-intensive. Challenge #3: VCE data are inherently imbalanced. Challenge #4: Existing VCE AIMLT are computationally cumbersome. Challenge #5: Clinicians are hesitant to accept AIMLT that cannot explain their process. Methods: An anatomic landmark detection model was used to test the application of convolutional neural networks (CNNs) to the task of classifying VCE data. We also created a tool that assists in expert annotation of VCE data. We then created more elaborate models using different approaches including a multi-frame approach, a CNN based on graph representation, and a few-shot approach based on meta-learning. Results: When used on full-length VCE footage, CNNs accurately identified anatomic landmarks (99.1%), with gradient-weighted class activation mapping showing the parts of each frame that the CNN used to make its decision. The graph CNN with weakly supervised learning (accuracy 89.9%, sensitivity 91.1%), the few-shot model (accuracy 90.8%, precision 91.4%, sensitivity 90.9%), and the multi-frame model (accuracy 97.5%, precision 91.5%, sensitivity 94.8%) performed well. Discussion: Each of these five challenges is addressed, in part, by one of our AI-based models. Our goal of producing high performance using lightweight models that aim to improve clinician confidence was achieved.
[ { "created": "Thu, 24 Aug 2023 19:00:26 GMT", "version": "v1" } ]
2023-08-28
[ [ "Guleria", "Shan", "" ], [ "Schwartz", "Benjamin", "" ], [ "Sharma", "Yash", "" ], [ "Fernandes", "Philip", "" ], [ "Jablonski", "James", "" ], [ "Adewole", "Sodiq", "" ], [ "Srivastava", "Sanjana", "" ], [ "Rhoads", "Fisher", "" ], [ "Porter", "Michael", "" ], [ "Yeghyayan", "Michelle", "" ], [ "Hyatt", "Dylan", "" ], [ "Copland", "Andrew", "" ], [ "Ehsan", "Lubaina", "" ], [ "Brown", "Donald", "" ], [ "Syed", "Sana", "" ] ]
Introduction: Technical burdens and time-intensive review processes limit the practical utility of video capsule endoscopy (VCE). Artificial intelligence (AI) is poised to address these limitations, but the intersection of AI and VCE reveals challenges that must first be overcome. We identified five challenges to address. Challenge #1: VCE data are stochastic and contain significant artifact. Challenge #2: VCE interpretation is cost-intensive. Challenge #3: VCE data are inherently imbalanced. Challenge #4: Existing VCE AIMLT are computationally cumbersome. Challenge #5: Clinicians are hesitant to accept AIMLT that cannot explain their process. Methods: An anatomic landmark detection model was used to test the application of convolutional neural networks (CNNs) to the task of classifying VCE data. We also created a tool that assists in expert annotation of VCE data. We then created more elaborate models using different approaches including a multi-frame approach, a CNN based on graph representation, and a few-shot approach based on meta-learning. Results: When used on full-length VCE footage, CNNs accurately identified anatomic landmarks (99.1%), with gradient-weighted class activation mapping showing the parts of each frame that the CNN used to make its decision. The graph CNN with weakly supervised learning (accuracy 89.9%, sensitivity 91.1%), the few-shot model (accuracy 90.8%, precision 91.4%, sensitivity 90.9%), and the multi-frame model (accuracy 97.5%, precision 91.5%, sensitivity 94.8%) performed well. Discussion: Each of these five challenges is addressed, in part, by one of our AI-based models. Our goal of producing high performance using lightweight models that aim to improve clinician confidence was achieved.
1405.1444
Robert McGibbon
Robert T. McGibbon, Bharath Ramsundar, Mohammad M. Sultan, Gert Kiss, and Vijay S. Pande
Understanding Protein Dynamics with L1-Regularized Reversible Hidden Markov Models
null
Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 2014
null
null
q-bio.BM stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a machine learning framework for modeling protein dynamics. Our approach uses L1-regularized, reversible hidden Markov models to understand large protein datasets generated via molecular dynamics simulations. Our model is motivated by three design principles: (1) the requirement of massive scalability; (2) the need to adhere to relevant physical law; and (3) the necessity of providing accessible interpretations, critical for both cellular biology and rational drug design. We present an EM algorithm for learning and introduce a model selection criterion based on the physical notion of convergence in relaxation timescales. We contrast our model with standard methods in biophysics and demonstrate improved robustness. We implement our algorithm on GPUs and apply the method to two large protein simulation datasets generated respectively on the NCSA Bluewaters supercomputer and the Folding@Home distributed computing network. Our analysis identifies the conformational dynamics of the ubiquitin protein critical to cellular signaling, and elucidates the stepwise activation mechanism of the c-Src kinase protein.
[ { "created": "Tue, 6 May 2014 20:16:41 GMT", "version": "v1" } ]
2014-05-08
[ [ "McGibbon", "Robert T.", "" ], [ "Ramsundar", "Bharath", "" ], [ "Sultan", "Mohammad M.", "" ], [ "Kiss", "Gert", "" ], [ "Pande", "Vijay S.", "" ] ]
We present a machine learning framework for modeling protein dynamics. Our approach uses L1-regularized, reversible hidden Markov models to understand large protein datasets generated via molecular dynamics simulations. Our model is motivated by three design principles: (1) the requirement of massive scalability; (2) the need to adhere to relevant physical law; and (3) the necessity of providing accessible interpretations, critical for both cellular biology and rational drug design. We present an EM algorithm for learning and introduce a model selection criterion based on the physical notion of convergence in relaxation timescales. We contrast our model with standard methods in biophysics and demonstrate improved robustness. We implement our algorithm on GPUs and apply the method to two large protein simulation datasets generated respectively on the NCSA Bluewaters supercomputer and the Folding@Home distributed computing network. Our analysis identifies the conformational dynamics of the ubiquitin protein critical to cellular signaling, and elucidates the stepwise activation mechanism of the c-Src kinase protein.
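The relaxation-timescale criterion mentioned in the abstract above can be illustrated generically: for a transition matrix estimated at lag time tau, the implied relaxation timescales follow from its eigenvalues as t_i = -tau / ln(lambda_i), and one checks that they converge as tau grows. The 3-state matrix below is hypothetical and the sketch is not the authors' L1-regularized HMM code.

```python
# Illustrative sketch: implied relaxation timescales from a transition matrix,
# the quantity underlying the timescale-convergence model selection criterion.
# The 3-state matrix below is hypothetical, not from the paper's datasets.
import numpy as np

def implied_timescales(T, lag=1.0):
    """t_i = -lag / ln(lambda_i) for the non-stationary eigenvalues of T."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]  # 1 = stationary mode
    return -lag / np.log(eigvals[1:])

T = np.array([[0.95, 0.04, 0.01],
              [0.04, 0.90, 0.06],
              [0.01, 0.06, 0.93]])

for lag in (1, 2, 5):
    # The transition matrix at a longer lag is the matrix power T**lag.
    ts = implied_timescales(np.linalg.matrix_power(T, lag), lag=lag)
    print(f"lag {lag}: implied timescales = {np.round(ts, 2)}")
# For a truly Markovian model the timescales are independent of the lag time.
```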
1308.3700
Robert Patro
Rob Patro (1), Stephen M. Mount (2) and Carl Kingsford (1) ((1) Lane Center for Computational Biology, School of Computer Science, Carnegie Mellon University, (2) Department of Cell Biology and Molecular Genetics and Center for Bioinformatics and Computational Biology, University of Maryland)
Sailfish: Alignment-free Isoform Quantification from RNA-seq Reads using Lightweight Algorithms
28 pages, 2 main figures, 2 algorithm displays, 5 supplementary figures and 2 supplementary notes. Accompanying software available at http://www.cs.cmu.edu/~ckingsf/software/sailfish
null
10.1038/nbt.2862
null
q-bio.GN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA-seq has rapidly become the de facto technique to measure gene expression. However, the time required for analysis has not kept up with the pace of data generation. Here we introduce Sailfish, a novel computational method for quantifying the abundance of previously annotated RNA isoforms from RNA-seq data. Sailfish entirely avoids mapping reads, which is a time-consuming step in all current methods. Sailfish provides quantification estimates much faster than existing approaches (typically 20-times faster) without loss of accuracy.
[ { "created": "Fri, 16 Aug 2013 19:51:34 GMT", "version": "v1" } ]
2014-04-25
[ [ "Patro", "Rob", "" ], [ "Mount", "Stephen M.", "" ], [ "Kingsford", "Carl", "" ] ]
RNA-seq has rapidly become the de facto technique to measure gene expression. However, the time required for analysis has not kept up with the pace of data generation. Here we introduce Sailfish, a novel computational method for quantifying the abundance of previously annotated RNA isoforms from RNA-seq data. Sailfish entirely avoids mapping reads, which is a time-consuming step in all current methods. Sailfish provides quantification estimates much faster than existing approaches (typically 20-times faster) without loss of accuracy.
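A toy sketch of the alignment-free idea behind the abstract above: index transcripts by their k-mers, assign each read to the set of transcripts compatible with its k-mers without any alignment, and resolve ambiguous assignments with a simple EM over transcript abundances. The sequences and the plain EM are illustrative only; this is not Sailfish's actual algorithm, index or bias model.

```python
# Toy sketch of alignment-free abundance estimation: k-mer indexing plus a
# simple EM over transcript assignments. Illustrative only; not Sailfish's
# actual algorithm, data structures, or corrections.
from collections import defaultdict

K = 5
transcripts = {                       # hypothetical tiny transcriptome
    "tA": "ACGTACGTTTACGGA",
    "tB": "ACGTACGTCCACGGA",
}

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Index: k-mer -> set of transcripts containing it.
index = defaultdict(set)
for name, seq in transcripts.items():
    for km in kmers(seq):
        index[km].add(name)

def compatible(read):
    """Transcripts compatible with every indexed k-mer of the read (no alignment)."""
    hits = None
    for km in kmers(read):
        txs = index.get(km, set())
        if not txs:
            continue                   # skip k-mers absent from the index
        hits = txs if hits is None else (hits & txs)
    return hits or set()

reads = ["ACGTACGTT", "TTACGGA", "ACGTACGTC", "ACGTACG", "CCACGGA", "ACGTACGTC"]
classes = [c for c in (compatible(r) for r in reads) if c]

# EM: E-step splits ambiguous reads by current abundances; M-step renormalizes.
abund = {t: 1.0 / len(transcripts) for t in transcripts}
for _ in range(50):
    counts = {t: 0.0 for t in transcripts}
    for cls in classes:
        z = sum(abund[t] for t in cls)
        for t in cls:
            counts[t] += abund[t] / z
    total = sum(counts.values())
    abund = {t: c / total for t, c in counts.items()}

print({t: round(a, 3) for t, a in abund.items()})
```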
2008.01600
Donghyun Kim
Donghyun Kim
Factors involved in Cancer Screening Participation: Multilevel Mediation Model
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we identify the factors associated with cancer screening participation in Korea. We expand upon previous studies through a multilevel mediation model and a composite regional socioeconomic status index which combines education level and income level. Results of the model indicate that education level, nutritional education status and income level are significantly associated with cancer screening participation. With our findings in mind, we recommend that health authorities increase promotional health campaigns toward certain at-risk groups and expand the availability of nutrition education programs.
[ { "created": "Mon, 3 Aug 2020 08:54:04 GMT", "version": "v1" } ]
2020-08-05
[ [ "Kim", "Donghyun", "" ] ]
In this paper, we identify the factors associated with cancer screening participation in Korea. We expand upon previous studies through a multilevel mediation model and a composite regional socioeconomic status index which combines education level and income level. Results of the model indicate that education level, nutritional education status and income level are significantly associated with cancer screening participation. With our findings in mind, we recommend that health authorities increase promotional health campaigns toward certain at-risk groups and expand the availability of nutrition education programs.
1811.11042
Pieter Libin
Pieter Libin, Laurens Hernalsteen, Kristof Theys, Perpetua Gomes, Ana Abecasis, and Ann Nowe
Bayesian inference of set-point viral load transmission models
Accepted at BNAIC 2018 (Benelux AI conference)
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When modelling HIV epidemics, it is important to incorporate set-point viral load and its heritability. As set-point viral load distributions can differ significantly amongst epidemics, it is imperative to account for the observed local variation. This can be done by using a heritability model and fitting it to a local set-point viral load distribution. However, as the fitting procedure needs to take into account the actual transmission dynamics (i.e., social network, sexual behaviour), a complex model is required. Furthermore, in order to use the estimates in subsequent modelling analyses to inform prevention policies, it is important to assess parameter robustness. In order to fit set-point viral load models without the need to capture explicitly the transmission dynamics, we present a new protocol. Firstly, we approximate the transmission network from a phylogeny that was inferred from sequences collected in the local epidemic. Secondly, as this transmission network only comprises a single instance of the transmission network space, and our aim is to assess parameter robustness, we infer the transmission network distribution. Thirdly, we fit the parameters of the selected set-point viral load model on multiple samples from the transmission network distribution using approximate Bayesian inference. Our new protocol enables researchers to fit set-point viral load models in their local context, and diagnose the model parameter's uncertainty. Such parameter estimates are essential to enable subsequent modelling analyses, and thus crucial to improve prevention policies.
[ { "created": "Thu, 8 Nov 2018 13:41:20 GMT", "version": "v1" } ]
2018-11-28
[ [ "Libin", "Pieter", "" ], [ "Hernalsteen", "Laurens", "" ], [ "Theys", "Kristof", "" ], [ "Gomes", "Perpetua", "" ], [ "Abecasis", "Ana", "" ], [ "Nowe", "Ann", "" ] ]
When modelling HIV epidemics, it is important to incorporate set-point viral load and its heritability. As set-point viral load distributions can differ significantly amongst epidemics, it is imperative to account for the observed local variation. This can be done by using a heritability model and fitting it to a local set-point viral load distribution. However, as the fitting procedure needs to take into account the actual transmission dynamics (i.e., social network, sexual behaviour), a complex model is required. Furthermore, in order to use the estimates in subsequent modelling analyses to inform prevention policies, it is important to assess parameter robustness. In order to fit set-point viral load models without the need to capture explicitly the transmission dynamics, we present a new protocol. Firstly, we approximate the transmission network from a phylogeny that was inferred from sequences collected in the local epidemic. Secondly, as this transmission network only comprises a single instance of the transmission network space, and our aim is to assess parameter robustness, we infer the transmission network distribution. Thirdly, we fit the parameters of the selected set-point viral load model on multiple samples from the transmission network distribution using approximate Bayesian inference. Our new protocol enables researchers to fit set-point viral load models in their local context, and diagnose the model parameter's uncertainty. Such parameter estimates are essential to enable subsequent modelling analyses, and thus crucial to improve prevention policies.
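The approximate Bayesian inference step of the protocol above can be illustrated generically with rejection ABC: draw parameters from a prior, simulate data, and keep draws whose summary statistic lies within a tolerance of the observed one. The toy simulator below (a heritability-like regression slope between parent and child traits) is purely hypothetical and is not the authors' set-point viral load transmission model.

```python
# Illustrative sketch of rejection ABC for fitting a single parameter of a
# simulator; the toy "heritability" model is hypothetical, not the authors'
# set-point viral load transmission model.
import numpy as np

rng = np.random.default_rng(5)

def simulate(h, n=400):
    """Toy simulator: child trait = h * parent trait + noise; returns the
    summary statistic (sample regression slope of child on parent)."""
    parent = rng.normal(size=n)
    child = h * parent + np.sqrt(max(1.0 - h**2, 1e-9)) * rng.normal(size=n)
    return np.polyfit(parent, child, 1)[0]

true_h = 0.33
observed_stat = simulate(true_h)

# Rejection ABC: sample h from the prior, keep draws whose simulated summary
# statistic is within a tolerance of the observed one.
prior_draws = rng.uniform(0.0, 1.0, 20_000)
tolerance = 0.02
accepted = np.array([h for h in prior_draws
                     if abs(simulate(h) - observed_stat) < tolerance])

print(f"observed summary statistic: {observed_stat:.3f}")
print(f"accepted {accepted.size} draws; posterior mean h ~ {accepted.mean():.3f}, "
      f"95% interval ~ ({np.quantile(accepted, 0.025):.3f}, "
      f"{np.quantile(accepted, 0.975):.3f})")
```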
1402.6533
Victor Lakhno
A.S. Shigaev, O.A. Ponomarev, V.D. Lakhno
Theoretical and Experimental Investigations of DNA Open States
Translation into English of a revised version of the original article (v.1) published in Russian. Review paper, 106 pages
Mathematical biology and bioinformatics, 13 (S.), t162-t267 (2018)
10.17537/2018.13.t162
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Literature data on the properties of DNA open states are reviewed and analyzed. These states are formed as a result of strong DNA fluctuations and have a great impact on a number of biochemical processes; among them is charge transfer in DNA, for example. A comparative analysis of experimental data on the kinetics and thermodynamics of DNA open states for a wide temperature range was carried out. Discrepancies between the results of various experiments have been explained. Three types of DNA open states are recognized based on their differences in thermodynamic properties and other characteristics. Besides, an up-to-date definition of the term "open state" is given. A review is carried out for simple mathematical models of DNA in most of which the state of one pair is described by one or two variables. The main problems arising in theoretical investigations of heterogeneous DNA in the framework of models of this level are considered. The role of each group of models in interpretation of experimental data is discussed. Special consideration is given to the studies of the transfer and localization of the nucleotide pairs oscillations' energy by mechanical models. These processes are shown to play a key role in the dynamics of a heterogeneous duplex. Their theoretical interpretation is proven to be very important for the development of modern molecular biology and biophysics. The main features of the theoretical approaches are considered which enabled describing various experimental data. Prospects of the models' development are described, particular details of their optimization are suggested, and possible ways of modernization of some experimental techniques are discussed.
[ { "created": "Wed, 26 Feb 2014 13:18:34 GMT", "version": "v1" }, { "created": "Mon, 3 Feb 2020 08:36:05 GMT", "version": "v2" } ]
2020-02-04
[ [ "Shigaev", "A. S.", "" ], [ "Ponomarev", "O. A.", "" ], [ "Lakhno", "V. D.", "" ] ]
Literature data on the properties of DNA open states are reviewed and analyzed. These states are formed as a result of strong DNA fluctuations and have a great impact on a number of biochemical processes; among them is charge transfer in DNA, for example. A comparative analysis of experimental data on the kinetics and thermodynamics of DNA open states for a wide temperature range was carried out. Discrepancies between the results of various experiments have been explained. Three types of DNA open states are recognized based on their differences in thermodynamic properties and other characteristics. Besides, an up-to-date definition of the term "open state" is given. A review is carried out for simple mathematical models of DNA in most of which the state of one pair is described by one or two variables. The main problems arising in theoretical investigations of heterogeneous DNA in the framework of models of this level are considered. The role of each group of models in interpretation of experimental data is discussed. Special consideration is given to the studies of the transfer and localization of the nucleotide pairs oscillations' energy by mechanical models. These processes are shown to play a key role in the dynamics of a heterogeneous duplex. Their theoretical interpretation is proven to be very important for the development of modern molecular biology and biophysics. The main features of the theoretical approaches are considered which enabled describing various experimental data. Prospects of the models' development are described, particular details of their optimization are suggested, and possible ways of modernization of some experimental techniques are discussed.
2404.15355
Patricio Arru\'e Pa
Patricio Arrué, Kaveh Laksari, Nancy Sweitzer, Mindy Fain, Nima Toosizadeh
Frailty Assessment in Aortic Stenosis based on Dynamic Interconnection between Cardiac and Motor Systems
arXiv admin note: substantial text overlap with arXiv:2303.13591
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Aortic stenosis (AS) is the most common acquired valvar disease and is associated with increased risk for frailty. Frailty as a geriatric syndrome is associated with muscle weakness and compromised autonomic nervous system (ANS) performance in older adults. The purpose of the current work was to assess differences in both motor and ANS performance, and the interaction between them, as symptoms of frailty in community-dwelling older adults with and without AS. Results: Eighty-six participants were recruited, including 30 with (age=72$\pm$11, 10 non-frail and 20 pre-frail/frail) and 56 without AS (age=80$\pm$8, 12 non-frail and 44 pre-frail/frail). There was a significant difference in UEF motor score between older adults with and without AS (p<0.01, mean values of 0.57$\pm$0.25 and 0.48$\pm$0.23, respectively). Differences in UEF motor score were also observed between the frailty groups (p=0.02, mean values of 0.55$\pm$0.24 and 0.40$\pm$0.20 for pre-frail/frail and non-frail, respectively). CCM parameters showed significant differences between the frailty groups (p=0.02, mean CCM of 0.69$\pm$0.05 for non-frail and 0.54$\pm$0.03 for pre-frail/frail), but not between the AS groups (p>0.70). No significant interaction was observed between frailty and AS condition (p>0.08). Conclusion: Current findings suggest that ANS measures may be highly associated with frailty regardless of AS condition. Combining motor and HR dynamics parameters in a multimodal model may provide a promising tool for frailty assessment.
[ { "created": "Tue, 16 Apr 2024 20:58:15 GMT", "version": "v1" } ]
2024-04-25
[ [ "Arrué", "Patricio", "" ], [ "Laksari", "Kaveh", "" ], [ "Sweitzer", "Nancy", "" ], [ "Fain", "Mindy", "" ], [ "Toosizadeh", "Nima", "" ] ]
Background: Aortic stenosis (AS) is the most common acquired valvar disease and is associated with increased risk for frailty. Frailty as a geriatric syndrome is associated with muscle weakness and compromised autonomic nervous system (ANS) performance in older adults. The purpose of the current work was to assess differences in both motor and ANS performance, and the interaction between them, as symptoms of frailty in community-dwelling older adults with and without AS. Results: Eighty-six participants were recruited, including 30 with (age=72$\pm$11, 10 non-frail and 20 pre-frail/frail) and 56 without AS (age=80$\pm$8, 12 non-frail and 44 pre-frail/frail). There was a significant difference in UEF motor score between older adults with and without AS (p<0.01, mean values of 0.57$\pm$0.25 and 0.48$\pm$0.23, respectively). Differences in UEF motor score were also observed between the frailty groups (p=0.02, mean values of 0.55$\pm$0.24 and 0.40$\pm$0.20 for pre-frail/frail and non-frail, respectively). CCM parameters showed significant differences between the frailty groups (p=0.02, mean CCM of 0.69$\pm$0.05 for non-frail and 0.54$\pm$0.03 for pre-frail/frail), but not between the AS groups (p>0.70). No significant interaction was observed between frailty and AS condition (p>0.08). Conclusion: Current findings suggest that ANS measures may be highly associated with frailty regardless of AS condition. Combining motor and HR dynamics parameters in a multimodal model may provide a promising tool for frailty assessment.
1612.08084
William Bialek
Mariela D. Petkova, Gašper Tkačik, William Bialek, Eric F. Wieschaus, and Thomas Gregor
Optimal decoding of information from a genetic network
null
Cell 176, 844 (2019)
10.1016/j.cell.2019.01.007
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression levels carry information about signals that have functional significance for the organism. Using the gap gene network in the fruit fly embryo as an example, we show how this information can be decoded, building a dictionary that translates expression levels into a map of implied positions. The optimal decoder makes use of graded variations in absolute expression level, resulting in positional estimates that are precise to ~1% of the embryo's length. We test this optimal decoder by analyzing gap gene expression in embryos lacking some of the primary maternal inputs to the network. The resulting maps are distorted, and these distortions predict, with no free parameters, the positions of expression stripes for the pair-rule genes in the mutant embryos.
[ { "created": "Fri, 23 Dec 2016 20:33:57 GMT", "version": "v1" } ]
2024-01-23
[ [ "Petkova", "Mariela D.", "" ], [ "Tkačik", "Gašper", "" ], [ "Bialek", "William", "" ], [ "Wieschaus", "Eric F.", "" ], [ "Gregor", "Thomas", "" ] ]
Gene expression levels carry information about signals that have functional significance for the organism. Using the gap gene network in the fruit fly embryo as an example, we show how this information can be decoded, building a dictionary that translates expression levels into a map of implied positions. The optimal decoder makes use of graded variations in absolute expression level, resulting in positional estimates that are precise to ~1% of the embryo's length. We test this optimal decoder by analyzing gap gene expression in embryos lacking some of the primary maternal inputs to the network. The resulting maps are distorted, and these distortions predict, with no free parameters, the positions of expression stripes for the pair-rule genes in the mutant embryos.
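The decoding dictionary described in the abstract above can be illustrated with a generic maximum a posteriori decoder: given mean expression profiles and a noise level as functions of position, a Gaussian likelihood turns a measured expression vector into a posterior over implied position. The profiles and noise level below are synthetic stand-ins, not the measured gap gene data.

```python
# Illustrative sketch of positional decoding from expression levels with a
# Gaussian likelihood; synthetic profiles, not the measured gap-gene data.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 500)            # position along the embryo axis

# Synthetic "mean expression vs position" profiles for 4 hypothetical genes.
means = np.stack([
    1.0 / (1.0 + np.exp((x - 0.35) / 0.05)),        # anterior-high
    1.0 / (1.0 + np.exp(-(x - 0.65) / 0.05)),       # posterior-high
    np.exp(-((x - 0.45) / 0.10) ** 2),              # central bump
    np.exp(-((x - 0.75) / 0.08) ** 2),              # posterior bump
])                                                   # shape (4, 500)
sigma = 0.05                                         # expression noise level

def decode(g):
    """Posterior over position for an observed expression vector g (length 4),
    assuming independent Gaussian noise and a flat prior over position."""
    log_like = -0.5 * np.sum((g[:, None] - means) ** 2, axis=0) / sigma**2
    post = np.exp(log_like - log_like.max())
    return post / post.sum()

# Simulate a noisy readout at a true position and decode it.
true_idx = 300
g_obs = means[:, true_idx] + rng.normal(scale=sigma, size=4)
post = decode(g_obs)
x_map = x[np.argmax(post)]
x_mean = np.sum(post * x)
x_std = np.sqrt(np.sum(post * (x - x_mean) ** 2))
print(f"true x = {x[true_idx]:.3f}, MAP x = {x_map:.3f}, posterior sd = {x_std:.3f}")
```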
2308.02257
Jun Guo
Xingkun Niu, Feng Gao, Shaojie Hou, Shihao Liu, Xinmin Zhao, Jun Guo, Liping Wang, Feng Zhang
Enhancing Cell Proliferation and Migration by MIR-Carbonyl Vibrational Coupling: Insights from Transcriptome Profiling
20 pages, 5 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell proliferation and migration are closely related to normal tissue self-healing, so controlling them artificially is highly significant. Recently, vibrational strong coupling between biomolecules and mid-infrared (MIR) photons has been successfully used to modify in vitro bioreactions, neuronal signaling and even animal behavior. However, the synergistic effects from molecules to cells remain unclear, and the regulation of MIR light on cells needs to be explained at the molecular level. Herein, the proliferation rate and migration capacity of fibroblasts were increased by 156% and 162.5%, respectively, by vibrational coupling of 5.6 micrometer photons with carbonyl groups in biomolecules. Through transcriptome sequencing analysis, the regulatory mechanism of 5.6 micrometer infrared light was explained at the level of signaling pathways and cellular components. High-power 5.6 micrometer lasers can regulate cell function through vibrational strong coupling while minimizing photothermal damage. This work not only sheds light on the non-thermal effects of MIR light on wound healing, but also provides new evidence for future frequency medicine.
[ { "created": "Thu, 3 Aug 2023 08:45:47 GMT", "version": "v1" } ]
2023-08-07
[ [ "Niu", "Xingkun", "" ], [ "Gao", "Feng", "" ], [ "Hou", "Shaojie", "" ], [ "Liu", "Shihao", "" ], [ "Zhao", "Xinmin", "" ], [ "Guo", "Jun", "" ], [ "Wang", "Liping", "" ], [ "Zhang", "Feng", "" ] ]
Cell proliferation and migration are closely related to normal tissue self-healing, so controlling them artificially is highly significant. Recently, vibrational strong coupling between biomolecules and mid-infrared (MIR) photons has been successfully used to modify in vitro bioreactions, neuronal signaling and even animal behavior. However, the synergistic effects from molecules to cells remain unclear, and the regulation of MIR light on cells needs to be explained at the molecular level. Herein, the proliferation rate and migration capacity of fibroblasts were increased by 156% and 162.5%, respectively, by vibrational coupling of 5.6 micrometer photons with carbonyl groups in biomolecules. Through transcriptome sequencing analysis, the regulatory mechanism of 5.6 micrometer infrared light was explained at the level of signaling pathways and cellular components. High-power 5.6 micrometer lasers can regulate cell function through vibrational strong coupling while minimizing photothermal damage. This work not only sheds light on the non-thermal effects of MIR light on wound healing, but also provides new evidence for future frequency medicine.
0807.0076
Nicolas Destainville
Nicolas Destainville (LPT, IPBS), Aude Sauliere (IPBS), Laurence Salome (IPBS)
Comment to the Paper of Michael J. Saxton: "A Biological Interpretation of Transient Anomalous Subdiffusion. I. Qualitative Model"
To appear in Biophysical Journal
Biophys. J. 95, 3117 (2008)
10.1529/biophysj.108.136739
null
q-bio.SC cond-mat.soft q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent paper, Michael J. Saxton proposes to interpret as anomalous diffusion the occurrence of apparent transient sub-diffusive regimes in mean-squared displacement (MSD) plots, calculated from experimental trajectories of molecules diffusing in living cells, acquired by Single Particle (or Molecule) Tracking techniques (SPT or SMT). In this Comment, without questioning the existence of sub-diffusive behaviors, which certainly play a key role in a number of mechanisms in living systems, we point out that the data used by M. J. Saxton can equally well be fitted by a simple law, resulting from confined diffusion at short times with a slower free diffusion superimposed at longer times. When visualizing MSD plots, the transition from short-term diffusion confined in domains of size L to slower, longer-term free diffusion can be confused with anomalous diffusion over several orders of magnitude of time.
[ { "created": "Tue, 1 Jul 2008 07:49:56 GMT", "version": "v1" } ]
2009-11-13
[ [ "Destainville", "Nicolas", "", "LPT, IPBS" ], [ "Sauliere", "Aude", "", "IPBS" ], [ "Salome", "Laurence", "", "IPBS" ] ]
In a recent paper, Michael J. Saxton proposes to interpret as anomalous diffusion the occurrence of apparent transient sub-diffusive regimes in mean-squared displacement (MSD) plots, calculated from experimental trajectories of molecules diffusing in living cells, acquired by Single Particle (or Molecule) Tracking techniques (SPT or SMT). In this Comment, without questioning the existence of sub-diffusive behaviors, which certainly play a key role in a number of mechanisms in living systems, we point out that the data used by M. J. Saxton can equally well be fitted by a simple law, resulting from confined diffusion at short times with a slower free diffusion superimposed at longer times. When visualizing MSD plots, the transition from short-term diffusion confined in domains of size L to slower, longer-term free diffusion can be confused with anomalous diffusion over several orders of magnitude of time.
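The alternative law invoked in this Comment, short-time diffusion confined in domains of size L with a slower free diffusion superimposed at longer times, can be written down and its apparent log-log slope inspected. The 2D functional form and the parameter values below are assumptions made for illustration, not the exact expression or values fitted in the Comment.

```python
# Illustrative sketch of a "confinement + slow free diffusion" MSD law
# (2D form; the expression and the parameter values are assumptions for
# illustration, not taken from the Comment).
import numpy as np

L = 0.2          # confinement domain size (um), hypothetical
D_micro = 0.08   # short-time diffusion coefficient inside a domain (um^2/s)
D_macro = 0.002  # long-time ("free") diffusion coefficient (um^2/s)

def msd_confined_plus_free(t):
    tau = L**2 / (12.0 * D_micro)                     # equilibration time in a domain
    return (L**2 / 3.0) * (1.0 - np.exp(-t / tau)) + 4.0 * D_macro * t

t = np.logspace(-3, 2, 200)
msd = msd_confined_plus_free(t)

# Local log-log slope: it dips below 1 at intermediate times, which is what
# can be mistaken for transient anomalous subdiffusion (MSD ~ t^alpha, alpha < 1).
slope = np.gradient(np.log(msd), np.log(t))
print("minimum apparent exponent alpha ~", round(float(slope.min()), 2))
print("slope at earliest / latest times:",
      round(float(slope[0]), 2), round(float(slope[-1]), 2))
```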
2201.07960
Evan Johnson
Evan Johnson and Alan Hastings
Coexistence in spatiotemporally fluctuating environments
40 pages, 1 figure, 2 tables
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Ecologists have put forward many explanations for coexistence, but these are only partial explanations; nature is complex, so it is reasonable to assume that in any given ecological community, multiple mechanisms of coexistence are operating at the same time. Here, we present a methodology for quantifying the relative importance of different explanations for coexistence, based on an extension of Modern Coexistence Theory. Current versions of Modern Coexistence Theory only allow for the analysis of communities that are affected by spatial or temporal environmental variation, but not both. We show how to analyze communities with spatiotemporal fluctuations, how to parse the importance of spatial variation and temporal variation, and how to measure everything with either mathematical expressions or simulation experiments. Our extension of Modern Coexistence Theory allows empiricists to use realistic models and more data to better infer the mechanisms of coexistence in real communities.
[ { "created": "Thu, 20 Jan 2022 02:39:39 GMT", "version": "v1" } ]
2022-01-21
[ [ "Johnson", "Evan", "" ], [ "Hastings", "Alan", "" ] ]
Ecologists have put forward many explanations for coexistence, but these are only partial explanations; nature is complex, so it is reasonable to assume that in any given ecological community, multiple mechanisms of coexistence are operating at the same time. Here, we present a methodology for quantifying the relative importance of different explanations for coexistence, based on an extension of Modern Coexistence Theory. Current versions of Modern Coexistence Theory only allow for the analysis of communities that are affected by spatial or temporal environmental variation, but not both. We show how to analyze communities with spatiotemporal fluctuations, how to parse the importance of spatial variation and temporal variation, and how to measure everything with either mathematical expressions or simulation experiments. Our extension of Modern Coexistence Theory allows empiricists to use realistic models and more data to better infer the mechanisms of coexistence in real communities.
1410.0587
Alexey Mikaberidze
Alexey Mikaberidze, Christopher C. Mundt and Sebastian Bonhoeffer
The effect of spatial scales on the reproductive fitness of plant pathogens
29 pages, 6 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Plant diseases often cause serious yield losses in agriculture. A pathogen's reproductive fitness can be quantified by the basic reproductive number, R0. Since pathogen transmission between host plants depends on the spatial separation between them, R0 is strongly influenced by the spatial scales of pathogen dispersal and the spatial scales of the host population. The basic reproductive number was found to increase with the field size at small field sizes and to saturate to a constant value at large field sizes. It reaches a maximum in quadratic fields and decreases as the field becomes elongated. This pattern appears to be quite general: it holds for dispersal kernels that decrease exponentially or faster as well as for "fat-tailed" dispersal kernels that decrease more slowly than exponentially (i.e., power-law kernels). We used this approach to estimate R0 in wheat stripe rust (an important disease caused by the pathogen Puccinia striiformis), since disease gradients for this pathogen were thoroughly measured over large distances [Sackett and Mundt, Phytopathology, 95, 983 (2005)]. For the two largest datasets, we estimated R0 in the limit of large fields to be of the order of 50. These estimates are consistent with independent field observations [Cowger et al. (2005), Phytopathology, 95, 972-982; Farber et al. (2013), Phytopathology, 103, 41]. We present a proof of principle of a novel approach to estimate the basic reproductive number, R0, of plant pathogens using wheat stripe rust as a case study. We found that the spatial extent over which R0 changes strongly is quite fine-scaled (about 30 m of the linear extension of the field). Our results indicate that in order to optimize the spatial scale of deployment of fungicides or host resistances, the adjustments should be made at a fine spatial scale.
[ { "created": "Thu, 2 Oct 2014 15:25:42 GMT", "version": "v1" } ]
2014-10-03
[ [ "Mikaberidze", "Alexey", "" ], [ "Mundt", "Christopher C.", "" ], [ "Bonhoeffer", "Sebastian", "" ] ]
Plant diseases often cause serious yield losses in agriculture. A pathogen's reproductive fitness can be quantified by the basic reproductive number, R0. Since pathogen transmission between host plants depends on the spatial separation between them, R0 is strongly influenced by the spatial scales of pathogen dispersal and the spatial scales of the host population. The basic reproductive number was found to increase with the field size at small field sizes and to saturate to a constant value at large field sizes. It reaches a maximum in quadratic fields and decreases as the field becomes elongated. This pattern appears to be quite general: it holds for dispersal kernels that decrease exponentially or faster as well as for "fat-tailed" dispersal kernels that decrease more slowly than exponentially (i.e., power-law kernels). We used this approach to estimate R0 in wheat stripe rust (an important disease caused by the pathogen Puccinia striiformis), since disease gradients for this pathogen were thoroughly measured over large distances [Sackett and Mundt, Phytopathology, 95, 983 (2005)]. For the two largest datasets, we estimated R0 in the limit of large fields to be of the order of 50. These estimates are consistent with independent field observations [Cowger et al. (2005), Phytopathology, 95, 972-982; Farber et al. (2013), Phytopathology, 103, 41]. We present a proof of principle of a novel approach to estimate the basic reproductive number, R0, of plant pathogens using wheat stripe rust as a case study. We found that the spatial extent over which R0 changes strongly is quite fine-scaled (about 30 m of the linear extension of the field). Our results indicate that in order to optimize the spatial scale of deployment of fungicides or host resistances, the adjustments should be made at a fine spatial scale.
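The qualitative dependence of R0 on field size and shape described in this record can be reproduced with a generic numerical sketch: integrate an isotropic dispersal kernel over a rectangular field and average over source positions. The exponential kernel, its 10 m scale, and the overall transmission constant below are illustrative assumptions, not the values estimated in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

a = 10.0      # dispersal scale of the kernel (m); illustrative
beta = 60.0   # overall transmission constant; illustrative

def kernel(r):
    # isotropic 2-D exponential dispersal kernel, normalised over the plane
    return np.exp(-r / a) / (2.0 * np.pi * a**2)

def R0(Lx, Ly, n_src=300, n_grid=200):
    """Kernel mass landing inside an Lx-by-Ly field, averaged over random source positions."""
    xs, ys = rng.uniform(0, Lx, n_src), rng.uniform(0, Ly, n_src)
    gx, gy = np.linspace(0, Lx, n_grid), np.linspace(0, Ly, n_grid)
    X, Y = np.meshgrid(gx, gy, indexing="ij")
    cell = (gx[1] - gx[0]) * (gy[1] - gy[0])
    mass = sum(kernel(np.hypot(X - x0, Y - y0)).sum() * cell for x0, y0 in zip(xs, ys))
    return beta * mass / n_src

for L in (5, 10, 30, 100, 300):
    print(f"{L:3d} m square field: R0 ~ {R0(L, L):5.1f}")   # rises with field size, then saturates

# same area, different shape: elongation lowers R0
print("100 x 100 m:", round(R0(100, 100), 1), " vs 1000 x 10 m:", round(R0(1000, 10), 1))
```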
0712.1554
F. Cecconi
Carlo Guardiani, Fabio Cecconi, Roberto Livi
Computational analysis of folding and mutation properties of C5 domain from Myosin binding protein C
RevTeX, 10 pages, 9 eps-figures
null
null
null
q-bio.GN cond-mat.soft cond-mat.stat-mech q-bio.BM
null
Thermal folding Molecular Dynamics simulations of the domain C5 from Myosin Binding Protein C were performed using a native-centric model to study the role of three mutations related to Familial Hypertrophic Cardiomyopathy. Mutation of Asn755 causes the largest shift of the folding temperature, and the residue is located in the CFGA' beta-sheet featuring the highest Phi-values. The mutation thus appears to reduce the thermodynamic stability in agreement with experimental data. The mutations on Arg654 and Arg668, conversely, cause only a small change in the folding temperature, and they reside in the low Phi-value BDE beta-sheet, so that their pathologic role cannot be related to impairment of the folding process but possibly to the binding with target molecules. As the typical signature of Domain C5 is the presence of a longer and destabilizing CD-loop with respect to the other Ig-like domains, we completed the work with a bioinformatic analysis of this loop, showing a high density of negative charge and low hydrophobicity. This indicates the CD-loop as a natively unfolded sequence with a likely coupling between folding and ligand binding.
[ { "created": "Mon, 10 Dec 2007 17:56:05 GMT", "version": "v1" } ]
2007-12-11
[ [ "Guardiani", "Carlo", "" ], [ "Cecconi", "Fabio", "" ], [ "Livi", "Roberto", "" ] ]
Thermal folding Molecular Dynamics simulations of the domain C5 from Myosin Binding Protein C were performed using a native-centric model to study the role of three mutations related to Familial Hypertrophic Cardiomyopathy. Mutation of Asn755 causes the largest shift of the folding temperature, and the residue is located in the CFGA' beta-sheet featuring the highest Phi-values. The mutation thus appears to reduce the thermodynamic stability in agreement with experimental data. The mutations on Arg654 and Arg668, conversely, cause only a small change in the folding temperature, and they reside in the low Phi-value BDE beta-sheet, so that their pathologic role cannot be related to impairment of the folding process but possibly to the binding with target molecules. As the typical signature of Domain C5 is the presence of a longer and destabilizing CD-loop with respect to the other Ig-like domains, we completed the work with a bioinformatic analysis of this loop, showing a high density of negative charge and low hydrophobicity. This indicates the CD-loop as a natively unfolded sequence with a likely coupling between folding and ligand binding.
q-bio/0501014
Ophir Flomenbom
O. Flomenbom and J. Klafter
Single Stranded DNA Translocation Through a Fluctuating Nanopore
6 pages, 3 figures
J.T. Fourkas, P. Levitz, M. Urbakh, K.J. Wahl Eds. Dynamics in Small Confining Systems, MRS Proceedings (Boston, 2003)
null
null
q-bio.SC cond-mat.soft
null
We investigate the translocation of a single stranded DNA (ssDNA) through a pore, which fluctuates between two conformations, by using coupled master equations (ME). The probability density function (PDF) of the first passage times (FPT) of the translocation process is calculated, displaying a triple, double or mono-peaked behavior, depending on the system parameters. An analytical expression for the mean first passage time (MFPT) of the translocation process is derived, and provides an extensive characterization of the translocation process.
[ { "created": "Tue, 11 Jan 2005 23:54:07 GMT", "version": "v1" }, { "created": "Sat, 15 Jan 2005 13:21:14 GMT", "version": "v2" } ]
2007-05-23
[ [ "Flomenbom", "O.", "" ], [ "Klafter", "J.", "" ] ]
We investigate the translocation of a single stranded DNA (ssDNA) through a pore, which fluctuates between two conformations, by using coupled master equations (ME). The probability density function (PDF) of the first passage times (FPT) of the translocation process is calculated, displaying a triple, double or mono-peaked behavior, depending on the system parameters. An analytical expression for the mean first passage time (MFPT) of the translocation process is derived, and provides an extensive characterization of the translocation process.
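A generic numerical sketch of this kind of calculation (not the authors' specific pore model; the chain length, hopping rates, and switching rate below are arbitrary assumptions) builds coupled master equations for a particle hopping along N sites while the pore flips between two conformations, then obtains the first-passage-time density and the MFPT directly from the generator matrix:

```python
import numpy as np
from scipy.linalg import expm

N = 20       # transient positions of the chain inside the pore (illustrative)
lam = 0.5    # switching rate between the two pore conformations (illustrative)
hop = {0: (1.0, 0.3),   # conformation 0: (forward, backward) hopping rates
       1: (0.4, 0.3)}   # conformation 1: slower forward hopping

def idx(x, c):
    return 2 * x + c

n = 2 * N
Q = np.zeros((n, n))     # generator of dp/dt = Q p; column index = source state

for x in range(N):
    for c in (0, 1):
        i = idx(x, c)
        fwd, bwd = hop[c]
        if x + 1 < N:
            Q[idx(x + 1, c), i] += fwd   # forward hop within the pore
        Q[i, i] -= fwd                   # hopping out of the last site = translocation (absorbed)
        if x > 0:
            Q[idx(x - 1, c), i] += bwd   # backward hop (reflecting at the entrance)
            Q[i, i] -= bwd
        Q[idx(x, 1 - c), i] += lam       # conformational flip of the pore
        Q[i, i] -= lam

p0 = np.zeros(n)
p0[idx(0, 0)] = p0[idx(0, 1)] = 0.5      # start at the entrance, either conformation equally likely

ts = np.linspace(0.0, 150.0, 300)
fpt_pdf = np.array([-np.sum(Q @ (expm(Q * t) @ p0)) for t in ts])  # f(t) = -d/dt sum_i p_i(t)
mfpt = -np.sum(np.linalg.solve(Q, p0))                              # MFPT = -1^T Q^{-1} p0
print("mean first passage time:", mfpt)
print("FPT density peaks near t =", ts[np.argmax(fpt_pdf)])
```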
q-bio/0602010
Alain Destexhe
Michelle Rudolph and Alain Destexhe
On the use of analytic expressions for the voltage distribution to analyze intracellular recordings
13 pages, 5 figures
Neural Computation (in press, 2006)
null
null
q-bio.NC q-bio.QM
null
Different analytic expressions for the membrane potential distribution of membranes subject to synaptic noise have been proposed, and can be very helpful to analyze experimental data. However, all of these expressions are either approximations or limit cases, and it is not clear how they compare, and which expression should be used in a given situation. In this note, we provide a comparison of the different approximations available, with an aim to delineate which expression is most suitable for analyzing experimental data.
[ { "created": "Thu, 9 Feb 2006 20:31:02 GMT", "version": "v1" } ]
2007-05-23
[ [ "Rudolph", "Michelle", "" ], [ "Destexhe", "Alain", "" ] ]
Different analytic expressions for the membrane potential distribution of membranes subject to synaptic noise have been proposed, and can be very helpful to analyze experimental data. However, all of these expressions are either approximations or limit cases, and it is not clear how they compare, and which expression should be used in a given situation. In this note, we provide a comparison of the different approximations available, with an aim to delineate which expression is most suitable for analyzing experimental data.
1404.6420
Gabriel Kreiman
Jedediah M. Singer, Joseph R. Madsen, William S. Anderson, Gabriel Kreiman
Sensitivity to Timing and Order in Human Visual Cortex
10 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to how the brain encodes visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences between parts as small as 17 ms. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. This sensitivity to the order of stimulus presentation provides evidence that the brain may use differences in relative timing as a means of representing information.
[ { "created": "Fri, 25 Apr 2014 14:00:42 GMT", "version": "v1" } ]
2014-04-28
[ [ "Singer", "Jedediah M.", "" ], [ "Madsen", "Joseph R.", "" ], [ "Anderson", "William S.", "" ], [ "Kreiman", "Gabriel", "" ] ]
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to how the brain encodes visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences between parts as small as 17 ms. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. This sensitivity to the order of stimulus presentation provides evidence that the brain may use differences in relative timing as a means of representing information.
0810.4732
David Basanta
David Basanta, Matthias Simon, Haralambos Hatzikirou and Andreas Deutsch
Evolutionary game theory elucidates the role of glycolysis in glioma progression and invasion
Preprint of paper to be published in Cell Proliferation in December 2008
null
null
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumour progression has been described as a sequence of traits or phenotypes that cells have to acquire if the neoplasm is to become an invasive and malignant cancer. Although the genetic mutations that lead to these phenotypes are random, the process by which some of these mutations become successful and spread is influenced by the tumour microenvironment and the presence of other phenotypes. It is thus likely that some phenotypes that are essential in tumour progression will emerge in the tumour population only with the prior presence of other different phenotypes. In this paper we use evolutionary game theory to analyse the interactions between three different tumour cell phenotypes defined by autonomous growth, anaerobic glycolysis, and cancer cell invasion. The model allows one to understand certain specific aspects of glioma progression such as the emergence of diffuse tumour cell invasion in low-grade tumours. We find that the invasive phenotype is more likely to evolve after the appearance of the glycolytic phenotype, which would explain the ubiquitous presence of invasive growth in malignant tumours. The result suggests that therapies which increase the fitness cost of switching to anaerobic glycolysis might decrease the probability of the emergence of more invasive phenotypes.
[ { "created": "Mon, 27 Oct 2008 00:11:05 GMT", "version": "v1" } ]
2008-10-28
[ [ "Basanta", "David", "" ], [ "Simon", "Matthias", "" ], [ "Hatzikirou", "Haralambos", "" ], [ "Deutsch", "Andreas", "" ] ]
Tumour progression has been described as a sequence of traits or phenotypes that cells have to acquire if the neoplasm is to become an invasive and malignant cancer. Although the genetic mutations that lead to these phenotypes are random, the process by which some of these mutations become successful and spread is influenced by the tumour microenvironment and the presence of other phenotypes. It is thus likely that some phenotypes that are essential in tumour progression will emerge in the tumour population only with the prior presence of other different phenotypes. In this paper we use evolutionary game theory to analyse the interactions between three different tumour cell phenotypes defined by autonomous growth, anaerobic glycolysis, and cancer cell invasion. The model allows one to understand certain specific aspects of glioma progression such as the emergence of diffuse tumour cell invasion in low-grade tumours. We find that the invasive phenotype is more likely to evolve after the appearance of the glycolytic phenotype, which would explain the ubiquitous presence of invasive growth in malignant tumours. The result suggests that therapies which increase the fitness cost of switching to anaerobic glycolysis might decrease the probability of the emergence of more invasive phenotypes.
1511.08602
Stefanie Hirsch
Stefanie Hirsch, Angelika Manhart, Christian Schmeiser
Mathematical Modeling of Myosin Induced Bistability of Lamellipodial Fragments
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For various cell types and for lamellipodial fragments on flat surfaces, externally induced and spontaneous transitions between symmetric nonmoving states and polarized migration have been observed. This behavior is indicative of bistability of the cytoskeleton dynamics. In this work, the Filament Based Lamellipodium Model (FBLM), a two-dimensional, anisotropic, two-phase continuum model for the dynamics of the actin filament network in lamellipodia, is extended by a new description of actin-myosin interaction. For appropriately chosen parameter values, the resulting model has bistable dynamics with stable states showing the qualitative features observed in experiments. This is demonstrated by numerical simulations and by an analysis of a strongly simplified version of the FBLM with rigid filaments and planar lamellipodia at the cell front and rear.
[ { "created": "Fri, 27 Nov 2015 10:12:06 GMT", "version": "v1" } ]
2015-11-30
[ [ "Hirsch", "Stefanie", "" ], [ "Manhart", "Angelika", "" ], [ "Schmeiser", "Christian", "" ] ]
For various cell types and for lamellipodial fragments on flat surfaces, externally induced and spontaneous transitions between symmetric nonmoving states and polarized migration have been observed. This behavior is indicative of bistability of the cytoskeleton dynamics. In this work, the Filament Based Lamellipodium Model (FBLM), a two-dimensional, anisotropic, two-phase continuum model for the dynamics of the actin filament network in lamellipodia, is extended by a new description of actin-myosin interaction. For appropriately chosen parameter values, the resulting model has bistable dynamics with stable states showing the qualitative features observed in experiments. This is demonstrated by numerical simulations and by an analysis of a strongly simplified version of the FBLM with rigid filaments and planar lamellipodia at the cell front and rear.
1710.02563
Luca Mazzucato
Luca Mazzucato, Giancarlo La Camera, Alfredo Fontanini
Expectation-induced modulation of metastable activity underlies faster coding of sensory stimuli
37 pages, 4+3 figures; v2: improved results, 7 new supplementary figures; refs added
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sensory stimuli can be recognized more rapidly when they are expected. This phenomenon depends on expectation affecting the cortical processing of sensory information. However, virtually nothing is known about the mechanisms responsible for the effects of expectation on sensory networks. Here, we report a novel computational mechanism underlying the expectation-dependent acceleration of coding observed in the gustatory cortex (GC) of alert rats. We use a recurrent spiking network model with a clustered architecture capturing essential features of cortical activity, including the metastable activity observed in GC before and after gustatory stimulation. Relying both on network theory and computer simulations, we propose that expectation exerts its function by modulating the intrinsically generated dynamics preceding taste delivery. Our model, whose predictions are confirmed in the experimental data, demonstrates how the modulation of intrinsic metastable activity can shape sensory coding and mediate cognitive processes such as the expectation of relevant events. Altogether, these results provide a biologically plausible theory of expectation and ascribe a new functional role to intrinsically generated, metastable activity.
[ { "created": "Fri, 6 Oct 2017 19:30:20 GMT", "version": "v1" }, { "created": "Fri, 2 Nov 2018 17:53:21 GMT", "version": "v2" } ]
2018-11-05
[ [ "Mazzucato", "Luca", "" ], [ "La Camera", "Giancarlo", "" ], [ "Fontanini", "Alfredo", "" ] ]
Sensory stimuli can be recognized more rapidly when they are expected. This phenomenon depends on expectation affecting the cortical processing of sensory information. However, virtually nothing is known about the mechanisms responsible for the effects of expectation on sensory networks. Here, we report a novel computational mechanism underlying the expectation-dependent acceleration of coding observed in the gustatory cortex (GC) of alert rats. We use a recurrent spiking network model with a clustered architecture capturing essential features of cortical activity, including the metastable activity observed in GC before and after gustatory stimulation. Relying both on network theory and computer simulations, we propose that expectation exerts its function by modulating the intrinsically generated dynamics preceding taste delivery. Our model, whose predictions are confirmed in the experimental data, demonstrates how the modulation of intrinsic metastable activity can shape sensory coding and mediate cognitive processes such as the expectation of relevant events. Altogether, these results provide a biologically plausible theory of expectation and ascribe a new functional role to intrinsically generated, metastable activity.
1201.5082
Sayak Mukherjee
Michael Dworkin, Sayak Mukherjee, Ciriyam Jayaprakash and Jayajit Das
Dramatic reduction of dimensionality in large biochemical networks due to strong pair correlations
22 pages, 4 figures, The supplementary material is available at http://planetx.nationwidechildrens.org/~jayajit/, accepted for publication in the Journal of Royal Society Interface
null
null
null
q-bio.QM cond-mat.stat-mech q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The large dimensionality of high-throughput datasets pertaining to cell signaling and gene regulation renders it difficult to extract mechanisms underlying the complex kinetics involving various biochemical compounds (e.g., proteins, lipids). Data-driven models often circumvent this difficulty by using pair correlations of the protein expression levels to produce a small number (<10) of principal components, each a linear combination of the concentrations, to successfully model how cells respond to different stimuli. However, it is not understood if this reduction is specific to a particular biological system or to the nature of the stimuli used in these experiments. We study temporal changes in pair correlations described by the covariance matrix between different molecular species that evolve following deterministic mass action kinetics in large biologically relevant reaction networks and show that this dramatic reduction of dimensions (from hundreds to <5) arises from the strong correlations between different species at any time and is insensitive to the form of the nonlinear interactions, the network architecture, and the values of rate constants and concentrations over a wide range. We relate temporal changes in the eigenvalue spectrum of the covariance matrix to low-dimensional, local changes in directions of the trajectory embedded in much larger dimensions using elementary differential geometry. We illustrate how to extract biologically relevant insights, such as identifying significant time scales and groups of correlated chemical species, from our analysis. Our work provides for the first time a theoretical underpinning for the successful experimental analysis and points the way to extracting mechanisms from large-scale high-throughput data sets.
[ { "created": "Tue, 24 Jan 2012 18:37:17 GMT", "version": "v1" } ]
2012-01-25
[ [ "Dworkin", "Michael", "" ], [ "Mukherjee", "Sayak", "" ], [ "Jayaprakash", "Ciriyam", "" ], [ "Das", "Jayajit", "" ] ]
The large dimensionality of high-throughput datasets pertaining to cell signaling and gene regulation renders it difficult to extract mechanisms underlying the complex kinetics involving various biochemical compounds (e.g., proteins, lipids). Data-driven models often circumvent this difficulty by using pair correlations of the protein expression levels to produce a small number (<10) of principal components, each a linear combination of the concentrations, to successfully model how cells respond to different stimuli. However, it is not understood if this reduction is specific to a particular biological system or to the nature of the stimuli used in these experiments. We study temporal changes in pair correlations described by the covariance matrix between different molecular species that evolve following deterministic mass action kinetics in large biologically relevant reaction networks and show that this dramatic reduction of dimensions (from hundreds to <5) arises from the strong correlations between different species at any time and is insensitive to the form of the nonlinear interactions, the network architecture, and the values of rate constants and concentrations over a wide range. We relate temporal changes in the eigenvalue spectrum of the covariance matrix to low-dimensional, local changes in directions of the trajectory embedded in much larger dimensions using elementary differential geometry. We illustrate how to extract biologically relevant insights, such as identifying significant time scales and groups of correlated chemical species, from our analysis. Our work provides for the first time a theoretical underpinning for the successful experimental analysis and points the way to extracting mechanisms from large-scale high-throughput data sets.
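A toy illustration of this collapse of dimensionality (a minimal sketch built on an invented nonlinear activation cascade, not one of the biologically relevant networks analyzed in the paper) integrates deterministic, mass-action-like kinetics across an ensemble of "cells" that differ in initial abundances, and then inspects the eigenvalue spectrum of the species-species covariance matrix at several times:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

n_species, k_act, k_dec = 8, 1.5, 0.5   # toy cascade; illustrative rate constants

def rhs(t, x):
    # species i is activated by species i-1 (bilinear, mass-action-like) and decays
    dx = np.empty_like(x)
    dx[0] = -k_dec * x[0]
    dx[1:] = k_act * x[:-1] * (1.0 - x[1:]) - k_dec * x[1:]
    return dx

n_cells, times = 200, [1.0, 3.0, 6.0]
X0 = np.clip(rng.normal(0.5, 0.1, size=(n_cells, n_species)), 0.0, 1.0)  # cell-to-cell variability

traj = np.empty((len(times), n_cells, n_species))
for c in range(n_cells):
    sol = solve_ivp(rhs, (0.0, times[-1]), X0[c], t_eval=times, rtol=1e-8)
    traj[:, c, :] = sol.y.T

for k, t in enumerate(times):
    cov = np.cov(traj[k], rowvar=False)              # species-species covariance across cells
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]
    print(f"t = {t}: top 3 of {n_species} eigenvalues carry {ev[:3].sum() / ev.sum():.1%} of the variance")
```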
2007.06847
Po-Nan Li
Po-Nan Li and Saulo H. P. de Oliveira and Soichi Wakatsuki and Henry van den Bedem
Sequence-guided protein structure determination using graph convolutional and recurrent networks
6 pages, 5 figures; accepted to IEEE BIBE 2020
null
null
null
q-bio.BM cs.CE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single particle, cryogenic electron microscopy (cryo-EM) experiments now routinely produce high-resolution data for large proteins and their complexes. Building an atomic model into a cryo-EM density map is challenging, particularly when no structure for the target protein is known a priori. Existing protocols for this type of task often rely on significant human intervention and can take hours to many days to produce an output. Here, we present a fully automated, template-free model building approach that is based entirely on neural networks. We use a graph convolutional network (GCN) to generate an embedding from a set of rotamer-based amino acid identities and candidate 3-dimensional C$\alpha$ locations. Starting from this embedding, we use a bidirectional long short-term memory (LSTM) module to order and label the candidate identities and atomic locations consistent with the input protein sequence to obtain a structural model. Our approach paves the way for determining protein structures from cryo-EM densities at a fraction of the time of existing approaches and without the need for human intervention.
[ { "created": "Tue, 14 Jul 2020 06:24:07 GMT", "version": "v1" }, { "created": "Mon, 31 Aug 2020 13:59:01 GMT", "version": "v2" }, { "created": "Thu, 3 Sep 2020 02:25:28 GMT", "version": "v3" } ]
2020-09-04
[ [ "Li", "Po-Nan", "" ], [ "de Oliveira", "Saulo H. P.", "" ], [ "Wakatsuki", "Soichi", "" ], [ "Bedem", "Henry van den", "" ] ]
Single particle, cryogenic electron microscopy (cryo-EM) experiments now routinely produce high-resolution data for large proteins and their complexes. Building an atomic model into a cryo-EM density map is challenging, particularly when no structure for the target protein is known a priori. Existing protocols for this type of task often rely on significant human intervention and can take hours to many days to produce an output. Here, we present a fully automated, template-free model building approach that is based entirely on neural networks. We use a graph convolutional network (GCN) to generate an embedding from a set of rotamer-based amino acid identities and candidate 3-dimensional C$\alpha$ locations. Starting from this embedding, we use a bidirectional long short-term memory (LSTM) module to order and label the candidate identities and atomic locations consistent with the input protein sequence to obtain a structural model. Our approach paves the way for determining protein structures from cryo-EM densities at a fraction of the time of existing approaches and without the need for human intervention.
2108.10072
Matthias W\"odlinger
Matthias W\"odlinger, Michael Reiter, Lisa Weijler, Margarita Maurer-Granofszky, Angela Schumich, Elisa O. Sajaroff, Stefanie Groeneveld-Krentz, Jorge G.Rossi, Leonid Karawajew, Richard Ratei, Michael Dworzak
Automated Identification of Cell Populations in Flow Cytometry Data with Transformers
The article has been published as an open access article in the Journal for Computers in Biology and Medicine: https://doi.org/10.1016/j.compbiomed.2022.105314
Computers in Biology and Medicine, Vol. 144 (2022)
10.1016/j.compbiomed.2022.105314
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Acute Lymphoblastic Leukemia (ALL) is the most frequent hematologic malignancy in children and adolescents. A strong prognostic factor in ALL is given by the Minimal Residual Disease (MRD), which is a measure of the number of leukemic cells persisting in a patient. Manual MRD assessment from Multiparameter Flow Cytometry (FCM) data after treatment is time-consuming and subjective. In this work, we present an automated method to compute the MRD value directly from FCM data. We present a novel neural network approach based on the transformer architecture that learns to directly identify blast cells in a sample. We train our method in a supervised manner and evaluate it on publicly available ALL FCM data from three different clinical centers. Our method reaches a median F1 score of ~0.94 when evaluated on 519 B-ALL samples and shows better results than existing methods on 4 different datasets.
[ { "created": "Mon, 23 Aug 2021 11:10:38 GMT", "version": "v1" }, { "created": "Mon, 7 Mar 2022 09:31:02 GMT", "version": "v2" } ]
2022-03-08
[ [ "Wödlinger", "Matthias", "" ], [ "Reiter", "Michael", "" ], [ "Weijler", "Lisa", "" ], [ "Maurer-Granofszky", "Margarita", "" ], [ "Schumich", "Angela", "" ], [ "Sajaroff", "Elisa O.", "" ], [ "Groeneveld-Krentz", "Stefanie", "" ], [ "Rossi", "Jorge G.", "" ], [ "Karawajew", "Leonid", "" ], [ "Ratei", "Richard", "" ], [ "Dworzak", "Michael", "" ] ]
Acute Lymphoblastic Leukemia (ALL) is the most frequent hematologic malignancy in children and adolescents. A strong prognostic factor in ALL is given by the Minimal Residual Disease (MRD), which is a measure of the number of leukemic cells persisting in a patient. Manual MRD assessment from Multiparameter Flow Cytometry (FCM) data after treatment is time-consuming and subjective. In this work, we present an automated method to compute the MRD value directly from FCM data. We present a novel neural network approach based on the transformer architecture that learns to directly identify blast cells in a sample. We train our method in a supervised manner and evaluate it on publicly available ALL FCM data from three different clinical centers. Our method reaches a median F1 score of ~0.94 when evaluated on 519 B-ALL samples and shows better results than existing methods on 4 different datasets.
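As a rough sketch of this kind of architecture (a generic transformer over cytometry events; the marker count, layer sizes, and per-event head below are assumptions for illustration, not the authors' published network), each event's marker vector is embedded, events attend to one another within a sample, and a per-event head outputs a blast probability from which an MRD-like fraction can be summarized:

```python
import torch
import torch.nn as nn

class FCMTransformer(nn.Module):
    """Per-event (per-cell) classifier over a set of flow-cytometry events."""
    def __init__(self, n_markers=10, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_markers, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)       # per-event logit: blast vs. non-blast

    def forward(self, events):                  # events: (batch, n_events, n_markers)
        h = self.encoder(self.embed(events))
        return self.head(h).squeeze(-1)         # (batch, n_events) logits

# toy forward pass on one simulated sample of 1000 events with 10 markers
model = FCMTransformer()
sample = torch.randn(1, 1000, 10)
logits = model(sample)
blast_fraction = torch.sigmoid(logits).mean().item()   # crude MRD-like summary of the sample
print(logits.shape, blast_fraction)
```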
2204.06631
Jonas Ranft
Jonas Ranft and Benjamin Lindner
A self-consistent analytical theory for rotator networks under stochastic forcing: effects of intrinsic noise and common input
21 pages, 7 figures
null
10.1063/5.0096000
null
q-bio.NC nlin.AO
http://creativecommons.org/licenses/by/4.0/
Despite the incredible complexity of our brains' neural networks, theoretical descriptions of neural dynamics have led to profound insights into possible network states and dynamics. It remains challenging to develop theories that apply to spiking networks and thus allow one to characterize the dynamic properties of biologically more realistic networks. Here, we build on recent work by van Meegen & Lindner, who have shown that "rotator networks," while considerably simpler than real spiking networks and therefore more amenable to mathematical analysis, still allow one to capture dynamical properties of networks of spiking neurons. This framework can be easily extended to the case where individual units receive uncorrelated stochastic input which can be interpreted as intrinsic noise. However, the assumptions of the theory do not apply anymore when the input received by the single rotators is strongly correlated among units. As we show, in this case the network fluctuations become significantly non-Gaussian, which calls for a reworking of the theory. Using a cumulant expansion, we develop a self-consistent analytical theory that accounts for the observed non-Gaussian statistics. Our theory provides a starting point for further studies of more general network setups and information transmission properties of these networks.
[ { "created": "Wed, 13 Apr 2022 20:29:35 GMT", "version": "v1" } ]
2022-07-06
[ [ "Ranft", "Jonas", "" ], [ "Lindner", "Benjamin", "" ] ]
Despite the incredible complexity of our brains' neural networks, theoretical descriptions of neural dynamics have led to profound insights into possible network states and dynamics. It remains challenging to develop theories that apply to spiking networks and thus allow one to characterize the dynamic properties of biologically more realistic networks. Here, we build on recent work by van Meegen & Lindner, who have shown that "rotator networks," while considerably simpler than real spiking networks and therefore more amenable to mathematical analysis, still allow one to capture dynamical properties of networks of spiking neurons. This framework can be easily extended to the case where individual units receive uncorrelated stochastic input which can be interpreted as intrinsic noise. However, the assumptions of the theory do not apply anymore when the input received by the single rotators is strongly correlated among units. As we show, in this case the network fluctuations become significantly non-Gaussian, which calls for a reworking of the theory. Using a cumulant expansion, we develop a self-consistent analytical theory that accounts for the observed non-Gaussian statistics. Our theory provides a starting point for further studies of more general network setups and information transmission properties of these networks.
1311.4747
Wan-Chung Hu Dr.
Wan-Chung Hu
Sepsis is a syndrome with hyperactivity of TH17-like innate immunity and hypoactivity of adaptive immunity
null
null
null
null
q-bio.GN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Currently, there are two major theories for the pathogenesis of sepsis: hyperimmune and hypoimmune. The hyperimmune theory suggests that a cytokine storm causes the symptoms of sepsis. On the contrary, the hypoimmune theory suggests that immunosuppression causes the manifestations of sepsis. Using microarray analysis, this study suggests that hyperactivity of TH17-like innate immunity and failure of adaptive immunity occur in sepsis patients. I find that innate immunity-related genes are significantly up-regulated, including CD14, TLR1,2,4,5,8, HSP70, CEBP proteins, AP1(JUNB, FOSL2), TGF-{\beta}, IL-6, TGF-{\alpha}, CSF2 receptor, TNFRSF1A, S100A binding proteins, CCR2, formyl peptide receptor2, amyloid proteins, pentraxin, defensins, CLEC5A, whole complement machinery, CPD, NCF, MMP, neutrophil elastase, caspases, IgG and IgA Fc receptors (CD64, CD32), ALOX5, PTGS, LTB4R, LTA4H, and ICAM1. The majority of adaptive immunity genes are down-regulated, including MHC-related genes, TCR genes, granzymes/perforin, CD40, CD8, CD3, TCR signaling, BCR signaling, T & B cell specific transcription factors, NK killer receptors, and TH17 helper specific transcription factors (STAT3, RORA, REL). In addition, Treg-related genes are up-regulated, including TGF{\beta}, IL-15, STAT5B, SMAD2/4, CD36, and thrombospondin. Thus, both hyperimmune and hypoimmune mechanisms play important roles in the pathophysiology of sepsis.
[ { "created": "Tue, 19 Nov 2013 14:08:32 GMT", "version": "v1" } ]
2013-11-20
[ [ "Hu", "Wan-Chung", "" ] ]
Currently, there are two major theories for the pathogenesis of sepsis: hyperimmune and hypoimmune. The hyperimmune theory suggests that a cytokine storm causes the symptoms of sepsis. On the contrary, the hypoimmune theory suggests that immunosuppression causes the manifestations of sepsis. Using microarray analysis, this study suggests that hyperactivity of TH17-like innate immunity and failure of adaptive immunity occur in sepsis patients. I find that innate immunity-related genes are significantly up-regulated, including CD14, TLR1,2,4,5,8, HSP70, CEBP proteins, AP1(JUNB, FOSL2), TGF-{\beta}, IL-6, TGF-{\alpha}, CSF2 receptor, TNFRSF1A, S100A binding proteins, CCR2, formyl peptide receptor2, amyloid proteins, pentraxin, defensins, CLEC5A, whole complement machinery, CPD, NCF, MMP, neutrophil elastase, caspases, IgG and IgA Fc receptors (CD64, CD32), ALOX5, PTGS, LTB4R, LTA4H, and ICAM1. The majority of adaptive immunity genes are down-regulated, including MHC-related genes, TCR genes, granzymes/perforin, CD40, CD8, CD3, TCR signaling, BCR signaling, T & B cell specific transcription factors, NK killer receptors, and TH17 helper specific transcription factors (STAT3, RORA, REL). In addition, Treg-related genes are up-regulated, including TGF{\beta}, IL-15, STAT5B, SMAD2/4, CD36, and thrombospondin. Thus, both hyperimmune and hypoimmune mechanisms play important roles in the pathophysiology of sepsis.
2307.05377
Millard Sloan PhD
M. L. Sloan
The Role of Schwartz Measures in Human Tri-Color Vision
null
null
null
null
q-bio.NC cs.CV math.PR
http://creativecommons.org/licenses/by/4.0/
The human tri-color vision process may be characterized as follows: 1. A requirement of three scalar quantities to fully define a color (for example, intensity, hue, and purity), with 2. These scalar measures linear in the intensity of the incident light, allowing in general any specific color to be duplicated by an additive mixture of light from three standardized (basis) colors, 3. The exception being that the spectral colors are unique, in that they cannot be duplicated by any positive mixture of other colors. These characteristics strongly suggest that human color vision makes use of Schwartz measures in processing color data. This hypothesis is subject to test. In this brief paper, the results of this hypothesis are shown to be in good agreement with measured data.
[ { "created": "Thu, 22 Jun 2023 17:31:14 GMT", "version": "v1" } ]
2023-07-12
[ [ "Sloan", "M. L.", "" ] ]
The human tri-color vision process may be characterized as follows: 1. A requirement of three scalar quantities to fully define a color (for example, intensity, hue, and purity), with 2. These scalar measures linear in the intensity of the incident light, allowing in general any specific color to be duplicated by an additive mixture of light from three standardized (basis) colors, 3. The exception being that the spectral colors are unique, in that they cannot be duplicated by any positive mixture of other colors. These characteristics strongly suggest that human color vision makes use of Schwartz measures in processing color data. This hypothesis is subject to test. In this brief paper, the results of this hypothesis are shown to be in good agreement with measured data.
1010.0370
Carlos Espinosa-Soto
Carlos Espinosa-Soto, Olivier C. Martin and Andreas Wagner
Phenotypic robustness can increase phenotypic variability after non-genetic perturbations in gene regulatory circuits
11 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-genetic perturbations, such as environmental change or developmental noise, can induce novel phenotypes. If an induced phenotype confers a fitness advantage, selection may promote its genetic stabilization. Non-genetic perturbations can thus initiate evolutionary innovation. Genetic variation that is not usually phenotypically visible may play an important role in this process. Populations under stabilizing selection on a phenotype that is robust to mutations can accumulate such variation. After non-genetic perturbations, this variation can become a source of new phenotypes. We here study the relationship between a phenotype's robustness to mutations and a population's potential to generate novel phenotypic variation. To this end, we use a well-studied model of transcriptional regulation circuits. Such circuits are important in many evolutionary innovations. We find that phenotypic robustness promotes phenotypic variability in response to non-genetic perturbations, but not in response to mutation. Our work suggests that non-genetic perturbations may initiate innovation more frequently in mutationally robust gene expression traits.
[ { "created": "Sat, 2 Oct 2010 22:35:27 GMT", "version": "v1" } ]
2015-03-17
[ [ "Espinosa-Soto", "Carlos", "" ], [ "Martin", "Olivier C.", "" ], [ "Wagner", "Andreas", "" ] ]
Non-genetic perturbations, such as environmental change or developmental noise, can induce novel phenotypes. If an induced phenotype confers a fitness advantage, selection may promote its genetic stabilization. Non-genetic perturbations can thus initiate evolutionary innovation. Genetic variation that is not usually phenotypically visible may play an important role in this process. Populations under stabilizing selection on a phenotype that is robust to mutations can accumulate such variation. After non-genetic perturbations, this variation can become a source of new phenotypes. We here study the relationship between a phenotype's robustness to mutations and a population's potential to generate novel phenotypic variation. To this end, we use a well-studied model of transcriptional regulation circuits. Such circuits are important in many evolutionary innovations. We find that phenotypic robustness promotes phenotypic variability in response to non-genetic perturbations, but not in response to mutation. Our work suggests that non-genetic perturbations may initiate innovation more frequently in mutationally robust gene expression traits.
0910.0977
Noel Malod-Dognin
No\"el Malod-Dognin (INRIA - Irisa), Rumen Andonov (INRIA - Irisa), Nicola Yanev
Maximum Cliques in Protein Structure Comparison
null
null
null
RR-7053
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing the similarity between two protein structures is a crucial task in molecular biology, and has been extensively investigated. Many protein structure comparison methods can be modeled as maximum clique problems in specific k-partite graphs, referred to here as alignment graphs. In this paper, we propose a new protein structure comparison method based on internal distances (DAST), which is posed as a maximum clique problem in an alignment graph. We also design an algorithm (ACF) for solving such maximum clique problems. ACF is first applied in the context of VAST, a software tool widely used at the National Center for Biotechnology Information, and then in the context of DAST. The results obtained on real protein alignment instances show that our algorithm is more than 37000 times faster than the original VAST clique solver, which is based on the Bron & Kerbosch algorithm. We furthermore compare ACF with one of the fastest clique finders, recently developed by Ostergard. On a popular benchmark (the Skolnick set) we observe that ACF is about 20 times faster on average than Ostergard's algorithm.
[ { "created": "Tue, 6 Oct 2009 11:43:01 GMT", "version": "v1" } ]
2009-10-07
[ [ "Malod-Dognin", "Noël", "", "INRIA - Irisa" ], [ "Andonov", "Rumen", "", "INRIA - Irisa" ], [ "Yanev", "Nicola", "" ] ]
Computing the similarity between two protein structures is a crucial task in molecular biology, and has been extensively investigated. Many protein structure comparison methods can be modeled as maximum clique problems in specific k-partite graphs, referred to here as alignment graphs. In this paper, we propose a new protein structure comparison method based on internal distances (DAST), which is posed as a maximum clique problem in an alignment graph. We also design an algorithm (ACF) for solving such maximum clique problems. ACF is first applied in the context of VAST, a software tool widely used at the National Center for Biotechnology Information, and then in the context of DAST. The results obtained on real protein alignment instances show that our algorithm is more than 37000 times faster than the original VAST clique solver, which is based on the Bron & Kerbosch algorithm. We furthermore compare ACF with one of the fastest clique finders, recently developed by Ostergard. On a popular benchmark (the Skolnick set) we observe that ACF is about 20 times faster on average than Ostergard's algorithm.
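A small self-contained sketch of the alignment-graph idea (toy coordinates and a crude compatibility threshold, assumed purely for illustration; networkx's Bron-Kerbosch enumerator stands in for the dedicated solvers discussed in the record):

```python
import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

A = rng.normal(size=(8, 3))                       # toy structure A: 8 residue coordinates
B = A[:6] + rng.normal(scale=0.1, size=(6, 3))    # toy structure B: a perturbed fragment of A
tau = 0.5                                         # tolerance on internal-distance differences

def d(P, i, j):
    return float(np.linalg.norm(P[i] - P[j]))

# Alignment graph: vertex (i, k) means "residue i of A is matched to residue k of B";
# two vertices are connected if the matches are order-preserving and the internal
# distances agree within tau (a DAST-style compatibility criterion).
G = nx.Graph()
G.add_nodes_from(itertools.product(range(len(A)), range(len(B))))
for (i, k), (j, l) in itertools.combinations(G.nodes, 2):
    if (i - j) * (k - l) > 0 and abs(d(A, i, j) - d(B, k, l)) < tau:
        G.add_edge((i, k), (j, l))

# A maximum clique is a largest set of mutually compatible residue matches.
best = max(nx.find_cliques(G), key=len)           # Bron-Kerbosch maximal-clique enumeration
print("alignment size:", len(best))
print("matched pairs:", sorted(best))
```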
2405.15109
Herbert Sauro Dr
Lucian Smith and Herbert M Sauro
An Update to the SBML Human-Readable Antimony Language
null
null
null
null
q-bio.QM q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Antimony is a high-level, human-readable text-based language designed for defining and sharing models in the systems biology community. It enables scientists to describe biochemical networks and systems using a simple and intuitive syntax. It allows users to easily create, modify, and distribute reproducible computational models. By allowing the concise representation of complex biological processes, Antimony enhances collaborative efforts, improves reproducibility, and accelerates the iterative development of models in systems biology. This paper provides an update to the Antimony language since it was introduced in 2009. In particular, we highlight new annotation features, support for flux balance analysis, a new rateOf method, support for probability distributions and uncertainty, named stoichiometries, and algebraic rules. Antimony is also now distributed as a C/C++ library, together with Python and Julia bindings, as well as a JavaScript version for use within a web browser. Availability: https://github.com/sys-bio/antimony.
[ { "created": "Thu, 23 May 2024 23:54:41 GMT", "version": "v1" } ]
2024-05-27
[ [ "Smith", "Lucian", "" ], [ "Sauro", "Herbert M", "" ] ]
Antimony is a high-level, human-readable text-based language designed for defining and sharing models in the systems biology community. It enables scientists to describe biochemical networks and systems using a simple and intuitive syntax. It allows users to easily create, modify, and distribute reproducible computational models. By allowing the concise representation of complex biological processes, Antimony enhances collaborative efforts, improves reproducibility, and accelerates the iterative development of models in systems biology. This paper provides an update to the Antimony language since it was introduced in 2009. In particular, we highlight new annotation features, support for flux balance analysis, a new rateOf method, support for probability distributions and uncertainty, named stoichiometries, and algebraic rules. Antimony is also now distributed as a C/C++ library, together with Python and Julia bindings, as well as a JavaScript version for use within a web browser. Availability: https://github.com/sys-bio/antimony.
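A minimal sketch of what an Antimony model looks like in practice, loaded here through the tellurium Python package (using tellurium is an assumption of this example; the record itself only mentions the C/C++ library with Python, Julia, and JavaScript bindings):

```python
import tellurium as te   # assumed front end; tellurium bundles the Antimony parser

# One mass-action reaction S1 -> S2 with initial amounts and a rate constant.
antimony_model = """
model simple_decay
  J1: S1 -> S2; k1 * S1;
  S1 = 10; S2 = 0;
  k1 = 0.2;
end
"""

r = te.loada(antimony_model)     # parse the Antimony text into a simulatable model
result = r.simulate(0, 20, 50)   # time course from t = 0 to t = 20 in 50 points
print(result[:3])                # columns: time, [S1], [S2]
```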
0712.1973
Massimo Sandal
Massimo Sandal, Francesco Valle, Isabella Tessari, Stefano Mammi, Elisabetta Bergantino, Francesco Musiani, Marco Brucale, Luigi Bubacco, and Bruno Samori'
Conformational equilibria in monomeric alpha-synuclein at the single molecule level
37 pages, 9 figures (including supplementary material)
null
null
null
q-bio.BM physics.bio-ph q-bio.SC
null
Natively unstructured proteins defy the classical "one sequence-one structure" paradigm of protein science. Monomers of these proteins in pathological conditions can aggregate in the cell, a process that underlies socially relevant neurodegenerative diseases such as Alzheimer's and Parkinson's. A full comprehension of the formation and structure of the so-called misfolded intermediates from which the aggregated states ensue is still lacking. We characterized the folding and the conformational diversity of alpha-synuclein (aSyn), a natively unstructured protein involved in Parkinson's disease, by mechanically stretching single molecules of this protein and recording their mechanical properties. These experiments permitted us to directly observe and quantify three main classes of conformations that, under in vitro physiological conditions, exist simultaneously in the aSyn sample, including disordered and "beta-like" structures. We found that this class of "beta-like" structures is directly related to aSyn aggregation. In fact, their relative abundance increases drastically in three different conditions known to promote the formation of aSyn fibrils: the presence of Cu2+, the occurrence of the pathogenic A30P mutation, and high ionic strength. We expect that a critical concentration of aSyn with a "beta-like" structure must be reached to trigger fibril formation. This critical concentration is therefore controlled by a chemical equilibrium. Novel pharmacological strategies can now be tailored to act upstream, before the aggregation process ensues, by targeting this equilibrium. To this end, Single Molecule Force Spectroscopy can be an effective tool to tailor and test new pharmacological agents.
[ { "created": "Wed, 12 Dec 2007 16:29:12 GMT", "version": "v1" } ]
2007-12-13
[ [ "Sandal", "Massimo", "" ], [ "Valle", "Francesco", "" ], [ "Tessari", "Isabella", "" ], [ "Mammi", "Stefano", "" ], [ "Bergantino", "Elisabetta", "" ], [ "Musiani", "Francesco", "" ], [ "Brucale", "Marco", "" ], [ "Bubacco", "Luigi", "" ], [ "Samori'", "Bruno", "" ] ]
Natively unstructured proteins defy the classical "one sequence-one structure" paradigm of protein science. Monomers of these proteins in pathological conditions can aggregate in the cell, a process that underlies socially relevant neurodegenerative diseases such as Alzheimer's and Parkinson's. A full comprehension of the formation and structure of the so-called misfolded intermediates from which the aggregated states ensue is still lacking. We characterized the folding and the conformational diversity of alpha-synuclein (aSyn), a natively unstructured protein involved in Parkinson's disease, by mechanically stretching single molecules of this protein and recording their mechanical properties. These experiments permitted us to directly observe and quantify three main classes of conformations that, under in vitro physiological conditions, exist simultaneously in the aSyn sample, including disordered and "beta-like" structures. We found that this class of "beta-like" structures is directly related to aSyn aggregation. In fact, their relative abundance increases drastically in three different conditions known to promote the formation of aSyn fibrils: the presence of Cu2+, the occurrence of the pathogenic A30P mutation, and high ionic strength. We expect that a critical concentration of aSyn with a "beta-like" structure must be reached to trigger fibril formation. This critical concentration is therefore controlled by a chemical equilibrium. Novel pharmacological strategies can now be tailored to act upstream, before the aggregation process ensues, by targeting this equilibrium. To this end, Single Molecule Force Spectroscopy can be an effective tool to tailor and test new pharmacological agents.
1512.02602
Sergio Pequito
Cassiano O. Becker, Sergio Pequito, George J. Pappas, Michael B. Miller, Scott T. Grafton, Danielle S. Bassett and Victor M. Preciado
Accurately Predicting Functional Connectivity from Diffusion Imaging
null
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the relationship between the dynamics of neural processes and the anatomical substrate of the brain is a central question in neuroscience. On the one hand, modern neuroimaging technologies, such as diffusion tensor imaging, can be used to construct structural graphs representing the architecture of white matter streamlines linking cortical and subcortical structures. On the other hand, temporal patterns of neural activity can be used to construct functional graphs representing temporal correlations between brain regions. Although some studies provide evidence that whole-brain functional connectivity is shaped by the underlying anatomy, the observed relationship between function and structure is weak, and the rules by which anatomy constrains brain dynamics remain elusive. In this article, we introduce a methodology to predict with high accuracy the functional connectivity of a subject at rest from his or her structural graph. Using our methodology, we are able to systematically unveil the role of structural paths in the formation of functional correlations. Furthermore, in our empirical evaluations, we observe that the eigen-modes of the predicted functional connectivity are aligned with activity patterns associated with different cognitive systems. Our work offers the potential to infer properties of brain dynamics in clinical or developmental populations with low tolerance for functional neuroimaging.
[ { "created": "Tue, 8 Dec 2015 19:53:53 GMT", "version": "v1" }, { "created": "Thu, 10 Dec 2015 00:34:56 GMT", "version": "v2" }, { "created": "Wed, 20 Apr 2016 17:36:08 GMT", "version": "v3" } ]
2016-04-21
[ [ "Becker", "Cassiano O.", "" ], [ "Pequito", "Sergio", "" ], [ "Pappas", "George J.", "" ], [ "Miller", "Michael B.", "" ], [ "Grafton", "Scott T.", "" ], [ "Bassett", "Danielle S.", "" ], [ "Preciado", "Victor M.", "" ] ]
Understanding the relationship between the dynamics of neural processes and the anatomical substrate of the brain is a central question in neuroscience. On the one hand, modern neuroimaging technologies, such as diffusion tensor imaging, can be used to construct structural graphs representing the architecture of white matter streamlines linking cortical and subcortical structures. On the other hand, temporal patterns of neural activity can be used to construct functional graphs representing temporal correlations between brain regions. Although some studies provide evidence that whole-brain functional connectivity is shaped by the underlying anatomy, the observed relationship between function and structure is weak, and the rules by which anatomy constrains brain dynamics remain elusive. In this article, we introduce a methodology to predict with high accuracy the functional connectivity of a subject at rest from his or her structural graph. Using our methodology, we are able to systematically unveil the role of structural paths in the formation of functional correlations. Furthermore, in our empirical evaluations, we observe that the eigen-modes of the predicted functional connectivity are aligned with activity patterns associated with different cognitive systems. Our work offers the potential to infer properties of brain dynamics in clinical or developmental populations with low tolerance for functional neuroimaging.
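One simple way to set up such a structure-to-function mapping (a hedged sketch on synthetic data: the weighted-sum-of-matrix-powers form and the least-squares fit are illustrative choices, not the authors' exact estimator) is to regress the functional connectivity matrix on powers of the structural connectivity matrix, so that longer structural paths can contribute to functional correlations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30                                            # number of brain regions (toy)

# Synthetic symmetric "structural" matrix and a ground-truth "functional" matrix
S = rng.random((n, n)); S = (S + S.T) / 2; np.fill_diagonal(S, 0)
S = S / np.linalg.norm(S, 2)                      # normalise spectral norm so powers stay bounded
true_coeffs = {1: 0.5, 2: 0.3, 3: 0.1}            # FC generated from a few powers of SC
F = sum(c * np.linalg.matrix_power(S, k) for k, c in true_coeffs.items())
F += 0.01 * rng.standard_normal((n, n)); F = (F + F.T) / 2   # measurement noise

# Fit F ~ sum_k a_k S^k by least squares over the upper triangle (off-diagonal entries)
powers = [1, 2, 3, 4]
iu = np.triu_indices(n, 1)
X = np.stack([np.linalg.matrix_power(S, k)[iu] for k in powers], axis=1)
y = F[iu]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

F_hat = sum(a_k * np.linalg.matrix_power(S, k) for a_k, k in zip(a, powers))
corr = np.corrcoef(F_hat[iu], y)[0, 1]
print("recovered coefficients:", np.round(a, 3))
print("correlation between predicted and 'observed' FC:", round(corr, 3))
```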
1204.3075
Lukas Geyrhofer
Lukas Geyrhofer and Oskar Hallatschek
Stochastic delocalization of finite populations
null
J. Stat. Mech. (2013) P01007
10.1088/1742-5468/2013/01/P01007
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterogeneities in environmental conditions often induce corresponding heterogeneities in the distribution of species. In the extreme case of a localized patch of increased growth rates, reproducing populations can become strongly concentrated at the patch despite the entropic tendency for the population to distribute evenly. Several deterministic mathematical models have been used to characterize the conditions under which localized states can form, and how they break down due to convective driving forces. Here, we study the delocalization of a finite population in the presence of number fluctuations. We find that any finite population delocalizes on sufficiently long time scales. Depending on parameters, however, populations may remain localized for a very long time. The typical waiting time to delocalization increases exponentially with both population size and distance to the critical wind speed of the deterministic approximation. We augment these simulation results by a mathematical analysis that treats the reproduction and migration of individuals as branching random walks subject to global constraints. For a particular constraint, different from a fixed population size constraint, this model yields a solvable first moment equation. We find that this solvable model approximates very well the fixed population size model for large populations, but starts to deviate as population sizes become small. The analytical approach allows us to map out a phase diagram of the order parameter as a function of the two driving parameters, inverse population size and wind speed. Our results may be used to extend the analysis of delocalization transitions to different settings, such as the viral quasi-species scenario.
[ { "created": "Fri, 13 Apr 2012 18:52:29 GMT", "version": "v1" }, { "created": "Mon, 17 Dec 2012 13:09:09 GMT", "version": "v2" } ]
2013-01-17
[ [ "Geyrhofer", "Lukas", "" ], [ "Hallatschek", "Oskar", "" ] ]
Heterogeneities in environmental conditions often induce corresponding heterogeneities in the distribution of species. In the extreme case of a localized patch of increased growth rates, reproducing populations can become strongly concentrated at the patch despite the entropic tendency for population to distribute evenly. Several deterministic mathematical models have been used to characterize the conditions under which localized states can form, and how they break down due to convective driving forces. Here, we study the delocalization of a finite population in the presence of number fluctuations. We find that any finite population delocalizes on sufficiently long time scales. Depending on parameters, however, populations may remain localized for a very long time. The typical waiting time to delocalization increases exponentially with both population size and distance to the critical wind speed of the deterministic approximation. We augment these simulation results by a mathematical analysis that treats the reproduction and migration of individuals as branching random walks subject to global constraints. For a particular constraint, different from a fixed population size constraint, this model yields a solvable first moment equation. We find that this solvable model approximates very well the fixed population size model for large populations, but starts to deviate as population sizes are small. The analytical approach allows us to map out a phase diagram of the order parameter as a function of the two driving parameters, inverse population size and wind speed. Our results may be used to extend the analysis of delocalization transitions to different settings, such as the viral quasi-species scenario.
2402.12391
Haoyang Liu
Haoyang Liu, Yijiang Li, Jinglin Jian, Yuxuan Cheng, Jianrong Lu, Shuyi Guo, Jinglei Zhu, Mianchen Zhang, Miantong Zhang, Haohan Wang
Toward a Team of AI-made Scientists for Scientific Discovery from Gene Expression Data
18 pages, 2 figures; added contact
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning has emerged as a powerful tool for scientific discovery, enabling researchers to extract meaningful insights from complex datasets. For instance, it has facilitated the identification of disease-predictive genes from gene expression data, significantly advancing healthcare. However, the traditional process for analyzing such datasets demands substantial human effort and expertise for the data selection, processing, and analysis. To address this challenge, we introduce a novel framework, a Team of AI-made Scientists (TAIS), designed to streamline the scientific discovery pipeline. TAIS comprises simulated roles, including a project manager, data engineer, and domain expert, each represented by a Large Language Model (LLM). These roles collaborate to replicate the tasks typically performed by data scientists, with a specific focus on identifying disease-predictive genes. Furthermore, we have curated a benchmark dataset to assess TAIS's effectiveness in gene identification, demonstrating our system's potential to significantly enhance the efficiency and scope of scientific exploration. Our findings represent a solid step towards automating scientific discovery through large language models.
[ { "created": "Thu, 15 Feb 2024 06:30:12 GMT", "version": "v1" }, { "created": "Wed, 21 Feb 2024 03:42:32 GMT", "version": "v2" } ]
2024-02-22
[ [ "Liu", "Haoyang", "" ], [ "Li", "Yijiang", "" ], [ "Jian", "Jinglin", "" ], [ "Cheng", "Yuxuan", "" ], [ "Lu", "Jianrong", "" ], [ "Guo", "Shuyi", "" ], [ "Zhu", "Jinglei", "" ], [ "Zhang", "Mianchen", "" ], [ "Zhang", "Miantong", "" ], [ "Wang", "Haohan", "" ] ]
Machine learning has emerged as a powerful tool for scientific discovery, enabling researchers to extract meaningful insights from complex datasets. For instance, it has facilitated the identification of disease-predictive genes from gene expression data, significantly advancing healthcare. However, the traditional process for analyzing such datasets demands substantial human effort and expertise for the data selection, processing, and analysis. To address this challenge, we introduce a novel framework, a Team of AI-made Scientists (TAIS), designed to streamline the scientific discovery pipeline. TAIS comprises simulated roles, including a project manager, data engineer, and domain expert, each represented by a Large Language Model (LLM). These roles collaborate to replicate the tasks typically performed by data scientists, with a specific focus on identifying disease-predictive genes. Furthermore, we have curated a benchmark dataset to assess TAIS's effectiveness in gene identification, demonstrating our system's potential to significantly enhance the efficiency and scope of scientific exploration. Our findings represent a solid step towards automating scientific discovery through large language models.
1209.4890
Enzo Tagliazucchi
Enzo Tagliazucchi, Frederic von Wegner, Astrid Morzelewski, Verena Brodbeck, Helmut Laufs
Electrophysiological correlates of non-stationary BOLD functional connectivity fluctuations
39 pages, 10 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spontaneous fluctuations of the BOLD (Blood Oxygen Level-Dependent) signal, measured with fMRI (functional Magnetic Resonance Imaging), display a rich and neurobiologically relevant functional connectivity structure. This structure is usually revealed using time averaging methods, which prevent the detection of functional connectivity changes over time. In this work we studied the electrophysiological correlates of dynamical BOLD functional connectivity fluctuations, by means of long (approx. 50 min) joint electroencephalographic (EEG) and fMRI recordings. We identified widespread positive and negative correlations between EEG spectral power and fMRI BOLD connectivity fluctuations in a network of 90 cortical and subcortical regions. In particular, increased alpha (8-12 Hz) and beta (15-30 Hz) power were related to decreased functional connectivity, whereas gamma (30-60 Hz) power correlated positively with BOLD connectivity between specific brain regions. Furthermore, these patterns were altered for subjects undergoing vigilance changes, with an involvement of the slow delta (0.4 - 4 Hz) band in localized positive correlations. Finally, graph theoretical indices of network structure also exhibited sharp changes over time, with average path length correlating positively with alpha power extracted from central and frontal electrodes. Our results strongly suggest that non-stationary BOLD functional connectivity has a neurophysiological origin. Positive correlations of BOLD connectivity with gamma can be interpreted as increased average binding over relatively long periods of time, possibly due to spontaneous cognition occurring during rest. Negative correlations with alpha suggest functional inhibition of local and long-range connectivity, associated with an idling state of the brain.
[ { "created": "Fri, 21 Sep 2012 19:40:01 GMT", "version": "v1" } ]
2012-09-24
[ [ "Tagliazucchi", "Enzo", "" ], [ "von Wegner", "Frederic", "" ], [ "Morzelewski", "Astrid", "" ], [ "Brodbeck", "Verena", "" ], [ "Laufs", "Helmut", "" ] ]
Spontaneous fluctuations of the BOLD (Blood Oxygen Level-Dependent) signal, measured with fMRI (functional Magnetic Resonance Imaging), display a rich and neurobiologically relevant functional connectivity structure. This structure is usually revealed using time averaging methods, which prevent the detection of functional connectivity changes over time. In this work we studied the electrophysiological correlates of dynamical BOLD functional connectivity fluctuations, by means of long (approx. 50 min) joint electroencephalographic (EEG) and fMRI recordings. We identified widespread positive and negative correlations between EEG spectral power and fMRI BOLD connectivity fluctuations in a network of 90 cortical and subcortical regions. In particular, increased alpha (8-12 Hz) and beta (15-30 Hz) power were related to decreased functional connectivity, whereas gamma (30-60 Hz) power correlated positively with BOLD connectivity between specific brain regions. Furthermore, these patterns were altered for subjects undergoing vigilance changes, with an involvement of the slow delta (0.4 - 4 Hz) band in localized positive correlations. Finally, graph theoretical indices of network structure also exhibited sharp changes over time, with average path length correlating positively with alpha power extracted from central and frontal electrodes. Our results strongly suggest that non-stationary BOLD functional connectivity has a neurophysiological origin. Positive correlations of BOLD connectivity with gamma can be interpreted as increased average binding over relatively long periods of time, possibly due to spontaneous cognition occurring during rest. Negative correlations with alpha suggest functional inhibition of local and long-range connectivity, associated with an idling state of the brain.
2006.00043
David Smith
David J. Smith, Alessandro Prete, Angela E. Taylor, Niki Karavitaki and Wiebke Arlt
Modelling oral adrenal cortisol support
8 pages, 2 figures 1 table
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simplified mathematical model of oral hydrocortisone delivery in adrenal insufficiency is described; the model is based on three components (gastric hydrocortisone, free serum cortisol and bound serum cortisol) and is formulated in terms of linear kinetics, taking into account the dynamics of glucocorticoid-protein binding. Motivated by the need to optimise cortisol replacement in situations of COVID-19 infection, the model is fitted to recently published data on 50 mg dosing and earlier data on 10 mg dosing. The fitted model is used to predict typical responses to standard dosing regimes, which involve a larger dose in the morning and 1 or 2 smaller doses later in the day, and the same regimes with doses doubled. In all cases there is a circadian-like response, with an early-morning nadir. The model is also used to consider an alternative dosing strategy based on four equal and equally-spaced doses of 10, 20 or 30 mg per 24 h, resulting in a more even response resembling a response to sustained inflammatory stress.
[ { "created": "Fri, 29 May 2020 19:22:55 GMT", "version": "v1" } ]
2020-06-02
[ [ "Smith", "David J.", "" ], [ "Prete", "Alessandro", "" ], [ "Taylor", "Angela E.", "" ], [ "Karavitaki", "Niki", "" ], [ "Arlt", "Wiebke", "" ] ]
A simplified mathematical model of oral hydrocortisone delivery in adrenal insufficiency is described; the model is based on three components (gastric hydrocortisone, free serum cortisol and bound serum cortisol) and is formulated in terms of linear kinetics, taking into account the dynamics of glucocorticoid-protein binding. Motivated by the need to optimise cortisol replacement in situations of COVID-19 infection, the model is fitted to recently published data on 50 mg dosing and earlier data on 10 mg dosing. The fitted model is used to predict typical responses to standard dosing regimes, which involve a larger dose in the morning and 1 or 2 smaller doses later in the day, and the same regimes with doses doubled. In all cases there is a circadian-like response, with an early-morning nadir. The model is also used to consider an alternative dosing strategy based on four equal and equally-spaced doses of 10, 20 or 30 mg per 24 h, resulting in a more even response resembling a response to sustained inflammatory stress.
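A minimal sketch of a three-compartment linear model of the type described in the record above (gastric hydrocortisone, free serum cortisol, bound serum cortisol), assuming first-order absorption, elimination, and a linearised protein-binding step; all rate constants, doses, and dose times are hypothetical placeholders, and this is not the authors' fitted model.

```python
# Illustrative sketch (not the authors' code) of a three-compartment linear model of
# oral hydrocortisone support: gastric hydrocortisone (G), free serum cortisol (F)
# and bound serum cortisol (B). All rate constants, doses and the linear binding
# approximation below are hypothetical placeholders chosen for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke = 1.5, 0.6      # 1/h: gastric absorption and free-cortisol elimination (assumed)
kon, koff = 5.0, 2.0   # 1/h: linearised protein binding/unbinding rates (assumed)

def rhs(t, y):
    G, F, B = y
    dG = -ka * G                                  # absorption from the gut
    dF = ka * G - ke * F - kon * F + koff * B     # free cortisol: input, loss, binding
    dB = kon * F - koff * B                       # protein-bound cortisol
    return [dG, dF, dB]

def simulate(dose_times_h, doses_mg, t_end=24.0, n=2000):
    """Integrate piecewise between oral doses, adding each dose to the gastric pool."""
    y = np.array([0.0, 0.0, 0.0])
    ts, ys = [], []
    events = sorted(zip(dose_times_h, doses_mg)) + [(t_end, 0.0)]
    t_prev = 0.0
    for t_dose, dose in events:
        if t_dose > t_prev:
            grid = np.linspace(t_prev, t_dose, max(2, int(n * (t_dose - t_prev) / t_end)))
            sol = solve_ivp(rhs, (t_prev, t_dose), y, t_eval=grid, rtol=1e-8)
            ts.append(sol.t); ys.append(sol.y)
            y = sol.y[:, -1].copy()
        y[0] += dose                              # oral dose enters the gastric compartment
        t_prev = t_dose
    return np.concatenate(ts), np.concatenate(ys, axis=1)

if __name__ == "__main__":
    # A standard-style regime: larger morning dose plus two smaller doses (illustrative).
    t, y = simulate(dose_times_h=[0.0, 6.0, 12.0], doses_mg=[20.0, 10.0, 5.0])
    print("peak free cortisol (arbitrary units):", round(float(y[1].max()), 3))
```

Oral doses are handled as instantaneous additions to the gastric compartment, with the ODE system integrated piecewise between dose times.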
0910.2436
Mario Nicodemi
A. Scialdone and M. Nicodemi
Mechanics and Dynamics of X-Chromosome Pairing at X Inactivation
null
PLoS Comp.Bio. 4, e1000244 (2008)
null
null
q-bio.GN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At the onset of X Chromosome Inactivation, the vital process whereby female mammalian cells equalize X products with respect to males, the X chromosomes are colocalized along their Xic (X-Inactivation Center) regions. The mechanism inducing recognition and pairing of the X's, however, remains elusive. Starting from recent discoveries on the molecular factors and the DNA sequences (the so-called ``pairing sites'') involved, we dissect the mechanical basis of Xic colocalization using a Statistical Physics model. We show that soluble DNA-specific binding molecules, such as those experimentally identified, can indeed be sufficient to induce the spontaneous colocalization of the homologous chromosomes, but only when their concentration, or chemical affinity, rises above a threshold value, as a consequence of a thermodynamic phase transition. We derive the likelihood of pairing and its probability distribution. Chromosome dynamics has two stages: an initial independent Brownian diffusion followed, after a characteristic time scale, by recognition and pairing. Finally, we investigate the effects of DNA deletions/insertions in the region of the pairing sites and compare model predictions to available experimental data.
[ { "created": "Tue, 13 Oct 2009 17:19:17 GMT", "version": "v1" } ]
2009-10-14
[ [ "Scialdone", "A.", "" ], [ "Nicodemi", "M.", "" ] ]
At the onset of X Chromosome Inactivation, the vital process whereby female mammalian cells equalize X products with respect to males, the X chromosomes are colocalized along their Xic (X-Inactivation Center) regions. The mechanism inducing recognition and pairing of the X's, however, remains elusive. Starting from recent discoveries on the molecular factors and the DNA sequences (the so-called ``pairing sites'') involved, we dissect the mechanical basis of Xic colocalization using a Statistical Physics model. We show that soluble DNA-specific binding molecules, such as those experimentally identified, can indeed be sufficient to induce the spontaneous colocalization of the homologous chromosomes, but only when their concentration, or chemical affinity, rises above a threshold value, as a consequence of a thermodynamic phase transition. We derive the likelihood of pairing and its probability distribution. Chromosome dynamics has two stages: an initial independent Brownian diffusion followed, after a characteristic time scale, by recognition and pairing. Finally, we investigate the effects of DNA deletions/insertions in the region of the pairing sites and compare model predictions to available experimental data.
2309.10824
C\'esar Silva
Ahmed Elmwafy, Jos\'e J. Oliveira, C\'esar M. Silva
Generalized non-autonomous Cohen-Grossberg neural network model
30 pages
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present paper, we investigate both the global exponential stability and the existence of a periodic solution of a general differential equation with unbounded distributed delays. The main stability criterion depends on the dominance of the non-delay terms over the delay terms. The criterion for the existence of a periodic solution is obtained by applying the coincidence degree theorem. We use the main results to obtain criteria for the existence and global exponential stability of periodic solutions of a generalized higher-order periodic Cohen-Grossberg neural network model with discrete time-varying delays and infinite distributed delays. Additionally, we provide a comparison with results in the literature and a numerical simulation to illustrate the effectiveness of some of our results.
[ { "created": "Mon, 4 Sep 2023 13:11:21 GMT", "version": "v1" } ]
2023-09-21
[ [ "Elmwafy", "Ahmed", "" ], [ "Oliveira", "José J.", "" ], [ "Silva", "César M.", "" ] ]
In the present paper, we investigate both the global exponential stability and the existence of a periodic solution of a general differential equation with unbounded distributed delays. The main stability criterion depends on the dominance of the non-delay terms over the delay terms. The criterion for the existence of a periodic solution is obtained by applying the coincidence degree theorem. We use the main results to obtain criteria for the existence and global exponential stability of periodic solutions of a generalized higher-order periodic Cohen-Grossberg neural network model with discrete time-varying delays and infinite distributed delays. Additionally, we provide a comparison with results in the literature and a numerical simulation to illustrate the effectiveness of some of our results.
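For orientation, the classical Cohen-Grossberg form with discrete time-varying and infinite distributed delays can be written as below. This is a standard textbook statement given only as context, not the generalized higher-order, non-autonomous model analysed in the record above; the symbols (amplification functions a_i, behaved functions b_i, connection weights c_ij and d_ij, activation functions f_j and g_j, delay kernels K_ij, inputs I_i) follow common conventions rather than the paper's notation.

```latex
\[
\dot{x}_i(t) = -a_i\bigl(x_i(t)\bigr)\Bigl[\, b_i\bigl(x_i(t)\bigr)
  - \sum_{j=1}^{n} c_{ij}(t)\, f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)
  - \sum_{j=1}^{n} d_{ij}(t) \int_{0}^{\infty} K_{ij}(s)\, g_j\bigl(x_j(t-s)\bigr)\,\mathrm{d}s
  + I_i(t) \Bigr], \qquad i = 1,\dots,n.
\]
```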
1701.00077
Pietro Hiram Guzzi
Pietro Hiram Guzzi, Giuseppe Agapito, Marianna Milano, Mario Cannataro
Learning Weighted Association Rules in Human Phenotype Ontology
null
null
null
null
q-bio.QM cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Human Phenotype Ontology (HPO) is a structured repository of concepts (HPO Terms) that are associated with one or more diseases. The process of association is referred to as annotation. The relevance and the specificity of both HPO terms and annotations are evaluated by a measure defined as Information Content (IC). The analysis of annotated data is thus an important challenge for bioinformatics. Different approaches to such analysis exist. Among them, the use of Association Rules (AR) may provide useful knowledge, and it has been used in some applications, e.g. improving the quality of annotations. Nevertheless, classical association rule algorithms take into account neither the source nor the importance of an annotation, which leads to the generation of candidate rules with low IC. This paper presents HPO-Miner (Human Phenotype Ontology-based Weighted Association Rules), a methodology for extracting Weighted Association Rules. HPO-Miner can extract rules that are relevant from a biological point of view. A case study applying HPO-Miner to publicly available HPO annotation datasets demonstrates the effectiveness of our methodology.
[ { "created": "Sat, 31 Dec 2016 09:19:52 GMT", "version": "v1" } ]
2017-01-03
[ [ "Guzzi", "Pietro Hiram", "" ], [ "Agapito", "Giuseppe", "" ], [ "Milano", "Marianna", "" ], [ "Cannataro", "Mario", "" ] ]
The Human Phenotype Ontology (HPO) is a structured repository of concepts (HPO Terms) that are associated with one or more diseases. The process of association is referred to as annotation. The relevance and the specificity of both HPO terms and annotations are evaluated by a measure defined as Information Content (IC). The analysis of annotated data is thus an important challenge for bioinformatics. Different approaches to such analysis exist. Among them, the use of Association Rules (AR) may provide useful knowledge, and it has been used in some applications, e.g. improving the quality of annotations. Nevertheless, classical association rule algorithms take into account neither the source nor the importance of an annotation, which leads to the generation of candidate rules with low IC. This paper presents HPO-Miner (Human Phenotype Ontology-based Weighted Association Rules), a methodology for extracting Weighted Association Rules. HPO-Miner can extract rules that are relevant from a biological point of view. A case study applying HPO-Miner to publicly available HPO annotation datasets demonstrates the effectiveness of our methodology.
2208.12265
Anindita Bhadra
Rohan Sarkar, Sreelekshmi R, Abhijit Nayek, Anirban Bhowmick, Poushali Chakraborty, Rituparna Sonowal, Debsruti Dasgupta, Rounak Banerjee, Aritra Roy, Amartya Baran Mandal and Anindita Bhadra
Eating Smart: Free-ranging dogs follow an optimal foraging strategy while scavenging in groups
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Foraging and acquiring food involve a delicate balance between managing costs, both energetic and social, and individual preferences. Previous research on the solitary foraging of free-ranging dogs showed that they prioritized the nutritionally highest-valued food patch first but did not ignore other, less valuable food either, displaying typical scavenger behaviour. The current experiment was carried out on groups of dogs with the same set-up to see the change in foraging strategies, if any, under the influence of a social cost such as intra-group competition. We found multiple differences between the strategies of dogs foraging alone versus in groups, with competition playing an implicit role in the decision making of dogs when foraging in groups. Dogs were able to continually assess and evaluate the available resources in a patch and adjust their behaviour accordingly. Foraging in groups also provided the benefit of reduced individual vigilance. The various decisions and choices made appeared to be grounded in optimal foraging theory, wherein the dogs harvested the nutritionally richest patch possible with the least risk and cost involved but were willing to compromise if that was not possible. This underscores the cognitive and quick decision-making abilities and the adaptable behaviour of these dogs.
[ { "created": "Thu, 25 Aug 2022 06:34:53 GMT", "version": "v1" } ]
2022-08-29
[ [ "Sarkar", "Rohan", "" ], [ "R", "Sreelekshmi", "" ], [ "Nayek", "Abhijit", "" ], [ "Bhowmick", "Anirban", "" ], [ "Chakraborty", "Poushali", "" ], [ "Sonowal", "Rituparna", "" ], [ "Dasgupta", "Debsruti", "" ], [ "Banerjee", "Rounak", "" ], [ "Roy", "Aritra", "" ], [ "Mandal", "Amartya Baran", "" ], [ "Bhadra", "Anindita", "" ] ]
Foraging and acquiring food involve a delicate balance between managing costs, both energetic and social, and individual preferences. Previous research on the solitary foraging of free-ranging dogs showed that they prioritized the nutritionally highest-valued food patch first but did not ignore other, less valuable food either, displaying typical scavenger behaviour. The current experiment was carried out on groups of dogs with the same set-up to see the change in foraging strategies, if any, under the influence of a social cost such as intra-group competition. We found multiple differences between the strategies of dogs foraging alone versus in groups, with competition playing an implicit role in the decision making of dogs when foraging in groups. Dogs were able to continually assess and evaluate the available resources in a patch and adjust their behaviour accordingly. Foraging in groups also provided the benefit of reduced individual vigilance. The various decisions and choices made appeared to be grounded in optimal foraging theory, wherein the dogs harvested the nutritionally richest patch possible with the least risk and cost involved but were willing to compromise if that was not possible. This underscores the cognitive and quick decision-making abilities and the adaptable behaviour of these dogs.
2207.01370
Mareike Fischer
Mareike Fischer and Tom Niklas Hamann and Kristina Wicke
How far is my network from being edge-based? Proximity measures for edge-basedness of unrooted phylogenetic networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks which are, as opposed to trees, suitable to describe processes like hybridization and horizontal gene transfer, play a substantial role in evolutionary research. However, while non-treelike events need to be taken into account, they are relatively rare, which implies that biologically relevant networks are often assumed to be similar to trees in the sense that they can be obtained by taking a tree and adding some additional edges. This observation led to the concept of so-called tree-based networks, which recently gained substantial interest in the literature. Unfortunately, though, identifying such networks in the unrooted case is an NP-complete problem. Therefore, classes of networks for which tree-basedness can be guaranteed are of the utmost interest. The most prominent such class is formed by so-called edge-based networks, which have a close relationship to generalized series-parallel graphs known from graph theory. They can be identified in linear time and are in some regards biologically more plausible than general tree-based networks. While concerning the latter proximity measures for general networks have already been introduced, such measures are not yet available for edge-basedness. This means that for an arbitrary unrooted network, the "distance" to the nearest edge-based network could so far not be determined. The present manuscript fills this gap by introducing two classes of proximity measures for edge-basedness, one based on the given network itself and one based on its so-called leaf shrink graph (LS graph). Both classes contain four different proximity measures, whose similarities and differences we study subsequently.
[ { "created": "Mon, 4 Jul 2022 12:47:54 GMT", "version": "v1" }, { "created": "Tue, 5 Jul 2022 08:51:26 GMT", "version": "v2" } ]
2022-07-06
[ [ "Fischer", "Mareike", "" ], [ "Hamann", "Tom Niklas", "" ], [ "Wicke", "Kristina", "" ] ]
Phylogenetic networks which are, as opposed to trees, suitable to describe processes like hybridization and horizontal gene transfer, play a substantial role in evolutionary research. However, while non-treelike events need to be taken into account, they are relatively rare, which implies that biologically relevant networks are often assumed to be similar to trees in the sense that they can be obtained by taking a tree and adding some additional edges. This observation led to the concept of so-called tree-based networks, which recently gained substantial interest in the literature. Unfortunately, though, identifying such networks in the unrooted case is an NP-complete problem. Therefore, classes of networks for which tree-basedness can be guaranteed are of the utmost interest. The most prominent such class is formed by so-called edge-based networks, which have a close relationship to generalized series-parallel graphs known from graph theory. They can be identified in linear time and are in some regards biologically more plausible than general tree-based networks. While concerning the latter proximity measures for general networks have already been introduced, such measures are not yet available for edge-basedness. This means that for an arbitrary unrooted network, the "distance" to the nearest edge-based network could so far not be determined. The present manuscript fills this gap by introducing two classes of proximity measures for edge-basedness, one based on the given network itself and one based on its so-called leaf shrink graph (LS graph). Both classes contain four different proximity measures, whose similarities and differences we study subsequently.
2008.06346
Nilmani Mathur
Nilmani Mathur and Gargi Shaw
An empirical model on the dynamics of Covid-19 spread in human population
47 pages, 19 figures and 2 tables
null
null
TIFR/TH/20-21
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a mathematical model to analyze the time evolution of the total infected population with Covid-19 disease in a region during the ongoing pandemic. Using the available data on Covid-19-infected populations in various countries, we formulate a model that can successfully track the time evolution from the early days to the saturation period in a given wave of this infectious disease. It involves a set of effective parameters that can be extracted from the available data. Using those parameters, the future trajectories of the disease spread can also be projected. A set of differential equations is also proposed whose solutions are these time-evolution trajectories. Using such a formalism, we project the future time-evolution trajectories of the infection spread for a number of countries where the Covid-19 infection is still rising rapidly.
[ { "created": "Thu, 13 Aug 2020 11:02:06 GMT", "version": "v1" } ]
2020-08-17
[ [ "Mathur", "Nilmani", "" ], [ "Shaw", "Gargi", "" ] ]
We propose a mathematical model to analyze the time evolution of the total infected population with Covid-19 disease in a region during the ongoing pandemic. Using the available data on Covid-19-infected populations in various countries, we formulate a model that can successfully track the time evolution from the early days to the saturation period in a given wave of this infectious disease. It involves a set of effective parameters that can be extracted from the available data. Using those parameters, the future trajectories of the disease spread can also be projected. A set of differential equations is also proposed whose solutions are these time-evolution trajectories. Using such a formalism, we project the future time-evolution trajectories of the infection spread for a number of countries where the Covid-19 infection is still rising rapidly.
2309.07080
Abdolmahdi Bagheri
Abdolmahdi Bagheri, Mohammad Pasande, Kevin Bello, Babak Nadjar Araabi, Alireza Akhondi-Asl
Discovering Dynamic Effective Connectome of Brain with Bayesian Dynamic DAG Learning
null
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by/4.0/
The complex mechanisms of the brain can be unraveled by extracting the Dynamic Effective Connectome (DEC). Recently, score-based Directed Acyclic Graph (DAG) discovery methods have shown significant improvements in extracting the causal structure and inferring effective connectivity. However, learning the DEC through these methods still faces two main challenges: one stemming from the fundamental limitations of high-dimensional dynamic DAG discovery methods and the other from the low quality of fMRI data. In this paper, we introduce the Bayesian Dynamic DAG learning with M-matrices Acyclicity characterization (BDyMA) method to address the challenges in discovering the DEC. The presented dynamic causal model enables us to discover direct feedback loop edges as well. Leveraging an unconstrained framework in the BDyMA method leads to more accurate results in detecting high-dimensional networks, achieving sparser outcomes and making it particularly suitable for extracting the DEC. Additionally, the score function of the BDyMA method allows the incorporation of prior knowledge into the process of dynamic causal discovery, which further enhances the accuracy of the results. Comprehensive simulations on synthetic data and experiments on Human Connectome Project (HCP) data demonstrate that our method can handle both of these main challenges, yielding more accurate and reliable DECs compared to state-of-the-art and traditional methods. Additionally, we investigate the trustworthiness of DTI data as prior knowledge for DEC discovery and show the improvement in DEC discovery when DTI data are incorporated into the process.
[ { "created": "Thu, 7 Sep 2023 22:54:06 GMT", "version": "v1" }, { "created": "Sun, 29 Oct 2023 11:47:28 GMT", "version": "v2" }, { "created": "Sat, 9 Mar 2024 17:56:38 GMT", "version": "v3" } ]
2024-03-12
[ [ "Bagheri", "Abdolmahdi", "" ], [ "Pasande", "Mohammad", "" ], [ "Bello", "Kevin", "" ], [ "Araabi", "Babak Nadjar", "" ], [ "Akhondi-Asl", "Alireza", "" ] ]
The complex mechanisms of the brain can be unraveled by extracting the Dynamic Effective Connectome (DEC). Recently, score-based Directed Acyclic Graph (DAG) discovery methods have shown significant improvements in extracting the causal structure and inferring effective connectivity. However, learning the DEC through these methods still faces two main challenges: one stemming from the fundamental limitations of high-dimensional dynamic DAG discovery methods and the other from the low quality of fMRI data. In this paper, we introduce the Bayesian Dynamic DAG learning with M-matrices Acyclicity characterization (BDyMA) method to address the challenges in discovering the DEC. The presented dynamic causal model enables us to discover direct feedback loop edges as well. Leveraging an unconstrained framework in the BDyMA method leads to more accurate results in detecting high-dimensional networks, achieving sparser outcomes and making it particularly suitable for extracting the DEC. Additionally, the score function of the BDyMA method allows the incorporation of prior knowledge into the process of dynamic causal discovery, which further enhances the accuracy of the results. Comprehensive simulations on synthetic data and experiments on Human Connectome Project (HCP) data demonstrate that our method can handle both of these main challenges, yielding more accurate and reliable DECs compared to state-of-the-art and traditional methods. Additionally, we investigate the trustworthiness of DTI data as prior knowledge for DEC discovery and show the improvement in DEC discovery when DTI data are incorporated into the process.
2204.06032
Lam Ho
Wensha Zhang and Toby Kenney and Lam Si Tung Ho
Evolutionary shift detection with ensemble variable selection
null
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
1. Abrupt environmental changes can lead to evolutionary shifts in trait evolution. Identifying these shifts is an important step in understanding the evolutionary history of phenotypes. 2. We propose an ensemble variable selection method (R package ELPASO) for the evolutionary shift detection task and compare it with existing methods (R packages l1ou and PhylogeneticEM) under several scenarios. 3. The performance of the methods is highly dependent on the selection criterion. When the signal sizes are small, the methods using the Bayesian information criterion (BIC) perform better, and when the signal sizes are large enough, the methods using the phylogenetic Bayesian information criterion (pBIC) (Khabbazian et al., 2016) perform better. Moreover, performance is heavily impacted by measurement error and tree reconstruction error. 4. Ensemble method + pBIC tends to perform less conservatively than l1ou + pBIC, and Ensemble method + BIC is more conservative than l1ou + BIC. PhylogeneticEM is even more conservative with small signal sizes and falls between l1ou + pBIC and Ensemble method + BIC with large signal sizes. The results can differ between the methods, but none clearly outperforms the others. By applying multiple methods to a single dataset, we can assess the robustness of each detected shift based on the agreement among the methods.
[ { "created": "Tue, 12 Apr 2022 18:36:53 GMT", "version": "v1" } ]
2022-04-14
[ [ "Zhang", "Wensha", "" ], [ "Kenney", "Toby", "" ], [ "Ho", "Lam Si Tung", "" ] ]
1. Abrupt environmental changes can lead to evolutionary shifts in trait evolution. Identifying these shifts is an important step in understanding the evolutionary history of phenotypes. 2. We propose an ensemble variable selection method (R package ELPASO) for the evolutionary shift detection task and compare it with existing methods (R packages l1ou and PhylogeneticEM) under several scenarios. 3. The performance of the methods is highly dependent on the selection criterion. When the signal sizes are small, the methods using the Bayesian information criterion (BIC) perform better, and when the signal sizes are large enough, the methods using the phylogenetic Bayesian information criterion (pBIC) (Khabbazian et al., 2016) perform better. Moreover, performance is heavily impacted by measurement error and tree reconstruction error. 4. Ensemble method + pBIC tends to perform less conservatively than l1ou + pBIC, and Ensemble method + BIC is more conservative than l1ou + BIC. PhylogeneticEM is even more conservative with small signal sizes and falls between l1ou + pBIC and Ensemble method + BIC with large signal sizes. The results can differ between the methods, but none clearly outperforms the others. By applying multiple methods to a single dataset, we can assess the robustness of each detected shift based on the agreement among the methods.
2307.08576
Sijin Cai
Sijin Cai, Nanfeng Zhang, Jiaying Zhu, Yanjie Liu, Yongjin Zhou
A Study on the Performance of Generative Pre-trained Transformer (GPT) in Simulating Depressed Individuals on the Standardized Depressive Symptom Scale
null
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background: Depression is a common mental disorder with societal and economic burden. Current diagnosis relies on self-reports and assessment scales, which have reliability issues. Objective approaches are needed for diagnosing depression. Objective: Evaluate the potential of GPT technology in diagnosing depression. Assess its ability to simulate individuals with depression and investigate the influence of depression scales. Methods: Three depression-related assessment tools (HAMD-17, SDS, GDS-15) were used. Two experiments simulated GPT responses for normal individuals and individuals with depression. We compared GPT's responses with the expected results, assessed its understanding of depressive symptoms, and examined performance differences under different conditions. Results: GPT's performance in depression assessment was evaluated. It aligned with the scoring criteria for both individuals with depression and normal individuals. Some performance differences were observed based on depression severity. GPT performed better on scales with higher sensitivity. Conclusion: GPT accurately simulates individuals with depression and normal individuals during depression-related assessments. Deviations occur when simulating different degrees of depression, limiting its understanding of mild and moderate cases. GPT performs better on scales with higher sensitivity, indicating potential for developing more effective depression scales. GPT has important potential in depression assessment, supporting clinicians and patients.
[ { "created": "Mon, 17 Jul 2023 15:44:13 GMT", "version": "v1" } ]
2023-07-18
[ [ "Cai", "Sijin", "" ], [ "Zhang", "Nanfeng", "" ], [ "Zhu", "Jiaying", "" ], [ "Liu", "Yanjie", "" ], [ "Zhou", "Yongjin", "" ] ]
Background: Depression is a common mental disorder with societal and economic burden. Current diagnosis relies on self-reports and assessment scales, which have reliability issues. Objective approaches are needed for diagnosing depression. Objective: Evaluate the potential of GPT technology in diagnosing depression. Assess its ability to simulate individuals with depression and investigate the influence of depression scales. Methods: Three depression-related assessment tools (HAMD-17, SDS, GDS-15) were used. Two experiments simulated GPT responses for normal individuals and individuals with depression. We compared GPT's responses with the expected results, assessed its understanding of depressive symptoms, and examined performance differences under different conditions. Results: GPT's performance in depression assessment was evaluated. It aligned with the scoring criteria for both individuals with depression and normal individuals. Some performance differences were observed based on depression severity. GPT performed better on scales with higher sensitivity. Conclusion: GPT accurately simulates individuals with depression and normal individuals during depression-related assessments. Deviations occur when simulating different degrees of depression, limiting its understanding of mild and moderate cases. GPT performs better on scales with higher sensitivity, indicating potential for developing more effective depression scales. GPT has important potential in depression assessment, supporting clinicians and patients.
2405.03829
Yujiang Wang
Christopher Thornton, Billy C. Smith, Guillermo M. Besne, Bethany Little, Yujiang Wang
Unsupervised Machine Learning Identifies Latent Ultradian States in Multi-Modal Wearable Sensor Signals
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wearable sensors such as smartwatches have become ubiquitous in recent years, allowing the easy and continual measurement of physiological parameters such as heart rate, physical activity, body temperature, and blood glucose in an everyday setting. This multi-modal data offers the potential to identify latent states occurring across physiological measures, which may represent important bio-behavioural states that could not be observed in any single measure. Here we present an approach, utilising a hidden semi-Markov model, to identify such states in data collected using a smartwatch, electrocardiogram, and blood glucose monitor, over two weeks from a sample of 9 participants. We found 26 latent ultradian states across the sample, with many occurring at particular times of day. Here we describe some of these, as well as their association with subjective mood and time use diaries. These methods provide a novel avenue for developing insights into the physiology of everyday life.
[ { "created": "Mon, 6 May 2024 20:20:06 GMT", "version": "v1" } ]
2024-05-08
[ [ "Thornton", "Christopher", "" ], [ "Smith", "Billy C.", "" ], [ "Besne", "Guillermo M.", "" ], [ "Little", "Bethany", "" ], [ "Wang", "Yujiang", "" ] ]
Wearable sensors such as smartwatches have become ubiquitous in recent years, allowing the easy and continual measurement of physiological parameters such as heart rate, physical activity, body temperature, and blood glucose in an everyday setting. This multi-modal data offers the potential to identify latent states occurring across physiological measures, which may represent important bio-behavioural states that could not be observed in any single measure. Here we present an approach, utilising a hidden semi-Markov model, to identify such states in data collected using a smartwatch, electrocardiogram, and blood glucose monitor, over two weeks from a sample of 9 participants. We found 26 latent ultradian states across the sample, with many occurring at particular times of day. Here we describe some of these, as well as their association with subjective mood and time use diaries. These methods provide a novel avenue for developing insights into the physiology of everyday life.
1910.03975
Dmitry Gromov
Dmitry Gromov, Ethan O. Romero-Severson
Within-host phenotypic evolution and the population-level control of chronic viral infections by treatment and prophylaxis
null
null
10.3390/math8091500
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chronic viral infections can persist in an infected person for decades. From the perspective of the virus, a single infection can span thousands of generations, leading to a highly diverse population of viruses with its own complex evolutionary history. We propose a mathematical framework for understanding how the emergence of new viral strains and phenotypes within infected persons affects the population-level control of those infections by both non-curative treatment and chemo-prophylactic measures. We consider the within-host emergence of new strains that lack phenotypic novelty, as well as the evolution of variability in contagiousness, resistance to therapy, and efficacy of prophylaxis. Our framework balances the need for verisimilitude with our desire to retain a model that can be approached analytically. We show how to compute the population-level basic reproduction number accounting for the within-host evolutionary process where new phenotypes emerge and are lost in infected persons, which we also extend to include both treatment and prophylactic control efforts. This allows us to make clear statements about both the global and relative efficacy of different control efforts accounting for within-host phenotypic evolution. Finally, we give expressions for the endemic equilibrium of these models for certain constrained versions of the within-host evolutionary model providing a potential method for estimating within-host evolutionary parameters from population-level genetic sequence data.
[ { "created": "Wed, 9 Oct 2019 13:26:40 GMT", "version": "v1" } ]
2023-05-30
[ [ "Gromov", "Dmitry", "" ], [ "Romero-Severson", "Ethan O.", "" ] ]
Chronic viral infections can persist in an infected person for decades. From the perspective of the virus, a single infection can span thousands of generations, leading to a highly diverse population of viruses with its own complex evolutionary history. We propose a mathematical framework for understanding how the emergence of new viral strains and phenotypes within infected persons affects the population-level control of those infections by both non-curative treatment and chemo-prophylactic measures. We consider the within-host emergence of new strains that lack phenotypic novelty, as well as the evolution of variability in contagiousness, resistance to therapy, and efficacy of prophylaxis. Our framework balances the need for verisimilitude with our desire to retain a model that can be approached analytically. We show how to compute the population-level basic reproduction number accounting for the within-host evolutionary process where new phenotypes emerge and are lost in infected persons, which we also extend to include both treatment and prophylactic control efforts. This allows us to make clear statements about both the global and relative efficacy of different control efforts accounting for within-host phenotypic evolution. Finally, we give expressions for the endemic equilibrium of these models for certain constrained versions of the within-host evolutionary model providing a potential method for estimating within-host evolutionary parameters from population-level genetic sequence data.
1402.6910
Marios Kyriazis Dr
Marios Kyriazis
Technological integration and hyper-connectivity: Tools for promoting extreme human lifespans
10 pages, 1 Figure
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/3.0/
Artificial, neurobiological, and social networks are three distinct complex adaptive systems (CAS), each containing discrete processing units (nodes, neurons, and humans respectively). Despite the apparent differences, these three networks are bound by common underlying principles which describe the behaviour of the system in terms of the connections of its components, and its emergent properties. The longevity (long-term retention and functionality) of the components of each of these systems is also defined by common principles. Here, I will examine some properties of the longevity and function of the components of artificial and neurobiological systems, and generalise these to the longevity and function of the components of social CAS. In other words, I will show that principles governing the long-term functionality of computer nodes and of neurons, may be extrapolated to the study of the long-term functionality of humans (or more precisely, of the noemes, an abstract combination of existence and digital fame). The study of these phenomena can provide useful insights regarding practical ways that can be used in order to maximize human longevity. The basic law governing these behaviours is the Law of Requisite Usefulness, which states that the length of retention of an agent within a CAS is proportional to the contribution of the agent to the overall adaptability of the system. Key Words: Complex Adaptive Systems, Hyper-connectivity, Human Longevity, Adaptability and Evolution, Noeme
[ { "created": "Thu, 27 Feb 2014 14:05:02 GMT", "version": "v1" } ]
2014-07-22
[ [ "Kyriazis", "Marios", "" ] ]
Artificial, neurobiological, and social networks are three distinct complex adaptive systems (CAS), each containing discrete processing units (nodes, neurons, and humans respectively). Despite the apparent differences, these three networks are bound by common underlying principles which describe the behaviour of the system in terms of the connections of its components, and its emergent properties. The longevity (long-term retention and functionality) of the components of each of these systems is also defined by common principles. Here, I will examine some properties of the longevity and function of the components of artificial and neurobiological systems, and generalise these to the longevity and function of the components of social CAS. In other words, I will show that principles governing the long-term functionality of computer nodes and of neurons, may be extrapolated to the study of the long-term functionality of humans (or more precisely, of the noemes, an abstract combination of existence and digital fame). The study of these phenomena can provide useful insights regarding practical ways that can be used in order to maximize human longevity. The basic law governing these behaviours is the Law of Requisite Usefulness, which states that the length of retention of an agent within a CAS is proportional to the contribution of the agent to the overall adaptability of the system. Key Words: Complex Adaptive Systems, Hyper-connectivity, Human Longevity, Adaptability and Evolution, Noeme
q-bio/0408007
Igor Berezovsky N.
Igor N. Berezovsky, Eugene I. Shakhnovich
Of sequence and structure: Strategies of protein thermostability in evolutionary perspective
41 pages, 4 figures, 3 tables. Submitted
null
null
null
q-bio.BM q-bio.PE
null
In this work we employ various methods of analysis (unfolding simulations and comparative analysis of structures and sequences of the proteomes of thermophilic organisms) to show that organisms can follow two major strategies of thermophilic adaptation: (i) general, non-specific, and structure-based, whereby the proteomes of certain thermophilic organisms show a significant structural bias toward proteins of higher compactness; in this case thermostability is achieved by a greater overall number of stabilizing contacts, none of which may be especially strong; and (ii) specific and sequence-based, whereby sequence variations aimed at strengthening specific types of interactions (e.g. electrostatics) are applied without significantly changing the structures of proteins. The choice of a certain strategy is a direct consequence of the evolutionary history and environmental conditions of a particular (hyper)thermophilic species: ancient hyperthermophilic organisms that evolved directly in a hot environment pursued a mostly structure-based strategy, while later-evolved organisms whose thermophilic adaptation was a consequence of their recolonization of hot environments pursued a specific, sequence-based strategy of thermophilic adaptation.
[ { "created": "Thu, 12 Aug 2004 20:52:10 GMT", "version": "v1" }, { "created": "Fri, 3 Dec 2004 20:56:02 GMT", "version": "v2" } ]
2007-05-23
[ [ "Berezovsky", "Igor N.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
In this work we employ various methods of analysis (unfolding simulations and comparative analysis of structures and sequences of the proteomes of thermophilic organisms) to show that organisms can follow two major strategies of thermophilic adaptation: (i) general, non-specific, and structure-based, whereby the proteomes of certain thermophilic organisms show a significant structural bias toward proteins of higher compactness; in this case thermostability is achieved by a greater overall number of stabilizing contacts, none of which may be especially strong; and (ii) specific and sequence-based, whereby sequence variations aimed at strengthening specific types of interactions (e.g. electrostatics) are applied without significantly changing the structures of proteins. The choice of a certain strategy is a direct consequence of the evolutionary history and environmental conditions of a particular (hyper)thermophilic species: ancient hyperthermophilic organisms that evolved directly in a hot environment pursued a mostly structure-based strategy, while later-evolved organisms whose thermophilic adaptation was a consequence of their recolonization of hot environments pursued a specific, sequence-based strategy of thermophilic adaptation.
2402.01305
Erik Blom
Erik Blom, Stefan Engblom, Gesina Menz
Modeling the hallmarks of avascular tumors
9 pages, 2 figures; submitted to ENUMATH23 Proceedings
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
We present a stochastic computational model of avascular tumors, emphasizing the detailed implementation of the first four so-called hallmarks of cancer: self-sufficiency in growth factors, resistance to growth inhibitors, avoidance of apoptosis, and unlimited growth potential. Our goal is to provide a foundational understanding of the first steps of cancer malignancy while addressing modeling uncertainties, thus bringing us closer to a first-principles grasp of this process. Preliminary numerical simulations illustrate the comprehensiveness of our perspective.
[ { "created": "Fri, 2 Feb 2024 10:51:14 GMT", "version": "v1" } ]
2024-02-05
[ [ "Blom", "Erik", "" ], [ "Engblom", "Stefan", "" ], [ "Menz", "Gesina", "" ] ]
We present a stochastic computational model of avascular tumors, emphasizing the detailed implementation of the first four so-called hallmarks of cancer: self-sufficiency in growth factors, resistance to growth inhibitors, avoidance of apoptosis, and unlimited growth potential. Our goal is to provide a foundational understanding of the first steps of cancer malignancy while addressing modeling uncertainties, thus bringing us closer to a first-principles grasp of this process. Preliminary numerical simulations illustrate the comprehensiveness of our perspective.
q-bio/0703011
Miloje M. Rakocevic
Miloje M. Rakocevic
Genetic Code as a Harmonic System: three Supplements
Three supplements to paper q-bio/0610044 [q-bio.OT], with 38 pages including 16 + 7 + 6 Tables
null
null
null
q-bio.OT q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents three supplements to the source paper, q-bio/0610044 [q-bio.OT], with three new series of harmonic structures of the genetic code, determined by the Gauss arithmetical algorithm and by the Table of Minimal Adding, as in (Rakocevic, 2011a: Table 4; 2011b: Table 4); all structures are given in relation to the Binary-code tree (Rakocevic, 1998). The determination itself is realized through the balancing of atom and nucleon numbers and the nuancing of molecular polarity. The first supplement concerns some additional harmonic structures in relation to a previous paper of ours (Rakocevic, 2004); the second concerns the relation of those structures to the polarity of protein amino acids. In the third supplement we give new ideas about the genetic code by including the notions of the cipher of the genetic code and the key to that cipher.
[ { "created": "Mon, 5 Mar 2007 06:15:21 GMT", "version": "v1" }, { "created": "Fri, 3 Aug 2007 21:40:09 GMT", "version": "v2" }, { "created": "Thu, 15 Feb 2018 08:52:36 GMT", "version": "v3" } ]
2018-02-16
[ [ "Rakocevic", "Miloje M.", "" ] ]
The paper presents three supplements to the source paper, q-bio/0610044 [q-bio.OT], with three new series of harmonic structures of the genetic code, determined by the Gauss arithmetical algorithm and by the Table of Minimal Adding, as in (Rakocevic, 2011a: Table 4; 2011b: Table 4); all structures are given in relation to the Binary-code tree (Rakocevic, 1998). The determination itself is realized through the balancing of atom and nucleon numbers and the nuancing of molecular polarity. The first supplement concerns some additional harmonic structures in relation to a previous paper of ours (Rakocevic, 2004); the second concerns the relation of those structures to the polarity of protein amino acids. In the third supplement we give new ideas about the genetic code by including the notions of the cipher of the genetic code and the key to that cipher.
2105.05792
H\'el\`ene Delano\"e-Ayari
H\'el\`ene Delano\"e-Ayari, Nicolas Bouchonville, Marie Cour\c{c}on, Alice Nicolas
Linear correlation between active and resistive stresses informs on force generation and stress transmission in adherent cells
5 pages, 4 figures
null
10.1103/PhysRevLett.129.098101
null
q-bio.CB
http://creativecommons.org/licenses/by-sa/4.0/
Animal cells are active, contractile objects. While bioassays address the molecular characterization of cell contractility, the mechanical characterization of the active forces in cells remains challenging. Here, by confronting theoretical analysis with experiments, we calculated both the resistive and the active components of the intracellular stresses that build up following cell adhesion. We obtained a linear relationship between the divergence of the resistive stress and the traction forces, which we show is the consequence of the cell adhering to and applying forces on the surface only through very localized adhesion points (whose size is below our best resolution of 400 nm). This entails that there are no measurable forces outside of these active point sources, and also that the resistive and active stresses inside cells are proportional.
[ { "created": "Wed, 12 May 2021 17:04:18 GMT", "version": "v1" }, { "created": "Sat, 12 Mar 2022 15:32:27 GMT", "version": "v2" }, { "created": "Wed, 23 Mar 2022 21:06:02 GMT", "version": "v3" } ]
2022-09-14
[ [ "Delanoë-Ayari", "Hélène", "" ], [ "Bouchonville", "Nicolas", "" ], [ "Courçon", "Marie", "" ], [ "Nicolas", "Alice", "" ] ]
Animal cells are active, contractile objects. While bioassays address the molecular characterization of cell contractility, the mechanical characterization of the active forces in cells remains challenging. Here, by confronting theoretical analysis and experiments, we calculated both the resistive and the active components of the intracellular stresses that build up following cell adhesion. We obtained a linear relationship between the divergence of the resistive stress and the traction forces, which we show is a consequence of the cell adhering to and applying forces on the surface only through very localized adhesion points (whose size is below our best resolution of 400 nm). This entails that there are no measurable forces outside these active point sources, and also that the resistive and active stresses inside cells are proportional.
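As a rough illustration of the comparison described in the abstract above, the sketch below builds a synthetic 2D stress field, computes its divergence with finite differences, and fits a pixel-wise linear relation against a synthetic traction field. All fields, grid sizes, and the 0.4 um pixel size are invented placeholders; this is not the authors' analysis pipeline, only a sketch of how such a divergence-versus-traction correlation could be set up numerically.

import numpy as np

# Synthetic 2D intracellular stress field on a regular grid (illustrative only).
n, dx = 100, 0.4                                  # grid size, pixel size in microns (assumed)
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
sxx = np.exp(-((X - 20)**2 + (Y - 20)**2) / 50)   # smooth made-up stress components
syy = np.exp(-((X - 20)**2 + (Y - 25)**2) / 60)
sxy = 0.3 * sxx * syy

# Divergence of the stress tensor: (d sxx/dx + d sxy/dy, d sxy/dx + d syy/dy).
div_x = np.gradient(sxx, dx, axis=0) + np.gradient(sxy, dx, axis=1)
div_y = np.gradient(sxy, dx, axis=0) + np.gradient(syy, dx, axis=1)

# Synthetic traction field obeying a linear relation with the divergence (plus noise),
# mimicking the kind of correlation analysed in the paper.
rng = np.random.default_rng(0)
t_x = -div_x + 0.01 * rng.normal(size=div_x.shape)
t_y = -div_y + 0.01 * rng.normal(size=div_y.shape)

# Pixel-wise linear fit between the two vector fields.
a = np.concatenate([div_x.ravel(), div_y.ravel()])
b = np.concatenate([t_x.ravel(), t_y.ravel()])
slope = (a @ b) / (a @ a)
print("fitted slope (expected ~ -1 for this synthetic example):", round(slope, 3))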
1707.04668
Brandon Schlomann
Brandon H. Schlomann
Stationary moments, diffusion limits, and extinction times for logistic growth with random catastrophes
16 pages, 3 figures + 2 supplemental figures. Text partially rewritten after feedback on v1. 3 new plots total included
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central problem in population ecology is understanding the consequences of stochastic fluctuations. Analytically tractable models with Gaussian driving noise have led to important, general insights, but they fail to capture rare, catastrophic events, which are increasingly observed at scales ranging from global fisheries to intestinal microbiota. Due to mathematical challenges, growth processes with random catastrophes are less well characterized and it remains unclear how their consequences differ from those of Gaussian processes. In the face of a changing climate and predicted increases in ecological catastrophes, as well as increased interest in harnessing microbes for therapeutics, these processes have never been more relevant. To better understand them, I revisit here a differential equation model of logistic growth coupled to density-independent catastrophes that arrive as a Poisson process, and derive new analytic results that reveal its statistical structure. First, I derive exact expressions for the model's stationary moments, revealing a single effective catastrophe parameter that largely controls low order statistics. Then, I use weak convergence theorems to construct its Gaussian analog in a limit of frequent, small catastrophes, keeping the stationary population mean constant for normalization. Numerically computing statistics along this limit shows how they transform as the dynamics shifts from catastrophes to diffusions, enabling quantitative comparisons. For example, the mean time to extinction increases monotonically by orders of magnitude, demonstrating significantly higher extinction risk under catastrophes than under diffusions. Together, these results provide insight into a wide range of stochastic dynamical systems important for ecology and conservation.
[ { "created": "Sat, 15 Jul 2017 00:36:24 GMT", "version": "v1" }, { "created": "Thu, 7 Dec 2017 03:50:03 GMT", "version": "v2" } ]
2017-12-08
[ [ "Schlomann", "Brandon H.", "" ] ]
A central problem in population ecology is understanding the consequences of stochastic fluctuations. Analytically tractable models with Gaussian driving noise have led to important, general insights, but they fail to capture rare, catastrophic events, which are increasingly observed at scales ranging from global fisheries to intestinal microbiota. Due to mathematical challenges, growth processes with random catastrophes are less well characterized and it remains unclear how their consequences differ from those of Gaussian processes. In the face of a changing climate and predicted increases in ecological catastrophes, as well as increased interest in harnessing microbes for therapeutics, these processes have never been more relevant. To better understand them, I revisit here a differential equation model of logistic growth coupled to density-independent catastrophes that arrive as a Poisson process, and derive new analytic results that reveal its statistical structure. First, I derive exact expressions for the model's stationary moments, revealing a single effective catastrophe parameter that largely controls low order statistics. Then, I use weak convergence theorems to construct its Gaussian analog in a limit of frequent, small catastrophes, keeping the stationary population mean constant for normalization. Numerically computing statistics along this limit shows how they transform as the dynamics shifts from catastrophes to diffusions, enabling quantitative comparisons. For example, the mean time to extinction increases monotonically by orders of magnitude, demonstrating significantly higher extinction risk under catastrophes than under diffusions. Together, these results provide insight into a wide range of stochastic dynamical systems important for ecology and conservation.
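A minimal simulation sketch of the model described in the abstract above: deterministic logistic growth punctuated by density-independent catastrophes arriving as a Poisson process. The growth rate, carrying capacity, catastrophe rate, and crash factor are illustrative assumptions, not values from the paper.

import numpy as np

def simulate_logistic_catastrophes(r=1.0, K=1000.0, lam=0.05, f=0.1,
                                   n0=10.0, dt=0.01, t_max=200.0, seed=0):
    """Euler integration of dn/dt = r*n*(1 - n/K); catastrophes arrive at
    Poisson rate lam and each multiplies n by the crash factor f.
    All parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    n = n0
    trajectory = np.empty(steps)
    for i in range(steps):
        n += r * n * (1.0 - n / K) * dt          # deterministic logistic growth
        if rng.random() < lam * dt:              # a catastrophe occurs in this time step
            n *= f                               # density-independent crash
        trajectory[i] = n
    return trajectory

traj = simulate_logistic_catastrophes()
print("mean population over the second half of the run:", traj[len(traj) // 2:].mean())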
1502.06638
Aldo Ledesma
Aldo Ledesma Duran, I. Santamaria-Holek
Multiscale modeling of exocytosis in the fertilization process
16 pages, 9 figures, 1 table
J Phys Chem Biophys 4: 161 (2014)
10.4172/2161-0398.1000161
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss the implementation of a multiscale biophysico-chemical model able to cope with the main mechanisms underlying cumulative exocytosis in cells. The model is based on a diffusion equation in the presence of external forces that links calcium signaling and the biochemistry associated to the activity of cytoskeletal-based protein motors. This multiscale model offers an excellent quantitative spatio-temporal description of the cumulative exocytosis measured by means of fluorescence experiments. We also review pre-existing models reported in the literature on calcium waves, protein motor activation and dynamics, and intracellular directed transport of vesicles. As an example of the proposed model, we analyze the formation of the shield against polyspermy in the early events of fertilization in sea urchin eggs.
[ { "created": "Mon, 23 Feb 2015 21:49:28 GMT", "version": "v1" }, { "created": "Mon, 4 Jan 2016 23:01:11 GMT", "version": "v2" } ]
2016-01-06
[ [ "Duran", "Aldo Ledesma", "" ], [ "Santamaria-Holek", "I.", "" ] ]
We discuss the implementation of a multiscale biophysico-chemical model able to cope with the main mechanisms underlying cumulative exocytosis in cells. The model is based on a diffusion equation in the presence of external forces that links calcium signaling and the biochemistry associated to the activity of cytoskeletal-based protein motors. This multiscale model offers an excellent quantitative spatio-temporal description of the cumulative exocytosis measured by means of fluorescence experiments. We also review pre-existing models reported in the literature on calcium waves, protein motor activation and dynamics, and intracellular directed transport of vesicles. As an example of the proposed model, we analyze the formation of the shield against polyspermy in the early events of fertilization in sea urchin eggs.
1901.07465
Denis Boyer
Andrea Falc\'on-Cort\'es, Denis Boyer, Gabriel Ramos-Fern\'andez
Collective learning from individual experiences and information transfer during group foraging
10 pages, 4 Figures, accepted in J.R.S. Interface
null
null
null
q-bio.PE cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living in groups brings benefits to many animals, such as a protection against predators and an improved capacity for sensing and making decisions while searching for resources in uncertain environments. A body of studies has shown how collective behaviors within animal groups on the move can be useful for pooling information about the current state of the environment. The effects of interactions on collective motion have been mostly studied in models of agents with no memory. Thus, whether coordinated behaviors can emerge from individuals with memory and different foraging experiences is still poorly understood. By means of an agent based model, we quantify how individual memory and information fluxes can contribute to improving the foraging success of a group in complex environments. In this context, we define collective learning as a coordinated change of behavior within a group resulting from individual experiences and information transfer. We show that an initially scattered population of foragers visiting dispersed resources can gradually achieve cohesion and become selectively localized in space around the most salient resource sites. Coordination is lost when memory or information transfer among individuals is suppressed. The present modelling framework provides predictions for empirical studies of collective learning and could also find applications in swarm robotics and motivate new search algorithms based on reinforcement.
[ { "created": "Tue, 22 Jan 2019 17:00:58 GMT", "version": "v1" } ]
2019-01-23
[ [ "Falcón-Cortés", "Andrea", "" ], [ "Boyer", "Denis", "" ], [ "Ramos-Fernández", "Gabriel", "" ] ]
Living in groups brings benefits to many animals, such as a protection against predators and an improved capacity for sensing and making decisions while searching for resources in uncertain environments. A body of studies has shown how collective behaviors within animal groups on the move can be useful for pooling information about the current state of the environment. The effects of interactions on collective motion have been mostly studied in models of agents with no memory. Thus, whether coordinated behaviors can emerge from individuals with memory and different foraging experiences is still poorly understood. By means of an agent based model, we quantify how individual memory and information fluxes can contribute to improving the foraging success of a group in complex environments. In this context, we define collective learning as a coordinated change of behavior within a group resulting from individual experiences and information transfer. We show that an initially scattered population of foragers visiting dispersed resources can gradually achieve cohesion and become selectively localized in space around the most salient resource sites. Coordination is lost when memory or information transfer among individuals is suppressed. The present modelling framework provides predictions for empirical studies of collective learning and could also find applications in swarm robotics and motivate new search algorithms based on reinforcement.
1811.03481
Jinbo Xu
Jinbo Xu
Distance-based Protein Folding Powered by Deep Learning
null
null
10.1073/pnas.1821309116
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contact-assisted protein folding has made very good progress, but two challenges remain. One is accurate contact prediction for proteins lacking many sequence homologs; the other is that time-consuming folding simulation is often needed to predict good 3D models from predicted contacts. We show that the protein distance matrix can be predicted well by deep learning and then used directly to construct 3D models without any folding simulation. Using distance geometry to construct 3D models from our predicted distance matrices, we successfully folded 21 of the 37 CASP12 hard targets with a median family size of 58 effective sequence homologs within 4 hours on a Linux computer with 20 CPUs. In contrast, contacts predicted by direct coupling analysis (DCA) cannot fold any of them in the absence of folding simulation, and the best CASP12 group folded 11 of them by integrating predicted contacts into complex, fragment-based folding simulation. Rigorous experimental validation on 15 CASP13 targets shows that, among the 3 hardest targets of new folds, our distance-based folding servers successfully folded 2 large ones with <150 sequence homologs while the other servers failed on all three, and that our ab initio folding server also predicted the best, high-quality 3D model for a large homology modeling target. Further experimental validation in CAMEO shows that our ab initio folding server predicted the correct fold for a membrane protein of a new fold with 200 residues and 229 sequence homologs while all the other servers failed. These results imply that deep learning offers an efficient and accurate solution for ab initio folding on a personal computer.
[ { "created": "Thu, 8 Nov 2018 15:08:19 GMT", "version": "v1" }, { "created": "Mon, 12 Nov 2018 01:56:36 GMT", "version": "v2" } ]
2022-10-12
[ [ "Xu", "Jinbo", "" ] ]
Contact-assisted protein folding has made very good progress, but two challenges remain. One is accurate contact prediction for proteins lacking many sequence homologs; the other is that time-consuming folding simulation is often needed to predict good 3D models from predicted contacts. We show that the protein distance matrix can be predicted well by deep learning and then used directly to construct 3D models without any folding simulation. Using distance geometry to construct 3D models from our predicted distance matrices, we successfully folded 21 of the 37 CASP12 hard targets with a median family size of 58 effective sequence homologs within 4 hours on a Linux computer with 20 CPUs. In contrast, contacts predicted by direct coupling analysis (DCA) cannot fold any of them in the absence of folding simulation, and the best CASP12 group folded 11 of them by integrating predicted contacts into complex, fragment-based folding simulation. Rigorous experimental validation on 15 CASP13 targets shows that, among the 3 hardest targets of new folds, our distance-based folding servers successfully folded 2 large ones with <150 sequence homologs while the other servers failed on all three, and that our ab initio folding server also predicted the best, high-quality 3D model for a large homology modeling target. Further experimental validation in CAMEO shows that our ab initio folding server predicted the correct fold for a membrane protein of a new fold with 200 residues and 229 sequence homologs while all the other servers failed. These results imply that deep learning offers an efficient and accurate solution for ab initio folding on a personal computer.
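The "distance matrix to 3D model" step described above can be illustrated with classical multidimensional scaling, one standard distance-geometry embedding. This is a generic sketch, not the authors' server code, and the "predicted" matrix below is simply the exact distance matrix of random points, used as a self-check.

import numpy as np

def classical_mds(D, dim=3):
    """Embed points in `dim` dimensions from a full pairwise distance matrix D
    using classical multidimensional scaling (a basic distance-geometry step)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    idx = np.argsort(eigval)[::-1][:dim]         # top `dim` eigenpairs
    L = np.sqrt(np.clip(eigval[idx], 0, None))
    return eigvec[:, idx] * L                    # n x dim coordinates

# Illustrative check: recover coordinates (up to rotation) from exact distances.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))                     # stand-in for true C-alpha coordinates
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
X_rec = classical_mds(D)
D_rec = np.linalg.norm(X_rec[:, None, :] - X_rec[None, :, :], axis=-1)
print("max pairwise-distance error:", np.abs(D - D_rec).max())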
2102.11914
Matthew Whelan
Matthew T. Whelan, Tony J. Prescott, Eleni Vasilaki
A Robotic Model of Hippocampal Reverse Replay for Reinforcement Learning
39 pages, 6 figures, 2 tables, journal, submitted to Bioinspiration and Biomimetics
null
null
null
q-bio.NC cs.RO
http://creativecommons.org/licenses/by/4.0/
Hippocampal reverse replay is thought to contribute to learning, and particularly reinforcement learning, in animals. We present a computational model of learning in the hippocampus that builds on a previous model of the hippocampal-striatal network viewed as implementing a three-factor reinforcement learning rule. To augment this model with hippocampal reverse replay, a novel policy gradient learning rule is derived that associates place cell activity with responses in cells representing actions. This new model is evaluated using a simulated robot spatial navigation task inspired by the Morris water maze. Results show that reverse replay can accelerate learning from reinforcement, whilst improving stability and robustness over multiple trials. Consistent with the neurobiological data, our study suggests that reverse replay can make a significant positive contribution to reinforcement learning, although learning that is less efficient and less stable is possible in its absence. We conclude that reverse replay may enhance reinforcement learning in the mammalian hippocampal-striatal system rather than provide its core mechanism.
[ { "created": "Tue, 23 Feb 2021 19:47:18 GMT", "version": "v1" } ]
2021-02-25
[ [ "Whelan", "Matthew T.", "" ], [ "Prescott", "Tony J.", "" ], [ "Vasilaki", "Eleni", "" ] ]
Hippocampal reverse replay is thought to contribute to learning, and particularly reinforcement learning, in animals. We present a computational model of learning in the hippocampus that builds on a previous model of the hippocampal-striatal network viewed as implementing a three-factor reinforcement learning rule. To augment this model with hippocampal reverse replay, a novel policy gradient learning rule is derived that associates place cell activity with responses in cells representing actions. This new model is evaluated using a simulated robot spatial navigation task inspired by the Morris water maze. Results show that reverse replay can accelerate learning from reinforcement, whilst improving stability and robustness over multiple trials. Consistent with the neurobiological data, our study suggests that reverse replay can make a significant positive contribution to reinforcement learning, although learning that is less efficient and less stable is possible in its absence. We conclude that reverse replay may enhance reinforcement learning in the mammalian hippocampal-striatal system rather than provide its core mechanism.
2009.12277
Holger Eble
Holger Eble, Michael Joswig, Lisa Lamberti and Will Ludington
Master regulators of evolution and the microbiome in higher dimensions
77 pages, various figures
null
null
null
q-bio.QM math.MG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A longstanding goal of biology is to identify the key genes and species that critically impact evolution, ecology, and health. Network analysis has revealed keystone species that regulate ecosystems and master regulators that regulate cellular genetic networks. Yet these studies have focused on pairwise biological interactions, which can be affected by the context of the genetic background and of the other species present, generating higher-order interactions. The important regulators of higher-order interactions remain unstudied. To address this, we applied a new high-dimensional geometry approach that quantifies epistasis in a fitness landscape to ask how individual genes and species influence the interactions in the rest of the biological network. We then generated and also reanalyzed 5-dimensional datasets (two genetic, two microbiome). We identified key genes (e.g. the rbs locus and pykF) and species (e.g. Lactobacilli) that control the interactions of many other genes and species. These higher-order master regulators can induce or suppress evolutionary and ecological diversification by controlling the topography of the fitness landscape. Thus, we provide mathematical intuition and justification for the exploration of biological networks in higher dimensions.
[ { "created": "Fri, 25 Sep 2020 14:53:05 GMT", "version": "v1" }, { "created": "Mon, 12 Sep 2022 19:45:37 GMT", "version": "v2" } ]
2022-09-14
[ [ "Eble", "Holger", "" ], [ "Joswig", "Michael", "" ], [ "Lamberti", "Lisa", "" ], [ "Ludington", "Will", "" ] ]
A longstanding goal of biology is to identify the key genes and species that critically impact evolution, ecology, and health. Network analysis has revealed keystone species that regulate ecosystems and master regulators that regulate cellular genetic networks. Yet these studies have focused on pairwise biological interactions, which can be affected by the context of the genetic background and of the other species present, generating higher-order interactions. The important regulators of higher-order interactions remain unstudied. To address this, we applied a new high-dimensional geometry approach that quantifies epistasis in a fitness landscape to ask how individual genes and species influence the interactions in the rest of the biological network. We then generated and also reanalyzed 5-dimensional datasets (two genetic, two microbiome). We identified key genes (e.g. the rbs locus and pykF) and species (e.g. Lactobacilli) that control the interactions of many other genes and species. These higher-order master regulators can induce or suppress evolutionary and ecological diversification by controlling the topography of the fitness landscape. Thus, we provide mathematical intuition and justification for the exploration of biological networks in higher dimensions.
2110.12904
Pranav Sankhe Mr.
Pranav Sankhe and Ritik Madan
Cortical representations of Auditory Perception using Graph Independent Component on EEG
null
null
10.17605/OSF.IO/DTC3G
null
q-bio.NC eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies indicate that the neurons involved in a cognitive task are not confined to a single location but span multiple human brain regions. We obtain network components and their locations for the task of listening to music. The recorded EEG data are modeled as a graph, and it is assumed that the overall activity is a contribution of several independent subnetworks. To identify these intrinsic cognitive subnetworks corresponding to music perception, we propose to decompose the whole-brain graph network into multiple subnetworks. We perform this decomposition on a group of brain networks by performing Graph-Independent Component Analysis (Graph-ICA). Graph-ICA is a variant of ICA that decomposes the measured graph into independent source graphs. Having obtained independent subnetworks, we calculate the electrode positions by computing the local maxima of these subnetwork matrices. We observe that the computed electrode locations correspond to the temporal lobes and Broca's area, which are indeed involved in auditory processing and perception. The computed electrode locations also span the brain's frontal lobe, which is involved in attention and in generating a stimulus-evoked response. The weight of the subnetwork that corresponds to the aforementioned brain regions increases with the tempo of the music recording. These results suggest that whole-brain networks can be decomposed into independent subnetworks and used to analyze cognitive responses to a music stimulus.
[ { "created": "Thu, 21 Oct 2021 11:04:53 GMT", "version": "v1" } ]
2021-10-26
[ [ "Sankhe", "Pranav", "" ], [ "Madan", "Ritik", "" ] ]
Recent studies indicate that the neurons involved in a cognitive task are not confined to a single location but span multiple human brain regions. We obtain network components and their locations for the task of listening to music. The recorded EEG data are modeled as a graph, and it is assumed that the overall activity is a contribution of several independent subnetworks. To identify these intrinsic cognitive subnetworks corresponding to music perception, we propose to decompose the whole-brain graph network into multiple subnetworks. We perform this decomposition on a group of brain networks by performing Graph-Independent Component Analysis (Graph-ICA). Graph-ICA is a variant of ICA that decomposes the measured graph into independent source graphs. Having obtained independent subnetworks, we calculate the electrode positions by computing the local maxima of these subnetwork matrices. We observe that the computed electrode locations correspond to the temporal lobes and Broca's area, which are indeed involved in auditory processing and perception. The computed electrode locations also span the brain's frontal lobe, which is involved in attention and in generating a stimulus-evoked response. The weight of the subnetwork that corresponds to the aforementioned brain regions increases with the tempo of the music recording. These results suggest that whole-brain networks can be decomposed into independent subnetworks and used to analyze cognitive responses to a music stimulus.
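One simple reading of the decomposition described above, sketched under the assumption that Graph-ICA can be approximated by running ordinary ICA across a stack of vectorised connectivity graphs (the paper's exact formulation may differ). The random data, the 32-channel layout, and the number of sources are placeholders; scikit-learn's FastICA is used for the unmixing.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_ch, n_graphs, n_sources = 32, 40, 5            # channels, observed graphs, sources (placeholders)

# Placeholder "measured" graphs: random mixtures of random symmetric source graphs.
iu = np.triu_indices(n_ch, k=1)
sources = rng.exponential(1.0, size=(n_sources, iu[0].size))
mixing = rng.normal(size=(n_graphs, n_sources))
observed = mixing @ sources + 0.01 * rng.normal(size=(n_graphs, iu[0].size))

# ICA across the stack of vectorised upper-triangular adjacency matrices.
ica = FastICA(n_components=n_sources, random_state=0, max_iter=1000)
est_sources = ica.fit_transform(observed.T).T    # shape: (n_sources, n_edges)

# Node strength of each estimated source graph; its largest entries flag the
# channels (electrodes) most involved in that subnetwork.
for s in est_sources:
    A = np.zeros((n_ch, n_ch))
    A[iu] = s
    A = A + A.T
    strength = np.abs(A).sum(axis=0)
    print("top channels for this subnetwork:", np.argsort(strength)[-3:][::-1])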
1904.10664
Conrad Burden
Conrad J. Burden and Albert C. Soewongsono
Coalescence in the diffusion limit of a Bienayme-Galton-Watson branching process
28 pages, 6 figures; rewrite to make the paper more self-contained
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of estimating the elapsed time since the most recent common ancestor of a finite random sample drawn from a population which has evolved through a Bienayme-Galton-Watson branching process. More specifically, we are interested in the diffusion limit appropriate to a supercritical process in the near-critical limit evolving over a large number of time steps. Our approach differs from earlier analyses in that we assume the only known information is the mean and variance of the number of offspring per parent, the observed total population size at the time of sampling, and the size of the sample. We obtain a formula for the probability that a finite random sample of the population is descended from a single ancestor in the initial population, and derive a confidence interval for the initial population size in terms of the final population size and the time since initiating the process. We also determine a joint likelihood surface from which confidence regions can be determined for simultaneously estimating two parameters, (1) the population size at the time of the most recent common ancestor, and (2) the time elapsed since the existence of the most recent common ancestor.
[ { "created": "Wed, 24 Apr 2019 07:22:53 GMT", "version": "v1" }, { "created": "Mon, 2 Sep 2019 00:34:17 GMT", "version": "v2" } ]
2019-09-04
[ [ "Burden", "Conrad J.", "" ], [ "Soewongsono", "Albert C.", "" ] ]
We consider the problem of estimating the elapsed time since the most recent common ancestor of a finite random sample drawn from a population which has evolved through a Bienayme-Galton-Watson branching process. More specifically, we are interested in the diffusion limit appropriate to a supercritical process in the near-critical limit evolving over a large number of time steps. Our approach differs from earlier analyses in that we assume the only known information is the mean and variance of the number of offspring per parent, the observed total population size at the time of sampling, and the size of the sample. We obtain a formula for the probability that a finite random sample of the population is descended from a single ancestor in the initial population, and derive a confidence interval for the initial population size in terms of the final population size and the time since initiating the process. We also determine a joint likelihood surface from which confidence regions can be determined for simultaneously estimating two parameters, (1) the population size at the time of the most recent common ancestor, and (2) the time elapsed since the existence of the most recent common ancestor.
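A forward Monte Carlo sketch of the quantity discussed in the abstract above, namely the probability that a finite random sample from the final population descends from a single founding ancestor. Poisson offspring, the parameter values, and the number of generations are illustrative assumptions; the paper fixes only the offspring mean and variance and works analytically in the diffusion limit.

import numpy as np

def prob_single_ancestor(n0=50, mean_offspring=1.02, generations=200,
                         sample_size=10, trials=300, seed=0):
    """Monte Carlo estimate of the probability that a random sample of
    `sample_size` individuals from the final generation all descend from a
    single founder, for a branching process with Poisson offspring
    (an assumption made only for illustration)."""
    rng = np.random.default_rng(seed)
    hits, valid = 0, 0
    for _ in range(trials):
        lineages = np.ones(n0, dtype=np.int64)            # descendants per founder
        for _ in range(generations):
            lineages = rng.poisson(mean_offspring * lineages)
        total = lineages.sum()
        if total < sample_size:
            continue                                      # population (nearly) extinct
        valid += 1
        founders = np.repeat(np.arange(n0), lineages)     # final individuals labelled by founder
        sample = rng.choice(founders, size=sample_size, replace=False)
        hits += len(set(sample)) == 1
    return hits / valid if valid else float("nan")

print("P(sample shares a single founder) ~", prob_single_ancestor())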
2408.06763
Ari Rappoport
Ari Rappoport
A Dynorphin Theory of Depression and Bipolar Disorder
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Major depressive disorder (MDD) is a debilitating health condition affecting a substantial part of the world's population. At present, there is no biological theory of MDD, and treatment is partial at best. Here I present a theory of MDD that explains its etiology, symptoms, pathophysiology, and treatment. MDD involves stressful life events that the person does not manage to resolve. In this situation animals normally execute a 'disengage' survival response. In MDD, this response is chronically executed, leading to depressed mood and the somatic MDD symptoms. To explain the biological mechanisms involved, I present a novel theory of opioids, where each opioid mediates one of the basic survival responses. The opioid mediating 'disengage' is dynorphin. The paper presents strong evidence for chronic dynorphin signaling in MDD and for its causal role in the disorder. The theory also explains bipolar disorder, and the mechanisms behind the treatment of both disorders.
[ { "created": "Tue, 13 Aug 2024 09:39:11 GMT", "version": "v1" } ]
2024-08-14
[ [ "Rappoport", "Ari", "" ] ]
Major depressive disorder (MDD) is a debilitating health condition affecting a substantial part of the world's population. At present, there is no biological theory of MDD, and treatment is partial at best. Here I present a theory of MDD that explains its etiology, symptoms, pathophysiology, and treatment. MDD involves stressful life events that the person does not manage to resolve. In this situation animals normally execute a 'disengage' survival response. In MDD, this response is chronically executed, leading to depressed mood and the somatic MDD symptoms. To explain the biological mechanisms involved, I present a novel theory of opioids, where each opioid mediates one of the basic survival responses. The opioid mediating 'disengage' is dynorphin. The paper presents strong evidence for chronic dynorphin signaling in MDD and for its causal role in the disorder. The theory also explains bipolar disorder, and the mechanisms behind the treatment of both disorders.
0708.2147
Dalius Balciunas
Dalius Balciunas
The logistic equation and a critique of the theory of natural selection
31 pages, 5 figures, appendix
null
null
null
q-bio.PE
null
Species coexistence is one of the central themes in modern ecology. Coexistence is a prerequisite of biological diversity. However, the question arises how biodiversity can be reconciled with the statement of competition theory, which asserts that competing species cannot coexist. To solve this problem natural selection theory is rejected because it contradicts kinetic models of interacting populations. Biological evolution is presented as a process equivalent to a chemical reaction. The main point is that interactions occur between self-replicating units. Under these assumptions biodiversity is possible if and only if species are identical with respect to the patterns of energy flow in which individuals are involved.
[ { "created": "Thu, 16 Aug 2007 07:15:04 GMT", "version": "v1" } ]
2007-08-17
[ [ "Balciunas", "Dalius", "" ] ]
Species coexistence is one of the central themes in modern ecology. Coexistence is a prerequisite of biological diversity. However, the question arises how biodiversity can be reconciled with the statement of competition theory, which asserts that competing species cannot coexist. To solve this problem natural selection theory is rejected because it contradicts kinetic models of interacting populations. Biological evolution is presented as a process equivalent to a chemical reaction. The main point is that interactions occur between self-replicating units. Under these assumptions biodiversity is possible if and only if species are identical with respect to the patterns of energy flow in which individuals are involved.
1005.2108
Stephen Willson
Stephen J. Willson
CSD Homomorphisms Between Phylogenetic Networks
19 pages, 3 figures
IEEE/ACM Transactions on Computational Biology and Bioinformatics (2012) 9: 1128-1138
10.1109/TCBB.2012.52
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since Darwin, species trees have been used as a simplified description of the relationships which summarize the complicated network $N$ of reality. Recent evidence of hybridization and lateral gene transfer, however, suggests that there are situations where trees are inadequate. Consequently, it is important to determine properties that characterize networks closely related to $N$ and possibly more complicated than trees but lacking the full complexity of $N$. A connected surjective digraph map (CSD) is a map $f$ from one network $N$ to another network $M$ such that every arc is either collapsed to a single vertex or is taken to an arc, such that $f$ is surjective, and such that the inverse image of a vertex is always connected. CSD maps are shown to behave well under composition. It is proved that if there is a CSD map from $N$ to $M$, then there is a way to lift an undirected version of $M$ into $N$, often with added resolution. A CSD map from $N$ to $M$ puts strong constraints on $N$. In general, it may be useful to study classes of networks such that, for any $N$, there exists a CSD map from $N$ to some standard member of that class.
[ { "created": "Wed, 12 May 2010 14:09:43 GMT", "version": "v1" }, { "created": "Sat, 6 Aug 2016 20:35:26 GMT", "version": "v2" } ]
2016-11-17
[ [ "Willson", "Stephen J.", "" ] ]
Since Darwin, species trees have been used as a simplified description of the relationships which summarize the complicated network $N$ of reality. Recent evidence of hybridization and lateral gene transfer, however, suggests that there are situations where trees are inadequate. Consequently, it is important to determine properties that characterize networks closely related to $N$ and possibly more complicated than trees but lacking the full complexity of $N$. A connected surjective digraph map (CSD) is a map $f$ from one network $N$ to another network $M$ such that every arc is either collapsed to a single vertex or is taken to an arc, such that $f$ is surjective, and such that the inverse image of a vertex is always connected. CSD maps are shown to behave well under composition. It is proved that if there is a CSD map from $N$ to $M$, then there is a way to lift an undirected version of $M$ into $N$, often with added resolution. A CSD map from $N$ to $M$ puts strong constraints on $N$. In general, it may be useful to study classes of networks such that, for any $N$, there exists a CSD map from $N$ to some standard member of that class.
2408.07298
Tom Britton
Tom Britton and Frank Ball
Improving the use of social contact studies in epidemic modelling
Supplementary material not included. Contact Tom Britton if you are interested
null
null
null
q-bio.PE physics.soc-ph stat.ME
http://creativecommons.org/licenses/by-nc-nd/4.0/
Social contact studies, investigating social contact patterns in a population sample, have made an important contribution to fitting epidemic models to real-life epidemics. A contact matrix $M$, having the \emph{mean} number of contacts between individuals of different age groups as its elements, is estimated and used in combination with a multitype epidemic model to produce better data fitting and to give more appropriate expressions for $R_0$ and other model outcomes. However, $M$ does not capture \emph{variation} in contacts \emph{within} each age group, which is often large in empirical settings. Here such variation within age groups is included in a simple way by dividing each age group into two halves: the socially active and the socially less active. The extended contact matrix, and its associated epidemic model, empirically show that acknowledging variation in social activity within age groups has a substantial impact on modelling outcomes such as $R_0$ and the final fraction $\tau$ getting infected. In fact, the variation in social activity within age groups is often more important for data fitting than the division into different age groups itself. However, a difficulty with heterogeneity in social activity is that social contact studies typically lack information on whether mixing with respect to social activity is assortative, i.e.\ whether the socially active tend to mix more with other socially active individuals or with socially less active ones. The analyses show that accounting for heterogeneity in social activity improves the modelling irrespective of whether such mixing is assortative, but the different assumptions give rather different outputs. Future social contact studies should hence also try to infer the degree of assortativity of contacts with respect to social activity.
[ { "created": "Tue, 30 Jul 2024 19:38:14 GMT", "version": "v1" } ]
2024-08-15
[ [ "Britton", "Tom", "" ], [ "Ball", "Frank", "" ] ]
Social contact studies, investigating social contact patterns in a population sample, have made an important contribution to fitting epidemic models to real-life epidemics. A contact matrix $M$, having the \emph{mean} number of contacts between individuals of different age groups as its elements, is estimated and used in combination with a multitype epidemic model to produce better data fitting and to give more appropriate expressions for $R_0$ and other model outcomes. However, $M$ does not capture \emph{variation} in contacts \emph{within} each age group, which is often large in empirical settings. Here such variation within age groups is included in a simple way by dividing each age group into two halves: the socially active and the socially less active. The extended contact matrix, and its associated epidemic model, empirically show that acknowledging variation in social activity within age groups has a substantial impact on modelling outcomes such as $R_0$ and the final fraction $\tau$ getting infected. In fact, the variation in social activity within age groups is often more important for data fitting than the division into different age groups itself. However, a difficulty with heterogeneity in social activity is that social contact studies typically lack information on whether mixing with respect to social activity is assortative, i.e.\ whether the socially active tend to mix more with other socially active individuals or with socially less active ones. The analyses show that accounting for heterogeneity in social activity improves the modelling irrespective of whether such mixing is assortative, but the different assumptions give rather different outputs. Future social contact studies should hence also try to infer the degree of assortativity of contacts with respect to social activity.
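A toy numerical sketch of the construction described above: each age group of a mean contact matrix is split into a socially active half and a less active half, and $R_0$ is read off as being proportional to the leading eigenvalue of the resulting matrix, with an optional assortativity knob. The contact numbers, the factor-of-two activity split, and the transmissibility are invented for illustration and carry no empirical content.

import numpy as np

# Hypothetical 2-age-group mean contact matrix M (contacts per day); illustrative only.
M = np.array([[10.0, 3.0],
              [ 3.0, 6.0]])

def split_activity(M, high_frac=2.0, assortative=0.0):
    """Split each age group into a socially active and a less active half.
    The active half makes `high_frac` times the contacts of the less active half;
    `assortative` in [0, 1] biases contacts of each half toward its own half
    (0 = proportionate mixing by activity). Purely illustrative construction."""
    n = M.shape[0]
    w = np.array([high_frac, 1.0])
    w = w / w.mean()                               # activity levels, mean 1 within each group
    E = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for a in range(2):
            for j in range(n):
                share = w / w.sum()                # proportionate mixing by activity
                if assortative > 0.0:
                    same = np.zeros(2)
                    same[a] = 1.0
                    share = (1.0 - assortative) * share + assortative * same
                E[2 * i + a, 2 * j] = M[i, j] * w[a] * share[0]
                E[2 * i + a, 2 * j + 1] = M[i, j] * w[a] * share[1]
    return E

def leading_eig(A):
    return float(np.max(np.real(np.linalg.eigvals(A))))

beta = 0.05   # per-contact transmissibility times infectious period (illustrative)
print("R0, age groups only      :", beta * leading_eig(M))
print("R0, with activity split  :", beta * leading_eig(split_activity(M)))
print("R0, assortative activity :", beta * leading_eig(split_activity(M, assortative=0.5)))

With these made-up numbers the split matrix has a larger leading eigenvalue than the age-only matrix, and assortative mixing of the active halves pushes it up further, which is the qualitative effect the abstract describes.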
2311.04165
Michael Staelens
Elisabetta Di Gregorio and Michael Staelens and Nazanin Hosseinkhah and Mahroo Karimpoor and Janine Liburd and Lew Lim and Karthik Shankar and Jack A. Tuszynski
Raman Spectroscopy Reveals Photobiomodulation-Induced {\alpha}-Helix to {\beta}-Sheet Transition in Tubulins: Potential Implications for Alzheimer's and Other Neurodegenerative Diseases
15 pages, 3 figures
null
10.3390/nano14131093
null
q-bio.BM q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this study, we employed a Raman spectroscopic analysis of the amide I band of polymerized samples of tubulin exposed to pulsed low-intensity NIR radiation (810 nm, 10 Hz, 22.5 J/cm$^{2}$ dose). Peaks in the Raman fingerprint region (300$\unicode{x2013}$1900 cm$^{-1}$), in particular, in the amide I band (1600$\unicode{x2013}$1700 cm$^{-1}$), can be used to quantify the percentage of protein secondary structures. Under this band, hidden signals of $\mathrm{C}$=$\mathrm{O}$ stretching, belonging to different structures, are superimposed$\unicode{x2014}$producing a complex signal as a result. An accurate decomposition of the amide I band is therefore required for the reliable analysis of the conformation of proteins, which we achieved through a straightforward method employing a Voigt profile. This approach was validated through secondary structure analyses of unexposed control samples, for which comparisons with other values available in the literature could be conducted. Subsequently, using this validated method, we present novel findings of statistically significant alterations in the secondary structures of NIR-exposed tubulin, characterized by a notable decrease in alpha-helix content and a concurrent increase in beta-sheets compared to the control samples. The alpha-helix to beta-sheet transition suggests that PBM reduces microtubule stability and introduces dynamism to allow for the remodeling and, consequently, refreshing of microtubule structures. This newly discovered mechanism could have implications for reducing the risks associated with brain aging, including neurodegenerative diseases like Alzheimer's disease.
[ { "created": "Tue, 7 Nov 2023 17:47:55 GMT", "version": "v1" } ]
2024-06-27
[ [ "Di Gregorio", "Elisabetta", "" ], [ "Staelens", "Michael", "" ], [ "Hosseinkhah", "Nazanin", "" ], [ "Karimpoor", "Mahroo", "" ], [ "Liburd", "Janine", "" ], [ "Lim", "Lew", "" ], [ "Shankar", "Karthik", "" ], [ "Tuszynski", "Jack A.", "" ] ]
In this study, we employed a Raman spectroscopic analysis of the amide I band of polymerized samples of tubulin exposed to pulsed low-intensity NIR radiation (810 nm, 10 Hz, 22.5 J/cm$^{2}$ dose). Peaks in the Raman fingerprint region (300$\unicode{x2013}$1900 cm$^{-1}$), in particular, in the amide I band (1600$\unicode{x2013}$1700 cm$^{-1}$), can be used to quantify the percentage of protein secondary structures. Under this band, hidden signals of $\mathrm{C}$=$\mathrm{O}$ stretching, belonging to different structures, are superimposed$\unicode{x2014}$producing a complex signal as a result. An accurate decomposition of the amide I band is therefore required for the reliable analysis of the conformation of proteins, which we achieved through a straightforward method employing a Voigt profile. This approach was validated through secondary structure analyses of unexposed control samples, for which comparisons with other values available in the literature could be conducted. Subsequently, using this validated method, we present novel findings of statistically significant alterations in the secondary structures of NIR-exposed tubulin, characterized by a notable decrease in alpha-helix content and a concurrent increase in beta-sheets compared to the control samples. The alpha-helix to beta-sheet transition suggests that PBM reduces microtubule stability and introduces dynamism to allow for the remodeling and, consequently, refreshing of microtubule structures. This newly discovered mechanism could have implications for reducing the risks associated with brain aging, including neurodegenerative diseases like Alzheimer's disease.
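A schematic sketch of the decomposition step described above: fitting a sum of Voigt components to a (here synthetic) amide I band and reporting relative peak areas as secondary-structure fractions. The band positions, widths, and noise level are illustrative assumptions, not the authors' fitting code; it assumes SciPy >= 1.4 for scipy.special.voigt_profile.

import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def two_voigt(x, a1, c1, s1, g1, a2, c2, s2, g2):
    """Sum of two Voigt components (e.g. an alpha-helix band and a beta-sheet band)."""
    return (a1 * voigt_profile(x - c1, s1, g1) +
            a2 * voigt_profile(x - c2, s2, g2))

# Synthetic amide I band: one component near 1655 cm^-1, another near 1670 cm^-1
# (approximate, illustrative band positions), plus noise.
x = np.linspace(1600, 1700, 400)
rng = np.random.default_rng(0)
y = two_voigt(x, 60, 1655, 4, 4, 40, 1670, 4, 4) + rng.normal(0, 0.05, x.size)

p0 = [50, 1650, 5, 5, 50, 1675, 5, 5]                       # rough initial guess
popt, _ = curve_fit(two_voigt, x, y, p0=p0, bounds=(0, np.inf))
a1, a2 = popt[0], popt[4]                                    # each amplitude equals the component area
print(f"component 1 fraction ~ {a1 / (a1 + a2):.2f}, component 2 fraction ~ {a2 / (a1 + a2):.2f}")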
2305.08987
Sean Paulsen
Sean Paulsen, Lloyd May, Michael Casey
Decoding Imagined Auditory Pitch Phenomena with an Autoencoder Based Temporal Convolutional Architecture
Awarded Best Paper at BRAININFO2021
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Stimulus decoding of functional Magnetic Resonance Imaging (fMRI) data with machine learning models has provided new insights about neural representational spaces and task-related dynamics. However, the scarcity of labelled (task-related) fMRI data is a persistent obstacle, resulting in model underfitting and poor generalization. In this work, we mitigated data poverty by extending a recent pattern-encoding strategy from the visual memory domain to our own domain of auditory pitch tasks, which to our knowledge had not been done. Specifically, extracting preliminary information about participants' neural activation dynamics from the unlabelled fMRI data resulted in improved downstream classifier performance when decoding heard and imagined pitch. Our results demonstrate the benefits of leveraging unlabelled fMRI data against data poverty for decoding pitch-based tasks, and yield novel, significant evidence for both separate and overlapping pathways of heard and imagined pitch processing, deepening our understanding of auditory cognitive neuroscience.
[ { "created": "Mon, 15 May 2023 20:02:58 GMT", "version": "v1" } ]
2023-05-17
[ [ "Paulsen", "Sean", "" ], [ "May", "Lloyd", "" ], [ "Casey", "Michael", "" ] ]
Stimulus decoding of functional Magnetic Resonance Imaging (fMRI) data with machine learning models has provided new insights about neural representational spaces and task-related dynamics. However, the scarcity of labelled (task-related) fMRI data is a persistent obstacle, resulting in model underfitting and poor generalization. In this work, we mitigated data poverty by extending a recent pattern-encoding strategy from the visual memory domain to our own domain of auditory pitch tasks, which to our knowledge had not been done. Specifically, extracting preliminary information about participants' neural activation dynamics from the unlabelled fMRI data resulted in improved downstream classifier performance when decoding heard and imagined pitch. Our results demonstrate the benefits of leveraging unlabelled fMRI data against data poverty for decoding pitch-based tasks, and yield novel, significant evidence for both separate and overlapping pathways of heard and imagined pitch processing, deepening our understanding of auditory cognitive neuroscience.
1807.07065
K. Eric Drexler
K. Eric Drexler
Molecular Imprinting: The missing piece in the puzzle of abiogenesis?
34 pages, 5 figures, 2 tables, 113 references
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a neglected 2005 paper, Nobel Laureate Paul Lauterbur proposed that molecular imprinting in amorphous materials -- a phenomenon with an extensive experimental literature -- played a key role in abiogenesis. The present paper builds on Lauterbur's idea to propose imprint-mediated templating (IMT), a mechanism for prebiotic peptide replication that could potentially avoid a range of difficulties arising in classic gene-first and metabolism-first models of abiogenesis. Unlike models that propose prebiotic RNA synthesis, activation, and polymerization based on unknown chemistries, peptide/IMT models are compatible with demonstrably realistic prebiotic chemistries: synthesis of dilute mixtures of racemic amino acids from atmospheric gases, and polymerization of unactivated amino acids on hot, intermittently-wetted surfaces. Starting from a peptide/IMT-based genetics, plausible processes could support the elaboration of genetic and metabolic complexity in an early-Earth environment, both explaining the emergence of homochirality and providing a potential bridge to nucleic acid metabolism. Peptide/IMT models suggest directions for both theoretical and experimental inquiry.
[ { "created": "Wed, 18 Jul 2018 19:16:47 GMT", "version": "v1" } ]
2018-07-20
[ [ "Drexler", "K. Eric", "" ] ]
In a neglected 2005 paper, Nobel Laureate Paul Lauterbur proposed that molecular imprinting in amorphous materials -- a phenomenon with an extensive experimental literature -- played a key role in abiogenesis. The present paper builds on Lauterbur's idea to propose imprint-mediated templating (IMT), a mechanism for prebiotic peptide replication that could potentially avoid a range of difficulties arising in classic gene-first and metabolism-first models of abiogenesis. Unlike models that propose prebiotic RNA synthesis, activation, and polymerization based on unknown chemistries, peptide/IMT models are compatible with demonstrably realistic prebiotic chemistries: synthesis of dilute mixtures of racemic amino acids from atmospheric gases, and polymerization of unactivated amino acids on hot, intermittently-wetted surfaces. Starting from a peptide/IMT-based genetics, plausible processes could support the elaboration of genetic and metabolic complexity in an early-Earth environment, both explaining the emergence of homochirality and providing a potential bridge to nucleic acid metabolism. Peptide/IMT models suggest directions for both theoretical and experimental inquiry.
1711.06724
Huifang Xu
Huifang Xu and Yifeng Wang
An Equation for Predicting Binding Strengths of Metal Cations to Protein of Human Serum Transferrin
14 pages, 4 figures, 4 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Because human serum transferrin (hTF) exists freely in serum, it is a potential target for cancer treatment drugs and for treating iron-overload conditions in patients receiving long-term transfusion therapy. Understanding the interactions between hTF and metal ions is very important for biological, pharmacological, toxicological, and other protein engineering purposes. In this paper, a simple linear free energy correlation is proposed to predict the binding strength between the hTF protein and metal cations. The stability constants for a family of metal-hTF complexes can be correlated with the non-solvation energies and the radii of the cations. The binding strength is determined by both the physical properties (charge and size, or ionic radius) and the chemical properties (non-solvation energy) of a given cation. The binding strengths of both divalent and trivalent metals can then be predicted systematically.
[ { "created": "Fri, 17 Nov 2017 20:53:26 GMT", "version": "v1" }, { "created": "Mon, 1 Nov 2021 19:32:31 GMT", "version": "v2" } ]
2021-11-03
[ [ "Xu", "Huifang", "" ], [ "Wang", "Yifeng", "" ] ]
Because human serum transferrin (hTF) exists freely in serum, it is a potential target for cancer treatment drugs and for treating iron-overload conditions in patients receiving long-term transfusion therapy. Understanding the interactions between hTF and metal ions is very important for biological, pharmacological, toxicological, and other protein engineering purposes. In this paper, a simple linear free energy correlation is proposed to predict the binding strength between the hTF protein and metal cations. The stability constants for a family of metal-hTF complexes can be correlated with the non-solvation energies and the radii of the cations. The binding strength is determined by both the physical properties (charge and size, or ionic radius) and the chemical properties (non-solvation energy) of a given cation. The binding strengths of both divalent and trivalent metals can then be predicted systematically.
1604.05129
Pedro Alejandro Ortega
Pedro A. Ortega and Naftali Tishby
Memory shapes time perception and intertemporal choices
24 pages, 4 figures, 2 tables. Submitted
null
null
null
q-bio.NC cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a consensus that human and non-human subjects experience temporal distortions in many stages of their perceptual and decision-making systems. Similarly, intertemporal choice research has shown that decision-makers undervalue future outcomes relative to immediate ones. Here we combine techniques from information theory and artificial intelligence to show how both temporal distortions and intertemporal choice preferences can be explained as a consequence of the coding efficiency of sensorimotor representation. In particular, the model implies that interactions that constrain future behavior are perceived as being both longer in duration and more valuable. Furthermore, using simulations of artificial agents, we investigate how memory constraints enforce a renormalization of the perceived timescales. Our results show that qualitatively different discount functions, such as exponential and hyperbolic discounting, arise as a consequence of an agent's probabilistic model of the world.
[ { "created": "Mon, 18 Apr 2016 13:17:55 GMT", "version": "v1" }, { "created": "Sun, 29 May 2016 18:39:52 GMT", "version": "v2" } ]
2016-05-31
[ [ "Ortega", "Pedro A.", "" ], [ "Tishby", "Naftali", "" ] ]
There is a consensus that human and non-human subjects experience temporal distortions in many stages of their perceptual and decision-making systems. Similarly, intertemporal choice research has shown that decision-makers undervalue future outcomes relative to immediate ones. Here we combine techniques from information theory and artificial intelligence to show how both temporal distortions and intertemporal choice preferences can be explained as a consequence of the coding efficiency of sensorimotor representation. In particular, the model implies that interactions that constrain future behavior are perceived as being both longer in duration and more valuable. Furthermore, using simulations of artificial agents, we investigate how memory constraints enforce a renormalization of the perceived timescales. Our results show that qualitatively different discount functions, such as exponential and hyperbolic discounting, arise as a consequence of an agent's probabilistic model of the world.
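For concreteness, the two discount functions contrasted above can be written as d_exp(t) = gamma^t and d_hyp(t) = 1/(1 + k*t). The short sketch below (with illustrative parameter values only, not values from the paper) tabulates both and shows the preference-reversal behaviour that distinguishes hyperbolic from exponential discounting.

import numpy as np

def d_exp(t, gamma=0.9):      # exponential discounting
    return gamma ** t

def d_hyp(t, k=0.25):         # hyperbolic discounting
    return 1.0 / (1.0 + k * t)

for ti in (0, 1, 5, 10, 20):
    print(f"t={ti:2d}  exp={d_exp(ti):.3f}  hyp={d_hyp(ti):.3f}")

# Preference reversal: a smaller-sooner vs larger-later choice whose ordering
# flips under hyperbolic but not exponential discounting (illustrative numbers).
small, large, delay = 10.0, 15.0, 5.0
for shift in (0.0, 20.0):
    vs_e, vl_e = small * d_exp(shift), large * d_exp(shift + delay)
    vs_h, vl_h = small * d_hyp(shift), large * d_hyp(shift + delay)
    print(f"shift={shift:4.0f}  exp prefers {'small' if vs_e > vl_e else 'large'},"
          f"  hyp prefers {'small' if vs_h > vl_h else 'large'}")

The exponential chooser's preference is unchanged by the common delay shift, whereas the hyperbolic chooser reverses preference as both options recede into the future, which is the usual qualitative signature separating the two.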
1711.11270
Don Mai
Jayodita C. Sanghvi, Don Mai, Adam P. Arkin, David V. Schaffer
A comprehensive stochastic computational model of HIV infection from DNA integration to viral burst
21 pages, single column, turned links into hyperlinks, changed pdf title
null
null
null
q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple mechanisms in the HIV lifecycle play a role in its ability to evade therapy and become a chronic, difficult-to-treat infection. Within its major cellular target, the activated T cell, many steps occur between viral entry and viral burst, including reverse transcription of viral RNA, integration of the viral DNA in the host genome, viral transcription, splicing, translation, host and viral regulation, and viral packaging. These steps exploit complex networks of macromolecular interactions that exhibit various forms of stochastic behavior. While each of the steps of HIV infection has been studied extensively on its own, the combinatorial contribution of rare events in each of the steps, and how series of these rare events lead to different infection phenotypes, are not well understood. The complexity of these processes renders experimental study challenging. Therefore, we have built a comprehensive computational model of this large system by collating the community's knowledge of the infection process. It is a stochastic model in which the rates of different events in the system are represented as probabilities of the event occurring in a timestep of the simulation. This model enables an understanding of the noise and variation in the system. The model also facilitates a dissected understanding of each small part of the large, complex system and its impact on the overall system dynamics.
[ { "created": "Thu, 30 Nov 2017 08:55:14 GMT", "version": "v1" }, { "created": "Fri, 1 Dec 2017 07:05:51 GMT", "version": "v2" } ]
2017-12-04
[ [ "Sanghvi", "Jayodita C.", "" ], [ "Mai", "Don", "" ], [ "Arkin", "Adam P.", "" ], [ "Schaffer", "David V.", "" ] ]
Multiple mechanisms in the HIV lifecycle play a role in its ability to evade therapy and become a chronic, difficult-to-treat infection. Within its major cellular target, the activated T cell, many steps occur between viral entry and viral burst, including reverse transcription of viral RNA, integration of the viral DNA in the host genome, viral transcription, splicing, translation, host and viral regulation, and viral packaging. These steps exploit complex networks of macromolecular interactions that exhibit various forms of stochastic behavior. While each of the steps of HIV infection has been studied extensively on its own, the combinatorial contribution of rare events in each of the steps, and how series of these rare events lead to different infection phenotypes, are not well understood. The complexity of these processes renders experimental study challenging. Therefore, we have built a comprehensive computational model of this large system by collating the community's knowledge of the infection process. It is a stochastic model in which the rates of different events in the system are represented as probabilities of the event occurring in a timestep of the simulation. This model enables an understanding of the noise and variation in the system. The model also facilitates a dissected understanding of each small part of the large, complex system and its impact on the overall system dynamics.
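A minimal sketch of the kind of stochastic bookkeeping described above, in which per-event rates become per-timestep probabilities. The two-species production/conversion/decay toy system and all rate constants are invented for illustration; this is not the authors' HIV lifecycle model.

import numpy as np

def fixed_step_stochastic(rates, stoich, x0, dt=0.01, steps=5000, seed=0):
    """Fixed-timestep stochastic simulation: each event fires in a step with
    probability rate(x)*dt (assumed << 1). `rates` maps the state to per-event
    rates; `stoich` holds the state change caused by each event."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    history = np.empty((steps, len(x0)))
    for s in range(steps):
        for r, change in zip(rates(x), stoich):
            if rng.random() < r * dt:     # the event occurs in this timestep
                x += change
        x = np.maximum(x, 0.0)            # no negative copy numbers
        history[s] = x
    return history

# Toy two-species system (hypothetical, for illustration only): species 0 is
# produced at a constant rate and converted into species 1, which then decays.
stoich = [np.array([+1, 0]),   # production of species 0
          np.array([-1, +1]),  # conversion 0 -> 1
          np.array([0, -1])]   # decay of species 1
rates = lambda x: np.array([5.0, 0.1 * x[0], 0.05 * x[1]])

h = fixed_step_stochastic(rates, stoich, x0=[0, 0])
print("final state:", h[-1])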
1611.08750
Ginestra Bianconi
Ginestra Bianconi
Epidemic spreading and bond percolation in multilayer networks
8 pages, 2 figures
J. Stat. Mech. (2017) 034001
10.1088/1742-5468/aa5fd8
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Susceptible-Infected-Recovered (SIR) model is studied in multilayer networks with an arbitrary number of links across the layers. By following the mapping to bond percolation we give analytical expressions for the epidemic threshold and the fraction of infected individuals for an arbitrary number of layers. These results provide an exact prediction of the epidemic threshold for infinite locally tree-like multilayer networks, and a lower bound on the epidemic threshold for more general multilayer networks. The case of a multilayer network formed by two interconnected networks is specifically studied as a function of the degree distribution within and across the layers. We show that the epidemic threshold strongly depends on the degree correlations of the multilayer structure. Finally we relate our results to the results obtained in the annealed approximation for the Susceptible-Infected-Susceptible (SIS) model.
[ { "created": "Sat, 26 Nov 2016 22:20:47 GMT", "version": "v1" }, { "created": "Fri, 10 Feb 2017 11:48:27 GMT", "version": "v2" } ]
2017-03-09
[ [ "Bianconi", "Ginestra", "" ] ]
The Susceptible-Infected-Recovered (SIR) model is studied in multilayer networks with an arbitrary number of links across the layers. By following the mapping to bond percolation we give analytical expressions for the epidemic threshold and the fraction of infected individuals for an arbitrary number of layers. These results provide an exact prediction of the epidemic threshold for infinite locally tree-like multilayer networks, and a lower bound on the epidemic threshold for more general multilayer networks. The case of a multilayer network formed by two interconnected networks is specifically studied as a function of the degree distribution within and across the layers. We show that the epidemic threshold strongly depends on the degree correlations of the multilayer structure. Finally we relate our results to the results obtained in the annealed approximation for the Susceptible-Infected-Susceptible (SIS) model.
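A minimal single-network illustration (not the multilayer calculation itself) of the SIR-to-bond-percolation mapping the paper builds on: occupy each edge independently with the transmissibility T and read an estimate of the expected final outbreak fraction off the largest connected component. It assumes networkx is available; the Erdos-Renyi graph and the T values are illustrative.

import numpy as np
import networkx as nx

def percolation_outbreak_fraction(G, T, trials=20, seed=0):
    """Bond-percolation estimate of the SIR final size: keep each edge with
    probability T (the transmissibility) and return the mean fraction of nodes
    in the largest connected component of the percolated graph."""
    rng = np.random.default_rng(seed)
    n = G.number_of_nodes()
    sizes = []
    for _ in range(trials):
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        H.add_edges_from(e for e in G.edges() if rng.random() < T)
        giant = max(nx.connected_components(H), key=len)
        sizes.append(len(giant) / n)
    return float(np.mean(sizes))

G = nx.erdos_renyi_graph(20000, 4.0 / 20000, seed=1)   # mean degree ~4, illustrative
for T in (0.1, 0.25, 0.5):
    print(f"T={T:.2f}  outbreak fraction ~ {percolation_outbreak_fraction(G, T):.3f}")

For this mean-degree-4 random graph the percolation transition sits near T = 1/4, so the three T values bracket the threshold below and above which a macroscopic outbreak appears.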
1704.08419
Lou Massa
Earl Bloch, Lou Massa, Kilian Dill
The use of quantum dots to amplify antigen detection
11 pages, 8 figures
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A proposal to develop an improved immunological assay employing primary IgG antibodies and secondary IgM antibodies labeled with quantum dots to amplify antigen detection.
[ { "created": "Thu, 27 Apr 2017 03:25:36 GMT", "version": "v1" } ]
2017-04-28
[ [ "Bloch", "Earl", "" ], [ "Massa", "Lou", "" ], [ "Dill", "Kilian", "" ] ]
A proposal to develop an improved immunological assay employing primary IgG antibodies and secondary IgM antibodies labeled with quantum dots to amplify antigen detection.
0901.0082
Szymon Niewieczerza{\l}
Szymon Niewieczerza{\l} and Marek Cieplak
Stretching and twisting of the DNA duplexes in coarse grained dynamical models
null
null
10.1088/0953-8984/21/47/474221
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three coarse-grained models of double-stranded DNA are proposed and compared in the context of mechanical manipulation such as twisting and various schemes of stretching. The models differ in the number of effective beads (between two and five) representing each nucleotide. They all show similar behavior and, in particular, lead to torque-force phase diagrams qualitatively consistent with experiments and all-atom simulations.
[ { "created": "Wed, 31 Dec 2008 10:31:38 GMT", "version": "v1" } ]
2015-05-13
[ [ "Niewieczerzał", "Szymon", "" ], [ "Cieplak", "Marek", "" ] ]
Three coarse-grained models of double-stranded DNA are proposed and compared in the context of mechanical manipulation such as twisting and various schemes of stretching. The models differ in the number of effective beads (between two and five) representing each nucleotide. They all show similar behavior and, in particular, lead to torque-force phase diagrams qualitatively consistent with experiments and all-atom simulations.
1607.05530
Francesco Fumarola
Francesco Fumarola
Verbal Perception and the Word Length Effect
39 pages, 13 figures; data analysis included
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
A theoretical framework is proposed for the understanding of verbal perception -- the conversion of words into meaning, modeled as a compromise between lexical demands and contextual constraints -- and the theory is tested against experiments on short-term memory. The observation that lists of short words are recalled better than lists of long ones has been a long-standing subject of controversy, further complicated by the apparent inversion of the effect for mixed lists. In the framework here proposed, these behaviors emerge as an effect of the different level of localization of short and long words in semantic space. Events corresponding to the recognition of a nonlocal word have a clustering property in phase space, which facilitates associative retrieval. The standard word-length effect arises directly from this property, and the inverse effect from its breakdown. An analysis of data from the PEERS experiments (Healey and Kahana, 2016) confirms the main predictions of the theory. Further predictions are listed and new experiments are proposed. Finally, an interpretation of the above results is presented.
[ { "created": "Tue, 19 Jul 2016 11:38:19 GMT", "version": "v1" }, { "created": "Thu, 15 Sep 2016 21:58:12 GMT", "version": "v2" } ]
2016-09-19
[ [ "Fumarola", "Francesco", "" ] ]
A theoretical framework is proposed for the understanding of verbal perception -- the conversion of words into meaning, modeled as a compromise between lexical demands and contextual constraints -- and the theory is tested against experiments on short-term memory. The observation that lists of short words are recalled better than lists of long ones has been a long-standing subject of controversy, further complicated by the apparent inversion of the effect for mixed lists. In the framework here proposed, these behaviors emerge as an effect of the different level of localization of short and long words in semantic space. Events corresponding to the recognition of a nonlocal word have a clustering property in phase space, which facilitates associative retrieval. The standard word-length effect arises directly from this property, and the inverse effect from its breakdown. An analysis of data from the PEERS experiments (Healey and Kahana, 2016) confirms the main predictions of the theory. Further predictions are listed and new experiments are proposed. Finally, an interpretation of the above results is presented.
1108.1999
Max Souza
Max O. Souza
Multiscale Analysis for a Vector-Borne Epidemic Model
null
J. Math. Biol., 68 (5), 1269--1293 (2014)
10.1007/s00285-013-0666-6
null
q-bio.PE math.CA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional studies of disease dynamics have focused on global stability issues, due to their epidemiological importance. We study a classical SIR-SI model for arboviruses in two different directions: we begin by describing an alternative proof of previously known global stability results by using only a Lyapunov approach. In the sequel, we take a different view and argue that vectors and hosts can have very distinct intrinsic time-scales, and that this distinction extends to the disease dynamics. Under these hypotheses, we show that two asymptotic regimes naturally appear: the fast host dynamics and the fast vector dynamics. The former regime yields, at leading order, a SIR model for the hosts, but with a rational incidence rate. In this case, the vector disappears from the model, and the dynamics is similar to that of a directly contagious disease. The latter yields a SI model for the vectors, with the hosts disappearing from the model. Numerical results show the performance of the approximation, and a rigorous proof validates the reduced models.
[ { "created": "Tue, 9 Aug 2011 18:07:47 GMT", "version": "v1" }, { "created": "Fri, 18 Jan 2013 03:52:31 GMT", "version": "v2" }, { "created": "Wed, 27 Feb 2013 03:55:55 GMT", "version": "v3" } ]
2014-08-28
[ [ "Souza", "Max O.", "" ] ]
Traditional studies of disease dynamics have focused on global stability issues, due to their epidemiological importance. We study a classical SIR-SI model for arboviruses in two different directions: we begin by describing an alternative proof of previously known global stability results by using only a Lyapunov approach. In the sequel, we take a different view and argue that vectors and hosts can have very distinct intrinsic time-scales, and that this distinction extends to the disease dynamics. Under these hypotheses, we show that two asymptotic regimes naturally appear: the fast host dynamics and the fast vector dynamics. The former regime yields, at leading order, a SIR model for the hosts, but with a rational incidence rate. In this case, the vector disappears from the model, and the dynamics is similar to that of a directly contagious disease. The latter yields a SI model for the vectors, with the hosts disappearing from the model. Numerical results show the performance of the approximation, and a rigorous proof validates the reduced models.
1107.3037
Alexey Mazur K
Alexey K. Mazur
Local elasticity of strained DNA studied by all-atom simulations
15 pages, 18 figures, to appear in Phys. Rev. E
Phys. Rev. E 84, 021903, 2011
10.1103/PhysRevE.84.021903
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomic DNA is constantly subjected to various mechanical stresses arising from its biological functions and cell packaging. If the local mechanical properties of DNA change under torsional and tensional stress, the activity of DNA-modifying proteins and transcription factors can be affected and regulated allosterically. To check this possibility, appropriate steady forces and torques were applied in the course of all-atom molecular dynamics simulations of DNA with AT- and GC-alternating sequences. It is found that the stretching rigidity grows with tension as well as twisting. The torsional rigidity is not affected by stretching, but it varies with twisting very strongly, and differently for the two sequences. Surprisingly, for AT-alternating DNA it passes through a minimum with the average twist close to the experimental value in solution. For this fragment, but not for the GC-alternating sequence, the bending rigidity noticeably changes with both twisting and stretching. The results have important biological implications and shed light upon earlier experimental observations.
[ { "created": "Fri, 15 Jul 2011 10:43:40 GMT", "version": "v1" } ]
2015-05-28
[ [ "Mazur", "Alexey K.", "" ] ]
Genomic DNA is constantly subjected to various mechanical stresses arising from its biological functions and cell packaging. If the local mechanical properties of DNA change under torsional and tensional stress, the activity of DNA-modifying proteins and transcription factors can be affected and regulated allosterically. To check this possibility, appropriate steady forces and torques were applied in the course of all-atom molecular dynamics simulations of DNA with AT- and GC-alternating sequences. It is found that the stretching rigidity grows with tension as well as twisting. The torsional rigidity is not affected by stretching, but it varies with twisting very strongly, and differently for the two sequences. Surprisingly, for AT-alternating DNA it passes through a minimum with the average twist close to the experimental value in solution. For this fragment, but not for the GC-alternating sequence, the bending rigidity noticeably changes with both twisting and stretching. The results have important biological implications and shed light upon earlier experimental observations.
2005.09190
Muktish Acharyya
Tanmay Das and Muktish Acharyya
Transient behaviour towards the stable limit cycle in the Selkov model of Glycolysis: A physiological disorder
8 pages LaTeX and 5 eps figures
Physica A 567 (2021) 125684
10.1016/j.physa.2020.125684
PU-Phys-23-5-2020
q-bio.QM cond-mat.soft nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simplified model of the complex glycolytic process was historically proposed by Selkov. It showed the existence of a stable limit cycle as an example of the Poincare-Bendixson theorem. This limit cycle is nothing but the time-eliminated Lissajous plot of the concentrations of adenosine diphosphate (ADP) and fructose-6-phosphate (F6P) of a normal/healthy human. Deviation from this limit cycle is equivalent to a deviation from normal physiological behaviour. It is very important to know how long a human body takes to return to the glycolytic stable limit cycle if it deviates from it. However, until now the convergence time, depending upon different initial parameter values, had not been studied in detail. This may be of great importance in understanding the recovery time of a diseased individual who has deviated from the normal cycle. Here the convergence time for different initial conditions has been calculated in the original Selkov model. It is observed that the convergence time, as a function of the distance from the limit cycle, saturates away from the cycle. This result seems to indicate a physiological disorder. A possible mathematical way to incorporate this in the Selkov model has been proposed.
[ { "created": "Tue, 19 May 2020 03:23:06 GMT", "version": "v1" }, { "created": "Sat, 23 May 2020 05:21:44 GMT", "version": "v2" }, { "created": "Mon, 14 Dec 2020 15:12:55 GMT", "version": "v3" }, { "created": "Sat, 19 Dec 2020 06:12:19 GMT", "version": "v4" } ]
2020-12-29
[ [ "Das", "Tanmay", "" ], [ "Acharyya", "Muktish", "" ] ]
A simplified model of the complex glycolytic process was historically proposed by Selkov. It showed the existence of a stable limit cycle as an example of the Poincare-Bendixson theorem. This limit cycle is nothing but the time-eliminated Lissajous plot of the concentrations of adenosine diphosphate (ADP) and fructose-6-phosphate (F6P) of a normal/healthy human. Deviation from this limit cycle is equivalent to a deviation from normal physiological behaviour. It is very important to know how long a human body takes to return to the glycolytic stable limit cycle if it deviates from it. However, until now the convergence time, depending upon different initial parameter values, had not been studied in detail. This may be of great importance in understanding the recovery time of a diseased individual who has deviated from the normal cycle. Here the convergence time for different initial conditions has been calculated in the original Selkov model. It is observed that the convergence time, as a function of the distance from the limit cycle, saturates away from the cycle. This result seems to indicate a physiological disorder. A possible mathematical way to incorporate this in the Selkov model has been proposed.
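As a minimal numerical companion to the abstract above, the Python sketch below integrates the standard dimensionless Selkov equations, dx/dt = -x + a*y + x^2*y and dy/dt = b - a*y - x^2*y, and estimates how long a perturbed trajectory takes to come close to the stable limit cycle. The parameter values (a = 0.08, b = 0.6), the tolerance, and the initial conditions are illustrative assumptions and need not match those used in the paper.

import numpy as np
from scipy.integrate import solve_ivp

A, B = 0.08, 0.6                      # illustrative Selkov parameters with a stable limit cycle

def selkov(t, z):
    x, y = z
    return [-x + A * y + x**2 * y, B - A * y - x**2 * y]

# Reference run: integrate long enough to settle onto the cycle, keep the tail as a sampled cycle.
ref = solve_ivp(selkov, (0.0, 400.0), [1.0, 1.0], max_step=0.05)
cycle = ref.y[:, ref.t > 300.0].T

def convergence_time(x0, y0, tol=1e-2, t_max=400.0):
    """Crude proxy: first time the trajectory comes within tol of the sampled cycle."""
    sol = solve_ivp(selkov, (0.0, t_max), [x0, y0], max_step=0.05)
    for t, p in zip(sol.t, sol.y.T):
        if np.min(np.linalg.norm(cycle - p, axis=1)) < tol:
            return t
    return np.inf

for x0 in (0.1, 1.0, 3.0):
    print("start x0 =", x0, "-> convergence time ~", round(convergence_time(x0, 0.5), 2))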
2310.01426
Chang Hu
Chang Hu, Krishnakant V. Saboo, Ahmad H. Ali, Brian D. Juran, Konstantinos N. Lazaridis, Ravishankar K. Iyer
REMEDI: REinforcement learning-driven adaptive MEtabolism modeling of primary sclerosing cholangitis DIsease progression
8 pages, 5 figures, 4 appendices
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Primary sclerosing cholangitis (PSC) is a rare disease wherein altered bile acid metabolism contributes to sustained liver injury. This paper introduces REMEDI, a framework that captures bile acid dynamics and the body's adaptive response during PSC progression that can assist in exploring treatments. REMEDI merges a differential equation (DE)-based mechanistic model that describes bile acid metabolism with reinforcement learning (RL) to emulate the body's adaptations to PSC continuously. An objective of adaptation is to maintain homeostasis by regulating enzymes involved in bile acid metabolism. These enzymes correspond to the parameters of the DEs. REMEDI leverages RL to approximate adaptations in PSC, treating homeostasis as a reward signal and the adjustment of the DE parameters as the corresponding actions. On real-world data, REMEDI generated bile acid dynamics and parameter adjustments consistent with published findings. Also, our results support discussions in the literature that early administration of drugs that suppress bile acid synthesis may be effective in PSC treatment.
[ { "created": "Mon, 2 Oct 2023 21:46:01 GMT", "version": "v1" } ]
2023-10-04
[ [ "Hu", "Chang", "" ], [ "Saboo", "Krishnakant V.", "" ], [ "Ali", "Ahmad H.", "" ], [ "Juran", "Brian D.", "" ], [ "Lazaridis", "Konstantinos N.", "" ], [ "Iyer", "Ravishankar K.", "" ] ]
Primary sclerosing cholangitis (PSC) is a rare disease wherein altered bile acid metabolism contributes to sustained liver injury. This paper introduces REMEDI, a framework that captures bile acid dynamics and the body's adaptive response during PSC progression that can assist in exploring treatments. REMEDI merges a differential equation (DE)-based mechanistic model that describes bile acid metabolism with reinforcement learning (RL) to emulate the body's adaptations to PSC continuously. An objective of adaptation is to maintain homeostasis by regulating enzymes involved in bile acid metabolism. These enzymes correspond to the parameters of the DEs. REMEDI leverages RL to approximate adaptations in PSC, treating homeostasis as a reward signal and the adjustment of the DE parameters as the corresponding actions. On real-world data, REMEDI generated bile acid dynamics and parameter adjustments consistent with published findings. Also, our results support discussions in the literature that early administration of drugs that suppress bile acid synthesis may be effective in PSC treatment.
2310.02375
Alex Ushveridze
Alex Ushveridze
On Physical Origins of Learning
44 pages, 8 figures
null
null
null
q-bio.NC cs.AI physics.class-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The quest to comprehend the origins of intelligence raises intriguing questions about the evolution of learning abilities in natural systems. Why do living organisms possess an inherent drive to acquire knowledge of the unknown? Is this motivation solely explicable through natural selection, favoring systems capable of learning due to their increased chances of survival? Or do there exist additional, more rapid mechanisms that offer immediate rewards to systems entering the "learning mode" in the "right ways"? This article explores the latter possibility and endeavors to unravel the possible nature of these ways. We propose that learning may have a non-biological and non-evolutionary origin. It turns out that key properties of learning can be observed, explained, and accurately reproduced within simple physical models that describe energy accumulation mechanisms in open resonant-type systems with dissipation.
[ { "created": "Thu, 27 Jul 2023 19:45:19 GMT", "version": "v1" } ]
2023-10-05
[ [ "Ushveridze", "Alex", "" ] ]
The quest to comprehend the origins of intelligence raises intriguing questions about the evolution of learning abilities in natural systems. Why do living organisms possess an inherent drive to acquire knowledge of the unknown? Is this motivation solely explicable through natural selection, favoring systems capable of learning due to their increased chances of survival? Or do there exist additional, more rapid mechanisms that offer immediate rewards to systems entering the "learning mode" in the "right ways"? This article explores the latter possibility and endeavors to unravel the possible nature of these ways. We propose that learning may have a non-biological and non-evolutionary origin. It turns out that key properties of learning can be observed, explained, and accurately reproduced within simple physical models that describe energy accumulation mechanisms in open resonant-type systems with dissipation.
2005.01914
Julio Augusto Freyre-Gonz\'alez
Juan M. Escorcia-Rodr\'iguez (1), Andreas Tauch (2), and Julio A. Freyre-Gonz\'alez (1) ((1) Regulatory Systems Biology Research Group, Laboratory of Systems and Synthetic Biology, Center for Genomic Sciences, Universidad Nacional Aut\'onoma de M\'exico, (2) Centrum f\"ur Biotechnologie (CeBiTec), Universit\"at Bielefeld)
Abasy Atlas v2.2: The most comprehensive and up-to-date inventory of meta-curated, historical, bacterial regulatory networks, their completeness and system-level characterization
25 pages, 6 figures, 11 pages supplementary information
Computational and Structural Biotechnology Journal 18:1228-1237 (2020)
10.1016/j.csbj.2020.05.015
null
q-bio.MN q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Some organism-specific databases about regulation in bacteria have become larger, accelerated by high-throughput methodologies, while others are no longer updated or accessible. Each database homogenizes its datasets, giving rise to heterogeneity across databases. Such heterogeneity mainly encompasses different names for a gene and different network representations, generating duplicated interactions that could bias network analyses. Abasy (Across-bacteria systems) Atlas consolidates information from different sources into meta-curated regulatory networks in bacteria. The high-quality networks in Abasy Atlas enable cross-organism analyses, such as benchmarking studies where gold standards are required. Nevertheless, network incompleteness still casts doubt on the conclusions of network analyses, and available sampling methods cannot reflect the curation process. To tackle this problem, the updated version of Abasy Atlas presented in this work provides historical snapshots of regulatory networks. Thus, network analyses can be performed at different completeness levels, making it possible to identify potential bias and to predict future results. We leverage the recently found constraint in the complexity of regulatory networks to develop a novel model to quantify the total number of regulatory interactions as a function of the genome size. This completeness estimation is a valuable insight that may aid in the daunting task of network curation, prediction, and validation. The new version of Abasy Atlas provides 76 networks (204,282 regulatory interactions) covering 42 bacteria (64% Gram-positive and 36% Gram-negative) distributed in 9 species, containing 8,459 regulons and 4,335 modules.
[ { "created": "Tue, 5 May 2020 02:09:30 GMT", "version": "v1" }, { "created": "Sat, 23 May 2020 05:08:05 GMT", "version": "v2" } ]
2021-01-11
[ [ "Escorcia-Rodríguez", "Juan M.", "" ], [ "Tauch", "Andreas", "" ], [ "Freyre-González", "Julio A.", "" ] ]
Some organism-specific databases about regulation in bacteria have become larger, accelerated by high-throughput methodologies, while others are no longer updated or accessible. Each database homogenizes its datasets, giving rise to heterogeneity across databases. Such heterogeneity mainly encompasses different names for a gene and different network representations, generating duplicated interactions that could bias network analyses. Abasy (Across-bacteria systems) Atlas consolidates information from different sources into meta-curated regulatory networks in bacteria. The high-quality networks in Abasy Atlas enable cross-organism analyses, such as benchmarking studies where gold standards are required. Nevertheless, network incompleteness still casts doubt on the conclusions of network analyses, and available sampling methods cannot reflect the curation process. To tackle this problem, the updated version of Abasy Atlas presented in this work provides historical snapshots of regulatory networks. Thus, network analyses can be performed at different completeness levels, making it possible to identify potential bias and to predict future results. We leverage the recently found constraint in the complexity of regulatory networks to develop a novel model to quantify the total number of regulatory interactions as a function of the genome size. This completeness estimation is a valuable insight that may aid in the daunting task of network curation, prediction, and validation. The new version of Abasy Atlas provides 76 networks (204,282 regulatory interactions) covering 42 bacteria (64% Gram-positive and 36% Gram-negative) distributed in 9 species, containing 8,459 regulons and 4,335 modules.
1304.3218
Carsten Dormann
Carsten F. Dormann and Rouven Strauss
Detecting modules in quantitative bipartite networks: the QuaBiMo algorithm
19 pages, 10 figures, 4 tables
Methods in Ecology and Evolution 5 (2014) 90-98
10.1111/2041-210X.12139
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological networks are often composed of different sub-communities (often referred to as modules). Identifying such modules can improve our understanding of the assembly of ecological communities and help investigate functional overlap or specialisation. The most informative form of network is the quantitative or weighted network. Here we introduce an algorithm to identify modules in quantitative bipartite (or two-mode) networks. It is based on the hierarchical random graph concept of Clauset et al. (2008 Nature 453: 98-101) and is extended to include quantitative information and adapted to work with bipartite graphs. We define the algorithm, which we call QuaBiMo, sketch its performance on simulated data and illustrate its potential usefulness with a case study.
[ { "created": "Thu, 11 Apr 2013 07:04:20 GMT", "version": "v1" } ]
2014-03-14
[ [ "Dormann", "Carsten F.", "" ], [ "Strauss", "Rouven", "" ] ]
Ecological networks are often composed of different sub-communities (often referred to as modules). Identifying such modules can improve our understanding of the assembly of ecological communities and help investigate functional overlap or specialisation. The most informative form of network is the quantitative or weighted network. Here we introduce an algorithm to identify modules in quantitative bipartite (or two-mode) networks. It is based on the hierarchical random graph concept of Clauset et al. (2008 Nature 453: 98-101) and is extended to include quantitative information and adapted to work with bipartite graphs. We define the algorithm, which we call QuaBiMo, sketch its performance on simulated data and illustrate its potential usefulness with a case study.
2311.10200
Fatih Dinc
Fatih Dinc, Adam Shai, Mark Schnitzer, Hidenori Tanaka
CORNN: Convex optimization of recurrent neural networks for rapid inference of neural dynamics
Accepted at NeurIPS 2023
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations in behaving animals. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). Performing this training in real-time could open doors for research techniques and medical applications to model and control interventions at single-cell resolution and drive desired forms of animal behavior. However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings, CORNN attained training speeds ~100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural time-scales. Overall, by training dRNNs with millions of parameters in subminute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained on large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation.
[ { "created": "Thu, 16 Nov 2023 21:14:28 GMT", "version": "v1" } ]
2023-11-20
[ [ "Dinc", "Fatih", "" ], [ "Shai", "Adam", "" ], [ "Schnitzer", "Mark", "" ], [ "Tanaka", "Hidenori", "" ] ]
Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations in behaving animals. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). Performing this training in real-time could open doors for research techniques and medical applications to model and control interventions at single-cell resolution and drive desired forms of animal behavior. However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings, CORNN attained training speeds ~100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural time-scales. Overall, by training dRNNs with millions of parameters in subminute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained on large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation.
2407.19634
Li Chen
Guozhong Zheng, Zhenwei Ding, Jiqiang Zhang, Shengfeng Deng, Weiran Cai, Li Chen
The evolution of cooperation with Q-learning: the impact of information perception
12 pages, 13 figures, comments are appreciated
null
null
null
q-bio.PE cond-mat.stat-mech nlin.AO
http://creativecommons.org/licenses/by/4.0/
The huge inherent complexity of human beings gives rise to a remarkable diversity of responses to complex surroundings, enabling us to tackle problems from different perspectives. In the realm of cooperation studies, however, existing work assumes that individuals get access to the same kind of information to make their decisions, in contrast to the fact that individuals often perceive information differently. Here, within the reinforcement learning framework, we investigate the impact of information perception on the evolution of cooperation in a 2-person scenario when playing the prisoner's dilemma game. We demonstrate that distinctly different evolution processes are observed in three information perception scenarios, revealing that the structure of information significantly affects the emergence of cooperation. Notably, the asymmetric information scenario exhibits a rich dynamical process, including the emergence, breakdown, and reconstruction of cooperation, akin to psychological changes in humans. Our findings indicate that the information structure is vital to the emergence of cooperation, shedding new light on establishing mutually stable cooperative relationships and understanding human behavioral complexities in general.
[ { "created": "Mon, 29 Jul 2024 01:33:20 GMT", "version": "v1" } ]
2024-07-30
[ [ "Zheng", "Guozhong", "" ], [ "Ding", "Zhenwei", "" ], [ "Zhang", "Jiqiang", "" ], [ "Deng", "Shengfeng", "" ], [ "Cai", "Weiran", "" ], [ "Chen", "Li", "" ] ]
The huge inherent complexity of human beings gives rise to a remarkable diversity of responses to complex surroundings, enabling us to tackle problems from different perspectives. In the realm of cooperation studies, however, existing work assumes that individuals get access to the same kind of information to make their decisions, in contrast to the fact that individuals often perceive information differently. Here, within the reinforcement learning framework, we investigate the impact of information perception on the evolution of cooperation in a 2-person scenario when playing the prisoner's dilemma game. We demonstrate that distinctly different evolution processes are observed in three information perception scenarios, revealing that the structure of information significantly affects the emergence of cooperation. Notably, the asymmetric information scenario exhibits a rich dynamical process, including the emergence, breakdown, and reconstruction of cooperation, akin to psychological changes in humans. Our findings indicate that the information structure is vital to the emergence of cooperation, shedding new light on establishing mutually stable cooperative relationships and understanding human behavioral complexities in general.
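To make the reinforcement-learning setup in the abstract above concrete, here is a minimal Python sketch of two independent tabular Q-learners playing the iterated prisoner's dilemma, each conditioning on the previous pair of actions. The payoff values, state encoding, and learning parameters are illustrative assumptions; the paper's scenarios instead vary what information each agent perceives.

import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}     # assumed standard PD payoffs
ACTIONS = ("C", "D")

class QAgent:
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = {}                                    # tabular Q-values keyed by (state, action)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def act(self, state):
        if self.rng.random() < self.eps:               # epsilon-greedy exploration
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

a1, a2 = QAgent(seed=1), QAgent(seed=2)
state, coop, rounds = ("C", "C"), 0, 50000
for _ in range(rounds):
    x, y = a1.act(state), a2.act(state[::-1])          # each agent sees its own last action first
    r1, r2 = PAYOFF[(x, y)]
    a1.learn(state, x, r1, (x, y))
    a2.learn(state[::-1], y, r2, (y, x))
    state = (x, y)
    coop += (x == "C") + (y == "C")
print("cooperation rate:", coop / (2 * rounds))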
1901.07400
Norichika Ogata
Norichika Ogata
Quantitative Measurement of Heritability in the Pre-RNA World
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long ago, life obtained nucleotides in the course of evolution and became a vehicle for them. Before assembly with nucleotides, in the pre-RNA era, what system dominated heredity? What was the subject of survival competition? Is it still a subject of competition? Self-organized complex systems are hypothesized to be a primary factor of the origin of life and to dominate heritability, mediating the partitioning of an equal distribution of structures and molecules at cell division. The degree of strength of self-organization would correlate with heritability; self-organization is known to be a physical basis of hysteresis phenomena, and the degree of hysteresis is quantifiable. However, there is no argument corroborating the relationship between heritability and hysteresis. Here, we show that the degree of cellular hysteresis indicates its heritability and daughter equivalence at cell division. We found a correlation between thermal hysteresis in cell size and heritability, which quantified cell line generation stability, suggesting that the self-organized complex system is a subject of survival competition, comparable to nucleotides. Furthermore, in single-cell-resolution observations, we found that thermal hysteresis in cell size indicates equivalent partitioning in future cell division. Our results demonstrate that self-organized complex systems contribute to heredity and are still important in mammalian cells. Predicting the cell line generational stability required for the industrial production of therapeutic biologics is useful. Discovering ancient and hidden heredity systems enables us to study our own origin, to predict cell features and to manage them in the bio-economy.
[ { "created": "Tue, 22 Jan 2019 15:12:34 GMT", "version": "v1" } ]
2019-01-23
[ [ "Ogata", "Norichika", "" ] ]
Long ago, life obtained nucleotides in the course of evolution and became a vehicle for them. Before assembly with nucleotides, in the pre-RNA era, what system dominated heredity? What was the subject of survival competition? Is it still a subject of competition? Self-organized complex systems are hypothesized to be a primary factor of the origin of life and to dominate heritability, mediating the partitioning of an equal distribution of structures and molecules at cell division. The degree of strength of self-organization would correlate with heritability; self-organization is known to be a physical basis of hysteresis phenomena, and the degree of hysteresis is quantifiable. However, there is no argument corroborating the relationship between heritability and hysteresis. Here, we show that the degree of cellular hysteresis indicates its heritability and daughter equivalence at cell division. We found a correlation between thermal hysteresis in cell size and heritability, which quantified cell line generation stability, suggesting that the self-organized complex system is a subject of survival competition, comparable to nucleotides. Furthermore, in single-cell-resolution observations, we found that thermal hysteresis in cell size indicates equivalent partitioning in future cell division. Our results demonstrate that self-organized complex systems contribute to heredity and are still important in mammalian cells. Predicting the cell line generational stability required for the industrial production of therapeutic biologics is useful. Discovering ancient and hidden heredity systems enables us to study our own origin, to predict cell features and to manage them in the bio-economy.
1106.2793
Olivier Francois
Katalin Csill\'ery, Olivier Fran\c{c}ois, Michael GB Blum
abc: an R package for Approximate Bayesian Computation (ABC)
null
null
null
null
q-bio.PE physics.data-an stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many recent statistical applications involve inference under complex models, where it is computationally prohibitive to calculate likelihoods but possible to simulate data. Approximate Bayesian Computation (ABC) is devoted to these complex models because it bypasses evaluations of the likelihood function using comparisons between observed and simulated summary statistics. We introduce the R abc package that implements several ABC algorithms for performing parameter estimation and model selection. In particular, the recently developed non-linear heteroscedastic regression methods for ABC are implemented. The abc package also includes a cross-validation tool for measuring the accuracy of ABC estimates, and to calculate the misclassification probabilities when performing model selection. The main functions are accompanied by appropriate summary and plotting tools. Considering an example of demographic inference with population genetics data, we show the potential of the R package. R is already widely used in bioinformatics and several fields of biology. The R abc package will make the ABC algorithms available to the large number of R users. abc is a freely available R package under the GPL license, and it can be downloaded at http://cran.r-project.org/web/packages/abc/index.html.
[ { "created": "Tue, 14 Jun 2011 19:23:07 GMT", "version": "v1" } ]
2011-06-15
[ [ "Csilléry", "Katalin", "" ], [ "François", "Olivier", "" ], [ "Blum", "Michael GB", "" ] ]
Many recent statistical applications involve inference under complex models, where it is computationally prohibitive to calculate likelihoods but possible to simulate data. Approximate Bayesian Computation (ABC) is devoted to these complex models because it bypasses evaluations of the likelihood function using comparisons between observed and simulated summary statistics. We introduce the R abc package that implements several ABC algorithms for performing parameter estimation and model selection. In particular, the recently developed non-linear heteroscedastic regression methods for ABC are implemented. The abc package also includes a cross-validation tool for measuring the accuracy of ABC estimates, and to calculate the misclassification probabilities when performing model selection. The main functions are accompanied by appropriate summary and plotting tools. Considering an example of demographic inference with population genetics data, we show the potential of the R package. R is already widely used in bioinformatics and several fields of biology. The R abc package will make the ABC algorithms available to the large number of R users. abc is a freely available R package under the GPL license, and it can be downloaded at http://cran.r-project.org/web/packages/abc/index.html.
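For readers unfamiliar with the method itself, here is a minimal Python sketch of plain ABC rejection on a toy problem (inferring a normal mean from two summary statistics). It illustrates the generic algorithm only; the toy problem, prior, and tolerance rule are assumptions, and the sketch does not reproduce the R package's interface or its regression-adjusted estimators.

import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=2.0, scale=1.0, size=100)              # pretend these are the data
obs_stat = np.array([observed.mean(), observed.std()])

def summary_of_simulation(mu):
    x = rng.normal(loc=mu, scale=1.0, size=100)
    return np.array([x.mean(), x.std()])

# Draw from the prior, simulate, and keep the draws whose summaries are closest to the data.
prior_draws = rng.uniform(-5.0, 5.0, size=20000)
dists = np.array([np.linalg.norm(summary_of_simulation(m) - obs_stat) for m in prior_draws])
tol = np.quantile(dists, 0.01)                                    # accept the closest 1%
posterior_sample = prior_draws[dists <= tol]
print("approximate posterior mean:", posterior_sample.mean(), "sd:", posterior_sample.std())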
2101.01389
Yifei Li
Yifei Li, Stuart T. Johnston, Pascal R. Buenzli, Peter van Heijster, Matthew J. Simpson
Extinction of bistable populations is affected by the shape of their initial spatial distribution
14 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The question of whether biological populations survive or are eventually driven to extinction has long been examined using mathematical models. In this work we study population survival or extinction using a stochastic, discrete lattice-based random walk model where individuals undergo movement, birth and death events. The discrete model is defined on a two-dimensional hexagonal lattice with periodic boundary conditions. A key feature of the discrete model is that crowding effects are introduced by specifying two different crowding functions that govern how local agent density influences movement events and birth/death events. The continuum limit description of the discrete model is a nonlinear reaction-diffusion equation, and we focus on crowding functions that lead to linear diffusion and a bistable source term that is often associated with the strong Allee effect. Using both the discrete and continuum modelling tools we explore the complicated relationship between the long-term survival or extinction of the population and the initial spatial arrangement of the population. In particular, we study different spatial arrangements of initial distributions: (i) a well-mixed initial distribution where the initial density is independent of position in the domain; (ii) a vertical strip initial distribution where the initial density is independent of vertical position in the domain; and (iii) several forms of two-dimensional initial distributions where the initial population is distributed in regions with different shapes. Our results indicate that the shape of the initial spatial distribution of the population affects extinction of bistable populations. All software required to solve the discrete and continuum models used in this work is available on GitHub.
[ { "created": "Tue, 5 Jan 2021 07:35:29 GMT", "version": "v1" }, { "created": "Tue, 19 Jan 2021 04:29:25 GMT", "version": "v2" }, { "created": "Tue, 14 Sep 2021 07:18:31 GMT", "version": "v3" } ]
2021-09-15
[ [ "Li", "Yifei", "" ], [ "Johnston", "Stuart T.", "" ], [ "Buenzli", "Pascal R.", "" ], [ "van Heijster", "Peter", "" ], [ "Simpson", "Matthew J.", "" ] ]
The question of whether biological populations survive or are eventually driven to extinction has long been examined using mathematical models. In this work we study population survival or extinction using a stochastic, discrete lattice-based random walk model where individuals undergo movement, birth and death events. The discrete model is defined on a two-dimensional hexagonal lattice with periodic boundary conditions. A key feature of the discrete model is that crowding effects are introduced by specifying two different crowding functions that govern how local agent density influences movement events and birth/death events. The continuum limit description of the discrete model is a nonlinear reaction-diffusion equation, and we focus on crowding functions that lead to linear diffusion and a bistable source term that is often associated with the strong Allee effect. Using both the discrete and continuum modelling tools we explore the complicated relationship between the long-term survival or extinction of the population and the initial spatial arrangement of the population. In particular, we study different spatial arrangements of initial distributions: (i) a well-mixed initial distribution where the initial density is independent of position in the domain; (ii) a vertical strip initial distribution where the initial density is independent of vertical position in the domain; and (iii) several forms of two-dimensional initial distributions where the initial population is distributed in regions with different shapes. Our results indicate that the shape of the initial spatial distribution of the population affects extinction of bistable populations. All software required to solve the discrete and continuum models used in this work is available on GitHub.
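As a toy companion to the abstract above, the Python sketch below integrates a one-dimensional finite-difference analogue of the continuum limit, u_t = D u_xx + u(1-u)(u-A), starting from strips of different widths. The 1D geometry, parameter values, and particular bistable source term are illustrative assumptions, not the paper's two-dimensional hexagonal-lattice model or its exact crowding functions.

import numpy as np

D, A = 1.0, 0.3                      # diffusivity and Allee-type threshold (assumed values)
L, n, dt, steps = 100.0, 400, 0.02, 20000
dx = L / n                           # explicit scheme is stable here since D*dt/dx**2 = 0.32 < 0.5

def mean_density_after_run(strip_width):
    x = np.linspace(0.0, L, n, endpoint=False)
    u = np.where(np.abs(x - L / 2.0) < strip_width / 2.0, 1.0, 0.0)   # initial strip of density 1
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2       # periodic boundaries
        u = u + dt * (D * lap + u * (1.0 - u) * (u - A))
        u = np.clip(u, 0.0, 1.0)
    return u.mean()

for w in (2.0, 5.0, 10.0, 20.0):
    print("initial strip width", w, "-> mean density at t =", dt * steps, ":", round(mean_density_after_run(w), 3))

Varying the strip width shows how the fate of the population (retreat toward extinction versus invasion of the domain) can depend on the initial spatial arrangement, which is the qualitative point the paper examines in two dimensions.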
0903.0582
Sheng Wang
Sheng Wang
CLeFAPS: Fast Flexible Alignment of Protein Structures Based on Conformational Letters
12 pages, 9 figures
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CLeFAPS, a fast and flexible pairwise structural alignment algorithm based on a rigid-body framework, namely CLePAPS, is proposed. Instead of allowing twists (or bends), flexibility in CLeFAPS means: (a) flexibilization of the algorithm's parameters through self-adaptation to the input structures' size, (b) flexibilization of adding the aligned fragment pairs (AFPs) into a one-to-multi correspondence set instead of checking their position conflicts, and (c) flexible fragments may be found through an elongation procedure rooted in a vector-based score instead of a distance-based score. We compare CLeFAPS with other popular algorithms, both rigid-body and flexible, on a closely-related protein benchmark (HOMSTRAD) and a distantly-related protein benchmark (SABmark), the latter also being used for a discrimination test. The results show that CLeFAPS is competitive with or even outperforms the other algorithms while its running time is only 1/150 to 1/50 of theirs.
[ { "created": "Tue, 3 Mar 2009 18:38:08 GMT", "version": "v1" }, { "created": "Wed, 4 Mar 2009 18:11:26 GMT", "version": "v2" } ]
2009-03-04
[ [ "Wang", "Sheng", "" ] ]
CLeFAPS, a fast and flexible pairwise structural alignment algorithm based on a rigid-body framework, namely CLePAPS, is proposed. Instead of allowing twists (or bends), flexibility in CLeFAPS means: (a) flexibilization of the algorithm's parameters through self-adaptation to the input structures' size, (b) flexibilization of adding the aligned fragment pairs (AFPs) into a one-to-multi correspondence set instead of checking their position conflicts, and (c) flexible fragments may be found through an elongation procedure rooted in a vector-based score instead of a distance-based score. We compare CLeFAPS with other popular algorithms, both rigid-body and flexible, on a closely-related protein benchmark (HOMSTRAD) and a distantly-related protein benchmark (SABmark), the latter also being used for a discrimination test. The results show that CLeFAPS is competitive with or even outperforms the other algorithms while its running time is only 1/150 to 1/50 of theirs.
2002.08983
Ariane Nunes-Alves
Ariane Nunes-Alves, Daria B. Kokh, Rebecca C. Wade
Recent progress in molecular simulation methods for drug binding kinetics
Figure 3 was improved. A definition of PIB was included. Reference to WE was added (ref. 20), reference to RAMD was corrected (ref. 43)
null
10.1016/j.sbi.2020.06.022
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Due to the contribution of drug-target binding kinetics to drug efficacy, there is a high level of interest in developing methods to predict drug-target binding kinetic parameters. During the review period, a wide range of enhanced sampling molecular dynamics simulation-based methods has been developed for computing drug-target binding kinetics and studying binding and unbinding mechanisms. Here, we assess the performance of these methods considering two benchmark systems in detail: mutant T4 lysozyme-ligand complexes and a large set of N-HSP90-inhibitor complexes. The results indicate that some of the simulation methods can already be usefully applied in drug discovery or lead optimization programs but that further studies on more high-quality experimental benchmark datasets are necessary to improve and validate computational methods.
[ { "created": "Thu, 20 Feb 2020 19:17:59 GMT", "version": "v1" }, { "created": "Mon, 9 Mar 2020 10:53:36 GMT", "version": "v2" }, { "created": "Sun, 24 May 2020 20:31:04 GMT", "version": "v3" } ]
2020-08-10
[ [ "Nunes-Alves", "Ariane", "" ], [ "Kokh", "Daria B.", "" ], [ "Wade", "Rebecca C.", "" ] ]
Due to the contribution of drug-target binding kinetics to drug efficacy, there is a high level of interest in developing methods to predict drug-target binding kinetic parameters. During the review period, a wide range of enhanced sampling molecular dynamics simulation-based methods has been developed for computing drug-target binding kinetics and studying binding and unbinding mechanisms. Here, we assess the performance of these methods considering two benchmark systems in detail: mutant T4 lysozyme-ligand complexes and a large set of N-HSP90-inhibitor complexes. The results indicate that some of the simulation methods can already be usefully applied in drug discovery or lead optimization programs but that further studies on more high-quality experimental benchmark datasets are necessary to improve and validate computational methods.
1302.3274
Gideon Bradburd
Gideon Bradburd, Peter Ralph, Graham Coop
Disentangling the effects of geographic and ecological isolation on genetic differentiation
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Populations can be genetically isolated both by geographic distance and by differences in their ecology or environment that decrease the rate of successful migration. Empirical studies often seek to investigate the relationship between genetic differentiation and some ecological variable(s) while accounting for geographic distance, but common approaches to this problem (such as the partial Mantel test) have a number of drawbacks. In this article, we present a Bayesian method that enables users to quantify the relative contributions of geographic distance and ecological distance to genetic differentiation between sampled populations or individuals. We model the allele frequencies in a set of populations at a set of unlinked loci as spatially correlated Gaussian processes, in which the covariance structure is a decreasing function of both geographic and ecological distance. Parameters of the model are estimated using a Markov chain Monte Carlo algorithm. We call this method Bayesian Estimation of Differentiation in Alleles by Spatial Structure and Local Ecology (BEDASSLE), and have implemented it in a user-friendly format in the statistical platform R. We demonstrate its utility with a simulation study and empirical applications to human and teosinte datasets.
[ { "created": "Wed, 13 Feb 2013 23:50:23 GMT", "version": "v1" }, { "created": "Fri, 24 May 2013 23:54:28 GMT", "version": "v2" }, { "created": "Sat, 7 Sep 2013 17:08:17 GMT", "version": "v3" }, { "created": "Wed, 11 Sep 2013 16:10:45 GMT", "version": "v4" } ]
2013-09-12
[ [ "Bradburd", "Gideon", "" ], [ "Ralph", "Peter", "" ], [ "Coop", "Graham", "" ] ]
Populations can be genetically isolated both by geographic distance and by differences in their ecology or environment that decrease the rate of successful migration. Empirical studies often seek to investigate the relationship between genetic differentiation and some ecological variable(s) while accounting for geographic distance, but common approaches to this problem (such as the partial Mantel test) have a number of drawbacks. In this article, we present a Bayesian method that enables users to quantify the relative contributions of geographic distance and ecological distance to genetic differentiation between sampled populations or individuals. We model the allele frequencies in a set of populations at a set of unlinked loci as spatially correlated Gaussian processes, in which the covariance structure is a decreasing function of both geographic and ecological distance. Parameters of the model are estimated using a Markov chain Monte Carlo algorithm. We call this method Bayesian Estimation of Differentiation in Alleles by Spatial Structure and Local Ecology (BEDASSLE), and have implemented it in a user-friendly format in the statistical platform R. We demonstrate its utility with a simulation study and empirical applications to human and teosinte datasets.
1610.00542
Cristina Zucca
Petr Lansky, Laura Sacerdote, Cristina Zucca
The Gamma renewal process as an output of the diffusion leaky integrate-and-fire neuronal model
null
null
null
null
q-bio.NC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical properties of spike trains, as well as other neurophysiological data, suggest a number of mathematical models of neurons. These models range from entirely descriptive ones to those deduced from the properties of real neurons. One of them, the diffusion leaky integrate-and-fire neuronal model, which is based on the Ornstein-Uhlenbeck stochastic process restricted by an absorbing barrier, can describe a wide range of neuronal activity in terms of its parameters. These parameters are readily associated with known physiological mechanisms. The other model, the Gamma renewal process, is descriptive, and its parameters only reflect the observed experimental data or assumed theoretical properties. These two commonly used models are related here. We show under which conditions the Gamma model is an output of the diffusion Ornstein-Uhlenbeck model. In some cases we can see that the Gamma distribution cannot realistically be achieved for the employed parameters of the Ornstein-Uhlenbeck process.
[ { "created": "Mon, 3 Oct 2016 13:33:58 GMT", "version": "v1" } ]
2016-10-04
[ [ "Lansky", "Petr", "" ], [ "Sacerdote", "Laura", "" ], [ "Zucca", "Cristina", "" ] ]
Statistical properties of spike trains, as well as other neurophysiological data, suggest a number of mathematical models of neurons. These models range from entirely descriptive ones to those deduced from the properties of real neurons. One of them, the diffusion leaky integrate-and-fire neuronal model, which is based on the Ornstein-Uhlenbeck stochastic process restricted by an absorbing barrier, can describe a wide range of neuronal activity in terms of its parameters. These parameters are readily associated with known physiological mechanisms. The other model, the Gamma renewal process, is descriptive, and its parameters only reflect the observed experimental data or assumed theoretical properties. These two commonly used models are related here. We show under which conditions the Gamma model is an output of the diffusion Ornstein-Uhlenbeck model. In some cases we can see that the Gamma distribution cannot realistically be achieved for the employed parameters of the Ornstein-Uhlenbeck process.
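As a small numerical companion to the abstract above, the following Python sketch simulates first-passage times of an Ornstein-Uhlenbeck membrane potential through an absorbing threshold (Euler-Maruyama) and fits a Gamma distribution to the resulting interspike intervals by moment matching. All parameter values are illustrative assumptions rather than the regimes analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)
tau, mu, sigma = 20.0, 0.8, 1.0          # membrane time constant, drift, noise (assumed, ms / mV units)
v_thresh, v_reset, dt = 10.0, 0.0, 0.05

def one_interspike_interval():
    v, t = v_reset, 0.0
    while v < v_thresh:                   # absorbing threshold
        v += (-v / tau + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

isis = np.array([one_interspike_interval() for _ in range(1000)])
m, var = isis.mean(), isis.var()
shape, scale = m**2 / var, var / m        # method-of-moments Gamma fit
print("mean ISI:", round(m, 2), " CV:", round(isis.std() / m, 3),
      " Gamma shape:", round(shape, 2), " scale:", round(scale, 3))

Comparing the empirical interspike-interval histogram with the fitted Gamma density over a range of drift and noise values is one crude way to probe when a Gamma description is, or is not, attainable.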
0912.4391
Rui Dilao
R. Dil\~ao and D. Muraro
Calibration and validation of a genetic regulatory network model describing the production of the gap gene protein Hunchback in \emph{Drosophila} early development
25 pages, 10 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We fit the parameters of a differential equations model describing the production of the gap gene proteins Hunchback and Knirps along the antero-posterior axis of the embryo of \emph{Drosophila}. As initial data for the differential equations model, we take the antero-posterior distribution of the proteins Bicoid, Hunchback and Tailless at the beginning of cleavage cycle 14. We calibrate and validate the model with experimental data using single- and multi-objective evolutionary optimization techniques. In the multi-objective optimization technique, we compute the associated Pareto fronts. We analyze the cross-regulation mechanism between the gap gene protein pair Hunchback-Knirps and show that the posterior distribution of Hunchback follows the experimental data if Hunchback is negatively regulated by the Huckebein protein. This approach enables us to predict the posterior localization of the protein Huckebein on the embryo, and we validate against the experimental data the genetic regulatory network responsible for the antero-posterior distribution of the gap gene protein Hunchback. We discuss the importance of Pareto multi-objective optimization techniques in the calibration and validation of biological models.
[ { "created": "Tue, 22 Dec 2009 12:41:53 GMT", "version": "v1" } ]
2009-12-23
[ [ "Dilão", "R.", "" ], [ "Muraro", "D.", "" ] ]
We fit the parameters of a differential equations model describing the production of the gap gene proteins Hunchback and Knirps along the antero-posterior axis of the embryo of \emph{Drosophila}. As initial data for the differential equations model, we take the antero-posterior distribution of the proteins Bicoid, Hunchback and Tailless at the beginning of cleavage cycle 14. We calibrate and validate the model with experimental data using single- and multi-objective evolutionary optimization techniques. In the multi-objective optimization technique, we compute the associated Pareto fronts. We analyze the cross-regulation mechanism between the gap gene protein pair Hunchback-Knirps and show that the posterior distribution of Hunchback follows the experimental data if Hunchback is negatively regulated by the Huckebein protein. This approach enables us to predict the posterior localization of the protein Huckebein on the embryo, and we validate against the experimental data the genetic regulatory network responsible for the antero-posterior distribution of the gap gene protein Hunchback. We discuss the importance of Pareto multi-objective optimization techniques in the calibration and validation of biological models.
1708.08120
Brooke Husic
Brooke E. Husic and Vijay S. Pande
MSM lag time cannot be used for variational model selection
null
J. Chem. Phys. 2017, 147, 176101
10.1063/1.5002086
null
q-bio.BM physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The variational principle for conformational dynamics has enabled the systematic construction of Markov state models through the optimization of hyperparameters by approximating the transfer operator. In this note we discuss why the lag time of the operator being approximated must be held constant in the variational approach.
[ { "created": "Sun, 27 Aug 2017 18:26:24 GMT", "version": "v1" } ]
2019-11-26
[ [ "Husic", "Brooke E.", "" ], [ "Pande", "Vijay S.", "" ] ]
The variational principle for conformational dynamics has enabled the systematic construction of Markov state models through the optimization of hyperparameters by approximating the transfer operator. In this note we discuss why the lag time of the operator being approximated must be held constant in the variational approach.
2310.00266
Perrine Seguin
Perrine Seguin, Emmanuel Maby, Fabien Perrin, Alessandro Farn\`e, J\'er\'emie Mattout
Is controlling a brain-computer interface just a matter of presence of mind? The limits of cognitive-motor dissociation
3 tables
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Brain-computer interfaces (BCI) are presented as a solution for people with global paralysis, also known as locked-in syndrome (LIS). The targeted population includes the most severely affected patients, with no residual eye movements, who cannot use any communication device (complete LIS). However, BCI reliability is low precisely in these cases, and technical pitfalls have so far been considered responsible. Here, we propose to also consider that global paralysis could have an impact on cognitive functions that are crucial for being able to control a BCI. We review a bundle of arguments about the role of motor structures in cognition. In particular, we point out that patients without oculomotor activity often have injuries in more 'cognitive' structures, such as the frontal eye field or the midbrain, exposing them to cognitive deficits beyond those of the canonical LIS population. We develop a hypothesis about the putative role of the motor system in (covert) attention, a capacity which is a prerequisite for most BCI paradigms and which should therefore be better assessed in patients and taken into account.
[ { "created": "Sat, 30 Sep 2023 05:50:03 GMT", "version": "v1" } ]
2023-10-03
[ [ "Seguin", "Perrine", "" ], [ "Maby", "Emmanuel", "" ], [ "Perrin", "Fabien", "" ], [ "Farnè", "Alessandro", "" ], [ "Mattout", "Jérémie", "" ] ]
Brain-computer interfaces (BCI) are presented as a solution for people with global paralysis, also known as locked-in syndrome (LIS). The targeted population includes the most severely affected patients, with no residual eye movements, who cannot use any communication device (complete LIS). However, BCI reliability is low precisely in these cases, and technical pitfalls have so far been considered responsible. Here, we propose to also consider that global paralysis could have an impact on cognitive functions that are crucial for being able to control a BCI. We review a bundle of arguments about the role of motor structures in cognition. In particular, we point out that patients without oculomotor activity often have injuries in more 'cognitive' structures, such as the frontal eye field or the midbrain, exposing them to cognitive deficits beyond those of the canonical LIS population. We develop a hypothesis about the putative role of the motor system in (covert) attention, a capacity which is a prerequisite for most BCI paradigms and which should therefore be better assessed in patients and taken into account.