Dataset fields (name, dtype, value-length range or number of classes):

  id              stringlengths   9 – 13
  submitter       stringlengths   4 – 48
  authors         stringlengths   4 – 9.62k
  title           stringlengths   4 – 343
  comments        stringlengths   2 – 480
  journal-ref     stringlengths   9 – 309
  doi             stringlengths   12 – 138
  report-no       stringclasses   277 values
  categories      stringlengths   8 – 87
  license         stringclasses   9 values
  orig_abstract   stringlengths   27 – 3.76k
  versions        listlengths     1 – 15
  update_date     stringlengths   10 – 10
  authors_parsed  listlengths     1 – 147
  abstract        stringlengths   24 – 3.75k
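Each record below lists these fifteen fields in order, one value per line, with null marking an empty value. As a rough illustration of how such records could be loaded and inspected programmatically, here is a minimal Python sketch using the Hugging Face `datasets` library; the repository name is a hypothetical placeholder, and the field layout is assumed to match the listing above.

```python
# Minimal sketch (assumptions: the dump is hosted as a Hugging Face dataset named
# "user/arxiv-qbio-metadata", a hypothetical placeholder, and each record carries
# the fifteen fields listed above).
from datasets import load_dataset

ds = load_dataset("user/arxiv-qbio-metadata", split="train")

row = ds[0]
print(row["id"], row["title"])     # arXiv id and paper title
print(row["categories"])           # space-separated arXiv categories, e.g. "q-bio.NC"
print(row["update_date"])          # "YYYY-MM-DD", hence the fixed length of 10

# "versions" is assumed to be a list of {"created", "version"} entries, one per revision.
for v in row["versions"]:
    print(v["version"], v["created"])

# "authors_parsed" stores one [last, first, suffix, (optional affiliation)] list per author.
for last, first, *rest in row["authors_parsed"]:
    print(f"{first} {last}".strip())
```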
2309.17363
Sreejan Kumar
Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths
Relational Constraints On Neural Networks Reproduce Human Biases towards Abstract Geometric Regularity
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors. One illustration of this is in the visual perception of geometric forms. Studies have shown a uniquely human bias toward geometric regularity, with task performance enhanced for more regular and symmetric forms compared to their geometrically irregular counterparts. Such studies conclude that this behavior implies the existence of discrete symbolic structure in human mental representations, and that replicating such behavior in neural network architectures will require mechanisms for symbolic processing. In this study, we argue that human biases towards geometric regularity can be reproduced in neural networks, without explicitly providing them with symbolic machinery, by augmenting them with an architectural constraint that enables the system to discover and manipulate relational structure. When trained with the appropriate curriculum, this model exhibits human-like biases towards symmetry and regularity in two distinct tasks involving abstract geometric reasoning. Our findings indicate that neural networks, when equipped with the necessary training objectives and architectural elements, can exhibit human-like regularity biases and generalization. This approach provides insights into the neural mechanisms underlying geometric reasoning and offers an alternative to prevailing symbolic "Language of Thought" models in this domain.
[ { "created": "Fri, 29 Sep 2023 16:12:51 GMT", "version": "v1" } ]
2023-10-02
[ [ "Campbell", "Declan", "" ], [ "Kumar", "Sreejan", "" ], [ "Giallanza", "Tyler", "" ], [ "Cohen", "Jonathan D.", "" ], [ "Griffiths", "Thomas L.", "" ] ]
Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors. One illustration of this is in the visual perception of geometric forms. Studies have shown a uniquely human bias toward geometric regularity, with task performance enhanced for more regular and symmetric forms compared to their geometrically irregular counterparts. Such studies conclude that this behavior implies the existence of discrete symbolic structure in human mental representations, and that replicating such behavior in neural network architectures will require mechanisms for symbolic processing. In this study, we argue that human biases towards geometric regularity can be reproduced in neural networks, without explicitly providing them with symbolic machinery, by augmenting them with an architectural constraint that enables the system to discover and manipulate relational structure. When trained with the appropriate curriculum, this model exhibits human-like biases towards symmetry and regularity in two distinct tasks involving abstract geometric reasoning. Our findings indicate that neural networks, when equipped with the necessary training objectives and architectural elements, can exhibit human-like regularity biases and generalization. This approach provides insights into the neural mechanisms underlying geometric reasoning and offers an alternative to prevailing symbolic "Language of Thought" models in this domain.
q-bio/0511041
Javier Macia Santamaria
Javier Macia, Ricard V. Sole
Protocell Self-Reproduction in a Spatially Extended Metabolism-Vesicle System
50 pages, 16 figures, 1 table
null
null
null
q-bio.CB q-bio.SC
null
Cellular life requires the presence of a set of biochemical mechanisms in order to maintain a predictable process of growth and division. Several attempts have been made towards building minimal protocells from a top-down approach, i.e. by using available biomolecules. This type of synthetic approach has so far been unsuccessful, and appropriate models of the synthetic protocell cycle may be needed to guide future experiments. In this paper we present a simple, biochemically and physically feasible model of cell replication involving a discrete semi-permeable vesicle with an internal minimal metabolism involving two reactive centers. It is shown that such a system can effectively undergo a whole cell replication cycle. The model can be used as a basic framework to model whole protocell dynamics including more complex sets of reactions. The possible implementation of our design in future synthetic protocells is outlined.
[ { "created": "Fri, 25 Nov 2005 14:44:34 GMT", "version": "v1" }, { "created": "Sun, 9 Apr 2006 20:25:07 GMT", "version": "v2" }, { "created": "Mon, 24 Apr 2006 14:41:27 GMT", "version": "v3" }, { "created": "Thu, 4 May 2006 14:21:00 GMT", "version": "v4" } ]
2007-05-23
[ [ "Macia", "Javier", "" ], [ "Sole", "Ricard V.", "" ] ]
Cellular life requires the presence of a set of biochemical mechanisms in order to maintain a predictable process of growth and division. Several attempts have been made towards building minimal protocells from a top-down approach, i.e. by using available biomolecules. This type of synthetic approach has so far been unsuccessful, and appropriate models of the synthetic protocell cycle may be needed to guide future experiments. In this paper we present a simple, biochemically and physically feasible model of cell replication involving a discrete semi-permeable vesicle with an internal minimal metabolism involving two reactive centers. It is shown that such a system can effectively undergo a whole cell replication cycle. The model can be used as a basic framework to model whole protocell dynamics including more complex sets of reactions. The possible implementation of our design in future synthetic protocells is outlined.
2006.12619
Olga Krivorotko
Olga Krivorotko (1, 2 and 3), Sergey Kabanikhin (1, 2 and 3), Nikolay Zyatkov (3), Alexey Prikhodko (1, 2 and 3), Nikita Prokhoshin (1 and 2), Maxim Shishlenin (1, 2 and 3) ((1) Novosibirsk State University, (2) Mathematical Center in Akademgorodok, (3) Institute of Computational Mathematics and Mathematical Geophysics of Siberian Branch of Russian Academy of Sciences, Novosibirsk, Russia)
Mathematical modeling and prediction of COVID-19 in Moscow city and Novosibirsk region
23 pages, in Russian, 8 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper formulates and solves the problem of identifying unknown parameters of SEIR-type mathematical models of the spread of COVID-19 coronavirus infection, using additional information about the number of detected cases, mortality, the self-isolation coefficient and tests performed for the city of Moscow and the Novosibirsk region starting from March 23, 2020. Within the framework of the models used, the population is divided into seven (SEIR-HCD) and five (SEIR-D) groups with similar characteristics, with transition probabilities between groups depending on the specific region. An identifiability analysis of the SEIR-HCD mathematical model was carried out, which revealed the unknown parameters least sensitive to the additional measurements. The tasks of refining the parameters are reduced to minimizing the corresponding target functionals, which were solved using stochastic methods (simulated annealing, differential evolution, genetic algorithm, etc.). For different amounts of test data, a prognostic scenario for the development of the disease in the city of Moscow and the Novosibirsk region was developed; the peak of the epidemic in Moscow is predicted with an error of 2 days and 174 detected cases, and an analysis of the applicability of the developed models was carried out.
[ { "created": "Thu, 18 Jun 2020 08:53:55 GMT", "version": "v1" } ]
2020-06-24
[ [ "Krivorotko", "Olga", "", "1, 2 and 3" ], [ "Kabanikhin", "Sergey", "", "1, 2 and 3" ], [ "Zyatkov", "Nikolay", "", "1, 2 and 3" ], [ "Prikhodko", "Alexey", "", "1, 2 and 3" ], [ "Prokhoshin", "Nikita", "", "1 and 2" ], [ "Shishlenin", "Maxim", "", "1, 2 and 3" ] ]
The paper formulates and solves the problem of identifying unknown parameters of SEIR-type mathematical models of the spread of COVID-19 coronavirus infection, using additional information about the number of detected cases, mortality, the self-isolation coefficient and tests performed for the city of Moscow and the Novosibirsk region starting from March 23, 2020. Within the framework of the models used, the population is divided into seven (SEIR-HCD) and five (SEIR-D) groups with similar characteristics, with transition probabilities between groups depending on the specific region. An identifiability analysis of the SEIR-HCD mathematical model was carried out, which revealed the unknown parameters least sensitive to the additional measurements. The tasks of refining the parameters are reduced to minimizing the corresponding target functionals, which were solved using stochastic methods (simulated annealing, differential evolution, genetic algorithm, etc.). For different amounts of test data, a prognostic scenario for the development of the disease in the city of Moscow and the Novosibirsk region was developed; the peak of the epidemic in Moscow is predicted with an error of 2 days and 174 detected cases, and an analysis of the applicability of the developed models was carried out.
1804.04725
Khalique Newaz
Khalique Newaz, Mahboobeh Ghalehnovi, Arash Rahnama, Panos J. Antsaklis and Tijana Milenkovic
Network-based protein structural classification
null
null
null
null
q-bio.MN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experimental determination of protein function is resource-consuming. As an alternative, computational prediction of protein function has received attention. In this context, protein structural classification (PSC) can help, by allowing for determining structural classes of currently unclassified proteins based on their features, and then relying on the fact that proteins with similar structures have similar functions. Existing PSC approaches rely on sequence-based or direct 3-dimensional (3D) structure-based protein features. In contrast, we first model 3D structures of proteins as protein structure networks (PSNs). Then, we use network-based features for PSC. We propose the use of graphlets, state-of-the-art features in many research areas of network science, in the task of PSC. Moreover, because graphlets can deal only with unweighted PSNs, and because accounting for edge weights when constructing PSNs could improve PSC accuracy, we also propose a deep learning framework that automatically learns network features from weighted PSNs. When evaluated on a large set of ~9,400 CATH and ~12,800 SCOP protein domains (spanning 36 PSN sets), our proposed approaches are superior to existing PSC approaches in terms of accuracy, with comparable running time.
[ { "created": "Thu, 12 Apr 2018 20:55:26 GMT", "version": "v1" }, { "created": "Fri, 21 Dec 2018 01:54:52 GMT", "version": "v2" }, { "created": "Mon, 25 Feb 2019 17:56:40 GMT", "version": "v3" }, { "created": "Fri, 23 Aug 2019 17:03:07 GMT", "version": "v4" }, { "created": "Wed, 13 Nov 2019 17:06:14 GMT", "version": "v5" }, { "created": "Fri, 6 Mar 2020 23:28:32 GMT", "version": "v6" }, { "created": "Sun, 15 Mar 2020 18:48:27 GMT", "version": "v7" } ]
2020-03-17
[ [ "Newaz", "Khalique", "" ], [ "Ghalehnovi", "Mahboobeh", "" ], [ "Rahnama", "Arash", "" ], [ "Antsaklis", "Panos J.", "" ], [ "Milenkovic", "Tijana", "" ] ]
Experimental determination of protein function is resource-consuming. As an alternative, computational prediction of protein function has received attention. In this context, protein structural classification (PSC) can help, by allowing for determining structural classes of currently unclassified proteins based on their features, and then relying on the fact that proteins with similar structures have similar functions. Existing PSC approaches rely on sequence-based or direct 3-dimensional (3D) structure-based protein features. In contrast, we first model 3D structures of proteins as protein structure networks (PSNs). Then, we use network-based features for PSC. We propose the use of graphlets, state-of-the-art features in many research areas of network science, in the task of PSC. Moreover, because graphlets can deal only with unweighted PSNs, and because accounting for edge weights when constructing PSNs could improve PSC accuracy, we also propose a deep learning framework that automatically learns network features from weighted PSNs. When evaluated on a large set of ~9,400 CATH and ~12,800 SCOP protein domains (spanning 36 PSN sets), our proposed approaches are superior to existing PSC approaches in terms of accuracy, with comparable running time.
1408.1177
Sriganesh Srihari Dr
Sriganesh Srihari, Piyush B. Madhamshettiwar, Sarah Song, Chao Liu, Peter T. Simpson, Kum Kum Khanna and Mark A. Ragan
Complex-based analysis of dysregulated cellular processes in cancer
22 pages, BMC Systems Biology
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Differential expression analysis of (individual) genes is often used to study their roles in diseases. However, diseases such as cancer are a result of the combined effect of multiple genes. Gene products such as proteins seldom act in isolation, but instead constitute stable multi-protein complexes performing dedicated functions. Therefore, complexes aggregate the effect of individual genes (proteins) and can be used to gain a better understanding of cancer mechanisms. Here, we observe that complexes show considerable changes in their expression, in turn directed by the concerted action of transcription factors (TFs), across cancer conditions. We seek to gain novel insights into cancer mechanisms through a systematic analysis of complexes and their transcriptional regulation. Results: We integrated large-scale protein-interaction (PPI) and gene-expression datasets to identify complexes that exhibit significant changes in their expression across different conditions in cancer. We devised a log-linear model to relate these changes to the differential regulation of complexes by TFs. The application of our model on two case studies involving pancreatic and familial breast tumour conditions revealed: (i) complexes in core cellular processes, especially those responsible for maintaining genome stability and cell proliferation (e.g. DNA damage repair and cell cycle) show considerable changes in expression; (ii) these changes include decrease and countering increase for different sets of complexes indicative of compensatory mechanisms coming into play in tumours; and (iii) TFs work in cooperative and counteractive ways to regulate these mechanisms. Such aberrant complexes and their regulating TFs play vital roles in the initiation and progression of cancer.
[ { "created": "Wed, 6 Aug 2014 04:00:19 GMT", "version": "v1" } ]
2014-08-07
[ [ "Srihari", "Sriganesh", "" ], [ "Madhamshettiwar", "Piyush B.", "" ], [ "Song", "Sarah", "" ], [ "Liu", "Chao", "" ], [ "Simpson", "Peter T.", "" ], [ "Khanna", "Kum Kum", "" ], [ "Ragan", "Mark A.", "" ] ]
Background: Differential expression analysis of (individual) genes is often used to study their roles in diseases. However, diseases such as cancer are a result of the combined effect of multiple genes. Gene products such as proteins seldom act in isolation, but instead constitute stable multi-protein complexes performing dedicated functions. Therefore, complexes aggregate the effect of individual genes (proteins) and can be used to gain a better understanding of cancer mechanisms. Here, we observe that complexes show considerable changes in their expression, in turn directed by the concerted action of transcription factors (TFs), across cancer conditions. We seek to gain novel insights into cancer mechanisms through a systematic analysis of complexes and their transcriptional regulation. Results: We integrated large-scale protein-interaction (PPI) and gene-expression datasets to identify complexes that exhibit significant changes in their expression across different conditions in cancer. We devised a log-linear model to relate these changes to the differential regulation of complexes by TFs. The application of our model on two case studies involving pancreatic and familial breast tumour conditions revealed: (i) complexes in core cellular processes, especially those responsible for maintaining genome stability and cell proliferation (e.g. DNA damage repair and cell cycle) show considerable changes in expression; (ii) these changes include decrease and countering increase for different sets of complexes indicative of compensatory mechanisms coming into play in tumours; and (iii) TFs work in cooperative and counteractive ways to regulate these mechanisms. Such aberrant complexes and their regulating TFs play vital roles in the initiation and progression of cancer.
1712.04602
David Schwartz M
David M. Schwartz and O. Ozan Koyluoglu
On the organization of grid and place cells: Neural de-noising via subspace learning
null
null
null
null
q-bio.NC cs.IT cs.LG cs.NE math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Place cells in the hippocampus are active when an animal visits a certain location (referred to as a place field) within an environment. Grid cells in the medial entorhinal cortex (MEC) respond at multiple locations, with firing fields that form a periodic and hexagonal tiling of the environment. The joint activity of grid and place cell populations, as a function of location, forms a neural code for space. An ensemble of codes is generated by varying grid and place cell population parameters. For each code in this ensemble, codewords are generated by stimulating a network with a discrete set of locations. In this manuscript, we develop an understanding of the relationships between coding theoretic properties of these combined populations and code construction parameters. These relationships are revisited by measuring the performances of biologically realizable algorithms implemented by networks of place and grid cell populations, as well as constraint neurons, which perform de-noising operations. Objectives of this work include the investigation of coding theoretic limitations of the mammalian neural code for location and how communication between grid and place cell networks may improve the accuracy of each population's representation. Simulations demonstrate that de-noising mechanisms analyzed here can significantly improve fidelity of this neural representation of space. Further, patterns observed in connectivity of each population of simulated cells suggest that inter-hippocampal-medial-entorhinal-cortical connectivity decreases downward along the dorsoventral axis.
[ { "created": "Wed, 13 Dec 2017 03:48:27 GMT", "version": "v1" }, { "created": "Tue, 15 May 2018 21:07:55 GMT", "version": "v2" } ]
2018-05-17
[ [ "Schwartz", "David M.", "" ], [ "Koyluoglu", "O. Ozan", "" ] ]
Place cells in the hippocampus are active when an animal visits a certain location (referred to as a place field) within an environment. Grid cells in the medial entorhinal cortex (MEC) respond at multiple locations, with firing fields that form a periodic and hexagonal tiling of the environment. The joint activity of grid and place cell populations, as a function of location, forms a neural code for space. An ensemble of codes is generated by varying grid and place cell population parameters. For each code in this ensemble, codewords are generated by stimulating a network with a discrete set of locations. In this manuscript, we develop an understanding of the relationships between coding theoretic properties of these combined populations and code construction parameters. These relationships are revisited by measuring the performances of biologically realizable algorithms implemented by networks of place and grid cell populations, as well as constraint neurons, which perform de-noising operations. Objectives of this work include the investigation of coding theoretic limitations of the mammalian neural code for location and how communication between grid and place cell networks may improve the accuracy of each population's representation. Simulations demonstrate that de-noising mechanisms analyzed here can significantly improve fidelity of this neural representation of space. Further, patterns observed in connectivity of each population of simulated cells suggest that inter-hippocampal-medial-entorhinal-cortical connectivity decreases downward along the dorsoventral axis.
2210.12072
Kazem Haghnejad Azar
Kazem Haghnejad Azar
The quest for the definition of life
53 pages
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The intricacy and diversity inherent in living organisms present a formidable obstacle to the establishment of a universally accepted definition. Life manifests in a multitude of forms, exhibiting various attributes such as growth, reproduction, responsiveness to stimuli, adaptation, and homeostasis. However, each of these characteristics can also be observed to some degree within certain non-living systems, leading to a blurring of boundaries and generating conceptual complexities. In this manuscript, I demonstrate that the transformation of a non-living entity into a living organism does not adhere to a specific temporal boundary that unequivocally designates the onset of life. Through mathematical analysis, I have demonstrated that a comprehensive definition of living beings does not exist, which means that there are no clear boundaries in the chemical processes that turn non-living entities into living ones. In other words, living organisms do not possess unique characteristics that can completely set them apart from non-living entities. Therefore, no definitive definition exists that unequivocally distinguishes living things from non-living things.
[ { "created": "Mon, 10 Oct 2022 13:42:12 GMT", "version": "v1" }, { "created": "Mon, 8 Jan 2024 19:13:09 GMT", "version": "v2" } ]
2024-01-10
[ [ "Azar", "Kazem Haghnejad", "" ] ]
The intricacy and diversity inherent in living organisms present a formidable obstacle to the establishment of a universally accepted definition. Life manifests in a multitude of forms, exhibiting various attributes such as growth, reproduction, responsiveness to stimuli, adaptation, and homeostasis. However, each of these characteristics can also be observed to some degree within certain non-living systems, leading to a blurring of boundaries and generating conceptual complexities. In this manuscript, I demonstrate that the transformation of a non-living entity into a living organism does not adhere to a specific temporal boundary that unequivocally designates the onset of life. Through mathematical analysis, I have demonstrated that a comprehensive definition of living beings does not exist, which means that there are no clear boundaries in the chemical processes that turn non-living entities into living ones. In other words, living organisms do not possess unique characteristics that can completely set them apart from non-living entities. Therefore, no definitive definition exists that unequivocally distinguishes living things from non-living things.
1012.5916
Antonio Deiana
Antonio Deiana, Kana Shimizu, Andrea Giansanti
Amino acid composition and thermal stability of protein structures: the free energy geography of the Protein Data Bank
Version 2 revisits the theme of protein thermal stability with an improved form of energy function
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the combined influence of amino acid composition and chain length on the thermal stability of protein structures. A new parameterization of the internal free energy is considered, as the sum of hydrophobic effect, hydrogen-bond and de-hydration energy terms. We divided a non-redundant selection of protein structures from the Protein Data Bank into three groups: i) rich in order-promoting residues (OPR proteins); ii) rich in disorder-promoting residues (DPR proteins); iii) belonging to a twilight zone (TZ proteins). We observe a partition of the PDB into several groups with different internal free energies, amino acid compositions and protein lengths. The internal free energy of 96% of the proteins analyzed ranges from -2 to -6.5 kJ/mol/res. We found many DPR and OPR proteins with the same relative thermal stability. Only OPR proteins with internal energy between -4 and -6.5 kJ/mol/res are observed to have chains longer than 200 residues, with a high de-hydration energy compensated by the hydrophobic effect. DPR and TZ proteins are shorter than 200 residues and have an internal energy above -4 kJ/mol/res, with a few exceptions among TZ proteins. Hydrogen bonds play an important role in the stabilization of these DPR folds, often contributing more than the contact energy. The new parameterization of the internal free energy reveals a geography of thermal stabilities of PDB structures. Amino acid composition per se is not sufficient to determine the stability of protein folds, since DPR and TZ proteins generally have a relatively high internal free energy and are stabilized by hydrogen bonds. Long DPR proteins are not observed in the PDB, because their low hydrophobicity cannot compensate the high de-hydration energy necessary to accommodate residues within a highly packed globular fold.
[ { "created": "Wed, 29 Dec 2010 11:50:28 GMT", "version": "v1" }, { "created": "Fri, 4 May 2012 09:11:00 GMT", "version": "v2" } ]
2012-05-07
[ [ "Deiana", "Antonio", "" ], [ "Shimizu", "Kana", "" ], [ "Giansanti", "Andrea", "" ] ]
We study the combined influence of amino acid composition and chain length on the thermal stability of protein structures. A new parameterization of the internal free energy is considered, as the sum of hydrophobic effect, hydrogen-bond and de-hydration energy terms. We divided a non-redundant selection of protein structures from the Protein Data Bank into three groups: i) rich in order-promoting residues (OPR proteins); ii) rich in disorder-promoting residues (DPR proteins); iii) belonging to a twilight zone (TZ proteins). We observe a partition of the PDB into several groups with different internal free energies, amino acid compositions and protein lengths. The internal free energy of 96% of the proteins analyzed ranges from -2 to -6.5 kJ/mol/res. We found many DPR and OPR proteins with the same relative thermal stability. Only OPR proteins with internal energy between -4 and -6.5 kJ/mol/res are observed to have chains longer than 200 residues, with a high de-hydration energy compensated by the hydrophobic effect. DPR and TZ proteins are shorter than 200 residues and have an internal energy above -4 kJ/mol/res, with a few exceptions among TZ proteins. Hydrogen bonds play an important role in the stabilization of these DPR folds, often contributing more than the contact energy. The new parameterization of the internal free energy reveals a geography of thermal stabilities of PDB structures. Amino acid composition per se is not sufficient to determine the stability of protein folds, since DPR and TZ proteins generally have a relatively high internal free energy and are stabilized by hydrogen bonds. Long DPR proteins are not observed in the PDB, because their low hydrophobicity cannot compensate the high de-hydration energy necessary to accommodate residues within a highly packed globular fold.
0811.1271
Pablo Echenique
Pablo Echenique, Gregory A. Chass
Efficient model chemistries for peptides. II. Basis set convergence in the B3LYP method
21 pages, 5 figures, in press in Philosophical Nature
null
null
null
q-bio.QM cond-mat.soft q-bio.BM
http://creativecommons.org/licenses/by/3.0/
Small peptides are model molecules for the amino acid residues that are the constituents of proteins. In any bottom-up approach to understanding the properties of these macromolecules, which are essential to the functioning of every living being, correctly describing the conformational behaviour of small peptides constitutes an unavoidable first step. In this work, we present a study of several potential energy surfaces (PESs) of the model dipeptide HCO-L-Ala-NH2. The PESs are calculated using the B3LYP density-functional theory (DFT) method, with Dunning's basis sets cc-pVDZ, aug-cc-pVDZ, cc-pVTZ, aug-cc-pVTZ, and cc-pVQZ. These calculations, whose cost amounts to approximately 10 years of computer time, allow us to study the basis set convergence of the B3LYP method for this model peptide. We also compare the B3LYP PESs to a previous computation at the MP2/6-311++G(2df,2pd) level, in order to assess their accuracy with respect to a higher-level reference. All data sets have been analyzed according to a general framework which can be extended to other complex problems and which captures the nearness concept in the space of model chemistries (MCs).
[ { "created": "Sat, 8 Nov 2008 16:07:19 GMT", "version": "v1" }, { "created": "Mon, 17 Nov 2008 10:22:32 GMT", "version": "v2" } ]
2008-11-24
[ [ "Echenique", "Pablo", "" ], [ "Chass", "Gregory A.", "" ] ]
Small peptides are model molecules for the amino acid residues that are the constituents of proteins. In any bottom-up approach to understanding the properties of these macromolecules, which are essential to the functioning of every living being, correctly describing the conformational behaviour of small peptides constitutes an unavoidable first step. In this work, we present a study of several potential energy surfaces (PESs) of the model dipeptide HCO-L-Ala-NH2. The PESs are calculated using the B3LYP density-functional theory (DFT) method, with Dunning's basis sets cc-pVDZ, aug-cc-pVDZ, cc-pVTZ, aug-cc-pVTZ, and cc-pVQZ. These calculations, whose cost amounts to approximately 10 years of computer time, allow us to study the basis set convergence of the B3LYP method for this model peptide. We also compare the B3LYP PESs to a previous computation at the MP2/6-311++G(2df,2pd) level, in order to assess their accuracy with respect to a higher-level reference. All data sets have been analyzed according to a general framework which can be extended to other complex problems and which captures the nearness concept in the space of model chemistries (MCs).
2102.03039
Cian O'Donnell
Beatriz E.P. Mizusaki and Cian O'Donnell
Neural circuit function redundancy in brain disorders
12 pages, 2 figures
null
null
null
q-bio.NC
http://creativecommons.org/publicdomain/zero/1.0/
Redundancy is a ubiquitous property of the nervous system. This means that vastly different configurations of cellular and synaptic components can enable the same neural circuit functions. However, until recently very little brain disorder research considered the implications of this characteristic when designing experiments or interpreting data. Here, we first summarise the evidence for redundancy in healthy brains, explaining redundancy and three of its sub-concepts: sloppiness, dependencies, and multiple solutions. We then lay out key implications for brain disorder research, covering recent examples of redundancy effects in experimental studies on psychiatric disorders. Finally, we give predictions for future experiments based on these concepts.
[ { "created": "Fri, 5 Feb 2021 07:53:49 GMT", "version": "v1" } ]
2021-02-08
[ [ "Mizusaki", "Beatriz E. P.", "" ], [ "O'Donnell", "Cian", "" ] ]
Redundancy is a ubiquitous property of the nervous system. This means that vastly different configurations of cellular and synaptic components can enable the same neural circuit functions. However, until recently very little brain disorder research considered the implications of this characteristic when designing experiments or interpreting data. Here, we first summarise the evidence for redundancy in healthy brains, explaining redundancy and three of its sub-concepts: sloppiness, dependencies, and multiple solutions. We then lay out key implications for brain disorder research, covering recent examples of redundancy effects in experimental studies on psychiatric disorders. Finally, we give predictions for future experiments based on these concepts.
2005.03144
Giuseppe Torrisi
Giuseppe Torrisi, Reimer K\"uhn, and Alessia Annibale
Percolation on the gene regulatory network
null
null
null
null
q-bio.MN cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a simplified model for gene regulation, where gene expression is regulated by transcription factors (TFs), which are single proteins or protein complexes. Proteins are in turn synthesised from expressed genes, creating a feedback loop of regulation. This leads to a directed bipartite network in which a link from a gene to a TF exists if the gene codes for a protein contributing to the TF, and a link from a TF to a gene exists if the TF regulates the expression of the gene. Both genes and TFs are modelled as binary variables, which indicate, respectively, whether a gene is expressed or not, and a TF is synthesised or not. We consider the scenario where for a TF to be synthesised, all of its contributing genes must be expressed. This results in an ``AND'' gate logic for the dynamics of TFs. By adapting percolation theory to directed bipartite graphs, evolving according to the AND logic dynamics, we are able to determine the necessary conditions, in the network parameter space, under which bipartite networks can support a multiplicity of stable gene expression patterns, under noisy conditions, as required in stable cell types. In particular, the analysis reveals the possibility of a bi-stability region, where the extensive percolating cluster is or is not resilient to perturbations. This is remarkably different from the transition observed in standard percolation theory. Finally, we consider perturbations involving single node removal that mimic gene knockout experiments. Results reveal the strong dependence of the gene knockout cascade on the logic implemented in the underlying network dynamics, highlighting in particular that avalanche sizes cannot be easily related to gene-gene interaction networks.
[ { "created": "Wed, 6 May 2020 21:30:41 GMT", "version": "v1" }, { "created": "Mon, 13 Jul 2020 21:56:10 GMT", "version": "v2" } ]
2020-07-15
[ [ "Torrisi", "Giuseppe", "" ], [ "Kühn", "Reimer", "" ], [ "Annibale", "Alessia", "" ] ]
We consider a simplified model for gene regulation, where gene expression is regulated by transcription factors (TFs), which are single proteins or protein complexes. Proteins are in turn synthesised from expressed genes, creating a feedback loop of regulation. This leads to a directed bipartite network in which a link from a gene to a TF exists if the gene codes for a protein contributing to the TF, and a link from a TF to a gene exists if the TF regulates the expression of the gene. Both genes and TFs are modelled as binary variables, which indicate, respectively, whether a gene is expressed or not, and a TF is synthesised or not. We consider the scenario where for a TF to be synthesised, all of its contributing genes must be expressed. This results in an ``AND'' gate logic for the dynamics of TFs. By adapting percolation theory to directed bipartite graphs, evolving according to the AND logic dynamics, we are able to determine the necessary conditions, in the network parameter space, under which bipartite networks can support a multiplicity of stable gene expression patterns, under noisy conditions, as required in stable cell types. In particular, the analysis reveals the possibility of a bi-stability region, where the extensive percolating cluster is or is not resilient to perturbations. This is remarkably different from the transition observed in standard percolation theory. Finally, we consider perturbations involving single node removal that mimic gene knockout experiments. Results reveal the strong dependence of the gene knockout cascade on the logic implemented in the underlying network dynamics, highlighting in particular that avalanche sizes cannot be easily related to gene-gene interaction networks.
1405.3315
Elinor Velasquez
Elinor Velasquez, Jorge Soto-Andrade, Ben Bongalon
Designing anti-cancer drugs and directing anti-cancer therapy
10 pages, 2 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/3.0/
A prototype for a web application was designed and implemented as a guide to be used by clinicians when designing the best drug therapy for a specific cancer patient, given biological data derived from the patient's tumor tissue biopsy. A representation of the patient's metabolic pathways is displayed as a graph in the application, with nodes as substrates and products and edges as enzymes. The top metabolically active sub-paths in the pathway, ranked using an algorithm based on both the patient's biological data and the graph topology, are also displayed and can be individually highlighted to examine potential enzymatic sites to be disrupted by a drug.
[ { "created": "Tue, 13 May 2014 22:14:47 GMT", "version": "v1" } ]
2014-05-15
[ [ "Velasquez", "Elinor", "" ], [ "Soto-Andrade", "Jorge", "" ], [ "Bongalon", "Ben", "" ] ]
A prototype for a web application was designed and implemented as a guide to be used by clinicians when designing the best drug therapy for a specific cancer patient, given biological data derived from the patient's tumor tissue biopsy. A representation of the patient's metabolic pathways is displayed as a graph in the application, with nodes as substrates and products and edges as enzymes. The top metabolically active sub-paths in the pathway, ranked using an algorithm based on both the patient's biological data and the graph topology, are also displayed and can be individually highlighted to examine potential enzymatic sites to be disrupted by a drug.
1210.2336
Andrea Pagnani
Carla Bosia, Andrea Pagnani, Riccardo Zecchina
Modeling competing endogenous RNAs networks
28 pages, 9 pdf figures
null
null
null
q-bio.MN cond-mat.dis-nn cond-mat.stat-mech physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MicroRNAs (miRNAs) are small RNA molecules, about 22 nucleotides long, which post-transcriptionally regulate their target messenger RNAs (mRNAs). They accomplish key roles in gene regulatory networks, ranging from signaling pathways to tissue morphogenesis, and their aberrant behavior is often associated with the development of various diseases. Recently it has been shown that, in analogy with the better understood case of small RNAs in bacteria, the way miRNAs interact with their targets can be described in terms of a titration mechanism characterized by threshold effects, hypersensitivity of the system near the threshold, and prioritized cross-talk among targets. The latter characteristic has lately been identified as the competing endogenous RNA (ceRNA) effect, marking the indirect interactions among targets that compete for a common pool of miRNAs. Here we analyze the equilibrium and out-of-equilibrium properties of a general stochastic model of $M$ miRNAs interacting with $N$ mRNA targets. In particular, we are able to describe in detail the peculiar equilibrium and non-equilibrium phenomena that the system displays around the threshold: (i) maximal cross-talk and correlation between targets, (ii) robustness of the ceRNA effect with respect to the model's parameters and in particular to the catalyticity of the miRNA-mRNA interaction, and (iii) anomalous response time to external perturbations.
[ { "created": "Mon, 8 Oct 2012 16:53:17 GMT", "version": "v1" } ]
2012-10-09
[ [ "Bosia", "Carla", "" ], [ "Pagnani", "Andrea", "" ], [ "Zecchina", "Riccardo", "" ] ]
MicroRNAs (miRNAs) are small RNA molecules, about 22 nucleotides long, which post-transcriptionally regulate their target messenger RNAs (mRNAs). They accomplish key roles in gene regulatory networks, ranging from signaling pathways to tissue morphogenesis, and their aberrant behavior is often associated with the development of various diseases. Recently it has been shown that, in analogy with the better understood case of small RNAs in bacteria, the way miRNAs interact with their targets can be described in terms of a titration mechanism characterized by threshold effects, hypersensitivity of the system near the threshold, and prioritized cross-talk among targets. The latter characteristic has lately been identified as the competing endogenous RNA (ceRNA) effect, marking the indirect interactions among targets that compete for a common pool of miRNAs. Here we analyze the equilibrium and out-of-equilibrium properties of a general stochastic model of $M$ miRNAs interacting with $N$ mRNA targets. In particular, we are able to describe in detail the peculiar equilibrium and non-equilibrium phenomena that the system displays around the threshold: (i) maximal cross-talk and correlation between targets, (ii) robustness of the ceRNA effect with respect to the model's parameters and in particular to the catalyticity of the miRNA-mRNA interaction, and (iii) anomalous response time to external perturbations.
2007.14359
Luca Presotto
Luca Presotto
Robustly estimating the COVID19 epidemic curve in northern Italy using all-cause mortality
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Northern Italy was one of the areas most heavily impacted by COVID-19. It is now widely assumed that the virus was silently spreading for at least 2 weeks before the first patient was identified. During this silent phase, and in the following weeks when the hospital system was overburdened, data collection was not performed accurately enough to estimate an epidemic curve. With the aim of assessing both the dynamics of the introduction of the virus and the effectiveness of the containment measures introduced, we try to reconstruct the epidemic curve using all-cause mortality data. Methods: We collected all-cause mortality data stratified by age from the national institute of statistics, together with COVID-related death data released by other government structures. Using a SEIR model together with estimates of the exposure-to-death time distribution, we fitted the reproduction number in different phases of the spread at the regional level. Results: We estimate a reproduction number of 2.6+/-0.1 before case 1 was identified. School closures in Lombardy lowered it to 1.3. Soft lockdown measures resulted in R<0.8, and no further reductions were observed when a hard lockdown was introduced (e.g. Emilia-Romagna soft lockdown 0.67+/-0.07, hard lockdown 0.69+/-0.071). Reproduction numbers for the >75 age range during the hard lockdown are consistently higher than for the rest of the population (e.g. 0.98 vs 0.71 in Milan province), suggesting outbreaks in retirement facilities. Reproduction numbers in the Bergamo and Brescia provinces starting from March 7th are markedly lower than in other areas with the same strict lockdown measures (nearby provinces: 0.73, Brescia: 0.52, Bergamo: 0.43), supporting the hypothesis that in those provinces a large percentage of the population had already been infected by the beginning of March.
[ { "created": "Tue, 28 Jul 2020 16:50:24 GMT", "version": "v1" }, { "created": "Sun, 9 Aug 2020 21:22:32 GMT", "version": "v2" } ]
2020-08-11
[ [ "Presotto", "Luca", "" ] ]
Background: Northern Italy was one of the areas most heavily impacted by COVID-19. It is now widely assumed that the virus was silently spreading for at least 2 weeks before the first patient was identified. During this silent phase, and in the following weeks when the hospital system was overburdened, data collection was not performed accurately enough to estimate an epidemic curve. With the aim of assessing both the dynamics of the introduction of the virus and the effectiveness of the containment measures introduced, we try to reconstruct the epidemic curve using all-cause mortality data. Methods: We collected all-cause mortality data stratified by age from the national institute of statistics, together with COVID-related death data released by other government structures. Using a SEIR model together with estimates of the exposure-to-death time distribution, we fitted the reproduction number in different phases of the spread at the regional level. Results: We estimate a reproduction number of 2.6+/-0.1 before case 1 was identified. School closures in Lombardy lowered it to 1.3. Soft lockdown measures resulted in R<0.8, and no further reductions were observed when a hard lockdown was introduced (e.g. Emilia-Romagna soft lockdown 0.67+/-0.07, hard lockdown 0.69+/-0.071). Reproduction numbers for the >75 age range during the hard lockdown are consistently higher than for the rest of the population (e.g. 0.98 vs 0.71 in Milan province), suggesting outbreaks in retirement facilities. Reproduction numbers in the Bergamo and Brescia provinces starting from March 7th are markedly lower than in other areas with the same strict lockdown measures (nearby provinces: 0.73, Brescia: 0.52, Bergamo: 0.43), supporting the hypothesis that in those provinces a large percentage of the population had already been infected by the beginning of March.
2012.14309
Li-Min Wang
Li-Min Wang, Hsing-Yi Lai, Sun-Ting Tsai, Chen Siang Ng, Shan-Jyun Wu, Meng-Xue Tsai, Yi-Ching Su, Daw-Wei Wang, and Tzay-Ming Hong
General Mechanism of Evolution Shared by Proteins and Words
null
null
null
null
q-bio.PE cond-mat.soft cs.CL physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex systems, such as life and languages, are governed by principles of evolution. The analogy and comparison between biology and linguistics\cite{alphafold2, RoseTTAFold, lang_virus, cell language, faculty1, language of gene, Protein linguistics, dictionary, Grammar of pro_dom, complexity, genomics_nlp, InterPro, language modeling, Protein language modeling} provide a computational foundation for characterizing and analyzing protein sequences, human corpora, and their evolution. However, no general mathematical formula has been proposed so far to illuminate the origin of the quantitative hallmarks shared by life and language. Here we show several new statistical relationships shared by proteins and words, which inspire us to establish a general mechanism of evolution with explicit formulations that can incorporate both old and new characteristics. We find that natural selection can be quantified via an entropic formulation of the principle of least effort, which determines the sequence variation that survives in evolution. In addition, the origin of power-law behavior, and how changes in the environment stimulate the emergence of new proteins and words, can also be explained via the introduction of a function connection network. Our results demonstrate not only the correspondence between genetics and linguistics across their different hierarchies but also new fundamental physical properties for the evolution of complex adaptive systems. We anticipate that our statistical tests can function as quantitative criteria for examining whether an evolutionary theory of sequences is consistent with the regularities of real data. In the meantime, this correspondence broadens the bridge for exchanging existing knowledge, spurs new interpretations, and opens Pandora's box to release several potentially revolutionary challenges. For example, does linguistic arbitrariness conflict with the dogma that structure determines function?
[ { "created": "Mon, 28 Dec 2020 15:46:19 GMT", "version": "v1" }, { "created": "Fri, 16 Dec 2022 17:17:57 GMT", "version": "v2" } ]
2022-12-19
[ [ "Wang", "Li-Min", "" ], [ "Lai", "Hsing-Yi", "" ], [ "Tsai", "Sun-Ting", "" ], [ "Ng", "Chen Siang", "" ], [ "Wu", "Shan-Jyun", "" ], [ "Tsai", "Meng-Xue", "" ], [ "Su", "Yi-Ching", "" ], [ "Wang", "Daw-Wei", "" ], [ "Hong", "Tzay-Ming", "" ] ]
Complex systems, such as life and languages, are governed by principles of evolution. The analogy and comparison between biology and linguistics\cite{alphafold2, RoseTTAFold, lang_virus, cell language, faculty1, language of gene, Protein linguistics, dictionary, Grammar of pro_dom, complexity, genomics_nlp, InterPro, language modeling, Protein language modeling} provide a computational foundation for characterizing and analyzing protein sequences, human corpora, and their evolution. However, no general mathematical formula has been proposed so far to illuminate the origin of the quantitative hallmarks shared by life and language. Here we show several new statistical relationships shared by proteins and words, which inspire us to establish a general mechanism of evolution with explicit formulations that can incorporate both old and new characteristics. We find that natural selection can be quantified via an entropic formulation of the principle of least effort, which determines the sequence variation that survives in evolution. In addition, the origin of power-law behavior, and how changes in the environment stimulate the emergence of new proteins and words, can also be explained via the introduction of a function connection network. Our results demonstrate not only the correspondence between genetics and linguistics across their different hierarchies but also new fundamental physical properties for the evolution of complex adaptive systems. We anticipate that our statistical tests can function as quantitative criteria for examining whether an evolutionary theory of sequences is consistent with the regularities of real data. In the meantime, this correspondence broadens the bridge for exchanging existing knowledge, spurs new interpretations, and opens Pandora's box to release several potentially revolutionary challenges. For example, does linguistic arbitrariness conflict with the dogma that structure determines function?
2305.09826
Alireza Zaeemzadeh
Alireza Zaeemzadeh, Giulio Tononi
Upper bounds for integrated information
null
null
null
null
q-bio.NC math.DS math.PR
http://creativecommons.org/licenses/by/4.0/
Originally developed as a theory of consciousness, integrated information theory provides a mathematical framework to quantify the causal irreducibility of systems and subsets of units in the system. Specifically, mechanism integrated information quantifies how much of the causal powers of a subset of units in a state, also referred to as a mechanism, cannot be accounted for by its parts. If the causal powers of the mechanism can be fully explained by its parts, it is reducible and its integrated information is zero. Here, we study the upper bound of this measure and how it is achieved. We study mechanisms in isolation, groups of mechanisms, and groups of causal relations among mechanisms. We put forward new theoretical results that show mechanisms that share parts with each other cannot all achieve their maximum. We also introduce techniques to design systems that can maximize the integrated information of a subset of their mechanisms or relations. Our results can potentially be used to exploit the symmetries and constraints to reduce the computations significantly and to compare different connectivity profiles in terms of their maximal achievable integrated information.
[ { "created": "Tue, 16 May 2023 22:05:56 GMT", "version": "v1" }, { "created": "Sun, 21 Apr 2024 17:44:58 GMT", "version": "v2" } ]
2024-04-23
[ [ "Zaeemzadeh", "Alireza", "" ], [ "Tononi", "Giulio", "" ] ]
Originally developed as a theory of consciousness, integrated information theory provides a mathematical framework to quantify the causal irreducibility of systems and subsets of units in the system. Specifically, mechanism integrated information quantifies how much of the causal powers of a subset of units in a state, also referred to as a mechanism, cannot be accounted for by its parts. If the causal powers of the mechanism can be fully explained by its parts, it is reducible and its integrated information is zero. Here, we study the upper bound of this measure and how it is achieved. We study mechanisms in isolation, groups of mechanisms, and groups of causal relations among mechanisms. We put forward new theoretical results that show mechanisms that share parts with each other cannot all achieve their maximum. We also introduce techniques to design systems that can maximize the integrated information of a subset of their mechanisms or relations. Our results can potentially be used to exploit the symmetries and constraints to reduce the computations significantly and to compare different connectivity profiles in terms of their maximal achievable integrated information.
2310.01774
Alan Cronemberger Andrade
Alan Cronemberger Andrade, Di\'ogenes de Souza Bido, Ana Carolina Bottura de Barros, Walter Richard Boot, Paulo Henrique Ferreira Bertolucci
A mobile digital device proficiency performance test for cognitive clinical research
3 figures, 5 tables
null
null
null
q-bio.NC cs.HC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile device proficiency is increasingly important for everyday living, including the delivery of healthcare services. Human-device interactions represent a potential resource for cognitive neurology and aging research. Although traditional pen-and-paper evaluations serve as valuable tools within public health strategies for population-scale cognitive assessments, digital devices could amplify cognitive assessment. However, even person-centered studies often fail to incorporate measures of mobile device proficiency, and research with digital mobile technology frequently neglects these evaluations. In addition, cognitive screening, a fundamental part of brain health evaluation and a widely accepted strategy to identify individuals at high risk of cognitive impairment and dementia, still lacks standardization when digital devices are used with older adults. To address this shortfall, the DigiTAU collaborative and interdisciplinary project is creating refined methodological parameters for the investigation of digital biomarkers. With careful consideration of cognitive design elements, here we describe the open-source, performance-based Mobile Device Abilities Test (MDAT), a simple, low-cost, and reproducible test framework. This result was achieved with a cross-sectional study sample of 101 low- and middle-income subjects aged 20 to 79 years. Partial least squares structural equation modeling (PLS-SEM) was used to assess the measurement of the construct. The resulting method is reliable, with internal consistency and good content validity related to digital competences, and shows little interference from self-perceived global functional disability, health self-perception, and motor dexterity. Limitations of this method are discussed and paths to improve and establish better standards are highlighted.
[ { "created": "Tue, 3 Oct 2023 03:52:04 GMT", "version": "v1" } ]
2023-10-04
[ [ "Andrade", "Alan Cronemberger", "" ], [ "Bido", "Diógenes de Souza", "" ], [ "de Barros", "Ana Carolina Bottura", "" ], [ "Boot", "Walter Richard", "" ], [ "Bertolucci", "Paulo Henrique Ferreira", "" ] ]
Mobile device proficiency is increasingly important for everyday living, including the delivery of healthcare services. Human-device interactions represent a potential resource for cognitive neurology and aging research. Although traditional pen-and-paper evaluations serve as valuable tools within public health strategies for population-scale cognitive assessments, digital devices could amplify cognitive assessment. However, even person-centered studies often fail to incorporate measures of mobile device proficiency, and research with digital mobile technology frequently neglects these evaluations. In addition, cognitive screening, a fundamental part of brain health evaluation and a widely accepted strategy to identify individuals at high risk of cognitive impairment and dementia, still lacks standardization when digital devices are used with older adults. To address this shortfall, the DigiTAU collaborative and interdisciplinary project is creating refined methodological parameters for the investigation of digital biomarkers. With careful consideration of cognitive design elements, here we describe the open-source, performance-based Mobile Device Abilities Test (MDAT), a simple, low-cost, and reproducible test framework. This result was achieved with a cross-sectional study sample of 101 low- and middle-income subjects aged 20 to 79 years. Partial least squares structural equation modeling (PLS-SEM) was used to assess the measurement of the construct. The resulting method is reliable, with internal consistency and good content validity related to digital competences, and shows little interference from self-perceived global functional disability, health self-perception, and motor dexterity. Limitations of this method are discussed and paths to improve and establish better standards are highlighted.
1004.4986
Diana Clausznitzer
Diana Clausznitzer, Olga Oleksiuk, Linda Lovdok, Victor Sourjik, Robert G. Endres
Chemotactic response and adaptation dynamics in Escherichia coli
accepted for publication in PLoS Computational Biology; manuscript (19 pages, 5 figures) and supplementary information; added additional clarification on alternative adaptation models in supplementary information
null
10.1371/journal.pcbi.1000784
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptation of the chemotaxis sensory pathway of the bacterium Escherichia coli is integral for detecting chemicals over a wide range of background concentrations, ultimately allowing cells to swim towards sources of attractant and away from repellents. Its biochemical mechanism based on methylation and demethylation of chemoreceptors has long been known. Despite the importance of adaptation for cell memory and behavior, the dynamics of adaptation are difficult to reconcile with current models of precise adaptation. Here, we follow time courses of signaling in response to concentration step changes of attractant using in vivo fluorescence resonance energy transfer measurements. Specifically, we use a condensed representation of adaptation time courses for efficient evaluation of different adaptation models. To quantitatively explain the data, we finally develop a dynamic model for signaling and adaptation based on the attractant flow in the experiment, signaling by cooperative receptor complexes, and multiple layers of feedback regulation for adaptation. We experimentally confirm the predicted effects of changing the enzyme-expression level and bypassing the negative feedback for demethylation. Our data analysis suggests significant imprecision in adaptation for large additions. Furthermore, our model predicts highly regulated, ultrafast adaptation in response to removal of attractant, which may be useful for fast reorientation of the cell and noise reduction in adaptation.
[ { "created": "Wed, 28 Apr 2010 10:18:20 GMT", "version": "v1" }, { "created": "Mon, 3 May 2010 10:27:54 GMT", "version": "v2" } ]
2015-05-18
[ [ "Clausznitzer", "Diana", "" ], [ "Oleksiuk", "Olga", "" ], [ "Lovdok", "Linda", "" ], [ "Sourjik", "Victor", "" ], [ "Endres", "Robert G.", "" ] ]
Adaptation of the chemotaxis sensory pathway of the bacterium Escherichia coli is integral for detecting chemicals over a wide range of background concentrations, ultimately allowing cells to swim towards sources of attractant and away from repellents. Its biochemical mechanism based on methylation and demethylation of chemoreceptors has long been known. Despite the importance of adaptation for cell memory and behavior, the dynamics of adaptation are difficult to reconcile with current models of precise adaptation. Here, we follow time courses of signaling in response to concentration step changes of attractant using in vivo fluorescence resonance energy transfer measurements. Specifically, we use a condensed representation of adaptation time courses for efficient evaluation of different adaptation models. To quantitatively explain the data, we finally develop a dynamic model for signaling and adaptation based on the attractant flow in the experiment, signaling by cooperative receptor complexes, and multiple layers of feedback regulation for adaptation. We experimentally confirm the predicted effects of changing the enzyme-expression level and bypassing the negative feedback for demethylation. Our data analysis suggests significant imprecision in adaptation for large additions. Furthermore, our model predicts highly regulated, ultrafast adaptation in response to removal of attractant, which may be useful for fast reorientation of the cell and noise reduction in adaptation.
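The dynamic model described above couples cooperative receptor signaling to methylation-level feedback. Purely as an illustration of this class of models (an MWC-type two-state receptor with a single methylation feedback; the rate constants, free-energy form, and step-stimulus protocol below are assumptions, not the equations or parameters fitted in the paper), the adaptation dynamics after a step addition of attractant can be sketched as follows.

```python
# Minimal sketch of an MWC-type receptor module with methylation feedback
# (illustrative parameters; not the model fitted in the paper).
import numpy as np
from scipy.integrate import odeint

N, K_off, K_on = 6.0, 0.02, 0.5      # receptor cooperativity and ligand dissociation constants (mM)
alpha, m0 = 2.0, 1.0                  # methylation free-energy slope and offset
k_R, k_B = 0.005, 0.010               # methylation / demethylation rates (1/s)

def activity(m, c):
    """Receptor-complex activity from a two-state (MWC) free energy."""
    F = N * (alpha * (m0 - m) + np.log((1 + c / K_off) / (1 + c / K_on)))
    return 1.0 / (1.0 + np.exp(F))

def dm_dt(m, t, c_of_t):
    a = activity(m, c_of_t(t))
    return k_R * (1 - a) - k_B * a    # negative feedback: high activity drives demethylation

step = lambda t: 0.1 if t < 100 else 0.2   # step addition of attractant at t = 100 s
t = np.linspace(0, 600, 2001)
m = odeint(dm_dt, y0=1.0, t=t, args=(step,)).ravel()
a = activity(m, np.vectorize(step)(t))
print(f"pre-step activity {a[100]:.3f}, post-step minimum {a.min():.3f}, final {a[-1]:.3f}")
```

On attractant addition the activity drops and then recovers as methylation rises, which is the adaptation behavior the measurements above probe; imprecise adaptation corresponds to recovery to a level different from the pre-stimulus one.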
0807.1558
Eduardo Candelario-Jalil
Eduardo Candelario-Jalil and Bernd L. Fiebich
Cyclooxygenase inhibition in ischemic brain injury
null
Current Pharmaceutical Design 2008; 14(14):1401-1418
null
null
q-bio.TO q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuroinflammation is one of the key pathological events involved in the progression of brain damage caused by cerebral ischemia. Metabolism of arachidonic acid through cyclooxygenase (COX) enzymes is known to be actively involved in the neuroinflammatory events leading to neuronal death after ischemia. Two isoforms of COX, termed COX-1 and COX-2, have been identified. Unlike COX-1, COX-2 expression is dramatically induced by ischemia and appears to be an effector of tissue damage. This review article will focus specifically on the involvement of COX isozymes in brain ischemia. We will discuss issues related to the biochemistry and selective pharmacological inhibition of COX enzymes, and further refer to their expression in the brain under normal conditions and following excitotoxicity and ischemic cerebral injury. We will review present knowledge of the relative contribution of each COX isoform to the brain ischemic pathology, based on data from investigations utilizing selective COX-1/COX-2 inhibitors and genetic knockout mouse models. The mechanisms of neurotoxicity associated with increased COX activity after ischemia will also be examined. Finally, we will provide a critical evaluation of the therapeutic potential of COX inhibitors in cerebral ischemia and discuss new targets downstream of COX with potential neuroprotective ability.
[ { "created": "Wed, 9 Jul 2008 23:05:49 GMT", "version": "v1" } ]
2008-07-11
[ [ "Candelario-Jalil", "Eduardo", "" ], [ "Fiebich", "Bernd L.", "" ] ]
Neuroinflammation is one of the key pathological events involved in the progression of brain damage caused by cerebral ischemia. Metabolism of arachidonic acid through cyclooxygenase (COX) enzymes is known to be actively involved in the neuroinflammatory events leading to neuronal death after ischemia. Two isoforms of COX, termed COX-1 and COX-2, have been identified. Unlike COX-1, COX-2 expression is dramatically induced by ischemia and appears to be an effector of tissue damage. This review article will focus specifically on the involvement of COX isozymes in brain ischemia. We will discuss issues related to the biochemistry and selective pharmacological inhibition of COX enzymes, and further refer to their expression in the brain under normal conditions and following excitotoxicity and ischemic cerebral injury. We will review present knowledge of the relative contribution of each COX isoform to the brain ischemic pathology, based on data from investigations utilizing selective COX-1/COX-2 inhibitors and genetic knockout mouse models. The mechanisms of neurotoxicity associated with increased COX activity after ischemia will also be examined. Finally, we will provide a critical evaluation of the therapeutic potential of COX inhibitors in cerebral ischemia and discuss new targets downstream of COX with potential neuroprotective ability.
q-bio/0401017
Marek Cieplak
Marek Cieplak
Cooperativity and Contact Order in Protein Folding
18 pages 6 figures. Phys. Rev. E in press
null
10.1103/PhysRevE.69.031907
null
q-bio.BM cond-mat.soft
null
The effects of cooperativity are studied within Go-Lennard-Jones models of proteins by making the contact interactions dependent on the proximity to the native conformation. The kinetic universality classes are found to remain the same as in the absence of cooperativity. For a fixed native geometry, small changes in the effective contact map may affect the folding times in a chance way and to the extent that is comparable to the shift in the folding times due to cooperativity. The contact order controls folding scenarios: the average times necessary to bring pairs of amino acids into their near native separations depend on the sequential distances within the pairs. This dependence is largely monotonic, regardless of the cooperativity, and the dominant trend could be described by a single parameter like the average contact order. However, it is the deviations from the trend which are usually found to set the net folding times.
[ { "created": "Sun, 11 Jan 2004 15:58:07 GMT", "version": "v1" } ]
2009-11-10
[ [ "Cieplak", "Marek", "" ] ]
The effects of cooperativity are studied within Go-Lennard-Jones models of proteins by making the contact interactions dependent on the proximity to the native conformation. The kinetic universality classes are found to remain the same as in the absence of cooperativity. For a fixed native geometry, small changes in the effective contact map may affect the folding times in a chance way and to the extent that is comparable to the shift in the folding times due to cooperativity. The contact order controls folding scenarios: the average times necessary to bring pairs of amino acids into their near native separations depend on the sequential distances within the pairs. This dependence is largely monotonic, regardless of the cooperativity, and the dominant trend could be described by a single parameter like the average contact order. However, it is the deviations from the trend which are usually found to set the net folding times.
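For reference, the "average contact order" invoked above is a standard descriptor of native-state topology (general definition, not specific to this paper): for a chain of length $L$ with $N$ native contacts between residues $i$ and $j$, the relative contact order is
\[
\mathrm{RCO} \;=\; \frac{1}{L\,N}\sum_{\langle i,j\rangle \in \text{native contacts}} |i - j| ,
\]
i.e. the mean sequence separation of contacting residues normalized by chain length; larger values indicate a more nonlocal native topology.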
1709.02145
Ada Yan
Ada W. C. Yan, Sophie G. Zaloumis, Julie A. Simpson, James M. McCaw
Sequential infection experiments for quantifying innate and adaptive immunity during influenza infection
25 pages, 5 figures, 4 supplementary figures, 2 supplementary texts, 4 supplementary files, 4 supplementary tables
null
null
null
q-bio.PE q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Laboratory models are often used to understand the interaction of related pathogens via host immunity. For example, recent experiments where ferrets were exposed to two influenza strains within a short period of time have shown how the effects of cross-immunity vary with the time between exposures and the specific strains used. On the other hand, studies of the workings of different arms of the immune response, and their relative importance, typically use experiments involving a single infection. However, inferring the relative importance of different immune components from this type of data is challenging. Using simulations and mathematical modelling, here we investigate whether the sequential infection experiment design can be used not only to determine immune components contributing to cross-protection, but also to gain insight into the immune response during a single infection. We show that virological data from sequential infection experiments can be used to accurately extract the timing and extent of cross-protection. Moreover, the broad immune components responsible for such cross-protection can be determined. Such data can also be used to infer the timing and strength of some immune components in controlling a primary infection, even in the absence of serological data. By contrast, single infection data cannot be used to reliably recover this information. Hence, sequential infection data enhances our understanding of the mechanisms underlying the control and resolution of infection, and generates new insight into how previous exposure influences the time course of a subsequent infection.
[ { "created": "Thu, 7 Sep 2017 09:09:52 GMT", "version": "v1" }, { "created": "Tue, 5 Jun 2018 10:55:23 GMT", "version": "v2" } ]
2018-06-06
[ [ "Yan", "Ada W. C.", "" ], [ "Zaloumis", "Sophie G.", "" ], [ "Simpson", "Julie A.", "" ], [ "McCaw", "James M.", "" ] ]
Laboratory models are often used to understand the interaction of related pathogens via host immunity. For example, recent experiments where ferrets were exposed to two influenza strains within a short period of time have shown how the effects of cross-immunity vary with the time between exposures and the specific strains used. On the other hand, studies of the workings of different arms of the immune response, and their relative importance, typically use experiments involving a single infection. However, inferring the relative importance of different immune components from this type of data is challenging. Using simulations and mathematical modelling, here we investigate whether the sequential infection experiment design can be used not only to determine immune components contributing to cross-protection, but also to gain insight into the immune response during a single infection. We show that virological data from sequential infection experiments can be used to accurately extract the timing and extent of cross-protection. Moreover, the broad immune components responsible for such cross-protection can be determined. Such data can also be used to infer the timing and strength of some immune components in controlling a primary infection, even in the absence of serological data. By contrast, single infection data cannot be used to reliably recover this information. Hence, sequential infection data enhances our understanding of the mechanisms underlying the control and resolution of infection, and generates new insight into how previous exposure influences the time course of a subsequent infection.
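The within-host viral-dynamics models referred to above are typically built on a target-cell-limited core with additional innate and adaptive immune compartments. Purely as a sketch of that core (generic influenza-like parameter values; not the specific model, immune terms, or inference procedure used in this study), a minimal target cell-infected cell-virus system can be integrated as follows.

```python
# Minimal target-cell-limited within-host model (illustrative only;
# the study's models add innate and adaptive immune compartments).
import numpy as np
from scipy.integrate import odeint

beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0   # infection, cell death, production, clearance rates (per day)

def rhs(y, t):
    T, I, V = y                                # target cells, infected cells, free virus
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

t = np.linspace(0, 10, 1001)                   # days post infection
sol = odeint(rhs, [4e8, 0.0, 10.0], t)         # initial target-cell pool and inoculum
print(f"peak viral load ~ {sol[:, 2].max():.2e} at day {t[sol[:, 2].argmax()]:.1f}")
```

In a sequential-infection design, a second such system would be started at the exposure interval of interest, with cross-protective immune variables carried over from the first infection.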
2106.00384
Rim Adenane
Florin Avram, Rim Adenane, and David I. Ketcheson
A review of matrix SIR Arino epidemic models
null
null
null
null
q-bio.PE math.CA
http://creativecommons.org/licenses/by/4.0/
Many of the models used nowadays in mathematical epidemiology, in particular in COVID-19 research, belong to a certain sub-class of compartmental models whose classes may be divided into three "(x, y, z)" groups, which we will call respectively "susceptible/entrance, diseased, and output" (in the classic SIR case, there is only one class of each type). Roughly, the ODE dynamics of these models contain only linear terms, with the exception of products between x and y terms. It has long been noticed that the basic reproduction number R has a very simple formula (3.3) in terms of the matrices which define the model, and an explicit first integral formula (3.8) is also available. These results can be traced back at least to [ABvdD+07] and [Fen07], respectively, and may be viewed as the "basic laws of SIR-type epidemics"; however many papers continue to reprove them in particular instances (by the next-generation matrix method or by direct computations, which are unnecessary). This motivated us to redraw the attention to these basic laws and provide a self-contained reference of related formulas for (x, y, z) models. We propose to rebaptize the class to which they apply as matrix SIR epidemic models, abbreviated as SYR, to emphasize the similarity to the classic SIR case. For the case of one susceptible class, we propose to use the name SIR-PH, due to a simple probabilistic interpretation as SIR models where the exponential infection time has been replaced by a PH-type distribution. We note that to each SIR-PH model, one may associate a scalar quantity Y(t) which satisfies "classic SIR relations", see (3.8). In the case of several susceptible classes, this generalizes to (5.10); in a future paper, we will show that (3.8), (5.10) may be used to obtain approximate control policies which compare well with the optimal control of the original model.
[ { "created": "Tue, 1 Jun 2021 11:06:04 GMT", "version": "v1" } ]
2021-12-30
[ [ "Avram", "Florin", "" ], [ "Adenane", "Rim", "" ], [ "Ketcheson", "David I.", "" ] ]
Many of the models used nowadays in mathematical epidemiology, in particular in COVID-19 research, belong to a certain sub-class of compartmental models whose classes may be divided into three "(x, y, z)" groups, which we will call respectively "susceptible/entrance, diseased, and output" (in the classic SIR case, there is only one class of each type). Roughly, the ODE dynamics of these models contain only linear terms, with the exception of products between x and y terms. It has long been noticed that the basic reproduction number R has a very simple formula (3.3) in terms of the matrices which define the model, and an explicit first integral formula (3.8) is also available. These results can be traced back at least to [ABvdD+07] and [Fen07], respectively, and may be viewed as the "basic laws of SIR-type epidemics"; however many papers continue to reprove them in particular instances (by the next-generation matrix method or by direct computations, which are unnecessary). This motivated us to redraw the attention to these basic laws and provide a self-contained reference of related formulas for (x, y, z) models. We propose to rebaptize the class to which they apply as matrix SIR epidemic models, abbreviated as SYR, to emphasize the similarity to the classic SIR case. For the case of one susceptible class, we propose to use the name SIR-PH, due to a simple probabilistic interpretation as SIR models where the exponential infection time has been replaced by a PH-type distribution. We note that to each SIR-PH model, one may associate a scalar quantity Y(t) which satisfies "classic SIR relations", see (3.8). In the case of several susceptible classes, this generalizes to (5.10); in a future paper, we will show that (3.8), (5.10) may be used to obtain approximate control policies which compare well with the optimal control of the original model.
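For orientation, the two "basic laws" referred to above reduce, in the classic SIR special case (one class of each type), to a familiar reproduction number and first integral; the paper's matrix formulas (3.3) and (3.8) specialize to these, and only the classic case is written out here:
\[
S' = -\beta S I,\qquad I' = \beta S I - \gamma I,\qquad R' = \gamma I,
\]
\[
\mathcal{R}_0 = \frac{\beta}{\gamma}\,S(0),
\qquad
S(t) + I(t) - \frac{\gamma}{\beta}\,\ln S(t) \;=\; \text{const}.
\]
Differentiating the left-hand side of the last relation along the flow gives $-\beta S I + \beta S I - \gamma I + \gamma I = 0$, which is the "classic SIR relation" that the scalar quantity $Y(t)$ generalizes.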
2205.06884
Francesco Fumarola
Shun Ogawa, Francesco Fumarola, Luca Mazzucato
Multi-tasking via baseline control in recurrent neural networks
26 pages, 6 figures, accepted by PNAS
null
null
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Changes in an animal's behavioral state, such as arousal and movements, induce complex modulations of the baseline input currents to sensory areas, eliciting sensory modality-specific effects. A simple computational principle explaining the effects of baseline modulations to recurrent cortical circuits is lacking. We investigate the benefits of baseline modulations using a reservoir computing approach in recurrent neural networks with random couplings. Baseline modulations unlock a set of new network phases and phenomena, including chaos enhancement, neural hysteresis and ergodicity breaking. Strikingly, baseline modulations enable reservoir networks to perform multiple tasks, without any optimization of the network couplings. Baseline control of network dynamics opens new directions for brain-inspired artificial intelligence and sheds new light on behavioral modulations of cortical activity.
[ { "created": "Fri, 13 May 2022 20:41:26 GMT", "version": "v1" }, { "created": "Sun, 4 Jun 2023 04:26:32 GMT", "version": "v2" } ]
2023-06-06
[ [ "Ogawa", "Shun", "" ], [ "Fumarola", "Francesco", "" ], [ "Mazzucato", "Luca", "" ] ]
Changes in an animal's behavioral state, such as arousal and movements, induce complex modulations of the baseline input currents to sensory areas, eliciting sensory modality-specific effects. A simple computational principle explaining the effects of baseline modulations to recurrent cortical circuits is lacking. We investigate the benefits of baseline modulations using a reservoir computing approach in recurrent neural networks with random couplings. Baseline modulations unlock a set of new network phases and phenomena, including chaos enhancement, neural hysteresis and ergodicity breaking. Strikingly, baseline modulations enable reservoir networks to perform multiple tasks, without any optimization of the network couplings. Baseline control of network dynamics opens new directions for brain-inspired artificial intelligence and sheds new light on behavioral modulations of cortical activity.
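As a concrete illustration of the reservoir-computing setting described above (a toy sketch only; not the networks, baseline protocol, or tasks analyzed in the paper), the snippet below builds a random recurrent network whose recurrent couplings are never trained, drives it with a task input plus a constant baseline current, and fits only a linear readout by ridge regression. Changing the `baseline` argument changes the dynamical regime available to the readout, which is the knob the paper's multi-tasking argument relies on.

```python
# Toy echo-state-style reservoir with an additive baseline input.
# Recurrent couplings stay random and untrained; only the readout is fit.
import numpy as np

rng = np.random.default_rng(0)
N, T, g = 300, 2000, 1.5                       # reservoir size, time steps, coupling gain
W = g * rng.standard_normal((N, N)) / np.sqrt(N)
w_in = rng.standard_normal(N)

def run(u, baseline):
    """Leaky-tanh reservoir driven by scalar input u(t) plus a constant baseline current."""
    x = np.zeros(N)
    X = np.empty((T, N))
    for t in range(T):
        x = 0.9 * x + 0.1 * np.tanh(W @ x + w_in * u[t] + baseline)
        X[t] = x
    return X

u = np.sin(2 * np.pi * np.arange(T) / 50)       # toy input signal
target = np.roll(u, -5)                         # task: predict the input 5 steps ahead
X = run(u, baseline=0.5)

lam = 1e-3                                      # ridge-regression readout; W is untouched
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
pred = X @ w_out
print("readout MSE:", float(np.mean((pred - target) ** 2)))
```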
1506.07951
Nathan Baker
Maria L. Sushko, Dennis G. Thomas, Suzette A. Pabit, Lois Pollack, Alexey V. Onufriev, Nathan A. Baker
The role of correlation and solvation in ion interactions with B-DNA
null
null
10.1016/j.bpj.2015.11.1691
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ionic atmospheres around nucleic acids play important roles in biological function. Large-scale explicit solvent simulations coupled to experimental assays such as anomalous small-angle X-ray scattering (ASAXS) can provide important insights into the structure and energetics of such atmospheres but are time- and resource-intensive. In this paper, we use classical density functional theory (cDFT) to explore the balance between ion-DNA, ion-water, and ion-ion interactions in ionic atmospheres of RbCl, SrCl$_2$, and CoHexCl$_3$ (cobalt hexammine chloride) around a B-form DNA molecule. The accuracy of the cDFT calculations was assessed by comparison between simulated and experimental ASAXS curves, demonstrating that an accurate model should take into account ion-ion correlation and ion hydration forces, DNA topology, and the discrete distribution of charges on DNA strands. As expected, these calculations revealed significant differences between monovalent, divalent, and trivalent cation distributions around DNA. About half of the DNA-bound Rb$^+$ ions penetrate into the minor groove of the DNA and half adsorb on the DNA strands. The fraction of cations in the minor groove decreases for the larger Sr$^{2+}$ ions and becomes zero for CoHex$^{3+}$ ions, which all adsorb on the DNA strands. The distribution of CoHex$^{3+}$ ions is mainly determined by Coulomb and steric interactions, while ion-correlation forces play a central role in the monovalent Rb$^+$ distribution and a combination of ion-correlation and hydration forces affect the Sr$^{2+}$ distribution around DNA.
[ { "created": "Fri, 26 Jun 2015 03:52:25 GMT", "version": "v1" }, { "created": "Sun, 6 Dec 2015 05:15:41 GMT", "version": "v2" } ]
2016-04-20
[ [ "Sushko", "Maria L.", "" ], [ "Thomas", "Dennis G.", "" ], [ "Pabit", "Suzette A.", "" ], [ "Pollack", "Lois", "" ], [ "Onufriev", "Alexey V.", "" ], [ "Baker", "Nathan A.", "" ] ]
The ionic atmospheres around nucleic acids play important roles in biological function. Large-scale explicit solvent simulations coupled to experimental assays such as anomalous small-angle X-ray scattering (ASAXS) can provide important insights into the structure and energetics of such atmospheres but are time- and resource-intensive. In this paper, we use classical density functional theory (cDFT) to explore the balance between ion-DNA, ion-water, and ion-ion interactions in ionic atmospheres of RbCl, SrCl$_2$, and CoHexCl$_3$ (cobalt hexammine chloride) around a B-form DNA molecule. The accuracy of the cDFT calculations was assessed by comparison between simulated and experimental ASAXS curves, demonstrating that an accurate model should take into account ion-ion correlation and ion hydration forces, DNA topology, and the discrete distribution of charges on DNA strands. As expected, these calculations revealed significant differences between monovalent, divalent, and trivalent cation distributions around DNA. About half of the DNA-bound Rb$^+$ ions penetrate into the minor groove of the DNA and half adsorb on the DNA strands. The fraction of cations in the minor groove decreases for the larger Sr$^{2+}$ ions and becomes zero for CoHex$^{3+}$ ions, which all adsorb on the DNA strands. The distribution of CoHex$^{3+}$ ions is mainly determined by Coulomb and steric interactions, while ion-correlation forces play a central role in the monovalent Rb$^+$ distribution and a combination of ion-correlation and hydration forces affect the Sr$^{2+}$ distribution around DNA.
1103.4649
Andr\'e C. R. Martins
Andr\'e C. R. Martins
Change and Aging Senescence as an adaptation
19 pages, 4 figures
null
10.1371/journal.pone.0024328
PLoS ONE 6(9): e24328
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding why we age is a long-lived open problem in evolutionary biology. Aging is prejudicial to the individual and evolutionary forces should prevent it, but many species show signs of senescence as individuals age. Here, I will propose a model for aging based on assumptions that are compatible with evolutionary theory: i) competition is between individuals; ii) there is some degree of locality, so quite often competition will be between parents and their progeny; iii) optimal conditions are not stationary, and mutation helps each species to remain competitive. When conditions change, a senescent species can drive immortal competitors to extinction. This counter-intuitive result arises from the pruning caused by the death of elder individuals. When there is change and mutation, each generation is slightly better adapted to the new conditions, but some older individuals survive by random chance. Senescence can eliminate those from the genetic pool. Even though individual selection forces always win over group selection ones, it is not exactly the individual that is selected, but its lineage. While senescence damages the individuals and has an evolutionary cost, it has a benefit of its own. It allows each lineage to adapt faster to changing conditions. We age because the world changes.
[ { "created": "Wed, 23 Mar 2011 23:24:25 GMT", "version": "v1" } ]
2012-01-24
[ [ "Martins", "André C. R.", "" ] ]
Understanding why we age is a long-lived open problem in evolutionary biology. Aging is prejudicial to the individual and evolutionary forces should prevent it, but many species show signs of senescence as individuals age. Here, I will propose a model for aging based on assumptions that are compatible with evolutionary theory: i) competition is between individuals; ii) there is some degree of locality, so quite often competition will be between parents and their progeny; iii) optimal conditions are not stationary, and mutation helps each species to remain competitive. When conditions change, a senescent species can drive immortal competitors to extinction. This counter-intuitive result arises from the pruning caused by the death of elder individuals. When there is change and mutation, each generation is slightly better adapted to the new conditions, but some older individuals survive by random chance. Senescence can eliminate those from the genetic pool. Even though individual selection forces always win over group selection ones, it is not exactly the individual that is selected, but its lineage. While senescence damages the individuals and has an evolutionary cost, it has a benefit of its own. It allows each lineage to adapt faster to changing conditions. We age because the world changes.
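A toy agent-based version of the verbal model above can make the argument concrete. The sketch below uses entirely illustrative assumptions (a drifting scalar optimum, Gaussian mutation, global truncation selection toward the optimum, and a hard maximum age for the senescent strategy) and is not the paper's model; whether the senescent lineage wins here depends on the chosen parameters, so it is a scaffold for experimentation rather than a result.

```python
# Toy competition between a senescent and an immortal lineage
# in a drifting environment (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)
K, steps, max_age, drift, mut = 500, 3000, 10, 0.02, 0.05

# Each agent is a row: (trait, age, senescent_flag); flags are inherited, defining lineages.
agents = np.array([[0.0, 0, s] for s in rng.integers(0, 2, K)], dtype=float)
optimum = 0.0

for _ in range(steps):
    optimum += drift
    # Reproduction: each agent produces one mutated offspring of its own lineage.
    offspring = agents.copy()
    offspring[:, 0] += rng.normal(0, mut, len(agents))
    offspring[:, 1] = 0
    pool = np.vstack([agents, offspring])
    # Senescence: agents of the senescent lineage past max_age die.
    pool = pool[~((pool[:, 2] == 1) & (pool[:, 1] > max_age))]
    # Competition: keep the K individuals whose trait is closest to the drifting optimum.
    fitness = -np.abs(pool[:, 0] - optimum)
    pool = pool[np.argsort(-fitness)[:K]]
    pool[:, 1] += 1
    agents = pool

print(f"fraction of senescent lineage after {steps} steps: {float(np.mean(agents[:, 2])):.2f}")
```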
2004.03497
Ali Madani
Ali Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R. Eguchi, Po-Ssu Huang, Richard Socher
ProGen: Language Modeling for Protein Generation
null
null
null
null
q-bio.BM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative modeling for protein engineering is key to solving fundamental problems in synthetic biology, medicine, and material science. We pose protein engineering as an unsupervised sequence generation problem in order to leverage the exponentially growing set of proteins that lack costly, structural annotations. We train a 1.2B-parameter language model, ProGen, on ~280M protein sequences conditioned on taxonomic and keyword tags such as molecular function and cellular component. This provides ProGen with an unprecedented range of evolutionary sequence diversity and allows it to generate with fine-grained control as demonstrated by metrics based on primary sequence similarity, secondary structure accuracy, and conformational energy.
[ { "created": "Sun, 8 Mar 2020 04:27:16 GMT", "version": "v1" } ]
2020-04-08
[ [ "Madani", "Ali", "" ], [ "McCann", "Bryan", "" ], [ "Naik", "Nikhil", "" ], [ "Keskar", "Nitish Shirish", "" ], [ "Anand", "Namrata", "" ], [ "Eguchi", "Raphael R.", "" ], [ "Huang", "Po-Ssu", "" ], [ "Socher", "Richard", "" ] ]
Generative modeling for protein engineering is key to solving fundamental problems in synthetic biology, medicine, and material science. We pose protein engineering as an unsupervised sequence generation problem in order to leverage the exponentially growing set of proteins that lack costly, structural annotations. We train a 1.2B-parameter language model, ProGen, on ~280M protein sequences conditioned on taxonomic and keyword tags such as molecular function and cellular component. This provides ProGen with an unprecedented range of evolutionary sequence diversity and allows it to generate with fine-grained control as demonstrated by metrics based on primary sequence similarity, secondary structure accuracy, and conformational energy.
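The conditional-generation setup described above, protein sequences prefixed with taxonomic and keyword control tags and modeled autoregressively, can be illustrated with a deliberately tiny sketch. The example below uses hypothetical tags and sequences and a character-level bigram "model" standing in for the 1.2B-parameter Transformer; it only shows how conditioning tags are serialized into the token stream and how next-token likelihood is evaluated, and is not the ProGen architecture or data pipeline.

```python
# Tiny illustration of tag-conditioned autoregressive modeling of protein
# sequences (hypothetical tags/sequences; a smoothed bigram model stands in
# for the actual large Transformer).
from collections import defaultdict
import math

records = [
    (["<kw:hydrolase>", "<tax:bacteria>"], "MKTAYIAKQR"),
    (["<kw:kinase>", "<tax:eukaryota>"], "MSSDEEVLAL"),
]

def serialize(tags, seq):
    # Control tags are prepended so generation can later be steered by them.
    return list(tags) + list(seq) + ["<eos>"]

counts = defaultdict(lambda: defaultdict(int))
for tags, seq in records:
    toks = ["<bos>"] + serialize(tags, seq)
    for prev, nxt in zip(toks, toks[1:]):
        counts[prev][nxt] += 1

def per_token_nll(tags, seq, alpha=1.0, vocab_size=30):
    """Average negative log-likelihood of a tagged sequence under the bigram model."""
    toks = ["<bos>"] + serialize(tags, seq)
    nll = 0.0
    for prev, nxt in zip(toks, toks[1:]):
        total = sum(counts[prev].values())
        p = (counts[prev][nxt] + alpha) / (total + alpha * vocab_size)  # add-alpha smoothing
        nll -= math.log(p)
    return nll / (len(toks) - 1)

print("per-token NLL:", round(per_token_nll(["<kw:hydrolase>", "<tax:bacteria>"], "MKTAYIAKQR"), 3))
```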
2403.00093
Shruti Gadewar
Shruti P. Gadewar, Alyssa H. Zhu, Iyad Ba Gari, Sunanda Somu, Sophia I. Thomopoulos, Paul M. Thompson, Talia M. Nir, Neda Jahanshad
Synthesizing study-specific controls using generative models on open access datasets for harmonized multi-study analyses
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Neuroimaging consortia can enhance reliability and generalizability of findings by pooling data across studies to achieve larger sample sizes. To adjust for site and MRI protocol effects, imaging datasets are often harmonized based on healthy controls. When data from a control group were not collected, statistical harmonization options are limited as patient characteristics and acquisition-related variables may be confounded. Here, in a multi-study neuroimaging analysis of Alzheimer's patients and controls, we tested whether it is possible to generate synthetic control MRIs. For one case-control study, we used a generative adversarial model for style-based harmonization to generate site-specific controls. Downstream feature extraction, statistical harmonization and group-level multi-study case-control and case-only analyses were performed twice, using either true or synthetic controls. All effect sizes using synthetic controls overlapped with those based on true study controls. This line of work may facilitate wider inclusion of case-only studies in multi-study consortia.
[ { "created": "Thu, 29 Feb 2024 19:43:43 GMT", "version": "v1" } ]
2024-03-04
[ [ "Gadewar", "Shruti P.", "" ], [ "Zhu", "Alyssa H.", "" ], [ "Gari", "Iyad Ba", "" ], [ "Somu", "Sunanda", "" ], [ "Thomopoulos", "Sophia I.", "" ], [ "Thompson", "Paul M.", "" ], [ "Nir", "Talia M.", "" ], [ "Jahanshad", "Neda", "" ] ]
Neuroimaging consortia can enhance reliability and generalizability of findings by pooling data across studies to achieve larger sample sizes. To adjust for site and MRI protocol effects, imaging datasets are often harmonized based on healthy controls. When data from a control group were not collected, statistical harmonization options are limited as patient characteristics and acquisition-related variables may be confounded. Here, in a multi-study neuroimaging analysis of Alzheimer's patients and controls, we tested whether it is possible to generate synthetic control MRIs. For one case-control study, we used a generative adversarial model for style-based harmonization to generate site-specific controls. Downstream feature extraction, statistical harmonization and group-level multi-study case-control and case-only analyses were performed twice, using either true or synthetic controls. All effect sizes using synthetic controls overlapped with those based on true study controls. This line of work may facilitate wider inclusion of case-only studies in multi-study consortia.
0811.3389
Luigi Grassi
D. Fusco and L. Grassi and A. L. Sellerio and D. Corà and B. Bassetti and M. Caselle and M. Cosentino Lagomarsino
Identity and divergence of protein domain architectures after the Yeast Whole Genome Duplication event
19 pages, 5 figures, Supporting Information
null
null
null
q-bio.GN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyzing the properties of duplicate genes during evolution is useful to understand the development of new cell functions. The yeast S. cerevisiae is a useful testing ground for this problem, because its duplicated genes with different evolutionary birth and destiny are well distinguishable. In particular, there is clear evidence for the occurrence of a Whole Genome Duplication (WGD) event in S. cerevisiae, and the genes derived from this event (WGD paralogs) are known. We studied WGD and non-WGD duplicates by two parallel analyses, based on structural protein domains and on the Gene Ontology annotation scheme, respectively. The results show that while a large number of ``duplicable'' structural domains is shared in local and global duplications, WGD and non-WGD paralogs tend to have different functions. The reason for this is the existence of WGD- and non-WGD-specific domains with largely different functions. In agreement with the recent findings of Wapinski and collaborators (Nature 449, 2007), WGD paralogs often perform ``core'' cell functions, such as translation and DNA replication, while local duplications associate with ``peripheral'' functions such as response to stress. Our results also support the fact that domain architectures are a reliable tool to detect homology, as the domains of duplicates are largely invariant with the date and nature of the duplication, while their sequences and also their functions might migrate.
[ { "created": "Thu, 20 Nov 2008 18:36:45 GMT", "version": "v1" } ]
2008-11-21
[ [ "Fusco", "D.", "" ], [ "Grassi", "L.", "" ], [ "Sellerio", "A. L.", "" ], [ "Cora`", "D.", "" ], [ "Bassetti", "B.", "" ], [ "Caselle", "M.", "" ], [ "Lagomarsino", "M. Cosentino", "" ] ]
Analyzing the properties of duplicate genes during evolution is useful to understand the development of new cell functions. The yeast S. cerevisiae is a useful testing ground for this problem, because its duplicated genes with different evolutionary birth and destiny are well distinguishable. In particular, there is clear evidence for the occurrence of a Whole Genome Duplication (WGD) event in S. cerevisiae, and the genes derived from this event (WGD paralogs) are known. We studied WGD and non-WGD duplicates by two parallel analyses, based on structural protein domains and on the Gene Ontology annotation scheme, respectively. The results show that while a large number of ``duplicable'' structural domains is shared in local and global duplications, WGD and non-WGD paralogs tend to have different functions. The reason for this is the existence of WGD- and non-WGD-specific domains with largely different functions. In agreement with the recent findings of Wapinski and collaborators (Nature 449, 2007), WGD paralogs often perform ``core'' cell functions, such as translation and DNA replication, while local duplications associate with ``peripheral'' functions such as response to stress. Our results also support the fact that domain architectures are a reliable tool to detect homology, as the domains of duplicates are largely invariant with the date and nature of the duplication, while their sequences and also their functions might migrate.
1701.07775
Xerxes D. Arsiwalla
Ivan Herreros-Alonso, Xerxes D. Arsiwalla, Paul F.M.J. Verschure
A Forward Model at Purkinje Cell Synapses Facilitates Cerebellar Anticipatory Control
NIPS 2016
Advances in Neural Information Processing Systems, 29: 3828-3836, (2016)
null
null
q-bio.NC cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How does our motor system solve the problem of anticipatory control in spite of a wide spectrum of response dynamics from different musculo-skeletal systems, transport delays as well as response latencies throughout the central nervous system? To a great extent, our highly-skilled motor responses are a result of a reactive feedback system, originating in the brain-stem and spinal cord, combined with a feed-forward anticipatory system, that is adaptively fine-tuned by sensory experience and originates in the cerebellum. Based on that interaction we design the counterfactual predictive control (CFPC) architecture, an anticipatory adaptive motor control scheme in which a feed-forward module, based on the cerebellum, steers an error feedback controller with counterfactual error signals. Those are signals that trigger reactions as actual errors would, but that do not code for any current or forthcoming errors. In order to determine the optimal learning strategy, we derive a novel learning rule for the feed-forward module that involves an eligibility trace and operates at the synaptic level. In particular, our eligibility trace provides a mechanism beyond co-incidence detection in that it convolves a history of prior synaptic inputs with error signals. In the context of cerebellar physiology, this solution implies that Purkinje cell synapses should generate eligibility traces using a forward model of the system being controlled. From an engineering perspective, CFPC provides a general-purpose anticipatory control architecture equipped with a learning rule that exploits the full dynamics of the closed-loop system.
[ { "created": "Thu, 26 Jan 2017 16:59:20 GMT", "version": "v1" } ]
2017-01-27
[ [ "Herreros-Alonso", "Ivan", "" ], [ "Arsiwalla", "Xerxes D.", "" ], [ "Verschure", "Paul F. M. J.", "" ] ]
How does our motor system solve the problem of anticipatory control in spite of a wide spectrum of response dynamics from different musculo-skeletal systems, transport delays as well as response latencies throughout the central nervous system? To a great extent, our highly-skilled motor responses are a result of a reactive feedback system, originating in the brain-stem and spinal cord, combined with a feed-forward anticipatory system, that is adaptively fine-tuned by sensory experience and originates in the cerebellum. Based on that interaction we design the counterfactual predictive control (CFPC) architecture, an anticipatory adaptive motor control scheme in which a feed-forward module, based on the cerebellum, steers an error feedback controller with counterfactual error signals. Those are signals that trigger reactions as actual errors would, but that do not code for any current or forthcoming errors. In order to determine the optimal learning strategy, we derive a novel learning rule for the feed-forward module that involves an eligibility trace and operates at the synaptic level. In particular, our eligibility trace provides a mechanism beyond co-incidence detection in that it convolves a history of prior synaptic inputs with error signals. In the context of cerebellar physiology, this solution implies that Purkinje cell synapses should generate eligibility traces using a forward model of the system being controlled. From an engineering perspective, CFPC provides a general-purpose anticipatory control architecture equipped with a learning rule that exploits the full dynamics of the closed-loop system.
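One schematic way to write the learning rule described above (my notation; the paper's derivation differs in detail) is that the change of the $j$-th feed-forward weight is the product of the error signal with an eligibility trace obtained by filtering the synaptic input through a forward model of the controlled plant:
\[
\Delta w_j(t)\;\propto\; e(t)\,\bigl(h * x_j\bigr)(t)
\;=\; e(t)\int_0^{t} h(t-s)\,x_j(s)\,ds ,
\]
where $x_j$ is the $j$-th synaptic input, $e$ the (possibly counterfactual) error signal, and $h$ the impulse response of the plant's forward model; the convolution is the eligibility trace, which goes beyond simple coincidence detection by weighting the input history.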
2304.08317
Carlos Hernandez-Suarez M
Carlos Hernandez-Suarez
Back to the future: a simplified and intuitive derivation of the Lotka-Euler equation
Just a note
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Lotka-Euler equation is a mathematical expression used to study population dynamics and growth, particularly in the context of demography and ecology. The growth rate $\lambda$ is the speed at which an individual produces its offspring. Reproduction is essentially a birth process, and here it is shown that by reversing the process to a death process, in which individuals die at a rate $\lambda^{-1}$, the derivation of the Lotka-Euler equation becomes more intuitive and direct, both in discrete and continuous time.
[ { "created": "Mon, 17 Apr 2023 14:39:30 GMT", "version": "v1" }, { "created": "Thu, 20 Apr 2023 13:42:05 GMT", "version": "v2" }, { "created": "Fri, 21 Apr 2023 02:16:09 GMT", "version": "v3" }, { "created": "Tue, 9 May 2023 17:28:12 GMT", "version": "v4" }, { "created": "Sun, 28 May 2023 02:14:30 GMT", "version": "v5" } ]
2023-05-30
[ [ "Hernandez-Suarez", "Carlos", "" ] ]
The Lotka-Euler equation is a mathematical expression used to study population dynamics and growth, particularly in the context of demography and ecology. The growth rate $\lambda$ is the speed at which an individual produces its offspring. Reproduction is essentially a birth process, and here it is shown that by reversing the process to a death process, in which individuals die at a rate $\lambda^{-1}$, the derivation of the Lotka-Euler equation becomes more intuitive and direct, both in discrete and continuous time.
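For reference, the equation under discussion, in its standard discrete- and continuous-time forms (with $l$ the survivorship to age $a$ and $m$ the age-specific fecundity), is
\[
\sum_{a} \lambda^{-a}\, l_a\, m_a \;=\; 1,
\qquad\qquad
\int_0^{\infty} e^{-r a}\, l(a)\, m(a)\, da \;=\; 1,
\]
where $\lambda = e^{r}$ is the per-time-step growth factor; these are the identities that the note's reversed, death-process argument aims to derive more directly.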
q-bio/0505050
Peter Hraber
Peter T. Hraber, Bette T. Korber, Steven Wolinsky, Henry A. Erlich, Elizabeth A. Trachtenberg, Thomas B. Kepler
HLA and HIV Infection Progression: Application of the Minimum Description Length Principle to Statistical Genetics
17 pages, 1 figure
Lecture Notes in Computer Science Volume 4345, 2006, pp 1-12
10.1007/11946465_1
Santa Fe Institute Working Paper 03-04-023
q-bio.QM cs.IT math.IT
null
The minimum description length (MDL) principle states that the best model to account for some data minimizes the sum of the lengths, in bits, of the descriptions of the model and the residual error. The description length is thus a criterion for model selection. Description-length analysis of HLA alleles from the Chicago MACS cohort enables classification of alleles associated with plasma HIV RNA, an indicator of infection progression. Progression variation is most strongly associated with HLA-B. Individuals without B58s supertype alleles average viral RNA levels 3.6-fold greater than individuals with them.
[ { "created": "Thu, 26 May 2005 16:48:05 GMT", "version": "v1" } ]
2015-07-21
[ [ "Hraber", "Peter T.", "" ], [ "Korber", "Bette T.", "" ], [ "Wolinsky", "Steven", "" ], [ "Erlich", "Henry A.", "" ], [ "Trachtenberg", "Elizabeth A.", "" ], [ "Kepler", "Thomas B.", "" ] ]
The minimum description length (MDL) principle states that the best model to account for some data minimizes the sum of the lengths, in bits, of the descriptions of the model and the residual error. The description length is thus a criterion for model selection. Description-length analysis of HLA alleles from the Chicago MACS cohort enables classification of alleles associated with plasma HIV RNA, an indicator of infection progression. Progression variation is most strongly associated with HLA-B. Individuals without B58s supertype alleles average viral RNA levels 3.6-fold greater than individuals with them.
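The selection criterion stated in the first sentence can be written compactly in the standard two-part MDL form (general notation, not specific to this study): the preferred model $M$ for data $D$ is
\[
M^{*} \;=\; \arg\min_{M}\;\bigl[\,L(M) \;+\; L(D \mid M)\,\bigr],
\]
where $L(M)$ is the description length of the model in bits and $L(D \mid M)$ the length, in bits, of the data (the residual error) encoded with the model's help.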
1606.01696
Riku Hakulinen
Riku Hakulinen and Santeri Puranen
Probabilistic Model to Treat Flexibility in Molecular Contacts
30 pages, 11 figures
null
10.1080/00268976.2016.1225129
null
q-bio.QM physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating accessible conformational space is computationally expensive and thermal motions are partly neglected in computer models of molecular interactions. This produces error into the estimates of binding strength. We introduce a method for modelling interactions so that structural flexibility is inherently taken into account. It has a statistical model for 3D properties of 'nonlocal' contacts and a physics based description of 'local' interactions, based on mechanical torque. The form of the torque barrier is derived using a representation of the local electronic structure, which is presumed to improve transferability, compared to traditional force fields. The nonlocal contacts are more distant than 1-4 interactions and Target-atoms are represented by 3D probability densities. Probability mass quantifies strength of contact and is calculated as an overlap integral. Repulsion is described by negative probability density, allowing probability mass to be used as the descriptor of contact preference. As a result, we are able to transform the high-dimensional problem into a simpler evaluation of three-dimensional integrals. We outline how this scoring function gives a tool to study the enthalpy--entropy compensation and demonstrate the feasibility of our approach by evaluating numerical probability masses for chosen side chain to main chain contacts in a lysine dipeptide structure.
[ { "created": "Mon, 6 Jun 2016 11:36:31 GMT", "version": "v1" } ]
2016-12-13
[ [ "Hakulinen", "Riku", "" ], [ "Puranen", "Santeri", "" ] ]
Evaluating accessible conformational space is computationally expensive and thermal motions are partly neglected in computer models of molecular interactions. This produces error into the estimates of binding strength. We introduce a method for modelling interactions so that structural flexibility is inherently taken into account. It has a statistical model for 3D properties of 'nonlocal' contacts and a physics based description of 'local' interactions, based on mechanical torque. The form of the torque barrier is derived using a representation of the local electronic structure, which is presumed to improve transferability, compared to traditional force fields. The nonlocal contacts are more distant than 1-4 interactions and Target-atoms are represented by 3D probability densities. Probability mass quantifies strength of contact and is calculated as an overlap integral. Repulsion is described by negative probability density, allowing probability mass to be used as the descriptor of contact preference. As a result, we are able to transform the high-dimensional problem into a simpler evaluation of three-dimensional integrals. We outline how this scoring function gives a tool to study the enthalpy--entropy compensation and demonstrate the feasibility of our approach by evaluating numerical probability masses for chosen side chain to main chain contacts in a lysine dipeptide structure.
1708.00540
Gergely Marton PhD
Gergely Marton, Peter Baracskay, Barbara Cseri, Bela Plosz, Gabor Juhasz, Zoltan Fekete, Anita Pongracz
A silicon-based microelectrode array with a microdrive for monitoring brainstem regions of freely moving rats
null
J Neural Eng. 2016 Apr;13(2):026025. doi: 10.1088/1741-2560/13/2/026025. Epub 2016 Feb 29
10.1088/1741-2560/13/2/026025
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective. Exploring neural activity behind synchronization and time locking in brain circuits is one of the most important tasks in neuroscience. Our goal was to design and characterize a microelectrode array (MEA) system specifically for obtaining in vivo extracellular recordings from three deep-brain areas of freely moving rats, simultaneously. The target areas, the deep mesencephalic reticular-, pedunculopontine tegmental- and pontine reticular nuclei are related to the regulation of sleep-wake cycles. Approach. The three targeted nuclei are collinear, therefore a single-shank MEA was designed in order to contact them. The silicon-based device was equipped with 3*4 recording sites, located according to the geometry of the brain regions. Furthermore, a microdrive was developed to allow fine actuation and post-implantation relocation of the probe. The probe was attached to a rigid printed circuit board, which was fastened to the microdrive. A flexible cable was designed in order to provide not only electronic connection between the probe and the amplifier system, but sufficient freedom for the movements of the probe as well. Main results. The microdrive was stable enough to allow precise electrode targeting into the tissue via a single track. The microelectrodes on the probe were suitable for recording neural activity from the three targeted brainstem areas. Significance. The system offers a robust solution to provide long-term interface between an array of precisely defined microelectrodes and deep-brain areas of a behaving rodent. The microdrive allowed us to fine-tune the probe location and easily scan through the regions of interest.
[ { "created": "Tue, 1 Aug 2017 22:42:32 GMT", "version": "v1" } ]
2017-08-03
[ [ "Marton", "Gergely", "" ], [ "Baracskay", "Peter", "" ], [ "Cseri", "Barbara", "" ], [ "Plosz", "Bela", "" ], [ "Juhasz", "Gabor", "" ], [ "Fekete", "Zoltan", "" ], [ "Pongracz", "Anita", "" ] ]
Objective. Exploring neural activity behind synchronization and time locking in brain circuits is one of the most important tasks in neuroscience. Our goal was to design and characterize a microelectrode array (MEA) system specifically for obtaining in vivo extracellular recordings from three deep-brain areas of freely moving rats, simultaneously. The target areas, the deep mesencephalic reticular-, pedunculopontine tegmental- and pontine reticular nuclei are related to the regulation of sleep-wake cycles. Approach. The three targeted nuclei are collinear, therefore a single-shank MEA was designed in order to contact them. The silicon-based device was equipped with 3*4 recording sites, located according to the geometry of the brain regions. Furthermore, a microdrive was developed to allow fine actuation and post-implantation relocation of the probe. The probe was attached to a rigid printed circuit board, which was fastened to the microdrive. A flexible cable was designed in order to provide not only electronic connection between the probe and the amplifier system, but sufficient freedom for the movements of the probe as well. Main results. The microdrive was stable enough to allow precise electrode targeting into the tissue via a single track. The microelectrodes on the probe were suitable for recording neural activity from the three targeted brainstem areas. Significance. The system offers a robust solution to provide long-term interface between an array of precisely defined microelectrodes and deep-brain areas of a behaving rodent. The microdrive allowed us to fine-tune the probe location and easily scan through the regions of interest.
2109.05303
Andrew Leifer
Mochi Liu, Sandeep Kumar, Anuj K Sharma and Andrew M Leifer
Closed-loop targeted optogenetic stimulation of C. elegans populations
35 pages, 8 main text figures; 4 supplementary figures
PLoS Biol 20(1): e3001524, 2021
10.1371/journal.pbio.3001524
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
We present a high-throughput optogenetic illumination system capable of simultaneous closed-loop light delivery to specified targets in populations of moving Caenorhabditis elegans. The instrument addresses three technical challenges: it delivers targeted illumination to specified regions of the animal's body such as its head or tail; it automatically delivers stimuli triggered upon the animal's behavior; and it achieves high throughput by targeting many animals simultaneously. The instrument was used to optogenetically probe the animal's behavioral response to competing mechanosensory stimuli in the anterior and posterior soft touch receptor neurons. Responses to more than $10^4$ stimulus events from a range of anterior-posterior intensity combinations were measured. The animal's probability of sprinting forward in response to a mechanosensory stimulus depended on both the anterior and posterior stimulation intensity, while the probability of reversing depended primarily on the posterior stimulation intensity. We also probed the animal's response to mechanosensory stimulation during the onset of turning, a relatively rare behavioral event, by delivering stimuli automatically when the animal began to turn. Using this closed-loop approach, over $10^3$ stimulus events were delivered during turning onset at a rate of 9.2 events per worm-hour, a greater than 25-fold increase in throughput compared to previous investigations. These measurements validate with greater statistical power previous findings that turning acts to gate mechanosensory evoked reversals. Compared to previous approaches, the current system offers targeted optogenetic stimulation to specific body regions or behaviors with many-fold increases in throughput to better constrain quantitative models of sensorimotor processing.
[ { "created": "Sat, 11 Sep 2021 15:30:23 GMT", "version": "v1" } ]
2022-02-01
[ [ "Liu", "Mochi", "" ], [ "Kumar", "Sandeep", "" ], [ "Sharma", "Anuj K", "" ], [ "Leifer", "Andrew M", "" ] ]
We present a high-throughput optogenetic illumination system capable of simultaneous closed-loop light delivery to specified targets in populations of moving Caenorhabditis elegans. The instrument addresses three technical challenges: it delivers targeted illumination to specified regions of the animal's body such as its head or tail; it automatically delivers stimuli triggered upon the animal's behavior; and it achieves high throughput by targeting many animals simultaneously. The instrument was used to optogenetically probe the animal's behavioral response to competing mechanosensory stimuli in the anterior and posterior soft touch receptor neurons. Responses to more than $10^4$ stimulus events from a range of anterior-posterior intensity combinations were measured. The animal's probability of sprinting forward in response to a mechanosensory stimulus depended on both the anterior and posterior stimulation intensity, while the probability of reversing depended primarily on the posterior stimulation intensity. We also probed the animal's response to mechanosensory stimulation during the onset of turning, a relatively rare behavioral event, by delivering stimuli automatically when the animal began to turn. Using this closed-loop approach, over $10^3$ stimulus events were delivered during turning onset at a rate of 9.2 events per worm-hour, a greater than 25-fold increase in throughput compared to previous investigations. These measurements validate with greater statistical power previous findings that turning acts to gate mechanosensory evoked reversals. Compared to previous approaches, the current system offers targeted optogenetic stimulation to specific body regions or behaviors with many-fold increases in throughput to better constrain quantitative models of sensorimotor processing.
q-bio/0402005
Michael Slutsky
Michael Slutsky and Leonid A. Mirny
Kinetics of protein-DNA interaction: facilitated target location in sequence-dependent potential
null
Biophys. J. 87:4021-4035 (2004)
10.1529/biophysj.104.050765
null
q-bio.BM cond-mat.dis-nn physics.bio-ph
null
Recognition and binding of specific sites on DNA by proteins is central for many cellular functions such as transcription, replication, and recombination. In the process of recognition, a protein rapidly searches for its specific site on a long DNA molecule and then strongly binds this site. Here we aim to find a mechanism that can provide both a fast search (1-10 sec) and high stability of the specific protein-DNA complex ($K_d=10^{-15}-10^{-8}$ M). Earlier studies have suggested that rapid search involves the sliding of a protein along the DNA. Here we consider sliding as a one-dimensional (1D) diffusion in a sequence-dependent rough energy landscape. We demonstrate that, in spite of the landscape's roughness, rapid search can be achieved if 1D sliding is accompanied by 3D diffusion. We estimate the range of the specific and non-specific DNA-binding energy required for rapid search and suggest experiments that can test our mechanism. We show that optimal search requires a protein to spend half of its time sliding along the DNA and half diffusing in 3D. We also establish that, paradoxically, realistic energy functions cannot provide both rapid search and strong binding of a rigid protein. To reconcile these two fundamental requirements we propose a search-and-fold mechanism that involves the coupling of protein binding and partial protein folding. The proposed mechanism has several important biological implications for search in the presence of other proteins and nucleosomes, simultaneous search by several proteins, etc. The proposed mechanism also provides a new framework for interpretation of experimental and structural data on protein-DNA interactions.
[ { "created": "Tue, 3 Feb 2004 15:45:23 GMT", "version": "v1" }, { "created": "Thu, 23 Sep 2004 22:24:05 GMT", "version": "v2" } ]
2007-05-23
[ [ "Slutsky", "Michael", "" ], [ "Mirny", "Leonid A.", "" ] ]
Recognition and binding of specific sites on DNA by proteins is central for many cellular functions such as transcription, replication, and recombination. In the process of recognition, a protein rapidly searches for its specific site on a long DNA molecule and then strongly binds this site. Here we aim to find a mechanism that can provide both a fast search (1-10 sec) and high stability of the specific protein-DNA complex ($K_d=10^{-15}-10^{-8}$ M). Earlier studies have suggested that rapid search involves the sliding of a protein along the DNA. Here we consider sliding as a one-dimensional (1D) diffusion in a sequence-dependent rough energy landscape. We demonstrate that, in spite of the landscape's roughness, rapid search can be achieved if 1D sliding is accompanied by 3D diffusion. We estimate the range of the specific and non-specific DNA-binding energy required for rapid search and suggest experiments that can test our mechanism. We show that optimal search requires a protein to spend half of its time sliding along the DNA and half diffusing in 3D. We also establish that, paradoxically, realistic energy functions cannot provide both rapid search and strong binding of a rigid protein. To reconcile these two fundamental requirements we propose a search-and-fold mechanism that involves the coupling of protein binding and partial protein folding. The proposed mechanism has several important biological implications for search in the presence of other proteins and nucleosomes, simultaneous search by several proteins, etc. The proposed mechanism also provides a new framework for interpretation of experimental and structural data on protein-DNA interactions.
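The half-and-half optimality statement above follows from a simple decomposition of the mean search time used in this class of facilitated-diffusion models (standard notation, not necessarily the paper's symbols): for a DNA of $L$ sites searched in alternating rounds of 1D sliding, each scanning $\bar n$ distinct sites, and 3D excursions,
\[
t_s \;\approx\; \frac{L}{\bar n}\,\bigl(\tau_{1D} + \tau_{3D}\bigr),
\qquad \bar n \sim \sqrt{D_{1}\,\tau_{1D}} ,
\]
so minimizing $t_s$ over $\tau_{1D}$ gives $\tau_{1D} = \tau_{3D}$ at the optimum, i.e. equal time spent in the two search modes.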
2301.05006
Alexander Spirov
Marat A. Sabirov, Ekaterina M. Myasnikova, Alexander V. Spirov
mRNA active transport in oocyte-early embryo: 3D agent-based modeling
38 pages, 26 figures, in Russian
null
null
null
q-bio.QM q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Axes of polarity (and primary morphogenetic gradients) are established in the oocyte - early embryo through active transport and localization of maternal factors. It is the oocyte - syncytial embryo of Drosophila (D. melanogaster) that is a model object for studying the molecular machinery of such transport systems. The attention of researchers is focused on the processes of formation, maintenance, and functioning of active transport systems of maternal mRNAs and proteins that are key for early Drosophila embryogenesis. Here we develop an approach for agent-based 3D modeling of the key components of transport by molecular motors (by elements of the cytoskeleton) of the Drosophila oocyte-syncytial embryo. The models were developed using Skeledyne software developed by Odell and Foe [Odell and Foe, 2008]. We start with the results of modeling transport along oriented microtubule (MT) bundles in the oocyte. This is a model of transport systems in the Drosophila oocyte, where three maternal mRNAs (bicoid (bcd), oskar, and gurken) that are key to embryonic polarity are transported along their oriented MT bundles. Then we consider models of oriented MT networks in the volume of a cell (oocyte) generated by a single microtubule organization center (or a pair of the centers). This model reproduces the formation of bcd mRNA intrusions deep into the cytoplasm in the head half of the early syncytial embryo. Finally, we consider models for the active transport of bcd mRNA in a syncytial embryo along a randomized network of many short MT strands. In conclusion, we consider the prospects for the implementation of cytoplasmic fountain flows in the active transport model.
[ { "created": "Tue, 10 Jan 2023 18:43:05 GMT", "version": "v1" } ]
2023-01-13
[ [ "Sabirov", "Marat A.", "" ], [ "Myasnikova", "Ekaterina M.", "" ], [ "Spirov", "Alexander V.", "" ] ]
Axes of polarity (and primary morphogenetic gradients) are established in the oocyte - early embryo through active transport and localization of maternal factors. It is the oocyte - syncytial embryo of Drosophila (D. melanogaster) that is a model object for studying the molecular machinery of such transport systems. The attention of researchers is focused on the processes of formation, maintenance, and functioning of active transport systems of maternal mRNAs and proteins that are key for early Drosophila embryogenesis. Here we develop an approach for agent-based 3D modeling of the key components of transport by molecular motors (by elements of the cytoskeleton) of the Drosophila oocyte-syncytial embryo. The models were developed using Skeledyne software developed by Odell and Foe [Odell and Foe, 2008]. We start with the results of modeling transport along oriented microtubule (MT) bundles in the oocyte. This is a model of transport systems in the Drosophila oocyte, where three maternal mRNAs (bicoid (bcd), oskar, and gurken) that are key to embryonic polarity are transported along their oriented MT bundles. Then we consider models of oriented MT networks in the volume of a cell (oocyte) generated by a single microtubule organization center (or a pair of the centers). This model reproduces the formation of bcd mRNA intrusions deep into the cytoplasm in the head half of the early syncytial embryo. Finally, we consider models for the active transport of bcd mRNA in a syncytial embryo along a randomized network of many short MT strands. In conclusion, we consider the prospects for the implementation of cytoplasmic fountain flows in the active transport model.
q-bio/0604028
Sachin Talathi
Sachin S Talathi
Synaptic plasticity of Inhibitory synapse promote synchrony in inhibitory network in presence of heterogeneity and noise
5 pages, 4 figures
null
10.1007/s10827-008-0077-7
null
q-bio.NC
null
Recently, spike timing dependent plasticity was observed at inhibitory synapses in layer II of the entorhinal cortex. The rule provides an interesting zero in the region of $\Delta t=t_{post}-t_{pre}=0$ and, in addition, the dynamic range of the rule lies in the gamma frequency band. We propose a robust mechanism, based on this observed synaptic plasticity rule for inhibitory synapses, for two mutually coupled interneurons to phase lock in synchrony in the presence of intrinsic heterogeneity in firing. We study the stability of the phase-locked solution by defining a map for spike times dependent on the phase response curve for the coupled neurons. Finally, we present results on the robustness of synchronization in the presence of noise.
[ { "created": "Mon, 24 Apr 2006 03:49:08 GMT", "version": "v1" } ]
2013-04-24
[ [ "Talathi", "Sachin S", "" ] ]
Recently, spike timing dependent plasticity was observed at inhibitory synapses in layer II of the entorhinal cortex. The rule provides an interesting zero in the region of $\Delta t=t_{post}-t_{pre}=0$ and, in addition, the dynamic range of the rule lies in the gamma frequency band. We propose a robust mechanism, based on this observed synaptic plasticity rule for inhibitory synapses, for two mutually coupled interneurons to phase lock in synchrony in the presence of intrinsic heterogeneity in firing. We study the stability of the phase-locked solution by defining a map for spike times dependent on the phase response curve for the coupled neurons. Finally, we present results on the robustness of synchronization in the presence of noise.
1910.06280
Stephen Pankavich
Stephen Pankavich, Nathan Neri, Deborah Shutt
Bistable Dynamics and Hopf Bifurcation in a Refined Model of Early Stage HIV Infection
28 pages, many figures
Discrete and Continuous Dynamical Systems B, 2019
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent clinical studies have shown that HIV disease pathogenesis can depend strongly on many factors at the time of transmission, including the strength of the initial viral load and the local availability of CD4+ T-cells. In this article, a new within-host model of HIV infection that incorporates the homeostatic proliferation of T-cells is formulated and analyzed. Due to the effects of this biological process, the influence of initial conditions on the proliferation of HIV infection is further elucidated. The identifiability of parameters within the model is investigated and a local stability analysis, which displays additional complexity in comparison to previous models, is conducted. The current study extends previous theoretical and computational work on the early stages of the disease and leads to interesting nonlinear dynamics, including a parameter region featuring bistability of infectious and viral clearance equilibria and the appearance of a Hopf bifurcation within biologically relevant parameter regimes.
[ { "created": "Mon, 14 Oct 2019 17:05:13 GMT", "version": "v1" } ]
2019-10-15
[ [ "Pankavich", "Stephen", "" ], [ "Neri", "Nathan", "" ], [ "Shutt", "Deborah", "" ] ]
Recent clinical studies have shown that HIV disease pathogenesis can depend strongly on many factors at the time of transmission, including the strength of the initial viral load and the local availability of CD4+ T-cells. In this article, a new within-host model of HIV infection that incorporates the homeostatic proliferation of T-cells is formulated and analyzed. Due to the effects of this biological process, the influence of initial conditions on the proliferation of HIV infection is further elucidated. The identifiability of parameters within the model is investigated and a local stability analysis, which displays additional complexity in comparison to previous models, is conducted. The current study extends previous theoretical and computational work on the early stages of the disease and leads to interesting nonlinear dynamics, including a parameter region featuring bistability of infectious and viral clearance equilibria and the appearance of a Hopf bifurcation within biologically relevant parameter regimes.
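To make the type of model concrete, here is a minimal three-compartment within-host sketch (target cells T, infected cells I, free virus V) in which homeostatic proliferation enters as a logistic term. The functional form and every parameter value are generic illustrative choices, not the refined model or the fitted parameters of the paper.

```python
import numpy as np

# Generic within-host sketch (illustrative, not the paper's refined model):
#   dT/dt = lam + p*T*(1 - (T + I)/Tmax) - d*T - beta*T*V   (homeostatic proliferation)
#   dI/dt = beta*T*V - delta*I
#   dV/dt = N_burst*delta*I - c*V
lam, d, p, Tmax = 10.0, 0.01, 0.02, 1500.0   # supply, death, proliferation, carrying capacity
beta = 2.0e-5                                 # infection rate
delta, N_burst, c = 0.4, 1000.0, 5.0          # infected-cell death, burst size, viral clearance

def rhs(y):
    T, I, V = y
    dT = lam + p * T * (1.0 - (T + I) / Tmax) - d * T - beta * T * V
    dI = beta * T * V - delta * I
    dV = N_burst * delta * I - c * V
    return np.array([dT, dI, dV])

# Forward-Euler integration from a small viral inoculum; in models of this class the
# outcome can depend on the initial conditions, which is the abstract's point.
dt, t_end = 0.01, 400.0
y = np.array([1000.0, 0.0, 1.0e-3])
for _ in range(int(t_end / dt)):
    y = y + dt * rhs(y)

print("state after %.0f time units (T, I, V): %s" % (t_end, np.round(y, 3)))
```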
2011.08966
Peter Czuppon
Peter Czuppon, Emmanuel Schertzer, Fran\c{c}ois Blanquart, Florence D\'ebarre
The stochastic dynamics of early epidemics: probability of establishment, initial growth rate, and infection cluster size at first detection
revised version
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Emerging epidemics and local infection clusters are initially prone to stochastic effects that can substantially impact the epidemic trajectory. While numerous studies are devoted to the deterministic regime of an established epidemic, mathematical descriptions of the initial phase of epidemic growth are comparatively rarer. Here, we review existing mathematical results on the epidemic size over time, and derive new results to elucidate the early dynamics of an infection cluster started by a single infected individual. We show that the initial growth of epidemics that eventually take off is accelerated by stochasticity. These results are critical to improve early cluster detection and control. As an application, we compute the distribution of the first detection time of an infected individual in an infection cluster depending on the testing effort, and estimate that the SARS-CoV-2 variant of concern Alpha detected in September 2020 first appeared in the United Kingdom early August 2020. We also compute a minimal testing frequency to detect clusters before they exceed a given threshold size. These results improve our theoretical understanding of early epidemics and will be useful for the study and control of local infectious disease clusters.
[ { "created": "Tue, 17 Nov 2020 21:59:53 GMT", "version": "v1" }, { "created": "Mon, 7 Jun 2021 11:41:43 GMT", "version": "v2" }, { "created": "Fri, 17 Sep 2021 16:37:55 GMT", "version": "v3" } ]
2021-09-20
[ [ "Czuppon", "Peter", "" ], [ "Schertzer", "Emmanuel", "" ], [ "Blanquart", "François", "" ], [ "Débarre", "Florence", "" ] ]
Emerging epidemics and local infection clusters are initially prone to stochastic effects that can substantially impact the epidemic trajectory. While numerous studies are devoted to the deterministic regime of an established epidemic, mathematical descriptions of the initial phase of epidemic growth are comparatively rarer. Here, we review existing mathematical results on the epidemic size over time, and derive new results to elucidate the early dynamics of an infection cluster started by a single infected individual. We show that the initial growth of epidemics that eventually take off is accelerated by stochasticity. These results are critical to improve early cluster detection and control. As an application, we compute the distribution of the first detection time of an infected individual in an infection cluster depending on the testing effort, and estimate that the SARS-CoV-2 variant of concern Alpha detected in September 2020 first appeared in the United Kingdom early August 2020. We also compute a minimal testing frequency to detect clusters before they exceed a given threshold size. These results improve our theoretical understanding of early epidemics and will be useful for the study and control of local infectious disease clusters.
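A minimal stochastic sketch of the early-cluster setting discussed in this abstract: a linear birth-death process started from one infected individual, used to estimate the establishment probability (1 - 1/R0 for this simple model) and the time for established clusters to first reach a threshold size. The rates and the threshold are illustrative assumptions, not the paper's estimates.

```python
import random

# Linear birth-death sketch of an early cluster: each infected individual transmits
# at rate beta and recovers at rate gamma, so R0 = beta / gamma. Illustrative values.
beta, gamma = 0.25, 0.1          # per-capita transmission and recovery rates (per day)
threshold = 50                   # cluster size regarded as "established"

def run_cluster(rng):
    """Simulate one cluster started by a single case; return (established, time)."""
    n, t = 1, 0.0
    while 0 < n < threshold:
        total_rate = n * (beta + gamma)
        t += rng.expovariate(total_rate)          # time to the next event
        if rng.random() < beta / (beta + gamma):
            n += 1                                # transmission event
        else:
            n -= 1                                # recovery event
    return n >= threshold, t

rng = random.Random(1)
runs = [run_cluster(rng) for _ in range(20000)]
p_est = sum(est for est, _ in runs) / len(runs)
times = [t for est, t in runs if est]

print("estimated establishment probability: %.3f (1 - 1/R0 = %.3f)"
      % (p_est, 1 - gamma / beta))
print("mean time for established clusters to reach %d cases: %.1f days"
      % (threshold, sum(times) / len(times)))
```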
2307.13010
Adrien Badr\'e
Adrien Badr\'e, Li Zhang, Wellington Muchero, Justin C. Reynolds, Chongle Pan
Deep neural network improves the estimation of polygenic risk scores for breast cancer
28 pages, 7 figures, 2 Tables
A. Badr\'e, L. Zhang, W. Muchero, J.C. Reynolds, C. Pan (2021). Deep neural network improves the estimation of polygenic risk scores for breast cancer. Journal of Human Genetics, 66(4), 359-369
10.1038/s10038-020-00832-7
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Polygenic risk scores (PRS) estimate the genetic risk of an individual for a complex disease based on many genetic variants across the whole genome. In this study, we compared a series of computational models for estimation of breast cancer PRS. A deep neural network (DNN) was found to outperform alternative machine learning techniques and established statistical algorithms, including BLUP, BayesA and LDpred. In the test cohort with 50% prevalence, the Area Under the receiver operating characteristic Curve (AUC) was 67.4% for DNN, 64.2% for BLUP, 64.5% for BayesA, and 62.4% for LDpred. BLUP, BayesA, and LDpred all generated PRS that followed a normal distribution in the case population. However, the PRS generated by DNN in the case population followed a bi-modal distribution composed of two normal distributions with distinctly different means. This suggests that DNN was able to separate the case population into a high-genetic-risk case sub-population with an average PRS significantly higher than the control population and a normal-genetic-risk case sub-population with an average PRS similar to the control population. This allowed DNN to achieve 18.8% recall at 90% precision in the test cohort with 50% prevalence, which can be extrapolated to 65.4% recall at 20% precision in a general population with 12% prevalence. Interpretation of the DNN model identified salient variants that were assigned insignificant p-values by association studies, but were important for DNN prediction. These variants may be associated with the phenotype through non-linear relationships.
[ { "created": "Mon, 24 Jul 2023 13:35:36 GMT", "version": "v1" } ]
2023-07-26
[ [ "Badré", "Adrien", "" ], [ "Zhang", "Li", "" ], [ "Muchero", "Wellington", "" ], [ "Reynolds", "Justin C.", "" ], [ "Pan", "Chongle", "" ] ]
Polygenic risk scores (PRS) estimate the genetic risk of an individual for a complex disease based on many genetic variants across the whole genome. In this study, we compared a series of computational models for estimation of breast cancer PRS. A deep neural network (DNN) was found to outperform alternative machine learning techniques and established statistical algorithms, including BLUP, BayesA and LDpred. In the test cohort with 50% prevalence, the Area Under the receiver operating characteristic Curve (AUC) was 67.4% for DNN, 64.2% for BLUP, 64.5% for BayesA, and 62.4% for LDpred. BLUP, BayesA, and LDpred all generated PRS that followed a normal distribution in the case population. However, the PRS generated by DNN in the case population followed a bi-modal distribution composed of two normal distributions with distinctly different means. This suggests that DNN was able to separate the case population into a high-genetic-risk case sub-population with an average PRS significantly higher than the control population and a normal-genetic-risk case sub-population with an average PRS similar to the control population. This allowed DNN to achieve 18.8% recall at 90% precision in the test cohort with 50% prevalence, which can be extrapolated to 65.4% recall at 20% precision in a general population with 12% prevalence. Interpretation of the DNN model identified salient variants that were assigned insignificant p-values by association studies, but were important for DNN prediction. These variants may be associated with the phenotype through non-linear relationships.
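The contrast between a linear PRS and a neural network can be illustrated on synthetic data. The sketch below builds toy genotypes in which part of the signal is a purely non-additive interaction, then compares a logistic model with a small multilayer perceptron using scikit-learn; the data, architecture, and any resulting scores are illustrative, not the models, SNP sets, or cohorts of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic genotypes: 2000 individuals x 50 SNPs coded 0/1/2 (illustrative).
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(2000, 50)).astype(float)

# Liability = small additive effects + one purely non-additive interaction,
# so a linear PRS cannot capture all of the signal.
additive = X[:, :10] @ rng.normal(0.0, 0.3, size=10)
interaction = 1.5 * (X[:, 0] * X[:, 1])
liability = additive + interaction + rng.normal(0.0, 1.0, size=2000)
y = (liability > np.median(liability)).astype(int)   # 50% "prevalence" cohort

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

for name, model in [("linear model", linear), ("small neural net", dnn)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print("%s test AUC: %.3f" % (name, auc))
```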
q-bio/0608013
Jose F Nieves
Jos\'e F. Nieves and Marcelo R. Ubriaco
Linear model of tumor growth in a changing environment
Latex, 8 pages, double column
null
null
null
q-bio.PE
null
We propose a model for describing the growth of an untreated tumor, which is characterized in a simple way by a minimal number of parameters with a well-defined physical interpretation. The model is motivated by invoking the Master Equation and the Principle of Detailed Balance in the present context, and it is easily generalizable to include the effects of various types of therapies. In the simplest version that we consider here, it leads to a linear equation that describes the population growth in a dynamic environment, for which a complete solution can be given in terms of the integral of the growth rate. The essential features of the general solution for this case are illustrated with a few examples.
[ { "created": "Mon, 7 Aug 2006 14:08:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Nieves", "José F.", "" ], [ "Ubriaco", "Marcelo R.", "" ] ]
We propose a model for describing the growth of an untreated tumor, which is characterized in a simple way by a minimal number of parameters with a well-defined physical interpretation. The model is motivated by invoking the Master Equation and the Principle of Detailed Balance in the present context, and it is easily generalizable to include the effects of various types of therapies. In the simplest version that we consider here, it leads to a linear equation that describes the population growth in a dynamic environment, for which a complete solution can be given in terms of the integral of the growth rate. The essential features of the general solution for this case are illustrated with a few examples.
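The closed-form solution mentioned in this abstract, N(t) = N(0) exp(integral of r from 0 to t), can be checked numerically against direct integration of dN/dt = r(t) N for any time-varying growth rate; the oscillating, slowly decaying r(t) below is purely an illustrative choice.

```python
import numpy as np

# For the linear growth law dN/dt = r(t) * N the solution is
#   N(t) = N0 * exp( integral_0^t r(s) ds ),
# so the trajectory is fixed by the integral of the growth rate alone.

def r(t):
    # Illustrative time-varying growth rate (not from the paper).
    return 0.05 * np.exp(-0.01 * t) * (1.0 + 0.5 * np.sin(0.2 * t))

t = np.linspace(0.0, 200.0, 4001)
dt = t[1] - t[0]

# Closed form via cumulative (trapezoidal) integration of r.
integral_r = np.concatenate(([0.0], np.cumsum(0.5 * (r(t[1:]) + r(t[:-1])) * dt)))
N_closed = 100.0 * np.exp(integral_r)

# Direct Euler integration of dN/dt = r(t) N as a cross-check.
N_euler = np.empty_like(t)
N_euler[0] = 100.0
for i in range(len(t) - 1):
    N_euler[i + 1] = N_euler[i] + dt * r(t[i]) * N_euler[i]

print("closed-form N(200) = %.2f, Euler N(200) = %.2f" % (N_closed[-1], N_euler[-1]))
```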
2103.05753
Bradly Alicea
Bradly Alicea, Rishabh Chakrabarty, Stefan Dvoretskii, Akshara Gopi, Avery Lim, and Jesse Parent
Continual Developmental Neurosimulation Using Embodied Computational Agents
35 pages, 9 figures
null
null
null
q-bio.NC cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is much to learn through synthesis of Developmental Biology, Cognitive Science and Computational Modeling. Our path forward involves a design for developmentally-inspired learning agents based on Braitenberg Vehicles. Continual developmental neurosimulation allows us to consider the role of developmental trajectories in bridging the related phenomena of nervous system morphogenesis, developmental learning, and plasticity. Being closely tied to continual learning, our approach is tightly integrated with developmental embodiment, and can be implemented using a type of agent called developmental Braitenberg Vehicles (dBVs). dBVs begin their lives as a set of undefined structures that transform into agent-based systems including a body, sensors, effectors, and nervous system. This phenotype is characterized in terms of developmental timing: with distinct morphogenetic, critical, and acquisition (developmental learning) periods. We further propose that network morphogenesis can be accomplished using a genetic algorithmic approach, while developmental learning can be implemented using a number of computational methodologies. This approach provides a framework for adaptive agent behavior that might result from a developmental approach: namely by exploiting critical periods or growth and acquisition, an explicitly embodied network architecture, and a distinction between the assembly of neuronal networks and active learning on these networks. In conclusion, we will consider agent learning and development at different timescales, from very short (<100ms) intervals to long-term evolution. The development, evolution, and learning in an embodied agent-based approach is key to an integrative view of biologically-inspired intelligence.
[ { "created": "Sun, 7 Mar 2021 07:22:49 GMT", "version": "v1" }, { "created": "Sat, 28 Oct 2023 06:30:17 GMT", "version": "v2" }, { "created": "Fri, 12 Jul 2024 06:10:30 GMT", "version": "v3" } ]
2024-07-15
[ [ "Alicea", "Bradly", "" ], [ "Chakrabarty", "Rishabh", "" ], [ "Dvoretskii", "Stefan", "" ], [ "Gopi", "Akshara", "" ], [ "Lim", "Avery", "" ], [ "Parent", "Jesse", "" ] ]
There is much to learn through synthesis of Developmental Biology, Cognitive Science and Computational Modeling. Our path forward involves a design for developmentally-inspired learning agents based on Braitenberg Vehicles. Continual developmental neurosimulation allows us to consider the role of developmental trajectories in bridging the related phenomena of nervous system morphogenesis, developmental learning, and plasticity. Being closely tied to continual learning, our approach is tightly integrated with developmental embodiment, and can be implemented using a type of agent called developmental Braitenberg Vehicles (dBVs). dBVs begin their lives as a set of undefined structures that transform into agent-based systems including a body, sensors, effectors, and nervous system. This phenotype is characterized in terms of developmental timing: with distinct morphogenetic, critical, and acquisition (developmental learning) periods. We further propose that network morphogenesis can be accomplished using a genetic algorithmic approach, while developmental learning can be implemented using a number of computational methodologies. This approach provides a framework for adaptive agent behavior that might result from a developmental approach: namely by exploiting critical periods or growth and acquisition, an explicitly embodied network architecture, and a distinction between the assembly of neuronal networks and active learning on these networks. In conclusion, we will consider agent learning and development at different timescales, from very short (<100ms) intervals to long-term evolution. The development, evolution, and learning in an embodied agent-based approach is key to an integrative view of biologically-inspired intelligence.
0904.1901
Fei Liu
Xiao chuan Xue, Jinhua Zhao, Fei Liu, and Zhong-can Ou-Yang
Response delay as a strategy for survival in fluctuating environment
5 pages, 3 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Response time-delay is a ubiquitous phenomenon in biological systems. Here we use a simple stochastic population model with time-delayed switching-rate conversion to quantitatively study the biological influence of the response time-delay on the survival fitness of cells living in periodically fluctuating and stochastic environments, respectively. Our calculation and simulation show that for cells having a slow rate of transition into the fit phenotype and a fast rate of transition into the unfit phenotype, the response time-delay can always enhance their fitness during environmental change. In particular, in a periodic or stochastic environment with small variance, the optimal fitness achieved by these cells is superior to that of cells with the reverse switching rates, even if the latter exhibit a rapid response. These results suggest that the response time-delay may be utilized by cells to enhance their adaptation to the fluctuating environment.
[ { "created": "Mon, 13 Apr 2009 03:10:14 GMT", "version": "v1" } ]
2009-04-14
[ [ "Xue", "Xiao chuan", "" ], [ "Zhao", "Jinhua", "" ], [ "Liu", "Fei", "" ], [ "Ou-Yang", "Zhong-can", "" ] ]
Response time-delay is a ubiquitous phenomenon in biological systems. Here we use a simple stochastic population model with time-delayed switching-rate conversion to quantitatively study the biological influence of the response time-delay on the survival fitness of cells living in periodically fluctuating and stochastic environments, respectively. Our calculation and simulation show that for cells having a slow rate of transition into the fit phenotype and a fast rate of transition into the unfit phenotype, the response time-delay can always enhance their fitness during environmental change. In particular, in a periodic or stochastic environment with small variance, the optimal fitness achieved by these cells is superior to that of cells with the reverse switching rates, even if the latter exhibit a rapid response. These results suggest that the response time-delay may be utilized by cells to enhance their adaptation to the fluctuating environment.
2204.12623
Wayne Hayes
Siyue Wang, Giles R. S. Atkinson, Wayne B. Hayes
SANA: Cross-Species Prediction of Gene Ontology GO Annotations via Topological Network Alignment
31 pages; 9 Figures; 8 Tables
npj Syst Biol Appl 8, 25 (2022)
10.1038/s41540-022-00232-x
null
q-bio.MN q-bio.BM q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Topological network alignment aims to align two networks node-wise in order to maximize the observed common connection (edge) topology between them. The topological alignment of two Protein-Protein Interaction (PPI) networks should thus expose protein pairs with similar interaction partners allowing, for example, the prediction of common Gene Ontology (GO) terms. Unfortunately, no network alignment algorithm based on topology alone has been able to achieve this aim, though those that include sequence similarity have seen some success. We argue that this failure of topology alone is due to the sparsity and incompleteness of the PPI network data of almost all species, which provides the network topology with a small signal-to-noise ratio that is effectively swamped when sequence information is added to the mix. Here we show that the weak signal can be detected using multiple stochastic samples of "good" topological network alignments, which allows us to observe regions of the two networks that are robustly aligned across multiple samples. The resulting Network Alignment Frequency (NAF) strongly correlates with GO-based Resnik semantic similarity and enables the first successful cross-species predictions of GO terms based on topology-only network alignments. Our best predictions have an AUPR of about 0.4, which is competitive with state-of-the-art algorithms, even when there is no observable sequence similarity and no known homology relationship. While our results provide only a "proof of concept" on existing network data, we hypothesize that predicting GO terms from topology-only network alignments will become increasingly practical as the volume and quality of PPI network data increase.
[ { "created": "Tue, 26 Apr 2022 22:39:55 GMT", "version": "v1" } ]
2022-08-29
[ [ "Wang", "Siyue", "" ], [ "Atkinson", "Giles R. S.", "" ], [ "Hayes", "Wayne B.", "" ] ]
Topological network alignment aims to align two networks node-wise in order to maximize the observed common connection (edge) topology between them. The topological alignment of two Protein-Protein Interaction (PPI) networks should thus expose protein pairs with similar interaction partners allowing, for example, the prediction of common Gene Ontology (GO) terms. Unfortunately, no network alignment algorithm based on topology alone has been able to achieve this aim, though those that include sequence similarity have seen some success. We argue that this failure of topology alone is due to the sparsity and incompleteness of the PPI network data of almost all species, which provides the network topology with a small signal-to-noise ratio that is effectively swamped when sequence information is added to the mix. Here we show that the weak signal can be detected using multiple stochastic samples of "good" topological network alignments, which allows us to observe regions of the two networks that are robustly aligned across multiple samples. The resulting Network Alignment Frequency (NAF) strongly correlates with GO-based Resnik semantic similarity and enables the first successful cross-species predictions of GO terms based on topology-only network alignments. Our best predictions have an AUPR of about 0.4, which is competitive with state-of-the-art algorithms, even when there is no observable sequence similarity and no known homology relationship. While our results provide only a "proof of concept" on existing network data, we hypothesize that predicting GO terms from topology-only network alignments will become increasingly practical as the volume and quality of PPI network data increase.
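The Network Alignment Frequency idea lends itself to a very small sketch: given several stochastic alignments of one network onto another (each a node-to-node mapping), count how often each node pair is aligned. The three toy alignments below merely stand in for the "good" alignments a stochastic aligner such as SANA would sample; they are not real SANA output, and robustly aligned pairs simply score near 1.

```python
from collections import Counter

# Several sampled alignments of network G1 onto network G2, each a dict mapping
# G1 nodes to G2 nodes. These toy mappings are illustrative placeholders.
sample_alignments = [
    {"a": "x", "b": "y", "c": "z"},
    {"a": "x", "b": "z", "c": "y"},
    {"a": "x", "b": "y", "c": "z"},
]

pair_counts = Counter()
for alignment in sample_alignments:
    pair_counts.update(alignment.items())   # count each aligned (G1 node, G2 node) pair

n_samples = len(sample_alignments)
print("alignment frequency per node pair:")
for (u, v), count in pair_counts.most_common():
    print("  %s - %s : %.2f" % (u, v, count / n_samples))
```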
1904.03328
Ye Wu
Ye Wu, Yoonmi Hong, Yuanjing Feng, Dinggang Shen, Pew-Thian Yap
Mitigating Gyral Bias in Cortical Tractography via Asymmetric Fiber Orientation Distributions
null
null
null
null
q-bio.NC cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion tractography in brain connectomics often involves tracing axonal trajectories across gray-white matter boundaries in gyral blades of complex cortical convolutions. To date, gyral bias is observed in most tractography algorithms with streamlines predominantly terminating at gyral crowns instead of sulcal banks. This work demonstrates that asymmetric fiber orientation distribution functions (AFODFs), computed via a multi-tissue global estimation framework, can mitigate the effects of gyral bias, enabling fiber streamlines at gyral blades to make sharper turns into the cortical gray matter. We use ex-vivo data of an adult rhesus macaque and in-vivo data from the Human Connectome Project (HCP) to show that the fiber streamlines given by AFODFs bend more naturally into the cortex than the conventional symmetric FODFs in typical gyral blades. We demonstrate that AFODF tractography improves cortico-cortical connectivity and provides highly consistent outcomes between two different field strengths (3T and 7T).
[ { "created": "Sat, 6 Apr 2019 01:05:19 GMT", "version": "v1" }, { "created": "Thu, 11 Apr 2019 19:09:38 GMT", "version": "v2" } ]
2019-04-15
[ [ "Wu", "Ye", "" ], [ "Hong", "Yoonmi", "" ], [ "Feng", "Yuanjing", "" ], [ "Shen", "Dinggang", "" ], [ "Yap", "Pew-Thian", "" ] ]
Diffusion tractography in brain connectomics often involves tracing axonal trajectories across gray-white matter boundaries in gyral blades of complex cortical convolutions. To date, gyral bias is observed in most tractography algorithms with streamlines predominantly terminating at gyral crowns instead of sulcal banks. This work demonstrates that asymmetric fiber orientation distribution functions (AFODFs), computed via a multi-tissue global estimation framework, can mitigate the effects of gyral bias, enabling fiber streamlines at gyral blades to make sharper turns into the cortical gray matter. We use ex-vivo data of an adult rhesus macaque and in-vivo data from the Human Connectome Project (HCP) to show that the fiber streamlines given by AFODFs bend more naturally into the cortex than the conventional symmetric FODFs in typical gyral blades. We demonstrate that AFODF tractography improves cortico-cortical connectivity and provides highly consistent outcomes between two different field strengths (3T and 7T).
1801.06853
Alexander Gorban
A.N. Gorban, N. \c{C}abuko\v{g}lu
Basic Model of Purposeful Kinesis
Minor amendments: synchronization with the final Journal version
Ecological Complexity, 33, 2018, 75-83
10.1016/j.ecocom.2018.01.002
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The notions of taxis and kinesis are introduced and used to describe two types of behavior of an organism in non-uniform conditions: (i) Taxis means the guided movement to more favorable conditions; (ii) Kinesis is the non-directional change in space motion in response to the change of conditions. Migration and dispersal of animals have evolved under the control of natural selection. In a simple formalisation, the strategy of dispersal should increase Darwinian fitness. We introduce new models of purposeful kinesis with a diffusion coefficient dependent on fitness. A local and instant evaluation of Darwinian fitness, the reproduction coefficient, is used. The new models include one additional parameter, the intensity of kinesis, and may be considered as the {\em minimal models of purposeful kinesis}. The properties of the models are explored by a series of numerical experiments. It is demonstrated how kinesis could be beneficial for assimilation of patches of food or of periodic fluctuations. Kinesis based on local and instant estimations of fitness is not always beneficial: for species with the Allee effect it can delay invasion and spreading. It is proven that kinesis cannot modify the stability of positive homogeneous steady states.
[ { "created": "Sun, 21 Jan 2018 17:04:08 GMT", "version": "v1" }, { "created": "Thu, 1 Feb 2018 15:37:19 GMT", "version": "v2" } ]
2018-12-06
[ [ "Gorban", "A. N.", "" ], [ "Çabukoǧlu", "N.", "" ] ]
The notions of taxis and kinesis are introduced and used to describe two types of behavior of an organism in non-uniform conditions: (i) Taxis means the guided movement to more favorable conditions; (ii) Kinesis is the non-directional change in space motion in response to the change of conditions. Migration and dispersal of animals have evolved under the control of natural selection. In a simple formalisation, the strategy of dispersal should increase Darwinian fitness. We introduce new models of purposeful kinesis with a diffusion coefficient dependent on fitness. A local and instant evaluation of Darwinian fitness, the reproduction coefficient, is used. The new models include one additional parameter, the intensity of kinesis, and may be considered as the {\em minimal models of purposeful kinesis}. The properties of the models are explored by a series of numerical experiments. It is demonstrated how kinesis could be beneficial for assimilation of patches of food or of periodic fluctuations. Kinesis based on local and instant estimations of fitness is not always beneficial: for species with the Allee effect it can delay invasion and spreading. It is proven that kinesis cannot modify the stability of positive homogeneous steady states.
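A one-dimensional finite-difference sketch of kinesis with a fitness-dependent diffusion coefficient, D = D0 * exp(-alpha * r), so that movement slows down where the local reproduction coefficient r is high. The grid, the logistic reproduction coefficient, and all constants are illustrative choices for this class of model, not the paper's equations or parameter values.

```python
import numpy as np

# 1D kinesis sketch:  du/dt = d/dx( D(r) du/dx ) + r(x, u) * u,  D(r) = D0 * exp(-alpha * r)

nx, L = 200, 100.0
dx = L / nx
x = np.linspace(0.0, L, nx)
D0, alpha = 1.0, 1.0
dt, n_steps = 0.01, 20000

def reproduction(u):
    # Illustrative reproduction coefficient: a resource patch in the middle, logistic crowding.
    resource = np.exp(-((x - L / 2) ** 2) / (2 * 8.0 ** 2))
    return 0.5 * resource - 0.05 * u

u = np.full(nx, 0.1)                     # initial uniform population density
for _ in range(n_steps):
    r = reproduction(u)
    D = D0 * np.exp(-alpha * r)          # animals move less where fitness is high
    # flux-form diffusion with zero-flux boundaries
    D_half = 0.5 * (D[1:] + D[:-1])      # diffusivity at cell interfaces
    flux = -D_half * (u[1:] - u[:-1]) / dx
    div = np.zeros(nx)
    div[1:-1] = (flux[1:] - flux[:-1]) / dx
    div[0] = flux[0] / dx
    div[-1] = -flux[-1] / dx
    u = u + dt * (-div + r * u)

print("u at the patch centre: %.2f, u at the domain edge: %.2f" % (u[nx // 2], u[0]))
```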
2209.12537
Farzin Sohraby
Farzin Sohraby and Ariane Nunes-Alves
Recent advances in computational methods for studying ligand binding kinetics
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Binding kinetic parameters can be correlated with drug efficacy, which led to the development of various computational methods for predicting binding kinetic rates and gaining insight into protein-drug binding paths and mechanisms in recent years. In this review, we introduce and compare computational methods recently developed and applied to two systems, trypsin-benzamidine and kinase-inhibitor complexes. Methods involving enhanced sampling in molecular dynamics simulations or machine learning can be used not only to predict kinetic rates, but also to reveal factors modulating the duration of residence times, selectivity and drug resistance to mutations. Methods which require less computational time to make predictions are highlighted, and suggestions to reduce the error of computed kinetic rates are presented.
[ { "created": "Mon, 26 Sep 2022 09:35:56 GMT", "version": "v1" } ]
2022-09-27
[ [ "Sohraby", "Farzin", "" ], [ "Nunes-Alves", "Ariane", "" ] ]
Binding kinetic parameters can be correlated with drug efficacy, which led to the development of various computational methods for predicting binding kinetic rates and gaining insight into protein-drug binding paths and mechanisms in recent years. In this review, we introduce and compare computational methods recently developed and applied to two systems, trypsin-benzamidine and kinase-inhibitor complexes. Methods involving enhanced sampling in molecular dynamics simulations or machine learning can be used not only to predict kinetic rates, but also to reveal factors modulating the duration of residence times, selectivity and drug resistance to mutations. Methods which require less computational time to make predictions are highlighted, and suggestions to reduce the error of computed kinetic rates are presented.
2107.12785
Indrajit Ghosh
Saheb Pal and Indrajit Ghosh
A mechanistic model for airborne and direct human-to-human transmission of COVID-19: Effect of mitigation strategies and immigration of infectious persons
null
null
10.1140/epjs/s11734-022-00433-9
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic is the most significant global crisis since World War II, affecting almost all the countries of our planet. To control the COVID-19 pandemic outbreak, it is necessary to understand how the virus is transmitted to a susceptible individual and eventually spreads in the community. The primary transmission pathway of COVID-19 is human-to-human transmission through infectious droplets. However, a recent study by Greenhalgh et al. (Lancet: 397:1603-1605, 2021) demonstrates 10 scientific reasons behind the airborne transmission of SARS-COV-2. In the present study, we introduce a novel mathematical model of COVID-19 that considers the transmission of free viruses in the air besides transmission through direct contact with an infected person. The basic reproduction number of the epidemic model is calculated using the next-generation operator method and is observed to depend on both the direct-contact and free-virus transmission rates. The local and global stability of the disease-free equilibrium (DFE) is well established. Using center manifold theory, it is found analytically that there is a forward bifurcation between the DFE and an endemic equilibrium. Next, we used the nonlinear least-squares technique to identify the best-fitted parameter values in the model from the observed COVID-19 mortality data of two major districts of India. Using estimated parameters for Bangalore urban and Chennai, different control scenarios for mitigation of the disease are investigated. Results indicate that when vaccine supply is limited, public health authorities should prioritize vaccinating susceptible people over recovered persons who are now healthy. Along with face mask use, treatment of hospitalized patients and vaccination of susceptibles, immigration should be allowed in a supervised manner so that the economy of the overall society remains healthy.
[ { "created": "Tue, 27 Jul 2021 12:57:36 GMT", "version": "v1" } ]
2022-02-02
[ [ "Pal", "Saheb", "" ], [ "Ghosh", "Indrajit", "" ] ]
The COVID-19 pandemic is the most significant global crisis since World War II, affecting almost all the countries of our planet. To control the COVID-19 pandemic outbreak, it is necessary to understand how the virus is transmitted to a susceptible individual and eventually spreads in the community. The primary transmission pathway of COVID-19 is human-to-human transmission through infectious droplets. However, a recent study by Greenhalgh et al. (Lancet: 397:1603-1605, 2021) demonstrates 10 scientific reasons behind the airborne transmission of SARS-COV-2. In the present study, we introduce a novel mathematical model of COVID-19 that considers the transmission of free viruses in the air besides transmission through direct contact with an infected person. The basic reproduction number of the epidemic model is calculated using the next-generation operator method and is observed to depend on both the direct-contact and free-virus transmission rates. The local and global stability of the disease-free equilibrium (DFE) is well established. Using center manifold theory, it is found analytically that there is a forward bifurcation between the DFE and an endemic equilibrium. Next, we used the nonlinear least-squares technique to identify the best-fitted parameter values in the model from the observed COVID-19 mortality data of two major districts of India. Using estimated parameters for Bangalore urban and Chennai, different control scenarios for mitigation of the disease are investigated. Results indicate that when vaccine supply is limited, public health authorities should prioritize vaccinating susceptible people over recovered persons who are now healthy. Along with face mask use, treatment of hospitalized patients and vaccination of susceptibles, immigration should be allowed in a supervised manner so that the economy of the overall society remains healthy.
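The next-generation calculation for a model with both direct and airborne transmission can be sketched on a stripped-down system with infected people I and free virus V; the compartments, rates, and the resulting closed form below are illustrative stand-ins, not the paper's model or its fitted parameters.

```python
import numpy as np

# Minimal SIR + free-virus sketch: infected people I shed virus V into the air,
# and susceptibles are infected by direct contact (beta1) or by free virus (beta2).
N = 1.0e6        # population size (illustrative)
beta1 = 0.3      # direct human-to-human transmission rate (per day)
beta2 = 5.0e-9   # transmission rate per unit of airborne virus (per day)
gamma = 0.1      # recovery rate (per day)
xi = 100.0       # viral shedding rate per infectious person (per day)
c = 2.0          # airborne virus clearance rate (per day)

# Next-generation method at the disease-free equilibrium S0 = N, with infected
# compartments (I, V): F holds new infections, Vmat the transitions out of them.
F = np.array([[beta1, beta2 * N],
              [0.0,   0.0]])
Vmat = np.array([[gamma, 0.0],
                 [-xi,   c]])
K = F @ np.linalg.inv(Vmat)                 # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))         # spectral radius

print("R0 from next-generation matrix: %.3f" % R0)
print("closed form beta1/gamma + beta2*N*xi/(gamma*c): %.3f"
      % (beta1 / gamma + beta2 * N * xi / (gamma * c)))
```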
1806.02230
Xim Cerd\'a-Company
Xim Cerda-Company and Xavier Otazu
Color induction in equiluminant flashed stimuli
null
null
10.1364/JOSAA.36.000022
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Color induction is the influence of the surrounding color (inducer) on the perceived color of a central region. There are two different types of color induction: color contrast (the color of the central region shifts away from that of the inducer) and color assimilation (the color shifts towards the color of the inducer). Several studies on these effects used uniform and striped surrounds, reporting color contrast and color assimilation, respectively. Other authors (Kaneko and Murakami, J Vision, 2012) studied color induction using flashed uniform surrounds, reporting that the contrast was higher for shorter flash duration. Extending their work, we present new psychophysical results using both flashed and static (i.e., non-flashed) equiluminant stimuli for both striped and uniform surround. Similarly to them, for uniform surround stimuli we observed color contrast, but we did not obtain the maximum contrast for the shortest (10 ms) flashed stimuli, but for 40 ms. We only observed this maximum contrast for red, green and lime inducers, while for a purple inducer we obtained an asymptotic profile along flash duration. For striped stimuli, we observed color assimilation only for the static (infinite flash duration) red-green surround inducers (red 1st inducer, green 2nd inducer). For the other inducers' configurations, we observed color contrast or no induction. Since other works showed that non-equiluminant striped static stimuli induce color assimilation, our results also suggest that luminance differences could be a key factor to induce it.
[ { "created": "Mon, 4 Jun 2018 08:22:54 GMT", "version": "v1" } ]
2018-12-26
[ [ "Cerda-Company", "Xim", "" ], [ "Otazu", "Xavier", "" ] ]
Color induction is the influence of the surrounding color (inducer) on the perceived color of a central region. There are two different types of color induction: color contrast (the color of the central region shifts away from that of the inducer) and color assimilation (the color shifts towards the color of the inducer). Several studies on these effects used uniform and striped surrounds, reporting color contrast and color assimilation, respectively. Other authors (Kaneko and Murakami, J Vision, 2012) studied color induction using flashed uniform surrounds, reporting that the contrast was higher for shorter flash duration. Extending their work, we present new psychophysical results using both flashed and static (i.e., non-flashed) equiluminant stimuli for both striped and uniform surround. Similarly to them, for uniform surround stimuli we observed color contrast, but we did not obtain the maximum contrast for the shortest (10 ms) flashed stimuli, but for 40 ms. We only observed this maximum contrast for red, green and lime inducers, while for a purple inducer we obtained an asymptotic profile along flash duration. For striped stimuli, we observed color assimilation only for the static (infinite flash duration) red-green surround inducers (red 1st inducer, green 2nd inducer). For the other inducers' configurations, we observed color contrast or no induction. Since other works showed that non-equiluminant striped static stimuli induce color assimilation, our results also suggest that luminance differences could be a key factor to induce it.
1209.3886
Bruno. Cessac
Hassan Nasser, Olivier Marre, Bruno Cessac
Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method
41 pages, 10 figures
J. Stat. Mech. (2013) P03006
10.1088/1742-5468/2013/03/P03006
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In a first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have been focusing on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In a second part, we present a new method based on Monte-Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles.
[ { "created": "Tue, 18 Sep 2012 09:17:28 GMT", "version": "v1" } ]
2014-04-15
[ [ "Nasser", "Hassan", "" ], [ "Marre", "Olivier", "" ], [ "Cessac", "Bruno", "" ] ]
Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In a first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have been focusing on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In a second part, we present a new method based on Monte-Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles.
1204.3863
Philipp Altrock
Philipp M. Altrock, Arne Traulsen, Tobias Galla
The mechanics of stochastic slowdown in evolutionary games
Accepted for publication in the Journal of Theoretical Biology. Includes changes after peer review
null
null
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the stochastic dynamics of evolutionary games, and focus on the so-called `stochastic slowdown' effect, previously observed in (Altrock et. al, 2010) for simple evolutionary dynamics. Slowdown here refers to the fact that a beneficial mutation may take longer to fixate than a neutral one. More precisely, the fixation time conditioned on the mutant taking over can show a maximum at intermediate selection strength. We show that this phenomenon is present in the prisoner's dilemma, and also discuss counterintuitive slowdown and speedup in coexistence games. In order to establish the microscopic origins of these phenomena, we calculate the average sojourn times. This allows us to identify the transient states which contribute most to the slowdown effect, and enables us to provide an understanding of slowdown in the takeover of a small group of cooperators by defectors: Defection spreads quickly initially, but the final steps to takeover can be delayed substantially. The analysis of coexistence games reveals even more intricate behavior. In small populations, the conditional average fixation time can show multiple extrema as a function of the selection strength, e.g., slowdown, speedup, and slowdown again. We classify two-player games with respect to the possibility to observe non-monotonic behavior of the conditional average fixation time as a function of selection strength.
[ { "created": "Tue, 17 Apr 2012 18:06:33 GMT", "version": "v1" }, { "created": "Fri, 27 Jul 2012 12:25:11 GMT", "version": "v2" } ]
2012-07-30
[ [ "Altrock", "Philipp M.", "" ], [ "Traulsen", "Arne", "" ], [ "Galla", "Tobias", "" ] ]
We study the stochastic dynamics of evolutionary games, and focus on the so-called `stochastic slowdown' effect, previously observed in (Altrock et. al, 2010) for simple evolutionary dynamics. Slowdown here refers to the fact that a beneficial mutation may take longer to fixate than a neutral one. More precisely, the fixation time conditioned on the mutant taking over can show a maximum at intermediate selection strength. We show that this phenomenon is present in the prisoner's dilemma, and also discuss counterintuitive slowdown and speedup in coexistence games. In order to establish the microscopic origins of these phenomena, we calculate the average sojourn times. This allows us to identify the transient states which contribute most to the slowdown effect, and enables us to provide an understanding of slowdown in the takeover of a small group of cooperators by defectors: Defection spreads quickly initially, but the final steps to takeover can be delayed substantially. The analysis of coexistence games reveals even more intricate behavior. In small populations, the conditional average fixation time can show multiple extrema as a function of the selection strength, e.g., slowdown, speedup, and slowdown again. We classify two-player games with respect to the possibility to observe non-monotonic behavior of the conditional average fixation time as a function of selection strength.
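The quantity at the centre of this abstract, the fixation time conditioned on the mutant taking over, can be estimated with a short Moran-process simulation. The sketch below follows a single defector invading cooperators in a Prisoner's Dilemma with an exponential payoff-to-fitness mapping; the payoffs, population size, and chosen selection strengths are illustrative, not the paper's.

```python
import math
import random

# Moran-process sketch: one defector (mutant) invades N-1 cooperators in a
# Prisoner's Dilemma; we estimate the fixation time conditioned on takeover.
T_pay, R_pay, P_pay, S_pay = 5.0, 3.0, 1.0, 0.0   # payoff ordering T > R > P > S
N = 20
rng = random.Random(2)

def conditional_fixation_time(w, n_runs=5000):
    total_time, fixations = 0.0, 0
    for _ in range(n_runs):
        i, elapsed = 1, 0.0
        while 0 < i < N:
            pi_d = (T_pay * (N - i) + P_pay * (i - 1)) / (N - 1)   # defector payoff
            pi_c = (R_pay * (N - i - 1) + S_pay * i) / (N - 1)     # cooperator payoff
            f_d, f_c = math.exp(w * pi_d), math.exp(w * pi_c)
            pick_d = i * f_d / (i * f_d + (N - i) * f_c)           # prob. a defector reproduces
            up = pick_d * (N - i) / N                              # i -> i+1 per Moran step
            down = (1.0 - pick_d) * i / N                          # i -> i-1 per Moran step
            elapsed += 1.0 / (up + down)     # expected Moran steps spent at state i
            if rng.random() < up / (up + down):
                i += 1
            else:
                i -= 1
        if i == N:
            fixations += 1
            total_time += elapsed
    return total_time / max(fixations, 1), fixations / n_runs

for w in [0.0, 0.02, 0.05, 0.1, 0.2]:
    t_fix, p_fix = conditional_fixation_time(w)
    print("w = %.2f: conditional fixation time %7.1f Moran steps, "
          "fixation probability %.3f" % (w, t_fix, p_fix))
```

Accumulating the expected holding time 1/(up + down) while only sampling the embedded up/down chain keeps the estimate of the conditional mean fixation time unbiased while avoiding the many self-loop steps of the full Moran chain.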
0705.1053
Everaldo Arashiro
Kelly C. de Carvalho and T\^ania Tom\'e
Anisotropic probabilistic cellular automaton for a predator-prey system
13 pages, 4 figures, accepted for publication in Brazilian Journal of Physics
null
null
null
q-bio.PE
null
We consider a probabilistic cellular automaton to analyze the stochastic dynamics of a predator-prey system. The local rules are Markovian and are based on the Lotka-Volterra model. The individuals of each species reside on the sites of a lattice and interact with an unsymmetrical neighborhood. We look for the effect of spatial anisotropy in the characterization of the oscillations of the species population densities. Our study of the probabilistic cellular automaton is based on simple and pair mean-field approximations and explicitly takes into account spatial anisotropy.
[ { "created": "Tue, 8 May 2007 09:16:12 GMT", "version": "v1" } ]
2016-08-14
[ [ "de Carvalho", "Kelly C.", "" ], [ "Tomé", "Tânia", "" ] ]
We consider a probabilistic cellular automaton to analyze the stochastic dynamics of a predator-prey system. The local rules are Markovian and are based on the Lotka-Volterra model. The individuals of each species reside on the sites of a lattice and interact with an unsymmetrical neighborhood. We look for the effect of spatial anisotropy in the characterization of the oscillations of the species population densities. Our study of the probabilistic cellular automaton is based on simple and pair mean-field approximations and explicitly takes into account spatial anisotropy.
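A tiny probabilistic cellular automaton with Lotka-Volterra-like local rules gives a feel for this kind of model. The unsymmetrical neighborhood below (three of the four nearest neighbors) is an illustrative way to introduce spatial anisotropy; the rates, lattice size, and update scheme are likewise illustrative, not those of the paper.

```python
import random

# States: 0 = empty, 1 = prey, 2 = predator, on a periodic L x L lattice.
L = 40
p_prey_birth, p_predation, p_predator_death = 0.5, 0.6, 0.2
NEIGHBORS = [(0, 1), (1, 0), (0, -1)]        # anisotropic: the (-1, 0) neighbor is excluded

rng = random.Random(0)
grid = [[rng.choice([0, 1, 1, 2]) for _ in range(L)] for _ in range(L)]

def step(grid):
    """One Monte-Carlo sweep of random sequential site updates."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        di, dj = rng.choice(NEIGHBORS)
        ni, nj = (i + di) % L, (j + dj) % L
        site, neigh = grid[i][j], grid[ni][nj]
        if site == 1 and neigh == 0 and rng.random() < p_prey_birth:
            grid[ni][nj] = 1                 # prey reproduces into an empty neighbor
        elif site == 2 and neigh == 1 and rng.random() < p_predation:
            grid[ni][nj] = 2                 # predator consumes prey and reproduces
        elif site == 2 and rng.random() < p_predator_death:
            grid[i][j] = 0                   # predator dies

for t in range(200):
    step(grid)
    if t % 50 == 0:
        prey = sum(row.count(1) for row in grid)
        pred = sum(row.count(2) for row in grid)
        print("sweep %3d: prey density %.2f, predator density %.2f"
              % (t, prey / (L * L), pred / (L * L)))
```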
1006.1869
Stefan Auer SA
Stefan Auer and Dimo Kashchiev
Insight into the correlation between lag time and aggregation rate in the kinetics of protein aggregation
null
null
10.1002/prot.22762
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Under favourable conditions, many proteins can assemble into macroscopically large aggregates, such as those associated with Parkinson's and other neurological and systemic diseases. The overall process of protein aggregation is characterized by an initial lag time during which no detectable aggregation occurs in the solution and by a maximal aggregation rate at which the dissolved protein converts into aggregates. In this study, the correlation between the lag time and the maximal rate of protein aggregation is analyzed. It is found that the product of these two quantities depends on a single numerical parameter, the kinetic index of the curve quantifying the time evolution of the fraction of protein aggregated. As this index depends relatively little on the conditions and/or system studied, our finding provides insight into why for many experiments the values of the product of the lag time and the maximal aggregation rate are often equal or quite close to each other. It is shown how the kinetic index is related to a basic kinetic parameter of a recently proposed theory of protein aggregation.
[ { "created": "Wed, 9 Jun 2010 18:04:27 GMT", "version": "v1" } ]
2010-06-10
[ [ "Auer", "Stefan", "" ], [ "Kashchiev", "Dimo", "" ] ]
Under favourable conditions, many proteins can assemble into macroscopically large aggregates, such as those associated with Parkinson's and other neurological and systemic diseases. The overall process of protein aggregation is characterized by an initial lag time during which no detectable aggregation occurs in the solution and by a maximal aggregation rate at which the dissolved protein converts into aggregates. In this study, the correlation between the lag time and the maximal rate of protein aggregation is analyzed. It is found that the product of these two quantities depends on a single numerical parameter, the kinetic index of the curve quantifying the time evolution of the fraction of protein aggregated. As this index depends relatively little on the conditions and/or system studied, our finding provides insight into why for many experiments the values of the product of the lag time and the maximal aggregation rate are often equal or quite close to each other. It is shown how the kinetic index is related to a basic kinetic parameter of a recently proposed theory of protein aggregation.
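The correlation described in this abstract can be demonstrated numerically with any convenient sigmoidal aggregation curve. The sketch below uses alpha(t) = 1 - exp(-(t/tau)^n), an illustrative choice rather than the paper's kinetic model: the product of the tangent-defined lag time and the maximal rate comes out essentially independent of the time scale tau and is set only by the "kinetic index" n.

```python
import numpy as np

def lag_time_and_max_rate(n, tau):
    """Lag time (tangent construction) and maximal rate for alpha(t) = 1 - exp(-(t/tau)^n)."""
    t = np.linspace(1e-6, 10 * tau, 100000)
    alpha = 1.0 - np.exp(-(t / tau) ** n)
    rate = np.gradient(alpha, t)
    k = np.argmax(rate)
    v_max = rate[k]
    # the tangent through the steepest point crosses alpha = 0 at the lag time
    t_lag = t[k] - alpha[k] / v_max
    return t_lag, v_max

for n in [2.0, 3.0, 4.0]:
    for tau in [1.0, 10.0, 250.0]:
        t_lag, v_max = lag_time_and_max_rate(n, tau)
        print("n = %.0f, tau = %6.1f  ->  t_lag * v_max = %.3f" % (n, tau, t_lag * v_max))
```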
1012.0121
Naoki Masuda Dr.
Naoki Masuda and Mitsuhiro Nakamura
Numerical analysis of a reinforcement learning model with the dynamic aspiration level in the iterated Prisoner's Dilemma
7 figures
Journal of Theoretical Biology, 278, 55-62 (2011)
10.1016/j.jtbi.2011.03.005
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans and other animals can adapt their social behavior in response to environmental cues including the feedback obtained through experience. Nevertheless, the effects of the experience-based learning of players in evolution and maintenance of cooperation in social dilemma games remain relatively unclear. Some previous literature showed that mutual cooperation of learning players is difficult or requires a sophisticated learning model. In the context of the iterated Prisoner's Dilemma, we numerically examine the performance of a reinforcement learning model. Our model modifies those of Karandikar et al. (1998), Posch et al. (1999), and Macy and Flache (2002) in which players satisfice if the obtained payoff is larger than a dynamic threshold. We show that players obeying the modified learning mutually cooperate with high probability if the dynamics of threshold is not too fast and the association between the reinforcement signal and the action in the next round is sufficiently strong. The learning players also perform efficiently against the reactive strategy. In evolutionary dynamics, they can invade a population of players adopting simpler but competitive strategies. Our version of the reinforcement learning model does not complicate the previous model and is sufficiently simple yet flexible. It may serve to explore the relationships between learning and evolution in social dilemma situations.
[ { "created": "Wed, 1 Dec 2010 07:37:36 GMT", "version": "v1" }, { "created": "Sun, 3 Apr 2011 10:00:10 GMT", "version": "v2" } ]
2011-04-05
[ [ "Masuda", "Naoki", "" ], [ "Nakamura", "Mitsuhiro", "" ] ]
Humans and other animals can adapt their social behavior in response to environmental cues including the feedback obtained through experience. Nevertheless, the effects of the experience-based learning of players in evolution and maintenance of cooperation in social dilemma games remain relatively unclear. Some previous literature showed that mutual cooperation of learning players is difficult or requires a sophisticated learning model. In the context of the iterated Prisoner's Dilemma, we numerically examine the performance of a reinforcement learning model. Our model modifies those of Karandikar et al. (1998), Posch et al. (1999), and Macy and Flache (2002) in which players satisfice if the obtained payoff is larger than a dynamic threshold. We show that players obeying the modified learning mutually cooperate with high probability if the dynamics of threshold is not too fast and the association between the reinforcement signal and the action in the next round is sufficiently strong. The learning players also perform efficiently against the reactive strategy. In evolutionary dynamics, they can invade a population of players adopting simpler but competitive strategies. Our version of the reinforcement learning model does not complicate the previous model and is sufficiently simple yet flexible. It may serve to explore the relationships between learning and evolution in social dilemma situations.
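A generic Bush-Mosteller-style satisficing learner with a dynamic aspiration level, in the spirit of the models this abstract builds on (Karandikar et al., Macy and Flache), can be sketched in a few lines; the update rule, payoffs, learning rate, and habituation rate below are illustrative choices and not the paper's exact modified model.

```python
import random

R, S, T, P = 3.0, 0.0, 5.0, 1.0         # Prisoner's Dilemma payoffs (illustrative)
learning_rate, habituation = 0.3, 0.05  # habituation = speed of the aspiration dynamics

class Learner:
    def __init__(self, rng):
        self.p_coop = 0.5               # probability of cooperating
        self.aspiration = 2.0           # dynamic aspiration level
        self.rng = rng

    def act(self):
        self.last_coop = self.rng.random() < self.p_coop
        return self.last_coop

    def update(self, payoff):
        stimulus = max(-1.0, min(1.0, (payoff - self.aspiration) / (T - S)))
        if stimulus >= 0:               # satisfied: reinforce the last action
            target = 1.0 if self.last_coop else 0.0
        else:                           # dissatisfied: move toward the other action
            target = 0.0 if self.last_coop else 1.0
        self.p_coop += learning_rate * abs(stimulus) * (target - self.p_coop)
        # the aspiration level slowly tracks the payoffs actually received
        self.aspiration += habituation * (payoff - self.aspiration)

rng = random.Random(3)
a, b = Learner(rng), Learner(rng)
coop_count = 0
for round_ in range(5000):
    ca, cb = a.act(), b.act()
    pa = R if (ca and cb) else S if ca else T if cb else P
    pb = R if (ca and cb) else T if ca else S if cb else P
    a.update(pa)
    b.update(pb)
    if round_ >= 4000:
        coop_count += int(ca and cb)

print("mutual cooperation frequency over the last 1000 rounds: %.2f" % (coop_count / 1000))
```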
2202.08765
Ricardo Henriques Prof
Afonso Mendes, Hannah S. Heil, Simao Coelho, Christophe Leterrier and Ricardo Henriques
Mapping molecular complexes with Super-Resolution Microscopy and Single-Particle Analysis
12 pages, 3 figures
null
10.1098/rsob.220079
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Understanding the structure of supramolecular complexes provides insight into their functional capabilities and how they can be modulated in the context of disease. Super-resolution microscopy (SRM) excels in performing this task by resolving ultrastructural details at the nanoscale with molecular specificity. However, technical limitations, such as underlabelling, preclude its ability to provide complete structures. Single-particle analysis (SPA) overcomes this limitation by combining information from multiple images of identical structures and producing an averaged model, effectively enhancing the resolution and coverage of image reconstructions. This review highlights important studies using SRM-SPA, demonstrating how it broadens our knowledge by elucidating features of key biological structures with unprecedented detail.
[ { "created": "Thu, 17 Feb 2022 17:09:25 GMT", "version": "v1" } ]
2023-08-09
[ [ "Mendes", "Afonso", "" ], [ "Heil", "Hannah S.", "" ], [ "Coelho", "Simao", "" ], [ "Leterrier", "Christophe", "" ], [ "Henriques", "Ricardo", "" ] ]
Understanding the structure of supramolecular complexes provides insight into their functional capabilities and how they can be modulated in the context of disease. Super-resolution microscopy (SRM) excels in performing this task by resolving ultrastructural details at the nanoscale with molecular specificity. However, technical limitations, such as underlabelling, preclude its ability to provide complete structures. Single-particle analysis (SPA) overcomes this limitation by combining information from multiple images of identical structures and producing an averaged model, effectively enhancing the resolution and coverage of image reconstructions. This review highlights important studies using SRM-SPA, demonstrating how it broadens our knowledge by elucidating features of key biological structures with unprecedented detail.
2104.09999
Borko D. Stosic
Borko D. Stosic
Dynamics of COVID-19 mitigation inefficiency in Brazil
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/publicdomain/zero/1.0/
In this work, Data Envelopment Analysis (DEA) is employed in thirty-day windows to quantify the temporal evolution of the relative pandemic mitigation inefficiency of Brazilian municipalities. For each thirty-day window, the inefficiency scores of over five thousand Brazilian municipalities are displayed on maps to address the spatial distribution of the corresponding values. This phenomenological spatiotemporal approach reveals the location of the hotspots, in terms of relative pandemic inefficiency of the municipalities, at different points in time along the pandemic. It is expected that the current approach may support decision making through comparative analysis of previous practice, and thus contribute to future pandemic mitigation efforts.
[ { "created": "Tue, 20 Apr 2021 14:28:29 GMT", "version": "v1" } ]
2021-04-21
[ [ "Stosic", "Borko D.", "" ] ]
In this work, Data Envelopment Analysis (DEA) is employed in thirty-day windows to quantify the temporal evolution of the relative pandemic mitigation inefficiency of Brazilian municipalities. For each thirty-day window, the inefficiency scores of over five thousand Brazilian municipalities are displayed on maps to address the spatial distribution of the corresponding values. This phenomenological spatiotemporal approach reveals the location of the hotspots, in terms of relative pandemic inefficiency of the municipalities, at different points in time along the pandemic. It is expected that the current approach may support decision making through comparative analysis of previous practice, and thus contribute to future pandemic mitigation efforts.
2004.03326
Victor Zhao
Victor Zhao, William M. Jacobs, and Eugene I. Shakhnovich
Effect of protein structure on evolution of cotranslational folding
null
null
10.1016/j.bpj.2020.06.037
null
q-bio.BM
http://creativecommons.org/licenses/by-sa/4.0/
Cotranslational folding depends on the folding speed and stability of the nascent protein. It remains difficult, however, to predict which proteins cotranslationally fold. Here, we simulate evolution of model proteins to investigate how native structure influences evolution of cotranslational folding. We developed a model that connects protein folding during and after translation to cellular fitness. Model proteins evolved improved folding speed and stability, with proteins adopting one of two strategies for folding quickly. Low contact order proteins evolve to fold cotranslationally. Such proteins adopt native conformations early on during the translation process, with each subsequently translated residue establishing additional native contacts. On the other hand, high contact order proteins tend not to be stable in their native conformations until the full chain is nearly extruded. We also simulated evolution of slowly translating codons, finding that slower translation speeds at certain positions enhance cotranslational folding. Finally, we investigated real protein structures using a previously published dataset that identified evolutionarily conserved rare codons in E. coli genes and associated such codons with cotranslational folding intermediates. We found that protein substructures preceding conserved rare codons tend to have lower contact orders, in line with our finding that lower contact order proteins are more likely to fold cotranslationally. Our work shows how evolutionary selection pressure can cause proteins with local contact topologies to evolve cotranslational folding.
[ { "created": "Tue, 7 Apr 2020 13:02:18 GMT", "version": "v1" }, { "created": "Mon, 22 Jun 2020 18:45:26 GMT", "version": "v2" } ]
2020-10-28
[ [ "Zhao", "Victor", "" ], [ "Jacobs", "William M.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
Cotranslational folding depends on the folding speed and stability of the nascent protein. It remains difficult, however, to predict which proteins cotranslationally fold. Here, we simulate evolution of model proteins to investigate how native structure influences evolution of cotranslational folding. We developed a model that connects protein folding during and after translation to cellular fitness. Model proteins evolved improved folding speed and stability, with proteins adopting one of two strategies for folding quickly. Low contact order proteins evolve to fold cotranslationally. Such proteins adopt native conformations early on during the translation process, with each subsequently translated residue establishing additional native contacts. On the other hand, high contact order proteins tend not to be stable in their native conformations until the full chain is nearly extruded. We also simulated evolution of slowly translating codons, finding that slower translation speeds at certain positions enhance cotranslational folding. Finally, we investigated real protein structures using a previously published dataset that identified evolutionarily conserved rare codons in E. coli genes and associated such codons with cotranslational folding intermediates. We found that protein substructures preceding conserved rare codons tend to have lower contact orders, in line with our finding that lower contact order proteins are more likely to fold cotranslationally. Our work shows how evolutionary selection pressure can cause proteins with local contact topologies to evolve cotranslational folding.
2403.12249
King-Yeung Lam
King-Yeung Lam and Ray Lee
Asymptotic spreading of predator-prey populations in a shifting environment
null
null
null
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by recent studies associating shifting temperature conditions with changes in the efficiency of predator species in converting their prey to offspring, we propose a predator-prey model of reaction-diffusion type to analyze the consequence of such effects on the population dynamics and spread of species. In the model, the predator conversion efficiency is represented by a spatially heterogeneous function depending on the variable $\xi=x-c_1t$ for some given $c_1>0$. Using the Hamilton-Jacobi approach, we provide explicit formulas for the spreading speed of the predator species. When the conversion function is monotone increasing, the spreading speed is determined in all cases and non-local pulling is possible. When the function is monotone decreasing, we provide formulas for the spreading speed when the rate of shift of the conversion function is sufficiently fast or slow.
[ { "created": "Mon, 18 Mar 2024 21:00:16 GMT", "version": "v1" }, { "created": "Sun, 16 Jun 2024 07:43:40 GMT", "version": "v2" } ]
2024-06-18
[ [ "Lam", "King-Yeung", "" ], [ "Lee", "Ray", "" ] ]
Inspired by recent studies associating shifting temperature conditions with changes in the efficiency of predator species in converting their prey to offspring, we propose a predator-prey model of reaction-diffusion type to analyze the consequence of such effects on the population dynamics and spread of species. In the model, the predator conversion efficiency is represented by a spatially heterogeneous function depending on the variable $\xi=x-c_1t$ for some given $c_1>0$. Using the Hamilton-Jacobi approach, we provide explicit formulas for the spreading speed of the predator species. When the conversion function is monotone increasing, the spreading speed is determined in all cases and non-local pulling is possible. When the function is monotone decreasing, we provide formulas for the spreading speed when the rate of shift of the conversion function is sufficiently fast or slow.
0707.0754
Denis Semenov A.
Denis A. Semenov
The Symmetry of the Genetic Code and a Universal Trend of Amino Acid Gain and Loss
7 pages, 3 tables
null
null
null
q-bio.PE
null
Part 1 of the study intends to show that the universal trend of amino acid gain and loss discovered by Jordan et al. (2005) can be accounted for by the spontaneity of typical DNA damage. Such damage leads to replacements of guanine and cytosine by thymine. Part 2 proposes a hypothesis of the evolution of the genetic code, the leading mechanism of which is spontaneous nucleotide damage. The hypothesis accounts for the universal trend of amino acid gain and loss, the stability of the genetic code towards point mutations, the presence of code dialects, and the symmetry of the genetic code table.
[ { "created": "Thu, 5 Jul 2007 10:29:00 GMT", "version": "v1" } ]
2007-07-06
[ [ "Semenov", "Denis A.", "" ] ]
Part 1 of the study intends to show that the universal trend of amino acid gain and loss discovered by Jordan et al. (2005) can be accounted for by the spontaneity of typical DNA damage. Such damage leads to replacements of guanine and cytosine by thymine. Part 2 proposes a hypothesis of the evolution of the genetic code, the leading mechanism of which is spontaneous nucleotide damage. The hypothesis accounts for the universal trend of amino acid gain and loss, the stability of the genetic code towards point mutations, the presence of code dialects, and the symmetry of the genetic code table.
0707.1210
Brigitte Gaillard
A. Fergani, H. Oudart (DEPE-IPHC), J.-L. Gonzalez De Aguilar, B. Fricker, F. Ren\'e, J.-F. Hocquette (HERBIVORES), V. Meininger, L. Dupuis, J.-P. Loeffler
Increased peripheral lipid clearance in an animal model of amyotrophic lateral sclerosis
null
The Journal of Lipid Research 48, 7 (07/2007) 1571-1580
10.1194/jlr.M700017-JLR200
null
q-bio.PE
null
Amyotrophic lateral sclerosis (ALS) is the most common adult motor neuron disease, causing motor neuron degeneration, muscle atrophy, paralysis, and death. Despite this degenerative process, a stable hypermetabolic state has been observed in a large subset of patients. Mice expressing a mutant form of Cu/Zn-superoxide dismutase (mSOD1 mice) constitute an animal model of ALS that, like patients, exhibits unexpectedly increased energy expenditure. Counterbalancing for this increase with a high-fat diet extends lifespan and prevents motor neuron loss. Here, we investigated whether lipid metabolism is defective in this animal model. Hepatic lipid metabolism was roughly normal, whereas gastrointestinal absorption of lipids as well as peripheral clearance of triglyceride-rich lipoproteins were markedly increased, leading to decreased postprandial lipidemia. This defect was corrected by the high-fat regimen that typically induces neuroprotection in these animals. Together, our findings show that energy metabolism in mSOD1 mice shifts toward an increase in the peripheral use of lipids. This metabolic shift probably accounts for the protective effect of dietary lipids in this model.
[ { "created": "Mon, 9 Jul 2007 09:34:22 GMT", "version": "v1" } ]
2007-07-10
[ [ "Fergani", "A.", "", "DEPE-IPHC" ], [ "Oudart", "H.", "", "DEPE-IPHC" ], [ "De Aguilar", "J. -L. Gonzalez", "", "HERBIVORES" ], [ "Fricker", "B.", "", "HERBIVORES" ], [ "René", "F.", "", "HERBIVORES" ], [ "Hocquette", "J. -F.", "", "HERBIVORES" ], [ "Meininger", "V.", "" ], [ "Dupuis", "L.", "" ], [ "Loeffler", "J. -P.", "" ] ]
Amyotrophic lateral sclerosis (ALS) is the most common adult motor neuron disease, causing motor neuron degeneration, muscle atrophy, paralysis, and death. Despite this degenerative process, a stable hypermetabolic state has been observed in a large subset of patients. Mice expressing a mutant form of Cu/Zn-superoxide dismutase (mSOD1 mice) constitute an animal model of ALS that, like patients, exhibits unexpectedly increased energy expenditure. Counterbalancing for this increase with a high-fat diet extends lifespan and prevents motor neuron loss. Here, we investigated whether lipid metabolism is defective in this animal model. Hepatic lipid metabolism was roughly normal, whereas gastrointestinal absorption of lipids as well as peripheral clearance of triglyceride-rich lipoproteins were markedly increased, leading to decreased postprandial lipidemia. This defect was corrected by the high-fat regimen that typically induces neuroprotection in these animals. Together, our findings show that energy metabolism in mSOD1 mice shifts toward an increase in the peripheral use of lipids. This metabolic shift probably accounts for the protective effect of dietary lipids in this model.
q-bio/0608026
Alain Destexhe
Claude Bedard, Helmut Kroeger and Alain Destexhe
Does the 1/f frequency-scaling of brain signals reflect self-organized critical states?
3 figures, 6 pages
Physical Review Letters 97: 118102, 2006.
10.1103/PhysRevLett.97.118102
null
q-bio.NC q-bio.PE
null
Many complex systems display self-organized critical states characterized by 1/f frequency scaling of power spectra. Global variables such as the electroencephalogram scale as 1/f, which could be the sign of self-organized critical states in neuronal activity. By analyzing simultaneous recordings of global and neuronal activities, we confirm the 1/f scaling of global variables for selected frequency bands, but show that neuronal activity is not consistent with critical states. We propose a model of 1/f scaling which does not rely on critical states, and which is testable experimentally.
[ { "created": "Tue, 15 Aug 2006 09:25:50 GMT", "version": "v1" } ]
2009-11-13
[ [ "Bedard", "Claude", "" ], [ "Kroeger", "Helmut", "" ], [ "Destexhe", "Alain", "" ] ]
Many complex systems display self-organized critical states characterized by 1/f frequency scaling of power spectra. Global variables such as the electroencephalogram scale as 1/f, which could be the sign of self-organized critical states in neuronal activity. By analyzing simultaneous recordings of global and neuronal activities, we confirm the 1/f scaling of global variables for selected frequency bands, but show that neuronal activity is not consistent with critical states. We propose a model of 1/f scaling which does not rely on critical states, and which is testable experimentally.
2212.06136
Walid Hachem
Imane Akjouj, Matthieu Barbier, Maxime Clenet, Walid Hachem, Myl\`ene Ma\"ida, Fran\c{c}ois Massol, Jamal Najim, Viet Chi Tran
Complex systems in Ecology: a guided tour with large Lotka-Volterra models and random matrices
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecosystems represent archetypal complex dynamical systems, often modelled by coupled differential equations of the form $$ \frac{d x_i}{d t} = x_i \varphi_i(x_1,\cdots, x_N)\ , $$ where $N$ represents the number of species and $x_i$, the abundance of species $i$. Among these families of coupled differential equations, Lotka-Volterra (LV) equations $$ \frac{d x_i}{d t} = x_i ( r_i - x_i +(\Gamma \mathbf{x})_i)\ , $$ play a privileged role, as the LV model represents an acceptable trade-off between complexity and tractability. Here, $r_i$ represents the intrinsic growth of species $i$ and $\Gamma$ stands for the interaction matrix: $\Gamma_{ij}$ represents the effect of species $j$ over species $i$. For large $N$, estimating matrix $\Gamma$ is often an overwhelming task and an alternative is to draw $\Gamma$ at random, parametrizing its statistical distribution by a limited number of model features. Dealing with large random matrices, we naturally rely on Random Matrix Theory (RMT). The aim of this review article is to present an overview of the work at the junction of theoretical ecology and large random matrix theory. It is intended for an interdisciplinary audience spanning theoretical ecology, complex systems, statistical physics and mathematical biology.
[ { "created": "Mon, 12 Dec 2022 18:59:57 GMT", "version": "v1" } ]
2022-12-13
[ [ "Akjouj", "Imane", "" ], [ "Barbier", "Matthieu", "" ], [ "Clenet", "Maxime", "" ], [ "Hachem", "Walid", "" ], [ "Maïda", "Mylène", "" ], [ "Massol", "François", "" ], [ "Najim", "Jamal", "" ], [ "Tran", "Viet Chi", "" ] ]
Ecosystems represent archetypal complex dynamical systems, often modelled by coupled differential equations of the form $$ \frac{d x_i}{d t} = x_i \varphi_i(x_1,\cdots, x_N)\ , $$ where $N$ represents the number of species and $x_i$, the abundance of species $i$. Among these families of coupled differential equations, Lotka-Volterra (LV) equations $$ \frac{d x_i}{d t} = x_i ( r_i - x_i +(\Gamma \mathbf{x})_i)\ , $$ play a privileged role, as the LV model represents an acceptable trade-off between complexity and tractability. Here, $r_i$ represents the intrinsic growth of species $i$ and $\Gamma$ stands for the interaction matrix: $\Gamma_{ij}$ represents the effect of species $j$ over species $i$. For large $N$, estimating matrix $\Gamma$ is often an overwhelming task and an alternative is to draw $\Gamma$ at random, parametrizing its statistical distribution by a limited number of model features. Dealing with large random matrices, we naturally rely on Random Matrix Theory (RMT). The aim of this review article is to present an overview of the work at the junction of theoretical ecology and large random matrix theory. It is intended for an interdisciplinary audience spanning theoretical ecology, complex systems, statistical physics and mathematical biology.
1002.1196
Giangiacomo Bravo
Giangiacomo Bravo
Cultural commons and cultural evolution
15 pages; 5 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Culture evolves following a process that is akin to biological evolution, although with some significant differences. At the same time, culture often has a collective good value for human groups. This paper studies culture in an evolutionary perspective, with a focus on the implications of group definition for the coexistence of different cultures. A model of cultural evolution is presented where agents interact in an artificial environment. Belonging to a specific memetic group is a major factor allowing agents to exploit different environmental niches, with, as a result, the coexistence of different cultures in the same environment.
[ { "created": "Fri, 5 Feb 2010 11:02:02 GMT", "version": "v1" } ]
2010-02-08
[ [ "Bravo", "Giangiacomo", "" ] ]
Culture evolves following a process that is akin to biological evolution, although with some significant differences. At the same time, culture often has a collective good value for human groups. This paper studies culture in an evolutionary perspective, with a focus on the implications of group definition for the coexistence of different cultures. A model of cultural evolution is presented where agents interact in an artificial environment. Belonging to a specific memetic group is a major factor allowing agents to exploit different environmental niches, with, as a result, the coexistence of different cultures in the same environment.
2003.04524
Mohammad Reza Dayer
Mohammad Reza Dayer
Old Drugs for Newly Emerging Viral Disease, COVID-19: Bioinformatic Prospective
16 pages, 3 Figures, 3 Tables, New Results
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The coronavirus (COVID-19) outbreak in late 2019 and 2020 constitutes a serious and most likely pandemic threat worldwide. Given that the disease has no approved vaccines or drugs up to now, any efforts at drug design and/or clinical trials of old drugs based on their mechanism of action are worthy and creditable in such circumstances. Docking experiments using the newly released coordinate structure for the COVID-19 protease as a receptor and thoughtfully selected chemicals among antiviral and antibiotic drugs as ligands may be informative in this context. We selected nine drugs from HIV-1 protease inhibitors and twenty-one candidates from anti-bronchitis drugs based on their chemical structures and enrolled them in blind and active-site-directed dockings in different modes and in native-like conditions of interaction. Our findings suggest that the binding capacity and the inhibitory potency of the candidates are as follows: Tipranavir>Indinavir>Atazanavir>Darunavir>Ritonavir>Amprenavir for HIV-1 protease inhibitors and Cefditoren>Cefixime>Erythromycin>Clarithromycin for anti-bronchitis medicines. The drugs' bioavailability, their hydrophobicity and the hydrophobic properties of their binding sites, as well as the rates of their metabolism and deactivation in the human body, are the next determinants of their overall effects on viral infections, net results that should be assessed by clinical trials to evaluate their therapeutic usefulness for coronavirus infections.
[ { "created": "Tue, 10 Mar 2020 03:49:45 GMT", "version": "v1" } ]
2020-03-11
[ [ "Dayer", "Mohammad Reza", "" ] ]
The coronavirus (COVID-19) outbreak in late 2019 and 2020 constitutes a serious and most likely pandemic threat worldwide. Given that the disease has no approved vaccines or drugs up to now, any efforts at drug design and/or clinical trials of old drugs based on their mechanism of action are worthy and creditable in such circumstances. Docking experiments using the newly released coordinate structure for the COVID-19 protease as a receptor and thoughtfully selected chemicals among antiviral and antibiotic drugs as ligands may be informative in this context. We selected nine drugs from HIV-1 protease inhibitors and twenty-one candidates from anti-bronchitis drugs based on their chemical structures and enrolled them in blind and active-site-directed dockings in different modes and in native-like conditions of interaction. Our findings suggest that the binding capacity and the inhibitory potency of the candidates are as follows: Tipranavir>Indinavir>Atazanavir>Darunavir>Ritonavir>Amprenavir for HIV-1 protease inhibitors and Cefditoren>Cefixime>Erythromycin>Clarithromycin for anti-bronchitis medicines. The drugs' bioavailability, their hydrophobicity and the hydrophobic properties of their binding sites, as well as the rates of their metabolism and deactivation in the human body, are the next determinants of their overall effects on viral infections, net results that should be assessed by clinical trials to evaluate their therapeutic usefulness for coronavirus infections.
2303.16914
Carlo Adornetto
Carlo Adornetto and Gianluigi Greco
A New Deep Learning and XAI-Based Algorithm for Features Selection in Genomics
8 pages, 5 figures, Best Doctoral Consortium Paper AIxIA2022 (Udine, Italy)
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the field of functional genomics, the analysis of gene expression profiles through Machine and Deep Learning is increasingly providing meaningful insight into a number of diseases. The paper proposes a novel algorithm to perform Feature Selection on genomic-scale data, which exploits the reconstruction capabilities of autoencoders and an ad-hoc defined Explainable Artificial Intelligence-based score in order to select the most informative genes for diagnosis, prognosis, and precision medicine. Results of the application on a Chronic Lymphocytic Leukemia dataset evidence the effectiveness of the algorithm, by identifying and suggesting a set of meaningful genes for further medical investigation.
[ { "created": "Wed, 29 Mar 2023 16:44:13 GMT", "version": "v1" } ]
2023-03-31
[ [ "Adornetto", "Carlo", "" ], [ "Greco", "Gianluigi", "" ] ]
In the field of functional genomics, the analysis of gene expression profiles through Machine and Deep Learning is increasingly providing meaningful insight into a number of diseases. The paper proposes a novel algorithm to perform Feature Selection on genomic-scale data, which exploits the reconstruction capabilities of autoencoders and an ad-hoc defined Explainable Artificial Intelligence-based score in order to select the most informative genes for diagnosis, prognosis, and precision medicine. Results of the application on a Chronic Lymphocytic Leukemia dataset evidence the effectiveness of the algorithm, by identifying and suggesting a set of meaningful genes for further medical investigation.
1408.2575
Leonard Apeltsin
Leonard Apeltsin
A Network Filtration Protocol for Elucidating Relationships between Families in a Protein Similarity Network
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: The study of diverse enzyme superfamilies can provide important insight into the relationships between protein sequence, structure and function. It is often challenging, however, to discover these relationships across a large and diverse superfamily. Contemporary similarity network visualization techniques allow researchers to aggregate sequence similarity information into a single global view. Network visualization provides a qualitative estimate of functional diversity within a superfamily, but is unable to quantitate explicit boundaries, when present, between neighboring families in sequence space. This limits the potential of existing sequence-based algorithms to generate functional predictions from superfamily datasets. Results: By building on current network analysis tools, we have developed a new algorithm for elucidating pairs of homologous families within a sequence dataset. Our algorithm is able to filter through a dense similarity network in order to estimate both the boundaries of individual families and also how the families neighbor one another. Globally, these neighboring families define a topology across the entire superfamily. The topology is simple to interpret by visualizing the network output generated by our filtration protocol. We have compared the network topology within the kinase superfamily against available phylogenetic data. Our results suggest that neighbors within the filtered kinase network are more likely to share structural and functional properties than more distant network clusters.
[ { "created": "Mon, 11 Aug 2014 22:20:18 GMT", "version": "v1" } ]
2014-08-13
[ [ "Apeltsin", "Leonard", "" ] ]
Motivation: The study of diverse enzyme superfamilies can provide important insight into the relationships between protein sequence, structure and function. It is often challenging, however, to discover these relationships across a large and diverse superfamily. Contemporary similarity network visualization techniques allow researchers to aggregate sequence similarity information into a single global view. Network visualization provides a qualitative estimate of functional diversity within a superfamily, but is unable to quantitate explicit boundaries, when present, between neighboring families in sequence space. This limits the potential of existing sequence-based algorithms to generate functional predictions from superfamily datasets. Results: By building on current network analysis tools, we have developed a new algorithm for elucidating pairs of homologous families within a sequence dataset. Our algorithm is able to filter through a dense similarity network in order to estimate both the boundaries of individual families and also how the families neighbor one another. Globally, these neighboring families define a topology across the entire superfamily. The topology is simple to interpret by visualizing the network output generated by our filtration protocol. We have compared the network topology within the kinase superfamily against available phylogenetic data. Our results suggest that neighbors within the filtered kinase network are more likely to share structural and functional properties than more distant network clusters.
1508.07378
Dolph Schluter
Dolph Schluter
Speciation, ecological opportunity, and latitude
Based on a presidential address at the 2013 evolution conference of the American Society of Naturalists in Snowbird, Utah
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary hypotheses to explain the greater numbers of species in the tropics than the temperate zone include greater age and area, higher temperature and metabolic rates, and greater ecological opportunity. These ideas make contrasting predictions about the relationship between speciation processes and latitude, which I elaborate and evaluate. Available data suggest that per capita speciation rates are currently highest in the temperate zone, and that diversification rates (speciation minus extinction) are similar between latitudes. In contrast, clades whose oldest analyzed dates precede the Eocene thermal maximum, when the extent of the tropics was much greater than today, tend to show highest speciation and diversification rates in the tropics. These findings are consistent with age and area, which is alone among hypotheses in predicting a time trend. Higher recent speciation rates in the temperate zone than the tropics suggest an additional response to high ecological opportunity associated with low species diversity. These broad patterns are compelling but provide limited insights into underlying mechanisms, arguing that studies of speciation processes along the latitudinal gradient will be vital. Using threespine stickleback in depauperate northern lakes as an example, I show how high ecological opportunity can lead to rapid speciation. The results support a role for ecological opportunity in speciation, but its importance in the evolution of the latitudinal gradient remains uncertain. I conclude that per-capita evolutionary rates are no longer higher in the tropics than the temperate zone. Nevertheless, the vast numbers of species that have already accumulated in the tropics ensure that total rate of species production remains highest there. Thus, tropical evolutionary momentum helps to perpetuate the steep latitudinal biodiversity gradient.
[ { "created": "Fri, 28 Aug 2015 23:52:08 GMT", "version": "v1" }, { "created": "Tue, 1 Sep 2015 15:06:12 GMT", "version": "v2" } ]
2015-09-02
[ [ "Schluter", "Dolph", "" ] ]
Evolutionary hypotheses to explain the greater numbers of species in the tropics than the temperate zone include greater age and area, higher temperature and metabolic rates, and greater ecological opportunity. These ideas make contrasting predictions about the relationship between speciation processes and latitude, which I elaborate and evaluate. Available data suggest that per capita speciation rates are currently highest in the temperate zone, and that diversification rates (speciation minus extinction) are similar between latitudes. In contrast, clades whose oldest analyzed dates precede the Eocene thermal maximum, when the extent of the tropics was much greater than today, tend to show highest speciation and diversification rates in the tropics. These findings are consistent with age and area, which is alone among hypotheses in predicting a time trend. Higher recent speciation rates in the temperate zone than the tropics suggest an additional response to high ecological opportunity associated with low species diversity. These broad patterns are compelling but provide limited insights into underlying mechanisms, arguing that studies of speciation processes along the latitudinal gradient will be vital. Using threespine stickleback in depauperate northern lakes as an example, I show how high ecological opportunity can lead to rapid speciation. The results support a role for ecological opportunity in speciation, but its importance in the evolution of the latitudinal gradient remains uncertain. I conclude that per-capita evolutionary rates are no longer higher in the tropics than the temperate zone. Nevertheless, the vast numbers of species that have already accumulated in the tropics ensure that total rate of species production remains highest there. Thus, tropical evolutionary momentum helps to perpetuate the steep latitudinal biodiversity gradient.
2007.12340
Zhaonan Sun
Zhaonan Sun, Bronislaw Gepner, Jason Kerrigan
New approaches in modeling belt-flesh-pelvis interaction using obese GHBMC models
NHTSA-ESV conference
null
null
null
q-bio.TO cs.NA math.NA physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Obesity is associated with higher fatality risk and an altered distribution of occupant injuries in automotive collisions, partially because of the increased depth of abdominal soft tissue, which results in limited or delayed engagement of the lap belt with the pelvis and increases the risk of pelvis submarining under the lap belt, exposing the occupant's abdomen to belt loading. Previous modeling studies have shown that pelvis submarining could not be replicated using existing human body models. The goal of this study is to perform model modifications and investigate whether they could lead to model submarining. By detaching the connections between the pelvis and the surrounding flesh, submarining-like belt kinematics were observed. By remeshing the flesh parts of the model, similar belt kinematics were observed, but the pelvic wings were fractured. Finally, large shear deformation of the flesh together with submarining-like kinematics was observed in the model with its flesh modeled using the meshless Smooth Particle Galerkin (SPG) method. The results of this study showed that the SPG method has the potential to simulate large deformations in soft tissue, which may be necessary to improve the biofidelity of belt/pelvis interaction.
[ { "created": "Fri, 24 Jul 2020 04:14:20 GMT", "version": "v1" } ]
2020-07-27
[ [ "Sun", "Zhaonan", "" ], [ "Gepner", "Bronislaw", "" ], [ "Kerrigan", "Jason", "" ] ]
Obesity is associated with higher fatality risk and an altered distribution of occupant injuries in automotive collisions, partially because of the increased depth of abdominal soft tissue, which results in limited or delayed engagement of the lap belt with the pelvis and increases the risk of pelvis submarining under the lap belt, exposing the occupant's abdomen to belt loading. Previous modeling studies have shown that pelvis submarining could not be replicated using existing human body models. The goal of this study is to perform model modifications and investigate whether they could lead to model submarining. By detaching the connections between the pelvis and the surrounding flesh, submarining-like belt kinematics were observed. By remeshing the flesh parts of the model, similar belt kinematics were observed, but the pelvic wings were fractured. Finally, large shear deformation of the flesh together with submarining-like kinematics was observed in the model with its flesh modeled using the meshless Smooth Particle Galerkin (SPG) method. The results of this study showed that the SPG method has the potential to simulate large deformations in soft tissue, which may be necessary to improve the biofidelity of belt/pelvis interaction.
2006.00639
Yongqun He
Hong Yu, Li Li, Hsin-hui Huang, Yang Wang, Yingtong Liu, Edison Ong, Anthony Huffman, Tao Zeng, Jingsong Zhang, Pengpai Li, Zhiping Liu, Xiangyan Zhang, Xianwei Ye, Samuel K. Handelman, Gerry Higgins, Gilbert S. Omenn, Brian Athey, Junguk Hur, Luonan Chen, Yongqun He
Ontology-based systematic classification and analysis of coronaviruses, hosts, and host-coronavirus interactions towards deep understanding of COVID-19
32 pages, 1 table, 6 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Given the existing COVID-19 pandemic worldwide, it is critical to systematically study the interactions between hosts and coronaviruses including SARS-CoV, MERS-CoV, and SARS-CoV-2 (cause of COVID-19). We first created four host-pathogen interaction (HPI)-Outcome postulates, and generated an HPI-Outcome model as the basis for understanding host-coronavirus interactions (HCI) and their relations with the disease outcomes. We hypothesized that ontology can be used as an integrative platform to classify and analyze HCI and disease outcomes. Accordingly, we annotated and categorized different coronaviruses, hosts, and phenotypes using ontologies and identified their relations. Various COVID-19 phenotypes are hypothesized to be caused by the backend HCI mechanisms. To further identify the causal HCI-outcome relations, we collected 35 experimentally-verified HCI protein-protein interactions (PPIs), and applied literature mining to identify additional host PPIs in response to coronavirus infections. The results were formulated in a logical ontology representation for integrative HCI-outcome understanding. Using known PPIs as baits, we also developed and applied a domain-inferred prediction method to predict new PPIs and identified their pathological targets on multiple organs. Overall, our proposed ontology-based integrative framework combined with computational predictions can be used to support fundamental understanding of the intricate interactions between human patients and coronaviruses (including SARS-CoV-2) and their association with various disease outcomes.
[ { "created": "Sun, 31 May 2020 23:21:01 GMT", "version": "v1" } ]
2020-06-02
[ [ "Yu", "Hong", "" ], [ "Li", "Li", "" ], [ "Huang", "Hsin-hui", "" ], [ "Wang", "Yang", "" ], [ "Liu", "Yingtong", "" ], [ "Ong", "Edison", "" ], [ "Huffman", "Anthony", "" ], [ "Zeng", "Tao", "" ], [ "Zhang", "Jingsong", "" ], [ "Li", "Pengpai", "" ], [ "Liu", "Zhiping", "" ], [ "Zhang", "Xiangyan", "" ], [ "Ye", "Xianwei", "" ], [ "Handelman", "Samuel K.", "" ], [ "Higgins", "Gerry", "" ], [ "Omenn", "Gilbert S.", "" ], [ "Athey", "Brian", "" ], [ "Hur", "Junguk", "" ], [ "Chen", "Luonan", "" ], [ "He", "Yongqun", "" ] ]
Given the existing COVID-19 pandemic worldwide, it is critical to systematically study the interactions between hosts and coronaviruses including SARS-CoV, MERS-CoV, and SARS-CoV-2 (cause of COVID-19). We first created four host-pathogen interaction (HPI)-Outcome postulates, and generated an HPI-Outcome model as the basis for understanding host-coronavirus interactions (HCI) and their relations with the disease outcomes. We hypothesized that ontology can be used as an integrative platform to classify and analyze HCI and disease outcomes. Accordingly, we annotated and categorized different coronaviruses, hosts, and phenotypes using ontologies and identified their relations. Various COVID-19 phenotypes are hypothesized to be caused by the backend HCI mechanisms. To further identify the causal HCI-outcome relations, we collected 35 experimentally-verified HCI protein-protein interactions (PPIs), and applied literature mining to identify additional host PPIs in response to coronavirus infections. The results were formulated in a logical ontology representation for integrative HCI-outcome understanding. Using known PPIs as baits, we also developed and applied a domain-inferred prediction method to predict new PPIs and identified their pathological targets on multiple organs. Overall, our proposed ontology-based integrative framework combined with computational predictions can be used to support fundamental understanding of the intricate interactions between human patients and coronaviruses (including SARS-CoV-2) and their association with various disease outcomes.
2304.05874
Dominik Klepl
Dominik Klepl, Fei He, Min Wu, Daniel J. Blackburn, Ptolemaios G. Sarrigiannis
Adaptive Gated Graph Convolutional Network for Explainable Diagnosis of Alzheimer's Disease using EEG Data
16 pages, 16 figures
null
10.1109/TNSRE.2023.3321634
null
q-bio.NC cs.AI cs.LG cs.NE eess.SP q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Graph neural network (GNN) models are increasingly being used for the classification of electroencephalography (EEG) data. However, GNN-based diagnosis of neurological disorders, such as Alzheimer's disease (AD), remains a relatively unexplored area of research. Previous studies have relied on functional connectivity methods to infer brain graph structures and used simple GNN architectures for the diagnosis of AD. In this work, we propose a novel adaptive gated graph convolutional network (AGGCN) that can provide explainable predictions. AGGCN adaptively learns graph structures by combining convolution-based node feature enhancement with a correlation-based measure of power spectral density similarity. Furthermore, the gated graph convolution can dynamically weigh the contribution of various spatial scales. The proposed model achieves high accuracy in both eyes-closed and eyes-open conditions, indicating the stability of learned representations. Finally, we demonstrate that the proposed AGGCN model generates consistent explanations of its predictions that might be relevant for further study of AD-related alterations of brain networks.
[ { "created": "Wed, 12 Apr 2023 14:13:09 GMT", "version": "v1" }, { "created": "Thu, 10 Aug 2023 15:13:58 GMT", "version": "v2" }, { "created": "Wed, 27 Sep 2023 13:58:50 GMT", "version": "v3" } ]
2023-12-21
[ [ "Klepl", "Dominik", "" ], [ "He", "Fei", "" ], [ "Wu", "Min", "" ], [ "Blackburn", "Daniel J.", "" ], [ "Sarrigiannis", "Ptolemaios G.", "" ] ]
Graph neural network (GNN) models are increasingly being used for the classification of electroencephalography (EEG) data. However, GNN-based diagnosis of neurological disorders, such as Alzheimer's disease (AD), remains a relatively unexplored area of research. Previous studies have relied on functional connectivity methods to infer brain graph structures and used simple GNN architectures for the diagnosis of AD. In this work, we propose a novel adaptive gated graph convolutional network (AGGCN) that can provide explainable predictions. AGGCN adaptively learns graph structures by combining convolution-based node feature enhancement with a correlation-based measure of power spectral density similarity. Furthermore, the gated graph convolution can dynamically weigh the contribution of various spatial scales. The proposed model achieves high accuracy in both eyes-closed and eyes-open conditions, indicating the stability of learned representations. Finally, we demonstrate that the proposed AGGCN model generates consistent explanations of its predictions that might be relevant for further study of AD-related alterations of brain networks.
q-bio/0702037
Manoj P. Samanta
Manoj Pratim Samanta
Ultraconserved Sequences in the Honeybee Genome - Are GC-rich Regions Preferred?
null
Systemix Reports 2, Feb. 18 (2007)
null
null
q-bio.GN
null
Among all insect genomes, honeybee displays one of the most unusual patterns with interspersed long AT and GC-rich segments. Nearly 75% of the protein-coding genes are located in the AT-rich segments of the genome, but the biological significance of the GC-rich regions is not well understood. Based on an observation that the bee miRNAs, actins and tubulins are located in the GC-rich segments, this work investigated whether other highly conserved genomic regions show similar preferences. Sequences ultraconserved between the genomes of honeybee and Nasonia, another hymenopteran insect, were determined. They showed strong preferences towards locating in the GC-rich regions of the bee genome.
[ { "created": "Sun, 18 Feb 2007 08:42:17 GMT", "version": "v1" } ]
2007-05-23
[ [ "Samanta", "Manoj Pratim", "" ] ]
Among all insect genomes, honeybee displays one of the most unusual patterns with interspersed long AT and GC-rich segments. Nearly 75% of the protein-coding genes are located in the AT-rich segments of the genome, but the biological significance of the GC-rich regions is not well understood. Based on an observation that the bee miRNAs, actins and tubulins are located in the GC-rich segments, this work investigated whether other highly conserved genomic regions show similar preferences. Sequences ultraconserved between the genomes of honeybee and Nasonia, another hymenopteran insect, were determined. They showed strong preferences towards locating in the GC-rich regions of the bee genome.
2311.06747
Radwa Adel
Radwa Adel, and Ercan Engin Kuruoglu
Graph Signal Processing For Cancer Gene Co-Expression Network Analysis
null
null
null
null
q-bio.MN physics.data-an
http://creativecommons.org/licenses/by-sa/4.0/
Cancer heterogeneity arises from complex molecular interactions. Elucidating systems-level properties of gene interaction networks distinguishing cancer from normal cells is critical for understanding disease mechanisms and developing targeted therapies. Previous works focused only on identifying differences in network structures. In this study, we used graph frequency analysis of cancer genetic signals defined on a co-expression network to describe the spectral properties of underlying cancer systems. We demonstrated that cancer cells exhibit distinctive signatures in the graph frequency content of their gene expression signals. Applying graph frequency filtering, the graph Fourier transform, and its inverse to gene expression from different cancer stages resulted in significant improvements in average F-statistics of the genes compared to using their unfiltered expression levels. We propose graph spectral properties of cancer genetic signals defined on gene co-expression networks as cancer hallmarks with potential application for differential co-expression analysis.
[ { "created": "Sun, 12 Nov 2023 06:03:13 GMT", "version": "v1" } ]
2023-11-14
[ [ "Adel", "Radwa", "" ], [ "Kuruoglu", "Ercan Engin", "" ] ]
Cancer heterogeneity arises from complex molecular interactions. Elucidating systems-level properties of gene interaction networks distinguishing cancer from normal cells is critical for understanding disease mechanisms and developing targeted therapies. Previous works focused only on identifying differences in network structures. In this study, we used graph frequency analysis of cancer genetic signals defined on a co-expression network to describe the spectral properties of underlying cancer systems. We demonstrated that cancer cells exhibit distinctive signatures in the graph frequency content of their gene expression signals. Applying graph frequency filtering, the graph Fourier transform, and its inverse to gene expression from different cancer stages resulted in significant improvements in average F-statistics of the genes compared to using their unfiltered expression levels. We propose graph spectral properties of cancer genetic signals defined on gene co-expression networks as cancer hallmarks with potential application for differential co-expression analysis.
1705.04192
Carlo Vittorio Cannistraci
Alberto Cacciola, Alessandro Muscoloni, Vaibhav Narula, Alessandro Calamuneri, Salvatore Nigro, Emeran A. Mayer, Jennifer S. Labus, Giuseppe Anastasi, Aldo Quattrone, Angelo Quartarone, Demetrio Milardi and Carlo Vittorio Cannistraci
Coalescent embedding in the hyperbolic space unsupervisedly discloses the hidden geometry of the brain
null
null
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain displays a complex network topology, whose structural organization is widely studied using diffusion tensor imaging. The original geometry from which the network topology emerges is known, as well as the localization of the network nodes with respect to brain morphology and anatomy. One of the most challenging problems of current network science is to infer the latent geometry from the mere topology of a complex network. The human brain structural connectome represents the perfect benchmark to test algorithms aimed at solving this problem. Coalescent embedding was recently designed to map a complex network into hyperbolic space, inferring the node angular coordinates. Here we show that this methodology is able to unsupervisedly reconstruct the latent geometry of the brain with remarkable accuracy and that the intrinsic geometry of the brain networks strongly relates to the lobe organization known in neuroanatomy. Furthermore, coalescent embedding allowed the detection of geometrical pathological changes in the connectomes of Parkinson's Disease patients. The present study represents the first evidence of brain networks' angular coalescence in hyperbolic space, opening a completely new perspective, possibly towards the realization of latent geometry network markers for the evaluation of brain disorders and pathologies.
[ { "created": "Wed, 10 May 2017 16:42:03 GMT", "version": "v1" } ]
2017-05-12
[ [ "Cacciola", "Alberto", "" ], [ "Muscoloni", "Alessandro", "" ], [ "Narula", "Vaibhav", "" ], [ "Calamuneri", "Alessandro", "" ], [ "Nigro", "Salvatore", "" ], [ "Mayer", "Emeran A.", "" ], [ "Labus", "Jennifer S.", "" ], [ "Anastasi", "Giuseppe", "" ], [ "Quattrone", "Aldo", "" ], [ "Quartarone", "Angelo", "" ], [ "Milardi", "Demetrio", "" ], [ "Cannistraci", "Carlo Vittorio", "" ] ]
The human brain displays a complex network topology, whose structural organization is widely studied using diffusion tensor imaging. The original geometry from which the network topology emerges is known, as well as the localization of the network nodes with respect to brain morphology and anatomy. One of the most challenging problems of current network science is to infer the latent geometry from the mere topology of a complex network. The human brain structural connectome represents the perfect benchmark to test algorithms aimed at solving this problem. Coalescent embedding was recently designed to map a complex network into hyperbolic space, inferring the node angular coordinates. Here we show that this methodology is able to unsupervisedly reconstruct the latent geometry of the brain with remarkable accuracy and that the intrinsic geometry of the brain networks strongly relates to the lobe organization known in neuroanatomy. Furthermore, coalescent embedding allowed the detection of geometrical pathological changes in the connectomes of Parkinson's Disease patients. The present study represents the first evidence of brain networks' angular coalescence in hyperbolic space, opening a completely new perspective, possibly towards the realization of latent geometry network markers for the evaluation of brain disorders and pathologies.
2309.16030
Reidun Twarock Prof
Q. Roussel, S. Benbedra and R. Twarock
Protein container disassembly pathways depend on geometric design
15 pages, 10 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The majority of viruses are organised according to the structural blueprints of the seminal Caspar-Klug theory. However, there are a number of notable exceptions to this geometric design principle. Prominent examples are the cancer-causing Papillomaviridae and the \textit{de novo} designed AaLS cages that exhibit non-quasiequivalent capsid structures with protein numbers excluded by Caspar-Klug theory. The biophysical properties of these geometrically distinct architectures and the fitness advantages driving their evolution are currently unclear. We investigate here the resilience to fragmentation and the disassembly behaviour of these capsid geometries by introducing a percolation theory on weighted graphs. We show that these cage architectures follow one of two distinct disassembly pathways, preferring either hole formation or capsid fragmentation. This suggests that preference for specific disassembly scenarios could be a driving force for the evolution of the non-Caspar-Klug protein container architectures.
[ { "created": "Wed, 27 Sep 2023 21:20:17 GMT", "version": "v1" } ]
2023-09-29
[ [ "Roussel", "Q.", "" ], [ "Benbedra", "S.", "" ], [ "Twarock", "R", "" ] ]
The majority of viruses are organised according to the structural blueprints of the seminal Caspar-Klug theory. However, there are a number of notable exceptions to this geometric design principle. Prominent examples are the cancer-causing Papillomaviridae and the \textit{de novo} designed AaLS cages that exhibit non-quasiequivalent capsid structures with protein numbers excluded by Caspar-Klug theory. The biophysical properties of these geometrically distinct architectures and the fitness advantages driving their evolution are currently unclear. We investigate here the resilience to fragmentation and the disassembly behaviour of these capsid geometries by introducing a percolation theory on weighted graphs. We show that these cage architectures follow one of two distinct disassembly pathways, preferring either hole formation or capsid fragmentation. This suggests that preference for specific disassembly scenarios could be a driving force for the evolution of the non-Caspar-Klug protein container architectures.
1405.4174
Mar Alba Dr
Jorge Ruiz-Orera, Xavier Messeguer, Juan A. Subirana and M.Mar Alb\`a
Long non-coding RNAs as a source of new peptides
40 pages, 3 tables, 6 figures
null
10.7554/eLife.03523
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep transcriptome sequencing has revealed the existence of many transcripts that lack long or conserved open reading frames and which have been termed long non-coding RNAs (lncRNAs). Despite the existence of several well-characterized lncRNAs that play roles in the regulation of gene expression, the vast majority of them do not yet have a known function. Motivated by the existence of ribosome profiling data for several species, we have tested the hypothesis that they may act as a repository for the synthesis of new peptides using data from human, mouse, zebrafish, fruit fly, Arabidopsis and yeast. The ribosome protection patterns are consistent with the presence of translated open reading frames (ORFs) in a very large number of lncRNAs. Most of the ribosome-protected ORFs are shorter than 100 amino acids and usually cover less than half the transcript. Ribosome density in these ORFs is high and contrasts sharply with the 3'UTR region, in which very often there is no detectable ribosome binding, similar to bona fide protein-coding genes. The coding potential of ribosome-protected ORFs, measured using hexamer frequencies, is significantly higher than that of randomly selected intronic ORFs and similar to that of evolutionarily young coding sequences. Selective constraints in ribosome-protected ORFs from lncRNAs are lower than in typical protein-coding genes but again similar to young proteins. These results strongly suggest that lncRNAs play an important role in de novo protein evolution.
[ { "created": "Fri, 16 May 2014 14:16:44 GMT", "version": "v1" } ]
2014-10-02
[ [ "Ruiz-Orera", "Jorge", "" ], [ "Messeguer", "Xavier", "" ], [ "Subirana", "Juan A.", "" ], [ "Albà", "M. Mar", "" ] ]
Deep transcriptome sequencing has revealed the existence of many transcripts that lack long or conserved open reading frames and which have been termed long non-coding RNAs (lncRNAs). Despite the existence of several well-characterized lncRNAs that play roles in the regulation of gene expression, the vast majority of them do not yet have a known function. Motivated by the existence of ribosome profiling data for several species, we have tested the hypothesis that they may act as a repository for the synthesis of new peptides using data from human, mouse, zebrafish, fruit fly, Arabidopsis and yeast. The ribosome protection patterns are consistent with the presence of translated open reading frames (ORFs) in a very large number of lncRNAs. Most of the ribosome-protected ORFs are shorter than 100 amino acids and usually cover less than half the transcript. Ribosome density in these ORFs is high and contrasts sharply with the 3'UTR region, in which very often there is no detectable ribosome binding, similar to bona fide protein-coding genes. The coding potential of ribosome-protected ORFs, measured using hexamer frequencies, is significantly higher than that of randomly selected intronic ORFs and similar to that of evolutionarily young coding sequences. Selective constraints in ribosome-protected ORFs from lncRNAs are lower than in typical protein-coding genes but again similar to young proteins. These results strongly suggest that lncRNAs play an important role in de novo protein evolution.
2001.08945
Gustau Catalan
Raquel N\'u\~nez-Toldr\`a, Fabian Vasquez-Sancho, Nathalie Barroca and Gustau Catalan
Investigation of the cellular response to bone fractures: evidence for flexoelectricity
null
Scientific Reports volume 10, Article number: 254 (2020)
10.1038/s41598-019-57121-3
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent discovery of bone flexoelectricity (electrical polarization induced by strain gradient) suggests that flexoelectricity could have physiological effects in bones, and specifically near bone fractures, where flexoelectricity is theoretically highest. Here, we report a cytological study of the interaction between crack stress and bone cells. We have cultured MC3T3-E1 mouse osteoblastic cells in biomimetic microcracked hydroxyapatite substrates, differentiated them into osteocytes, and applied a strain gradient to the samples. The results show a strong apoptotic cellular response, whereby mechanical stimulation causes those cells near the crack to die, as indicated by live-dead and caspase staining. In addition, analysis two weeks after stimulation shows increased cell attachment and mineralization around microcracks and a higher expression of osteocalcin, an osteogenic protein known to be promoted by physical exercise. The results are consistent with flexoelectricity playing at least two different roles in bone remodelling: apoptotic trigger of the repair protocol, and electrostimulant of the bone-building activity of osteoblasts.
[ { "created": "Fri, 24 Jan 2020 10:47:02 GMT", "version": "v1" } ]
2020-01-27
[ [ "Núñez-Toldrà", "Raquel", "" ], [ "Vasquez-Sancho", "Fabian", "" ], [ "Barroca", "Nathalie", "" ], [ "Catalan", "Gustau", "" ] ]
The recent discovery of bone flexoelectricity (electrical polarization induced by strain gradient) suggests that flexoelectricity could have physiological effects in bones, and specifically near bone fractures, where flexoelectricity is theoretically highest. Here, we report a cytological study of the interaction between crack stress and bone cells. We have cultured MC3T3-E1 mouse osteoblastic cells in biomimetic microcracked hydroxyapatite substrates, differentiated them into osteocytes, and applied a strain gradient to the samples. The results show a strong apoptotic cellular response, whereby mechanical stimulation causes those cells near the crack to die, as indicated by live-dead and caspase staining. In addition, analysis two weeks after stimulation shows increased cell attachment and mineralization around microcracks and a higher expression of osteocalcin, an osteogenic protein known to be promoted by physical exercise. The results are consistent with flexoelectricity playing at least two different roles in bone remodelling: apoptotic trigger of the repair protocol, and electrostimulant of the bone-building activity of osteoblasts.
q-bio/0501012
R. Mulet
M. Hernández-Guía, R. Mulet and S. Rodríguez-Pérez
A New Simulated Annealing Algorithm for the Multiple Sequence Alignment Problem: The approach of Polymers in a Random Media
7 pages and 11 figures
null
10.1103/PhysRevE.72.031915
null
q-bio.GN
null
We propose a probabilistic algorithm to solve the Multiple Sequence Alignment problem. The algorithm is a Simulated Annealing (SA) that exploits the representation of the Multiple Alignment between $D$ sequences as a directed polymer in $D$ dimensions. Within this representation we can easily track the evolution in the configuration space of the alignment through local moves of low computational cost. At variance with other probabilistic algorithms proposed to solve this problem, our approach allows for the creation and deletion of gaps without extra computational cost. The algorithm was tested aligning proteins from the kinase family. When $D=3$ the results are consistent with those obtained using a complete algorithm. For $D>3$, where the complete algorithm fails, we show that our algorithm still converges to reasonable alignments. Moreover, we study the space of solutions obtained and show that depending on the number of sequences aligned the solutions are organized in different ways, suggesting a possible source of errors for progressive algorithms.
[ { "created": "Mon, 10 Jan 2005 19:24:04 GMT", "version": "v1" } ]
2016-08-16
[ [ "Hernández-Guía", "M.", "" ], [ "Mulet", "R.", "" ], [ "Rodríguez-Pérez", "S.", "" ] ]
We propose a probabilistic algorithm to solve the Multiple Sequence Alignment problem. The algorithm is a Simulated Annealing (SA) that exploits the representation of the Multiple Alignment between $D$ sequences as a directed polymer in $D$ dimensions. Within this representation we can easily track the evolution in the configuration space of the alignment through local moves of low computational cost. At variance with other probabilistic algorithms proposed to solve this problem, our approach allows for the creation and deletion of gaps without extra computational cost. The algorithm was tested aligning proteins from the kinase family. When $D=3$ the results are consistent with those obtained using a complete algorithm. For $D>3$, where the complete algorithm fails, we show that our algorithm still converges to reasonable alignments. Moreover, we study the space of solutions obtained and show that depending on the number of sequences aligned the solutions are organized in different ways, suggesting a possible source of errors for progressive algorithms.
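The record above describes a simulated annealing search over alignments represented as a directed polymer. As a rough illustration of the Metropolis acceptance rule at the core of any such SA scheme (not the paper's polymer moves or alignment scoring, which are left out here), a minimal Python sketch on a toy problem:

import math
import random

def simulated_annealing(state, energy, propose, t0=1.0, t_min=1e-3, cooling=0.995, seed=0):
    """Generic Metropolis-style simulated annealing loop.

    'state' is any configuration, 'energy' scores it (lower is better),
    'propose' returns a cheaply modified copy (a local move).
    """
    rng = random.Random(seed)
    best, best_e = state, energy(state)
    cur, cur_e, t = state, best_e, t0
    while t > t_min:
        cand = propose(cur, rng)
        cand_e = energy(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if cand_e <= cur_e or rng.random() < math.exp((cur_e - cand_e) / t):
            cur, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = cur, cur_e
        t *= cooling
    return best, best_e

# Toy usage: anneal a binary string towards all ones (a stand-in for an alignment score).
energy = lambda s: s.count(0)
propose = lambda s, rng: (lambda i: s[:i] + [1 - s[i]] + s[i + 1:])(rng.randrange(len(s)))
print(simulated_annealing([0] * 20, energy, propose))

Swapping in an alignment state, a column-based score, and gap insertion/deletion moves would bring this skeleton closer to the method the record describes.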
2103.15572
Anshuman Swain
Anshuman Swain, Travis Byrum, Zhaoyi Zhuang, Luke Perry, Michael Lin and William Fagan
Inferring, comparing and exploring ecological networks from time-series data through R packages constructnet, disgraph and dynet
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Network inference is a major field of interest for the ecological community, especially in light of the high cost and difficulty of manual observation and the easy availability of remote, long-term monitoring data. In addition, comparing similar network structures, especially across spatial, environmental, or temporal variability, and simulating processes on networks to create toy models and hypotheses are topics of considerable interest to researchers. A large number of methods are being developed in the network science community to achieve these objectives, but many either do not have their code available or lack an implementation in R, the language preferred by ecologists and other biologists. We provide a suite of three packages offering a central set of standardized network inference methods from time-series data (constructnet), distance metrics (disgraph) and (process) simulation models (dynet) for the growing R network analysis environment, helping ecologists and biologists to perform and compare methods under one roof. These packages are implemented in a coherent, consistent framework, making comparisons across methods and metrics easier. We hope that these tools in R will help increase the accessibility of network tools to ecologists and other biologists, who use the language for most of their analysis.
[ { "created": "Mon, 29 Mar 2021 12:43:18 GMT", "version": "v1" } ]
2021-03-30
[ [ "Swain", "Anshuman", "" ], [ "Byrum", "Travis", "" ], [ "Zhuang", "Zhaoyi", "" ], [ "Perry", "Luke", "" ], [ "Lin", "Michael", "" ], [ "Fagan", "William", "" ] ]
Network inference is a major field of interest for the ecological community, especially in light of the high cost and difficulty of manual observation and the easy availability of remote, long-term monitoring data. In addition, comparing similar network structures, especially across spatial, environmental, or temporal variability, and simulating processes on networks to create toy models and hypotheses are topics of considerable interest to researchers. A large number of methods are being developed in the network science community to achieve these objectives, but many either do not have their code available or lack an implementation in R, the language preferred by ecologists and other biologists. We provide a suite of three packages offering a central set of standardized network inference methods from time-series data (constructnet), distance metrics (disgraph) and (process) simulation models (dynet) for the growing R network analysis environment, helping ecologists and biologists to perform and compare methods under one roof. These packages are implemented in a coherent, consistent framework, making comparisons across methods and metrics easier. We hope that these tools in R will help increase the accessibility of network tools to ecologists and other biologists, who use the language for most of their analysis.
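Since the record above concerns inferring ecological networks from time series, a minimal Python stand-in for the simplest such estimator, thresholded pairwise correlation, may help fix ideas. The function name and threshold are illustrative assumptions; this is not one of the estimators shipped in constructnet.

import numpy as np

def correlation_network(ts, threshold=0.7):
    """Infer an undirected network from a (time x species) abundance matrix
    by thresholding pairwise Pearson correlations (a generic stand-in)."""
    corr = np.corrcoef(ts.T)                       # species x species correlations
    adj = (np.abs(corr) >= threshold).astype(int)  # keep strong associations
    np.fill_diagonal(adj, 0)                       # no self-loops
    return adj

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 5))
ts[:, 1] += ts[:, 0]           # make species 0 and 1 co-vary
print(correlation_network(ts))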
0901.4448
Elena Dubrova
Elena Dubrova and Maxim Teslenko
A SAT-Based Algorithm for Computing Attractors in Synchronous Boolean Networks
9 pages, 1 figure
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of finding cycles in the state transition graphs of synchronous Boolean networks. Synchronous Boolean networks are a class of deterministic finite state machines which are used for the modeling of gene regulatory networks. Their state transition graph cycles, called attractors, represent cell types of the organism being modeled. When the effect of a disease or a mutation on an organism is studied, attractors have to be re-computed every time a fault is injected in the model. We present an algorithm for finding attractors which uses a SAT-based bounded model checking. Novel features of the algorithm compared to the traditional SAT-based bounded model checking approaches are: (1) a termination condition which does not require an explicit computation of the diameter and (2) a technique to reduce the number of additional clauses which are needed to make paths loop-free. The presented algorithm uses much less space than existing BDD-based approaches and has the potential to handle networks several orders of magnitude larger.
[ { "created": "Wed, 28 Jan 2009 12:37:58 GMT", "version": "v1" } ]
2009-01-29
[ [ "Dubrova", "Elena", "" ], [ "Teslenko", "Maxim", "" ] ]
This paper addresses the problem of finding cycles in the state transition graphs of synchronous Boolean networks. Synchronous Boolean networks are a class of deterministic finite state machines which are used for the modeling of gene regulatory networks. Their state transition graph cycles, called attractors, represent cell types of the organism being modeled. When the effect of a disease or a mutation on an organism is studied, attractors have to be re-computed every time a fault is injected in the model. We present an algorithm for finding attractors which uses a SAT-based bounded model checking. Novel features of the algorithm compared to the traditional SAT-based bounded model checking approaches are: (1) a termination condition which does not require an explicit computation of the diameter and (2) a technique to reduce the number of additional clauses which are needed to make paths loop-free. The presented algorithm uses much less space than existing BDD-based approaches and has the potential to handle networks several orders of magnitude larger.
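To make the notion of an attractor in a synchronous Boolean network concrete, here is a small exhaustive-enumeration sketch in Python. The three-node update rules are invented for illustration, and exhaustive enumeration is exactly what the record's SAT-based bounded model checking is designed to avoid for large networks.

from itertools import product

def step(state):
    """One synchronous update of a toy 3-node Boolean network (illustrative rules)."""
    x1, x2, x3 = state
    return (x2, x1 and x3, not x1)

def attractors(n=3):
    """Enumerate attractors (cycles of the state transition graph) exhaustively."""
    found = set()
    for start in product([False, True], repeat=n):
        seen, s = {}, start
        while s not in seen:           # iterate until a state repeats
            seen[s] = len(seen)
            s = step(s)
        cycle_start = seen[s]
        cycle = [t for t, i in seen.items() if i >= cycle_start]
        found.add(frozenset(cycle))    # the repeating suffix is an attractor
    return found

for a in attractors():
    print(sorted(a))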
2406.08961
Bowen Gao
Yanwen Huang, Bowen Gao, Yinjun Jia, Hongbo Ma, Wei-Ying Ma, Ya-Qin Zhang, Yanyan Lan
SIU: A Million-Scale Structural Small Molecule-Protein Interaction Dataset for Unbiased Bioactivity Prediction
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Small molecules play a pivotal role in modern medicine, and scrutinizing their interactions with protein targets is essential for the discovery and development of novel, life-saving therapeutics. The term "bioactivity" encompasses various biological effects resulting from these interactions, including both binding and functional responses. The magnitude of bioactivity dictates the therapeutic or toxic pharmacological outcomes of small molecules, rendering accurate bioactivity prediction crucial for the development of safe and effective drugs. However, existing structural datasets of small molecule-protein interactions are often limited in scale and lack systematically organized bioactivity labels, thereby impeding our understanding of these interactions and precise bioactivity prediction. In this study, we introduce a comprehensive dataset of small molecule-protein interactions, consisting of over a million binding structures, each annotated with real biological activity labels. This dataset is designed to facilitate unbiased bioactivity prediction. We evaluated several classical models on this dataset, and the results demonstrate that the task of unbiased bioactivity prediction is challenging yet essential.
[ { "created": "Thu, 13 Jun 2024 09:49:58 GMT", "version": "v1" } ]
2024-06-14
[ [ "Huang", "Yanwen", "" ], [ "Gao", "Bowen", "" ], [ "Jia", "Yinjun", "" ], [ "Ma", "Hongbo", "" ], [ "Ma", "Wei-Ying", "" ], [ "Zhang", "Ya-Qin", "" ], [ "Lan", "Yanyan", "" ] ]
Small molecules play a pivotal role in modern medicine, and scrutinizing their interactions with protein targets is essential for the discovery and development of novel, life-saving therapeutics. The term "bioactivity" encompasses various biological effects resulting from these interactions, including both binding and functional responses. The magnitude of bioactivity dictates the therapeutic or toxic pharmacological outcomes of small molecules, rendering accurate bioactivity prediction crucial for the development of safe and effective drugs. However, existing structural datasets of small molecule-protein interactions are often limited in scale and lack systematically organized bioactivity labels, thereby impeding our understanding of these interactions and precise bioactivity prediction. In this study, we introduce a comprehensive dataset of small molecule-protein interactions, consisting of over a million binding structures, each annotated with real biological activity labels. This dataset is designed to facilitate unbiased bioactivity prediction. We evaluated several classical models on this dataset, and the results demonstrate that the task of unbiased bioactivity prediction is challenging yet essential.
1211.3473
Wim Hordijk
Wim Hordijk and Mike Steel
A formal model of autocatalytic sets emerging in an RNA replicator system
16 pages, 6 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The idea that autocatalytic sets played an important role in the origin of life is not new. However, the likelihood of autocatalytic sets emerging spontaneously has long been debated. Recently, progress has been made along two different lines. Experimental results have shown that autocatalytic sets can indeed emerge in real chemical systems, and theoretical work has shown that the existence of such self-sustaining sets is highly likely in formal models of chemical systems. Here, we take a first step towards merging these two lines of work by constructing and investigating a formal model of a real chemical system of RNA replicators exhibiting autocatalytic sets. Results: We show that the formal model accurately reproduces recent experimental results on an RNA replicator system, in particular how the system goes through a sequence of larger and larger autocatalytic sets, and how a cooperative (autocatalytic) system can outcompete an equivalent selfish system. Moreover, the model provides additional insights that could not be obtained from experiments alone, and it suggests several experimentally testable hypotheses. Conclusions: Given these additional insights and predictions, the modeling framework provides a better and more detailed understanding of the nature of chemical systems in general and the emergence of autocatalytic sets in particular. This provides an important first step in combining experimental and theoretical work on autocatalytic sets in the context of the origin of life.
[ { "created": "Thu, 15 Nov 2012 01:17:06 GMT", "version": "v1" } ]
2012-11-16
[ [ "Hordijk", "Wim", "" ], [ "Steel", "Mike", "" ] ]
Background: The idea that autocatalytic sets played an important role in the origin of life is not new. However, the likelihood of autocatalytic sets emerging spontaneously has long been debated. Recently, progress has been made along two different lines. Experimental results have shown that autocatalytic sets can indeed emerge in real chemical systems, and theoretical work has shown that the existence of such self-sustaining sets is highly likely in formal models of chemical systems. Here, we take a first step towards merging these two lines of work by constructing and investigating a formal model of a real chemical system of RNA replicators exhibiting autocatalytic sets. Results: We show that the formal model accurately reproduces recent experimental results on an RNA replicator system, in particular how the system goes through a sequence of larger and larger autocatalytic sets, and how a cooperative (autocatalytic) system can outcompete an equivalent selfish system. Moreover, the model provides additional insights that could not be obtained from experiments alone, and it suggests several experimentally testable hypotheses. Conclusions: Given these additional insights and predictions, the modeling framework provides a better and more detailed understanding of the nature of chemical systems in general and the emergence of autocatalytic sets in particular. This provides an important first step in combining experimental and theoretical work on autocatalytic sets in the context of the origin of life.
1210.8406
Natasha Cayco Gajic
Natasha Cayco Gajic, Eric Shea-Brown
Neutral stability, rate propagation, and critical branching in feedforward networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarifying their origin and relationship in random, feedforward networks of McCulloch-Pitts neurons. Two key parameters are the thresholds at which neurons fire spikes, and the overall level of feedforward connectivity. When neurons have low thresholds, we show that there is always a connectivity for which the properties in question all occur: that is, these networks preserve overall firing rates from layer to layer and produce broad distributions of activity in each layer. This fails to occur, however, when neurons have high thresholds. A key tool in explaining this difference is the eigenstructure of the resulting mean-field Markov chain, as this reveals which activity modes will be preserved from layer to layer. We extend our analysis from purely excitatory networks to more complex models that include inhibition and 'local' noise, and find that both of these features extend the parameter ranges over which networks produce the properties of interest.
[ { "created": "Wed, 31 Oct 2012 17:29:31 GMT", "version": "v1" } ]
2012-11-01
[ [ "Gajic", "Natasha Cayco", "" ], [ "Shea-Brown", "Eric", "" ] ]
Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarifying their origin and relationship in random, feedforward networks of McCulloch-Pitts neurons. Two key parameters are the thresholds at which neurons fire spikes, and the overall level of feedforward connectivity. When neurons have low thresholds, we show that there is always a connectivity for which the properties in question all occur: that is, these networks preserve overall firing rates from layer to layer and produce broad distributions of activity in each layer. This fails to occur, however, when neurons have high thresholds. A key tool in explaining this difference is the eigenstructure of the resulting mean-field Markov chain, as this reveals which activity modes will be preserved from layer to layer. We extend our analysis from purely excitatory networks to more complex models that include inhibition and 'local' noise, and find that both of these features extend the parameter ranges over which networks produce the properties of interest.
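The mean-field Markov chain mentioned in the record above can be sketched numerically. The parameterization below (Bernoulli connectivity c, integer firing threshold theta, layer size N) is an illustrative assumption rather than the paper's exact model.

import numpy as np
from scipy.stats import binom

N, c, theta = 100, 0.1, 1

# Probability a next-layer neuron fires given k active neurons in the current layer:
# its active-input count is Binomial(k, c), and it fires if that count reaches theta.
p_fire = np.array([1.0 - binom.cdf(theta - 1, k, c) for k in range(N + 1)])

# Row-stochastic transition matrix over layer activity counts: T[k, j] = P(j active next | k active now).
T = np.array([binom.pmf(np.arange(N + 1), N, p) for p in p_fire])

evals = np.linalg.eigvals(T)
print("largest |eigenvalues|:", np.sort(np.abs(evals))[-3:])
mean_out = T @ np.arange(N + 1)
print("mean next-layer activity given 10 active:", mean_out[10])

Inspecting the leading eigenvalues and the map from current to expected next-layer activity gives a feel for which activity modes survive from layer to layer.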
1703.02667
Akira Kinjo
Ken Nishikawa and Akira R. Kinjo
Essential role of long non-coding RNAs in de novo chromatin modifications: The genomic address code hypothesis
12 pages, 1 figure
Biophysical Reviews Vol. 9, pp. 73-77 (2017)
10.1007/s12551-017-0259-5
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The epigenome, i.e. the whole of chromatin modifications, is transferred from mother to daughter cells during cell differentiation. When de novo chromatin modifications (establishment or erasure of, respectively, new or pre-existing DNA methylations and/or histone modifications) are made in a daughter cell, however, it has a different epigenome than its mother cell. Although de novo chromatin modifications are an important event that comprises elementary processes of cell differentiation, its molecular mechanism remains poorly understood. We argue in this Letter that a key to solving this problem lies in understanding the role of long non-coding RNAs (lncRNAs)- a type of RNA that is becoming increasingly prominent in epigenetic studies. Many studies show that lncRNAs form ribonucleo-protein complexes in the nucleus and are involved in chromatin modifications. However, chromatin-modifying enzymes lack the information about genomic positions on which they act. It is known, on the other hand, that a single-stranded RNA in general can bind to a double-stranded DNA to form a triple helix. If each lncRNA forms a ribonucleo-protein complex with chromatin-modifying enzymes on one hand and, at the same time, a triple helix with a genomic region based on its specific nucleotide sequence on the other hand, it can induce de novo chromatin modifications at specific sites. Thus, the great variety of lncRNAs can be explained by the requirement for the diversity of "genomic address codes" specific to their cognate genomic regions where de novo chromatin modifications take place.
[ { "created": "Wed, 8 Mar 2017 01:54:24 GMT", "version": "v1" } ]
2017-04-07
[ [ "Nishikawa", "Ken", "" ], [ "Kinjo", "Akira R.", "" ] ]
The epigenome, i.e. the whole of chromatin modifications, is transferred from mother to daughter cells during cell differentiation. When de novo chromatin modifications (establishment or erasure of, respectively, new or pre-existing DNA methylations and/or histone modifications) are made in a daughter cell, however, it has a different epigenome than its mother cell. Although de novo chromatin modifications are an important event that comprises elementary processes of cell differentiation, its molecular mechanism remains poorly understood. We argue in this Letter that a key to solving this problem lies in understanding the role of long non-coding RNAs (lncRNAs)- a type of RNA that is becoming increasingly prominent in epigenetic studies. Many studies show that lncRNAs form ribonucleo-protein complexes in the nucleus and are involved in chromatin modifications. However, chromatin-modifying enzymes lack the information about genomic positions on which they act. It is known, on the other hand, that a single-stranded RNA in general can bind to a double-stranded DNA to form a triple helix. If each lncRNA forms a ribonucleo-protein complex with chromatin-modifying enzymes on one hand and, at the same time, a triple helix with a genomic region based on its specific nucleotide sequence on the other hand, it can induce de novo chromatin modifications at specific sites. Thus, the great variety of lncRNAs can be explained by the requirement for the diversity of "genomic address codes" specific to their cognate genomic regions where de novo chromatin modifications take place.
2004.03541
Erik Hoel
Johannes Kleiner, Erik Hoel
Falsification and consciousness
21 pages, 4 figures
null
10.1093/nc/niab001
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
The search for a scientific theory of consciousness should result in theories that are falsifiable. However, here we show that falsification is especially problematic for theories of consciousness. We formally describe the standard experimental setup for testing these theories. Based on a theory's application to some physical system, such as the brain, testing requires comparing a theory's predicted experience (given some internal observables of the system like brain imaging data) with an inferred experience (using report or behavior). If there is a mismatch between inference and prediction, a theory is falsified. We show that if inference and prediction are independent, it follows that any minimally informative theory of consciousness is automatically falsified. This is deeply problematic since the field's reliance on report or behavior to infer conscious experiences implies such independence, so this fragility affects many contemporary theories of consciousness. Furthermore, we show that if inference and prediction are strictly dependent, it follows that a theory is unfalsifiable. This affects theories which claim consciousness to be determined by report or behavior. Finally, we explore possible ways out of this dilemma.
[ { "created": "Tue, 7 Apr 2020 17:07:55 GMT", "version": "v1" }, { "created": "Thu, 9 Jul 2020 22:43:19 GMT", "version": "v2" }, { "created": "Wed, 28 Apr 2021 15:47:09 GMT", "version": "v3" } ]
2021-04-29
[ [ "Kleiner", "Johannes", "" ], [ "Hoel", "Erik", "" ] ]
The search for a scientific theory of consciousness should result in theories that are falsifiable. However, here we show that falsification is especially problematic for theories of consciousness. We formally describe the standard experimental setup for testing these theories. Based on a theory's application to some physical system, such as the brain, testing requires comparing a theory's predicted experience (given some internal observables of the system like brain imaging data) with an inferred experience (using report or behavior). If there is a mismatch between inference and prediction, a theory is falsified. We show that if inference and prediction are independent, it follows that any minimally informative theory of consciousness is automatically falsified. This is deeply problematic since the field's reliance on report or behavior to infer conscious experiences implies such independence, so this fragility affects many contemporary theories of consciousness. Furthermore, we show that if inference and prediction are strictly dependent, it follows that a theory is unfalsifiable. This affects theories which claim consciousness to be determined by report or behavior. Finally, we explore possible ways out of this dilemma.
2406.05540
Yiqing Shen
Yiqing Shen, Zan Chen, Michail Mamalakis, Luhan He, Haiyang Xia, Tianbin Li, Yanzhou Su, Junjun He, Yu Guang Wang
A Fine-tuning Dataset and Benchmark for Large Language Models for Protein Understanding
null
null
null
null
q-bio.QM cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
The parallels between protein sequences and natural language in their sequential structures have inspired the application of large language models (LLMs) to protein understanding. Despite the success of LLMs in NLP, their effectiveness in comprehending protein sequences remains an open question, largely due to the absence of datasets linking protein sequences to descriptive text. Researchers have then attempted to adapt LLMs for protein understanding by integrating a protein sequence encoder with a pre-trained LLM. However, this adaptation raises a fundamental question: "Can LLMs, originally designed for NLP, effectively comprehend protein sequences as a form of language?" Current datasets fall short in addressing this question due to the lack of a direct correlation between protein sequences and corresponding text descriptions, limiting the ability to train and evaluate LLMs for protein understanding effectively. To bridge this gap, we introduce ProteinLMDataset, a dataset specifically designed for further self-supervised pretraining and supervised fine-tuning (SFT) of LLMs to enhance their capability for protein sequence comprehension. Specifically, ProteinLMDataset includes 17.46 billion tokens for pretraining and 893,000 instructions for SFT. Additionally, we present ProteinLMBench, the first benchmark dataset consisting of 944 manually verified multiple-choice questions for assessing the protein understanding capabilities of LLMs. ProteinLMBench incorporates protein-related details and sequences in multiple languages, establishing a new standard for evaluating LLMs' abilities in protein comprehension. The large language model InternLM2-7B, pretrained and fine-tuned on the ProteinLMDataset, outperforms GPT-4 on ProteinLMBench, achieving the highest accuracy score.
[ { "created": "Sat, 8 Jun 2024 18:11:30 GMT", "version": "v1" }, { "created": "Mon, 8 Jul 2024 16:39:35 GMT", "version": "v2" } ]
2024-07-09
[ [ "Shen", "Yiqing", "" ], [ "Chen", "Zan", "" ], [ "Mamalakis", "Michail", "" ], [ "He", "Luhan", "" ], [ "Xia", "Haiyang", "" ], [ "Li", "Tianbin", "" ], [ "Su", "Yanzhou", "" ], [ "He", "Junjun", "" ], [ "Wang", "Yu Guang", "" ] ]
The parallels between protein sequences and natural language in their sequential structures have inspired the application of large language models (LLMs) to protein understanding. Despite the success of LLMs in NLP, their effectiveness in comprehending protein sequences remains an open question, largely due to the absence of datasets linking protein sequences to descriptive text. Researchers have then attempted to adapt LLMs for protein understanding by integrating a protein sequence encoder with a pre-trained LLM. However, this adaptation raises a fundamental question: "Can LLMs, originally designed for NLP, effectively comprehend protein sequences as a form of language?" Current datasets fall short in addressing this question due to the lack of a direct correlation between protein sequences and corresponding text descriptions, limiting the ability to train and evaluate LLMs for protein understanding effectively. To bridge this gap, we introduce ProteinLMDataset, a dataset specifically designed for further self-supervised pretraining and supervised fine-tuning (SFT) of LLMs to enhance their capability for protein sequence comprehension. Specifically, ProteinLMDataset includes 17.46 billion tokens for pretraining and 893,000 instructions for SFT. Additionally, we present ProteinLMBench, the first benchmark dataset consisting of 944 manually verified multiple-choice questions for assessing the protein understanding capabilities of LLMs. ProteinLMBench incorporates protein-related details and sequences in multiple languages, establishing a new standard for evaluating LLMs' abilities in protein comprehension. The large language model InternLM2-7B, pretrained and fine-tuned on the ProteinLMDataset, outperforms GPT-4 on ProteinLMBench, achieving the highest accuracy score.
1209.3189
J\"org Ackermann
Alexander Schmitz, Tim Schäfer, Hendrik Schäfer, Claudia Döring, Jörg Ackermann, Norbert Dichter, Sylvia Hartmann, Martin-Leo Hansmann, and Ina Koch
Automated Image Analysis of Hodgkin lymphoma
12 pages, 5 figures, Dagstuhl Seminar 12291 "Structure Discovery in Biology: Motifs, Networks & Phylogenies"
null
null
null
q-bio.TO physics.med-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hodgkin lymphoma is an unusual type of lymphoma, arising from malignant B-cells. Morphological and immunohistochemical features of malignant cells and their distribution differ from other cancer types. Based on systematic tissue image analysis, computer-aided exploration can provide new insights into Hodgkin lymphoma pathology. In this paper, we report results from an image analysis of CD30 immunostained Hodgkin lymphoma tissue section images. To the best of our knowledge, this is the first systematic application of image analysis to a set of tissue sections of Hodgkin lymphoma. We have implemented an automatic procedure to handle and explore image data in Aperio's SVS format. We use pre-processing approaches on a down-scaled image to separate the image objects from the background. Then, we apply a supervised classification method to assign pixels to predefined classes. Our pre-processing method is able to separate the tissue content of images from the image background. We analyzed three immunohistologically defined groups, non-lymphoma and the two most common forms of Hodgkin lymphoma, nodular sclerosis and mixed cellularity type. We found that nodular sclerosis and non-lymphoma images exhibit different amounts of CD30 stain, whereas mixed cellularity type exhibits a large variance and overlaps with the other groups. The results can be seen as a first step to computationally identify tumor regions in the images. This allows us to focus on these regions when performing computationally expensive tasks like object detection in the high-resolution image.
[ { "created": "Fri, 14 Sep 2012 13:42:49 GMT", "version": "v1" } ]
2012-09-17
[ [ "Schmitz", "Alexander", "" ], [ "Schäfer", "Tim", "" ], [ "Schäfer", "Hendrik", "" ], [ "Döring", "Claudia", "" ], [ "Ackermann", "Jörg", "" ], [ "Dichter", "Norbert", "" ], [ "Hartmann", "Sylvia", "" ], [ "Hansmann", "Martin-Leo", "" ], [ "Koch", "Ina", "" ] ]
Hodgkin lymphoma is an unusual type of lymphoma, arising from malignant B-cells. Morphological and immunohistochemical features of malignant cells and their distribution differ from other cancer types. Based on systematic tissue image analysis, computer-aided exploration can provide new insights into Hodgkin lymphoma pathology. In this paper, we report results from an image analysis of CD30 immunostained Hodgkin lymphoma tissue section images. To the best of our knowledge, this is the first systematic application of image analysis to a set of tissue sections of Hodgkin lymphoma. We have implemented an automatic procedure to handle and explore image data in Aperio's SVS format. We use pre-processing approaches on a down-scaled image to separate the image objects from the background. Then, we apply a supervised classification method to assign pixels to predefined classes. Our pre-processing method is able to separate the tissue content of images from the image background. We analyzed three immunohistologically defined groups, non-lymphoma and the two most common forms of Hodgkin lymphoma, nodular sclerosis and mixed cellularity type. We found that nodular sclerosis and non-lymphoma images exhibit different amounts of CD30 stain, whereas mixed cellularity type exhibits a large variance and overlaps with the other groups. The results can be seen as a first step to computationally identify tumor regions in the images. This allows us to focus on these regions when performing computationally expensive tasks like object detection in the high-resolution image.
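A generic version of the pre-processing step described in the record above, separating tissue from background on a down-scaled grayscale image by global thresholding, can be sketched as follows. The synthetic image and the Otsu threshold stand in for the authors' actual SVS-handling pipeline.

import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
slide = rng.normal(0.9, 0.02, size=(512, 512))                 # bright background
slide[100:400, 150:450] = rng.normal(0.5, 0.05, (300, 300))    # darker "tissue" region

small = slide[::4, ::4]          # crude down-scaling by subsampling
t = threshold_otsu(small)        # global threshold between the two intensity modes
tissue_mask = small < t          # stained tissue is darker than the background
print("tissue fraction:", tissue_mask.mean())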
1107.5124
Shivendra Tewari
Shivendra Tewari and Kaushik Majumdar
A Mathematical model for Astrocytes mediated LTP at Single Hippocampal Synapses
51 pages, 15 figures, Journal of Computational Neuroscience (to appear)
null
10.1007/s10827-012-0389-5
null
q-bio.NC math.DS q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many contemporary studies have shown that astrocytes play a significant role in modulating both short and long forms of synaptic plasticity. There are very few experimental models which elucidate the role of astrocytes in long-term potentiation (LTP). Recently, Perea & Araque (2007) demonstrated a role of astrocytes in induction of LTP at single hippocampal synapses. They suggested a purely pre-synaptic basis for induction of this N-methyl-D-aspartate (NMDA) receptor-independent LTP. Also, the mechanisms underlying this pre-synaptic induction were not investigated. Here, in this article, we propose a mathematical model for astrocyte-modulated LTP which successfully emulates the experimental findings of Perea & Araque (2007). Our study suggests a role for retrograde messengers, possibly nitric oxide (NO), in this pre-synaptically modulated LTP.
[ { "created": "Tue, 26 Jul 2011 06:23:36 GMT", "version": "v1" }, { "created": "Sun, 4 Dec 2011 18:24:52 GMT", "version": "v2" }, { "created": "Thu, 23 Feb 2012 01:52:50 GMT", "version": "v3" }, { "created": "Mon, 12 Mar 2012 14:02:54 GMT", "version": "v4" } ]
2012-06-05
[ [ "Tewari", "Shivendra", "" ], [ "Majumdar", "Kaushik", "" ] ]
Many contemporary studies have shown that astrocytes play a significant role in modulating both short and long forms of synaptic plasticity. There are very few experimental models which elucidate the role of astrocytes in long-term potentiation (LTP). Recently, Perea & Araque (2007) demonstrated a role of astrocytes in induction of LTP at single hippocampal synapses. They suggested a purely pre-synaptic basis for induction of this N-methyl-D-aspartate (NMDA) receptor-independent LTP. Also, the mechanisms underlying this pre-synaptic induction were not investigated. Here, in this article, we propose a mathematical model for astrocyte-modulated LTP which successfully emulates the experimental findings of Perea & Araque (2007). Our study suggests a role for retrograde messengers, possibly nitric oxide (NO), in this pre-synaptically modulated LTP.
0806.3978
Vincent Vu
Vincent Q. Vu, Bin Yu, Robert E. Kass
Information In The Non-Stationary Case
null
null
null
null
q-bio.NC cs.IT math.IT q-bio.QM stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information estimates such as the ``direct method'' of Strong et al. (1998) sidestep the difficult problem of estimating the joint distribution of response and stimulus by instead estimating the difference between the marginal and conditional entropies of the response. While this is an effective estimation strategy, it tempts the practitioner to ignore the role of the stimulus and the meaning of mutual information. We show here that, as the number of trials increases indefinitely, the direct (or ``plug-in'') estimate of marginal entropy converges (with probability 1) to the entropy of the time-averaged conditional distribution of the response, and the direct estimate of the conditional entropy converges to the time-averaged entropy of the conditional distribution of the response. Under joint stationarity and ergodicity of the response and stimulus, the difference of these quantities converges to the mutual information. When the stimulus is deterministic or non-stationary the direct estimate of information no longer estimates mutual information, which is no longer meaningful, but it remains a measure of variability of the response distribution across time.
[ { "created": "Tue, 24 Jun 2008 20:13:08 GMT", "version": "v1" }, { "created": "Fri, 18 Jul 2008 23:29:00 GMT", "version": "v2" } ]
2008-07-19
[ [ "Vu", "Vincent Q.", "" ], [ "Yu", "Bin", "" ], [ "Kass", "Robert E.", "" ] ]
Information estimates such as the ``direct method'' of Strong et al. (1998) sidestep the difficult problem of estimating the joint distribution of response and stimulus by instead estimating the difference between the marginal and conditional entropies of the response. While this is an effective estimation strategy, it tempts the practitioner to ignore the role of the stimulus and the meaning of mutual information. We show here that, as the number of trials increases indefinitely, the direct (or ``plug-in'') estimate of marginal entropy converges (with probability 1) to the entropy of the time-averaged conditional distribution of the response, and the direct estimate of the conditional entropy converges to the time-averaged entropy of the conditional distribution of the response. Under joint stationarity and ergodicity of the response and stimulus, the difference of these quantities converges to the mutual information. When the stimulus is deterministic or non-stationary the direct estimate of information no longer estimates mutual information, which is no longer meaningful, but it remains a measure of variability of the response distribution across time.
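The plug-in (direct-method-style) estimate discussed in the record above is easy to write down explicitly. The discretization into response "words" and the toy data below are illustrative assumptions.

import numpy as np
from collections import Counter

def plugin_entropy(samples):
    """Plug-in (empirical) entropy, in bits, of a list of discrete symbols."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# responses[trial][time] holds a discretized response "word" per time bin.
rng = np.random.default_rng(0)
responses = [[rng.integers(0, 4) for _ in range(50)] for _ in range(200)]

# Marginal entropy: pool all words across trials and time bins.
H_marginal = plugin_entropy([w for trial in responses for w in trial])
# Conditional (noise) entropy: entropy at each time bin across trials, then average over bins.
H_conditional = np.mean([plugin_entropy([trial[t] for trial in responses])
                         for t in range(50)])
print("direct-method information estimate (bits):", H_marginal - H_conditional)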
2109.02172
Wera M Schmerer
Wera M Schmerer
Optimized protocol for DNA extraction from ancient skeletal remains using Chelex-100
8 pages, 1 figure, 2 tables, optimized protocol as supplement
null
10.33545/27074447.2021.v3.i1a.35
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
PCR-based analysis of skeletonized human remains is a common aspect of both forensic human identification and ancient DNA research. The two areas not only use very similar methodology but also share the same problems regarding the quantity and quality of recovered DNA and the presence of inhibitory substances in samples from excavated remains. The development of optimized DNA extraction procedures that enable amplification-based analysis of such remains is thus a critical factor in both areas. This study presents an optimized protocol for DNA extraction from ancient skeletonized remains using Chelex-100, which proved effective in yielding amplifiable extracts from sample material excavated after centuries in a soil environment, and which consequently has a high inhibitor content and overall limited DNA preservation. The success of the optimization strategies is shown by significantly improved amplification outcomes compared to the predecessor method.
[ { "created": "Sun, 5 Sep 2021 21:39:54 GMT", "version": "v1" } ]
2021-12-07
[ [ "Schmerer", "Wera M", "" ] ]
PCR-based analysis of skeletonized human remains is a common aspect of both forensic human identification and ancient DNA research. The two areas not only use very similar methodology but also share the same problems regarding the quantity and quality of recovered DNA and the presence of inhibitory substances in samples from excavated remains. The development of optimized DNA extraction procedures that enable amplification-based analysis of such remains is thus a critical factor in both areas. This study presents an optimized protocol for DNA extraction from ancient skeletonized remains using Chelex-100, which proved effective in yielding amplifiable extracts from sample material excavated after centuries in a soil environment, and which consequently has a high inhibitor content and overall limited DNA preservation. The success of the optimization strategies is shown by significantly improved amplification outcomes compared to the predecessor method.
1511.07652
Thomas R. Weikl
Guang-Kui Xu, Jinglei Hu, Reinhard Lipowsky, and Thomas R. Weikl
Binding constants of membrane-anchored receptors and ligands: a general theory corroborated by Monte Carlo simulations
17 pages, 11 figures; to appear in J. Chem. Phys
null
10.1063/1.4936134
null
q-bio.BM physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adhesion processes of biological membranes that enclose cells and cellular organelles are essential for immune responses, tissue formation, and signaling. These processes depend sensitively on the binding constant K2D of the membrane-anchored receptor and ligand proteins that mediate adhesion, which is difficult to measure in the 'two-dimensional' (2D) membrane environment of the proteins. An important problem therefore is to relate K2D to the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in three dimensions (3D). In this article, we present a general theory for the binding constants K2D and K3D of rather stiff proteins whose main degrees of freedom are translation and rotation, along membranes and around anchor points 'in 2D', or unconstrained 'in 3D'. The theory generalizes previous results by describing how K2D depends both on the average separation and thermal nanoscale roughness of the apposing membranes, and on the length and anchoring flexibility of the receptors and ligands. Our theoretical results for the ratio K2D/K3D of the binding constants agree with detailed results from Monte Carlo simulations without any data fitting, which indicates that the theory captures the essential features of the 'dimensionality reduction' due to membrane anchoring. In our Monte Carlo simulations, we consider a novel coarse-grained model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces, and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate at their anchor points.
[ { "created": "Tue, 24 Nov 2015 11:18:05 GMT", "version": "v1" } ]
2015-11-30
[ [ "Xu", "Guang-Kui", "" ], [ "Hu", "Jinglei", "" ], [ "Lipowsky", "Reinhard", "" ], [ "Weikl", "Thomas R.", "" ] ]
Adhesion processes of biological membranes that enclose cells and cellular organelles are essential for immune responses, tissue formation, and signaling. These processes depend sensitively on the binding constant K2D of the membrane-anchored receptor and ligand proteins that mediate adhesion, which is difficult to measure in the 'two-dimensional' (2D) membrane environment of the proteins. An important problem therefore is to relate K2D to the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in three dimensions (3D). In this article, we present a general theory for the binding constants K2D and K3D of rather stiff proteins whose main degrees of freedom are translation and rotation, along membranes and around anchor points 'in 2D', or unconstrained 'in 3D'. The theory generalizes previous results by describing how K2D depends both on the average separation and thermal nanoscale roughness of the apposing membranes, and on the length and anchoring flexibility of the receptors and ligands. Our theoretical results for the ratio K2D/K3D of the binding constants agree with detailed results from Monte Carlo simulations without any data fitting, which indicates that the theory captures the essential features of the 'dimensionality reduction' due to membrane anchoring. In our Monte Carlo simulations, we consider a novel coarse-grained model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces, and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate at their anchor points.
2108.13823
Saurav Mandal PhD
Hanuman Verma, Saurav Mandal, Akshansh Gupta
Temporal Deep Learning Architecture for Prediction of COVID-19 Cases in India
13 pages
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
To combat the recent coronavirus disease 2019 (COVID-19), academicians and clinicians are in search of new approaches to predict the dynamic trends of the COVID-19 outbreak that may slow down or stop the pandemic. Epidemiological models like Susceptible-Infected-Recovered (SIR) and its variants are helpful for understanding the dynamic trends of the pandemic and may be used in decision making to optimize possible controls of the infectious disease. But these epidemiological models, based on mathematical assumptions, may not predict the real pandemic situation. Recently, new machine learning approaches have been used to understand the dynamic trend of COVID-19 spread. In this paper, we designed recurrent and convolutional neural network models: vanilla LSTM, stacked LSTM, ED-LSTM, Bi-LSTM, CNN, and a hybrid CNN+LSTM model to capture the complex trend of the COVID-19 outbreak and to forecast COVID-19 daily confirmed cases 7, 14, and 21 days ahead for India and its four most affected states (Maharashtra, Kerala, Karnataka, and Tamil Nadu). The root mean square error (RMSE) and mean absolute percentage error (MAPE) evaluation metrics are computed on the testing data to demonstrate the relative performance of these models. The results show that the stacked LSTM and hybrid CNN+LSTM models perform best relative to other models.
[ { "created": "Tue, 31 Aug 2021 13:28:51 GMT", "version": "v1" } ]
2021-09-01
[ [ "Verma", "Hanuman", "" ], [ "Mandal", "Saurav", "" ], [ "Gupta", "Akshansh", "" ] ]
To combat the recent coronavirus disease 2019 (COVID-19), academicians and clinicians are in search of new approaches to predict the dynamic trends of the COVID-19 outbreak that may slow down or stop the pandemic. Epidemiological models like Susceptible-Infected-Recovered (SIR) and its variants are helpful for understanding the dynamic trends of the pandemic and may be used in decision making to optimize possible controls of the infectious disease. But these epidemiological models, based on mathematical assumptions, may not predict the real pandemic situation. Recently, new machine learning approaches have been used to understand the dynamic trend of COVID-19 spread. In this paper, we designed recurrent and convolutional neural network models: vanilla LSTM, stacked LSTM, ED-LSTM, Bi-LSTM, CNN, and a hybrid CNN+LSTM model to capture the complex trend of the COVID-19 outbreak and to forecast COVID-19 daily confirmed cases 7, 14, and 21 days ahead for India and its four most affected states (Maharashtra, Kerala, Karnataka, and Tamil Nadu). The root mean square error (RMSE) and mean absolute percentage error (MAPE) evaluation metrics are computed on the testing data to demonstrate the relative performance of these models. The results show that the stacked LSTM and hybrid CNN+LSTM models perform best relative to other models.
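As a sketch of the stacked-LSTM variant named in the record above, using tf.keras; the window length, layer sizes and the synthetic case-count series are illustrative assumptions, not the authors' configuration.

import numpy as np
import tensorflow as tf

window = 14

def make_windows(series, window):
    """Slice a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X[..., None], y

series = np.cumsum(np.random.default_rng(0).poisson(50, size=300)).astype("float32")
series /= series.max()                 # crude normalisation of cumulative counts
X, y = make_windows(series, window)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # first LSTM layer
    tf.keras.layers.LSTM(32),                           # second (stacked) layer
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("RMSE on the toy series:", float(np.sqrt(model.evaluate(X, y, verbose=0))))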
1808.05865
Paul Moore
P.J. Moore, J. Gallacher, T.J. Lyons
Using path signatures to predict a diagnosis of Alzheimer's disease
5 pages, 3 figures. arXiv admin note: text overlap with arXiv:1808.03273
null
10.1371/journal.pone.0222212
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
The path signature is a means of feature generation that can encode nonlinear interactions in the data as well as the usual linear features. It can distinguish the ordering of time-sequenced changes: for example, whether the hippocampus shrinks fast, then slowly, or the converse. It provides interpretable features and its output is a fixed length vector irrespective of the number of input points, so it can encode longitudinal data of varying length and with missing data points. In this paper we demonstrate the use of the path signature to provide features that distinguish a set of people with Alzheimer's disease from a matched set of healthy individuals. The data used are volume measurements of the whole brain, ventricles and hippocampus from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The path signature method is shown to be a useful tool for the processing of sequential data, which is becoming increasingly available as monitoring technologies are applied.
[ { "created": "Thu, 16 Aug 2018 15:41:58 GMT", "version": "v1" } ]
2020-07-01
[ [ "Moore", "P. J.", "" ], [ "Gallacher", "J.", "" ], [ "Lyons", "T. J.", "" ] ]
The path signature is a means of feature generation that can encode nonlinear interactions in the data as well as the usual linear features. It can distinguish the ordering of time-sequenced changes: for example, whether the hippocampus shrinks fast, then slowly, or the converse. It provides interpretable features and its output is a fixed length vector irrespective of the number of input points, so it can encode longitudinal data of varying length and with missing data points. In this paper we demonstrate the use of the path signature to provide features that distinguish a set of people with Alzheimer's disease from a matched set of healthy individuals. The data used are volume measurements of the whole brain, ventricles and hippocampus from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The path signature method is shown to be a useful tool for the processing of sequential data, which is becoming increasingly available as monitoring technologies are applied.
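The low-order signature terms referred to above can be computed directly for a piecewise-linear path. This hand-rolled depth-2 version is only a sketch (dedicated packages such as iisignature compute signatures to arbitrary depth), and the toy longitudinal record is an invented example.

import numpy as np

def signature_level2(path):
    """Depth-2 path signature of a piecewise-linear path (rows = time points).

    Returns the level-1 terms (total increments) and the level-2 iterated
    integrals, accumulated segment by segment via Chen's identity.
    """
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for a, b in zip(path[:-1], path[1:]):
        inc = b - a
        s2 += np.outer(s1, inc) + 0.5 * np.outer(inc, inc)  # Chen's relation for one segment
        s1 += inc
    return s1, s2

# Toy longitudinal record: (age, hippocampal volume) measured at four visits.
path = np.array([[70.0, 3.9], [71.0, 3.7], [72.5, 3.6], [74.0, 3.2]])
s1, s2 = signature_level2(path)
print("level 1:", s1)
print("level 2 (antisymmetric part is the Levy area):", s2)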
1309.3002
Ron Nielsen
Ron W Nielsen
The Late-Pleistocene extinction of megafauna compared with the growth of human population
3 figures, 1 table, 7 pages, 3393 words. This version contains the latest new information about the growth of human population.
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-dependent distribution of the global extinction of megafauna is compared with the growth of human population. There is no correlation between the two processes. Furthermore, the size of human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna.
[ { "created": "Wed, 11 Sep 2013 23:33:25 GMT", "version": "v1" }, { "created": "Fri, 13 Sep 2013 05:08:25 GMT", "version": "v2" }, { "created": "Thu, 19 Sep 2013 23:17:46 GMT", "version": "v3" }, { "created": "Fri, 25 Oct 2013 03:11:30 GMT", "version": "v4" }, { "created": "Sat, 18 Nov 2017 05:05:02 GMT", "version": "v5" }, { "created": "Thu, 23 Nov 2017 10:01:28 GMT", "version": "v6" } ]
2017-11-27
[ [ "Nielsen", "Ron W", "" ] ]
Time-dependent distribution of the global extinction of megafauna is compared with the growth of human population. There is no correlation between the two processes. Furthermore, the size of human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna.
1705.05990
Matthew Mizuhara
Matthew S. Mizuhara, Leonid Berlyand, Igor S. Aronson
Minimal Model of Directed Cell Motility on Patterned Substrates
13 pages, 3 figures
Phys. Rev. E 96, 052408 (2017)
10.1103/PhysRevE.96.052408
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crawling cell motility is vital to many biological processes such as wound healing and the immune response. Using a minimal model we investigate the effects of patterned substrate adhesiveness and biophysical cell parameters on the direction of cell motion. We show that cells with low adhesion site formation rates may move perpendicular to adhesive stripes, while those with high adhesion site formation rates move only parallel to the substrate stripes. We explore the effects of varying the substrate pattern and the strength of actin polymerization on the directionality of the crawling cell; these results have applications in motile cell sorting and guiding on engineered substrates.
[ { "created": "Wed, 17 May 2017 02:30:02 GMT", "version": "v1" } ]
2017-11-22
[ [ "Mizuhara", "Matthew S.", "" ], [ "Berlyand", "Leonid", "" ], [ "Aronson", "Igor S.", "" ] ]
Crawling cell motility is vital to many biological processes such as wound healing and the immune response. Using a minimal model we investigate the effects of patterned substrate adhesiveness and biophysical cell parameters on the direction of cell motion. We show that cells with low adhesion site formation rates may move perpendicular to adhesive stripes, while those with high adhesion site formation rates move only parallel to the substrate stripes. We explore the effects of varying the substrate pattern and the strength of actin polymerization on the directionality of the crawling cell; these results have applications in motile cell sorting and guiding on engineered substrates.
1305.3830
Yann Ponty
Yu Zhou (LRI, CMM), Yann Ponty (LIX, INRIA Saclay - Ile de France), Stéphane Vialette (LIGM), Jérôme Waldispühl, Yi Zhang, Alain Denise (LRI, INRIA Saclay - Ile de France, IGM)
Flexible RNA design under structure and sequence constraints using formal languages
ACM BCB 2013 - ACM Conference on Bioinformatics, Computational Biology and Biomedical Informatics (2013)
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of RNA secondary structure design (also called inverse folding) is the following: given a target secondary structure, one aims to create a sequence that folds into, or is compatible with, a given structure. In several practical applications in biology, additional constraints must be taken into account, such as the presence/absence of regulatory motifs, either at a specific location or anywhere in the sequence. In this study, we investigate the design of RNA sequences from their targeted secondary structure, given these additional sequence constraints. To this purpose, we develop a general framework based on concepts of language theory, namely context-free grammars and finite automata. We efficiently combine a comprehensive set of constraints into a unifying context-free grammar of moderate size. From there, we use generic algorithms to perform a (weighted) random generation, or an exhaustive enumeration, of candidate sequences. The resulting method, whose complexity scales linearly with the length of the RNA, was implemented as a standalone program. The resulting software was embedded into a publicly available dedicated web server. The applicability of the method is demonstrated on a concrete case study dedicated to Exon Splicing Enhancers, in which our approach was successfully used in the design of \emph{in vitro} experiments.
[ { "created": "Thu, 16 May 2013 14:51:46 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2013 18:40:19 GMT", "version": "v2" } ]
2013-08-02
[ [ "Zhou", "Yu", "", "LRI, CMM" ], [ "Ponty", "Yann", "", "LIX, INRIA Saclay - Ile de France" ], [ "Vialette", "Stéphane", "", "LIGM" ], [ "Waldispühl", "Jérôme", "", "LRI, INRIA Saclay - Ile de France, IGM" ], [ "Zhang", "Yi", "", "LRI, INRIA Saclay - Ile de France, IGM" ], [ "Denise", "Alain", "", "LRI, INRIA Saclay - Ile de France, IGM" ] ]
The problem of RNA secondary structure design (also called inverse folding) is the following: given a target secondary structure, one aims to create a sequence that folds into, or is compatible with, a given structure. In several practical applications in biology, additional constraints must be taken into account, such as the presence/absence of regulatory motifs, either at a specific location or anywhere in the sequence. In this study, we investigate the design of RNA sequences from their targeted secondary structure, given these additional sequence constraints. To this purpose, we develop a general framework based on concepts of language theory, namely context-free grammars and finite automata. We efficiently combine a comprehensive set of constraints into a unifying context-free grammar of moderate size. From there, we use generic algorithms to perform a (weighted) random generation, or an exhaustive enumeration, of candidate sequences. The resulting method, whose complexity scales linearly with the length of the RNA, was implemented as a standalone program. The resulting software was embedded into a publicly available dedicated web server. The applicability of the method is demonstrated on a concrete case study dedicated to Exon Splicing Enhancers, in which our approach was successfully used in the design of \emph{in vitro} experiments.
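The record above describes compiling structure and sequence constraints into a single context-free grammar and then sampling or enumerating candidate sequences. As a rough illustration of constrained design (not the paper's grammar-based algorithm, which builds a unified grammar and uses weighted random generation), the sketch below samples sequences compatible with a target dot-bracket structure and rejects those containing a forbidden motif; the pair weights, forbidden motif, and rejection-sampling strategy are all illustrative assumptions.

```python
# Toy illustration of constrained RNA design (NOT the paper's grammar-based method).
# Here, sequences compatible with a target secondary structure are sampled with
# weighted base pairs, and candidates containing a forbidden motif are rejected.

import random

PAIRS = ["AU", "UA", "GC", "CG", "GU", "UG"]   # allowed base pairs
PAIR_WEIGHTS = [1, 1, 3, 3, 1, 1]              # e.g. favour GC pairs (assumed weights)
BASES = "ACGU"

def sample_compatible(structure, forbidden="GGGG", max_tries=10000):
    """Sample a sequence compatible with a dot-bracket structure while avoiding
    a forbidden motif, by weighted rejection sampling."""
    # match parentheses to find paired positions
    stack, partner = [], {}
    for i, c in enumerate(structure):
        if c == "(":
            stack.append(i)
        elif c == ")":
            j = stack.pop()
            partner[i], partner[j] = j, i
    for _ in range(max_tries):
        seq = [None] * len(structure)
        for i, c in enumerate(structure):
            if c == ".":
                seq[i] = random.choice(BASES)
            elif c == "(":                      # fill both ends of the pair at once
                p = random.choices(PAIRS, weights=PAIR_WEIGHTS)[0]
                seq[i], seq[partner[i]] = p[0], p[1]
        seq = "".join(seq)
        if forbidden not in seq:
            return seq
    raise RuntimeError("no compatible sequence found")

if __name__ == "__main__":
    print(sample_compatible("((((....))))"))
```

A grammar-based generator would satisfy the motif constraints by construction rather than by rejection, which is one reason the published method can scale linearly with sequence length.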
1807.04228
Henrik Jeldtoft Jensen
Henrik Jeldtoft Jensen
Tangled Nature: A model of emergent structure and temporal mode among co-evolving agents
Invited contribution to Focus on Complexity in European Journal of Physics. 25 pages, 1 figure
null
10.1088/1361-6404/aaee8f
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the systems-level behaviour of many interacting agents is challenging in various ways; here we focus on how the interaction between components can lead to hierarchical structures with different types of dynamics, or causations, at different levels. We use the Tangled Nature model to discuss the co-evolutionary aspects connecting the microscopic level of the individual to the macroscopic systems level. At the microscopic level the individual agent may undergo evolutionary changes due to mutations of strategies. The micro-dynamics always run at a constant rate. Nevertheless, the systems-level dynamics exhibit a completely different type of intermittent, abrupt dynamics, in which major upheavals keep throwing the system between meta-stable configurations. These dramatic transitions are described by log-Poisson time statistics. The long-time effect is a collective adaptation of the ecological network. We discuss the ecological and macroevolutionary consequences of the adaptive dynamics and briefly describe work using the Tangled Nature framework to analyse problems in economics, sociology, innovation and sustainability.
[ { "created": "Wed, 11 Jul 2018 16:18:17 GMT", "version": "v1" } ]
2018-12-26
[ [ "Jensen", "Henrik Jeldtoft", "" ] ]
Understanding the systems-level behaviour of many interacting agents is challenging in various ways; here we focus on how the interaction between components can lead to hierarchical structures with different types of dynamics, or causations, at different levels. We use the Tangled Nature model to discuss the co-evolutionary aspects connecting the microscopic level of the individual to the macroscopic systems level. At the microscopic level the individual agent may undergo evolutionary changes due to mutations of strategies. The micro-dynamics always run at a constant rate. Nevertheless, the systems-level dynamics exhibit a completely different type of intermittent, abrupt dynamics, in which major upheavals keep throwing the system between meta-stable configurations. These dramatic transitions are described by log-Poisson time statistics. The long-time effect is a collective adaptation of the ecological network. We discuss the ecological and macroevolutionary consequences of the adaptive dynamics and briefly describe work using the Tangled Nature framework to analyse problems in economics, sociology, innovation and sustainability.
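The Tangled Nature model described above couples individual-level reproduction and mutation to a web of inter-genotype interactions. The following toy sketch captures that flavour only; the random interaction web, crowding term, mutation rate and all parameter values are assumptions made for illustration and do not reproduce the original model's parameterisation.

```python
# Minimal sketch of Tangled-Nature-style dynamics (illustrative parameters only).
# Agents are binary genotypes; an agent's reproduction probability depends on a
# fixed random interaction web with the other genotypes currently present.

import math
import random
from collections import Counter

L = 10          # genome length
MU = 0.1        # per-bit mutation probability (assumed value)
P_KILL = 0.2    # death probability per update attempt
C = 100.0       # crowding / resource-limitation parameter
random.seed(0)

_J = {}         # random, fixed interaction web, built lazily
def J(a, b):
    if a == b:
        return 0.0
    if (a, b) not in _J:
        _J[(a, b)] = random.uniform(-1, 1)
    return _J[(a, b)]

def weight(a, pop):
    """Interaction-dependent fitness of genotype a in the current population."""
    n_tot = len(pop)
    counts = Counter(pop)
    h = sum(J(a, b) * n for b, n in counts.items()) / n_tot
    return h - C * n_tot / 10000.0          # crowding term

def step(pop):
    # one killing attempt and one reproduction attempt
    if len(pop) > 1 and random.random() < P_KILL:
        pop.pop(random.randrange(len(pop)))
    a = random.choice(pop)
    p_off = 1.0 / (1.0 + math.exp(-weight(a, pop)))
    if random.random() < p_off:
        child = "".join(bit if random.random() > MU else str(1 - int(bit)) for bit in a)
        pop.append(child)

pop = ["0" * L] * 50
for t in range(20000):
    step(pop)
print("population size:", len(pop), "| distinct genotypes:", len(set(pop)))
```

Even this stripped-down version shows the key structural point of the abstract: the micro-rules run at a constant rate, while the population-level composition reorganises itself through the interaction web.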
2009.08869
Wajid Arshad Abbasi
Wajid Arshad Abbasi, Syed Ali Abbas, Saiqa Andleeb
PANDA: Predicting the change in proteins binding affinity upon mutations using sequence information
null
Journal of Bioinformatics and Computational Biology, 2021
10.1142/S0219720021500153
null
q-bio.BM cs.AI cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Accurately determining a change in protein binding affinity upon mutations is important for the discovery and design of novel therapeutics and to assist mutagenesis studies. Determination of change in binding affinity upon mutations requires sophisticated, expensive, and time-consuming wet-lab experiments that can be aided with computational methods. Most of the computational prediction techniques require protein structures, which limits their applicability to protein complexes with known structures. In this work, we explore the sequence-based prediction of change in protein binding affinity upon mutation. We have used protein sequence information instead of protein structures along with machine learning techniques to accurately predict the change in protein binding affinity upon mutation. Our proposed novel sequence-based predictor of change in protein binding affinity, called PANDA, gives better accuracy than existing methods over the same validation set as well as on an external independent test dataset. On the external test dataset, our proposed method gives a maximum Pearson correlation coefficient of 0.52, in comparison to the state-of-the-art protein structure-based method called MutaBind, which gives a maximum Pearson correlation coefficient of 0.59. Our proposed protein sequence-based method to predict a change in binding affinity upon mutations has wide applicability and comparable performance in comparison to existing protein structure-based methods. A cloud-based webserver implementation of PANDA and its python code is available at https://sites.google.com/view/wajidarshad/software and https://github.com/wajidarshad/panda.
[ { "created": "Wed, 16 Sep 2020 17:12:25 GMT", "version": "v1" } ]
2021-09-01
[ [ "Abbasi", "Wajid Arshad", "" ], [ "Abbas", "Syed Ali", "" ], [ "Andleeb", "Saiqa", "" ] ]
Accurately determining a change in protein binding affinity upon mutations is important for the discovery and design of novel therapeutics and to assist mutagenesis studies. Determination of change in binding affinity upon mutations requires sophisticated, expensive, and time-consuming wet-lab experiments that can be aided with computational methods. Most of the computational prediction techniques require protein structures, which limits their applicability to protein complexes with known structures. In this work, we explore the sequence-based prediction of change in protein binding affinity upon mutation. We have used protein sequence information instead of protein structures along with machine learning techniques to accurately predict the change in protein binding affinity upon mutation. Our proposed novel sequence-based predictor of change in protein binding affinity, called PANDA, gives better accuracy than existing methods over the same validation set as well as on an external independent test dataset. On the external test dataset, our proposed method gives a maximum Pearson correlation coefficient of 0.52, in comparison to the state-of-the-art protein structure-based method called MutaBind, which gives a maximum Pearson correlation coefficient of 0.59. Our proposed protein sequence-based method to predict a change in binding affinity upon mutations has wide applicability and comparable performance in comparison to existing protein structure-based methods. A cloud-based webserver implementation of PANDA and its python code is available at https://sites.google.com/view/wajidarshad/software and https://github.com/wajidarshad/panda.
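The record above refers to a sequence-based machine-learning predictor evaluated with a Pearson correlation on held-out data. The sketch below shows that general workflow on synthetic data; the composition features, random-forest regressor, and toy dataset are illustrative assumptions, not PANDA's actual feature set or model (the real implementation is in the linked repository).

```python
# Illustrative sketch of a sequence-based binding-affinity-change regressor evaluated
# with a Pearson correlation, in the spirit of (but not identical to) PANDA.
# The features, model choice, and synthetic data below are all assumptions.

import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Amino-acid composition vector of a protein sequence (toy feature set)."""
    counts = np.array([seq.count(a) for a in AA], dtype=float)
    return counts / max(len(seq), 1)

def featurize(wild, mutant):
    # difference in composition between mutant and wild-type sequences
    return composition(mutant) - composition(wild)

# synthetic stand-in for (wild-type sequence, mutant sequence, measured affinity change)
rng = np.random.default_rng(0)
def random_seq(n=80):
    return "".join(rng.choice(list(AA), size=n))

pairs = []
for _ in range(300):
    w = random_seq()
    pos = rng.integers(len(w))
    m = w[:pos] + rng.choice(list(AA)) + w[pos + 1:]   # single point mutation
    pairs.append((w, m))
X = np.array([featurize(w, m) for w, m in pairs])
coef = rng.normal(size=X.shape[1])
y = X @ coef * 50 + rng.normal(scale=0.3, size=len(pairs))  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r, _ = pearsonr(y_te, model.predict(X_te))
print(f"Pearson correlation on held-out set: {r:.2f}")
```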
q-bio/0505043
Catherine Beauchemin
Catherine Beauchemin
Probing the Effects of the Well-mixed Assumption on Viral Infection Dynamics
LaTeX, 12 pages, 22 EPS figures, uses document class REVTeX 4, and packages float, graphics, amsmath, and SIunits
J. Theor. Biol., 242(2):464-477, 2006
10.1016/j.jtbi.2006.03.014
null
q-bio.CB q-bio.QM
null
Viral kinetics have been extensively studied in the past through the use of spatially well-mixed ordinary differential equations describing the time evolution of the diseased state. However, emerging spatial structures such as localized populations of dead cells might adversely affect the spread of infection, similar to the manner in which a counter-fire can stop a forest fire from spreading. In a previous publication (Beauchemin et al., 2005), a simple 2-D cellular automaton model was introduced and shown to be accurate enough to model an uncomplicated infection with influenza A. Here, this model is used to investigate the effects of relaxing the well-mixed assumption. Particularly, the effects of the initial distribution of infected cells, the regeneration rule for dead epithelial cells, and the proliferation rule for immune cells are explored and shown to have an important impact on the development and outcome of the viral infection in our model.
[ { "created": "Mon, 23 May 2005 18:55:42 GMT", "version": "v1" } ]
2024-04-02
[ [ "Beauchemin", "Catherine", "" ] ]
Viral kinetics have been extensively studied in the past through the use of spatially well-mixed ordinary differential equations describing the time evolution of the diseased state. However, emerging spatial structures such as localized populations of dead cells might adversely affect the spread of infection, similar to the manner in which a counter-fire can stop a forest fire from spreading. In a previous publication (Beauchemin et al., 2005), a simple 2-D cellular automaton model was introduced and shown to be accurate enough to model an uncomplicated infection with influenza A. Here, this model is used to investigate the effects of relaxing the well-mixed assumption. Particularly, the effects of the initial distribution of infected cells, the regeneration rule for dead epithelial cells, and the proliferation rule for immune cells are explored and shown to have an important impact on the development and outcome of the viral infection in our model.
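The cellular-automaton study above varies the rules for infection spread, dead-cell regeneration, and immune-cell proliferation. The minimal sketch below implements only the healthy/infected/dead part of such an automaton with made-up rates; the published model additionally tracks immune cells and uses parameters calibrated to an uncomplicated influenza A infection.

```python
# Minimal 2-D cellular-automaton sketch of a spreading infection with dead-cell
# regeneration (illustrative rules and rates only; not the published parameterisation).

import random

N = 50            # grid size
P_INFECT = 0.25   # per-infected-neighbour infection probability per step (assumed)
T_INFECTED = 6    # steps an infected cell survives before dying (assumed)
P_REGEN = 0.02    # probability a dead cell is replaced by a healthy one (assumed)

HEALTHY, INFECTED, DEAD = 0, 1, 2
random.seed(1)

grid = [[HEALTHY] * N for _ in range(N)]
age = [[0] * N for _ in range(N)]
grid[N // 2][N // 2] = INFECTED        # a single initially infected cell

def neighbours(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (i + di) % N, (j + dj) % N   # periodic boundaries (assumption)

def step():
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            s = grid[i][j]
            if s == HEALTHY:
                k = sum(grid[a][b] == INFECTED for a, b in neighbours(i, j))
                if random.random() < 1 - (1 - P_INFECT) ** k:
                    new[i][j] = INFECTED
                    age[i][j] = 0
            elif s == INFECTED:
                age[i][j] += 1
                if age[i][j] >= T_INFECTED:
                    new[i][j] = DEAD
            elif s == DEAD and random.random() < P_REGEN:
                new[i][j] = HEALTHY
    grid[:] = new

for t in range(100):
    step()
counts = {s: sum(row.count(s) for row in grid) for s in (HEALTHY, INFECTED, DEAD)}
print("healthy, infected, dead:", counts[HEALTHY], counts[INFECTED], counts[DEAD])
```

Changing the initial distribution of infected cells or the regeneration rule in a sketch like this is exactly the kind of experiment the abstract describes: localized patches of dead cells can act like fire-breaks that alter how far the infection spreads.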
1909.03801
Michael Sachs
Michael C Sachs and Arvid Sj\"olander and Erin E Gabriel
Aim for clinical utility, not just predictive accuracy
Submitted to Epidemiology
null
null
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
The predictions from an accurate prognostic model can be of great interest to patients and clinicians. When predictions are reported to individuals, they may decide to take action to improve their health or they may simply be comforted by the knowledge. However, if there is a clearly defined space of actions in the clinical context, a formal decision rule based on the prediction has the potential to have a much broader impact. Even if it is not the intended use of a developed prediction model, informal decision rules can often be found in practice. The use of a prediction-based decision rule should be formalized and compared to the standard of care in a randomized trial to assess its clinical utility; however, evidence is needed to motivate such a trial. We outline how observational data can be used to propose a decision rule based on a prognostic prediction model. We then propose a framework for emulating a prediction-driven trial to evaluate the utility of a prediction-based decision rule in observational data. A split-sample structure can and should be used to develop the prognostic model, define the decision rule, and evaluate its clinical utility.
[ { "created": "Thu, 29 Aug 2019 12:55:05 GMT", "version": "v1" } ]
2019-09-10
[ [ "Sachs", "Michael C", "" ], [ "Sjölander", "Arvid", "" ], [ "Gabriel", "Erin E", "" ] ]
The predictions from an accurate prognostic model can be of great interest to patients and clinicians. When predictions are reported to individuals, they may decide to take action to improve their health or they may simply be comforted by the knowledge. However, if there is a clearly defined space of actions in the clinical context, a formal decision rule based on the prediction has the potential to have a much broader impact. Even if it is not the intended use of a developed prediction model, informal decision rules can often be found in practice. The use of a prediction-based decision rule should be formalized and compared to the standard of care in a randomized trial to assess its clinical utility; however, evidence is needed to motivate such a trial. We outline how observational data can be used to propose a decision rule based on a prognostic prediction model. We then propose a framework for emulating a prediction-driven trial to evaluate the utility of a prediction-based decision rule in observational data. A split-sample structure can and should be used to develop the prognostic model, define the decision rule, and evaluate its clinical utility.
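The abstract above proposes a split-sample design: develop the prognostic model and the decision rule on one part of the data, and evaluate the rule on the rest. The sketch below shows that split on synthetic data; the logistic model, risk threshold, and simple event-rate comparison are illustrative assumptions, and a real emulation of a prediction-driven trial would also need adjustment for how treatment decisions were actually made (confounding).

```python
# Minimal sketch of the split-sample idea: fit a prognostic model and define a
# threshold-based decision rule on a development half, then evaluate the rule on
# the held-out half. All data, thresholds and utility measures below are assumed.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                           # baseline covariates
risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))   # true event risk (synthetic)
y = rng.binomial(1, risk)                             # observed bad outcome

# split-sample: development half vs evaluation half
X_dev, X_eval, y_dev, y_eval = train_test_split(X, y, test_size=0.5, random_state=0)

# 1) develop the prognostic model on the development half
model = LogisticRegression().fit(X_dev, y_dev)

# 2) define a decision rule: intervene if the predicted risk exceeds a threshold
threshold = 0.3
def rule(x):
    return model.predict_proba(x)[:, 1] > threshold

# 3) evaluate on the held-out half: who would the rule flag, and what is their event rate?
treat = rule(X_eval)
print(f"fraction flagged for intervention: {treat.mean():.2f}")
print(f"event rate among flagged:   {y_eval[treat].mean():.2f}")
print(f"event rate among unflagged: {y_eval[~treat].mean():.2f}")
```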
1303.3842
Kristina Crona
Kristina Crona, Dayonna Patterson, Kelly Stack, Devin Greene, Christiane Goulart, Mentar Mahmudi, Stephen D. Jacobs, Marcelo Kallman and Miriam Barlow
Antibiotic resistance landscapes: a quantification of theory-data incompatibility for fitness landscapes
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fitness landscapes are central in analyzing evolution, in particular for drug resistance mutations in bacteria and viruses. We show that the fitness landscapes associated with antibiotic resistance are not compatible with any of the classical models: additive, uncorrelated, and block fitness landscapes. The NK model is also discussed. It is frequently stated that virtually nothing is known about fitness landscapes in nature. We demonstrate that available records of antimicrobial drug mutations can reveal interesting properties of fitness landscapes in general. We apply the methods to analyze the TEM family of $\beta$-lactamases associated with antibiotic resistance. Laboratory results agree with our observations. The qualitative tools we suggest are well suited for comparisons of empirical fitness landscapes. Fitness landscapes are central in the theory of recombination, and there is a potential for finding relations between the tools and recombination strategies.
[ { "created": "Fri, 15 Mar 2013 18:11:36 GMT", "version": "v1" } ]
2013-03-18
[ [ "Crona", "Kristina", "" ], [ "Patterson", "Dayonna", "" ], [ "Stack", "Kelly", "" ], [ "Greene", "Devin", "" ], [ "Goulart", "Christiane", "" ], [ "Mahmudi", "Mentar", "" ], [ "Jacobs", "Stephen D.", "" ], [ "Kallman", "Marcelo", "" ], [ "Barlow", "Miriam", "" ] ]
Fitness landscapes are central in analyzing evolution, in particular for drug resistance mutations in bacteria and viruses. We show that the fitness landscapes associated with antibiotic resistance are not compatible with any of the classical models: additive, uncorrelated, and block fitness landscapes. The NK model is also discussed. It is frequently stated that virtually nothing is known about fitness landscapes in nature. We demonstrate that available records of antimicrobial drug mutations can reveal interesting properties of fitness landscapes in general. We apply the methods to analyze the TEM family of $\beta$-lactamases associated with antibiotic resistance. Laboratory results agree with our observations. The qualitative tools we suggest are well suited for comparisons of empirical fitness landscapes. Fitness landscapes are central in the theory of recombination, and there is a potential for finding relations between the tools and recombination strategies.
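The abstract above compares empirical antibiotic-resistance landscapes against classical models such as the additive landscape. As a minimal illustration of the kind of quantity involved, the snippet below computes the epistasis (deviation from additivity) and checks for sign epistasis on a toy two-locus landscape; the fitness values are invented and are not the TEM $\beta$-lactamase data analysed in the paper.

```python
# Toy check of additivity and sign epistasis on a two-locus fitness landscape.
# Fitness values are illustrative assumptions, not measured resistance data.

# fitness (e.g. log resistance) of the four genotypes over loci (a, b)
w = {
    (0, 0): 1.0,   # wild type
    (1, 0): 1.4,   # single mutant at locus a
    (0, 1): 0.9,   # single mutant at locus b
    (1, 1): 2.0,   # double mutant
}

# epistasis: deviation of the double mutant from the additive expectation
epsilon = w[(1, 1)] - w[(1, 0)] - w[(0, 1)] + w[(0, 0)]
print(f"epistasis = {epsilon:+.2f} (0 would mean an additive landscape)")

# sign epistasis: does the effect of mutation a change sign with the background at b?
effect_a_on_wt = w[(1, 0)] - w[(0, 0)]
effect_a_on_mut = w[(1, 1)] - w[(0, 1)]
if effect_a_on_wt * effect_a_on_mut < 0:
    print("sign epistasis: mutation a is beneficial in one background, deleterious in the other")
else:
    print("no sign epistasis for mutation a in this toy example")
```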