Dataset schema (per-field type and observed range):
- id: string, 9 to 13 chars
- submitter: string, 4 to 48 chars
- authors: string, 4 to 9.62k chars
- title: string, 4 to 343 chars
- comments: string, 2 to 480 chars
- journal-ref: string, 9 to 309 chars
- doi: string, 12 to 138 chars
- report-no: categorical string, 277 distinct values
- categories: string, 8 to 87 chars
- license: categorical string, 9 distinct values
- orig_abstract: string, 27 to 3.76k chars
- versions: list, 1 to 15 items
- update_date: string, 10 chars
- authors_parsed: list, 1 to 147 items
- abstract: string, 24 to 3.75k chars
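The fields above can be read into a small typed record. Below is a minimal sketch in Python, assuming the records are stored as JSON lines under the hyphenated column names shown; the class and function names are illustrative, not part of any dataset library:

```python
import json
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ArxivRecord:
    # Attribute names mirror the schema above; hyphenated columns
    # ("journal-ref", "report-no") become underscored attributes.
    id: str
    submitter: Optional[str]
    authors: str
    title: str
    comments: Optional[str]
    journal_ref: Optional[str]
    doi: Optional[str]
    report_no: Optional[str]
    categories: str
    license: Optional[str]
    orig_abstract: str
    versions: List[dict]        # e.g. [{"created": ..., "version": "v1"}]
    update_date: str            # always 10 chars: YYYY-MM-DD
    authors_parsed: List[list]  # [last, first, suffix(, affiliation)]
    abstract: str

def from_json_line(line: str) -> ArxivRecord:
    """Parse one JSON-lines record into an ArxivRecord."""
    d = json.loads(line)
    return ArxivRecord(
        id=d["id"], submitter=d.get("submitter"), authors=d["authors"],
        title=d["title"], comments=d.get("comments"),
        journal_ref=d.get("journal-ref"), doi=d.get("doi"),
        report_no=d.get("report-no"), categories=d["categories"],
        license=d.get("license"), orig_abstract=d["orig_abstract"],
        versions=d["versions"], update_date=d["update_date"],
        authors_parsed=d["authors_parsed"], abstract=d["abstract"],
    )
```

The `categories` string is space-separated, so `record.categories.split()` yields the individual arXiv categories.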
1309.4712
Liane Gabora
Liane Gabora, Diederik Aerts
Distilling the Essence of an Evolutionary Process and Implications for a Formal Description of Culture
In (W. Kistler, Ed.) Proceedings of Center for Human Evolution Workshop #4: Cultural Evolution, May 18-19, 2000. Bellevue, WA: Foundation for the Future (2005)
null
null
null
q-bio.PE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been proposed that, since the origin of life and the ensuing evolution of biological species, a second evolutionary process has appeared on our planet. It is the evolution of culture, e.g., ideas, beliefs, and artifacts. Does culture evolve in the same genuine sense as biological life? And if so, does it evolve through natural selection, or by some other means? Why does no other species remotely approach the degree of cultural complexity of humans? These questions lie at the foundation of who we are and what makes our lives meaningful. Although much research has been done on how selective pressures operating at the biological level affect cognition and culture, little research has focused on culture as an evolutionary process in its own right. Like biological forms, cultural forms (ideas, attitudes, artifacts, mannerisms, etc.) incrementally adapt to the constraints and affordances of their environment through descent with modification. In some respects culture appears to be Darwinian, i.e., a process of differential replication and selection amongst randomly generated variants. This suggests that knowledge of biological evolution can be put to use to gain insight into culture. However, attempts to apply Darwinian theory to culture have not yielded the kind of unifying framework for the social sciences that it provided for the biological sciences, largely because of the nonrandom manner in which the mind (the hub of cultural change) generates and assimilates novelty. This paper investigates how and when humans became capable of supporting culture, and what previously held it back, focusing on how we attained the creative powers we now possess. To invent in the strategic, intuitive manner characteristic of humans requires a cognitive architecture that supports the capacity to spontaneously adapt concepts to new circumstances and merge them together to conceptualize new situations.
[ { "created": "Wed, 18 Sep 2013 17:22:25 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2019 02:27:33 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2019 21:51:46 GMT", "version": "v3" } ]
2019-07-09
[ [ "Gabora", "Liane", "" ], [ "Aerts", "Diederik", "" ] ]
It has been proposed that, since the origin of life and the ensuing evolution of biological species, a second evolutionary process has appeared on our planet. It is the evolution of culture, e.g., ideas, beliefs, and artifacts. Does culture evolve in the same genuine sense as biological life? And if so, does it evolve through natural selection, or by some other means? Why does no other species remotely approach the degree of cultural complexity of humans? These questions lie at the foundation of who we are and what makes our lives meaningful. Although much research has been done on how selective pressures operating at the biological level affect cognition and culture, little research has focused on culture as an evolutionary process in its own right. Like biological forms, cultural forms (ideas, attitudes, artifacts, mannerisms, etc.) incrementally adapt to the constraints and affordances of their environment through descent with modification. In some respects culture appears to be Darwinian, i.e., a process of differential replication and selection amongst randomly generated variants. This suggests that knowledge of biological evolution can be put to use to gain insight into culture. However, attempts to apply Darwinian theory to culture have not yielded the kind of unifying framework for the social sciences that it provided for the biological sciences, largely because of the nonrandom manner in which the mind (the hub of cultural change) generates and assimilates novelty. This paper investigates how and when humans became capable of supporting culture, and what previously held it back, focusing on how we attained the creative powers we now possess. To invent in the strategic, intuitive manner characteristic of humans requires a cognitive architecture that supports the capacity to spontaneously adapt concepts to new circumstances and merge them together to conceptualize new situations.
1105.0126
Neri Niccolai
Andrea Bernini, Vincenzo Venditti, Ottavia Spiga, Filippo Prischi, Mauro Botta, Angela Pui-Ling Tong, Wing-Tak Wong and Neri Niccolai
The surface accessibility of {\alpha}-bungarotoxin monitored by a novel paramagnetic probe
13 pages, 4 figures, preliminary report
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The surface accessibility of {\alpha}-bungarotoxin has been investigated by using Gd2L7, a newly designed paramagnetic NMR probe. Signal attenuations induced by Gd2L7 on {\alpha}-bungarotoxin C{\alpha}H peaks of 1H-13C HSQC spectra have been analyzed and compared with the ones previously obtained in the presence of GdDTPA-BMA. In spite of the different molecular size and shape, for the two probes a common pathway of approach to the {\alpha}-bungarotoxin surface can be observed with an equally enhanced access of both GdDTPA-BMA and Gd2L7 towards the protein surface side where the binding site is located. Molecular dynamics simulations suggest that protein backbone flexibility and surface hydration contribute to the observed preferential approach of both gadolinium complexes specifically to the part of the {\alpha}-bungarotoxin surface which is involved in the interaction with its physiological target, the nicotinic acetylcholine receptor.
[ { "created": "Sat, 30 Apr 2011 23:02:50 GMT", "version": "v1" } ]
2011-05-03
[ [ "Bernini", "Andrea", "" ], [ "Venditti", "Vincenzo", "" ], [ "Spiga", "Ottavia", "" ], [ "Prischi", "Filippo", "" ], [ "Botta", "Mauro", "" ], [ "Tong", "Angela Pui-Ling", "" ], [ "Wong", "Wing-Tak", "" ], [ "Niccolai", "Neri", "" ] ]
The surface accessibility of {\alpha}-bungarotoxin has been investigated by using Gd2L7, a newly designed paramagnetic NMR probe. Signal attenuations induced by Gd2L7 on {\alpha}-bungarotoxin C{\alpha}H peaks of 1H-13C HSQC spectra have been analyzed and compared with the ones previously obtained in the presence of GdDTPA-BMA. In spite of the different molecular size and shape, for the two probes a common pathway of approach to the {\alpha}-bungarotoxin surface can be observed with an equally enhanced access of both GdDTPA-BMA and Gd2L7 towards the protein surface side where the binding site is located. Molecular dynamics simulations suggest that protein backbone flexibility and surface hydration contribute to the observed preferential approach of both gadolinium complexes specifically to the part of the {\alpha}-bungarotoxin surface which is involved in the interaction with its physiological target, the nicotinic acetylcholine receptor.
q-bio/0601024
Denis Boyer
Denis Boyer, Gabriel Ramos-Fern\'andez, Octavio Miramontes, Jos\'e L. Mateos, Germinal Cocho, Hern\'an Larralde, Humberto Ramos, Fernando Rojas
Scale-free foraging by primates emerges from their interaction with a complex environment
31 pages, 4 figures. To appear in Proc. Roy. Soc. B. Minor revisions
null
null
null
q-bio.PE cond-mat.dis-nn
null
Scale-free foraging patterns are widespread among animals. These may be the outcome of an optimal searching strategy to find scarce randomly distributed resources, but a less explored alternative is that this behaviour may result from the interaction of foraging animals with a particular distribution of resources. We introduce a simple foraging model where individuals follow mental maps and choose their displacements according to a maximum efficiency criterion, in a spatially disordered environment containing many trees with a heterogeneous size distribution. We show that a particular tree size frequency distribution induces non-Gaussian movement patterns with multiple spatial scales (L\'evy walks). These results are consistent with tree size variation and Spider monkey (Ateles geoffroyi) foraging patterns. We discuss the consequences that our results may have for the patterns of seed dispersal by foraging primates.
[ { "created": "Tue, 17 Jan 2006 22:48:55 GMT", "version": "v1" }, { "created": "Tue, 31 Jan 2006 17:20:16 GMT", "version": "v2" } ]
2007-05-23
[ [ "Boyer", "Denis", "" ], [ "Ramos-Fernández", "Gabriel", "" ], [ "Miramontes", "Octavio", "" ], [ "Mateos", "José L.", "" ], [ "Cocho", "Germinal", "" ], [ "Larralde", "Hernán", "" ], [ "Ramos", "Humberto", "" ], [ "Rojas", "Fernando", "" ] ]
Scale-free foraging patterns are widespread among animals. These may be the outcome of an optimal searching strategy to find scarce randomly distributed resources, but a less explored alternative is that this behaviour may result from the interaction of foraging animals with a particular distribution of resources. We introduce a simple foraging model where individuals follow mental maps and choose their displacements according to a maximum efficiency criterion, in a spatially disordered environment containing many trees with a heterogeneous size distribution. We show that a particular tree size frequency distribution induces non-Gaussian movement patterns with multiple spatial scales (L\'evy walks). These results are consistent with tree size variation and Spider monkey (Ateles geoffroyi) foraging patterns. We discuss the consequences that our results may have for the patterns of seed dispersal by foraging primates.
2004.12665
Jae Woo Lee
Seo Yoon Chae, Kyoung-Eun Lee, Hyun Min Lee, Nam Jun, Quang Ahn Le, Biseko Juma Mafwele, Tae Ho Lee, Doo Hwan Kim, and Jae Woo Lee
Estimation of Infection Rate and Prediction of Initial Infected Individuals of COVID-19
12 pages, 2 figures
null
null
null
q-bio.PE nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the pandemic spreading of COVID-19 in selected countries after the outbreak of the coronavirus in Wuhan City, China. We estimated the infection rate and the initial number of infected individuals of COVID-19 from the officially reported data at the early stage of the epidemic, using a population model with susceptible (S), infected (I), quarantined (Q), and confirmed recovered (Rk) compartments, the so-called SIQRk model. The reported data include the quarantined and the confirmed recovered cases, but not recoveries from asymptomatic cases. With the SIQRk model we can estimate the model parameters and the initial number of infected cases (confirmed and asymptomatic) from fits to the data. We obtained infection rates in the range between 0.233 and 0.462, basic reproduction numbers R0 in the range between 1.8 and 3.5, and initial numbers of infected individuals in the range between 10 and 8409 for the selected countries. Using the fitted parameters, we estimated the time of maximum infection for Germany under the government's quarantine policy. The epidemic settles into a calm state about six months after the first patients were identified.
[ { "created": "Mon, 27 Apr 2020 09:27:09 GMT", "version": "v1" } ]
2020-04-28
[ [ "Chae", "Seo Yoon", "" ], [ "Lee", "Kyoung-Eun", "" ], [ "Lee", "Hyun Min", "" ], [ "Jun", "Nam", "" ], [ "Le", "Quang Ahn", "" ], [ "Mafwele", "Biseko Juma", "" ], [ "Lee", "Tae Ho", "" ], [ "Kim", "Doo Hwan", "" ], [ "Lee", "Jae Woo", "" ] ]
We consider the pandemic spreading of COVID-19 in selected countries after the outbreak of the coronavirus in Wuhan City, China. We estimated the infection rate and the initial number of infected individuals of COVID-19 from the officially reported data at the early stage of the epidemic, using a population model with susceptible (S), infected (I), quarantined (Q), and confirmed recovered (Rk) compartments, the so-called SIQRk model. The reported data include the quarantined and the confirmed recovered cases, but not recoveries from asymptomatic cases. With the SIQRk model we can estimate the model parameters and the initial number of infected cases (confirmed and asymptomatic) from fits to the data. We obtained infection rates in the range between 0.233 and 0.462, basic reproduction numbers R0 in the range between 1.8 and 3.5, and initial numbers of infected individuals in the range between 10 and 8409 for the selected countries. Using the fitted parameters, we estimated the time of maximum infection for Germany under the government's quarantine policy. The epidemic settles into a calm state about six months after the first patients were identified.
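The abstract does not spell out the SIQRk equations, so the following is only a generic SIQR-type sketch; the transition rates beta, delta, gamma and the population size are illustrative assumptions, not the paper's fitted values. It shows how the time of maximum infection can be located once parameters are estimated:

```python
def siqr_step(S, I, Q, R, beta, delta, gamma, N, dt):
    """One forward-Euler step of a generic SIQR model:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - delta*I,
    dQ/dt = delta*I - gamma*Q,  dR/dt = gamma*Q."""
    new_inf = beta * S * I / N * dt
    to_q = delta * I * dt
    to_r = gamma * Q * dt
    return S - new_inf, I + new_inf - to_q, Q + to_q - to_r, R + to_r

def peak_infection(I0, beta=0.3, delta=0.1, gamma=0.05,
                   N=1_000_000, days=180, dt=0.1):
    """Integrate the model and return (time of peak I, peak I, final I)."""
    S, I, Q, R = N - I0, float(I0), 0.0, 0.0
    t, peak_t, peak_i = 0.0, 0.0, float(I0)
    while t < days:
        S, I, Q, R = siqr_step(S, I, Q, R, beta, delta, gamma, N, dt)
        t += dt
        if I > peak_i:
            peak_t, peak_i = t, I
    return peak_t, peak_i, I

# With beta = 0.3 and removal rate delta = 0.1, the early-epidemic
# reproduction number beta/delta = 3.0 lies inside the 1.8-3.5 range
# reported in the abstract.
```

A more faithful reproduction would fit beta and the initial infected count to reported case curves, as the paper does.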
1902.10975
Jean-Paul Saint Martin
Jean-Paul Saint Martin (CR2P), Simona Saint Martin (CR2P)
Beltanelliformis brunsae Menner in Keller et al., 1974: an undoubted Ediacaran fossil from Neoproterozoic of Dobrogea (Romania)
null
Geodiversitas, Museum National d'Histoire Naturelle Paris, 2018
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ediacaran fossils are now widely known from different parts of the world. However, in some countries these remains of a still enigmatic form of life are poorly documented. Thus, only rare fossils from the Neoproterozoic Histria Formation of central Dobrogea (Romania) have been reported. Two specimens with discoid imprints are described here in detail and assigned to the typical Ediacaran species Beltanelliformis brunsae Menner in Keller et al., 1974. This paleontological finding confirms both the large geographical distribution of this species and the Ediacaran age of the Histria Formation. Keywords: flat discoid imprints, Ediacaran biota, Precambrian, Dobrogea, Romania.
[ { "created": "Thu, 28 Feb 2019 09:43:53 GMT", "version": "v1" } ]
2019-03-01
[ [ "Martin", "Jean-Paul Saint", "", "CR2P" ], [ "Martin", "Simona Saint", "", "CR2P" ] ]
Ediacaran fossils are now widely known from different parts of the world. However, in some countries these remains of a still enigmatic form of life are poorly documented. Thus, only rare fossils from the Neoproterozoic Histria Formation of central Dobrogea (Romania) have been reported. Two specimens with discoid imprints are described here in detail and assigned to the typical Ediacaran species Beltanelliformis brunsae Menner in Keller et al., 1974. This paleontological finding confirms both the large geographical distribution of this species and the Ediacaran age of the Histria Formation. Keywords: flat discoid imprints, Ediacaran biota, Precambrian, Dobrogea, Romania.
2202.05389
Viplove Arora
Viplove Arora, Enrico Amico, Joaqu\'in Go\~ni, Mario Ventresca
Investigating cognitive ability using action-based models of structural brain networks
null
null
10.1093/comnet/cnac037
null
q-bio.NC stat.ME
http://creativecommons.org/licenses/by/4.0/
Recent developments in network neuroscience have highlighted the importance of developing techniques for analyzing and modeling brain networks. A particularly powerful approach for studying complex neural systems is to formulate generative models that use wiring rules to synthesize networks closely resembling the topology of a given connectome. Successful models can highlight the principles by which a network is organized (identify structural features that arise from wiring rules versus those that emerge) and potentially uncover the mechanisms by which it grows and develops. Previous research has shown that such models can validate the effectiveness of spatial embedding and other (non-spatial) wiring rules in shaping the network topology of the human connectome. In this research, we propose variants of the action-based model that combine a variety of generative factors capable of explaining the topology of the human connectome. We test the descriptive validity of our models by evaluating their ability to explain between-subject variability. Our analysis provides evidence that geometric constraints are vital for connectivity between brain regions, and an action-based model relying on both topological and geometric properties can account for between-subject variability in structural network properties. Further, we test correlations between parameters of subject-optimized models and various measures of cognitive ability and find that higher cognitive ability is associated with an individual's tendency to form long-range or non-local connections.
[ { "created": "Wed, 19 Jan 2022 14:24:11 GMT", "version": "v1" } ]
2022-09-09
[ [ "Arora", "Viplove", "" ], [ "Amico", "Enrico", "" ], [ "Goñi", "Joaquín", "" ], [ "Ventresca", "Mario", "" ] ]
Recent developments in network neuroscience have highlighted the importance of developing techniques for analyzing and modeling brain networks. A particularly powerful approach for studying complex neural systems is to formulate generative models that use wiring rules to synthesize networks closely resembling the topology of a given connectome. Successful models can highlight the principles by which a network is organized (identify structural features that arise from wiring rules versus those that emerge) and potentially uncover the mechanisms by which it grows and develops. Previous research has shown that such models can validate the effectiveness of spatial embedding and other (non-spatial) wiring rules in shaping the network topology of the human connectome. In this research, we propose variants of the action-based model that combine a variety of generative factors capable of explaining the topology of the human connectome. We test the descriptive validity of our models by evaluating their ability to explain between-subject variability. Our analysis provides evidence that geometric constraints are vital for connectivity between brain regions, and an action-based model relying on both topological and geometric properties can account for between-subject variability in structural network properties. Further, we test correlations between parameters of subject-optimized models and various measures of cognitive ability and find that higher cognitive ability is associated with an individual's tendency to form long-range or non-local connections.
q-bio/0507006
Dietrich Stauffer
Michael Masa, Stanislaw Cebrat and Dietrich Stauffer
Does telomere elongation lead to a longer lifespan if cancer is considered?
9 pages including 5 figures
null
10.1016/j.physa.2005.08.043
null
q-bio.PE
null
As cell proliferation is limited due to the loss of telomere repeats in the DNA of normal somatic cells during division, telomere attrition may play an important role in determining the maximum life span of organisms as well as contribute to the process of biological ageing. With computer simulations of cell culture development in organisms, which consist of tissues of normal somatic cells with finite growth, we obtain an increase of life span and life expectancy for longer telomeric DNA in the zygote. By additionally considering a two-mutation model for carcinogenesis and indefinite proliferation through the activation of telomerase, we demonstrate that the risk of dying of cancer can outweigh the positive effect of longer telomeres on longevity.
[ { "created": "Mon, 4 Jul 2005 17:54:10 GMT", "version": "v1" } ]
2009-11-11
[ [ "Masa", "Michael", "" ], [ "Cebrat", "Stanislaw", "" ], [ "Stauffer", "Dietrich", "" ] ]
As cell proliferation is limited due to the loss of telomere repeats in the DNA of normal somatic cells during division, telomere attrition may play an important role in determining the maximum life span of organisms as well as contribute to the process of biological ageing. With computer simulations of cell culture development in organisms, which consist of tissues of normal somatic cells with finite growth, we obtain an increase of life span and life expectancy for longer telomeric DNA in the zygote. By additionally considering a two-mutation model for carcinogenesis and indefinite proliferation through the activation of telomerase, we demonstrate that the risk of dying of cancer can outweigh the positive effect of longer telomeres on longevity.
2012.03290
Claudia Solis-Lemus
Yizhou Liu and Claudia Solis-Lemus
WI Fast Stats: a collection of web apps for the visualization and analysis of WI Fast Plants data
5 pages, 4 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
WI Fast Stats is the first and only dedicated tool tailored to the WI Fast Plants educational objectives. It is an integrated animated web page with a collection of R-developed web apps that provide Data Visualization and Data Analysis tools for WI Fast Plants data. Its user-friendly interface makes Data Science accessible to K-16 teachers and students currently using WI Fast Plants lesson plans. Users do not need a strong programming or mathematical background, as the web apps are simple to use, well documented, and freely available.
[ { "created": "Sun, 6 Dec 2020 15:39:59 GMT", "version": "v1" } ]
2020-12-08
[ [ "Liu", "Yizhou", "" ], [ "Solis-Lemus", "Claudia", "" ] ]
WI Fast Stats is the first and only dedicated tool tailored to the WI Fast Plants educational objectives. It is an integrated animated web page with a collection of R-developed web apps that provide Data Visualization and Data Analysis tools for WI Fast Plants data. Its user-friendly interface makes Data Science accessible to K-16 teachers and students currently using WI Fast Plants lesson plans. Users do not need a strong programming or mathematical background, as the web apps are simple to use, well documented, and freely available.
1309.5717
Giovanni Bussi
Francesco Di Palma, Francesco Colizzi, and Giovanni Bussi
Ligand-induced stabilization of the aptamer terminal helix in the add adenine riboswitch
Accepted for publication in RNA
Di Palma, Colizzi, and Bussi, RNA 19, 1517 (2013)
10.1261/rna.040493.113
null
q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph
http://creativecommons.org/licenses/by/3.0/
Riboswitches are structured mRNA elements that modulate gene expression. They undergo conformational changes triggered by highly specific interactions with sensed metabolites. Among the structural rearrangements engaged by riboswitches, the forming and melting of the aptamer terminal helix, the so-called P1 stem, is essential for genetic control. The structural mechanisms by which this conformational change is modulated upon ligand binding mostly remain to be elucidated. Here, we used pulling molecular dynamics simulations to study the thermodynamics of the P1 stem in the add adenine riboswitch. The P1 ligand-dependent stabilization was quantified in terms of free energy and compared with thermodynamic data. This comparison suggests a model for the aptamer folding in which direct P1-ligand interactions play a minor role on the conformational switch when compared with those related to the ligand-induced aptamer preorganization.
[ { "created": "Mon, 23 Sep 2013 07:27:52 GMT", "version": "v1" } ]
2013-10-29
[ [ "Di Palma", "Francesco", "" ], [ "Colizzi", "Francesco", "" ], [ "Bussi", "Giovanni", "" ] ]
Riboswitches are structured mRNA elements that modulate gene expression. They undergo conformational changes triggered by highly specific interactions with sensed metabolites. Among the structural rearrangements engaged by riboswitches, the forming and melting of the aptamer terminal helix, the so-called P1 stem, is essential for genetic control. The structural mechanisms by which this conformational change is modulated upon ligand binding mostly remain to be elucidated. Here, we used pulling molecular dynamics simulations to study the thermodynamics of the P1 stem in the add adenine riboswitch. The P1 ligand-dependent stabilization was quantified in terms of free energy and compared with thermodynamic data. This comparison suggests a model for the aptamer folding in which direct P1-ligand interactions play a minor role on the conformational switch when compared with those related to the ligand-induced aptamer preorganization.
1603.08432
Lida Kanari
Lida Kanari, Pawe{\l} D{\l}otko, Martina Scolamiero, Ran Levi, Julian Shillcock, Kathryn Hess, Henry Markram
Quantifying topological invariants of neuronal morphologies
10 pages, 5 figures
null
null
null
q-bio.NC cs.DS math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nervous systems are characterized by neurons displaying a diversity of morphological shapes. Traditionally, different shapes have been qualitatively described based on visual inspection and quantitatively described based on morphometric parameters. Neither process provides a solid foundation for categorizing the various morphologies, a problem that is important in many fields. We propose a stable topological measure as a standardized descriptor for any tree-like morphology, which encodes its skeletal branching anatomy. More specifically it is a barcode of the branching tree as determined by a spherical filtration centered at the root or neuronal soma. This Topological Morphology Descriptor (TMD) allows for the discrimination of groups of random and neuronal trees at linear computational cost.
[ { "created": "Mon, 28 Mar 2016 16:32:44 GMT", "version": "v1" } ]
2016-03-29
[ [ "Kanari", "Lida", "" ], [ "Dłotko", "Paweł", "" ], [ "Scolamiero", "Martina", "" ], [ "Levi", "Ran", "" ], [ "Shillcock", "Julian", "" ], [ "Hess", "Kathryn", "" ], [ "Markram", "Henry", "" ] ]
Nervous systems are characterized by neurons displaying a diversity of morphological shapes. Traditionally, different shapes have been qualitatively described based on visual inspection and quantitatively described based on morphometric parameters. Neither process provides a solid foundation for categorizing the various morphologies, a problem that is important in many fields. We propose a stable topological measure as a standardized descriptor for any tree-like morphology, which encodes its skeletal branching anatomy. More specifically it is a barcode of the branching tree as determined by a spherical filtration centered at the root or neuronal soma. This Topological Morphology Descriptor (TMD) allows for the discrimination of groups of random and neuronal trees at linear computational cost.
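The descriptor sketched in this abstract reduces to elder-rule bookkeeping over the tree's radial distances. Below is a minimal Python sketch of that idea; the data layout and function name are assumptions for illustration, not the paper's TMD implementation:

```python
def tmd_barcode(children, dist, root):
    """Barcode of a rooted tree under a radial (spherical) filtration.
    children: node -> list of child nodes; dist: node -> radial distance
    from the root (soma). Each bar is (birth, death): a component is born
    at a leaf's distance and dies at the branch point where it merges into
    a farther-reaching sibling; the longest branch survives to the root."""
    bars = []

    def process(node):
        kids = children.get(node, [])
        if not kids:
            return dist[node]            # a leaf starts a component
        vals = sorted((process(c) for c in kids), reverse=True)
        for v in vals[1:]:               # shorter branches die here
            bars.append((v, dist[node]))
        return vals[0]                   # the elder branch survives

    bars.append((process(root), dist[root]))
    return bars
```

For a soma `r` at distance 0 branching at node `a` (distance 1) into leaves at distances 3 and 2, this yields the barcode `[(2, 1), (3, 0)]`.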
1304.5324
Sergey Dobretsov
Raeid M. M. Abed, Sergey Dobretsov, Marwan Al-Fori, Sarath P. Gunasekera, Kumar Sudesh and Valerie J. Paul
Quorum sensing inhibitory compounds from extremophilic microorganisms isolated from a hypersaline cyanobacterial mat
First evidence of production of quorum sensing inhibitors from hypersaline bacteria. Accepted for publication by Journal Industrial Microbiology and Biotechnology
null
null
null
q-bio.CB q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, extremely halophilic and moderately thermophilic microorganisms from a hypersaline microbial mat were screened for their ability to produce antibacterial, antidiatom, antialgal and quorum sensing (QS) inhibitory compounds. Five bacterial strains belonging to the genera Marinobacter and Halomonas and one archaeal strain belonging to the genus Haloterrigena were isolated from the mat. The strains were able to grow at a maximum salinity of 22-25% and a maximum temperature of 45-60{\deg}C. Hexane, dichloromethane and butanol extracts from the strains inhibited the growth of at least one of nine human pathogens. Only butanol extracts of supernatants of Halomonas sp. SK-1 inhibited growth of the microalga Dunaliella salina. Most extracts from the isolates inhibited QS of the acyl homoserine lactone producer and reporter Chromobacterium violaceum CV017. Purification of QS-inhibitory dichloromethane extracts of Marinobacter sp. SK-3 resulted in the isolation of four related diketopiperazines (DKPs): cyclo(L-Pro-L-Phe), cyclo(L-Pro-L-Leu), cyclo(L-Pro-L-isoLeu) and cyclo(L-Pro-D-Phe). The QS-inhibitory properties of these DKPs were tested using C. violaceum CV017 and Escherichia coli-based QS reporters (pSB401 and pSB1075) deficient in AHL production. Cyclo(L-Pro-L-Phe) and cyclo(L-Pro-L-isoLeu) inhibited QS-dependent production of violacein by C. violaceum CV017. Cyclo(L-Pro-L-Phe), cyclo(L-Pro-L-Leu), and cyclo(L-Pro-L-isoLeu) reduced QS-dependent luminescence of the reporter E. coli pSB401 induced by 3-oxo-C6-HSL. Our study demonstrates the ability of halophilic and moderately thermophilic strains from a hypersaline microbial mat to produce biotechnologically relevant compounds that could be used as antifouling agents.
[ { "created": "Fri, 19 Apr 2013 06:43:39 GMT", "version": "v1" } ]
2013-04-22
[ [ "Abed", "Raeid M. M.", "" ], [ "Dobretsov", "Sergey", "" ], [ "Al-Fori", "Marwan", "" ], [ "Gunasekera", "Sarath P.", "" ], [ "Sudesh", "Kumar", "" ], [ "Paul", "Valerie J.", "" ] ]
In this study, extremely halophilic and moderately thermophilic microorganisms from a hypersaline microbial mat were screened for their ability to produce antibacterial, antidiatom, antialgal and quorum sensing (QS) inhibitory compounds. Five bacterial strains belonging to the genera Marinobacter and Halomonas and one archaeal strain belonging to the genus Haloterrigena were isolated from the mat. The strains were able to grow at a maximum salinity of 22-25% and a maximum temperature of 45-60{\deg}C. Hexane, dichloromethane and butanol extracts from the strains inhibited the growth of at least one of nine human pathogens. Only butanol extracts of supernatants of Halomonas sp. SK-1 inhibited growth of the microalga Dunaliella salina. Most extracts from the isolates inhibited QS of the acyl homoserine lactone producer and reporter Chromobacterium violaceum CV017. Purification of QS-inhibitory dichloromethane extracts of Marinobacter sp. SK-3 resulted in the isolation of four related diketopiperazines (DKPs): cyclo(L-Pro-L-Phe), cyclo(L-Pro-L-Leu), cyclo(L-Pro-L-isoLeu) and cyclo(L-Pro-D-Phe). The QS-inhibitory properties of these DKPs were tested using C. violaceum CV017 and Escherichia coli-based QS reporters (pSB401 and pSB1075) deficient in AHL production. Cyclo(L-Pro-L-Phe) and cyclo(L-Pro-L-isoLeu) inhibited QS-dependent production of violacein by C. violaceum CV017. Cyclo(L-Pro-L-Phe), cyclo(L-Pro-L-Leu), and cyclo(L-Pro-L-isoLeu) reduced QS-dependent luminescence of the reporter E. coli pSB401 induced by 3-oxo-C6-HSL. Our study demonstrates the ability of halophilic and moderately thermophilic strains from a hypersaline microbial mat to produce biotechnologically relevant compounds that could be used as antifouling agents.
2104.04457
Kevin Yang
Zachary Wu, Kadina E. Johnston, Frances H. Arnold, Kevin K. Yang
Protein sequence design with deep generative models
11 pages, 2 figures
null
10.1016/j.cbpa.2021.04.004
null
q-bio.QM cs.LG q-bio.BM stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Protein engineering seeks to identify protein sequences with optimized properties. When guided by machine learning, protein sequence generation methods can draw on prior knowledge and experimental efforts to improve this process. In this review, we highlight recent applications of machine learning to generate protein sequences, focusing on the emerging field of deep generative methods.
[ { "created": "Fri, 9 Apr 2021 16:08:15 GMT", "version": "v1" } ]
2021-05-28
[ [ "Wu", "Zachary", "" ], [ "Johnston", "Kadina E.", "" ], [ "Arnold", "Frances H.", "" ], [ "Yang", "Kevin K.", "" ] ]
Protein engineering seeks to identify protein sequences with optimized properties. When guided by machine learning, protein sequence generation methods can draw on prior knowledge and experimental efforts to improve this process. In this review, we highlight recent applications of machine learning to generate protein sequences, focusing on the emerging field of deep generative methods.
1801.10144
Manlio De Domenico
Giuseppe Mangioni, Giuseppe Jurman, Manlio De Domenico
Multilayer flows in molecular networks identify biological modules in the human proteome
9 pages, 6 figures
null
null
null
q-bio.MN physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A variety of complex systems exhibit different types of relationships simultaneously that can be modeled by multiplex networks. A typical problem is to determine the community structure of such systems, which, in general, depends on one or more parameters to be tuned. In this study we propose a measure, grounded in information theory, to find the optimal value of the relax rate characterizing Multiplex Infomap, the generalization of the Infomap algorithm to the realm of multilayer networks. We evaluate our methodology on synthetic networks to show that the most representative community structure can be reliably identified when the most appropriate relax rate is used. Capitalizing on these results, we use this measure to identify the most reliable meso-scale functional organization in the human protein-protein interaction multiplex network and compare the observed clusters against a collection of independently annotated gene sets from the Molecular Signatures Database (MSigDB). Our analysis reveals that modules obtained with the optimal value of the relax rate are biologically significant and, remarkably, have higher functional content than the ones obtained from the aggregate representation of the human proteome. Our framework allows us to characterize the meso-scale structure of those multilayer systems whose layers are not explicitly interconnected with each other -- as in the case of edge-colored models -- the ones describing most biological networks, from proteomes to connectomes.
[ { "created": "Tue, 30 Jan 2018 18:52:35 GMT", "version": "v1" } ]
2018-01-31
[ [ "Mangioni", "Giuseppe", "" ], [ "Jurman", "Giuseppe", "" ], [ "De Domenico", "Manlio", "" ] ]
A variety of complex systems exhibit different types of relationships simultaneously that can be modeled by multiplex networks. A typical problem is to determine the community structure of such systems, which, in general, depends on one or more parameters to be tuned. In this study we propose a measure, grounded in information theory, to find the optimal value of the relax rate characterizing Multiplex Infomap, the generalization of the Infomap algorithm to the realm of multilayer networks. We evaluate our methodology on synthetic networks to show that the most representative community structure can be reliably identified when the most appropriate relax rate is used. Capitalizing on these results, we use this measure to identify the most reliable meso-scale functional organization in the human protein-protein interaction multiplex network and compare the observed clusters against a collection of independently annotated gene sets from the Molecular Signatures Database (MSigDB). Our analysis reveals that modules obtained with the optimal value of the relax rate are biologically significant and, remarkably, have higher functional content than the ones obtained from the aggregate representation of the human proteome. Our framework allows us to characterize the meso-scale structure of those multilayer systems whose layers are not explicitly interconnected with each other -- as in the case of edge-colored models -- the ones describing most biological networks, from proteomes to connectomes.
2005.05653
Shahaf Armon
S. Armon, M. Moshe, E. Sharon
The Intermittent Nature of Leaf Growth Fields
20 pages, 4 figures, 4 sections of supplementary material -with one supplementary figure
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What are the general principles that allow proper growth of a tissue or an organ? A growing leaf is an example of such a system: it increases its area by orders of magnitude while maintaining a proper (usually flat) shape. How can this be achieved without a central control unit? One would think that a combination of uniform growth with viscoelastic rheology would allow it. Here we show that the exact opposite process is in action: the natural growth of the leaf surface fluctuates strongly in time and position. By combining high-resolution measurements and multi-scale statistical analysis, we suggest a paradigm change in the way leaf growth is viewed. We measure the in-plane tissue growth of tobacco leaves in Lagrangian coordinates in 3D and study the statistics of three scalar fields associated with the tensorial growth field: the growth rate, its isotropy and its directionality. We identify the governing time and length scales of the fluctuations in the growth rate, and capture abundant switching between local area swelling and shrinking when measured at high resolution. At lower spatio-temporal resolution the growth-rate field becomes smooth. In contrast, the anisotropy field increases over time. Finally, we find significant differences between growth measured during day and night. Growth at night is found to be more intermittent, with shorter correlation lengths and no global directionality. Despite their fluctuating nature, growth fields are not random and thus carry information about growth regulation. Indeed, mechanical analysis shows that a growing leaf can stay flat only if the measured fluctuations are regulated/correlated. Our measurements suggest that the entire statistics of growth fields, and not just their means, should be studied. In particular, the regulation of such fields and the link between their characteristics and the global geometry of a leaf should be studied.
[ { "created": "Tue, 12 May 2020 10:03:23 GMT", "version": "v1" } ]
2020-05-13
[ [ "Armon", "S.", "" ], [ "Moshe", "M.", "" ], [ "Sharon", "E.", "" ] ]
What are the general principles that allow proper growth of a tissue or an organ? A growing leaf is an example of such a system: it increases its area by orders of magnitude while maintaining a proper (usually flat) shape. How can this be achieved without a central control unit? One would think that a combination of uniform growth with viscoelastic rheology would allow it. Here we show that the exact opposite process is in action: the natural growth of the leaf surface fluctuates strongly in time and position. By combining high-resolution measurements and multi-scale statistical analysis, we suggest a paradigm change in the way leaf growth is viewed. We measure the in-plane tissue growth of tobacco leaves in Lagrangian coordinates in 3D and study the statistics of three scalar fields associated with the tensorial growth field: the growth rate, its isotropy and its directionality. We identify the governing time and length scales of the fluctuations in the growth rate, and capture abundant switching between local area swelling and shrinking when measured at high resolution. At lower spatio-temporal resolution the growth-rate field becomes smooth. In contrast, the anisotropy field increases over time. Finally, we find significant differences between growth measured during day and night. Growth at night is found to be more intermittent, with shorter correlation lengths and no global directionality. Despite their fluctuating nature, growth fields are not random and thus carry information about growth regulation. Indeed, mechanical analysis shows that a growing leaf can stay flat only if the measured fluctuations are regulated/correlated. Our measurements suggest that the entire statistics of growth fields, and not just their means, should be studied. In particular, the regulation of such fields and the link between their characteristics and the global geometry of a leaf should be studied.
2407.14794
Toni Giorgino
Antonio Mirarchi, Toni Giorgino, Gianni De Fabritiis
mdCATH: A Large-Scale MD Dataset for Data-Driven Computational Biophysics
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in protein structure determination are revolutionizing our understanding of proteins. Still, a significant gap remains in the availability of comprehensive datasets that focus on the dynamics of proteins, which are crucial for understanding protein function, folding, and interactions. To address this critical gap, we introduce mdCATH, a dataset generated through an extensive set of all-atom molecular dynamics simulations of a diverse and representative collection of protein domains. This dataset comprises all-atom systems for 5,398 domains, modeled with a state-of-the-art classical force field, and simulated in five replicates each at five temperatures from 320 K to 413 K. The mdCATH dataset records coordinates and forces every 1 ns, for over 62 ms of accumulated simulation time, effectively capturing the dynamics of the various classes of domains and providing a unique resource for proteome-wide statistical analyses of protein unfolding thermodynamics and kinetics. We outline the dataset structure and showcase its potential through four easily reproducible case studies, highlighting its capabilities in advancing protein science.
[ { "created": "Sat, 20 Jul 2024 07:48:11 GMT", "version": "v1" } ]
2024-07-23
[ [ "Mirarchi", "Antonio", "" ], [ "Giorgino", "Toni", "" ], [ "De Fabritiis", "Gianni", "" ] ]
Recent advancements in protein structure determination are revolutionizing our understanding of proteins. Still, a significant gap remains in the availability of comprehensive datasets that focus on the dynamics of proteins, which are crucial for understanding protein function, folding, and interactions. To address this critical gap, we introduce mdCATH, a dataset generated through an extensive set of all-atom molecular dynamics simulations of a diverse and representative collection of protein domains. This dataset comprises all-atom systems for 5,398 domains, modeled with a state-of-the-art classical force field, and simulated in five replicates each at five temperatures from 320 K to 413 K. The mdCATH dataset records coordinates and forces every 1 ns, for over 62 ms of accumulated simulation time, effectively capturing the dynamics of the various classes of domains and providing a unique resource for proteome-wide statistical analyses of protein unfolding thermodynamics and kinetics. We outline the dataset structure and showcase its potential through four easily reproducible case studies, highlighting its capabilities in advancing protein science.
2111.01961
Eli Shlizerman
Rahul Biswas and Eli Shlizerman
Statistical Perspective on Functional and Causal Neural Connectomics: A Comparative Study
null
Front. Syst. Neurosci. (2022)
10.3389/fnsys.2022.817962
null
q-bio.NC q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Representation of brain network interactions is fundamental to the translation of neural structure to brain function. As such, methodologies for mapping neural interactions into structural models, i.e., inference of functional connectome from neural recordings, are key for the study of brain networks. While multiple approaches have been proposed for functional connectomics based on statistical associations between neural activity, association does not necessarily incorporate causation. Additional approaches have been proposed to incorporate aspects of causality to turn functional connectomes into causal functional connectomes, however, these methodologies typically focus on specific aspects of causality. This warrants a systematic statistical framework for causal functional connectomics that defines the foundations of common aspects of causality. Such a framework can assist in contrasting existing approaches and to guide development of further causal methodologies. In this work, we develop such a statistical guide. In particular, we consolidate the notions of associations and representations of neural interaction, i.e., types of neural connectomics, and then describe causal modeling in the statistics literature. We particularly focus on the introduction of directed Markov graphical models as a framework through which we define the Directed Markov Property -- an essential criterion for examining the causality of proposed functional connectomes. We demonstrate how based on these notions, a comparative study of several existing approaches for finding causal functional connectivity from neural activity can be conducted. We proceed by providing an outlook ahead regarding the additional properties that future approaches could include to thoroughly address causality.
[ { "created": "Wed, 3 Nov 2021 00:59:33 GMT", "version": "v1" } ]
2022-03-18
[ [ "Biswas", "Rahul", "" ], [ "Shlizerman", "Eli", "" ] ]
Representation of brain network interactions is fundamental to the translation of neural structure to brain function. As such, methodologies for mapping neural interactions into structural models, i.e., inference of functional connectome from neural recordings, are key for the study of brain networks. While multiple approaches have been proposed for functional connectomics based on statistical associations between neural activity, association does not necessarily incorporate causation. Additional approaches have been proposed to incorporate aspects of causality to turn functional connectomes into causal functional connectomes, however, these methodologies typically focus on specific aspects of causality. This warrants a systematic statistical framework for causal functional connectomics that defines the foundations of common aspects of causality. Such a framework can assist in contrasting existing approaches and to guide development of further causal methodologies. In this work, we develop such a statistical guide. In particular, we consolidate the notions of associations and representations of neural interaction, i.e., types of neural connectomics, and then describe causal modeling in the statistics literature. We particularly focus on the introduction of directed Markov graphical models as a framework through which we define the Directed Markov Property -- an essential criterion for examining the causality of proposed functional connectomes. We demonstrate how based on these notions, a comparative study of several existing approaches for finding causal functional connectivity from neural activity can be conducted. We proceed by providing an outlook ahead regarding the additional properties that future approaches could include to thoroughly address causality.
1605.07371
Jan Humplik
Jan Humplik, Ga\v{s}per Tka\v{c}ik
Semiparametric energy-based probabilistic models
8 pages, 3 figures
null
null
null
q-bio.NC cond-mat.stat-mech stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic models can be defined by an energy function, where the probability of each state is proportional to the exponential of the state's negative energy. This paper considers a generalization of energy-based models in which the probability of a state is proportional to an arbitrary positive, strictly decreasing, and twice differentiable function of the state's energy. The precise shape of the nonlinear map from energies to unnormalized probabilities has to be learned from data together with the parameters of the energy function. As a case study we show that the above generalization of a fully visible Boltzmann machine yields an accurate model of neural activity of retinal ganglion cells. We attribute this success to the model's ability to easily capture distributions whose probabilities span a large dynamic range, a possible consequence of latent variables that globally couple the system. Similar features have recently been observed in many datasets, suggesting that our new method has wide applicability.
[ { "created": "Tue, 24 May 2016 10:51:13 GMT", "version": "v1" } ]
2016-05-25
[ [ "Humplik", "Jan", "" ], [ "Tkačik", "Gašper", "" ] ]
Probabilistic models can be defined by an energy function, where the probability of each state is proportional to the exponential of the state's negative energy. This paper considers a generalization of energy-based models in which the probability of a state is proportional to an arbitrary positive, strictly decreasing, and twice differentiable function of the state's energy. The precise shape of the nonlinear map from energies to unnormalized probabilities has to be learned from data together with the parameters of the energy function. As a case study we show that the above generalization of a fully visible Boltzmann machine yields an accurate model of neural activity of retinal ganglion cells. We attribute this success to the model's ability to easily capture distributions whose probabilities span a large dynamic range, a possible consequence of latent variables that globally couple the system. Similar features have recently been observed in many datasets, suggesting that our new method has wide applicability.
1506.06048
Bhaskar Sen
Bhaskar Sen, Zheng Shi, and Gregory Burlet
Diagnosing ADHD from fMRI Scans Using Hidden Markov Models
null
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper applies a hidden Markov model to the problem of Attention Deficit Hyperactivity Disorder (ADHD) diagnosis from resting-state functional Magnetic Resonance Image (fMRI) scans of subjects. The proposed model considers the temporal evolution of fMRI voxel activations in the cortex, cingulate gyrus, and thalamus regions of the brain in order to make a diagnosis. Four feature dimensionality reduction methods are applied to the fMRI scan: voxel means, voxel weighted means, principal components analysis, and kernel principal components analysis. Using principal components analysis and kernel principal components analysis for dimensionality reduction, the proposed algorithm yielded an accuracy of 63.01% and 62.06%, respectively, on the ADHD-200 competition dataset when differentiating between healthy control, ADHD inattentive, and ADHD combined types.
[ { "created": "Fri, 19 Jun 2015 15:19:35 GMT", "version": "v1" } ]
2015-06-22
[ [ "Sen", "Bhaskar", "" ], [ "Shi", "Zheng", "" ], [ "Burlet", "Gregory", "" ] ]
This paper applies a hidden Markov model to the problem of Attention Deficit Hyperactivity Disorder (ADHD) diagnosis from resting-state functional Magnetic Resonance Image (fMRI) scans of subjects. The proposed model considers the temporal evolution of fMRI voxel activations in the cortex, cingulate gyrus, and thalamus regions of the brain in order to make a diagnosis. Four feature dimensionality reduction methods are applied to the fMRI scan: voxel means, voxel weighted means, principal components analysis, and kernel principal components analysis. Using principal components analysis and kernel principal components analysis for dimensionality reduction, the proposed algorithm yielded an accuracy of 63.01% and 62.06%, respectively, on the ADHD-200 competition dataset when differentiating between healthy control, ADHD inattentive, and ADHD combined types.
2008.00126
Jacques Demongeot
Jacques Demongeot, Herv\'e Seligmann
SARS-CoV-2 and miRNA-like inhibition power
14 pages, 12 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
(1) Background: RNA viruses, and especially coronaviruses, could act inside host cells not only by building their own proteins, but also by perturbing the cell metabolism. We show the possibility of miRNA-like inhibitions by SARS-CoV-2 affecting, for example, the syntheses of hemoglobin and type I interferons, hence highly perturbing oxygen distribution in vital organs and the immune response, as described by clinicians; (2) Methods: We compare RNA subsequences of the SARS-CoV-2 protein S and RNA-dependent RNA polymerase genes to mRNA sequences of beta-globin and type I interferons; (3) Results: RNA subsequences longer than eight nucleotides from the SARS-CoV-2 genome could hybridize subsequences of the mRNA of beta-globin and of type I interferons; (4) Conclusions: Beyond viral protein production, Covid-19 might affect vital processes like host oxygen transport and immune response.
[ { "created": "Fri, 31 Jul 2020 23:56:03 GMT", "version": "v1" } ]
2020-08-04
[ [ "Demongeot", "Jacques", "" ], [ "Seligmann", "Hervé", "" ] ]
(1) Background: RNA viruses, and especially coronaviruses, could act inside host cells not only by building their own proteins, but also by perturbing the cell metabolism. We show the possibility of miRNA-like inhibitions by SARS-CoV-2 affecting, for example, the syntheses of hemoglobin and type I interferons, hence highly perturbing oxygen distribution in vital organs and the immune response, as described by clinicians; (2) Methods: We compare RNA subsequences of the SARS-CoV-2 protein S and RNA-dependent RNA polymerase genes to mRNA sequences of beta-globin and type I interferons; (3) Results: RNA subsequences longer than eight nucleotides from the SARS-CoV-2 genome could hybridize subsequences of the mRNA of beta-globin and of type I interferons; (4) Conclusions: Beyond viral protein production, Covid-19 might affect vital processes like host oxygen transport and immune response.
1611.09149
Hendrik Richter
Hendrik Richter
Dynamic landscape models of coevolutionary games
arXiv admin note: substantial text overlap with arXiv:1603.06374
BioSystems 153-154, 26-44, 2017
null
null
q-bio.PE cs.NE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Players of coevolutionary games may update not only their strategies but also their networks of interaction. Based on interpreting the payoff of players as fitness, dynamic landscape models are proposed. The modeling procedure is carried out for Prisoner's Dilemma (PD) and Snowdrift (SD) games that both use either birth--death (BD) or death--birth (DB) strategy updating. The main focus is on using dynamic fitness landscapes as a mathematical model of coevolutionary game dynamics. Hence, an alternative tool for analyzing coevolutionary games becomes available, and landscape measures such as modality, ruggedness and information content can be computed and analyzed. In addition, fixation properties of the games and quantifiers characterizing the interaction networks are calculated numerically. Relations are established between landscape properties expressed by landscape measures and quantifiers of coevolutionary game dynamics such as fixation probabilities, fixation times and network properties.
[ { "created": "Mon, 28 Nov 2016 14:55:50 GMT", "version": "v1" }, { "created": "Wed, 15 Mar 2017 12:50:04 GMT", "version": "v2" } ]
2017-03-16
[ [ "Richter", "Hendrik", "" ] ]
Players of coevolutionary games may update not only their strategies but also their networks of interaction. Based on interpreting the payoff of players as fitness, dynamic landscape models are proposed. The modeling procedure is carried out for Prisoner's Dilemma (PD) and Snowdrift (SD) games that both use either birth--death (BD) or death--birth (DB) strategy updating. The main focus is on using dynamic fitness landscapes as a mathematical model of coevolutionary game dynamics. Hence, an alternative tool for analyzing coevolutionary games becomes available, and landscape measures such as modality, ruggedness and information content can be computed and analyzed. In addition, fixation properties of the games and quantifiers characterizing the interaction networks are calculated numerically. Relations are established between landscape properties expressed by landscape measures and quantifiers of coevolutionary game dynamics such as fixation probabilities, fixation times and network properties.
q-bio/0502010
Karen Luz Burgoa K. Luz-Burgoa
K. Luz-Burgoa, Tony Dell and Toshinori Okuyama
Sympatric Speciation in a Simple Food Web
Contribution to the Proceedings of the Complex Systems Summer School 2004, organized by the Santa Fe Institute (9 pages and 6 figures)
null
null
null
q-bio.PE
null
Observations of the evolution of species groups in nature, such as the well-recognized Galapagos finches, have motivated much theoretical research aimed at understanding the processes associated with such radiations. The Penna model is one such model and has been widely used to study aging. In this paper we use the basic Penna model to investigate the process of sympatric speciation in a simple food web model. Initially our web consists of a primary food source and a single herbivore species that feeds on this resource. Subsequently we introduce a predator that feeds on the herbivore. In both instances we directly manipulate the food source, that is, its size distribution, and monitor the changes in the population structures. Sympatric speciation is obtained for the consumer species in both webs, and our results confirm that the speciation velocity depends on how far up in the food chain the focal population is feeding. Simulations are done with three different sexual imprinting-like mechanisms, in order to discuss adaptation by natural selection.
[ { "created": "Fri, 11 Feb 2005 20:39:47 GMT", "version": "v1" } ]
2007-05-23
[ [ "Luz-Burgoa", "K.", "" ], [ "Dell", "Tony", "" ], [ "Okuyama", "Toshinori", "" ] ]
Observations of the evolution of species groups in nature, such as the well-recognized Galapagos finches, have motivated much theoretical research aimed at understanding the processes associated with such radiations. The Penna model is one such model and has been widely used to study aging. In this paper we use the basic Penna model to investigate the process of sympatric speciation in a simple food web model. Initially our web consists of a primary food source and a single herbivore species that feeds on this resource. Subsequently we introduce a predator that feeds on the herbivore. In both instances we directly manipulate the food source, that is, its size distribution, and monitor the changes in the population structures. Sympatric speciation is obtained for the consumer species in both webs, and our results confirm that the speciation velocity depends on how far up in the food chain the focal population is feeding. Simulations are done with three different sexual imprinting-like mechanisms, in order to discuss adaptation by natural selection.
2005.08701
Taegeun Song
Woo Seok Lee, Junghyo Jo, and Taegeun Song
Machine learning for the diagnosis of early stage diabetes using temporal glucose profiles
4 pages, 2 figure
null
10.1007/s40042-021-00056-8
null
q-bio.QM cs.LG eess.SP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning shows remarkable success in recognizing patterns in data. Here we apply machine learning (ML) to the diagnosis of early-stage diabetes, which is known as a challenging task in medicine. Blood glucose levels are tightly regulated by two counter-regulatory hormones, insulin and glucagon, and the failure of glucose homeostasis leads to the common metabolic disease, diabetes mellitus. It is a chronic disease with a long latent period, which complicates detection of the disease at an early stage. The vast majority of diabetes cases result from diminished effectiveness of insulin action. This insulin resistance must modify the temporal profile of blood glucose. Thus we propose to use ML to detect the subtle change in the temporal pattern of glucose concentration. Time series data of blood glucose with sufficient resolution is currently unavailable, so we confirm the proposal using synthetic glucose profiles produced by a biophysical model that accounts for glucose regulation and hormone action. Multi-layered perceptrons, convolutional neural networks, and recurrent neural networks all identified the degree of insulin resistance with high accuracy, above $85\%$.
[ { "created": "Mon, 18 May 2020 13:31:12 GMT", "version": "v1" } ]
2021-02-24
[ [ "Lee", "Woo Seok", "" ], [ "Jo", "Junghyo", "" ], [ "Song", "Taegeun", "" ] ]
Machine learning shows remarkable success in recognizing patterns in data. Here we apply machine learning (ML) to the diagnosis of early-stage diabetes, which is known as a challenging task in medicine. Blood glucose levels are tightly regulated by two counter-regulatory hormones, insulin and glucagon, and the failure of glucose homeostasis leads to the common metabolic disease, diabetes mellitus. It is a chronic disease with a long latent period, which complicates detection of the disease at an early stage. The vast majority of diabetes cases result from diminished effectiveness of insulin action. This insulin resistance must modify the temporal profile of blood glucose. Thus we propose to use ML to detect the subtle change in the temporal pattern of glucose concentration. Time series data of blood glucose with sufficient resolution is currently unavailable, so we confirm the proposal using synthetic glucose profiles produced by a biophysical model that accounts for glucose regulation and hormone action. Multi-layered perceptrons, convolutional neural networks, and recurrent neural networks all identified the degree of insulin resistance with high accuracy, above $85\%$.
1206.4862
Matjaz Perc
Daqing Guo, Qingyun Wang, Matjaz Perc
Complex synchronous behavior in interneuronal networks with delayed inhibitory and fast electrical synapses
8 two-column pages, 7 figures; accepted for publication in Physical Review E
Phys. Rev. E 85 (2012) 061905
10.1103/PhysRevE.85.061905
null
q-bio.NC nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networks of fast-spiking interneurons are crucial for the generation of neural oscillations in the brain. Here we study the synchronous behavior of interneuronal networks that are coupled by delayed inhibitory and fast electrical synapses. We find that both coupling modes play a crucial role in the synchronization of the network. In addition, delayed inhibitory synapses affect the emerging oscillatory patterns. By increasing the inhibitory synaptic delay, we observe a transition from regular to mixed oscillatory patterns at a critical value. We also examine how the unreliability of inhibitory synapses influences the emergence of synchronization and the oscillatory patterns. We find that low levels of reliability tend to destroy synchronization, and moreover, that interneuronal networks with long inhibitory synaptic delays require a minimal level of reliability for the mixed oscillatory pattern to be maintained.
[ { "created": "Thu, 21 Jun 2012 13:08:32 GMT", "version": "v1" } ]
2012-06-22
[ [ "Guo", "Daqing", "" ], [ "Wang", "Qingyun", "" ], [ "Perc", "Matjaz", "" ] ]
Networks of fast-spiking interneurons are crucial for the generation of neural oscillations in the brain. Here we study the synchronous behavior of interneuronal networks that are coupled by delayed inhibitory and fast electrical synapses. We find that both coupling modes play a crucial role in the synchronization of the network. In addition, delayed inhibitory synapses affect the emerging oscillatory patterns. By increasing the inhibitory synaptic delay, we observe a transition from regular to mixed oscillatory patterns at a critical value. We also examine how the unreliability of inhibitory synapses influences the emergence of synchronization and the oscillatory patterns. We find that low levels of reliability tend to destroy synchronization, and moreover, that interneuronal networks with long inhibitory synaptic delays require a minimal level of reliability for the mixed oscillatory pattern to be maintained.
1507.01731
Kumar Sankar Ray
Kumar Sankar Ray and Mandrita Mondal
Prediction of Radiation Fog by DNA Computing
36 pages
null
null
null
q-bio.BM cs.AI cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a wet lab algorithm for the prediction of radiation fog by DNA computing. The concept of DNA computing is exploited to generate the classifier algorithm in the wet lab. The classifier is based on a new concept of similarity-based fuzzy reasoning suitable for wet lab implementation. This new concept of similarity-based fuzzy reasoning differs from the conventional approach to fuzzy reasoning based on similarity measures, and it replaces the logical aspect of classical fuzzy reasoning with DNA chemistry. Thus, we add a new dimension to existing forms of fuzzy reasoning by bringing it down to the nanoscale. We exploit the massive parallelism of DNA computing by designing this new classifier in the wet lab. The newly designed classifier is highly generalized in nature, and apart from the prediction of radiation fog, this methodology can be applied to other types of data as well. To achieve our goal, we first fuzzify the given observed parameters into a synthetic DNA sequence, called fuzzy DNA, which handles the vague concepts of human reasoning.
[ { "created": "Tue, 7 Jul 2015 09:57:44 GMT", "version": "v1" } ]
2015-07-08
[ [ "Ray", "Kumar Sankar", "" ], [ "Mondal", "Mandrita", "" ] ]
In this paper we propose a wet lab algorithm for the prediction of radiation fog by DNA computing. The concept of DNA computing is exploited to generate the classifier algorithm in the wet lab. The classifier is based on a new concept of similarity-based fuzzy reasoning suitable for wet lab implementation. This new concept of similarity-based fuzzy reasoning differs from the conventional approach to fuzzy reasoning based on similarity measures, and it replaces the logical aspect of classical fuzzy reasoning with DNA chemistry. Thus, we add a new dimension to existing forms of fuzzy reasoning by bringing it down to the nanoscale. We exploit the massive parallelism of DNA computing by designing this new classifier in the wet lab. The newly designed classifier is highly generalized in nature, and apart from the prediction of radiation fog, this methodology can be applied to other types of data as well. To achieve our goal, we first fuzzify the given observed parameters into a synthetic DNA sequence, called fuzzy DNA, which handles the vague concepts of human reasoning.
1304.1072
Matthew Seetin
Matthew G. Seetin, Wipapat Kladwang, J. P. Bida, and Rhiju Das
Massively Parallel RNA Chemical Mapping with a Reduced Bias MAP-seq Protocol
22 pages, 5 figures
null
null
null
q-bio.BM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemical mapping methods probe RNA structure by revealing and leveraging correlations of a nucleotide's structural accessibility or flexibility with its reactivity to various chemical probes. Pioneering work by Lucks and colleagues has expanded this method to probe hundreds of molecules at once on an Illumina sequencing platform, obviating the use of slab gels or capillary electrophoresis on one molecule at a time. Here, we describe optimizations to this method from our lab, resulting in the MAP-seq protocol (Multiplexed Accessibility Probing read out through sequencing), version 1.0. The protocol permits the quantitative probing of thousands of RNAs at once, by several chemical modification reagents, on the time scale of a day using a table-top Illumina machine. This method and a software package MAPseeker (http://simtk.org/home/map_seeker) address several potential sources of bias, by eliminating PCR steps, improving ligation efficiencies of ssDNA adapters, and avoiding problematic heuristics in prior algorithms. We hope that the step-by-step description of MAP-seq 1.0 will help other RNA mapping laboratories to transition from electrophoretic to next-generation sequencing methods and to further reduce the turnaround time and any remaining biases of the protocol.
[ { "created": "Wed, 3 Apr 2013 19:53:19 GMT", "version": "v1" } ]
2013-04-04
[ [ "Seetin", "Matthew G.", "" ], [ "Kladwang", "Wipapat", "" ], [ "Bida", "J. P.", "" ], [ "Das", "Rhiju", "" ] ]
Chemical mapping methods probe RNA structure by revealing and leveraging correlations of a nucleotide's structural accessibility or flexibility with its reactivity to various chemical probes. Pioneering work by Lucks and colleagues has expanded this method to probe hundreds of molecules at once on an Illumina sequencing platform, obviating the use of slab gels or capillary electrophoresis on one molecule at a time. Here, we describe optimizations to this method from our lab, resulting in the MAP-seq protocol (Multiplexed Accessibility Probing read out through sequencing), version 1.0. The protocol permits the quantitative probing of thousands of RNAs at once, by several chemical modification reagents, on the time scale of a day using a table-top Illumina machine. This method and a software package MAPseeker (http://simtk.org/home/map_seeker) address several potential sources of bias, by eliminating PCR steps, improving ligation efficiencies of ssDNA adapters, and avoiding problematic heuristics in prior algorithms. We hope that the step-by-step description of MAP-seq 1.0 will help other RNA mapping laboratories to transition from electrophoretic to next-generation sequencing methods and to further reduce the turnaround time and any remaining biases of the protocol.
1509.03775
Ronald Vale
Ronald D. Vale
Accelerating Scientific Publication in Biology
39 pages, 6 figures, 1 table, and a Q&A related to pre-prints
null
10.1073/pnas.1511912112
null
q-bio.OT cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scientific publications enable results and ideas to be transmitted throughout the scientific community. The number and type of journal publications also have become the primary criteria used in evaluating career advancement. Our analysis suggests that publication practices have changed considerably in the life sciences over the past thirty years. More experimental data is now required for publication, and the average time required for graduate students to publish their first paper has increased and is approaching the desirable duration of Ph.D. training. Since publication is generally a requirement for career progression, schemes to reduce the time of graduate student and postdoctoral training may be difficult to implement without also considering new mechanisms for accelerating communication of their work. The increasing time to publication also delays potential catalytic effects that ensue when many scientists have access to new information. The time has come for life scientists, funding agencies, and publishers to discuss how to communicate new findings in a way that best serves the interests of the public and the scientific community.
[ { "created": "Sat, 12 Sep 2015 20:11:21 GMT", "version": "v1" } ]
2016-04-27
[ [ "Vale", "Ronald D.", "" ] ]
Scientific publications enable results and ideas to be transmitted throughout the scientific community. The number and type of journal publications also have become the primary criteria used in evaluating career advancement. Our analysis suggests that publication practices have changed considerably in the life sciences over the past thirty years. More experimental data is now required for publication, and the average time required for graduate students to publish their first paper has increased and is approaching the desirable duration of Ph.D. training. Since publication is generally a requirement for career progression, schemes to reduce the time of graduate student and postdoctoral training may be difficult to implement without also considering new mechanisms for accelerating communication of their work. The increasing time to publication also delays potential catalytic effects that ensue when many scientists have access to new information. The time has come for life scientists, funding agencies, and publishers to discuss how to communicate new findings in a way that best serves the interests of the public and the scientific community.
1612.04049
Coralie Fritsch
Coralie Fritsch (IECL, CMAP, TOSCA), Fabien Campillo (MATHNEURO), Otso Ovaskainen
A numerical approach to determine mutant invasion fitness and evolutionary singular strategies
null
Theoretical Population Biology, Elsevier, 2017, 115
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a numerical approach to study the invasion fitness of a mutant and to determine evolutionary singular strategies in evolutionary structured models in which the competitive exclusion principle holds. Our approach is based on a dual representation, which consists of the modelling of the small mutant population by a stochastic model and the computation of its corresponding deterministic model. The use of the deterministic model greatly facilitates the numerical determination of the feasibility of invasion as well as the convergence-stability of the evolutionary singular strategy. Our approach combines standard adaptive dynamics with the link between the mutant survival criterion in the stochastic model and the sign of the eigenvalue in the corresponding deterministic model. We present our method in the context of a mass-structured individual-based chemostat model. We exploit a previously derived mathematical relationship between stochastic and deterministic representations of the mutant population in the chemostat model to derive a general numerical method for analyzing the invasion fitness in the stochastic models. Our method can be applied to the broad class of evolutionary models for which a link between the stochastic and deterministic invasion fitnesses can be established.
[ { "created": "Tue, 13 Dec 2016 07:41:25 GMT", "version": "v1" }, { "created": "Tue, 23 May 2017 13:49:18 GMT", "version": "v2" } ]
2017-05-24
[ [ "Fritsch", "Coralie", "", "IECL, CMAP, TOSCA" ], [ "Campillo", "Fabien", "", "MATHNEURO" ], [ "Ovaskainen", "Otso", "" ] ]
We propose a numerical approach to study the invasion fitness of a mutant and to determine evolutionary singular strategies in evolutionary structured models in which the competitive exclusion principle holds. Our approach is based on a dual representation, which consists of the modelling of the small mutant population by a stochastic model and the computation of its corresponding deterministic model. The use of the deterministic model greatly facilitates the numerical determination of the feasibility of invasion as well as the convergence-stability of the evolutionary singular strategy. Our approach combines standard adaptive dynamics with the link between the mutant survival criterion in the stochastic model and the sign of the eigenvalue in the corresponding deterministic model. We present our method in the context of a mass-structured individual-based chemostat model. We exploit a previously derived mathematical relationship between stochastic and deterministic representations of the mutant population in the chemostat model to derive a general numerical method for analyzing the invasion fitness in the stochastic models. Our method can be applied to the broad class of evolutionary models for which a link between the stochastic and deterministic invasion fitnesses can be established.
0706.2383
Jason Locasale W
Jason W. Locasale
Allovalency revisited: an analysis of multisite phosphorylation and substrate rebinding
44 pages, 5 figures; accepted Journal of Chemical Physics
null
10.1063/1.2841124
null
q-bio.SC q-bio.MN
null
The utilization of multiple phosphorylation sites in regulating a biological response is ubiquitous in cell signaling. If each site contributes an additional, equivalent binding site, then one consequence of an increase in the number of phosphorylations may be to increase the probability that, upon dissociation, a ligand immediately rebinds to its receptor. How such effects may influence cell signaling systems has been less studied. Here, a self-consistent integral equation formalism for ligand rebinding, in conjunction with Monte Carlo simulations, is employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses. Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects are predicted. Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly. The model also predicts that ligand rebinding in itself appears insufficient to generate a highly cooperative biological response.
[ { "created": "Fri, 15 Jun 2007 22:49:09 GMT", "version": "v1" }, { "created": "Tue, 19 Jun 2007 13:14:07 GMT", "version": "v2" }, { "created": "Mon, 25 Jun 2007 17:35:08 GMT", "version": "v3" }, { "created": "Wed, 9 Jan 2008 22:17:27 GMT", "version": "v4" } ]
2009-11-13
[ [ "Locasale", "Jason W.", "" ] ]
The utilization of multiple phosphorylation sites in regulating a biological response is ubiquitous in cell signaling. If each site contributes an additional, equivalent binding site, then one consequence of an increase in the number of phosphorylations may be to increase the probability that, upon dissociation, a ligand immediately rebinds to its receptor. How such effects may influence cell signaling systems has been less studied. Here, a self-consistent integral equation formalism for ligand rebinding, in conjunction with Monte Carlo simulations, is employed to further investigate the effects of multiple, equivalent binding sites on shaping biological responses. Multiple regimes that characterize qualitatively different physics due to the differential prevalence of rebinding effects are predicted. Calculations suggest that when ligand rebinding contributes significantly to the dose response, a purely allovalent model can influence the binding curves nonlinearly. The model also predicts that ligand rebinding in itself appears insufficient to generate a highly cooperative biological response.
1302.6771
Su-Chan Park
Su-Chan Park and Joachim Krug
Rate of adaptation in sexuals and asexuals: A solvable model of the Fisher-Muller effect
Title has been changed. Supporting Information (animation) can be found in the source file. 53 pages. 10 figures. To appear in Genetics
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The adaptation of large asexual populations is hampered by the competition between independently arising beneficial mutations in different individuals, which is known as clonal interference. Fisher and Muller proposed that recombination provides an evolutionary advantage in large populations by alleviating this competition. Based on recent progress in quantifying the speed of adaptation in asexual populations undergoing clonal interference, we present a detailed analysis of the Fisher-Muller mechanism for a model genome consisting of two loci with an infinite number of beneficial alleles each and multiplicative fitness effects. We solve the infinite population dynamics exactly and show that, for a particular, natural mutation scheme, the speed of adaptation in sexuals is twice as large as in asexuals. Guided by the infinite population result and by previous work on asexual adaptation, we postulate an expression for the speed of adaptation in finite sexual populations that agrees with numerical simulations over a wide range of population sizes and recombination rates. The ratio of the sexual to asexual adaptation speed is a function of population size that increases in the clonal interference regime and approaches 2 for extremely large populations. The simulations also show that the imbalance between the numbers of accumulated mutations at the two loci is strongly suppressed even by a small amount of recombination. The generalization of the model to an arbitrary number $L$ of loci is briefly discussed. If each offspring samples the alleles at each locus from the gene pool of the whole population rather than from two parents, the ratio of the sexual to asexual adaptation speed is approximately equal to $L$ in large populations. A possible realization of this scenario is the reassortment of genetic material in RNA viruses with $L$ genomic segments.
[ { "created": "Wed, 27 Feb 2013 13:54:21 GMT", "version": "v1" }, { "created": "Thu, 15 Aug 2013 15:31:51 GMT", "version": "v2" } ]
2013-08-16
[ [ "Park", "Su-Chan", "" ], [ "Krug", "Joachim", "" ] ]
The adaptation of large asexual populations is hampered by the competition between independently arising beneficial mutations in different individuals, which is known as clonal interference. Fisher and Muller proposed that recombination provides an evolutionary advantage in large populations by alleviating this competition. Based on recent progress in quantifying the speed of adaptation in asexual populations undergoing clonal interference, we present a detailed analysis of the Fisher-Muller mechanism for a model genome consisting of two loci with an infinite number of beneficial alleles each and multiplicative fitness effects. We solve the infinite population dynamics exactly and show that, for a particular, natural mutation scheme, the speed of adaptation in sexuals is twice as large as in asexuals. Guided by the infinite population result and by previous work on asexual adaptation, we postulate an expression for the speed of adaptation in finite sexual populations that agrees with numerical simulations over a wide range of population sizes and recombination rates. The ratio of the sexual to asexual adaptation speed is a function of population size that increases in the clonal interference regime and approaches 2 for extremely large populations. The simulations also show that the imbalance between the numbers of accumulated mutations at the two loci is strongly suppressed even by a small amount of recombination. The generalization of the model to an arbitrary number $L$ of loci is briefly discussed. If each offspring samples the alleles at each locus from the gene pool of the whole population rather than from two parents, the ratio of the sexual to asexual adaptation speed is approximately equal to $L$ in large populations. A possible realization of this scenario is the reassortment of genetic material in RNA viruses with $L$ genomic segments.
1307.0451
Valentina Agoni
Valentina Agoni
Insights Into Quantitative Biology: analysis of cellular adaptation
8 pages, 3 figures
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, many powerful techniques have emerged to measure protein interactions as well as gene expression. Much progress has been made since the introduction of these techniques, but little of it toward the quantitative analysis of data. In this paper we show how to study cellular adaptation and how to detect cellular subpopulations. Moreover, we analyze the dynamics of signal transduction pathways in greater depth.
[ { "created": "Mon, 1 Jul 2013 17:52:37 GMT", "version": "v1" } ]
2013-07-02
[ [ "Agoni", "Valentina", "" ] ]
In recent years, many powerful techniques have emerged to measure protein interactions as well as gene expression. Much progress has been made since the introduction of these techniques, but little of it toward the quantitative analysis of data. In this paper we show how to study cellular adaptation and how to detect cellular subpopulations. Moreover, we analyze the dynamics of signal transduction pathways in greater depth.
1608.05665
Danielle Bassett
Danielle S. Bassett and Edward T. Bullmore
Small-World Brain Networks Revisited
13 pages, 8 figures in The Neuroscientist, 2016
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is nearly 20 years since the concept of a small-world network was first quantitatively defined, by a combination of high clustering and short path length; and about 10 years since this metric of complex network topology began to be widely applied to analysis of neuroimaging and other neuroscience data as part of the rapid growth of the new field of connectomics. Here we review briefly the foundational concepts of graph theoretical estimation and generation of small-world networks. We take stock of some of the key developments in the field in the past decade and we consider in some detail the implications of recent studies using high-resolution tract-tracing methods to map the anatomical networks of the macaque and the mouse. In doing so, we draw attention to the important methodological distinction between topological analysis of binary or unweighted graphs, which have provided a popular but simple approach to brain network analysis in the past, and the topology of weighted graphs, which retain more biologically relevant information and are more appropriate to the increasingly sophisticated data on brain connectivity emerging from contemporary tract-tracing and other imaging studies. We conclude by highlighting some possible future trends in the further development of weighted small-worldness as part of a deeper and broader understanding of the topology and the functional value of the strong and weak links between areas of mammalian cortex.
[ { "created": "Fri, 19 Aug 2016 16:34:24 GMT", "version": "v1" } ]
2016-08-22
[ [ "Bassett", "Danielle S.", "" ], [ "Bullmore", "Edward T.", "" ] ]
It is nearly 20 years since the concept of a small-world network was first quantitatively defined, by a combination of high clustering and short path length; and about 10 years since this metric of complex network topology began to be widely applied to analysis of neuroimaging and other neuroscience data as part of the rapid growth of the new field of connectomics. Here we review briefly the foundational concepts of graph theoretical estimation and generation of small-world networks. We take stock of some of the key developments in the field in the past decade and we consider in some detail the implications of recent studies using high-resolution tract-tracing methods to map the anatomical networks of the macaque and the mouse. In doing so, we draw attention to the important methodological distinction between topological analysis of binary or unweighted graphs, which have provided a popular but simple approach to brain network analysis in the past, and the topology of weighted graphs, which retain more biologically relevant information and are more appropriate to the increasingly sophisticated data on brain connectivity emerging from contemporary tract-tracing and other imaging studies. We conclude by highlighting some possible future trends in the further development of weighted small-worldness as part of a deeper and broader understanding of the topology and the functional value of the strong and weak links between areas of mammalian cortex.
q-bio/0406013
G\'abor Cs\'anyi
Bal\'azs Szendr\"oi, G\'abor Cs\'anyi
Polynomial epidemics and clustering in contact networks
null
null
null
null
q-bio.PE
null
It is widely known that the spread of the HIV virus was slower than exponential in several populations, even at the very beginning of the epidemic. We show that this implies a significant reduction in the effective reproductive rate of the epidemic, and describe a general mechanism, related to the clustering properties of the disease transmission network, capable of explaining this reduction. Our considerations provide a new angle on polynomial epidemic processes, and may have implications for the choice of strategy against such epidemics.
[ { "created": "Fri, 4 Jun 2004 21:12:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Szendröi", "Balázs", "" ], [ "Csányi", "Gábor", "" ] ]
It is widely known that the spread of the HIV virus was slower than exponential in several populations, even at the very beginning of the epidemic. We show that this implies a significant reduction in the effective reproductive rate of the epidemic, and describe a general mechanism, related to the clustering properties of the disease transmission network, capable of explaining this reduction. Our considerations provide a new angle on polynomial epidemic processes, and may have implications for the choice of strategy against such epidemics.
2207.04487
Steven Frank
Steven A. Frank
Automatic differentiation and the optimization of differential equation models in biology
null
null
10.3389/fevo.2022.1010278
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
A computational revolution unleashed the power of artificial neural networks. At the heart of that revolution is automatic differentiation, which calculates the derivative of a performance measure relative to a large number of parameters. Differentiation enhances the discovery of improved performance in large models, an achievement that was previously difficult or impossible. Recently, a second computational advance optimizes the temporal trajectories traced by differential equations. Optimization requires differentiating a measure of performance over a trajectory, such as the closeness of tracking the environment, with respect to the parameters of the differential equations. Because model trajectories are usually calculated numerically by multistep algorithms, such as Runge-Kutta, the automatic differentiation must be passed through the numerical algorithm. This article explains how such automatic differentiation of trajectories is achieved. It also discusses why such computational breakthroughs are likely to advance theoretical and statistical studies of biological problems, in which one can consider variables as dynamic paths over time and space. Many common problems arise between improving success in computational learning models over performance landscapes, improving evolutionary fitness over adaptive landscapes, and improving statistical fits to data over information landscapes.
[ { "created": "Sun, 10 Jul 2022 15:27:26 GMT", "version": "v1" }, { "created": "Tue, 11 Oct 2022 14:54:56 GMT", "version": "v2" } ]
2023-12-27
[ [ "Frank", "Steven A.", "" ] ]
A computational revolution unleashed the power of artificial neural networks. At the heart of that revolution is automatic differentiation, which calculates the derivative of a performance measure relative to a large number of parameters. Differentiation enhances the discovery of improved performance in large models, an achievement that was previously difficult or impossible. Recently, a second computational advance optimizes the temporal trajectories traced by differential equations. Optimization requires differentiating a measure of performance over a trajectory, such as the closeness of tracking the environment, with respect to the parameters of the differential equations. Because model trajectories are usually calculated numerically by multistep algorithms, such as Runge-Kutta, the automatic differentiation must be passed through the numerical algorithm. This article explains how such automatic differentiation of trajectories is achieved. It also discusses why such computational breakthroughs are likely to advance theoretical and statistical studies of biological problems, in which one can consider variables as dynamic paths over time and space. Many common problems arise between improving success in computational learning models over performance landscapes, improving evolutionary fitness over adaptive landscapes, and improving statistical fits to data over information landscapes.
1210.7164
Roeland M.H. Merks
Margriet M. Palm and Roeland M. H. Merks
Vascular networks due to dynamically arrested crystalline ordering of elongated cells
11 pages, 4 figures. Published as: Palm and Merks (2013) Physical Review E 87, 012725. The present version includes a correction in the calculation of the nematic order parameter. Erratum submitted to PRE on Jun 5th 2013. The correction does not affect the conclusions
Physical Review E 87, 012725 (2013)
10.1103/PhysRevE.87.012725
null
q-bio.CB cond-mat.soft physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experimental and theoretical studies suggest that crystallization and glass-like solidification are useful analogies for understanding cell ordering in confluent biological tissues. It remains unexplored how cellular ordering contributes to pattern formation during morphogenesis. With a computational model we show that a system of elongated, cohering biological cells can get dynamically arrested in a network pattern. Our model provides a new explanation for the formation of cellular networks in culture systems that exclude intercellular interaction via chemotaxis or mechanical traction.
[ { "created": "Fri, 26 Oct 2012 15:08:36 GMT", "version": "v1" }, { "created": "Wed, 9 Jan 2013 21:19:01 GMT", "version": "v2" }, { "created": "Wed, 5 Jun 2013 13:21:12 GMT", "version": "v3" } ]
2015-03-13
[ [ "Palm", "Margriet M.", "" ], [ "Merks", "Roeland M. H.", "" ] ]
Recent experimental and theoretical studies suggest that crystallization and glass-like solidification are useful analogies for understanding cell ordering in confluent biological tissues. It remains unexplored how cellular ordering contributes to pattern formation during morphogenesis. With a computational model we show that a system of elongated, cohering biological cells can get dynamically arrested in a network pattern. Our model provides a new explanation for the formation of cellular networks in culture systems that exclude intercellular interaction via chemotaxis or mechanical traction.
2010.07897
James Zou
James Zou, Aubrey Johnson, Jeanelle France, Srinidhi Bharadwaj, Zeljko Tomljanovic, Yaakov Stern, Adam M. Brickman, Devangere P. Devanand, Jose A. Luchsinger, William C. Kreisl, and Frank A. Provenzano
Spatial Registration Evaluation of [18F]-MK6240 PET
19 pages, 8 Figures, 4 Tables
null
null
null
q-bio.QM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image registration is an important preprocessing step in neuroimaging which allows for the matching of anatomical and functional information between modalities and subjects. This can be challenging if there are gross differences in image geometry or in signal intensity, such as in the case of some molecular PET radioligands, where control subjects display a relative lack of signal within intracranial regions and may have off-target binding that can be confused with other regions, varying by subject. The use of intermediary images or volumes has been shown to aid registration in such cases. To account for these phenomena within our own longitudinal aging cohort, we generated a population-specific MRI and PET template from a broad distribution of 30 amyloid-negative subjects. We then registered the PET image of each of these subjects, as well as a holdout set of thirty 'template-naive' subjects, to their corresponding MRI images using the template image as an intermediate, under three different sets of registration parameters and procedures. To evaluate the performance of both conventional registration and our method, we compared these to the registration of the attenuation CT (acquired at the time of PET acquisition) to MRI as the reference. We then used our template to directly derive SUVR values without the use of MRI. We found that conventional registration was comparable to an existing CT-based standard, and there was no significant difference in errors collectively among all methods tested. In addition, there were no significant differences between existing and MR-less tau PET quantification methods. We conclude that a template-based method is a feasible alternative to, or salvage for, direct registration and MR-less quantification, and may be preferred in cases where there is doubt about the similarity between two image modalities.
[ { "created": "Thu, 15 Oct 2020 17:21:59 GMT", "version": "v1" } ]
2020-10-16
[ [ "Zou", "James", "" ], [ "Johnson", "Aubrey", "" ], [ "France", "Jeanelle", "" ], [ "Bharadwaj", "Srinidhi", "" ], [ "Tomljanovic", "Zeljko", "" ], [ "Stern", "Yaakov", "" ], [ "Brickman", "Adam M.", "" ], [ "Devanand", "Devangere P.", "" ], [ "Luchsinger", "Jose A.", "" ], [ "Kreisl", "William C.", "" ], [ "Provenzano", "Frank A.", "" ] ]
1808.07850
Jorge F. Mejias
R. R. Deza, J. I. Deza, N. Martinez, J. F. Mejias, H. S. Wio
A nonequilibrium-potential approach to competition in neural populations
16 pages, 3 figures
null
null
null
q-bio.NC cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
Energy landscapes are a useful aid for the understanding of dynamical systems, and a valuable tool for their analysis. For a broad class of rate models of neural networks, we derive a global Lyapunov function which provides an energy landscape without any symmetry constraint. This newly obtained `nonequilibrium potential' (NEP) predicts with high accuracy the outcomes of the dynamics in the globally stable cases studied here. Common features of the models in this class are bistability -- with implications for working memory and slow neural oscillations -- and `population burst', also relevant in neuroscience. In contrast, limit cycles are not found; their nonexistence can be proven by resorting to the Bendixson--Dulac theorem, at least when the NEP remains positive and in the (also generic) singular limit of these models. Hopefully, this NEP will help in understanding average neural network dynamics from a more formal standpoint, and will also be of help in the description of large heterogeneous neural networks.
[ { "created": "Thu, 23 Aug 2018 17:18:53 GMT", "version": "v1" } ]
2018-08-24
[ [ "Deza", "R. R.", "" ], [ "Deza", "J. I.", "" ], [ "Martinez", "N.", "" ], [ "Mejias", "J. F.", "" ], [ "Wio", "H. S.", "" ] ]
2402.17650
Mohammad Ghalavand
Mohammad Ghalavand, Javad Hatami, Seyed Kamaledin Setarehdan, Fatimah Nosrati, Hananeh Ghalavand, Ali Nikhalat-Jahromi
Comparison of the Effects of Interaction with Intentional Agent and Artificial Intelligence using fNIRS
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
As societal interactions increasingly involve both intentional and unintentional agents, understanding their effects on human cognition becomes paramount. This study investigates the neural correlates of interacting with intentional versus artificial agents in a simulated tennis game scenario. Employing functional near-infrared spectroscopy (fNIRS), brain activity was analyzed in 50 male participants during gameplay against both types of opponents. The methodological approach used ensures ecological validity by simulating real-world decision-making scenarios while participants undergo fNIRS scanning, avoiding the constraints of traditional neuroimaging methods. The analysis focused on six prefrontal cortex channels, positioned according to the 10-20 system, to capture nuanced differences in brain activity. Wavelet analysis was utilized to decompose the data into frequency-specific differences, revealing subtle variations across different channels and frequency bands. Moreover, activity was quantified by comparing average data signals between rest and play modes across all points using a Generalized Linear Model (GLM). The findings unveil significant differences in neural activation patterns, particularly in one specific channel and frequency range, suggesting distinct cognitive processing when interacting with intentional agents. These results align with previous neuroimaging studies and contribute to understanding the neural underpinnings of human-agent interactions in naturalistic settings. While acknowledging study limitations, including sample homogeneity and spatial accuracy constraints, the findings underscore the potential of fNIRS in exploring complex cognitive phenomena beyond laboratory confines.
[ { "created": "Tue, 27 Feb 2024 16:20:46 GMT", "version": "v1" }, { "created": "Tue, 4 Jun 2024 07:31:10 GMT", "version": "v2" } ]
2024-06-05
[ [ "Ghalavand", "Mohammad", "" ], [ "Hatami", "Javad", "" ], [ "Setarehdan", "Seyed Kamaledin", "" ], [ "Nosrati", "Fatimah", "" ], [ "Ghalavand", "Hananeh", "" ], [ "Nikhalat-Jahromi", "Ali", "" ] ]
1506.02321
Ildefonso De la Fuente M
Ildefonso M. De la Fuente, Iker Malaina, Alberto Perez-Samartin, Jesus M. Cortes, Asier Erramuzpe, Maria Dolores Boyano, Carlos Bringas, Maria Fedetz and Luis Martinez
Information and long term memory in calcium signals
35 pages and 7 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unicellular organisms are open metabolic systems that need to process information about their external environment in order to survive. In most types of tissues and organisms, cells use calcium signaling to carry information from the extracellular side of the plasma membrane to the different metabolic targets of their internal medium. This information may be encoded in the amplitude, frequency, duration, waveform or timing of the calcium oscillations. Thus, specific information coming from extracellular stimuli can be encoded in the calcium signal and decoded again later at different locations within the cell. Despite its cellular importance, little is known about the quantitative informational properties of the calcium concentration dynamics inside the cell. In order to understand some of these informational properties, we have studied experimental Ca2+ series of Xenopus laevis oocytes under different external pH stimuli. The data have been analyzed by means of information-theoretic approaches such as Conditional Entropy and Information Retention, and other nonlinear dynamics tools such as the power spectrum, the largest Lyapunov exponent and the bridge-detrended Scaled Window Variance analysis. We have quantified the biomolecular information flows of the experimental data in bits, and essential aspects of the information contained in the experimental calcium fluxes have been exhaustively analyzed. Our main result shows that inside all the studied intracellular Ca2+ flows a highly organized informational structure emerges, which exhibits deterministic chaotic behavior, long-term memory and complex oscillations of the uncertainty reduction based on past values. Understanding the informational properties of calcium signals is one of the key elements in elucidating the physiological functional coupling of the cell and the integrative dynamics of cellular life.
[ { "created": "Sun, 7 Jun 2015 22:56:24 GMT", "version": "v1" } ]
2015-06-09
[ [ "De la Fuente", "Ildefonso M.", "" ], [ "Malaina", "Iker", "" ], [ "Perez-Samartin", "Alberto", "" ], [ "Cortes", "Jesus M.", "" ], [ "Erramuzpe", "Asier", "" ], [ "Boyano", "Maria Dolores", "" ], [ "Bringas", "Carlos", "" ], [ "Fedetz", "Maria", "" ], [ "Martinez", "Luis", "" ] ]
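The conditional-entropy analysis mentioned in the abstract above can be illustrated on a symbolized time series. The sketch below is a generic histogram-based estimator applied to synthetic signals; the signals, bin count, and lag are hypothetical choices, not the authors' data or procedure:

```python
import numpy as np

def conditional_entropy(series, n_bins=4, lag=1):
    """H(X_t | X_{t-lag}) in bits for a 1-D series, via histogram symbolization."""
    # Symbolize the series into equal-width bins (symbols 0 .. n_bins-1)
    edges = np.linspace(series.min(), series.max(), n_bins + 1)[1:-1]
    symbols = np.digitize(series, edges)
    past, present = symbols[:-lag], symbols[lag:]
    # Empirical joint distribution of (past, present) symbol pairs
    joint = np.zeros((n_bins, n_bins))
    for p, q in zip(past, present):
        joint[p, q] += 1
    joint /= joint.sum()
    marg = joint.sum(axis=1)          # marginal distribution of the past symbol

    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # H(X_t | X_{t-lag}) = H(X_{t-lag}, X_t) - H(X_{t-lag})
    return H(joint.ravel()) - H(marg)

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
oscillation = np.sin(t) + 0.05 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)
# A smooth, strongly autocorrelated signal is far more predictable
# from its past than white noise:
ce_osc = conditional_entropy(oscillation)
ce_noise = conditional_entropy(noise)
```

A low conditional entropy means the signal's past strongly reduces uncertainty about its present, the kind of "uncertainty reduction based on past values" the abstract quantifies.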
1604.08134
M. D. Betterton
Zachary Gergely, Ammon Crapo, Loren E. Hough, J. Richard McIntosh, Meredith D. Betterton
Kinesin-8 effects on mitotic microtubule dynamics contribute to spindle function in fission yeast
Accepted by Molecular Biology of the Cell
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kinesin-8 motor proteins destabilize microtubules. Their absence during cell division is associated with disorganized mitotic chromosome movements and chromosome loss. Despite recent work studying the effects of kinesin-8s on microtubule dynamics, it remains unclear whether the kinesin-8 mitotic phenotypes are consequences of their effect on microtubule dynamics, their well-established motor activity, or additional unknown functions. To better understand the role of kinesin-8 proteins in mitosis, we have studied the effects of deletion of the fission-yeast kinesin-8 proteins Klp5 and Klp6 on chromosome movements and spindle length dynamics. Aberrant microtubule-driven kinetochore pushing movements and tripolar mitotic spindles occurred in cells lacking Klp5 but not Klp6. Kinesin-8 deletion strains showed large fluctuations in metaphase spindle length, suggesting a disruption of spindle length stabilization. Comparison of our results from light microscopy with a mathematical model suggests that kinesin-8-induced effects on microtubule dynamics, kinetochore attachment stability, and sliding force in the spindle can explain the aberrant chromosome movements and spindle length fluctuations observed.
[ { "created": "Wed, 27 Apr 2016 16:33:14 GMT", "version": "v1" } ]
2016-04-28
[ [ "Gergely", "Zachary", "" ], [ "Crapo", "Ammon", "" ], [ "Hough", "Loren E.", "" ], [ "McIntosh", "J. Richard", "" ], [ "Betterton", "Meredith D.", "" ] ]
1202.4868
Pascal Szacherski
Pierre Grangeat (LE2S), Pascal Szacherski (LE2S, IMS), Laurent Gerfault (LE2S), Jean-Fran\c{c}ois Giovannelli (IMS)
Bayesian hierarchical reconstruction of protein profiles including a digestion model
Oral presentation; 59th American Society for Mass Spectrometry Conference, Dallas (2011)
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction: Mass spectrometry (MS) approaches are very attractive for detecting protein panels in a sensitive, high-speed way. MS can be coupled to many proteomic separation techniques. However, controlling technological variability in these analytical chains is a critical point. Adequate information processing is mandatory for data analysis to take into account the complexity of the analysed mixture, to improve measurement reliability and to make the technology user-friendly. We therefore develop a hierarchical parametric probabilistic model of the LC-MS analytical chain that includes the technological variability. We introduce a Bayesian reconstruction methodology to recover the protein biomarker content in a robust way. We focus on the digestion step, since it is a major contributor to technological variability. Method: In this communication, we introduce a hierarchical model of the LC-MS analytical chain. Such a chain is a cascade of molecular events depicted by a graph structure, each node being associated with a molecular state such as protein, peptide and ion, and each branch with a molecular process such as digestion, ionisation and LC-MS separation. This molecular graph defines a hierarchical mixture model. We extend the Bayesian statistical framework we introduced previously [1] to this hierarchical description. As an example, we consider the digestion step. We describe the digestion process on a pair of peptides within the targeted protein as a Bernoulli random process, associated with a cleavage probability controlled by the digestion kinetic law.
[ { "created": "Wed, 22 Feb 2012 09:49:22 GMT", "version": "v1" } ]
2012-02-23
[ [ "Grangeat", "Pierre", "", "LE2S" ], [ "Szacherski", "Pascal", "", "LE2S, IMS" ], [ "Gerfault", "Laurent", "", "LE2S" ], [ "Giovannelli", "Jean-François", "", "IMS" ] ]
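The Bernoulli cleavage description in the abstract above lends itself to a short simulation sketch: each candidate cleavage site of a protein sequence is cut independently with a fixed probability. The sequence and the cleavage probability below are illustrative placeholders, not the paper's kinetic model:

```python
import random

def digest(protein, cleavage_prob, rng):
    """Simulate digestion: each residue boundary is a candidate cleavage
    site, cleaved independently with probability cleavage_prob
    (an independent Bernoulli trial per site)."""
    peptides, start = [], 0
    for i in range(1, len(protein)):
        if rng.random() < cleavage_prob:
            peptides.append(protein[start:i])
            start = i
    peptides.append(protein[start:])   # the trailing fragment
    return peptides

rng = random.Random(42)
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # arbitrary example sequence
peptides = digest(protein, cleavage_prob=0.2, rng=rng)
# Whatever the random draw, the peptides reassemble to the original sequence.
```

In the paper's fuller model, the cleavage probability would be set by the digestion kinetic law rather than a constant; the constant here only illustrates the Bernoulli structure.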
1310.0479
Ranjeetha Bharath
Ranjeetha Bharath and Jean-Jacques Slotine
Nonlinear Observer Design and Synchronization Analysis for Classical Models of Neural Oscillators
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work explores four classical nonlinear models of neural oscillators: the Hodgkin-Huxley model, the Fitzhugh-Nagumo model, the Morris-Lecar model, and the Hindmarsh-Rose model. Nonlinear contraction theory is used to develop observers and perform synchronization analysis on these systems. Neural oscillation and signaling models are based on the biological function of the neuron, with behavior mediated through the channeling of ions across the cell membrane. The variable assumed to be measured is the membrane potential, which may be obtained empirically through the use of a neuronal force-clamp system, or may be transmitted through the axon to other neurons. All other variables are estimated by using partial-state or full-state observers. Basic observer rate-convergence analysis is performed for the Fitzhugh-Nagumo system, partial-state observer design is performed for the Morris-Lecar system, and basic synchronization analysis is performed for both the Fitzhugh-Nagumo and the Hodgkin-Huxley systems.
[ { "created": "Tue, 1 Oct 2013 20:20:13 GMT", "version": "v1" } ]
2013-10-03
[ [ "Bharath", "Ranjeetha", "" ], [ "Slotine", "Jean-Jacques", "" ] ]
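The idea of estimating unmeasured variables from the membrane potential, as in the abstract above, can be sketched as a reduced-order observer for the Fitzhugh-Nagumo model: with v measured, the recovery variable w is estimated by a copy of its own dynamics, whose error is contracting (it decays like exp(-eps*b*t)). Parameter values are standard textbook choices, not taken from the paper:

```python
# Fitzhugh-Nagumo: dv/dt = v - v**3/3 - w + I,  dw/dt = eps*(v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5

def step(v, w, w_hat, dt=0.01):
    """One forward-Euler step of the system plus a reduced-order observer."""
    dv = v - v**3 / 3 - w + I
    dw = eps * (v + a - b * w)
    # Observer copies the w-dynamics, driven by the *measured* v; the
    # estimation error e = w_hat - w obeys de/dt = -eps*b*e, so e -> 0.
    dw_hat = eps * (v + a - b * w_hat)
    return v + dt * dv, w + dt * dw, w_hat + dt * dw_hat

v, w, w_hat = -1.0, 1.0, 0.0      # observer starts with a wrong estimate
for _ in range(200_000):          # integrate for 2000 time units
    v, w, w_hat = step(v, w, w_hat)
err = abs(w_hat - w)              # after the transient, essentially zero
```

This is the simplest instance of the partial-state observer idea; the paper's contraction-theoretic treatment covers the harder cases where the unmeasured dynamics are not so obviously contracting.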
0808.4014
Aaron Clauset
Aaron Clauset and Sidney Redner
Evolutionary Model of Species Body Mass Diversification
4 pages, 4 figures
Phys. Rev. Lett. 102, 038103 (2009)
10.1103/PhysRevLett.102.038103
null
q-bio.PE physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a quantitative model for the biological evolution of species body masses within large groups of related species, e.g., terrestrial mammals, in which body mass M evolves according to branching (speciation), multiplicative diffusion, and an extinction probability that increases logarithmically with mass. We describe this evolution in terms of a convection-diffusion-reaction equation for ln M. The steady-state behavior is in good agreement with empirical data on recent terrestrial mammals, and the time-dependent behavior also agrees with data on extinct mammal species between 95 and 50 Myr ago.
[ { "created": "Fri, 29 Aug 2008 01:37:04 GMT", "version": "v1" }, { "created": "Thu, 22 Jan 2009 16:42:14 GMT", "version": "v2" } ]
2009-01-22
[ [ "Clauset", "Aaron", "" ], [ "Redner", "Sidney", "" ] ]
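The model summarized above admits a direct stochastic sketch: species branch, daughter masses diffuse multiplicatively above a minimum viable mass, and the extinction probability grows logarithmically with mass. All constants below (sigma, beta, the baseline extinction probability, the population cap) are illustrative, not the paper's fitted values:

```python
import math
import random

def evolve(masses, steps, rng, sigma=0.3, m_min=1.0, beta=0.05, cap=5000):
    """Branching + multiplicative diffusion in mass, with an extinction
    probability that increases logarithmically with mass."""
    for _ in range(steps):
        # Speciation: every species spawns a daughter whose mass is the
        # parent mass times a lognormal factor, floored at m_min.
        daughters = [max(m_min, m * rng.lognormvariate(0.0, sigma))
                     for m in masses]
        masses = masses + daughters
        # Extinction: P grows logarithmically with mass (clipped to [0, 1]).
        masses = [m for m in masses
                  if rng.random() > min(1.0, 0.1 + beta * math.log(m))]
        if not masses:
            masses = [m_min]          # re-seed so the clade survives
        if len(masses) > cap:         # subsample for tractability
            masses = rng.sample(masses, cap)
    return masses

rng = random.Random(1)
final = evolve([1.0], steps=60, rng=rng)
```

Taking logs of the masses recovers the convection-diffusion-reaction picture of the abstract: multiplicative diffusion becomes additive diffusion in ln M, and the log-dependent extinction is the reaction term.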
2303.01444
Hector Zenil
Hector Zenil, Francisco Hern\'andez-Quiroz, Santiago Hern\'andez-Orozco, Abicumaran Uthamacumaran and Kourosh Saeb-Parsy
An Adaptive Computational Intelligence Approach to Personalised Health and Immune Age Characterisation from Common Haematological Markers
30 pages + appendix
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a simulated digital model that learns a person's optimal blood health over time. Using an adaptive learning algorithm, our model provides a risk assessment score that compares an individual's chronological age from birth to an estimate of a biological immune age derived from the score. We demonstrate its efficacy against real and synthetic data from medically relevant cases, extreme cases, and empirical blood cell count data from 100K records in the Centers for Disease Control and Prevention's National Health and Nutrition Examination Survey (CDC NHANES) spanning 13 years. We find that the score is informative when distinguishing healthy individuals from those with diseases, both self-reported and manifested in abnormal blood tests, providing an entry-level score for patient triaging. We show that analyzing an individual's Full Blood Count (FBC, or Complete Blood Count, CBC) test results over time allows us to calculate an immune age score that correlates with chronological age. The immune age score, derived solely from common hematological markers, can be widely used to serve the purposes of precision healthcare and predictive medicine.
[ { "created": "Thu, 2 Mar 2023 18:03:03 GMT", "version": "v1" }, { "created": "Fri, 15 Sep 2023 21:34:24 GMT", "version": "v2" }, { "created": "Thu, 16 Nov 2023 16:14:39 GMT", "version": "v3" }, { "created": "Thu, 14 Mar 2024 00:36:53 GMT", "version": "v4" }, { "created": "Tue, 19 Mar 2024 21:55:40 GMT", "version": "v5" } ]
2024-03-21
[ [ "Zenil", "Hector", "" ], [ "Hernández-Quiroz", "Francisco", "" ], [ "Hernández-Orozco", "Santiago", "" ], [ "Uthamacumaran", "Abicumaran", "" ], [ "Saeb-Parsy", "Kourosh", "" ] ]
1001.4845
Sitabhra Sinha
Sitabhra Sinha, T Jesan and Nivedita Chatterjee
Systems biology: From the cell to the brain
6 pages, 5 figures
Current Trends in Science: Platinum Jubilee Special (Ed. N Mukunda), Bangalore: Indian Academy of Sciences, 2009, pp 199-205
null
null
q-bio.MN physics.bio-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the completion of human genome mapping, the focus of scientists seeking to explain the biological complexity of living systems is shifting from analyzing the individual components (such as a particular gene or biochemical reaction) to understanding the set of interactions amongst the large number of components that results in the different functions of the organism. To this end, the area of systems biology attempts to achieve a "systems-level" description of biology by focusing on the network of interactions instead of the characteristics of its isolated parts. In this article, we briefly describe some of the emerging themes of research in "network" biology, looking at dynamical processes occurring at two different length scales, within the cell and between cells, viz., the intra-cellular signaling network and the nervous system. We show that focusing on the systems-level aspects of these problems allows one to observe surprising and illuminating common themes amongst them.
[ { "created": "Wed, 27 Jan 2010 03:35:20 GMT", "version": "v1" } ]
2010-01-28
[ [ "Sinha", "Sitabhra", "" ], [ "Jesan", "T", "" ], [ "Chatterjee", "Nivedita", "" ] ]
1611.04030
Konstantin Blyuss
N. Sherborne, J.C. Miller, K.B. Blyuss, I.Z. Kiss
Mean-field models for non-Markovian epidemics on networks: from edge-based compartmental to pairwise models
24 pages, 4 figures, submitted
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel extension of the edge-based compartmental model for epidemics with arbitrary distributions of transmission and recovery times. Using the message passing approach we also derive a new pairwise-like model for epidemics with Markovian transmission and an arbitrary recovery period. The new pairwise-like model allows one to formally prove that the message passing and edge-based compartmental models are equivalent in the case of Markovian transmission and arbitrary recovery processes. The edge-based and message passing models are conjectured to also be equivalent for arbitrary transmission processes; we show the first step of a full proof of this. The new pairwise-like model encompasses many existing well-known models that can be obtained by appropriate reductions. It is also amenable to a relatively straightforward numerical implementation. We test the theoretical results by comparing the numerical solutions of the various pairwise-like models to results based on explicit stochastic network simulations.
[ { "created": "Sat, 12 Nov 2016 18:13:32 GMT", "version": "v1" } ]
2016-11-15
[ [ "Sherborne", "N.", "" ], [ "Miller", "J. C.", "" ], [ "Blyuss", "K. B.", "" ], [ "Kiss", "I. Z.", "" ] ]
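The explicit stochastic network simulations used as a benchmark in the abstract above can be sketched as a discrete-time SIR process on a random graph, with Markovian (per-step Bernoulli) transmission but a fixed, hence non-Markovian, recovery period. Graph size and rates are illustrative:

```python
import random

def sir_network(adj, seed_node, p_trans, recovery_steps, rng):
    """Discrete-time SIR on a graph: transmission occurs with probability
    p_trans per infected-susceptible edge per step, while recovery happens
    after exactly recovery_steps steps (a fixed, non-exponential period)."""
    status = {n: "S" for n in adj}
    clock = {}                                  # time since infection
    status[seed_node], clock[seed_node] = "I", 0
    while any(s == "I" for s in status.values()):
        newly_infected = []
        for n in list(adj):
            if status[n] != "I":
                continue
            for nb in adj[n]:
                if status[nb] == "S" and rng.random() < p_trans:
                    newly_infected.append(nb)
            clock[n] += 1
            if clock[n] >= recovery_steps:
                status[n] = "R"                 # deterministic recovery
        for n in newly_infected:
            if status[n] == "S":                # node may be listed twice
                status[n], clock[n] = "I", 0
    return sum(1 for s in status.values() if s == "R")

rng = random.Random(7)
N = 300
adj = {i: set() for i in range(N)}
for i in range(N):                              # Erdos-Renyi-style graph
    for j in range(i + 1, N):
        if rng.random() < 0.02:
            adj[i].add(j)
            adj[j].add(i)
final_size = sir_network(adj, seed_node=0, p_trans=0.3, recovery_steps=3, rng=rng)
```

Averaging `final_size` over many seeds and graph realizations gives the simulation baseline against which mean-field constructions like the pairwise and edge-based compartmental models are tested.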
2001.06881
Maurizio De Pitt\`a
Maurizio De Pitt\`a
Neuron-Glial Interactions
43 pages, 2 figures, 1 table. Accepted for publication in the "Encyclopedia of Computational Neuroscience," D. Jaeger and R. Jung eds., Springer-Verlag New York, 2020 (2nd edition)
null
10.1007/978-1-4614-7320-6_100691-1
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Although lagging behind classical computational neuroscience, theoretical and computational approaches are beginning to emerge to characterize different aspects of neuron-glial interactions. This chapter aims to provide essential knowledge on neuron-glial interactions in the mammalian brain, leveraging computational studies that focus on the structure (anatomy) and function (physiology) of such interactions in the healthy brain. Although our understanding of the need for neuron-glial interactions in the brain is still in its infancy, being mostly based on predictions that await experimental validation, simple general modeling arguments borrowed from control theory are introduced to support the importance of including such interactions in traditional neuron-based modeling paradigms.
[ { "created": "Sun, 19 Jan 2020 18:30:07 GMT", "version": "v1" } ]
2021-03-10
[ [ "De Pittà", "Maurizio", "" ] ]
2008.07054
Kevin Maurin
K\'evin J. L. Maurin
An empirical guide for producing a dated phylogeny with treePL in a maximum likelihood framework
11 pages, 3 figures, supplementary material available at https://doi.org/10.5281/zenodo.3989030
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
treePL uses a penalised likelihood approach to produce a dated phylogeny in a maximum likelihood framework. Since its publication in 2012, few resources have been developed to explain how to use it properly. In this guide, I provide a step-by-step protocol for producing a dated phylogeny using treePL, based on my experience building a large dated phylogeny with it and conducting additional tests on a smaller phylogeny. I also provide the necessary data to reproduce one of the example phylogenies presented. I compare these treePL phylogenies to BEAST2-built counterparts. Even though I cannot explain precisely how treePL works, the evidence discussed in this guide suggests that the empirical protocol presented is reliable.
[ { "created": "Mon, 17 Aug 2020 01:42:19 GMT", "version": "v1" }, { "created": "Tue, 18 Aug 2020 08:25:52 GMT", "version": "v2" } ]
2020-08-19
[ [ "Maurin", "Kévin J. L.", "" ] ]
q-bio/0702050
John Rhodes
Elizabeth S. Allman, John A. Rhodes
Identifying evolutionary trees and substitution parameters for the general Markov model with invariable sites
null
null
null
null
q-bio.PE math.AG math.ST stat.TH
null
The general Markov plus invariable sites (GM+I) model of biological sequence evolution is a two-class model in which an unknown proportion of sites are not allowed to change, while the remainder undergo substitutions according to a Markov process on a tree. For statistical use it is important to know if the model is identifiable; can both the tree topology and the numerical parameters be determined from a joint distribution describing sequences only at the leaves of the tree? We establish that for generic parameters both the tree and all numerical parameter values can be recovered, up to clearly understood issues of `label swapping.' The method of analysis is algebraic, using phylogenetic invariants to study the variety defined by the model. Simple rational formulas, expressed in terms of determinantal ratios, are found for recovering numerical parameters describing the invariable sites.
[ { "created": "Fri, 23 Feb 2007 23:08:32 GMT", "version": "v1" } ]
2011-11-10
[ [ "Allman", "Elizabeth S.", "" ], [ "Rhodes", "John A.", "" ] ]
2212.13543
Sebastian Lobentanzer
Sebastian Lobentanzer, Patrick Aloy, Jan Baumbach, Balazs Bohar, Pornpimol Charoentong, Katharina Danhauser, Tunca Do\u{g}an, Johann Dreo, Ian Dunham, Adri\`a Fernandez-Torras, Benjamin M. Gyori, Michael Hartung, Charles Tapley Hoyt, Christoph Klein, Tamas Korcsmaros, Andreas Maier, Matthias Mann, David Ochoa, Elena Pareja-Lorente, Ferdinand Popp, Martin Preusse, Niklas Probul, Benno Schwikowski, B\"unyamin Sen, Maximilian T. Strauss, Denes Turei, Erva Ulusoy, Judith Andrea Heidrun Wodke, Julio Saez-Rodriguez
Democratising Knowledge Representation with BioCypher
34 pages, 6 figures; submitted to Nature Biotechnology
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Standardising the representation of biomedical knowledge among all researchers is an insurmountable task, hindering the effectiveness of many computational methods. To facilitate harmonisation and interoperability despite this fundamental challenge, we propose to standardise the framework of knowledge graph creation instead. We implement this standardisation in BioCypher, a FAIR (findable, accessible, interoperable, reusable) framework to transparently build biomedical knowledge graphs while preserving provenances of the source data. Mapping the knowledge onto biomedical ontologies helps to balance the needs for harmonisation, human and machine readability, and ease of use and accessibility to non-specialist researchers. We demonstrate the usefulness of this framework on a variety of use cases, from maintenance of task-specific knowledge stores, to interoperability between biomedical domains, to on-demand building of task-specific knowledge graphs for federated learning. BioCypher (https://biocypher.org) frees up valuable developer time; we encourage further development and usage by the community.
[ { "created": "Tue, 27 Dec 2022 16:29:52 GMT", "version": "v1" }, { "created": "Wed, 11 Jan 2023 12:10:09 GMT", "version": "v2" }, { "created": "Tue, 17 Jan 2023 17:08:25 GMT", "version": "v3" } ]
2023-01-18
[ [ "Lobentanzer", "Sebastian", "" ], [ "Aloy", "Patrick", "" ], [ "Baumbach", "Jan", "" ], [ "Bohar", "Balazs", "" ], [ "Charoentong", "Pornpimol", "" ], [ "Danhauser", "Katharina", "" ], [ "Doğan", "Tunca", "" ], [ "Dreo", "Johann", "" ], [ "Dunham", "Ian", "" ], [ "Fernandez-Torras", "Adrià", "" ], [ "Gyori", "Benjamin M.", "" ], [ "Hartung", "Michael", "" ], [ "Hoyt", "Charles Tapley", "" ], [ "Klein", "Christoph", "" ], [ "Korcsmaros", "Tamas", "" ], [ "Maier", "Andreas", "" ], [ "Mann", "Matthias", "" ], [ "Ochoa", "David", "" ], [ "Pareja-Lorente", "Elena", "" ], [ "Popp", "Ferdinand", "" ], [ "Preusse", "Martin", "" ], [ "Probul", "Niklas", "" ], [ "Schwikowski", "Benno", "" ], [ "Sen", "Bünyamin", "" ], [ "Strauss", "Maximilian T.", "" ], [ "Turei", "Denes", "" ], [ "Ulusoy", "Erva", "" ], [ "Wodke", "Judith Andrea Heidrun", "" ], [ "Saez-Rodriguez", "Julio", "" ] ]
1501.02847
Peter R. Wilton
Peter R. Wilton, Shai Carmi, Asger Hobolth
The SMC' is a highly accurate approximation to the ancestral recombination graph
Revised manuscript
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two sequentially Markov coalescent models (SMC and SMC') are available as tractable approximations to the ancestral recombination graph (ARG). We present a Markov process describing coalescence at two fixed points along a pair of sequences evolving under the SMC'. Using our Markov process, we derive a number of new quantities related to the pairwise SMC', thereby analytically quantifying for the first time the similarity between the SMC' and ARG. We use our process to show that the joint distribution of pairwise coalescence times at recombination sites under the SMC' is the same as it is marginally under the ARG, which demonstrates that the SMC' is, in a particular well-defined, intuitive sense, the most appropriate first-order sequentially Markov approximation to the ARG. Finally, we use these results to show that population size estimates under the pairwise SMC are asymptotically biased, while under the pairwise SMC' they are approximately asymptotically unbiased.
[ { "created": "Mon, 12 Jan 2015 23:09:09 GMT", "version": "v1" }, { "created": "Wed, 4 Mar 2015 22:05:40 GMT", "version": "v2" } ]
2015-03-06
[ [ "Wilton", "Peter R.", "" ], [ "Carmi", "Shai", "" ], [ "Hobolth", "Asger", "" ] ]
0812.0256
Steffen Waldherr
Steffen Waldherr and Frank Allg\"ower
Searching bifurcations in high-dimensional parameter space via a feedback loop breaking approach
25 pages, 2 tables, 4 figures
International Journal of Systems Science 40:769-782, 2009
10.1080/00207720902957269
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bifurcations leading to complex dynamical behaviour of non-linear systems are often encountered when the characteristics of feedback circuits in the system are varied. In systems with many unknown or varying parameters, it is an interesting, but difficult problem to find parameter values for which specific bifurcations occur. In this paper, we develop a loop breaking approach to evaluate the influence of parameter values on feedback circuit characteristics. This approach allows a theoretical classification of feedback circuit characteristics related to possible bifurcations in the system. Based on the theoretical results, a numerical algorithm for bifurcation search in a possibly high-dimensional parameter space is developed. The application of the proposed algorithm is illustrated by searching for a Hopf bifurcation in a model of the mitogen activated protein kinase (MAPK) cascade, which is a classical example for biochemical signal transduction.
[ { "created": "Mon, 1 Dec 2008 10:46:58 GMT", "version": "v1" }, { "created": "Wed, 22 Sep 2010 13:40:54 GMT", "version": "v2" } ]
2010-09-23
[ [ "Waldherr", "Steffen", "" ], [ "Allgöwer", "Frank", "" ] ]
1711.02791
Grzegorz A Rempala
Hye-Won Kang, Wasiur R. KhudaBukhsh, Heinz Koeppl and Grzegorz A. Rempa{\l}a
Quasi-steady-state approximations derived from the stochastic model of enzyme kinetics
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we derive several quasi-steady-state approximations (QSSAs) to the stochastic reaction network describing Michaelis-Menten enzyme kinetics. We show how different assumptions about chemical species abundance and reaction rates lead to the standard QSSA (sQSSA), the total QSSA (tQSSA), and the reverse QSSA (rQSSA) approximations. These three QSSAs have been widely studied in the literature in deterministic ordinary differential equation (ODE) settings, and several sets of conditions for their validity have been proposed. By using multiscaling techniques introduced in Kang and Kurtz (2013) and Ball et al. (2006), we show that these conditions for deterministic QSSAs largely agree with the ones for QSSAs in the large-volume limits of the underlying stochastic enzyme kinetic network.
[ { "created": "Wed, 8 Nov 2017 01:24:13 GMT", "version": "v1" } ]
2017-11-09
[ [ "Kang", "Hye-Won", "" ], [ "KhudaBukhsh", "Wasiur R.", "" ], [ "Koeppl", "Heinz", "" ], [ "Rempała", "Grzegorz A.", "" ] ]
1603.06634
Fabio Zanini
Fabio Zanini, Vadim Puller, Johanna Brodin, Jan Albert, Richard Neher
In-vivo mutation rates and fitness landscape of HIV-1
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
Mutation rates and fitness costs of deleterious mutations are difficult to measure in vivo but essential for a quantitative understanding of evolution. Using whole genome deep sequencing data from longitudinal samples during untreated HIV-1 infection, we estimated mutation rates and fitness costs in HIV-1 from the temporal dynamics of genetic variation. At approximately neutral sites, mutations accumulate with a rate of 1.2 x 10^-5 per site per day, in agreement with the rate measured in cell cultures. The rate from G to A is largest, followed by the other transitions C to T, T to C, and A to G, while transversions are more rare. At non-neutral sites, most mutations reduce virus replication; using a model of mutation selection balance, we estimated the fitness cost of mutations at every site in the HIV-1 genome. About half of all nonsynonymous mutations have large fitness costs (greater than 10\%), while most synonymous mutations have costs below 1\%. The cost of synonymous mutations is especially low in most of gag and pol, while much higher costs are observed in important RNA structures and regulatory regions. The intrapatient fitness cost estimates are consistent across multiple patients, suggesting that the deleterious part of the fitness landscape is universal and explains a large fraction of global HIV-1 group M diversity.
[ { "created": "Mon, 21 Mar 2016 22:08:43 GMT", "version": "v1" }, { "created": "Fri, 1 Jul 2016 19:25:32 GMT", "version": "v2" } ]
2016-07-04
[ [ "Zanini", "Fabio", "" ], [ "Puller", "Vadim", "" ], [ "Brodin", "Johanna", "" ], [ "Albert", "Jan", "" ], [ "Neher", "Richard", "" ] ]
1311.6857
Chuan-Chao Wang
Chuan-Chao Wang, Yunzhi Huang, Shao-Qing Wen, Chun Chen, Li Jin, Hui Li
Agriculture driving male expansion in Neolithic Time
9 pages, 2 figures
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of agriculture is suggested to have driven extensive human population growth. However, genetic evidence from maternal mitochondrial genomes suggests that major population expansions began before the emergence of agriculture. Therefore, the role that agriculture played in initial population expansions remains controversial. Here, we analyzed a set of globally distributed whole Y chromosome and mitochondrial genomes of 526 male samples from the 1000 Genomes Project. We found that most major paternal lineage expansions coalesced in the Neolithic. The estimated effective population sizes through time revealed strong evidence for a 10- to 100-fold increase in the population growth of males with the advent of agriculture. This sex-biased Neolithic expansion might result from a reduction in hunting-related mortality of males.
[ { "created": "Wed, 27 Nov 2013 02:04:29 GMT", "version": "v1" } ]
2013-11-28
[ [ "Wang", "Chuan-Chao", "" ], [ "Huang", "Yunzhi", "" ], [ "Wen", "Shao-Qing", "" ], [ "Chen", "Chun", "" ], [ "Jin", "Li", "" ], [ "Li", "Hui", "" ] ]
1412.7187
Rajita Menon
Rajita Menon and Kirill S. Korolev
Public good diffusion limits microbial mutualism
12 pages, 4 figures
Phys. Rev. Lett. 114, (2015) 168102
10.1103/PhysRevLett.114.168102
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard game theory cannot describe microbial interactions mediated by diffusible molecules. Nevertheless, we show that one can still model microbial dynamics using game theory with parameters renormalized by diffusion. Contrary to expectations, greater sharing of metabolites reduces the strength of cooperation and leads to species extinction via a nonequilibrium phase transition. We report analytic results for the critical diffusivity and the length scale of species intermixing. We also show that fitness nonlinearities suppress mutualism and favor the species producing slower nutrients.
[ { "created": "Mon, 22 Dec 2014 22:22:33 GMT", "version": "v1" }, { "created": "Thu, 5 Mar 2015 00:35:56 GMT", "version": "v2" }, { "created": "Mon, 4 May 2015 22:07:28 GMT", "version": "v3" }, { "created": "Sat, 23 May 2015 23:57:47 GMT", "version": "v4" } ]
2015-05-26
[ [ "Menon", "Rajita", "" ], [ "Korolev", "Kirill S.", "" ] ]
2311.00417
David A. Kessler
David A. Kessler and Herbert Levine
Multiple possible patterns can emerge from virus-immune coevolution
Revised version
null
null
null
q-bio.PE cond-mat.stat-mech nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The adaptive immune system engages in an arms race with evolving viruses, trying to generate new responses to viral strains that continually move away from the set of variants that have already elicited a functional immune response. In previous work, it has been argued that this dynamical process can lead to a propagating pulse of ever-changing viral population and concomitant immune response. Here, we introduce a new stochastic model of viral-host coevolution and a deterministic approximation thereof with which to study these dynamics. We show that there is indeed a possible pulse solution, but for a large host population size, the pulse becomes unstable to the generation of new infections in its wake, leading to an extended endemic infection pattern. This time-dependent endemic pattern eventually reaches a fluctuating steady-state when the infected population size reaches a large enough fraction of the total number of hosts.
[ { "created": "Wed, 1 Nov 2023 10:19:41 GMT", "version": "v1" }, { "created": "Sat, 1 Jun 2024 20:03:27 GMT", "version": "v2" } ]
2024-06-04
[ [ "Kessler", "David A.", "" ], [ "Levine", "Herbert", "" ] ]
2304.09050
Vernon Lawhern
Stephen M. Gordon, Jonathan R. McDaniel, Kevin W. King, Vernon J. Lawhern, Jonathan Touryan
Decoding Neural Activity to Assess Individual Latent State in Ecologically Valid Contexts
null
Journal of Neural Engineering, vol. 20(4), 2023
10.1088/1741-2552/acee20
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There exist very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear as to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make an accurate inference about the latent state, associated cognitive process, or proximal behavior of the individual. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks. We argue that domain generalization methods from the brain-computer interface community have the potential to address this challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks. Using the pretrained models, we derive estimates of the underlying latent state and associated patterns of neural activity. Importantly, as the patterns of neural activity change along the axis defined by the original training data, we find changes in behavior and task performance consistent with the observations from the original, laboratory paradigms. We argue that these results lend ecological validity to those experimental designs and provide a methodology for understanding the relationship between observed neural activity and behavior during complex tasks.
[ { "created": "Tue, 18 Apr 2023 15:15:00 GMT", "version": "v1" } ]
2023-10-13
[ [ "Gordon", "Stephen M.", "" ], [ "McDaniel", "Jonathan R.", "" ], [ "King", "Kevin W.", "" ], [ "Lawhern", "Vernon J.", "" ], [ "Touryan", "Jonathan", "" ] ]
1408.5128
Stefano Bo
Stefano Bo, Marco Del Giudice and Antonio Celani
Thermodynamic limits to information harvesting by sensory systems
Revised version: 18 pages, 5 figures
null
10.1088/1742-5468/2015/01/P01014
null
q-bio.QM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In view of the relation between information and thermodynamics, we investigate how much information about an external protocol can be stored in the memory of a stochastic measurement device given an energy budget. We consider a layered device with a memory component storing information about the external environment by monitoring the history of a sensory part coupled to the environment. We derive an integral fluctuation theorem for the entropy production and a measure of the information accumulated in the memory device. Its most immediate consequence is that the amount of information is bounded by the average thermodynamic entropy produced by the process. At equilibrium no entropy is produced, and therefore the memory device does not add any information about the environment to the sensory component. Consequently, if the system operates at equilibrium, the addition of a memory component is superfluous. Such a device can be used to model the sensing process of a cell measuring the external concentration of a chemical compound and encoding the measurement in the amount of phosphorylated cytoplasmic proteins.
[ { "created": "Thu, 21 Aug 2014 16:35:22 GMT", "version": "v1" }, { "created": "Mon, 27 Jul 2015 12:21:59 GMT", "version": "v2" } ]
2015-07-28
[ [ "Bo", "Stefano", "" ], [ "Del Giudice", "Marco", "" ], [ "Celani", "Antonio", "" ] ]
In view of the relation between information and thermodynamics, we investigate how much information about an external protocol can be stored in the memory of a stochastic measurement device given an energy budget. We consider a layered device with a memory component storing information about the external environment by monitoring the history of a sensory part coupled to the environment. We derive an integral fluctuation theorem for the entropy production and a measure of the information accumulated in the memory device. Its most immediate consequence is that the amount of information is bounded by the average thermodynamic entropy produced by the process. At equilibrium no entropy is produced and therefore the memory device does not add any information about the environment to the sensory component. Consequently, if the system operates at equilibrium the addition of a memory component is superfluous. Such a device can be used to model the sensing process of a cell measuring the external concentration of a chemical compound and encoding the measurement in the amount of phosphorylated cytoplasmic proteins.
q-bio/0510044
Danuta Makowiec
Danuta Makowiec, Rafal Galaska, Aleksandra Dudkowska, Andrzej Rynkiewicz, Marcin Zwierz
Long-range dependencies in heart rate signals- revisited
24 pages
null
10.1016/j.physa.2006.02.038
null
q-bio.TO
null
The RR series extracted from the human electrocardiogram (ECG) signal is considered a fractal stochastic process. The manifestation of long-range dependencies is the presence of power laws in scale-dependent process characteristics. The exponents of these laws: $\beta$, describing power spectrum decay; $\alpha$, responsible for the decay of detrended fluctuations; or $H$, related to the so-called roughness of a signal, are known to differentiate the hearts of healthy people from hearts with congestive heart failure. There is a strong expectation that a resolution spectrum of exponents, so-called local exponents, in place of global exponents allows one to study differences between hearts in detail. Arguments are given that local exponents obtained in multifractal analysis by two methods: wavelet transform modulus maxima (WTMM) and multifractal detrended fluctuation analysis (MDFA), allow one to recognize the following four stages of the heart: healthy and young, healthy and advanced in years, subjects with left ventricular systolic dysfunction (NYHA I--III class), and those characterized by severe congestive heart failure (NYHA III--IV class).
[ { "created": "Mon, 24 Oct 2005 07:00:16 GMT", "version": "v1" } ]
2009-11-11
[ [ "Makowiec", "Danuta", "" ], [ "Galaska", "Rafal", "" ], [ "Dudkowska", "Aleksandra", "" ], [ "Rynkiewicz", "Andrzej", "" ], [ "Zwierz", "Marcin", "" ] ]
The RR series extracted from the human electrocardiogram (ECG) signal is considered a fractal stochastic process. The manifestation of long-range dependencies is the presence of power laws in scale-dependent process characteristics. The exponents of these laws: $\beta$, describing power spectrum decay; $\alpha$, responsible for the decay of detrended fluctuations; or $H$, related to the so-called roughness of a signal, are known to differentiate the hearts of healthy people from hearts with congestive heart failure. There is a strong expectation that a resolution spectrum of exponents, so-called local exponents, in place of global exponents allows one to study differences between hearts in detail. Arguments are given that local exponents obtained in multifractal analysis by two methods: wavelet transform modulus maxima (WTMM) and multifractal detrended fluctuation analysis (MDFA), allow one to recognize the following four stages of the heart: healthy and young, healthy and advanced in years, subjects with left ventricular systolic dysfunction (NYHA I--III class), and those characterized by severe congestive heart failure (NYHA III--IV class).
2104.09135
Mar\'ia Vallet-Regi
Cicuendez M, Doadrio JC, Hernandez A, Portoles MT, Izquierdo-Barba I, Vallet-Regi M
Multifunctional pH sensitive 3D scaffolds for treatment and prevention of bone infection
31 pages, 8 figures
Acta Biomaterialia. 65, 450-461 (2018)
10.1016/j.actbio.2017.11.009
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Multifunctional-therapeutic 3D scaffolds have been prepared. These biomaterials are able to destroy the S. aureus bacteria biofilm and to allow bone regeneration at the same time. The present study is focused on the design of pH sensitive 3D hierarchical meso-macroporous scaffolds based on MGHA nanocomposite formed by a mesostructured glassy network with embedded hydroxyapatite nanoparticles, whose mesopores have been loaded with levofloxacin as antibacterial agent. These 3D platforms exhibit controlled and pH-dependent levofloxacin release, sustained over time at physiological pH (7.4) and notably increased at infection pH (6.7 and 5.5), which is due to the different interaction rate between diverse levofloxacin species and the silica matrix. These 3D systems are able to inhibit the S. aureus growth and to destroy the bacterial biofilm without cytotoxic effects on human osteoblasts and allowing an adequate colonization and differentiation of preosteoblastic cells on their surface. These findings suggest promising applications of these hierarchical MGHA nanocomposite scaffolds for the treatment and prevention of bone infection.
[ { "created": "Mon, 19 Apr 2021 08:33:06 GMT", "version": "v1" } ]
2021-04-20
[ [ "M", "Cicuendez", "" ], [ "JC", "Doadrio", "" ], [ "A", "Hernandez", "" ], [ "MT", "Portoles", "" ], [ "I", "Izquierdo-Barba", "" ], [ "M", "Vallet-Regi", "" ] ]
Multifunctional-therapeutic 3D scaffolds have been prepared. These biomaterials are able to destroy the S. aureus bacteria biofilm and to allow bone regeneration at the same time. The present study is focused on the design of pH sensitive 3D hierarchical meso-macroporous scaffolds based on MGHA nanocomposite formed by a mesostructured glassy network with embedded hydroxyapatite nanoparticles, whose mesopores have been loaded with levofloxacin as antibacterial agent. These 3D platforms exhibit controlled and pH-dependent levofloxacin release, sustained over time at physiological pH (7.4) and notably increased at infection pH (6.7 and 5.5), which is due to the different interaction rate between diverse levofloxacin species and the silica matrix. These 3D systems are able to inhibit the S. aureus growth and to destroy the bacterial biofilm without cytotoxic effects on human osteoblasts and allowing an adequate colonization and differentiation of preosteoblastic cells on their surface. These findings suggest promising applications of these hierarchical MGHA nanocomposite scaffolds for the treatment and prevention of bone infection.
1711.04377
James Hope Mr
J. Hope, F. Vanholsbeeck, A. McDaid
A model of electrical impedance tomography on peripheral nerves for a neural-prosthetic control interface
13 pages, 9 figures
Hope, J., Vanholsbeeck, F., McDaid, A., A model of electrical impedance tomography implemented in nerve-cuff for neural-prosthetics control. Physiological Measurement, 2018
10.1088/1361-6579/aab73a
null
q-bio.NC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: A model is presented to evaluate the viability of using electrical impedance tomography (EIT) with a nerve cuff to record neural activity in peripheral nerves. Approach: Established modelling approaches in neural-EIT are expanded on to be used, for the first time, on myelinated fibres, which are abundant in mammalian peripheral nerves and transmit motor commands. Main results: Fibre impedance models indicate activity in unmyelinated fibres can be screened out using operating frequencies above 100 Hz. At 1 kHz and 10 mm electrode spacing, the impedance magnitude of inactive intra-fascicle tissue and its fractional change during neural activity are estimated to be 1,142 {\Omega}.cm and -8.8x10^-4, respectively, with a transverse current, and 328 {\Omega}.cm and -0.30, respectively, with a longitudinal current. We show that a novel EIT drive and measurement electrode pattern which utilises longitudinal current and longitudinal differential boundary voltage measurements could distinguish activity in different fascicles of a three-fascicle mammalian nerve using pseudo-experimental data synthesised to replicate real operating conditions. Significance: The results of this study provide an estimate of the transient change in impedance of intra-fascicle tissue during neural activity in mammalian nerve, and present a viable EIT electrode pattern, both of which are critical steps towards implementing EIT in a nerve cuff for neural-prosthetic interfaces.
[ { "created": "Sun, 12 Nov 2017 22:52:48 GMT", "version": "v1" } ]
2018-05-09
[ [ "Hope", "J.", "" ], [ "Vanholsbeeck", "F.", "" ], [ "McDaid", "A.", "" ] ]
Objective: A model is presented to evaluate the viability of using electrical impedance tomography (EIT) with a nerve cuff to record neural activity in peripheral nerves. Approach: Established modelling approaches in neural-EIT are expanded on to be used, for the first time, on myelinated fibres, which are abundant in mammalian peripheral nerves and transmit motor commands. Main results: Fibre impedance models indicate activity in unmyelinated fibres can be screened out using operating frequencies above 100 Hz. At 1 kHz and 10 mm electrode spacing, the impedance magnitude of inactive intra-fascicle tissue and its fractional change during neural activity are estimated to be 1,142 {\Omega}.cm and -8.8x10^-4, respectively, with a transverse current, and 328 {\Omega}.cm and -0.30, respectively, with a longitudinal current. We show that a novel EIT drive and measurement electrode pattern which utilises longitudinal current and longitudinal differential boundary voltage measurements could distinguish activity in different fascicles of a three-fascicle mammalian nerve using pseudo-experimental data synthesised to replicate real operating conditions. Significance: The results of this study provide an estimate of the transient change in impedance of intra-fascicle tissue during neural activity in mammalian nerve, and present a viable EIT electrode pattern, both of which are critical steps towards implementing EIT in a nerve cuff for neural-prosthetic interfaces.
2304.04673
Mengjin Dong
Mengjin Dong, Long Xie, Sandhitsu R. Das, Jiancong Wang, Laura E.M. Wisse, Robin deFlores, David A. Wolk, Paul A. Yushkevich (for the Alzheimer's Disease Neuroimaging Initiative)
Regional Deep Atrophy: a Self-Supervised Learning Method to Automatically Identify Regions Associated With Alzheimer's Disease Progression From Longitudinal MRI
Submitted to NeuroImage for review
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Longitudinal assessment of brain atrophy, particularly in the hippocampus, is a well-studied biomarker for neurodegenerative diseases, such as Alzheimer's disease (AD). In clinical trials, estimation of brain atrophy progression rates can be applied to track therapeutic efficacy of disease modifying treatments. However, most state-of-the-art measurements calculate changes directly by segmentation and/or deformable registration of MRI images, and may misreport head motion or MRI artifacts as neurodegeneration, impacting their accuracy. In our previous study, we developed a deep learning method DeepAtrophy that uses a convolutional neural network to quantify differences between longitudinal MRI scan pairs that are associated with time. DeepAtrophy has high accuracy in inferring temporal information from longitudinal MRI scans, such as temporal order or relative inter-scan interval. DeepAtrophy also provides an overall atrophy score that was shown to perform well as a potential biomarker of disease progression and treatment efficacy. However, DeepAtrophy is not interpretable, and it is unclear what changes in the MRI contribute to progression measurements. In this paper, we propose Regional Deep Atrophy (RDA), which combines the temporal inference approach from DeepAtrophy with a deformable registration neural network and an attention mechanism that highlights regions in the MRI image where longitudinal changes are contributing to temporal inference. RDA has similar prediction accuracy as DeepAtrophy, but its additional interpretability makes it more acceptable for use in clinical settings, and may lead to more sensitive biomarkers for disease monitoring in clinical trials of early AD.
[ { "created": "Mon, 10 Apr 2023 15:50:19 GMT", "version": "v1" } ]
2023-04-11
[ [ "Dong", "Mengjin", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ], [ "Xie", "Long", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ], [ "Das", "Sandhitsu R.", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ], [ "Wang", "Jiancong", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ], [ "Wisse", "Laura E. M.", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ], [ "deFlores", "Robin", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ], [ "Wolk", "David A.", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ], [ "Yushkevich", "Paul A.", "", "for the Alzheimer's\n Disease Neuroimaging Initiative" ] ]
Longitudinal assessment of brain atrophy, particularly in the hippocampus, is a well-studied biomarker for neurodegenerative diseases, such as Alzheimer's disease (AD). In clinical trials, estimation of brain atrophy progression rates can be applied to track therapeutic efficacy of disease modifying treatments. However, most state-of-the-art measurements calculate changes directly by segmentation and/or deformable registration of MRI images, and may misreport head motion or MRI artifacts as neurodegeneration, impacting their accuracy. In our previous study, we developed a deep learning method DeepAtrophy that uses a convolutional neural network to quantify differences between longitudinal MRI scan pairs that are associated with time. DeepAtrophy has high accuracy in inferring temporal information from longitudinal MRI scans, such as temporal order or relative inter-scan interval. DeepAtrophy also provides an overall atrophy score that was shown to perform well as a potential biomarker of disease progression and treatment efficacy. However, DeepAtrophy is not interpretable, and it is unclear what changes in the MRI contribute to progression measurements. In this paper, we propose Regional Deep Atrophy (RDA), which combines the temporal inference approach from DeepAtrophy with a deformable registration neural network and an attention mechanism that highlights regions in the MRI image where longitudinal changes are contributing to temporal inference. RDA has similar prediction accuracy as DeepAtrophy, but its additional interpretability makes it more acceptable for use in clinical settings, and may lead to more sensitive biomarkers for disease monitoring in clinical trials of early AD.
2307.10343
Tony Tu
Tony Tu, Gautham Krishna, Amirali Aghazadeh
ProtiGeno: a prokaryotic short gene finder using protein language models
Accepted at the 2023 ICML Workshop on Computational Biology
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prokaryotic gene prediction plays an important role in understanding the biology of organisms and their function with applications in medicine and biotechnology. Although the current gene finders are highly sensitive in finding long genes, their sensitivity decreases noticeably in finding shorter genes (<180 nts). The culprit is insufficient annotated gene data to identify distinguishing features in short open reading frames (ORFs). We develop a deep learning-based method called ProtiGeno, specifically targeting short prokaryotic genes using a protein language model trained on millions of evolved proteins. In systematic large-scale experiments on 4,288 prokaryotic genomes, we demonstrate that ProtiGeno predicts short coding and noncoding genes with higher accuracy and recall than the current state-of-the-art gene finders. We discuss the predictive features of ProtiGeno and possible limitations by visualizing the three-dimensional structure of the predicted short genes. Data, codes, and models are available at https://github.com/tonytu16/protigeno.
[ { "created": "Wed, 19 Jul 2023 16:46:42 GMT", "version": "v1" } ]
2023-07-21
[ [ "Tu", "Tony", "" ], [ "Krishna", "Gautham", "" ], [ "Aghazadeh", "Amirali", "" ] ]
Prokaryotic gene prediction plays an important role in understanding the biology of organisms and their function with applications in medicine and biotechnology. Although the current gene finders are highly sensitive in finding long genes, their sensitivity decreases noticeably in finding shorter genes (<180 nts). The culprit is insufficient annotated gene data to identify distinguishing features in short open reading frames (ORFs). We develop a deep learning-based method called ProtiGeno, specifically targeting short prokaryotic genes using a protein language model trained on millions of evolved proteins. In systematic large-scale experiments on 4,288 prokaryotic genomes, we demonstrate that ProtiGeno predicts short coding and noncoding genes with higher accuracy and recall than the current state-of-the-art gene finders. We discuss the predictive features of ProtiGeno and possible limitations by visualizing the three-dimensional structure of the predicted short genes. Data, codes, and models are available at https://github.com/tonytu16/protigeno.
0904.1900
Merek Siu
Merek Siu, Hari Shroff, Jake Siegel, Ann McEvoy, David Sivak, Ann Maris, Andrew Spakowitz, Jan Liphardt
Mechanical conversion of low-affinity Integration Host Factor binding sites into high-affinity sites
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although DNA is often bent in vivo, it is unclear how DNA-bending forces modulate DNA-protein binding affinity. Here, we report how a range of DNA-bending forces modulates the binding of the Integration Host Factor (IHF) protein to various DNAs. Using solution fluorimetry and electrophoretic mobility shift assays, we measured the affinity of IHF for DNAs with different bending forces and sequence mutations. Bending force was adjusted by varying the fraction of double-stranded DNA in a circular substrate, or by changing the overall size of the circle (1). DNA constructs contained a pair of Förster Resonance Energy Transfer dyes that served as probes for affinity assays, and read out bending forces measured by optical force sensors (2). Small bending forces significantly increased binding affinity; this effect saturated beyond ~3 pN. Surprisingly, when DNA sequences that bound IHF only weakly were mechanically bent by circularization, they bound IHF more tightly than the linear "high-affinity" binding sequence. These findings demonstrate that small bending forces can greatly augment binding at sites that deviate from a protein's consensus binding sequence. Since cellular DNA is subject to mechanical deformation and condensation, affinities of architectural proteins determined in vitro using short linear DNAs may not reflect in vivo affinities.
[ { "created": "Mon, 13 Apr 2009 02:38:33 GMT", "version": "v1" } ]
2009-04-14
[ [ "Siu", "Merek", "" ], [ "Shroff", "Hari", "" ], [ "Siegel", "Jake", "" ], [ "McEvoy", "Ann", "" ], [ "Sivak", "David", "" ], [ "Maris", "Ann", "" ], [ "Spakowitz", "Andrew", "" ], [ "Liphardt", "Jan", "" ] ]
Although DNA is often bent in vivo, it is unclear how DNA-bending forces modulate DNA-protein binding affinity. Here, we report how a range of DNA-bending forces modulates the binding of the Integration Host Factor (IHF) protein to various DNAs. Using solution fluorimetry and electrophoretic mobility shift assays, we measured the affinity of IHF for DNAs with different bending forces and sequence mutations. Bending force was adjusted by varying the fraction of double-stranded DNA in a circular substrate, or by changing the overall size of the circle (1). DNA constructs contained a pair of Förster Resonance Energy Transfer dyes that served as probes for affinity assays, and read out bending forces measured by optical force sensors (2). Small bending forces significantly increased binding affinity; this effect saturated beyond ~3 pN. Surprisingly, when DNA sequences that bound IHF only weakly were mechanically bent by circularization, they bound IHF more tightly than the linear "high-affinity" binding sequence. These findings demonstrate that small bending forces can greatly augment binding at sites that deviate from a protein's consensus binding sequence. Since cellular DNA is subject to mechanical deformation and condensation, affinities of architectural proteins determined in vitro using short linear DNAs may not reflect in vivo affinities.
q-bio/0509001
Eric J. Deeds
Eric J. Deeds, Orr Ashenberg and Eugene I. Shakhnovich
A simple physical model for scaling in protein-protein interaction networks
50 pages, 17 figures
null
10.1073/pnas.0509715102
null
q-bio.MN q-bio.BM
null
It has recently been demonstrated that many biological networks exhibit a scale-free topology where the probability of observing a node with a certain number of edges (k) follows a power law: i.e. p(k) ~ k^-g. This observation has been reproduced by evolutionary models. Here we consider the network of protein-protein interactions and demonstrate that two published independent measurements of these interactions produce graphs that are only weakly correlated with one another despite their strikingly similar topology. We then propose a physical model based on the fundamental principle that (de)solvation is a major physical factor in protein-protein interactions. This model reproduces not only the scale-free nature of such graphs but also a number of higher-order correlations in these networks. A key support of the model is provided by the discovery of a significant correlation between number of interactions made by a protein and the fraction of hydrophobic residues on its surface. The model presented in this paper represents the first physical model for experimentally determined protein-protein interactions that comprehensively reproduces the topological features of interaction networks. These results have profound implications for understanding not only protein-protein interactions but also other types of scale-free networks.
[ { "created": "Wed, 31 Aug 2005 22:05:48 GMT", "version": "v1" } ]
2009-11-11
[ [ "Deeds", "Eric J.", "" ], [ "Ashenberg", "Orr", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
It has recently been demonstrated that many biological networks exhibit a scale-free topology where the probability of observing a node with a certain number of edges (k) follows a power law: i.e. p(k) ~ k^-g. This observation has been reproduced by evolutionary models. Here we consider the network of protein-protein interactions and demonstrate that two published independent measurements of these interactions produce graphs that are only weakly correlated with one another despite their strikingly similar topology. We then propose a physical model based on the fundamental principle that (de)solvation is a major physical factor in protein-protein interactions. This model reproduces not only the scale-free nature of such graphs but also a number of higher-order correlations in these networks. A key support of the model is provided by the discovery of a significant correlation between number of interactions made by a protein and the fraction of hydrophobic residues on its surface. The model presented in this paper represents the first physical model for experimentally determined protein-protein interactions that comprehensively reproduces the topological features of interaction networks. These results have profound implications for understanding not only protein-protein interactions but also other types of scale-free networks.
1309.4761
Benjamin Greenbaum
Benjamin Greenbaum, Pradeep Kumar, and Albert Libchaber
Amino Acid Distributions and the Effect of Optimal Growth Temperature
31 pages, 7 Figures, 2 Tables
null
null
null
q-bio.GN q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform an exhaustive analysis of genome statistics for organisms, particularly extremophiles, growing in a wide range of physicochemical conditions. Specifically, we demonstrate how the correlation between the frequency of amino acids and their molecular weight, preserved on average, typically decreases as optimal growth temperature increases. We show how the relation between codon degeneracy and amino acid mass is enforced across these organisms. We assess the occurrence of contiguous amino acids, finding several significant short words, often containing cysteine, histidine or proline. Typically, the significance of these words is independent of growth temperature. In a novel approach, first-passage distributions are used to capture correlations between discontiguous residues. We find a nearly universal exponential background that we relate to properties of the aforementioned individual amino acid frequencies. We find this approach reliably extracts correlations that depend on growth temperature, some of which have not been previously characterized.
[ { "created": "Wed, 18 Sep 2013 19:28:35 GMT", "version": "v1" } ]
2013-09-19
[ [ "Greenbaum", "Benjamin", "" ], [ "Kumar", "Pradeep", "" ], [ "Libchaber", "Albert", "" ] ]
We perform an exhaustive analysis of genome statistics for organisms, particularly extremophiles, growing in a wide range of physicochemical conditions. Specifically, we demonstrate how the correlation between the frequency of amino acids and their molecular weight, preserved on average, typically decreases as optimal growth temperature increases. We show how the relation between codon degeneracy and amino acid mass is enforced across these organisms. We assess the occurrence of contiguous amino acids, finding several significant short words, often containing cysteine, histidine or proline. Typically, the significance of these words is independent of growth temperature. In a novel approach, first-passage distributions are used to capture correlations between discontiguous residues. We find a nearly universal exponential background that we relate to properties of the aforementioned individual amino acid frequencies. We find this approach reliably extracts correlations that depend on growth temperature, some of which have not been previously characterized.
1907.06954
Cecilia Clementi
Eugen Hruska, Vivekanandan Balasubramanian, Hyungro Lee, Shantenu Jha, Cecilia Clementi
Extensible and Scalable Adaptive Sampling on Supercomputers
17 pages, 9 figures
null
null
null
q-bio.QM physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The accurate sampling of protein dynamics is an ongoing challenge despite the utilization of High-Performance Computing (HPC) systems. Utilizing only "brute force" MD simulations requires an unacceptably long time to solution. Adaptive sampling methods allow a more effective sampling of protein dynamics than standard MD simulations. Depending on the restarting strategy, the speed-up can be more than one order of magnitude. One challenge limiting the utilization of adaptive sampling by domain experts is the relatively high complexity of efficiently running adaptive sampling on HPC systems. We discuss how the ExTASY framework can set up new adaptive sampling strategies, and reliably execute resulting workflows at scale on HPC platforms. Here the folding dynamics of four proteins are predicted with no a priori information.
[ { "created": "Tue, 16 Jul 2019 12:08:51 GMT", "version": "v1" }, { "created": "Thu, 24 Sep 2020 13:55:22 GMT", "version": "v2" } ]
2020-09-25
[ [ "Hruska", "Eugen", "" ], [ "Balasubramanian", "Vivekanandan", "" ], [ "Lee", "Hyungro", "" ], [ "Jha", "Shantenu", "" ], [ "Clementi", "Cecilia", "" ] ]
The accurate sampling of protein dynamics is an ongoing challenge despite the utilization of High-Performance Computing (HPC) systems. Utilizing only "brute force" MD simulations requires an unacceptably long time to solution. Adaptive sampling methods allow a more effective sampling of protein dynamics than standard MD simulations. Depending on the restarting strategy, the speed-up can be more than one order of magnitude. One challenge limiting the utilization of adaptive sampling by domain experts is the relatively high complexity of efficiently running adaptive sampling on HPC systems. We discuss how the ExTASY framework can set up new adaptive sampling strategies, and reliably execute resulting workflows at scale on HPC platforms. Here the folding dynamics of four proteins are predicted with no a priori information.
1803.09364
Katherine Meyer
Katherine Meyer
Extinction debt repayment via timely habitat restoration
14 pages, 3 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Habitat destruction threatens the viability of many populations, but its full consequences can take considerable time to unfold. Much of the discourse surrounding extinction debts--the number of species that persist transiently following habitat loss, despite being headed for extinction--frames ultimate population crashes as the means of settling the debt. However, slow population decline also opens an opportunity to repay the debt by restoring habitat. The timing necessary for such habitat restoration to rescue a population from extinction has not been well studied. Here we determine habitat restoration deadlines for a spatially implicit Levins/Tilman population model modified by an Allee effect. We find that conditions that hinder detection of an extinction debt also provide forgiving restoration timeframes. Our results highlight the importance of transient dynamics in restoration and suggest the beginnings of an analytic theory to understand outcomes of temporary press perturbations in a broad class of ecological systems.
[ { "created": "Sun, 25 Mar 2018 23:14:38 GMT", "version": "v1" } ]
2018-03-28
[ [ "Meyer", "Katherine", "" ] ]
Habitat destruction threatens the viability of many populations, but its full consequences can take considerable time to unfold. Much of the discourse surrounding extinction debts--the number of species that persist transiently following habitat loss, despite being headed for extinction--frames ultimate population crashes as the means of settling the debt. However, slow population decline also opens an opportunity to repay the debt by restoring habitat. The timing necessary for such habitat restoration to rescue a population from extinction has not been well studied. Here we determine habitat restoration deadlines for a spatially implicit Levins/Tilman population model modified by an Allee effect. We find that conditions that hinder detection of an extinction debt also provide forgiving restoration timeframes. Our results highlight the importance of transient dynamics in restoration and suggest the beginnings of an analytic theory to understand outcomes of temporary press perturbations in a broad class of ecological systems.
2404.06481
Alexis Molina
Ra\'ul Mi\~n\'an, Javier Gallardo, \'Alvaro Ciudad, Alexis Molina
GeoDirDock: Guiding Docking Along Geodesic Paths
Generative and Experimental Perspectives for Biomolecular Design Workshop at ICLR 2024
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
This work introduces GeoDirDock (GDD), a novel approach to molecular docking that enhances the accuracy and physical plausibility of ligand docking predictions. GDD guides the denoising process of a diffusion model along geodesic paths within multiple spaces representing translational, rotational, and torsional degrees of freedom. Our method leverages expert knowledge to direct the generative modeling process, specifically targeting desired protein-ligand interaction regions. We demonstrate that GDD significantly outperforms existing blind docking methods in terms of RMSD accuracy and physicochemical pose realism. Our results indicate that incorporating domain expertise into the diffusion process leads to more biologically relevant docking predictions. Additionally, we explore the potential of GDD for lead optimization in drug discovery through angle transfer in maximal common substructure (MCS) docking, showcasing its capability to predict ligand orientations for chemically similar compounds accurately.
[ { "created": "Tue, 9 Apr 2024 17:31:18 GMT", "version": "v1" } ]
2024-04-10
[ [ "Miñán", "Raúl", "" ], [ "Gallardo", "Javier", "" ], [ "Ciudad", "Álvaro", "" ], [ "Molina", "Alexis", "" ] ]
This work introduces GeoDirDock (GDD), a novel approach to molecular docking that enhances the accuracy and physical plausibility of ligand docking predictions. GDD guides the denoising process of a diffusion model along geodesic paths within multiple spaces representing translational, rotational, and torsional degrees of freedom. Our method leverages expert knowledge to direct the generative modeling process, specifically targeting desired protein-ligand interaction regions. We demonstrate that GDD significantly outperforms existing blind docking methods in terms of RMSD accuracy and physicochemical pose realism. Our results indicate that incorporating domain expertise into the diffusion process leads to more biologically relevant docking predictions. Additionally, we explore the potential of GDD for lead optimization in drug discovery through angle transfer in maximal common substructure (MCS) docking, showcasing its capability to predict ligand orientations for chemically similar compounds accurately.
1801.09848
Ardavan Salehi Nobandegani
Ardavan S. Nobandegani, Kevin da Silva Castanheira, A. Ross Otto, Thomas R. Shultz
Over-representation of Extreme Events in Decision-Making: A Rational Metacognitive Account
null
null
null
null
q-bio.NC cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Availability bias, manifested in the over-representation of extreme eventualities in decision-making, is a well-known cognitive bias, and is generally taken as evidence of human irrationality. In this work, we present the first rational, metacognitive account of the Availability bias, formally articulated at Marr's algorithmic level of analysis. Concretely, we present a normative, metacognitive model of how a cognitive system should over-represent extreme eventualities, depending on the amount of time at its disposal for decision-making. Our model also accounts for two well-known framing effects in human decision-making under risk---the fourfold pattern of risk preferences in outcome probability (Tversky & Kahneman, 1992) and in outcome magnitude (Markovitz, 1952)---thereby providing the first metacognitively-rational basis for those effects. Empirical evidence, furthermore, confirms an important prediction of our model. Surprisingly, our model is unimaginably robust with respect to its focal parameter. We discuss the implications of our work for studies on human decision-making, and conclude by presenting a counterintuitive prediction of our model, which, if confirmed, would have intriguing implications for human decision-making under risk. To our knowledge, our model is the first metacognitive, resource-rational process model of cognitive biases in decision-making.
[ { "created": "Tue, 30 Jan 2018 04:33:25 GMT", "version": "v1" } ]
2018-01-31
[ [ "Nobandegani", "Ardavan S.", "" ], [ "Castanheira", "Kevin da Silva", "" ], [ "Otto", "A. Ross", "" ], [ "Shultz", "Thomas R.", "" ] ]
The Availability bias, manifested in the over-representation of extreme eventualities in decision-making, is a well-known cognitive bias, and is generally taken as evidence of human irrationality. In this work, we present the first rational, metacognitive account of the Availability bias, formally articulated at Marr's algorithmic level of analysis. Concretely, we present a normative, metacognitive model of how a cognitive system should over-represent extreme eventualities, depending on the amount of time at its disposal for decision-making. Our model also accounts for two well-known framing effects in human decision-making under risk---the fourfold pattern of risk preferences in outcome probability (Tversky & Kahneman, 1992) and in outcome magnitude (Markovitz, 1952)---thereby providing the first metacognitively-rational basis for those effects. Empirical evidence, furthermore, confirms an important prediction of our model. Surprisingly, our model is unimaginably robust with respect to its focal parameter. We discuss the implications of our work for studies on human decision-making, and conclude by presenting a counterintuitive prediction of our model, which, if confirmed, would have intriguing implications for human decision-making under risk. To our knowledge, our model is the first metacognitive, resource-rational process model of cognitive biases in decision-making.
1701.03993
Christoph Adami
Nitash C G, Thomas LaBar, Arend Hintze, and Christoph Adami (Michigan State University)
Origin of life in a digital microcosm
20 pages, 7 figures. To appear in special issue of Philosophical Transactions of the Royal Society A: Re-Conceptualizing the Origins of Life from a Physical Sciences Perspective
null
null
null
q-bio.PE cs.IT math.IT nlin.AO q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While all organisms on Earth descend from a common ancestor, there is no consensus on whether the origin of this ancestral self-replicator was a one-off event or whether it was only the final survivor of multiple origins. Here we use the digital evolution system Avida to study the origin of self-replicating computer programs. By using a computational system, we avoid many of the uncertainties inherent in any biochemical system of self-replicators (while running the risk of ignoring a fundamental aspect of biochemistry). We generated the exhaustive set of minimal-genome self-replicators and analyzed the network structure of this fitness landscape. We further examined the evolvability of these self-replicators and found that the evolvability of a self-replicator is dependent on its genomic architecture. We studied the differential ability of replicators to take over the population when competed against each other (akin to a primordial-soup model of biogenesis) and found that the probability of a self-replicator out-competing the others is not uniform. Instead, progenitor (most-recent common ancestor) genotypes are clustered in a small region of the replicator space. Our results demonstrate how computational systems can be used as test systems for hypotheses concerning the origin of life.
[ { "created": "Sun, 15 Jan 2017 05:26:11 GMT", "version": "v1" } ]
2017-01-17
[ [ "G", "Nitash C", "", "Michigan\n State University" ], [ "LaBar", "Thomas", "", "Michigan\n State University" ], [ "Hintze", "Arend", "", "Michigan\n State University" ], [ "Adami", "Christoph", "", "Michigan\n State University" ] ]
While all organisms on Earth descend from a common ancestor, there is no consensus on whether the origin of this ancestral self-replicator was a one-off event or whether it was only the final survivor of multiple origins. Here we use the digital evolution system Avida to study the origin of self-replicating computer programs. By using a computational system, we avoid many of the uncertainties inherent in any biochemical system of self-replicators (while running the risk of ignoring a fundamental aspect of biochemistry). We generated the exhaustive set of minimal-genome self-replicators and analyzed the network structure of this fitness landscape. We further examined the evolvability of these self-replicators and found that the evolvability of a self-replicator is dependent on its genomic architecture. We studied the differential ability of replicators to take over the population when competed against each other (akin to a primordial-soup model of biogenesis) and found that the probability of a self-replicator out-competing the others is not uniform. Instead, progenitor (most-recent common ancestor) genotypes are clustered in a small region of the replicator space. Our results demonstrate how computational systems can be used as test systems for hypotheses concerning the origin of life.
2306.06450
Pablo de Castro
Vivian Dornelas, Pablo de Castro, Justin M. Calabrese, William F. Fagan, Ricardo Martinez-Garcia
Movement bias in asymmetric landscapes and its impact on population distribution and critical habitat size
21 pages, 8 figures
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Ecologists have long investigated how demographic and movement parameters determine the spatial distribution and critical habitat size of a population. However, most models oversimplify movement behavior, neglecting how landscape heterogeneity influences individual movement. We relax this assumption and introduce a reaction-advection-diffusion equation that describes population dynamics when individuals exhibit space-dependent movement bias toward preferred regions. Our model incorporates two types of these preferred regions: a high-quality habitat patch, termed `habitat', which is included to model avoidance of degraded habitats like deforested regions; and a preferred location, such as a chemoattractant source or a watering hole, that we allow to be asymmetrically located with respect to habitat edges. In this scenario, the critical habitat size depends on both the relative position of the preferred location and the movement bias intensities. When preferred locations are near habitat edges, the critical habitat size can decrease when diffusion increases, a phenomenon called the drift paradox. Also, ecological traps arise when the habitat overcrowds due to excessive attractiveness or the preferred location is near a low-quality region. Our results highlight the importance of species-specific movement behavior and habitat preference as drivers of population dynamics in fragmented landscapes and, therefore, in the design of protected areas.
[ { "created": "Sat, 10 Jun 2023 14:14:10 GMT", "version": "v1" }, { "created": "Fri, 8 Mar 2024 18:39:15 GMT", "version": "v2" } ]
2024-03-11
[ [ "Dornelas", "Vivian", "" ], [ "de Castro", "Pablo", "" ], [ "Calabrese", "Justin M.", "" ], [ "Fagan", "William F.", "" ], [ "Martinez-Garcia", "Ricardo", "" ] ]
Ecologists have long investigated how demographic and movement parameters determine the spatial distribution and critical habitat size of a population. However, most models oversimplify movement behavior, neglecting how landscape heterogeneity influences individual movement. We relax this assumption and introduce a reaction-advection-diffusion equation that describes population dynamics when individuals exhibit space-dependent movement bias toward preferred regions. Our model incorporates two types of these preferred regions: a high-quality habitat patch, termed `habitat', which is included to model avoidance of degraded habitats like deforested regions; and a preferred location, such as a chemoattractant source or a watering hole, that we allow to be asymmetrically located with respect to habitat edges. In this scenario, the critical habitat size depends on both the relative position of the preferred location and the movement bias intensities. When preferred locations are near habitat edges, the critical habitat size can decrease when diffusion increases, a phenomenon called the drift paradox. Also, ecological traps arise when the habitat overcrowds due to excessive attractiveness or the preferred location is near a low-quality region. Our results highlight the importance of species-specific movement behavior and habitat preference as drivers of population dynamics in fragmented landscapes and, therefore, in the design of protected areas.
1508.02930
Francois Baccelli
Eliza O'Reilly, Francois Baccelli, Gustavo de Veciana and Haris Vikalo
End-to-End Optimization of High Throughput DNA Sequencing
null
null
null
null
q-bio.GN math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At the core of high throughput DNA sequencing platforms lies a bio-physical surface process that results in a random geometry of clusters of homogeneous short DNA fragments typically hundreds of base pairs long - bridge amplification. The statistical properties of this random process and length of the fragments are critical as they affect the information that can be subsequently extracted, i.e., density of successfully inferred DNA fragment reads. The ensemble of overlapping DNA fragment reads are then used to computationally reconstruct the much longer target genome sequence, e.g., ranging from hundreds of thousands to billions of base pairs. The success of the reconstruction in turn depends on having a sufficiently large ensemble of DNA fragments that are sufficiently long. In this paper using stochastic geometry we model and optimize the end-to-end process linking and partially controlling the statistics of the physical processes to the success of the computational step. This provides, for the first time, a framework capturing salient features of such sequencing platforms that can be used to study cost, performance or sensitivity of the sequencing process.
[ { "created": "Tue, 4 Aug 2015 13:50:52 GMT", "version": "v1" } ]
2015-08-13
[ [ "O'Reilly", "Eliza", "" ], [ "Baccelli", "Francois", "" ], [ "de Veciana", "Gustavo", "" ], [ "Vikalo", "Haris", "" ] ]
At the core of high throughput DNA sequencing platforms lies a bio-physical surface process that results in a random geometry of clusters of homogeneous short DNA fragments typically hundreds of base pairs long - bridge amplification. The statistical properties of this random process and length of the fragments are critical as they affect the information that can be subsequently extracted, i.e., density of successfully inferred DNA fragment reads. The ensemble of overlapping DNA fragment reads are then used to computationally reconstruct the much longer target genome sequence, e.g., ranging from hundreds of thousands to billions of base pairs. The success of the reconstruction in turn depends on having a sufficiently large ensemble of DNA fragments that are sufficiently long. In this paper using stochastic geometry we model and optimize the end-to-end process linking and partially controlling the statistics of the physical processes to the success of the computational step. This provides, for the first time, a framework capturing salient features of such sequencing platforms that can be used to study cost, performance or sensitivity of the sequencing process.
1604.02554
P. Gaspard
Pierre Gaspard
Kinetics and thermodynamics of DNA polymerases with exonuclease proofreading
Physical Review E (2016)
Phys. Rev. E 93, 042420 (2016)
10.1103/PhysRevE.93.042420
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kinetic theory and thermodynamics are applied to DNA polymerases with exonuclease activity, taking into account the dependence of the rates on the previously incorporated nucleotide. The replication fidelity is shown to increase significantly thanks to this dependence at the basis of the mechanism of exonuclease proofreading. In particular, this dependence can provide up to a hundred-fold lowering of the error probability under physiological conditions. Theory is compared with numerical simulations for the DNA polymerases of T7 viruses and human mitochondria.
[ { "created": "Sat, 9 Apr 2016 12:22:16 GMT", "version": "v1" } ]
2018-01-04
[ [ "Gaspard", "Pierre", "" ] ]
Kinetic theory and thermodynamics are applied to DNA polymerases with exonuclease activity, taking into account the dependence of the rates on the previously incorporated nucleotide. The replication fidelity is shown to increase significantly thanks to this dependence at the basis of the mechanism of exonuclease proofreading. In particular, this dependence can provide up to a hundred-fold lowering of the error probability under physiological conditions. Theory is compared with numerical simulations for the DNA polymerases of T7 viruses and human mitochondria.
2001.05430
Jason Kamran Jr Eshraghian
Jason K. Eshraghian and Seungbum Baek and Wesley Thio and Yulia Sandamirskaya and Herbert H.C. Iu and Wei D. Lu
A Real-Time Retinomorphic Simulator Using a Conductance-Based Discrete Neuronal Network
5 pages, 4 figures, accepted for 2020 IEEE AICAS
null
null
null
q-bio.NC eess.IV q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an optimized conductance-based retina microcircuit simulator which transforms light stimuli into a series of graded and spiking action potentials through photo transduction. We use discrete retinal neuron blocks based on a collation of single-compartment models and morphologically realistic formulations, and successfully achieve a biologically real-time simulator. This is done by optimizing the numerical methods employed to solve the system of over 270 nonlinear ordinary differential equations and parameters. Our simulator includes some of the most recent advances in compartmental modeling to include five intrinsic ion currents of each cell whilst ensuring real-time performance, in attaining the ion-current and membrane responses of the photoreceptor rod and cone cells, the bipolar and amacrine cells, their laterally connected electrical and chemical synapses, and the output ganglion cell. It exhibits dynamical retinal behavior such as spike-frequency adaptation, rebound activation, fast-spiking, and subthreshold responsivity. Light stimuli incident at the photoreceptor rod and cone cells are modulated through the system of differential equations, enabling the user to probe the neuronal response at any point in the network. This is in contrast to many other retina encoding schemes which prefer to `black-box' the preceding stages to the spike train output. Our simulator is made available open source, with the hope that it will benefit neuroscientists and machine learning practitioners in better understanding the retina sub-circuitries, how retina cells optimize the representation of visual information, and in generating large datasets of biologically accurate graded and spiking responses.
[ { "created": "Thu, 26 Dec 2019 20:23:13 GMT", "version": "v1" } ]
2020-01-16
[ [ "Eshraghian", "Jason K.", "" ], [ "Baek", "Seungbum", "" ], [ "Thio", "Wesley", "" ], [ "Sandamirskaya", "Yulia", "" ], [ "Iu", "Herbert H. C.", "" ], [ "Lu", "Wei D.", "" ] ]
We present an optimized conductance-based retina microcircuit simulator which transforms light stimuli into a series of graded and spiking action potentials through photo transduction. We use discrete retinal neuron blocks based on a collation of single-compartment models and morphologically realistic formulations, and successfully achieve a biologically real-time simulator. This is done by optimizing the numerical methods employed to solve the system of over 270 nonlinear ordinary differential equations and parameters. Our simulator includes some of the most recent advances in compartmental modeling to include five intrinsic ion currents of each cell whilst ensuring real-time performance, in attaining the ion-current and membrane responses of the photoreceptor rod and cone cells, the bipolar and amacrine cells, their laterally connected electrical and chemical synapses, and the output ganglion cell. It exhibits dynamical retinal behavior such as spike-frequency adaptation, rebound activation, fast-spiking, and subthreshold responsivity. Light stimuli incident at the photoreceptor rod and cone cells are modulated through the system of differential equations, enabling the user to probe the neuronal response at any point in the network. This is in contrast to many other retina encoding schemes which prefer to `black-box' the preceding stages to the spike train output. Our simulator is made available open source, with the hope that it will benefit neuroscientists and machine learning practitioners in better understanding the retina sub-circuitries, how retina cells optimize the representation of visual information, and in generating large datasets of biologically accurate graded and spiking responses.
q-bio/0609003
Yunfeng Shan Dr.
Yunfeng Shan and Xiu-Qing Li
Maximum-frequency gene tree: a simplified genome-scale approach to overcoming incongruence in molecular phylogenies
10 pages, 5 figures, Evolution 2006, July 23-27 Stony Brook University, NY, USA
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms: maximum parsimony, minimum evolution, maximum likelihood, and neighbor-joining, were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely a gene can always generate the "true tree" by all the four algorithms. However, the most frequent gene tree, termed "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: 1) The true tree relationship among the species studied is still maintained by the largest group of orthologous genes; 2) There are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and 3) The maximum gene-support tree reflects the phylogenetic relationship among species in comparison. Keywords: genome, gene evolution, molecular phylogeny, true tree
[ { "created": "Sat, 2 Sep 2006 14:20:49 GMT", "version": "v1" }, { "created": "Mon, 11 Sep 2006 03:05:54 GMT", "version": "v2" }, { "created": "Sat, 7 Jun 2008 21:21:19 GMT", "version": "v3" } ]
2008-06-09
[ [ "Shan", "Yunfeng", "" ], [ "Li", "Xiu-Qing", "" ] ]
Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms: maximum parsimony, minimum evolution, maximum likelihood, and neighbor-joining, were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely a gene can always generate the "true tree" by all the four algorithms. However, the most frequent gene tree, termed "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: 1) The true tree relationship among the species studied is still maintained by the largest group of orthologous genes; 2) There are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and 3) The maximum gene-support tree reflects the phylogenetic relationship among species in comparison. Keywords: genome, gene evolution, molecular phylogeny, true tree
2102.03847
Suzette Geriente
Diederik Aerts, Jonito Aerts Argu\"elles, Lester Beltran, Suzette Geriente (Center Leo Apostel for Interdisciplinary Studies, Free University of Brussels (VUB), Brussels, Belgium), and Sandro Sozzo (School of Business and Centre IQSCS, University of Leicester, Leicester, United Kingdom)
Entanglement in Cognition violating Bell Inequalities Beyond Cirel'son's Bound
22 pages
null
null
null
q-bio.NC quant-ph
http://creativecommons.org/licenses/by/4.0/
We present the results of two tests where a sample of human participants were asked to make judgements about the conceptual combinations {\it The Animal Acts} and {\it The Animal eats the Food}. Both tests significantly violate the Clauser-Horne-Shimony-Holt version of Bell inequalities (`CHSH inequality'), thus exhibiting manifestly non-classical behaviour due to the meaning connection between the individual concepts that are combined. We then apply a quantum-theoretic framework which we developed for any Bell-type situation and represent empirical data in complex Hilbert space. We show that the observed violations of the CHSH inequality can be explained as a consequence of a strong form of `quantum entanglement' between the component conceptual entities in which both the state and measurements are entangled. We finally observe that a quantum model in Hilbert space can be elaborated in these Bell-type situations even when the CHSH violation exceeds the known `Cirel'son bound', in contrast to a widespread belief. These findings confirm and strengthen the results we recently obtained in a variety of cognitive tests and document and image retrieval operations on the same conceptual combinations.
[ { "created": "Sun, 7 Feb 2021 16:57:59 GMT", "version": "v1" } ]
2021-02-09
[ [ "Aerts", "Diederik", "", "Center Leo Apostel for Interdisciplinary Studies, Free University\n of Brussels" ], [ "Arguëlles", "Jonito Aerts", "", "Center Leo Apostel for Interdisciplinary Studies, Free University\n of Brussels" ], [ "Beltran", "Lester", "", "Center Leo Apostel for Interdisciplinary Studies, Free University\n of Brussels" ], [ "Geriente", "Suzette", "", "Center Leo Apostel for Interdisciplinary Studies, Free University\n of Brussels" ], [ "Sozzo", "Sandro", "", "School of Business\n and Centre IQSCS, University of Leicester, Leicester, United Kingdom" ] ]
We present the results of two tests where a sample of human participants were asked to make judgements about the conceptual combinations {\it The Animal Acts} and {\it The Animal eats the Food}. Both tests significantly violate the Clauser-Horne-Shimony-Holt version of Bell inequalities (`CHSH inequality'), thus exhibiting manifestly non-classical behaviour due to the meaning connection between the individual concepts that are combined. We then apply a quantum-theoretic framework which we developed for any Bell-type situation and represent empirical data in complex Hilbert space. We show that the observed violations of the CHSH inequality can be explained as a consequence of a strong form of `quantum entanglement' between the component conceptual entities in which both the state and measurements are entangled. We finally observe that a quantum model in Hilbert space can be elaborated in these Bell-type situations even when the CHSH violation exceeds the known `Cirel'son bound', in contrast to a widespread belief. These findings confirm and strengthen the results we recently obtained in a variety of cognitive tests and document and image retrieval operations on the same conceptual combinations.
1304.5232
Anca Radulescu
Anca Radulescu
Neural network spectral robustness under perturbations of the underlying graph
manuscript: 27 pages; references: 5 pages; 2 appendices; 16 figures
null
null
null
q-bio.NC cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have been using graph theoretical approaches to model complex networks (such as social, infrastructural or biological networks), and how their hardwired circuitry relates to their dynamic evolution in time. Understanding how configuration reflects on the coupled behavior in a system of dynamic nodes can be of great importance, for example in the context of how the brain connectome is affecting brain function. However, the connectivity patterns that appear in brain networks, and their individual effects on network dynamics, are far from being fully understood. We study the connections between edge configuration and dynamics in a simple oriented network composed of two interconnected cliques (representative of brain feedback regulatory circuitry). In this paper, our main goal is to study the spectra of the graph adjacency and Laplacian matrices, with a focus on three aspects in particular: (1) the sensitivity/robustness of the spectrum in response to varying the intra and inter-modular edge density, (2) the effects on the spectrum of perturbing the edge configuration, while keeping the densities fixed and (3) the effects of increasing the network size. We study some tractable aspects analytically, then simulate more general results numerically. This paper aims to clarify, from analytical and modeling perspectives, the underpinnings of our related work, which further addresses how graph properties affect the network's temporal dynamics and phase transitions. We propose that this type of results may be helpful when studying small networks such as macroscopic brain circuits. We suggest potential applications to understanding synaptic restructuring in learning networks, and the effects of network configuration on the function of emotion-regulatory neural circuits.
[ { "created": "Thu, 18 Apr 2013 19:40:49 GMT", "version": "v1" }, { "created": "Thu, 16 Jul 2015 15:19:00 GMT", "version": "v2" } ]
2015-07-17
[ [ "Radulescu", "Anca", "" ] ]
Recent studies have been using graph theoretical approaches to model complex networks (such as social, infrastructural or biological networks), and how their hardwired circuitry relates to their dynamic evolution in time. Understanding how configuration reflects on the coupled behavior in a system of dynamic nodes can be of great importance, for example in the context of how the brain connectome is affecting brain function. However, the connectivity patterns that appear in brain networks, and their individual effects on network dynamics, are far from being fully understood. We study the connections between edge configuration and dynamics in a simple oriented network composed of two interconnected cliques (representative of brain feedback regulatory circuitry). In this paper, our main goal is to study the spectra of the graph adjacency and Laplacian matrices, with a focus on three aspects in particular: (1) the sensitivity/robustness of the spectrum in response to varying the intra and inter-modular edge density, (2) the effects on the spectrum of perturbing the edge configuration, while keeping the densities fixed and (3) the effects of increasing the network size. We study some tractable aspects analytically, then simulate more general results numerically. This paper aims to clarify, from analytical and modeling perspectives, the underpinnings of our related work, which further addresses how graph properties affect the network's temporal dynamics and phase transitions. We propose that this type of results may be helpful when studying small networks such as macroscopic brain circuits. We suggest potential applications to understanding synaptic restructuring in learning networks, and the effects of network configuration on the function of emotion-regulatory neural circuits.
2003.14288
Daniel Cajueiro
Saulo B. Bastos and Daniel O. Cajueiro
Modeling and forecasting the early evolution of the Covid-19 pandemic in Brazil
We received several interesting comments from a large number of people and decided to update the paper to a second version
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model and forecast the early evolution of the COVID-19 pandemic in Brazil using recent Brazilian data from February 25, 2020 to March 30, 2020. This early period accounts for unawareness of the epidemiological characteristics of the disease in a new territory, sub-notification of the real numbers of infected people and the timely introduction of social distancing policies to flatten the spread of the disease. We use two variations of the SIR model and we include a parameter that comprises the effects of social distancing measures. Short and long term forecasts show that the social distancing policy imposed by the government is able to flatten the pattern of infection of the COVID-19. However, our results also show that if this policy does not last long enough, it only shifts the peak of infection into the future while keeping the peak at almost the same value. Furthermore, our long term simulations forecast the optimal date to end the policy. Finally, we show that the proportion of asymptomatic individuals affects the amplitude of the peak of symptomatic infected, suggesting that it is important to test the population.
[ { "created": "Tue, 31 Mar 2020 15:21:39 GMT", "version": "v1" }, { "created": "Thu, 9 Apr 2020 03:23:52 GMT", "version": "v2" }, { "created": "Sat, 13 Jun 2020 03:49:48 GMT", "version": "v3" } ]
2020-06-16
[ [ "Bastos", "Saulo B.", "" ], [ "Cajueiro", "Daniel O.", "" ] ]
We model and forecast the early evolution of the COVID-19 pandemic in Brazil using recent Brazilian data from February 25, 2020 to March 30, 2020. This early period accounts for unawareness of the epidemiological characteristics of the disease in a new territory, under-reporting of the real number of infected people, and the timely introduction of social distancing policies to flatten the spread of the disease. We use two variations of the SIR model and include a parameter that captures the effects of social distancing measures. Short- and long-term forecasts show that the social distancing policy imposed by the government is able to flatten the pattern of infection of COVID-19. However, our results also show that if this policy does not last long enough, it only shifts the peak of infection into the future, keeping the peak at almost the same value. Furthermore, our long-term simulations forecast the optimal date to end the policy. Finally, we show that the proportion of asymptomatic individuals affects the amplitude of the peak of symptomatic infections, suggesting that it is important to test the population.
2302.09558
Maurice HT Ling
Maurice HT Ling
Of (Biological) Models and Simulations
null
MOJ Proteomics & Bioinformatics 3(4): 00093 (2016)
null
null
q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Modeling and simulation have been recognized as important aspects of the scientific method for more than 70 years, but their adoption in biology has been slow. Debates on their representativeness, usefulness, and whether the effort spent on such endeavors is worthwhile persist to this day. Here, I argue that most learning is modeling; hence, we arrive at a contradiction if models are not useful. Representing biological systems through mathematical models can be difficult, but the modeling procedure is a process in itself that follows a semi-formal set of rules. Although seldom reported, failure in modeling is not a rare event, but I argue that this is usually a result of erroneous underlying knowledge or misapplication of a model beyond its intended purpose. I argue that in many biological studies, simulation is the only experimental tool. In others, simulation is a means of reducing the possible combinations of experimental work, thereby presenting an economical case for simulation and making it worthwhile to engage in this endeavor. The representativeness of a simulation depends on the validation, verification, assumptions, and limitations of the underlying model. This will be illustrated using the inter-relationship between populations, samples, probability theory, and statistics.
[ { "created": "Sun, 19 Feb 2023 12:35:12 GMT", "version": "v1" } ]
2023-02-21
[ [ "Ling", "Maurice HT", "" ] ]
Modeling and simulation have been recognized as important aspects of the scientific method for more than 70 years, but their adoption in biology has been slow. Debates on their representativeness, usefulness, and whether the effort spent on such endeavors is worthwhile persist to this day. Here, I argue that most learning is modeling; hence, we arrive at a contradiction if models are not useful. Representing biological systems through mathematical models can be difficult, but the modeling procedure is a process in itself that follows a semi-formal set of rules. Although seldom reported, failure in modeling is not a rare event, but I argue that this is usually a result of erroneous underlying knowledge or misapplication of a model beyond its intended purpose. I argue that in many biological studies, simulation is the only experimental tool. In others, simulation is a means of reducing the possible combinations of experimental work, thereby presenting an economical case for simulation and making it worthwhile to engage in this endeavor. The representativeness of a simulation depends on the validation, verification, assumptions, and limitations of the underlying model. This will be illustrated using the inter-relationship between populations, samples, probability theory, and statistics.
1802.01437
Artem Efremov K
Artem K. Efremov and Jie Yan
Transfer-matrix calculations of the effects of tension and torque constraints on DNA-protein interactions
61 pages, including 14 figures and 2 tables
The article is published in Nucleic Acids Research journal. 2018
10.1093/nar/gky478
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Organization and maintenance of the chromosomal DNA in living cells strongly depends on DNA interactions with a plethora of DNA-binding proteins. Single-molecule studies show that the formation of nucleoprotein complexes on DNA by such proteins is frequently subject to force and torque constraints applied to the DNA. Although existing experimental techniques allow these types of mechanical constraints to be exerted on individual DNA biopolymers, their exact effects in the regulation of DNA-protein interactions are still not completely understood due to the lack of systematic theoretical methods able to efficiently interpret complex experimental observations. To fill this gap, we have developed a general theoretical framework based on transfer-matrix calculations that can be used to accurately describe the behaviour of DNA-protein interactions under force and torque constraints. Potential applications of the constructed theoretical approach are demonstrated by predicting how these constraints affect the DNA-binding properties of different types of architectural proteins. The obtained results provide important insights into the potential physiological functions of mechanical forces in chromosomal DNA organization by architectural proteins, as well as into single-DNA manipulation studies of DNA-protein interactions.
[ { "created": "Mon, 5 Feb 2018 14:50:29 GMT", "version": "v1" }, { "created": "Wed, 9 May 2018 14:42:44 GMT", "version": "v2" }, { "created": "Tue, 4 Sep 2018 04:53:54 GMT", "version": "v3" } ]
2018-09-05
[ [ "Efremov", "Artem K.", "" ], [ "Yan", "Jie", "" ] ]
Organization and maintenance of the chromosomal DNA in living cells strongly depends on DNA interactions with a plethora of DNA-binding proteins. Single-molecule studies show that the formation of nucleoprotein complexes on DNA by such proteins is frequently subject to force and torque constraints applied to the DNA. Although existing experimental techniques allow these types of mechanical constraints to be exerted on individual DNA biopolymers, their exact effects in the regulation of DNA-protein interactions are still not completely understood due to the lack of systematic theoretical methods able to efficiently interpret complex experimental observations. To fill this gap, we have developed a general theoretical framework based on transfer-matrix calculations that can be used to accurately describe the behaviour of DNA-protein interactions under force and torque constraints. Potential applications of the constructed theoretical approach are demonstrated by predicting how these constraints affect the DNA-binding properties of different types of architectural proteins. The obtained results provide important insights into the potential physiological functions of mechanical forces in chromosomal DNA organization by architectural proteins, as well as into single-DNA manipulation studies of DNA-protein interactions.
1207.0033
Jose Acacio de Barros
Jos\'e Acacio de Barros
Quantum-like model of behavioral response computation using neural oscillators
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose neural interference as the origin of quantum-like effects in the brain. We do so by using a neural oscillator model consistent with neurophysiological data. The model used was previously shown to reproduce the predictions of behavioral stimulus-response theory well. The quantum-like effects are obtained by the spreading activation of incompatible oscillators, leading to an interference-like effect mediated by inhibitory and excitatory synapses.
[ { "created": "Fri, 29 Jun 2012 23:28:21 GMT", "version": "v1" }, { "created": "Mon, 8 Oct 2012 16:32:07 GMT", "version": "v2" } ]
2012-10-09
[ [ "de Barros", "José Acacio", "" ] ]
In this paper we propose neural interference as the origin of quantum-like effects in the brain. We do so by using a neural oscillator model consistent with neurophysiological data. The model used was previously shown to reproduce the predictions of behavioral stimulus-response theory well. The quantum-like effects are obtained by the spreading activation of incompatible oscillators, leading to an interference-like effect mediated by inhibitory and excitatory synapses.
1306.5275
Eric Shea-Brown
David Leen and Eric Shea-Brown
A simple mechanism for higher-order correlations in integrate-and-fire neurons
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The collective dynamics of neural populations are often characterized in terms of correlations in the spike activity of different neurons. Open questions surround the basic nature of these correlations. In particular, what leads to higher-order correlations -- correlations in the population activity that extend beyond those expected from cell pairs? Here, we examine this question for a simple, but ubiquitous, circuit feature: common fluctuating input arriving at spiking neurons of integrate-and-fire type. We show that this leads to strong higher-order correlations, as found in earlier work with discrete threshold-crossing models. Moreover, we find that the same is true for another widely used, doubly stochastic model of neural spiking, the linear-nonlinear cascade. We explain the surprisingly strong connection between the collective dynamics produced by these models, and conclude that higher-order correlations are both broadly expected and possible to capture with surprising accuracy by simplified (and tractable) descriptions of neural spiking.
[ { "created": "Fri, 21 Jun 2013 23:34:15 GMT", "version": "v1" } ]
2013-06-25
[ [ "Leen", "David", "" ], [ "Shea-Brown", "Eric", "" ] ]
The collective dynamics of neural populations are often characterized in terms of correlations in the spike activity of different neurons. Open questions surround the basic nature of these correlations. In particular, what leads to higher-order correlations -- correlations in the population activity that extend beyond those expected from cell pairs? Here, we examine this question for a simple, but ubiquitous, circuit feature: common fluctuating input arriving at spiking neurons of integrate-and-fire type. We show that this leads to strong higher-order correlations, as found in earlier work with discrete threshold-crossing models. Moreover, we find that the same is true for another widely used, doubly stochastic model of neural spiking, the linear-nonlinear cascade. We explain the surprisingly strong connection between the collective dynamics produced by these models, and conclude that higher-order correlations are both broadly expected and possible to capture with surprising accuracy by simplified (and tractable) descriptions of neural spiking.
1604.01359
Markus Meister
Markus Meister
Physical limits to magnetogenetics
14 pages, 2 figures. Revision includes analysis of magnetic heating of ferritin constructs. Spoiler alert: also implausible
null
10.7554/eLife.17210
null
q-bio.NC q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report magnetic control of membrane conductance by attaching ferritin to an ion channel protein and then tugging the ferritin or heating it with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims conflict with basic laws of physics. The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells.
[ { "created": "Tue, 5 Apr 2016 18:39:49 GMT", "version": "v1" }, { "created": "Sun, 24 Apr 2016 08:45:49 GMT", "version": "v2" }, { "created": "Mon, 4 Jul 2016 00:17:05 GMT", "version": "v3" } ]
2016-09-09
[ [ "Meister", "Markus", "" ] ]
This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report magnetic control of membrane conductance by attaching ferritin to an ion channel protein and then tugging the ferritin or heating it with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims conflict with basic laws of physics. The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells.
1304.4860
Aaron Quinlan Ph.D.
Uma Paila, Brad Chapman, Rory Kirchner, and Aaron Quinlan
GEMINI: integrative exploration of genetic variation and genome annotations
null
null
10.1371/journal.pcbi.1003153
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and flexible set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or variant prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate the utility of GEMINI for exploring variation in personal genomes and family-based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility, and our goal is to provide researchers with a standard framework for medical genomics.
[ { "created": "Wed, 17 Apr 2013 15:31:05 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2013 14:28:16 GMT", "version": "v2" } ]
2015-06-15
[ [ "Paila", "Uma", "" ], [ "Chapman", "Brad", "" ], [ "Kirchner", "Rory", "" ], [ "Quinlan", "Aaron", "" ] ]
Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and flexible set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or variant prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate the utility of GEMINI for exploring variation in personal genomes and family-based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility, and our goal is to provide researchers with a standard framework for medical genomics.
2406.06397
Yuta Nagano
Yuta Nagano, Andrew Pyo, Martina Milighetti, James Henderson, John Shawe-Taylor, Benny Chain, Andreas Tiffeau-Mayer
Contrastive learning of T cell receptor representations
19 pages, 17 figures
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Computational prediction of the interaction of T cell receptors (TCRs) and their ligands is a grand challenge in immunology. Despite advances in high-throughput assays, specificity-labelled TCR data remains sparse. In other domains, the pre-training of language models on unlabelled data has been successfully used to address data bottlenecks. However, it is unclear how to best pre-train protein language models for TCR specificity prediction. Here we introduce a TCR language model called SCEPTR (Simple Contrastive Embedding of the Primary sequence of T cell Receptors), capable of data-efficient transfer learning. Through our model, we introduce a novel pre-training strategy combining autocontrastive learning and masked-language modelling, which enables SCEPTR to achieve its state-of-the-art performance. In contrast, existing protein language models and a variant of SCEPTR pre-trained without autocontrastive learning are outperformed by sequence alignment-based methods. We anticipate that contrastive learning will be a useful paradigm to decode the rules of TCR specificity.
[ { "created": "Mon, 10 Jun 2024 15:50:45 GMT", "version": "v1" } ]
2024-06-11
[ [ "Nagano", "Yuta", "" ], [ "Pyo", "Andrew", "" ], [ "Milighetti", "Martina", "" ], [ "Henderson", "James", "" ], [ "Shawe-Taylor", "John", "" ], [ "Chain", "Benny", "" ], [ "Tiffeau-Mayer", "Andreas", "" ] ]
Computational prediction of the interaction of T cell receptors (TCRs) and their ligands is a grand challenge in immunology. Despite advances in high-throughput assays, specificity-labelled TCR data remains sparse. In other domains, the pre-training of language models on unlabelled data has been successfully used to address data bottlenecks. However, it is unclear how to best pre-train protein language models for TCR specificity prediction. Here we introduce a TCR language model called SCEPTR (Simple Contrastive Embedding of the Primary sequence of T cell Receptors), capable of data-efficient transfer learning. Through our model, we introduce a novel pre-training strategy combining autocontrastive learning and masked-language modelling, which enables SCEPTR to achieve its state-of-the-art performance. In contrast, existing protein language models and a variant of SCEPTR pre-trained without autocontrastive learning are outperformed by sequence alignment-based methods. We anticipate that contrastive learning will be a useful paradigm to decode the rules of TCR specificity.
1210.0330
Peter Csermely
Peter Csermely, Tamas Korcsmaros, Huba J. M. Kiss, Gabor London and Ruth Nussinov
Structure and dynamics of molecular networks: A novel paradigm of drug discovery. A comprehensive review
76 pages, 23 Figures, 12 Tables and 1270 references
Pharmacology and Therapeutics 138:333-408 (2013)
10.1016/j.pharmthera.2013.01.016
null
q-bio.MN cond-mat.dis-nn cs.SI nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only give a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The central hit strategy selectively targets central nodes/edges of the flexible networks of infectious agents or cancer cells to kill them. The network influence strategy works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved by targeting the neighbors of central nodes or edges. It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing more than 1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach.
[ { "created": "Mon, 1 Oct 2012 10:01:56 GMT", "version": "v1" }, { "created": "Thu, 24 Jan 2013 12:18:51 GMT", "version": "v2" }, { "created": "Sat, 11 May 2013 09:38:57 GMT", "version": "v3" } ]
2013-05-14
[ [ "Csermely", "Peter", "" ], [ "Korcsmaros", "Tamas", "" ], [ "Kiss", "Huba J. M.", "" ], [ "London", "Gabor", "" ], [ "Nussinov", "Ruth", "" ] ]
Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only give a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The central hit strategy selectively targets central nodes/edges of the flexible networks of infectious agents or cancer cells to kill them. The network influence strategy works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved by targeting the neighbors of central nodes or edges. It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing more than 1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach.
2206.01047
Imra Aqeel
Imra Aqeel, Abdul Majid, Muhammad Ismail, Hina Bashir
Drug Repurposing For SARS-COV-2 Using Molecular Docking
7 Pages
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Drug repurposing is an unconventional approach used to investigate new therapeutic uses of existing and shelved drugs. With recent advances in technology, the availability of genomics, proteomics, and transcriptomics data, and access to large and reliable database resources, there are abundant opportunities to discover drugs through repurposing in an efficient manner. The recent pandemic of SARS-CoV-2, which has caused the deaths of 6,245,750 people to date, has tremendously increased the use of bioinformatics tools in interpreting the molecular characterization of viral infections. In this paper, we have employed various bioinformatics tools such as AutoDock Vina and PyMOL. We have found a leading drug candidate, Cepharanthine, that has shown better results and effectiveness than recently used antiviral drug candidates such as Favipiravir, IDX184, Remdesivir, and Ribavirin. This paper analyzes Cepharanthine's potential therapeutic importance as a drug of choice in managing COVID-19 cases. It is anticipated that the proposed study will be beneficial for researchers and medical practitioners in handling SARS-CoV-2 and its variant-related diseases.
[ { "created": "Thu, 2 Jun 2022 13:58:13 GMT", "version": "v1" }, { "created": "Tue, 26 Jul 2022 07:34:59 GMT", "version": "v2" } ]
2022-07-27
[ [ "Aqeel", "Imra", "" ], [ "Majid", "Abdul", "" ], [ "Ismail", "Muhammad", "" ], [ "Bashir", "Hina", "" ] ]
Drug repurposing is an unconventional approach used to investigate new therapeutic uses of existing and shelved drugs. With recent advances in technology, the availability of genomics, proteomics, and transcriptomics data, and access to large and reliable database resources, there are abundant opportunities to discover drugs through repurposing in an efficient manner. The recent pandemic of SARS-CoV-2, which has caused the deaths of 6,245,750 people to date, has tremendously increased the use of bioinformatics tools in interpreting the molecular characterization of viral infections. In this paper, we have employed various bioinformatics tools such as AutoDock Vina and PyMOL. We have found a leading drug candidate, Cepharanthine, that has shown better results and effectiveness than recently used antiviral drug candidates such as Favipiravir, IDX184, Remdesivir, and Ribavirin. This paper analyzes Cepharanthine's potential therapeutic importance as a drug of choice in managing COVID-19 cases. It is anticipated that the proposed study will be beneficial for researchers and medical practitioners in handling SARS-CoV-2 and its variant-related diseases.
2102.00971
Rui Wang
Kaifu Gao, Rui Wang, Jiahui Chen, Limei Cheng, Jaclyn Frishcosy, Yuta Huzumi, Yuchi Qiu, Tom Schluckbier, and Guo-Wei Wei
Methodology-centered review of molecular modeling, simulation, and prediction of SARS-CoV-2
99 pages, 17 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The deadly coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has gone out of control globally. Despite much effort by scientists, medical experts, and society in general, the slow progress on drug discovery and antibody therapeutic development, the unknown possible side effects of the existing vaccines, and the high transmission rate of SARS-CoV-2 remind us of the sad reality that our current understanding of the transmission, infectivity, and evolution of SARS-CoV-2 is unfortunately very limited. The major limitation is the lack of mechanistic understanding of viral-host cell interactions, viral regulation, and protein-protein interactions, including antibody-antigen binding, protein-drug binding, host immune response, etc. This limitation will likely haunt the scientific community for a long time and have devastating consequences in combating COVID-19 and other pathogens. Notably, compared to long-cycle, costly, and safety-demanding molecular-level experiments, theoretical and computational studies are economical, speedy, and easy to perform. There exists a tsunami of literature on molecular modeling, simulation, and prediction of SARS-CoV-2 that has become impossible to cover fully in a review. To provide the reader a quick update on the status of molecular modeling, simulation, and prediction of SARS-CoV-2, we present a comprehensive and systematic methodology-centered narrative in the nick of time. Aspects such as molecular modeling, Monte Carlo (MC) methods, structural bioinformatics, machine learning, deep learning, and mathematical approaches are included in this review. This review will be beneficial to researchers who are looking for ways to contribute to SARS-CoV-2 studies and those who are assessing the current status of the field.
[ { "created": "Mon, 1 Feb 2021 16:54:54 GMT", "version": "v1" } ]
2021-02-02
[ [ "Gao", "Kaifu", "" ], [ "Wang", "Rui", "" ], [ "Chen", "Jiahui", "" ], [ "Cheng", "Limei", "" ], [ "Frishcosy", "Jaclyn", "" ], [ "Huzumi", "Yuta", "" ], [ "Qiu", "Yuchi", "" ], [ "Schluckbier", "Tom", "" ], [ "Wei", "Guo-Wei", "" ] ]
The deadly coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has gone out of control globally. Despite much effort by scientists, medical experts, and society in general, the slow progress on drug discovery and antibody therapeutic development, the unknown possible side effects of the existing vaccines, and the high transmission rate of SARS-CoV-2 remind us of the sad reality that our current understanding of the transmission, infectivity, and evolution of SARS-CoV-2 is unfortunately very limited. The major limitation is the lack of mechanistic understanding of viral-host cell interactions, viral regulation, and protein-protein interactions, including antibody-antigen binding, protein-drug binding, host immune response, etc. This limitation will likely haunt the scientific community for a long time and have devastating consequences in combating COVID-19 and other pathogens. Notably, compared to long-cycle, costly, and safety-demanding molecular-level experiments, theoretical and computational studies are economical, speedy, and easy to perform. There exists a tsunami of literature on molecular modeling, simulation, and prediction of SARS-CoV-2 that has become impossible to cover fully in a review. To provide the reader a quick update on the status of molecular modeling, simulation, and prediction of SARS-CoV-2, we present a comprehensive and systematic methodology-centered narrative in the nick of time. Aspects such as molecular modeling, Monte Carlo (MC) methods, structural bioinformatics, machine learning, deep learning, and mathematical approaches are included in this review. This review will be beneficial to researchers who are looking for ways to contribute to SARS-CoV-2 studies and those who are assessing the current status of the field.
1508.04416
Amir Toor
B. Abdul Razzaq, A. Scalora, V. Koparde, J. Meier, M. Mahmood, S. Salman, M. Jameson-Lee, M. Serrano, N. Sheth, M. Voelkner, D. Kobulnicky, C. Roberts, A. Ferreira-Gonzalez, M. Manjili, G. Buck, M. Neale, A. Toor
Dynamical System Modeling to Simulate Donor T Cell Response to Whole Exome Sequencing-Derived Recipient Peptides Demonstrates Different Alloreactivity Potential In HLA-Matched and Mismatched Donor-Recipient Pairs
50 pages (including references and supplementary materials), 12 figures, 3 supplementary figures
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stem cell transplants may be considered as dynamical systems, allowing sequence differences across the exomes of the transplant donors and recipients to be used to simulate an alloreactive T cell response. Whole exome sequencing was performed on HLA-matched stem cell transplant donor-recipient pairs, and the nucleotide sequence differences were translated to peptides. The binding affinity of the peptides to the relevant HLA in each pair was determined. The resulting array of peptide-HLA binding affinity values in each patient was used to simulate an alloreactive donor-derived T cell repertoire. This simulated T cell repertoire reproduces a number of features of clinically observed T cell repertoires. The simulated alloreactive T cell repertoire was markedly different across HLA-matched stem cell transplant donor-recipient pairs and demonstrates a possible correlation with survival.
[ { "created": "Sun, 16 Aug 2015 16:10:44 GMT", "version": "v1" } ]
2015-08-19
[ [ "Razzaq", "B. Abdul", "" ], [ "Scalora", "A.", "" ], [ "Koparde", "V.", "" ], [ "Meier", "J.", "" ], [ "Mahmood", "M.", "" ], [ "Salman", "S.", "" ], [ "Jameson-Lee", "M.", "" ], [ "Serrano", "M.", "" ], [ "Sheth", "N.", "" ], [ "Voelkner", "M.", "" ], [ "Kobulnicky", "D.", "" ], [ "Roberts", "C.", "" ], [ "Ferreira-Gonzalez", "A.", "" ], [ "Manjili", "M.", "" ], [ "Buck", "G.", "" ], [ "Neale", "M.", "" ], [ "Toor", "A.", "" ] ]
Stem cell transplants may be considered as dynamical systems, allowing sequence differences across the exomes of the transplant donors and recipients to be used to simulate an alloreactive T cell response. Whole exome sequencing was performed on HLA-matched stem cell transplant donor-recipient pairs, and the nucleotide sequence differences were translated to peptides. The binding affinity of the peptides to the relevant HLA in each pair was determined. The resulting array of peptide-HLA binding affinity values in each patient was used to simulate an alloreactive donor-derived T cell repertoire. This simulated T cell repertoire reproduces a number of features of clinically observed T cell repertoires. The simulated alloreactive T cell repertoire was markedly different across HLA-matched stem cell transplant donor-recipient pairs and demonstrates a possible correlation with survival.
2105.09158
Wayne Huggins
Alexander J. Ropelewski, Megan A. Rizzo, Jason R. Swedlow, Jan Huisken, Pavel Osten, Neda Khanjani, Kurt Weiss, Vesselina Bakalov, Michelle Engle, Lauren Gridley, Michelle Krzyzanowski, Tom Madden, Deborah Maiese, Justin Waterfield, David Williams, Carol Hamilton, and Wayne Huggins
Essential Metadata for 3D BRAIN Microscopy
10 pages, 1 table
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent advances in fluorescence microscopy techniques and in tissue clearing, labeling, and staining provide unprecedented opportunities to investigate brain structure and function. Images from these experiments make it possible to catalog brain cell types and define their location, morphology, and connectivity in a native context, leading to a better understanding of normal development and disease etiology. Consistent annotation of metadata is needed to provide the context necessary to understand, reuse, and integrate these data. This report describes an effort to establish metadata standards for 3D microscopy datasets for use by the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the neuroscience research community. These standards were built on existing efforts and developed with input from the brain microscopy community to promote adoption. The resulting Essential Metadata for 3D BRAIN Microscopy includes 91 fields organized into seven categories: Contributors, Funders, Publication, Instrument, Dataset, Specimen, and Image. Adoption of these metadata standards will ensure that investigators receive credit for their work, promote data reuse, facilitate downstream analysis of shared data, and encourage collaboration.
[ { "created": "Tue, 18 May 2021 10:32:13 GMT", "version": "v1" } ]
2021-05-20
[ [ "Ropelewski", "Alexander J.", "" ], [ "Rizzo", "Megan A.", "" ], [ "Swedlow", "Jason R.", "" ], [ "Huisken", "Jan", "" ], [ "Osten", "Pavel", "" ], [ "Khanjani", "Neda", "" ], [ "Weiss", "Kurt", "" ], [ "Bakalov", "Vesselina", "" ], [ "Engle", "Michelle", "" ], [ "Gridley", "Lauren", "" ], [ "Krzyzanowski", "Michelle", "" ], [ "Madden", "Tom", "" ], [ "Maiese", "Deborah", "" ], [ "Waterfield", "Justin", "" ], [ "Williams", "David", "" ], [ "Hamilton", "Carol", "" ], [ "Huggins", "Wayne", "" ] ]
Recent advances in fluorescence microscopy techniques and in tissue clearing, labeling, and staining provide unprecedented opportunities to investigate brain structure and function. Images from these experiments make it possible to catalog brain cell types and define their location, morphology, and connectivity in a native context, leading to a better understanding of normal development and disease etiology. Consistent annotation of metadata is needed to provide the context necessary to understand, reuse, and integrate these data. This report describes an effort to establish metadata standards for 3D microscopy datasets for use by the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the neuroscience research community. These standards were built on existing efforts and developed with input from the brain microscopy community to promote adoption. The resulting Essential Metadata for 3D BRAIN Microscopy includes 91 fields organized into seven categories: Contributors, Funders, Publication, Instrument, Dataset, Specimen, and Image. Adoption of these metadata standards will ensure that investigators receive credit for their work, promote data reuse, facilitate downstream analysis of shared data, and encourage collaboration.
1908.07662
Dmytro Guzenko
Dmytro Guzenko, Aleix Lafita, Bohdan Monastyrskyy, Andriy Kryshtafovych, Jose M. Duarte
Assessment of protein assembly prediction in CASP13
null
null
10.1002/prot.25795
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
We present the assembly category assessment in the 13th edition of the CASP community-wide experiment. For the second time, protein assemblies constitute an independent assessment category. Compared to the last edition we see a clear uptake in participation, more oligomeric targets released, and consistent, albeit modest, improvement in prediction quality. Looking at the tertiary structure predictions, we observe that ignoring the oligomeric state of the targets hinders modelling success. We also note that some contact prediction groups successfully predicted homomeric interfacial contacts, though it appears that these predictions were not used for assembly modelling. Homology modelling with sizeable human intervention appears to form the basis of the assembly prediction techniques in this round of CASP. Future developments should see more integrated approaches to modelling where multiple subunits are a natural part of the modelling process, which would benefit the structure prediction field as a whole.
[ { "created": "Wed, 21 Aug 2019 01:14:57 GMT", "version": "v1" } ]
2019-08-22
[ [ "Guzenko", "Dmytro", "" ], [ "Lafita", "Aleix", "" ], [ "Monastyrskyy", "Bohdan", "" ], [ "Kryshtafovych", "Andriy", "" ], [ "Duarte", "Jose M.", "" ] ]
We present the assembly category assessment in the 13th edition of the CASP community-wide experiment. For the second time, protein assemblies constitute an independent assessment category. Compared to the last edition we see a clear uptake in participation, more oligomeric targets released, and consistent, albeit modest, improvement in prediction quality. Looking at the tertiary structure predictions, we observe that ignoring the oligomeric state of the targets hinders modelling success. We also note that some contact prediction groups successfully predicted homomeric interfacial contacts, though it appears that these predictions were not used for assembly modelling. Homology modelling with sizeable human intervention appears to form the basis of the assembly prediction techniques in this round of CASP. Future developments should see more integrated approaches to modelling where multiple subunits are a natural part of the modelling process, which would benefit the structure prediction field as a whole.
q-bio/0411025
Aniello Buonocore
A. Buonocore, L. Caputo, Y. Ishii, E. Pirozzi, T. Yanagida and L. M. Ricciardi
A Phenomenological model of Myosin II dynamics in the presence of external loads
6 figures, 8 tables
null
null
null
q-bio.BM
null
We address the hotly debated question of the validity of the loose-coupling versus lever-arm theories of actomyosin dynamics by re-interpreting and extending the phenomenological washboard potential model proposed by some of us in a previous paper. In this new model, a Brownian motion harnessing thermal energy is assumed to co-exist with the deterministic swing of the lever arm, yielding an excellent fit to the set of data obtained by some of us on the sliding of Myosin II heads on immobilized actin filaments under various load conditions. Our theoretical arguments are complemented by accurate numerical simulations, and the robustness of the model is tested via different choices of parameters and potential profiles.
[ { "created": "Thu, 11 Nov 2004 17:23:05 GMT", "version": "v1" } ]
2007-05-23
[ [ "Buonocore", "A.", "" ], [ "Caputo", "L.", "" ], [ "Ishii", "Y.", "" ], [ "Pirozzi", "E.", "" ], [ "Yanagida", "T.", "" ], [ "Ricciardi", "L. M.", "" ] ]
We address the hotly debated question of the validity of the loose-coupling versus lever-arm theories of actomyosin dynamics by re-interpreting and extending the phenomenological washboard potential model proposed by some of us in a previous paper. In this new model, a Brownian motion harnessing thermal energy is assumed to co-exist with the deterministic swing of the lever arm, yielding an excellent fit to the set of data obtained by some of us on the sliding of Myosin II heads on immobilized actin filaments under various load conditions. Our theoretical arguments are complemented by accurate numerical simulations, and the robustness of the model is tested via different choices of parameters and potential profiles.
2306.01824
Le Zhang
Le Zhang, Jiayang Chen, Tao Shen, Yu Li, Siqi Sun
Enhancing the Protein Tertiary Structure Prediction by Multiple Sequence Alignment Generation
null
null
null
null
q-bio.QM cs.CE cs.LG q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The field of protein folding research has been greatly advanced by deep learning methods, with AlphaFold2 (AF2) demonstrating exceptional performance and atomic-level precision. As co-evolution is integral to protein structure prediction, AF2's accuracy is significantly influenced by the depth of the multiple sequence alignment (MSA), which requires extensive exploration of a large protein database for similar sequences. However, not all protein sequences possess abundant homologous families, and consequently, AF2's performance can degrade on such queries, at times failing to produce meaningful results. To address this, we introduce a novel generative language model, MSA-Augmenter, which leverages protein-specific attention mechanisms and large-scale MSAs to generate useful, novel protein sequences not currently found in databases. These sequences supplement shallow MSAs, enhancing the accuracy of structural property predictions. Our experiments on CASP14 demonstrate that MSA-Augmenter can generate de novo sequences that retain co-evolutionary information from inferior MSAs, thereby improving protein structure prediction quality on top of the strong AF2 baseline.
[ { "created": "Fri, 2 Jun 2023 14:13:50 GMT", "version": "v1" } ]
2023-06-06
[ [ "Zhang", "Le", "" ], [ "Chen", "Jiayang", "" ], [ "Shen", "Tao", "" ], [ "Li", "Yu", "" ], [ "Sun", "Siqi", "" ] ]
The field of protein folding research has been greatly advanced by deep learning methods, with AlphaFold2 (AF2) demonstrating exceptional performance and atomic-level precision. As co-evolution is integral to protein structure prediction, AF2's accuracy is significantly influenced by the depth of multiple sequence alignment (MSA), which requires extensive exploration of a large protein database for similar sequences. However, not all protein sequences possess abundant homologous families, and consequently, AF2's performance can degrade on such queries, at times failing to produce meaningful results. To address this, we introduce a novel generative language model, MSA-Augmenter, which leverages protein-specific attention mechanisms and large-scale MSAs to generate useful, novel protein sequences not currently found in databases. These sequences supplement shallow MSAs, enhancing the accuracy of structural property predictions. Our experiments on CASP14 demonstrate that MSA-Augmenter can generate de novo sequences that retain co-evolutionary information from inferior MSAs, thereby improving protein structure prediction quality on top of strong AF2.
1206.4084
Paul Gardner
Marc P. Hoeppner, Lars E. Barquist and Paul P. Gardner
An Introduction to RNA Databases
27 pages, 10 figures, 1 table. Submitted as a chapter for "An introduction to RNA bioinformatics" to be published by "Methods in Molecular Biology"
Methods in Molecular Biology 1097:107-123 (2013)
10.1007/978-1-62703-709-9_6
null
q-bio.BM q-bio.GN
http://creativecommons.org/licenses/by/3.0/
We present an introduction to RNA databases. The history and technology behind RNA databases are briefly discussed. We examine differing methods of data collection and curation, and discuss their impact on both the scope and accuracy of the resulting databases. Finally, we demonstrate these principles through detailed examination of four leading RNA databases: Noncode, miRBase, Rfam, and SILVA.
[ { "created": "Mon, 18 Jun 2012 21:48:47 GMT", "version": "v1" } ]
2014-07-01
[ [ "Hoeppner", "Marc P.", "" ], [ "Barquist", "Lars E.", "" ], [ "Gardner", "Paul P.", "" ] ]
We present an introduction to RNA databases. The history and technology behind RNA databases are briefly discussed. We examine differing methods of data collection and curation, and discuss their impact on both the scope and accuracy of the resulting databases. Finally, we demonstrate these principles through detailed examination of four leading RNA databases: Noncode, miRBase, Rfam, and SILVA.
1811.06866
Ulisse Ferrari
Ulisse Ferrari, St\'ephane Deny, Abhishek Sengupta, Romain Caplette, Jos\'e-Alain Sahel, Deniz Dalkara, Serge Picaud, Jens Duebel, Olivier Marre
Optogenetic vision restoration with high resolution
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The majority of inherited retinal degenerations are due to photoreceptor cell death. In many cases ganglion cells are spared, making it possible to stimulate them to restore visual function. Several studies (Bi et al., 2006; Lin et al., 2008; Sengupta et al., 2016; Caporale et al., 2011; Berry et al., 2017) have shown that it is possible to express an optogenetic protein in ganglion cells and make them light sensitive. This is a promising strategy to restore vision since optical targeting may be more precise than electrical stimulation with a retinal prosthesis. However, the spatial resolution of optogenetically-reactivated retinas has not been measured with fine-grained stimulation patterns. Since the optogenetic protein is also expressed in axons, it is unclear if these neurons will only be sensitive to the stimulation of a small region covering their somas and dendrites, or if they will also respond to any stimulation overlapping with their axon, dramatically impairing spatial resolution. Here we recorded responses of mouse and macaque retinas to random checkerboard patterns following an in vivo optogenetic therapy. We show that optogenetically activated ganglion cells are each sensitive to a small region of visual space. A simple model based on this small receptive field accurately predicted their responses to complex stimuli. From this model, we simulated how the entire population of light-sensitive ganglion cells would respond to letters of different sizes. We then estimated the maximal acuity expected for a patient, assuming optimal use of the information delivered by this reactivated retina. The obtained acuity is above the limit of legal blindness. This high spatial resolution is a promising result for future clinical studies.
[ { "created": "Fri, 16 Nov 2018 15:41:19 GMT", "version": "v1" } ]
2018-11-19
[ [ "Ferrari", "Ulisse", "" ], [ "Deny", "Stéphane", "" ], [ "Sengupta", "Abhishek", "" ], [ "Caplette", "Romain", "" ], [ "Sahel", "José-Alain", "" ], [ "Dalkara", "Deniz", "" ], [ "Picaud", "Serge", "" ], [ "Duebel", "Jens", "" ], [ "Marre", "Olivier", "" ] ]
The majority of inherited retinal degenerations are due to photoreceptor cell death. In many cases ganglion cells are spared, making it possible to stimulate them to restore visual function. Several studies (Bi et al., 2006; Lin et al., 2008; Sengupta et al., 2016; Caporale et al., 2011; Berry et al., 2017) have shown that it is possible to express an optogenetic protein in ganglion cells and make them light sensitive. This is a promising strategy to restore vision since optical targeting may be more precise than electrical stimulation with a retinal prosthesis. However, the spatial resolution of optogenetically-reactivated retinas has not been measured with fine-grained stimulation patterns. Since the optogenetic protein is also expressed in axons, it is unclear if these neurons will only be sensitive to the stimulation of a small region covering their somas and dendrites, or if they will also respond to any stimulation overlapping with their axon, dramatically impairing spatial resolution. Here we recorded responses of mouse and macaque retinas to random checkerboard patterns following an in vivo optogenetic therapy. We show that optogenetically activated ganglion cells are each sensitive to a small region of visual space. A simple model based on this small receptive field accurately predicted their responses to complex stimuli. From this model, we simulated how the entire population of light-sensitive ganglion cells would respond to letters of different sizes. We then estimated the maximal acuity expected for a patient, assuming optimal use of the information delivered by this reactivated retina. The obtained acuity is above the limit of legal blindness. This high spatial resolution is a promising result for future clinical studies.
2406.18535
Zhuo Chen
Jinzhe Liu, Xiangsheng Huang, Zhuo Chen, Yin Fang
DRAK: Unlocking Molecular Insights with Domain-Specific Retrieval-Augmented Knowledge in LLMs
Ongoing work; 11 pages, 6 Figures, 2 Tables
null
null
null
q-bio.BM cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) encounter challenges with the unique syntax of specific domains, such as biomolecules. Existing fine-tuning or modality alignment techniques struggle to bridge the domain knowledge gap and understand complex molecular data, limiting LLMs' progress in specialized fields. To overcome these limitations, we propose an expandable and adaptable non-parametric knowledge injection framework named Domain-specific Retrieval-Augmented Knowledge (DRAK), aimed at enhancing reasoning capabilities in specific domains. Utilizing knowledge-aware prompts and gold label-induced reasoning, DRAK has developed profound expertise in the molecular domain and the capability to handle a broad spectrum of analysis tasks. We evaluated two distinct forms of DRAK variants, proving that DRAK exceeds previous benchmarks on six molecular tasks within the Mol-Instructions dataset. Extensive experiments have underscored DRAK's formidable performance and its potential to unlock molecular insights, offering a unified paradigm for LLMs to tackle knowledge-intensive tasks in specific domains. Our code will be available soon.
[ { "created": "Mon, 4 Mar 2024 15:04:05 GMT", "version": "v1" } ]
2024-06-28
[ [ "Liu", "Jinzhe", "" ], [ "Huang", "Xiangsheng", "" ], [ "Chen", "Zhuo", "" ], [ "Fang", "Yin", "" ] ]
Large Language Models (LLMs) encounter challenges with the unique syntax of specific domains, such as biomolecules. Existing fine-tuning or modality alignment techniques struggle to bridge the domain knowledge gap and understand complex molecular data, limiting LLMs' progress in specialized fields. To overcome these limitations, we propose an expandable and adaptable non-parametric knowledge injection framework named Domain-specific Retrieval-Augmented Knowledge (DRAK), aimed at enhancing reasoning capabilities in specific domains. Utilizing knowledge-aware prompts and gold label-induced reasoning, DRAK has developed profound expertise in the molecular domain and the capability to handle a broad spectrum of analysis tasks. We evaluated two distinct forms of DRAK variants, proving that DRAK exceeds previous benchmarks on six molecular tasks within the Mol-Instructions dataset. Extensive experiments have underscored DRAK's formidable performance and its potential to unlock molecular insights, offering a unified paradigm for LLMs to tackle knowledge-intensive tasks in specific domains. Our code will be available soon.
2402.12210
Charlotte Dugourd-Camus
Charlotte Dugourd-Camus and Claudia P. Ferreira and Mostafa Adimy
Modeling the mechanisms of antibody mixtures in viral infections: the cases of sequential homologous and heterologous dengue infections
21 pages, 3 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Antibodies play an essential role in the immune response to viral infections, vaccination, or antibody therapy. Nevertheless, they can be either protective or harmful during the immune response. In addition, competition or cooperation between antibodies, when mixed, can enhance or reduce this protective or harmful effect. Using the laws of chemical reactions to model the binding of antibodies to antigens and their actions to neutralize or enhance infection, we propose a new approach to modeling the activity of the antigen-antibody complex. The resulting expression covers not only purely competitive or purely independent binding between antibodies but also synergistic binding which, depending on the type of antibody, can promote either neutralization or enhancement of the viral activity. We then integrate this expression of viral activity in a within-host model, involving both healthy and infected target cells, virus replication, and the production of two antibodies during sequential infections. We investigate the existence of steady states (disease-free and endemic) and their local and global asymptotic stability. We complete our study with numerical simulations to illustrate different scenarios, in particular the scenario where both antibodies are neutralizing (homologous sequential DENV infection) and the scenario where one antibody is neutralizing and the other enhancing (heterologous sequential DENV infection). Our model indicates that efficient viral neutralization is associated with purely independent antibody binding, whereas strong enhancement of viral activity is expected in the case of purely competitive antibody binding. The model developed here has several potential applications for a variety of viral infections involving different antibody molecules. It could also be useful for studying the efficacy of vaccines and antibody-based therapies, for example for HIV.
[ { "created": "Mon, 19 Feb 2024 15:13:31 GMT", "version": "v1" } ]
2024-02-20
[ [ "Dugourd-Camus", "Charlotte", "" ], [ "Ferreira", "Claudia P.", "" ], [ "Adimy", "Mostafa", "" ] ]
Antibodies play an essential role in the immune response to viral infections, vaccination, or antibody therapy. Nevertheless, they can be either protective or harmful during the immune response. In addition, competition or cooperation between antibodies, when mixed, can enhance or reduce this protective or harmful effect. Using the laws of chemical reactions to model the binding of antibodies to antigens and their actions to neutralize or enhance infection, we propose a new approach to modeling the activity of the antigen-antibody complex. The resulting expression covers not only purely competitive or purely independent binding between antibodies but also synergistic binding which, depending on the type of antibody, can promote either neutralization or enhancement of the viral activity. We then integrate this expression of viral activity in a within-host model, involving both healthy and infected target cells, virus replication, and the production of two antibodies during sequential infections. We investigate the existence of steady states (disease-free and endemic) and their local and global asymptotic stability. We complete our study with numerical simulations to illustrate different scenarios, in particular the scenario where both antibodies are neutralizing (homologous sequential DENV infection) and the scenario where one antibody is neutralizing and the other enhancing (heterologous sequential DENV infection). Our model indicates that efficient viral neutralization is associated with purely independent antibody binding, whereas strong enhancement of viral activity is expected in the case of purely competitive antibody binding. The model developed here has several potential applications for a variety of viral infections involving different antibody molecules. It could also be useful for studying the efficacy of vaccines and antibody-based therapies, for example for HIV.
1712.05606
Venkat Bokka
Venkat Bokka, Abhishek Dey, Shaunak Sen
Period-Amplitude Co-variation in Biomolecular Oscillators
null
null
10.1049/iet-syb.2018.0015
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The period and amplitude of biomolecular oscillators are functionally important properties in multiple contexts. For a biomolecular oscillator, the overall constraints on how tuning of amplitude affects period, and vice versa, are generally unclear. Here we investigate this co-variation of the period and amplitude in mathematical models of biomolecular oscillators using both simulations and analytical approximations. We computed the amplitude-period co-variation of eleven benchmark biomolecular oscillators as their parameters were individually varied around a nominal value, classifying the various co-variation patterns such as a simultaneous increase/decrease in period and amplitude. Next, we repeated the classification using a power norm-based amplitude metric, to account for the amplitudes of the many biomolecular species that may be part of the oscillations, finding largely similar trends. Finally, we calculated "scaling laws" of period-amplitude co-variation for a subset of these benchmark oscillators, finding that as the approximated period increases, the upper bound of the amplitude increases or reaches a constant value. Based on these results, we discuss the effect of different parameters on the type of period-amplitude co-variation as well as the difficulty in achieving an oscillation with large amplitude and small period.
[ { "created": "Fri, 15 Dec 2017 10:29:30 GMT", "version": "v1" }, { "created": "Mon, 5 Mar 2018 10:05:10 GMT", "version": "v2" }, { "created": "Tue, 10 Apr 2018 06:01:56 GMT", "version": "v3" } ]
2018-04-11
[ [ "Bokka", "Venkat", "" ], [ "Dey", "Abhishek", "" ], [ "Sen", "Shaunak", "" ] ]
The period and amplitude of biomolecular oscillators are functionally important properties in multiple contexts. For a biomolecular oscillator, the overall constraints on how tuning of amplitude affects period, and vice versa, are generally unclear. Here we investigate this co-variation of the period and amplitude in mathematical models of biomolecular oscillators using both simulations and analytical approximations. We computed the amplitude-period co-variation of eleven benchmark biomolecular oscillators as their parameters were individually varied around a nominal value, classifying the various co-variation patterns such as a simultaneous increase/decrease in period and amplitude. Next, we repeated the classification using a power norm-based amplitude metric, to account for the amplitudes of the many biomolecular species that may be part of the oscillations, finding largely similar trends. Finally, we calculated "scaling laws" of period-amplitude co-variation for a subset of these benchmark oscillators, finding that as the approximated period increases, the upper bound of the amplitude increases or reaches a constant value. Based on these results, we discuss the effect of different parameters on the type of period-amplitude co-variation as well as the difficulty in achieving an oscillation with large amplitude and small period.
2310.07908
Keith Murray
Keith T. Murray
Recurrent networks recognize patterns with low-dimensional oscillations
7 pages, 8 figures
null
null
null
q-bio.NC cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study proposes a novel dynamical mechanism for pattern recognition discovered by interpreting a recurrent neural network (RNN) trained on a simple task inspired by the SET card game. We interpreted the trained RNN as recognizing patterns via phase shifts in a low-dimensional limit cycle in a manner analogous to transitions in a finite state automaton (FSA). We further validated this interpretation by handcrafting a simple oscillatory model that reproduces the dynamics of the trained RNN. Our findings not only suggest a potential dynamical mechanism capable of pattern recognition, but also suggest a potential neural implementation of FSA. Above all, this work contributes to the growing discourse on deep learning model interpretability.
[ { "created": "Wed, 11 Oct 2023 21:25:12 GMT", "version": "v1" } ]
2023-10-13
[ [ "Murray", "Keith T.", "" ] ]
This study proposes a novel dynamical mechanism for pattern recognition discovered by interpreting a recurrent neural network (RNN) trained on a simple task inspired by the SET card game. We interpreted the trained RNN as recognizing patterns via phase shifts in a low-dimensional limit cycle in a manner analogous to transitions in a finite state automaton (FSA). We further validated this interpretation by handcrafting a simple oscillatory model that reproduces the dynamics of the trained RNN. Our findings not only suggest a potential dynamical mechanism capable of pattern recognition, but also suggest a potential neural implementation of FSA. Above all, this work contributes to the growing discourse on deep learning model interpretability.