Dataset schema (column — type, value range):

id — stringlengths, 9 to 13
submitter — stringlengths, 4 to 48
authors — stringlengths, 4 to 9.62k
title — stringlengths, 4 to 343
comments — stringlengths, 2 to 480
journal-ref — stringlengths, 9 to 309
doi — stringlengths, 12 to 138
report-no — stringclasses, 277 values
categories — stringlengths, 8 to 87
license — stringclasses, 9 values
orig_abstract — stringlengths, 27 to 3.76k
versions — listlengths, 1 to 15
update_date — stringlengths, 10 to 10
authors_parsed — listlengths, 1 to 147
abstract — stringlengths, 24 to 3.75k
2004.00498
Jordi Garcia-Ojalvo
Miguel A. Casal, Santiago Galella, Oscar Vilarroya and Jordi Garcia-Ojalvo
Soft-wired long-term memory in a natural recurrent neuronal network
16 pages, 6 figures
null
10.1063/5.0009709
null
q-bio.NC cond-mat.dis-nn nlin.CD physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal networks provide living organisms with the ability to process information. They are also characterized by abundant recurrent connections, which give rise to strong feedback that dictates their dynamics and endows them with fading (short-term) memory. The role of recurrence in long-term memory, on the other hand, is still unclear. Here we use the neuronal network of the roundworm C. elegans to show that recurrent architectures in living organisms can exhibit long-term memory without relying on specific hard-wired modules. A genetic algorithm reveals that the experimentally observed dynamics of the worm's neuronal network exhibits maximal complexity (as measured by permutation entropy). In that complex regime, the response of the system to repeated presentations of a time-varying stimulus reveals a consistent behavior that can be interpreted as soft-wired long-term memory.
[ { "created": "Wed, 1 Apr 2020 15:22:36 GMT", "version": "v1" } ]
2020-06-24
[ [ "Casal", "Miguel A.", "" ], [ "Galella", "Santiago", "" ], [ "Vilarroya", "Oscar", "" ], [ "Garcia-Ojalvo", "Jordi", "" ] ]
Neuronal networks provide living organisms with the ability to process information. They are also characterized by abundant recurrent connections, which give rise to strong feedback that dictates their dynamics and endows them with fading (short-term) memory. The role of recurrence in long-term memory, on the other hand, is still unclear. Here we use the neuronal network of the roundworm C. elegans to show that recurrent architectures in living organisms can exhibit long-term memory without relying on specific hard-wired modules. A genetic algorithm reveals that the experimentally observed dynamics of the worm's neuronal network exhibits maximal complexity (as measured by permutation entropy). In that complex regime, the response of the system to repeated presentations of a time-varying stimulus reveals a consistent behavior that can be interpreted as soft-wired long-term memory.
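The record above measures network complexity via permutation entropy. As a hedged illustration (not the authors' code, and the function name and defaults are assumptions), a minimal Bandt-Pompe permutation entropy in Python might look like:

```python
from math import log2

def permutation_entropy(series, order=3):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series.

    0 means fully predictable ordering; 1 means maximal complexity.
    """
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Ordinal pattern: the argsort of the window (ties broken by index).
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    n_patterns = 1
    for k in range(2, order + 1):  # order! possible ordinal patterns
        n_patterns *= k
    return h / log2(n_patterns)
```

A monotone series yields 0; a series visiting all ordinal patterns equally often yields the maximal value 1.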
1606.01372
John R. Helliwell
John R. Helliwell, Simon W. M. Tanley, Antoine M. M. Schreurs and Loes M. J. Kroon-Batenburg
Cisplatin coordination chemistry determination at hen egg white lysozyme His15 with ligand distances and angles, and their standard uncertainties, and also reporting a split occupancy effect
10 pages. One figure with two stereos and two further separate (identical to the stereos) figures with atom labels. This arXiv article is the first part of a Data Review
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following the interest of L. Messori and A. Merlino (2016, Coordination Chemistry Reviews) in the platinum ion coordination geometries in our PDB entries 4dd4 and 4dd6, we have extended our original analyses.
[ { "created": "Sat, 4 Jun 2016 11:36:03 GMT", "version": "v1" } ]
2016-06-07
[ [ "Helliwell", "John R.", "" ], [ "Tanley", "Simon W. M.", "" ], [ "Schreurs", "Antoine M. M.", "" ], [ "Kroon-Batenburg", "Loes M. J.", "" ] ]
Following the interest of L. Messori and A. Merlino (2016, Coordination Chemistry Reviews) in the platinum ion coordination geometries in our PDB entries 4dd4 and 4dd6, we have extended our original analyses.
1511.07313
Antti Niemi
Jin Dai, Antti J. Niemi, Jianfeng He, Adam Sieradzan, Nevena Ilieva
Bloch spin waves and emergent structure in protein folding with HIV envelope glycoprotein as an example
20 pages 29 figures
Phys. Rev. E 93, 032409 (2016)
10.1103/PhysRevE.93.032409
null
q-bio.BM cond-mat.soft nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We inquire how structure emerges during the process of protein folding. For this we scrutinise collective many-atom motions during all-atom molecular dynamics simulations. We introduce, develop and employ various topological techniques, in combination with analytic tools that we deduce from the concept of integrable models and the structure of the discrete nonlinear Schroedinger equation. The example we consider is an alpha-helical subunit of the HIV envelope glycoprotein gp41. The helical structure is stable when the subunit is part of the biological oligomer. But in isolation the helix becomes unstable, and the monomer starts deforming. We follow the process computationally. We interpret the evolving structure both in terms of a backbone based Heisenberg spin chain and in terms of a side chain based XY spin chain. We find that in both cases the formation of protein super-secondary structure is akin to the formation of a topological Bloch domain wall along a spin chain. During the process we identify three individual Bloch walls and we show that each of them can be modelled with very high precision in terms of a soliton solution to a discrete nonlinear Schroedinger equation.
[ { "created": "Mon, 23 Nov 2015 17:03:00 GMT", "version": "v1" } ]
2016-04-06
[ [ "Dai", "Jin", "" ], [ "Niemi", "Antti J.", "" ], [ "He", "Jianfeng", "" ], [ "Sieradzan", "Adam", "" ], [ "Ilieva", "Nevena", "" ] ]
We inquire how structure emerges during the process of protein folding. For this we scrutinise collective many-atom motions during all-atom molecular dynamics simulations. We introduce, develop and employ various topological techniques, in combination with analytic tools that we deduce from the concept of integrable models and the structure of the discrete nonlinear Schroedinger equation. The example we consider is an alpha-helical subunit of the HIV envelope glycoprotein gp41. The helical structure is stable when the subunit is part of the biological oligomer. But in isolation the helix becomes unstable, and the monomer starts deforming. We follow the process computationally. We interpret the evolving structure both in terms of a backbone based Heisenberg spin chain and in terms of a side chain based XY spin chain. We find that in both cases the formation of protein super-secondary structure is akin to the formation of a topological Bloch domain wall along a spin chain. During the process we identify three individual Bloch walls and we show that each of them can be modelled with very high precision in terms of a soliton solution to a discrete nonlinear Schroedinger equation.
q-bio/0408025
Alexey K. Mazur
Alexey K. Mazur
Sequence-dependent B-A transitions in DNA in silico: Electrostatic condensation mechanism
12 pages, 6 figures, RevTeX4
null
null
null
q-bio.BM physics.bio-ph
null
The dynamics of the polymorphic A<->B transitions in DNA are compared for two polypurine sequences, poly(dA).poly(dT) and poly(dG).poly(dC), long known to exhibit contrasting properties in experiments. In free molecular dynamics simulations reversible transitions are induced by changing the size of a water drop around DNA neutralized by Na ions. In poly(dG).poly(dC) the B<->A transitions are easy, smooth and perfectly reversible. In contrast, a B->A transition in the poly(dA).poly(dT) dodecamer fragment could not be obtained even though its A-form is stable under low hydration. Normal B->A transitions are observed, however, in long poly(dA).poly(dT) stretches flanked by GC pairs. An intermediate range of hydration numbers is identified where opposite transitions are observed in the two dodecamer fragments, namely, A->B in poly(dA).poly(dT) and B->A in poly(dG).poly(dC). With hydration numbers close to the stability limit of the B-form, the two sequences exhibit qualitatively different counterion distributions, with a characteristic accumulation of Na ions next to the opening of the minor groove in poly(dA).poly(dT). This difference can explain an increased persistence of poly(dA).poly(dT) DNA towards the A-form in crystalline and amorphous fibers as compared to solution conditions. The good overall agreement with experimental data corroborates the general role of the electrostatic condensation mechanism in the A/B polymorphism in DNA.
[ { "created": "Sat, 28 Aug 2004 16:42:19 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mazur", "Alexey K.", "" ] ]
The dynamics of the polymorphic A<->B transitions in DNA are compared for two polypurine sequences, poly(dA).poly(dT) and poly(dG).poly(dC), long known to exhibit contrasting properties in experiments. In free molecular dynamics simulations reversible transitions are induced by changing the size of a water drop around DNA neutralized by Na ions. In poly(dG).poly(dC) the B<->A transitions are easy, smooth and perfectly reversible. In contrast, a B->A transition in the poly(dA).poly(dT) dodecamer fragment could not be obtained even though its A-form is stable under low hydration. Normal B->A transitions are observed, however, in long poly(dA).poly(dT) stretches flanked by GC pairs. An intermediate range of hydration numbers is identified where opposite transitions are observed in the two dodecamer fragments, namely, A->B in poly(dA).poly(dT) and B->A in poly(dG).poly(dC). With hydration numbers close to the stability limit of the B-form, the two sequences exhibit qualitatively different counterion distributions, with a characteristic accumulation of Na ions next to the opening of the minor groove in poly(dA).poly(dT). This difference can explain an increased persistence of poly(dA).poly(dT) DNA towards the A-form in crystalline and amorphous fibers as compared to solution conditions. The good overall agreement with experimental data corroborates the general role of the electrostatic condensation mechanism in the A/B polymorphism in DNA.
0803.0850
Thierry Rabilloud
Estelle Rousselet (LCBM), Alain Martelli (LCBM), Mireille Chevallet (BBSI), H\'el\`ene Diemer (IPHC), Alain Van Dorssealer (IPHC), Thierry Rabilloud (BBSI), Jean-Marc Moulis (LCBM)
Zinc adaptation and resistance to cadmium toxicity in mammalian cells. Molecular insight by proteomic analysis
in press in Proteomics
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To identify proteins involved in cellular adaptive responses to zinc, a comparative proteome analysis between a previously developed high zinc- and cadmium-resistant human epithelial cell line (HZR) and the parental HeLa cells has been carried out. Differentially produced proteins included co-chaperones, proteins associated with oxido-reductase activities, and ubiquitin. Biochemical pathways to which these proteins belong were probed for their involvement in the resistance of both cell lines against cadmium toxicity. Among endoplasmic reticulum stressors, thapsigargin sensitized HZR cells, but not HeLa cells, to cadmium toxicity more acutely than tunicamycin, implying that these cells heavily relied on proper intracellular calcium distribution. The similar sensitivity of both HeLa and HZR cells to inhibitors of the proteasome, such as MG-132 or lactacystin, excluded improved proteasome activity as a mechanism associated with zinc adaptation of HZR cells. The enzyme 4-hydroxyphenylpyruvate dioxygenase was overproduced in HZR cells as compared to HeLa cells. It transforms 4-hydroxyphenylpyruvate to homogentisate in the second step of tyrosine catabolism. Inhibition of 4-hydroxyphenylpyruvate dioxygenase decreased the resistance of HZR cells against cadmium, but not that of HeLa cells, suggesting that adaptation to zinc overload and increased 4-hydroxyphenylpyruvate removal are linked in HZR cells.
[ { "created": "Thu, 6 Mar 2008 12:40:10 GMT", "version": "v1" } ]
2008-12-18
[ [ "Rousselet", "Estelle", "", "LCBM" ], [ "Martelli", "Alain", "", "LCBM" ], [ "Chevallet", "Mireille", "", "BBSI" ], [ "Diemer", "Hélène", "", "IPHC" ], [ "Van Dorssealer", "Alain", "", "IPHC" ], [ "Rabilloud", "Thierry", "", "BBSI" ], [ "Moulis", "Jean-Marc", "", "LCBM" ] ]
To identify proteins involved in cellular adaptive responses to zinc, a comparative proteome analysis between a previously developed high zinc- and cadmium-resistant human epithelial cell line (HZR) and the parental HeLa cells has been carried out. Differentially produced proteins included co-chaperones, proteins associated with oxido-reductase activities, and ubiquitin. Biochemical pathways to which these proteins belong were probed for their involvement in the resistance of both cell lines against cadmium toxicity. Among endoplasmic reticulum stressors, thapsigargin sensitized HZR cells, but not HeLa cells, to cadmium toxicity more acutely than tunicamycin, implying that these cells heavily relied on proper intracellular calcium distribution. The similar sensitivity of both HeLa and HZR cells to inhibitors of the proteasome, such as MG-132 or lactacystin, excluded improved proteasome activity as a mechanism associated with zinc adaptation of HZR cells. The enzyme 4-hydroxyphenylpyruvate dioxygenase was overproduced in HZR cells as compared to HeLa cells. It transforms 4-hydroxyphenylpyruvate to homogentisate in the second step of tyrosine catabolism. Inhibition of 4-hydroxyphenylpyruvate dioxygenase decreased the resistance of HZR cells against cadmium, but not that of HeLa cells, suggesting that adaptation to zinc overload and increased 4-hydroxyphenylpyruvate removal are linked in HZR cells.
q-bio/0402026
John Hertz
Alexander Lerchner, Mandana Ahmadi and John Hertz
High conductance states in a mean field cortical network model
7 pages, 3 figures, presented at CNS 2003, to be published in Neurocomputing
null
null
2004-14
q-bio.NC
null
Measured responses from visual cortical neurons show that spike times tend to be correlated rather than exactly Poisson distributed. Fano factors vary and are usually greater than 1 due to the tendency of spikes being clustered into bursts. We show that this behavior emerges naturally in a balanced cortical network model with random connectivity and conductance-based synapses. We employ mean field theory with correctly colored noise to describe temporal correlations in the neuronal activity. Our results illuminate the connection between two independent experimental findings: high conductance states of cortical neurons in their natural environment, and variable non-Poissonian spike statistics with Fano factors greater than 1.
[ { "created": "Wed, 11 Feb 2004 21:00:07 GMT", "version": "v1" } ]
2007-05-23
[ [ "Lerchner", "Alexander", "" ], [ "Ahmadi", "Mandana", "" ], [ "Hertz", "John", "" ] ]
Measured responses from visual cortical neurons show that spike times tend to be correlated rather than exactly Poisson distributed. Fano factors vary and are usually greater than 1 due to the tendency of spikes being clustered into bursts. We show that this behavior emerges naturally in a balanced cortical network model with random connectivity and conductance-based synapses. We employ mean field theory with correctly colored noise to describe temporal correlations in the neuronal activity. Our results illuminate the connection between two independent experimental findings: high conductance states of cortical neurons in their natural environment, and variable non-Poissonian spike statistics with Fano factors greater than 1.
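The abstract above turns on the Fano factor of spike counts, with F = 1 for Poisson spiking and F > 1 for bursty spiking. A minimal sketch of that statistic (illustrative only; the function name is an assumption, not the authors' code):

```python
def fano_factor(spike_counts):
    """Fano factor = variance / mean of spike counts across trials.

    Poisson spiking gives F = 1; spikes clustered into bursts give F > 1.
    """
    n = len(spike_counts)
    mean = sum(spike_counts) / n
    # Population variance over the observed trials.
    var = sum((c - mean) ** 2 for c in spike_counts) / n
    return var / mean
```

Identical counts on every trial give F = 0, while counts concentrated in a few bursty trials push F above 1.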
2307.02680
Charles Puelz
Marshall Davey, Charles Puelz, Simone Rossi, Margaret Anne Smith, David R. Wells, Greg Sturgeon, W. Paul Segars, John P. Vavalle, Charles S. Peskin and Boyce E. Griffith
Simulating Cardiac Fluid Dynamics in the Human Heart
null
null
null
null
q-bio.TO cs.NA math.NA physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cardiac fluid dynamics fundamentally involves interactions between complex blood flows and the structural deformations of the muscular heart walls and the thin, flexible valve leaflets. There has been longstanding scientific, engineering, and medical interest in creating mathematical models of the heart that capture, explain, and predict these fluid-structure interactions. However, existing computational models that account for interactions among the blood, the actively contracting myocardium, and the cardiac valves are limited in their abilities to predict valve performance, resolve fine-scale flow features, or use realistic descriptions of tissue biomechanics. Here we introduce and benchmark a comprehensive mathematical model of cardiac fluid dynamics in the human heart. A unique feature of our model is that it incorporates biomechanically detailed descriptions of all major cardiac structures that are calibrated using tensile tests of human tissue specimens to reflect the heart's microstructure. Further, it is the first fluid-structure interaction model of the heart that provides anatomically and physiologically detailed representations of all four cardiac valves. We demonstrate that this integrative model generates physiologic dynamics, including realistic pressure-volume loops that automatically capture isovolumetric contraction and relaxation, and predicts fine-scale flow features. None of these outputs are prescribed; instead, they emerge from interactions within our comprehensive description of cardiac physiology. Such models can serve as tools for predicting the impacts of medical devices or clinical interventions. They also can serve as platforms for mechanistic studies of cardiac pathophysiology and dysfunction, including congenital defects, cardiomyopathies, and heart failure, that are difficult or impossible to perform in patients.
[ { "created": "Wed, 5 Jul 2023 22:43:59 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 14:50:13 GMT", "version": "v2" } ]
2023-10-25
[ [ "Davey", "Marshall", "" ], [ "Puelz", "Charles", "" ], [ "Rossi", "Simone", "" ], [ "Smith", "Margaret Anne", "" ], [ "Wells", "David R.", "" ], [ "Sturgeon", "Greg", "" ], [ "Segars", "W. Paul", "" ], [ "Vavalle", "John P.", "" ], [ "Peskin", "Charles S.", "" ], [ "Griffith", "Boyce E.", "" ] ]
Cardiac fluid dynamics fundamentally involves interactions between complex blood flows and the structural deformations of the muscular heart walls and the thin, flexible valve leaflets. There has been longstanding scientific, engineering, and medical interest in creating mathematical models of the heart that capture, explain, and predict these fluid-structure interactions. However, existing computational models that account for interactions among the blood, the actively contracting myocardium, and the cardiac valves are limited in their abilities to predict valve performance, resolve fine-scale flow features, or use realistic descriptions of tissue biomechanics. Here we introduce and benchmark a comprehensive mathematical model of cardiac fluid dynamics in the human heart. A unique feature of our model is that it incorporates biomechanically detailed descriptions of all major cardiac structures that are calibrated using tensile tests of human tissue specimens to reflect the heart's microstructure. Further, it is the first fluid-structure interaction model of the heart that provides anatomically and physiologically detailed representations of all four cardiac valves. We demonstrate that this integrative model generates physiologic dynamics, including realistic pressure-volume loops that automatically capture isovolumetric contraction and relaxation, and predicts fine-scale flow features. None of these outputs are prescribed; instead, they emerge from interactions within our comprehensive description of cardiac physiology. Such models can serve as tools for predicting the impacts of medical devices or clinical interventions. They also can serve as platforms for mechanistic studies of cardiac pathophysiology and dysfunction, including congenital defects, cardiomyopathies, and heart failure, that are difficult or impossible to perform in patients.
2309.11458
Samuel M. Fischer
Samuel M. Fischer, Xugao Wang, Andreas Huth
Distinguishing mature and immature trees allows to estimate forest carbon uptake from stand structure
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relating forest productivity to local variations in forest structure has been a long-standing challenge. Previous studies often focused on the connection between forest structure and stand-level photosynthesis (GPP). However, biomass production (NPP) and net ecosystem exchange (NEE) are also subject to respiration and other carbon losses, which vary with local conditions and life history traits. Here, we use a simulation approach to study how these losses impact forest productivity and reveal themselves in forest structure. We fit the process-based forest model Formind to a 25ha inventory of an old-growth temperate forest in China and classify trees as "mature" (full-grown) or "immature" based on their intrinsic carbon use efficiency. Our results reveal a strong negative connection between the stand-level carbon use efficiency and the prevalence of mature trees: GPP increases with the total basal area, whereas NPP and NEE are driven by the basal area of immature trees. Accordingly, the basal area entropy - a structural proxy for the prevalence of immature trees - correlated well with NPP and NEE and had higher predictive power than other structural characteristics such as Shannon diversity and height standard deviation. Our results were robust across spatial scales (0.04-1ha) and yield promising hypotheses for field studies and new theoretical work.
[ { "created": "Wed, 20 Sep 2023 16:52:54 GMT", "version": "v1" }, { "created": "Fri, 17 Nov 2023 12:32:51 GMT", "version": "v2" } ]
2023-11-20
[ [ "Fischer", "Samuel M.", "" ], [ "Wang", "Xugao", "" ], [ "Huth", "Andreas", "" ] ]
Relating forest productivity to local variations in forest structure has been a long-standing challenge. Previous studies often focused on the connection between forest structure and stand-level photosynthesis (GPP). However, biomass production (NPP) and net ecosystem exchange (NEE) are also subject to respiration and other carbon losses, which vary with local conditions and life history traits. Here, we use a simulation approach to study how these losses impact forest productivity and reveal themselves in forest structure. We fit the process-based forest model Formind to a 25ha inventory of an old-growth temperate forest in China and classify trees as "mature" (full-grown) or "immature" based on their intrinsic carbon use efficiency. Our results reveal a strong negative connection between the stand-level carbon use efficiency and the prevalence of mature trees: GPP increases with the total basal area, whereas NPP and NEE are driven by the basal area of immature trees. Accordingly, the basal area entropy - a structural proxy for the prevalence of immature trees - correlated well with NPP and NEE and had higher predictive power than other structural characteristics such as Shannon diversity and height standard deviation. Our results were robust across spatial scales (0.04-1ha) and yield promising hypotheses for field studies and new theoretical work.
0801.1296
Atul Narang
Jason T. Noel and Atul Narang
Gene regulation in continuous cultures: A unified theory for bacteria and yeasts
67 pages, 20 figures
null
null
null
q-bio.CB q-bio.MN
null
During batch growth on mixtures of two growth-limiting substrates, microbes consume the substrates either sequentially or simultaneously. These growth patterns are manifested in all types of bacteria and yeasts. The ubiquity of these growth patterns suggests that they are driven by a universal mechanism common to all microbial species. In previous work, we showed that a minimal model accounting only for enzyme induction and dilution explains the phenotypes observed in batch cultures of various wild-type and mutant/recombinant cells. Here, we examine the extension of the minimal model to continuous cultures. We show that: (1) Several enzymatic trends, usually attributed to specific regulatory mechanisms such as catabolite repression, are completely accounted for by dilution. (2) The bifurcation diagram of the minimal model for continuous cultures, which classifies the substrate consumption pattern at any given dilution rate and feed concentrations, provides a precise explanation for the empirically observed correlation between the growth patterns in batch and continuous cultures. (3) Numerical simulations of the model are in excellent agreement with the data. The model captures the variation of the steady state substrate concentrations, cell densities, and enzyme levels during the single- and mixed-substrate growth of bacteria and yeasts at various dilution rates and feed concentrations. (4) This variation is well-approximated by simple analytical expressions that furnish physical insights into the steady states of continuous cultures. The minimal model provides a framework for quantitating the effect of regulatory mechanisms. We illustrate this by analyzing several data sets from the literature.
[ { "created": "Tue, 8 Jan 2008 19:00:40 GMT", "version": "v1" } ]
2008-01-09
[ [ "Noel", "Jason T.", "" ], [ "Narang", "Atul", "" ] ]
During batch growth on mixtures of two growth-limiting substrates, microbes consume the substrates either sequentially or simultaneously. These growth patterns are manifested in all types of bacteria and yeasts. The ubiquity of these growth patterns suggests that they are driven by a universal mechanism common to all microbial species. In previous work, we showed that a minimal model accounting only for enzyme induction and dilution explains the phenotypes observed in batch cultures of various wild-type and mutant/recombinant cells. Here, we examine the extension of the minimal model to continuous cultures. We show that: (1) Several enzymatic trends, usually attributed to specific regulatory mechanisms such as catabolite repression, are completely accounted for by dilution. (2) The bifurcation diagram of the minimal model for continuous cultures, which classifies the substrate consumption pattern at any given dilution rate and feed concentrations, provides a precise explanation for the empirically observed correlation between the growth patterns in batch and continuous cultures. (3) Numerical simulations of the model are in excellent agreement with the data. The model captures the variation of the steady state substrate concentrations, cell densities, and enzyme levels during the single- and mixed-substrate growth of bacteria and yeasts at various dilution rates and feed concentrations. (4) This variation is well-approximated by simple analytical expressions that furnish physical insights into the steady states of continuous cultures. The minimal model provides a framework for quantitating the effect of regulatory mechanisms. We illustrate this by analyzing several data sets from the literature.
2206.02290
Sachin Gavali
Sachin Gavali, Karen Ross, Chuming Chen, Julie Cowart, Cathy H. Wu
A knowledge graph representation learning approach to predict novel kinase-substrate interactions
null
null
null
null
q-bio.QM cs.AI q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
The human proteome contains a vast network of interacting kinases and substrates. Even though some kinases have proven to be immensely useful as therapeutic targets, a majority are still understudied. In this work, we present a novel knowledge graph representation learning approach to predict novel interaction partners for understudied kinases. Our approach uses a phosphoproteomic knowledge graph constructed by integrating data from iPTMnet, Protein Ontology, Gene Ontology and BioKG. The representation of kinases and substrates in this knowledge graph are learned by performing directed random walks on triples coupled with a modified SkipGram or CBOW model. These representations are then used as an input to a supervised classification model to predict novel interactions for understudied kinases. We also present a post-predictive analysis of the predicted interactions and an ablation study of the phosphoproteomic knowledge graph to gain insight into the biology of the understudied kinases.
[ { "created": "Sun, 5 Jun 2022 23:55:40 GMT", "version": "v1" }, { "created": "Fri, 10 Jun 2022 00:09:23 GMT", "version": "v2" } ]
2022-06-13
[ [ "Gavali", "Sachin", "" ], [ "Ross", "Karen", "" ], [ "Chen", "Chuming", "" ], [ "Cowart", "Julie", "" ], [ "Wu", "Cathy H.", "" ] ]
The human proteome contains a vast network of interacting kinases and substrates. Even though some kinases have proven to be immensely useful as therapeutic targets, a majority are still understudied. In this work, we present a novel knowledge graph representation learning approach to predict novel interaction partners for understudied kinases. Our approach uses a phosphoproteomic knowledge graph constructed by integrating data from iPTMnet, Protein Ontology, Gene Ontology and BioKG. The representation of kinases and substrates in this knowledge graph are learned by performing directed random walks on triples coupled with a modified SkipGram or CBOW model. These representations are then used as an input to a supervised classification model to predict novel interactions for understudied kinases. We also present a post-predictive analysis of the predicted interactions and an ablation study of the phosphoproteomic knowledge graph to gain insight into the biology of the understudied kinases.
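The approach above feeds directed random walks over knowledge-graph triples into a SkipGram/CBOW model. A minimal sketch of the walk-generation step (the exact walk scheme, entity names, and parameters here are illustrative assumptions, not the paper's implementation):

```python
import random

def random_walks(triples, walk_length=4, walks_per_node=2, seed=0):
    """Generate directed walks over (head, relation, tail) triples.

    Each walk alternates entity and relation tokens, a common input
    format for word2vec-style KG embedding pipelines.
    """
    rng = random.Random(seed)
    out_edges = {}
    for h, r, t in triples:
        out_edges.setdefault(h, []).append((r, t))
    walks = []
    for start in out_edges:
        for _ in range(walks_per_node):
            walk, node = [start], start
            # Extend the walk by one (relation, entity) step at a time.
            while len(walk) < walk_length and node in out_edges:
                rel, nxt = rng.choice(out_edges[node])
                walk.extend([rel, nxt])
                node = nxt
            walks.append(walk)
    return walks
```

The resulting token sequences can then be handed to any SkipGram or CBOW trainer to learn entity and relation vectors.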
1607.05175
Philip Patton
Philip T. Patton, Krishna Pacifici and Jaime Collazo
Inferring habitat quality and habitat selection using static site occupancy models
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When evaluating the ecological value of land use within a landscape, investigators typically rely on measures of habitat selection and habitat quality. Traditional measures of habitat selection and habitat quality require data from resource intensive study designs (e.g., telemetry or mark-recapture). Often, managers must evaluate ecological value despite only having data from less resource intensive study designs. In this paper, we use occupancy data to measure habitat quality and habitat selection response for the Puerto Rican Vireo, an endemic songbird whose population growth is depressed by brood parasitism from the Shiny Cowbird. We were interested in how vireo habitat quality and vireo habitat selection varied among three land uses (forest, shaded coffee plantations, and sun coffee) in Puerto Rico. We estimated vireo occupancy probability as a measure of habitat selection, and the probability of cowbird occurrence given vireo presence as a measure of habitat quality. To estimate the latter, we explored different ways of modeling the occurrence of the two species jointly, and compared these models to independent models using measures of predictive performance. Vireos preferentially selected forested sites and shaded coffee sites over sun coffee sites. By our measure of habitat quality, either type of coffee plantation was poor quality, and the forested sites were high quality. This suggests that shade coffee may be an ecological trap for the vireo in our study area. One joint model performed best by our measures of predictive ability, thus showing that the cowbird may not occur independently of the vireo. Vireo population dynamics in our study area may benefit from having large amounts of forest relative to coffee plantations. Incorporating species interactions into occupancy models has the potential to improve monitoring for conservation.
[ { "created": "Mon, 18 Jul 2016 16:46:24 GMT", "version": "v1" } ]
2016-07-19
[ [ "Patton", "Philip T.", "" ], [ "Pacifici", "Krishna", "" ], [ "Collazo", "Jaime", "" ] ]
When evaluating the ecological value of land use within a landscape, investigators typically rely on measures of habitat selection and habitat quality. Traditional measures of habitat selection and habitat quality require data from resource intensive study designs (e.g., telemetry or mark-recapture). Often, managers must evaluate ecological value despite only having data from less resource intensive study designs. In this paper, we use occupancy data to measure habitat quality and habitat selection response for the Puerto Rican Vireo, an endemic songbird whose population growth is depressed by brood parasitism from the Shiny Cowbird. We were interested in how vireo habitat quality and vireo habitat selection varied among three land uses (forest, shaded coffee plantations, and sun coffee) in Puerto Rico. We estimated vireo occupancy probability as a measure of habitat selection, and the probability of cowbird occurrence given vireo presence as a measure of habitat quality. To estimate the latter, we explored different ways of modeling the occurrence of the two species jointly, and compared these models to independent models using measures of predictive performance. Vireos preferentially selected forested sites and shaded coffee sites over sun coffee sites. By our measure of habitat quality, either type of coffee plantation was poor quality, and the forested sites were high quality. This suggests that shade coffee may be an ecological trap for the vireo in our study area. One joint model performed best by our measures of predictive ability, thus showing that the cowbird may not occur independently of the vireo. Vireo population dynamics in our study area may benefit from having large amounts of forest relative to coffee plantations. Incorporating species interactions into occupancy models has the potential to improve monitoring for conservation.
2203.06138
Taylor Petty
Taylor Petty, Jan Hannig, Tunde I Huszar, Hari Iyer
A New String Edit Distance and Applications
24 pages, 3 figures. Submitted to MDPI Algorithms
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
String edit distances have been used for decades in applications ranging from spelling correction and web search suggestions to DNA analysis. Most string edit distances are variations of the Levenshtein distance and consider only single-character edits. In forensic applications polymorphic genetic markers such as short tandem repeats (STRs) are used. At these repetitive motifs the DNA copying errors consist of more than just single base differences. More often the phenomenon of ``stutter'' is observed, where the number of repeated units differs (by whole units) from the template. To adapt the Levenshtein distance to be suitable for forensic applications where DNA sequence similarity is of interest, a generalized string edit distance is defined that accommodates the addition or deletion of whole motifs in addition to single-nucleotide edits. A dynamic programming implementation is developed for computing this distance between sequences. The novelty of this algorithm is in handling the complex interactions that arise between multiple- and single-character edits. Forensic examples illustrate the purpose and use of the Restricted Forensic Levenshtein (RFL) distance measure, but applications extend to sequence alignment and string similarity in other biological areas, as well as dynamic programming algorithms more broadly.
[ { "created": "Fri, 11 Mar 2022 18:10:49 GMT", "version": "v1" }, { "created": "Thu, 17 Mar 2022 21:33:17 GMT", "version": "v2" }, { "created": "Wed, 11 May 2022 17:27:26 GMT", "version": "v3" } ]
2022-05-12
[ [ "Petty", "Taylor", "" ], [ "Hannig", "Jan", "" ], [ "Huszar", "Tunde I", "" ], [ "Iyer", "Hari", "" ] ]
String edit distances have been used for decades in applications ranging from spelling correction and web search suggestions to DNA analysis. Most string edit distances are variations of the Levenshtein distance and consider only single-character edits. In forensic applications polymorphic genetic markers such as short tandem repeats (STRs) are used. At these repetitive motifs the DNA copying errors consist of more than just single base differences. More often the phenomenon of ``stutter'' is observed, where the number of repeated units differs (by whole units) from the template. To adapt the Levenshtein distance to be suitable for forensic applications where DNA sequence similarity is of interest, a generalized string edit distance is defined that accommodates the addition or deletion of whole motifs in addition to single-nucleotide edits. A dynamic programming implementation is developed for computing this distance between sequences. The novelty of this algorithm is in handling the complex interactions that arise between multiple- and single-character edits. Forensic examples illustrate the purpose and use of the Restricted Forensic Levenshtein (RFL) distance measure, but applications extend to sequence alignment and string similarity in other biological areas, as well as dynamic programming algorithms more broadly.
q-bio/0501029
Ophir Flomenbom
Ophir Flomenbom, and Joseph Klafter
Uncorrelated two-state single molecule trajectories from reducible kinetic schemes
null
Acta Phys. Pol. B 36, 1527-1535 (2005)
null
null
q-bio.SC
null
Trajectories of on-off events are the output of many single molecule experiments. Usually, one describes the underlying mechanism that generates the trajectory using a kinetic scheme, and by analyzing the trajectory aims at deducing this scheme. In a previous work [O. Flomenbom, J. Klafter, and A. Szabo, submitted (2004)], we showed that when successive events along a trajectory are uncorrelated, all the information in the trajectory is contained in two basic functions, which are the waiting time probability density functions (PDFs) of the on state and of the off state. The kinetic schemes that lead to such uncorrelated trajectories were termed reducible. Here we discuss the reasons that lead to reducible schemes. In particular, the topology of reducible schemes is characterized and proven.
[ { "created": "Thu, 20 Jan 2005 19:05:47 GMT", "version": "v1" } ]
2010-08-16
[ [ "Flomenbom", "Ophir", "" ], [ "Klafter", "Joseph", "" ] ]
Trajectories of on-off events are the output of many single molecule experiments. Usually, one describes the underlying mechanism that generates the trajectory using a kinetic scheme, and by analyzing the trajectory aims at deducing this scheme. In a previous work [O. Flomenbom, J. Klafter, and A. Szabo, submitted (2004)], we showed that when successive events along a trajectory are uncorrelated, all the information in the trajectory is contained in two basic functions, which are the waiting time probability density functions (PDFs) of the on state and of the off state. The kinetic schemes that lead to such uncorrelated trajectories were termed reducible. Here we discuss the reasons that lead to reducible schemes. In particular, the topology of reducible schemes is characterized and proven.
1711.02624
Glenn Young
Glenn Young and Andrew Belmonte
Fast cheater migration stabilizes coexistence in a public goods dilemma on networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Through the lens of game theory, cooperation is frequently considered an unsustainable strategy: if an entire population is cooperating, each individual can increase its overall fitness by choosing not to cooperate, thereby still receiving all the benefit of its cooperating neighbors while no longer expending its own energy. Observable cooperation in naturally-occurring public goods games is consequently of great interest, as such systems offer insight into both the emergence and sustainability of cooperation. Here we consider a population that obeys a public goods game on a network of discrete regions (that we call nests), between any two of which individuals are free to migrate. We construct a system of piecewise-smooth ordinary differential equations that couple the within-nest population dynamics and the between-nest migratory dynamics. Through a combination of analytical and numerical methods, we show that if the workers within the population migrate sufficiently fast relative to the cheaters, the network loses stability first through a Hopf bifurcation, then a torus bifurcation, after which one or more nests collapse. Our results indicate that fast moving cheaters can act to stabilize worker-cheater coexistence within a network that would otherwise collapse. We end with a comparison of our results with the dynamics observed in colonies of the ant species Pristomyrmex punctatus and in those of the Cape honeybee Apis mellifera capensis, and argue that they qualitatively agree.
[ { "created": "Tue, 7 Nov 2017 17:39:37 GMT", "version": "v1" } ]
2017-11-08
[ [ "Young", "Glenn", "" ], [ "Belmonte", "Andrew", "" ] ]
Through the lens of game theory, cooperation is frequently considered an unsustainable strategy: if an entire population is cooperating, each individual can increase its overall fitness by choosing not to cooperate, thereby still receiving all the benefit of its cooperating neighbors while no longer expending its own energy. Observable cooperation in naturally-occurring public goods games is consequently of great interest, as such systems offer insight into both the emergence and sustainability of cooperation. Here we consider a population that obeys a public goods game on a network of discrete regions (that we call nests), between any two of which individuals are free to migrate. We construct a system of piecewise-smooth ordinary differential equations that couple the within-nest population dynamics and the between-nest migratory dynamics. Through a combination of analytical and numerical methods, we show that if the workers within the population migrate sufficiently fast relative to the cheaters, the network loses stability first through a Hopf bifurcation, then a torus bifurcation, after which one or more nests collapse. Our results indicate that fast moving cheaters can act to stabilize worker-cheater coexistence within a network that would otherwise collapse. We end with a comparison of our results with the dynamics observed in colonies of the ant species Pristomyrmex punctatus and in those of the Cape honeybee Apis mellifera capensis, and argue that they qualitatively agree.
1703.10927
Duc Nguyen
Bao Wang, Zhixiong Zhao, Duc D. Nguyen, Guo-Wei Wei
Feature functional theory - binding predictor (FFT-BP) for the blind prediction of binding free energies
25 pages, 11 figures
null
null
null
q-bio.QM cs.LG physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a feature functional theory - binding predictor (FFT-BP) for the protein-ligand binding affinity prediction. The underpinning assumptions of FFT-BP are as follows: i) representability: there exists a microscopic feature vector that can uniquely characterize and distinguish one protein-ligand complex from another; ii) feature-function relationship: the macroscopic features, including binding free energy, of a complex is a functional of microscopic feature vectors; and iii) similarity: molecules with similar microscopic features have similar macroscopic features, such as binding affinity. Physical models, such as implicit solvent models and quantum theory, are utilized to extract microscopic features, while machine learning algorithms are employed to rank the similarity among protein-ligand complexes. A large variety of numerical validations and tests confirms the accuracy and robustness of the proposed FFT-BP model. The root mean square errors (RMSEs) of FFT-BP blind predictions of a benchmark set of 100 complexes, the PDBBind v2007 core set of 195 complexes and the PDBBind v2015 core set of 195 complexes are 1.99, 2.02 and 1.92 kcal/mol, respectively. Their corresponding Pearson correlation coefficients are 0.75, 0.80, and 0.78, respectively.
[ { "created": "Fri, 31 Mar 2017 15:00:07 GMT", "version": "v1" } ]
2017-04-03
[ [ "Wang", "Bao", "" ], [ "Zhao", "Zhixiong", "" ], [ "Nguyen", "Duc D.", "" ], [ "Wei", "Guo-Wei", "" ] ]
We present a feature functional theory - binding predictor (FFT-BP) for the protein-ligand binding affinity prediction. The underpinning assumptions of FFT-BP are as follows: i) representability: there exists a microscopic feature vector that can uniquely characterize and distinguish one protein-ligand complex from another; ii) feature-function relationship: the macroscopic features, including binding free energy, of a complex is a functional of microscopic feature vectors; and iii) similarity: molecules with similar microscopic features have similar macroscopic features, such as binding affinity. Physical models, such as implicit solvent models and quantum theory, are utilized to extract microscopic features, while machine learning algorithms are employed to rank the similarity among protein-ligand complexes. A large variety of numerical validations and tests confirms the accuracy and robustness of the proposed FFT-BP model. The root mean square errors (RMSEs) of FFT-BP blind predictions of a benchmark set of 100 complexes, the PDBBind v2007 core set of 195 complexes and the PDBBind v2015 core set of 195 complexes are 1.99, 2.02 and 1.92 kcal/mol, respectively. Their corresponding Pearson correlation coefficients are 0.75, 0.80, and 0.78, respectively.
2303.14440
Fatemeh Malekinejad
Atefeh Akbarnia Dafrazi, Tahmineh Mehrabi, Fatemeh Malekinejad
A Bioinformatics Study for Recognition of Hub Genes and Pathways in Pancreatic Ductal Adenocarcinoma
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Background: The aim of this study is to use bioinformatics to discover the biomarkers associated with patients with Pancreatic Ductal Adenocarcinoma (PDAC). Material and Methods: GSE28735, GSE15471, and GSE62452 are gene microarray datasets derived from the GEO database, including 153 PDAC samples and 145 normal samples. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses of the screened DEGs provided information about their biological function. Protein-protein interactions (PPI) of DEGs were analyzed using the Search Tool for the Retrieval of Interacting Genes database (STRING) and visualized by Cytoscape. UALCAN was also used to perform prognostic analyses. Results: In Pancreatic ductal adenocarcinoma, we discovered 2264 upregulated DEGs (uDEGs) and 723 downregulated DEGs (dDEGs). The Gene Ontology (GO) analysis indicated enrichment in extracellular matrix organization, extracellular structure organization, and collagen catabolic process, and the uDEGs are enriched in focal adhesion, PI3K-Akt signaling pathway, and ECM-receptor interaction by KEGG analysis. The top ten hub genes were identified from the PPI network: COL1A1, COL3A1, COL1A2, FN1, COL5A2, ITGA3, FBN1, MET, COL6A3, and BGN. These 10 hub genes are highly upregulated in pancreatic ductal adenocarcinoma, according to GEPIA research. The UALCAN prognostic analysis of the 10 hub genes revealed that two of the elevated genes, ITGA3 and MET, significantly shortened PDAC patient survival time. Pancreatic ductal adenocarcinoma is linked to Focal adhesion, the PI3K-Akt signaling pathway, and ECM-receptor interaction, according to Module analysis from the PPI network. Conclusion: This investigation identified hub genes and important signal pathways, which adds to our knowledge of molecular mechanisms and could be exploited as diagnostic and therapeutic biomarkers for Pancreatic ductal adenocarcinoma.
[ { "created": "Sat, 25 Mar 2023 11:26:55 GMT", "version": "v1" } ]
2023-03-28
[ [ "Dafrazi", "Atefeh Akbarnia", "" ], [ "Mehrabi", "Tahmineh", "" ], [ "Malekinejad", "Fatemeh", "" ] ]
Background: The aim of this study is to use bioinformatics to discover the biomarkers associated with patients with Pancreatic Ductal Adenocarcinoma (PDAC). Material and Methods: GSE28735, GSE15471, and GSE62452 are gene microarray datasets derived from the GEO database, including 153 PDAC samples and 145 normal samples. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses of the screened DEGs provided information about their biological function. Protein-protein interactions (PPI) of DEGs were analyzed using the Search Tool for the Retrieval of Interacting Genes database (STRING) and visualized by Cytoscape. UALCAN was also used to perform prognostic analyses. Results: In Pancreatic ductal adenocarcinoma, we discovered 2264 upregulated DEGs (uDEGs) and 723 downregulated DEGs (dDEGs). The Gene Ontology (GO) analysis indicated enrichment in extracellular matrix organization, extracellular structure organization, and collagen catabolic process, and the uDEGs are enriched in focal adhesion, PI3K-Akt signaling pathway, and ECM-receptor interaction by KEGG analysis. The top ten hub genes were identified from the PPI network: COL1A1, COL3A1, COL1A2, FN1, COL5A2, ITGA3, FBN1, MET, COL6A3, and BGN. These 10 hub genes are highly upregulated in pancreatic ductal adenocarcinoma, according to GEPIA research. The UALCAN prognostic analysis of the 10 hub genes revealed that two of the elevated genes, ITGA3 and MET, significantly shortened PDAC patient survival time. Pancreatic ductal adenocarcinoma is linked to Focal adhesion, the PI3K-Akt signaling pathway, and ECM-receptor interaction, according to Module analysis from the PPI network. Conclusion: This investigation identified hub genes and important signal pathways, which adds to our knowledge of molecular mechanisms and could be exploited as diagnostic and therapeutic biomarkers for Pancreatic ductal adenocarcinoma.
1609.04699
Owen Richfield
Owen Richfield, Md. Ashad Alam, Vince Calhoun, Yu-Ping Wang
Learning Schizophrenia Imaging Genetics Data Via Multiple Kernel Canonical Correlation Analysis
arXiv admin note: text overlap with arXiv:1606.00113
null
null
null
q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kernel and Multiple Kernel Canonical Correlation Analysis (CCA) are employed to classify schizophrenic and healthy patients based on their SNPs, DNA Methylation and fMRI data. Kernel and Multiple Kernel CCA are popular methods for finding nonlinear correlations between high-dimensional datasets. Data was gathered from 183 patients, 79 with schizophrenia and 104 healthy controls. Kernel and Multiple Kernel CCA represent new avenues for studying schizophrenia, because, to our knowledge, these methods have not been used on these data before. Classification is performed via k-means clustering on the kernel matrix outputs of the Kernel and Multiple Kernel CCA algorithm. Accuracies of the Kernel and Multiple Kernel CCA classification are compared to that of the regularized linear CCA algorithm classification, and are found to be significantly more accurate. Both algorithms demonstrate maximal accuracies when the combination of DNA methylation and fMRI data are used, and experience lower accuracies when the SNP data are incorporated.
[ { "created": "Thu, 15 Sep 2016 15:26:37 GMT", "version": "v1" } ]
2016-09-16
[ [ "Richfield", "Owen", "" ], [ "Alam", "Md. Ashad", "" ], [ "Calhoun", "Vince", "" ], [ "Wang", "Yu-Ping", "" ] ]
Kernel and Multiple Kernel Canonical Correlation Analysis (CCA) are employed to classify schizophrenic and healthy patients based on their SNPs, DNA Methylation and fMRI data. Kernel and Multiple Kernel CCA are popular methods for finding nonlinear correlations between high-dimensional datasets. Data was gathered from 183 patients, 79 with schizophrenia and 104 healthy controls. Kernel and Multiple Kernel CCA represent new avenues for studying schizophrenia, because, to our knowledge, these methods have not been used on these data before. Classification is performed via k-means clustering on the kernel matrix outputs of the Kernel and Multiple Kernel CCA algorithm. Accuracies of the Kernel and Multiple Kernel CCA classification are compared to that of the regularized linear CCA algorithm classification, and are found to be significantly more accurate. Both algorithms demonstrate maximal accuracies when the combination of DNA methylation and fMRI data are used, and experience lower accuracies when the SNP data are incorporated.
1307.4415
Carl Boettiger
Carl Boettiger and Alan Hastings
No early warning signals for stochastic transitions: insights from large deviation theory
null
2013. Proceedings of the Royal Society B: Biological Sciences 280, 20131372-20131372
10.1098/rspb.2013.1372
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
A reply to Drake (2013) "Early warning signals of stochastic switching" http://dx.doi.org/10.1098/rspb.2013.0686
[ { "created": "Tue, 16 Jul 2013 20:11:28 GMT", "version": "v1" } ]
2013-07-18
[ [ "Boettiger", "Carl", "" ], [ "Hastings", "Alan", "" ] ]
A reply to Drake (2013) "Early warning signals of stochastic switching" http://dx.doi.org/10.1098/rspb.2013.0686
1605.02869
Xiaoli Wu
Qi She, Xiaoli Wu, Beth Jelfs, Adam S. Charles, Rosa H.M.Chan
An Efficient and Flexible Spike Train Model via Empirical Bayes
16 pages, 20 figures, 3 tables
IEEE Trans. Signal Processing 69 (2021) 3236-3251
10.1109/TSP.2021.3076885
null
q-bio.QM eess.SP q-bio.NC stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
Accurate statistical models of neural spike responses can characterize the information carried by neural populations. But the limited samples of spike counts during recording usually result in model overfitting. Besides, current models assume spike counts to be Poisson-distributed, which ignores the fact that many neurons demonstrate over-dispersed spiking behaviour. Although the Negative Binomial Generalized Linear Model (NB-GLM) provides a powerful tool for modeling over-dispersed spike counts, the maximum likelihood-based standard NB-GLM leads to highly variable and inaccurate parameter estimates. Thus, we propose a hierarchical parametric empirical Bayes method to estimate the neural spike responses among a neuronal population. Our method integrates both Generalized Linear Models (GLMs) and empirical Bayes theory, which aims to (1) improve the accuracy and reliability of parameter estimation, compared to the maximum likelihood-based method for NB-GLM and Poisson-GLM; (2) effectively capture the over-dispersion nature of spike counts from both simulated data and experimental data; and (3) provide insight into both neural interactions and spiking behaviours of the neuronal populations. We apply our approach to study both simulated data and experimental neural data. The estimation of simulation data indicates that the new framework can accurately predict mean spike counts simulated from different models and recover the connectivity weights among neural populations. The estimation based on retinal neurons demonstrates that the proposed method outperforms both NB-GLM and Poisson-GLM in terms of the predictive log-likelihood of held-out data. Codes are available in https://doi.org/10.5281/zenodo.4704423
[ { "created": "Tue, 10 May 2016 06:57:16 GMT", "version": "v1" }, { "created": "Sat, 24 Sep 2016 07:39:29 GMT", "version": "v2" }, { "created": "Mon, 28 May 2018 01:34:32 GMT", "version": "v3" }, { "created": "Wed, 20 Jan 2021 13:20:57 GMT", "version": "v4" }, { "created": "Thu, 1 Apr 2021 13:14:48 GMT", "version": "v5" }, { "created": "Tue, 27 Apr 2021 05:07:01 GMT", "version": "v6" } ]
2021-06-17
[ [ "She", "Qi", "" ], [ "Wu", "Xiaoli", "" ], [ "Jelfs", "Beth", "" ], [ "Charles", "Adam S.", "" ], [ "Chan", "Rosa H. M.", "" ] ]
Accurate statistical models of neural spike responses can characterize the information carried by neural populations. But the limited samples of spike counts during recording usually result in model overfitting. Besides, current models assume spike counts to be Poisson-distributed, which ignores the fact that many neurons demonstrate over-dispersed spiking behaviour. Although the Negative Binomial Generalized Linear Model (NB-GLM) provides a powerful tool for modeling over-dispersed spike counts, the maximum likelihood-based standard NB-GLM leads to highly variable and inaccurate parameter estimates. Thus, we propose a hierarchical parametric empirical Bayes method to estimate the neural spike responses among a neuronal population. Our method integrates both Generalized Linear Models (GLMs) and empirical Bayes theory, which aims to (1) improve the accuracy and reliability of parameter estimation, compared to the maximum likelihood-based method for NB-GLM and Poisson-GLM; (2) effectively capture the over-dispersion nature of spike counts from both simulated data and experimental data; and (3) provide insight into both neural interactions and spiking behaviours of the neuronal populations. We apply our approach to study both simulated data and experimental neural data. The estimation of simulation data indicates that the new framework can accurately predict mean spike counts simulated from different models and recover the connectivity weights among neural populations. The estimation based on retinal neurons demonstrates that the proposed method outperforms both NB-GLM and Poisson-GLM in terms of the predictive log-likelihood of held-out data. Codes are available in https://doi.org/10.5281/zenodo.4704423
0910.2526
Lee Worden
Lee Worden
Notes from the Greenhouse World: A Study in Coevolution, Planetary Sustainability, and Community Structure
To appear in a special issue of Ecological Economics in honor of Richard B. Norgaard
null
10.1016/j.ecolecon.2009.06.017
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores coevolution and governance of common goods using models of coevolving biospheres, in which adapting populations must collectively regulate their planet's climate or face extinction. The results support the Gaia hypothesis against challenges based on the tragedy of the commons: model creatures are often able to work together to maintain the common good (a suitable climate) without being undermined by "free riders." A long-term dynamics appears in which communities that cannot sustain Gaian cooperation give way to communities that can. This result provides an argument why a Gaia scenario should generally be observed, rather than a tragedy of the commons scenario. Second, a close look at how communities fail reveals failures that do not fit the tragedy of the commons framework and are better described in terms of conflict between differently positioned parties, with power over different aspects of the system. In the context of Norgaard's work, all these observations can be read as narratives of coevolution relevant to social communities as well as ecological ones, contrasting with pessimistic scenarios about common governance and supporting respect for traditional arrangements and restraint in intervention.
[ { "created": "Wed, 14 Oct 2009 04:42:04 GMT", "version": "v1" } ]
2009-10-15
[ [ "Worden", "Lee", "" ] ]
This paper explores coevolution and governance of common goods using models of coevolving biospheres, in which adapting populations must collectively regulate their planet's climate or face extinction. The results support the Gaia hypothesis against challenges based on the tragedy of the commons: model creatures are often able to work together to maintain the common good (a suitable climate) without being undermined by "free riders." A long-term dynamics appears in which communities that cannot sustain Gaian cooperation give way to communities that can. This result provides an argument why a Gaia scenario should generally be observed, rather than a tragedy of the commons scenario. Second, a close look at how communities fail reveals failures that do not fit the tragedy of the commons framework and are better described in terms of conflict between differently positioned parties, with power over different aspects of the system. In the context of Norgaard's work, all these observations can be read as narratives of coevolution relevant to social communities as well as ecological ones, contrasting with pessimistic scenarios about common governance and supporting respect for traditional arrangements and restraint in intervention.
2103.15561
Arash Mehrjou
Arash Mehrjou, Ashkan Soleymani, Amin Abyaneh, Samir Bhatt, Bernhard Sch\"olkopf, Stefan Bauer
Pyfectious: An individual-level simulator to discover optimal containment polices for epidemic diseases
null
null
null
null
q-bio.PE cs.AI cs.LG cs.MA cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Simulating the spread of infectious diseases in human communities is critical for predicting the trajectory of an epidemic and verifying various policies to control the devastating impacts of the outbreak. Many existing simulators are based on compartment models that divide people into a few subsets and simulate the dynamics among those subsets using hypothesized differential equations. However, these models lack the requisite granularity to study the effect of intelligent policies that influence every individual in a particular way. In this work, we introduce a simulator software capable of modeling a population structure and controlling the disease's propagation at an individualistic level. In order to estimate the confidence of the conclusions drawn from the simulator, we employ a comprehensive probabilistic approach where the entire population is constructed as a hierarchical random variable. This approach makes the inferred conclusions more robust against sampling artifacts and gives confidence bounds for decisions based on the simulation results. To showcase potential applications, the simulator parameters are set based on the formal statistics of the COVID-19 pandemic, and the outcome of a wide range of control measures is investigated. Furthermore, the simulator is used as the environment of a reinforcement learning problem to find the optimal policies to control the pandemic. The obtained experimental results indicate the simulator's adaptability and capacity in making sound predictions and a successful policy derivation example based on real-world data. As an exemplary application, our results show that the proposed policy discovery method can lead to control measures that produce significantly fewer infected individuals in the population and protect the health system against saturation.
[ { "created": "Wed, 24 Mar 2021 10:54:46 GMT", "version": "v1" }, { "created": "Wed, 21 Apr 2021 00:13:57 GMT", "version": "v2" } ]
2021-04-22
[ [ "Mehrjou", "Arash", "" ], [ "Soleymani", "Ashkan", "" ], [ "Abyaneh", "Amin", "" ], [ "Bhatt", "Samir", "" ], [ "Schölkopf", "Bernhard", "" ], [ "Bauer", "Stefan", "" ] ]
Simulating the spread of infectious diseases in human communities is critical for predicting the trajectory of an epidemic and verifying various policies to control the devastating impacts of the outbreak. Many existing simulators are based on compartment models that divide people into a few subsets and simulate the dynamics among those subsets using hypothesized differential equations. However, these models lack the requisite granularity to study the effect of intelligent policies that influence every individual in a particular way. In this work, we introduce a simulator software capable of modeling a population structure and controlling the disease's propagation at an individualistic level. In order to estimate the confidence of the conclusions drawn from the simulator, we employ a comprehensive probabilistic approach where the entire population is constructed as a hierarchical random variable. This approach makes the inferred conclusions more robust against sampling artifacts and gives confidence bounds for decisions based on the simulation results. To showcase potential applications, the simulator parameters are set based on the formal statistics of the COVID-19 pandemic, and the outcome of a wide range of control measures is investigated. Furthermore, the simulator is used as the environment of a reinforcement learning problem to find the optimal policies to control the pandemic. The obtained experimental results indicate the simulator's adaptability and capacity in making sound predictions and a successful policy derivation example based on real-world data. As an exemplary application, our results show that the proposed policy discovery method can lead to control measures that produce significantly fewer infected individuals in the population and protect the health system against saturation.
2110.00293
Mattia Zanella
Giacomo Albi, Giulia Bertaglia, Walter Boscheri, Giacomo Dimarco, Lorenzo Pareschi, Giuseppe Toscani, Mattia Zanella
Kinetic modelling of epidemic dynamics: social contacts, control with uncertain data, and multiscale spatial dynamics
null
In: Bellomo N., Chaplain M.A.J. (eds) Predicting Pandemics in a Globally Connected World, Vol. 1 (2022) Modeling and Simulation in Science, Engineering and Technology
10.1007/978-3-030-96562-4_3
null
q-bio.PE math.OC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this survey we report some recent results in the mathematical modeling of epidemic phenomena through the use of kinetic equations. We initially consider models of interaction between agents in which social characteristics play a key role in the spread of an epidemic, such as the age of individuals, the number of social contacts, and their economic wealth. Subsequently, for such models, we discuss the possibility of containing the epidemic through an appropriate optimal control formulation based on the policy maker's perception of the progress of the epidemic. The role of uncertainty in the data is also discussed and addressed. Finally, the kinetic modeling is extended to spatially dependent settings using multiscale transport models that can characterize the impact of movement dynamics on epidemic advancement on both one-dimensional networks and realistic two-dimensional geographic settings.
[ { "created": "Fri, 1 Oct 2021 09:59:55 GMT", "version": "v1" } ]
2023-09-11
[ [ "Albi", "Giacomo", "" ], [ "Bertaglia", "Giulia", "" ], [ "Boscheri", "Walter", "" ], [ "Dimarco", "Giacomo", "" ], [ "Pareschi", "Lorenzo", "" ], [ "Toscani", "Giuseppe", "" ], [ "Zanella", "Mattia", "" ] ]
In this survey we report some recent results in the mathematical modeling of epidemic phenomena through the use of kinetic equations. We initially consider models of interaction between agents in which social characteristics play a key role in the spread of an epidemic, such as the age of individuals, the number of social contacts, and their economic wealth. Subsequently, for such models, we discuss the possibility of containing the epidemic through an appropriate optimal control formulation based on the policy maker's perception of the progress of the epidemic. The role of uncertainty in the data is also discussed and addressed. Finally, the kinetic modeling is extended to spatially dependent settings using multiscale transport models that can characterize the impact of movement dynamics on epidemic advancement on both one-dimensional networks and realistic two-dimensional geographic settings.
1408.2945
Jean-Charles Walter
Luca Ciandrini, I. Neri, Jean-Charles Walter, O. Dauloudet, A. Parmeggiani
Motor proteins traffic regulation by supply-demand balance of resources
31 pages, 10 figures
Phys. Biol. 11 (2014) 056006
10.1088/1478-3975/11/5/056006
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In cells and in vitro assays the number of motor proteins involved in biological transport processes is far from being unlimited. The cytoskeletal binding sites are in contact with the same finite reservoir of motors (either the cytosol or the flow chamber) and hence compete for recruiting the available motors, potentially depleting the reservoir and affecting cytoskeletal transport. In this work we provide a theoretical framework to study, analytically and numerically, how motor density profiles and crowding along cytoskeletal filaments depend on the competition of motors for their binding sites. We propose two models in which finite processive motor proteins actively advance along cytoskeletal filaments and are continuously exchanged with the motor pool. We first look at homogeneous reservoirs and then examine the effects of free motor diffusion in the surrounding medium. We consider as a reference situation recent in vitro experimental setups of kinesin-8 motors binding and moving along microtubule filaments in a flow chamber. We investigate how the crowding of linear motor proteins moving on a filament can be regulated by the balance between supply (concentration of motor proteins in the flow chamber) and demand (total number of polymerised tubulin heterodimers). We present analytical results for the density profiles of bound motors and the reservoir depletion, and propose novel phase diagrams that present the formation of jams of motor proteins on the filament as a function of two tuneable experimental parameters: the motor protein concentration and the concentration of tubulins polymerized into cytoskeletal filaments. Extensive numerical simulations corroborate the analytical results for parameters in the experimental range and also address the effects of diffusion of motor proteins in the reservoir.
[ { "created": "Wed, 13 Aug 2014 08:50:16 GMT", "version": "v1" }, { "created": "Wed, 24 Sep 2014 13:15:59 GMT", "version": "v2" } ]
2014-09-26
[ [ "Ciandrini", "Luca", "" ], [ "Neri", "I.", "" ], [ "Walter", "Jean-Charles", "" ], [ "Dauloudet", "O.", "" ], [ "Parmeggiani", "A.", "" ] ]
In cells and in vitro assays the number of motor proteins involved in biological transport processes is far from being unlimited. The cytoskeletal binding sites are in contact with the same finite reservoir of motors (either the cytosol or the flow chamber) and hence compete for recruiting the available motors, potentially depleting the reservoir and affecting cytoskeletal transport. In this work we provide a theoretical framework to study, analytically and numerically, how motor density profiles and crowding along cytoskeletal filaments depend on the competition of motors for their binding sites. We propose two models in which finite processive motor proteins actively advance along cytoskeletal filaments and are continuously exchanged with the motor pool. We first look at homogeneous reservoirs and then examine the effects of free motor diffusion in the surrounding medium. We consider as a reference situation recent in vitro experimental setups of kinesin-8 motors binding and moving along microtubule filaments in a flow chamber. We investigate how the crowding of linear motor proteins moving on a filament can be regulated by the balance between supply (concentration of motor proteins in the flow chamber) and demand (total number of polymerised tubulin heterodimers). We present analytical results for the density profiles of bound motors and the reservoir depletion, and propose novel phase diagrams that present the formation of jams of motor proteins on the filament as a function of two tuneable experimental parameters: the motor protein concentration and the concentration of tubulins polymerized into cytoskeletal filaments. Extensive numerical simulations corroborate the analytical results for parameters in the experimental range and also address the effects of diffusion of motor proteins in the reservoir.
0901.0530
Aaron Clauset
Aaron Clauset
Comment on Yu et al., "High Quality Binary Protein Interaction Map of the Yeast Interactome Network." Science 322, 104 (2008)
4 pages, 2 figures, 1 table
null
null
null
q-bio.MN physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We test the claim by Yu et al. -- presented in Science 322, 104 (2008) -- that the degree distribution of the yeast (Saccharomyces cerevisiae) protein-interaction network is best approximated by a power law. Yu et al. consider three versions of this network. In all three cases, however, we find the most likely power-law model of the data is distinct from and incompatible with the one given by Yu et al. Only one network admits good statistical support for any power law, and in that case, the power law explains only the distribution of the upper 10% of node degrees. These results imply that there is considerably more structure present in the yeast interactome than suggested by Yu et al., and that these networks should probably not be called "scale free."
[ { "created": "Mon, 5 Jan 2009 18:11:42 GMT", "version": "v1" } ]
2009-01-06
[ [ "Clauset", "Aaron", "" ] ]
We test the claim by Yu et al. -- presented in Science 322, 104 (2008) -- that the degree distribution of the yeast (Saccharomyces cerevisiae) protein-interaction network is best approximated by a power law. Yu et al. consider three versions of this network. In all three cases, however, we find the most likely power-law model of the data is distinct from and incompatible with the one given by Yu et al. Only one network admits good statistical support for any power law, and in that case, the power law explains only the distribution of the upper 10% of node degrees. These results imply that there is considerably more structure present in the yeast interactome than suggested by Yu et al., and that these networks should probably not be called "scale free."
1805.06950
Kymberleigh Pagel
Wazim Mohammed Ismail, Kymberleigh A. Pagel, Vikas Pejaver, Simo V. Zhang, Sofia Casasa, Matthew Mort, David N. Cooper, Matthew W. Hahn, and Predrag Radivojac
The sequencing and interpretation of the genome obtained from a Serbian individual
18 pages, 2 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Recent genetic studies and whole-genome sequencing projects have greatly improved our understanding of human variation and clinically actionable genetic information. Smaller ethnic populations, however, remain underrepresented in both individual and large-scale sequencing efforts and hence present an opportunity to discover new variants of biomedical and demographic significance. This report describes the sequencing and analysis of a genome obtained from an individual of Serbian origin, introducing tens of thousands of previously unknown variants to the currently available pool. Ancestry analysis places this individual in close proximity to the Central and Eastern European populations; i.e., closest to Croatian, Bulgarian and Hungarian individuals and, in terms of other Europeans, furthest from Ashkenazi Jewish, Spanish, Sicilian, and Baltic individuals. Our analysis confirmed gene flow between Neanderthal and ancestral pan-European populations, with similar contributions to the Serbian genome as those observed in other European groups. Finally, to assess the burden of potentially disease-causing/clinically relevant variation in the sequenced genome, we utilized manually curated genotype-phenotype association databases and variant-effect predictors. We identified several variants that have previously been associated with severe early-onset disease that is not evident in the proband, as well as variants that could yet prove to be clinically relevant to the proband over the next decades. The presence of numerous private and low-frequency variants along with the observed and predicted disease-causing mutations in this genome exemplifies some of the global challenges of genome interpretation, especially in the context of understudied ethnic groups.
[ { "created": "Thu, 17 May 2018 20:08:02 GMT", "version": "v1" } ]
2018-05-21
[ [ "Ismail", "Wazim Mohammed", "" ], [ "Pagel", "Kymberleigh A.", "" ], [ "Pejaver", "Vikas", "" ], [ "Zhang", "Simo V.", "" ], [ "Casasa", "Sofia", "" ], [ "Mort", "Matthew", "" ], [ "Cooper", "David N.", "" ], [ "Hahn", "Matthew W.", "" ], [ "Radivojac", "Predrag", "" ] ]
Recent genetic studies and whole-genome sequencing projects have greatly improved our understanding of human variation and clinically actionable genetic information. Smaller ethnic populations, however, remain underrepresented in both individual and large-scale sequencing efforts and hence present an opportunity to discover new variants of biomedical and demographic significance. This report describes the sequencing and analysis of a genome obtained from an individual of Serbian origin, introducing tens of thousands of previously unknown variants to the currently available pool. Ancestry analysis places this individual in close proximity to the Central and Eastern European populations; i.e., closest to Croatian, Bulgarian and Hungarian individuals and, in terms of other Europeans, furthest from Ashkenazi Jewish, Spanish, Sicilian, and Baltic individuals. Our analysis confirmed gene flow between Neanderthal and ancestral pan-European populations, with similar contributions to the Serbian genome as those observed in other European groups. Finally, to assess the burden of potentially disease-causing/clinically relevant variation in the sequenced genome, we utilized manually curated genotype-phenotype association databases and variant-effect predictors. We identified several variants that have previously been associated with severe early-onset disease that is not evident in the proband, as well as variants that could yet prove to be clinically relevant to the proband over the next decades. The presence of numerous private and low-frequency variants along with the observed and predicted disease-causing mutations in this genome exemplifies some of the global challenges of genome interpretation, especially in the context of understudied ethnic groups.
2111.05727
Carlos Oscar S. Sorzano
C.O.S. Sorzano
The mathematics of contagious diseases and their limitations in forecasting
null
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by-nc-nd/4.0/
This article explores mathematical models for understanding the evolution of contagious diseases. The most widely known set of models are the compartmental ones, which are based on a set of differential equations. But these are not the only models. This review visits many different families of models. Additionally, we show these families, not as unrelated entities, but following a common thread in which the problems or assumptions of a model are solved or generalized by another model. In this way, we can understand their relationships, assumptions, simplifications, and, ultimately, limitations. Prompted by the current COVID-19 pandemic, we have a special focus on spread forecasting. We illustrate the difficulties encountered in making realistic predictions. In general, they are only approximations to a reality whose biological and societal complexity is much larger. Particularly troublesome are the large underlying variability, the problem's time-varying nature, and the difficulty of estimating the required parameters for a faithful model. Additionally, we will also see that these models have a multiplicative nature implying that small errors in the system parameters cause huge uncertainty in the prediction. Stochastic or agent-based models can overcome some of the modeling problems of systems based on differential or stochastic equations. Their main difficulty is that they are as accurate and realistic as the data available to estimate their detailed parametrization, and very often this detailed data is not at the modeller's disposal. Although the predictive power of mathematical models to forecast the evolution of a contagious disease is very limited, these models are still very useful for planning interventions, as they can calculate their impact if all other parameters stay fixed. They are also very useful for understanding the properties of disease propagation in complex systems.
[ { "created": "Wed, 10 Nov 2021 15:11:11 GMT", "version": "v1" }, { "created": "Sat, 4 Dec 2021 06:56:02 GMT", "version": "v2" } ]
2021-12-07
[ [ "Sorzano", "C. O. S.", "" ] ]
This article explores mathematical models for understanding the evolution of contagious diseases. The most widely known set of models are the compartmental ones, which are based on a set of differential equations. But these are not the only models. This review visits many different families of models. Additionally, we show these families, not as unrelated entities, but following a common thread in which the problems or assumptions of a model are solved or generalized by another model. In this way, we can understand their relationships, assumptions, simplifications, and, ultimately, limitations. Prompted by the current COVID-19 pandemic, we have a special focus on spread forecasting. We illustrate the difficulties encountered in making realistic predictions. In general, they are only approximations to a reality whose biological and societal complexity is much larger. Particularly troublesome are the large underlying variability, the problem's time-varying nature, and the difficulty of estimating the required parameters for a faithful model. Additionally, we will also see that these models have a multiplicative nature implying that small errors in the system parameters cause huge uncertainty in the prediction. Stochastic or agent-based models can overcome some of the modeling problems of systems based on differential or stochastic equations. Their main difficulty is that they are as accurate and realistic as the data available to estimate their detailed parametrization, and very often this detailed data is not at the modeller's disposal. Although the predictive power of mathematical models to forecast the evolution of a contagious disease is very limited, these models are still very useful for planning interventions, as they can calculate their impact if all other parameters stay fixed. They are also very useful for understanding the properties of disease propagation in complex systems.
1904.05375
Daniel Moyer
Daniel Moyer, Greg Ver Steeg, Chantal M. W. Tax, Paul M. Thompson
Scanner Invariant Representations for Diffusion MRI Harmonization
null
null
null
null
q-bio.QM cs.LG eess.IV stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: In the present work we describe the correction of diffusion-weighted MRI for site and scanner biases using a novel method based on invariant representation. Theory and Methods: Pooled imaging data from multiple sources are subject to variation between the sources. Correcting for these biases has become very important as imaging studies increase in size and multi-site cases become more common. We propose learning an intermediate representation invariant to site/protocol variables, a technique adapted from information theory-based algorithmic fairness; by leveraging the data processing inequality, such a representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to underlying structures. To implement this, we use a deep learning method based on variational auto-encoders (VAE) to construct scanner invariant encodings of the imaging data. Results: To evaluate our method, we use training data from the 2018 MICCAI Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our proposed method shows improvements on independent test data relative to a recently published baseline method on each subtask, mapping data from three different scanning contexts to and from one separate target scanning context. Conclusion: As imaging studies continue to grow, the use of pooled multi-site imaging will similarly increase. Invariant representation presents a strong candidate for the harmonization of these data.
[ { "created": "Wed, 10 Apr 2019 18:10:19 GMT", "version": "v1" }, { "created": "Fri, 31 Jan 2020 19:11:39 GMT", "version": "v2" } ]
2020-02-04
[ [ "Moyer", "Daniel", "" ], [ "Steeg", "Greg Ver", "" ], [ "Tax", "Chantal M. W.", "" ], [ "Thompson", "Paul M.", "" ] ]
Purpose: In the present work we describe the correction of diffusion-weighted MRI for site and scanner biases using a novel method based on invariant representation. Theory and Methods: Pooled imaging data from multiple sources are subject to variation between the sources. Correcting for these biases has become very important as imaging studies increase in size and multi-site cases become more common. We propose learning an intermediate representation invariant to site/protocol variables, a technique adapted from information theory-based algorithmic fairness; by leveraging the data processing inequality, such a representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to underlying structures. To implement this, we use a deep learning method based on variational auto-encoders (VAE) to construct scanner invariant encodings of the imaging data. Results: To evaluate our method, we use training data from the 2018 MICCAI Computational Diffusion MRI (CDMRI) Challenge Harmonization dataset. Our proposed method shows improvements on independent test data relative to a recently published baseline method on each subtask, mapping data from three different scanning contexts to and from one separate target scanning context. Conclusion: As imaging studies continue to grow, the use of pooled multi-site imaging will similarly increase. Invariant representation presents a strong candidate for the harmonization of these data.
0811.1289
Eytan Domany
Yuval Tabach, Ran Brosh, Yossi Buganim, Anat Reiner, Or Zuk, Assif Yitzhaky, Mark Koudritsky, Varda Rotter and Eytan Domany
Wide-Scale Analysis of Human Functional Transcription Factor Binding Reveals a Strong Bias towards the Transcription Start Site
31 pages, including Supplementary Information and figures
PLoS ONE Issue 8, e807 Aug 2007
10.1371/journal.pone.0000807
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a novel method to screen the promoters of a set of genes with shared biological function, against a precompiled library of motifs, and find those motifs which are statistically over-represented in the gene set. The gene sets were obtained from the functional Gene Ontology (GO) classification; for each set and motif we optimized the sequence similarity score threshold, independently for every location window (measured with respect to the TSS), taking into account the location-dependent nucleotide heterogeneity along the promoters of the target genes. We performed a high-throughput analysis, searching the promoters (from 200bp downstream to 1000bp upstream of the TSS), of more than 8000 human and 23,000 mouse genes, for 134 functional Gene Ontology classes and for 412 known DNA motifs. When combined with binding site and location conservation between human and mouse, the method identifies with high probability functional binding sites that regulate groups of biologically related genes. We found many location-sensitive functional binding events and showed that they clustered close to the TSS. Our method and findings were put to several experimental tests. By allowing a "flexible" threshold and combining our functional class and location specific search method with conservation between human and mouse, we are able to reliably identify functional TF binding sites. This is an essential step towards constructing regulatory networks and elucidating the design principles that govern transcriptional regulation of expression. The promoter region proximal to the TSS appears to be of central importance for regulation of transcription in human and mouse, just as it is in bacteria and yeast.
[ { "created": "Sat, 8 Nov 2008 18:45:02 GMT", "version": "v1" } ]
2008-11-11
[ [ "Tabach", "Yuval", "" ], [ "Brosh", "Ran", "" ], [ "Buganim", "Yossi", "" ], [ "Reiner", "Anat", "" ], [ "Zuk", "Or", "" ], [ "Yitzhaky", "Assif", "" ], [ "Koudritsky", "Mark", "" ], [ "Rotter", "Varda", "" ], [ "Domany", "Eytan", "" ] ]
We introduce a novel method to screen the promoters of a set of genes with shared biological function, against a precompiled library of motifs, and find those motifs which are statistically over-represented in the gene set. The gene sets were obtained from the functional Gene Ontology (GO) classification; for each set and motif we optimized the sequence similarity score threshold, independently for every location window (measured with respect to the TSS), taking into account the location-dependent nucleotide heterogeneity along the promoters of the target genes. We performed a high-throughput analysis, searching the promoters (from 200bp downstream to 1000bp upstream of the TSS), of more than 8000 human and 23,000 mouse genes, for 134 functional Gene Ontology classes and for 412 known DNA motifs. When combined with binding site and location conservation between human and mouse, the method identifies with high probability functional binding sites that regulate groups of biologically related genes. We found many location-sensitive functional binding events and showed that they clustered close to the TSS. Our method and findings were put to several experimental tests. By allowing a "flexible" threshold and combining our functional class and location specific search method with conservation between human and mouse, we are able to reliably identify functional TF binding sites. This is an essential step towards constructing regulatory networks and elucidating the design principles that govern transcriptional regulation of expression. The promoter region proximal to the TSS appears to be of central importance for regulation of transcription in human and mouse, just as it is in bacteria and yeast.
2310.01562
Matus Medo
Matúš Medo, Michaela Medová
A comprehensive comparison of tools for fitting mutational signatures
15 pages, 6 figures, Supporting Information included
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Mutational signatures connect characteristic mutational patterns in the genome with biological processes that take place in the tumor tissues. Analysis of mutational signatures can help elucidate tumor evolution, prognosis, and therapeutic strategies. Although tools for extracting mutational signatures de novo have been extensively benchmarked, a similar effort is lacking for tools that fit known mutational signatures to a given catalog of mutations. We fill this gap by comprehensively evaluating eleven signature fitting tools (well-established as well as recent) on synthetic input data. To create realistic input data, we use empirical signature weights in tumor tissue samples from the COSMIC database. The study design allows us to assess the effects of the number of mutations, type of cancer, and the catalog of reference signatures on the results obtained with various fitting tools. We find substantial performance differences between the evaluated tools. Averaged over 120,000 simulated mutational catalogs corresponding to eight different cancer types, SigProfilerSingleSample and SigProfilerAssignment perform best for small and large numbers of mutations per sample, respectively. We further show that ad hoc constraining the list of reference signatures is likely to produce inferior results and that noisy estimates of signature weights in samples with as few as 100 mutations can still be useful in downstream analysis.
[ { "created": "Mon, 2 Oct 2023 18:53:44 GMT", "version": "v1" } ]
2023-10-04
[ [ "Medo", "Matúš", "" ], [ "Medová", "Michaela", "" ] ]
Mutational signatures connect characteristic mutational patterns in the genome with biological processes that take place in the tumor tissues. Analysis of mutational signatures can help elucidate tumor evolution, prognosis, and therapeutic strategies. Although tools for extracting mutational signatures de novo have been extensively benchmarked, a similar effort is lacking for tools that fit known mutational signatures to a given catalog of mutations. We fill this gap by comprehensively evaluating eleven signature fitting tools (well-established as well as recent) on synthetic input data. To create realistic input data, we use empirical signature weights in tumor tissue samples from the COSMIC database. The study design allows us to assess the effects of the number of mutations, type of cancer, and the catalog of reference signatures on the results obtained with various fitting tools. We find substantial performance differences between the evaluated tools. Averaged over 120,000 simulated mutational catalogs corresponding to eight different cancer types, SigProfilerSingleSample and SigProfilerAssignment perform best for small and large numbers of mutations per sample, respectively. We further show that ad hoc constraining the list of reference signatures is likely to produce inferior results and that noisy estimates of signature weights in samples with as few as 100 mutations can still be useful in downstream analysis.
1306.0199
Osamu Narikiyo
Daisuke Sato, Osamu Narikiyo
Replication of proto-RNAs sustained by ligase-helicase cycle in oligomer world
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mechanism for the replication of proto-RNAs in an oligomer world is proposed. The replication is carried out by a minimal cycle which is sustained by a ligase and a helicase. We expect that such a cycle actually worked in the primordial soup and can be constructed in vitro. In computer simulations, the products of the replication acquire diversity and complexity. Such diversity and complexity are the bases of evolution.
[ { "created": "Sun, 2 Jun 2013 11:54:22 GMT", "version": "v1" } ]
2013-06-04
[ [ "Sato", "Daisuke", "" ], [ "Narikiyo", "Osamu", "" ] ]
A mechanism for the replication of proto-RNAs in an oligomer world is proposed. The replication is carried out by a minimal cycle which is sustained by a ligase and a helicase. We expect that such a cycle actually worked in the primordial soup and can be constructed in vitro. In computer simulations, the products of the replication acquire diversity and complexity. Such diversity and complexity are the bases of evolution.
1910.05546
Zedong Bi
Zedong Bi and Changsong Zhou
Understanding the computation of time using neural network models
null
PNAS 117 (19) 10530-10540, 2020
10.1073/pnas.1921609117
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
To maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiple seconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems using supervised training of recurrent neural network models. We revealed that neural networks perceive elapsed time through state evolution along a stereotypical trajectory, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. Temporal and non-temporal information are coded in subspaces orthogonal to each other, and the state trajectories over time for different non-temporal information are quasi-parallel and isomorphic. Such coding geometry facilitates the generalizability of decoding temporal and non-temporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit depending on whether their preferences for non-temporal information are similar or not. We identified four factors that facilitate strong temporal signals in non-timing tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing, and is supported by, and makes predictions about, a number of experimental phenomena.
[ { "created": "Sat, 12 Oct 2019 10:07:42 GMT", "version": "v1" }, { "created": "Mon, 9 Dec 2019 12:48:46 GMT", "version": "v2" }, { "created": "Mon, 2 Mar 2020 14:41:37 GMT", "version": "v3" }, { "created": "Tue, 7 Jul 2020 11:58:25 GMT", "version": "v4" } ]
2020-07-08
[ [ "Bi", "Zedong", "" ], [ "Zhou", "Changsong", "" ] ]
To maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiple seconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems using supervised training of recurrent neural network models. We revealed that neural networks perceive elapsed time through state evolution along a stereotypical trajectory, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. Temporal and non-temporal information are coded in subspaces orthogonal to each other, and the state trajectories over time for different non-temporal information are quasi-parallel and isomorphic. Such coding geometry facilitates the generalizability of decoding temporal and non-temporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit depending on whether their preferences for non-temporal information are similar or not. We identified four factors that facilitate strong temporal signals in non-timing tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing, and is supported by, and makes predictions about, a number of experimental phenomena.
1305.6635
Peter Pfaffelhuber
Martin Jansen and Peter Pfaffelhuber
Stochastic gene expression with delay
14 pages
null
null
null
q-bio.MN math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The expression of genes usually follows a two-step procedure. First, a gene (encoded in the genome) is transcribed, resulting in a strand of (messenger) RNA. Afterwards, the RNA is translated into protein. Classically, this gene expression is modeled using a Markov jump process including activation and deactivation of the gene, transcription and translation rates, together with degradation of RNA and protein. We extend this model by adding delays (with arbitrary distributions) to transcription and translation. Such delays can, e.g., mean that RNA has to be transported to a different part of a cell before translation can be initiated. Already in the classical model, production of RNA and protein comes in bursts through activation and deactivation of the gene, resulting in a large variance of the number of RNA and proteins in equilibrium. We derive precise formulas for this second-order structure with the model including delay in equilibrium. As a general fact, the delay decreases the variance of the number of RNA and proteins.
[ { "created": "Tue, 28 May 2013 21:14:02 GMT", "version": "v1" }, { "created": "Thu, 7 Aug 2014 15:24:36 GMT", "version": "v2" } ]
2014-08-08
[ [ "Jansen", "Martin", "" ], [ "Pfaffelhuber", "Peter", "" ] ]
The expression of genes usually follows a two-step procedure. First, a gene (encoded in the genome) is transcribed, resulting in a strand of (messenger) RNA. Afterwards, the RNA is translated into protein. Classically, this gene expression is modeled using a Markov jump process including activation and deactivation of the gene, transcription and translation rates, together with degradation of RNA and protein. We extend this model by adding delays (with arbitrary distributions) to transcription and translation. Such delays can, e.g., mean that RNA has to be transported to a different part of a cell before translation can be initiated. Already in the classical model, production of RNA and protein comes in bursts through activation and deactivation of the gene, resulting in a large variance of the number of RNA and proteins in equilibrium. We derive precise formulas for this second-order structure with the model including delay in equilibrium. As a general fact, the delay decreases the variance of the number of RNA and proteins.
1509.05121
Irineo Cabreros
Irineo Cabreros, Emmanuel Abbe and Aristotelis Tsirigos
Detecting Community Structures in Hi-C Genomic Data
null
null
null
null
q-bio.GN cs.SI stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection (CD) algorithms are applied to Hi-C data to discover new communities of loci in the 3D conformation of human and mouse DNA. We find that CD has some distinct advantages over pre-existing methods: (1) it is capable of finding a variable number of communities, (2) it can detect communities of DNA loci either adjacent or distant in the 1D sequence, and (3) it allows us to obtain a principled value of k, the number of communities present. Forcing k = 2, our method recovers earlier findings of Lieberman-Aiden, et al. (2009), but letting k be a parameter, our method obtains as optimal value k = 6, discovering new candidate communities. In addition to discovering large communities that partition entire chromosomes, we also show that CD can detect small-scale topologically associating domains (TADs) such as those found in Dixon, et al. (2012). CD thus provides a natural and flexible statistical framework for understanding the folding structure of DNA at multiple scales in Hi-C data.
[ { "created": "Thu, 17 Sep 2015 04:23:55 GMT", "version": "v1" } ]
2015-09-18
[ [ "Cabreros", "Irineo", "" ], [ "Abbe", "Emmanuel", "" ], [ "Tsirigos", "Aristotelis", "" ] ]
Community detection (CD) algorithms are applied to Hi-C data to discover new communities of loci in the 3D conformation of human and mouse DNA. We find that CD has some distinct advantages over pre-existing methods: (1) it is capable of finding a variable number of communities, (2) it can detect communities of DNA loci either adjacent or distant in the 1D sequence, and (3) it allows us to obtain a principled value of k, the number of communities present. Forcing k = 2, our method recovers earlier findings of Lieberman-Aiden, et al. (2009), but letting k be a parameter, our method obtains as optimal value k = 6, discovering new candidate communities. In addition to discovering large communities that partition entire chromosomes, we also show that CD can detect small-scale topologically associating domains (TADs) such as those found in Dixon, et al. (2012). CD thus provides a natural and flexible statistical framework for understanding the folding structure of DNA at multiple scales in Hi-C data.
0812.2451
Michel Aoun
Michel Aoun, Annick Hourmant, Jean-Yves Cabon
Efficacy of Put and Spd sprayed on leaves from Brassica juncea plants against Cd2+-induced oxidative stress
Submitted to Chemosphere, 30 pages, 10 Figures and 3 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The protective effect exerted by polyamines (Put and Spd) against cadmium (Cd) stress was investigated in Brassica juncea plants. Treatment with CdCl2 (75 µM) resulted in a rise of Cd accumulation, a decrease of fresh and dry weights in every plant organ, an increase of free polyamine content at limb and stem levels, as well as a decrease at root level. On the other hand, the total conjugated polyamine levels in the stem tissues were unaffected by Cd. In the leaf tissues, this metal caused a reduction of chlorophyll a content, a rise of guaiacol peroxidase (GPOX) activity and an increase of malondialdehyde (MDA), soluble glucide, proline and amino acid contents. Exogenous application, by spraying, of putrescine (Put) and spermidine (Spd) to leaf tissues reduced CdCl2-induced stress. These polyamines proved to exert a partial, though significant, protection of the foliar fresh weight and to alleviate the oxidative stress generated by Cd through reductions of MDA amounts and GPOX (E.C.1.11.1.7) activity. The enhancement of chlorophyll a content in plants by Put, and those of Chl a and Chl b by Spd, both constitute evidence of their efficacy against the Cd2+-induced loss of pigments. In contrast to Put, Spd caused a decrease of Cd content in leaf tissues and a rise in the stems and roots; these findings are in favour of a stimulation of Cd uptake by Spd. The proline stimulation observed with Cd was reduced following the spraying of Put onto tissues, but the decrease induced by Spd was more limited. In the plants treated with Cd, the amino acid contents in the leaves were unaffected by Put and Spd spraying; on the other hand, Cd2+ disturbed polyamine levels (free and acid-soluble conjugated forms); we notice the rise of total free PAs and the decrease of their conjugated ones.
[ { "created": "Fri, 12 Dec 2008 18:28:05 GMT", "version": "v1" } ]
2008-12-15
[ [ "Aoun", "Michel", "" ], [ "Hourmant", "Annick", "" ], [ "Cabon", "Jean-Yves", "" ] ]
The protective effect exerted by polyamines (Put and Spd) against cadmium (Cd) stress was investigated in Brassica juncea plants. Treatment with CdCl2 (75 µM) resulted in a rise of Cd accumulation, a decrease of fresh and dry weights in every plant organ, an increase of free polyamine content at limb and stem levels, as well as a decrease at root level. On the other hand, the total conjugated polyamine levels in the stem tissues were unaffected by Cd. In the leaf tissues, this metal caused a reduction of chlorophyll a content, a rise of guaiacol peroxidase (GPOX) activity and an increase of malondialdehyde (MDA), soluble glucide, proline and amino acid contents. Exogenous application, by spraying, of putrescine (Put) and spermidine (Spd) to leaf tissues reduced CdCl2-induced stress. These polyamines proved to exert a partial, though significant, protection of the foliar fresh weight and to alleviate the oxidative stress generated by Cd through reductions of MDA amounts and GPOX (E.C.1.11.1.7) activity. The enhancement of chlorophyll a content in plants by Put, and those of Chl a and Chl b by Spd, both constitute evidence of their efficacy against the Cd2+-induced loss of pigments. In contrast to Put, Spd caused a decrease of Cd content in leaf tissues and a rise in the stems and roots; these findings are in favour of a stimulation of Cd uptake by Spd. The proline stimulation observed with Cd was reduced following the spraying of Put onto tissues, but the decrease induced by Spd was more limited. In the plants treated with Cd, the amino acid contents in the leaves were unaffected by Put and Spd spraying; on the other hand, Cd2+ disturbed polyamine levels (free and acid-soluble conjugated forms); we notice the rise of total free PAs and the decrease of their conjugated ones.
2303.11944
Kevin Schmidt
Kevin Schmidt, Othalia Larue, Ray Kulhanek, Dylan Flaute, Razvan Veliche, Christian Manasseh, Nelson Dellis, Scott Clouse, Jared Culbertson, Steve Rogers
Representational Tenets for Memory Athletics
null
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
We describe the current state of world-class memory competitions, including the methods used to prepare for and compete in memory competitions, based on the subjective report of World Memory Championship Grandmaster and co-author Nelson Dellis. We then explore the reported experiences through the lens of the Simulated, Situated, and Structurally coherent Qualia (S3Q) theory of consciousness, in order to propose a set of experiments to help further understand the boundaries of expert memory performance.
[ { "created": "Wed, 22 Feb 2023 19:40:59 GMT", "version": "v1" } ]
2023-03-22
[ [ "Schmidt", "Kevin", "" ], [ "Larue", "Othalia", "" ], [ "Kulhanek", "Ray", "" ], [ "Flaute", "Dylan", "" ], [ "Veliche", "Razvan", "" ], [ "Manasseh", "Christian", "" ], [ "Dellis", "Nelson", "" ], [ "Clouse", "Scott", "" ], [ "Culbertson", "Jared", "" ], [ "Rogers", "Steve", "" ] ]
We describe the current state of world-class memory competitions, including the methods used to prepare for and compete in memory competitions, based on the subjective report of World Memory Championship Grandmaster and co-author Nelson Dellis. We then explore the reported experiences through the lens of the Simulated, Situated, and Structurally coherent Qualia (S3Q) theory of consciousness, in order to propose a set of experiments to help further understand the boundaries of expert memory performance.
2304.04662
Tunca Do\u{g}an
Atakan Y\"uksel, Erva Ulusoy, Atabey \"Unl\"u, Tunca Do\u{g}an
SELFormer: Molecular Representation Learning via SELFIES Language Models
22 pages, 4 figures, 8 tables
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Automated computational analysis of the vast chemical space is critical for numerous fields of research such as drug discovery and material science. Representation learning techniques have recently been employed with the primary objective of generating compact and informative numerical expressions of complex data. One approach to efficiently learn molecular representations is processing string-based notations of chemicals via natural language processing (NLP) algorithms. The majority of the methods proposed so far utilize SMILES notations for this purpose; however, SMILES is associated with numerous problems related to validity and robustness, which may prevent the model from effectively uncovering the knowledge hidden in the data. In this study, we propose SELFormer, a transformer architecture-based chemical language model that utilizes a 100% valid, compact and expressive notation, SELFIES, as input, in order to learn flexible and high-quality molecular representations. SELFormer is pre-trained on two million drug-like compounds and fine-tuned for diverse molecular property prediction tasks. Our performance evaluation has revealed that SELFormer outperforms all competing methods, including graph learning-based approaches and SMILES-based chemical language models, on predicting aqueous solubility of molecules and adverse drug reactions. We also visualized molecular representations learned by SELFormer via dimensionality reduction, which indicated that even the pre-trained model can discriminate molecules with differing structural properties. We shared SELFormer as a programmatic tool, together with its datasets and pre-trained models. Overall, our research demonstrates the benefit of using the SELFIES notation in the context of chemical language modeling and opens up new possibilities for the design and discovery of novel drug candidates with desired features.
[ { "created": "Mon, 10 Apr 2023 15:38:25 GMT", "version": "v1" }, { "created": "Thu, 25 May 2023 09:14:14 GMT", "version": "v2" } ]
2023-05-26
[ [ "Yüksel", "Atakan", "" ], [ "Ulusoy", "Erva", "" ], [ "Ünlü", "Atabey", "" ], [ "Doğan", "Tunca", "" ] ]
Automated computational analysis of the vast chemical space is critical for numerous fields of research such as drug discovery and material science. Representation learning techniques have recently been employed with the primary objective of generating compact and informative numerical expressions of complex data. One approach to efficiently learn molecular representations is processing string-based notations of chemicals via natural language processing (NLP) algorithms. The majority of the methods proposed so far utilize SMILES notations for this purpose; however, SMILES is associated with numerous problems related to validity and robustness, which may prevent the model from effectively uncovering the knowledge hidden in the data. In this study, we propose SELFormer, a transformer architecture-based chemical language model that utilizes a 100% valid, compact and expressive notation, SELFIES, as input, in order to learn flexible and high-quality molecular representations. SELFormer is pre-trained on two million drug-like compounds and fine-tuned for diverse molecular property prediction tasks. Our performance evaluation has revealed that SELFormer outperforms all competing methods, including graph learning-based approaches and SMILES-based chemical language models, on predicting aqueous solubility of molecules and adverse drug reactions. We also visualized molecular representations learned by SELFormer via dimensionality reduction, which indicated that even the pre-trained model can discriminate molecules with differing structural properties. We shared SELFormer as a programmatic tool, together with its datasets and pre-trained models. Overall, our research demonstrates the benefit of using the SELFIES notation in the context of chemical language modeling and opens up new possibilities for the design and discovery of novel drug candidates with desired features.
2304.04977
Hesham Abdullah
Ahmed M. Mousbah, Hesham M. Abdullah, Waleed S. Mohammed, Ali M. El-Refy, Mohamed Helmy
Buffalo Genome Projects: Current Situation and Future Perspective in Improving Breeding Programs
two figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Buffaloes are farm animals that contribute to food security by providing high-quality meat and milk. They can better tolerate the adverse effects of global climate change on their meat and milk production. Despite their advantages, buffaloes are heavily neglected animals with fewer studies compared to other farm animals; hence, the real potential of buffaloes has never been realized. Complete genome sequencing projects of buffaloes are essential to better understand the buffalo's biology and production, since they allow scientists to identify important genes and understand how the gene networks interact to determine the critical features of buffaloes. The genome projects are also valuable for gaining better knowledge of growth, development, maintenance, and determining factors associated with increased meat and milk production. Furthermore, having access to a complete genome of high quality and comprehensive annotations provides a powerful tool in breeding programs. The current review surveyed the publicly available buffalo genome projects and studied the impact of incorporating genomic selection into the buffalo breeding program. Our survey of the publicly available buffalo genome projects showed the promise of genomic selection in developing water buffalo science and technology for food security on a global scale.
[ { "created": "Tue, 11 Apr 2023 04:50:05 GMT", "version": "v1" } ]
2023-04-12
[ [ "Mousbah", "Ahmed M.", "" ], [ "Abdullah", "Hesham M.", "" ], [ "Mohammed", "Waleed S.", "" ], [ "El-Refy", "Ali M.", "" ], [ "Helmy", "Mohamed", "" ] ]
Buffaloes are farm animals that contribute to food security by providing high-quality meat and milk. They can better tolerate the adverse effects of global climate change on their meat and milk production. Despite their advantages, buffaloes are heavily neglected animals with fewer studies compared to other farm animals; hence, the real potential of buffaloes has never been realized. Complete genome sequencing projects of buffaloes are essential to better understand the buffalo's biology and production, since they allow scientists to identify important genes and understand how the gene networks interact to determine the critical features of buffaloes. The genome projects are also valuable for gaining better knowledge of growth, development, maintenance, and determining factors associated with increased meat and milk production. Furthermore, having access to a complete genome of high quality and comprehensive annotations provides a powerful tool in breeding programs. The current review surveyed the publicly available buffalo genome projects and studied the impact of incorporating genomic selection into the buffalo breeding program. Our survey of the publicly available buffalo genome projects showed the promise of genomic selection in developing water buffalo science and technology for food security on a global scale.
1401.8205
Florian Hartig
Florian Hartig, Claudia Dislich, Thorsten Wiegand and Andreas Huth
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
Corresponds, apart from layout, to the final version published in Biogeosciences
Biogeosciences (2014) 11, 1261-1272
10.5194/bg-11-1261-2014
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, [...]
[ { "created": "Fri, 31 Jan 2014 16:02:33 GMT", "version": "v1" }, { "created": "Thu, 8 May 2014 15:35:45 GMT", "version": "v2" } ]
2014-05-09
[ [ "Hartig", "Florian", "" ], [ "Dislich", "Claudia", "" ], [ "Wiegand", "Thorsten", "" ], [ "Huth", "Andreas", "" ] ]
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, [...]
1711.04266
David Holcman
Zeev Schuss, Kimsy Tor, David Holcman
Do cells sense time by number of divisions?
No comments
null
null
null
q-bio.SC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Do biological cells sense time by the number of their divisions, a process that ends at senescence? We consider the question "can the cell's perception of time be expressed through the length of the shortest telomere?" The answer is that the absolute time before senescence cannot be expressed by the telomere's length and that a cell can survive many more divisions than intuitively expected. This apparent paradox is due to shortening and elongation of the telomere, which suggests a random walk model of the telomere's length. The model indicates two phases: first, a deterministic drift of the length toward a quasi-equilibrium state, and second, persistence of the length near an attracting state for the majority of divisions prior to senescence. The measure of stability of the latter phase is the expected number of divisions at the attractor ("lifetime") prior to crossing a threshold to senescence. The telomerase regulates stability by creating an effective potential barrier that statistically separates the shortest lifetime from the next shortest. The random walk has to overcome the barrier in order to extend the range of the first regime. The model explains how random telomere dynamics underlies the extension of cell survival time.
[ { "created": "Sun, 12 Nov 2017 10:38:53 GMT", "version": "v1" } ]
2017-11-15
[ [ "Schuss", "Zeev", "" ], [ "Tor", "Kimsy", "" ], [ "Holcman", "David", "" ] ]
Do biological cells sense time by the number of their divisions, a process that ends at senescence? We consider the question "can the cell's perception of time be expressed through the length of the shortest telomere?" The answer is that the absolute time before senescence cannot be expressed by the telomere's length and that a cell can survive many more divisions than intuitively expected. This apparent paradox is due to shortening and elongation of the telomere, which suggests a random walk model of the telomere's length. The model indicates two phases: first, a deterministic drift of the length toward a quasi-equilibrium state, and second, persistence of the length near an attracting state for the majority of divisions prior to senescence. The measure of stability of the latter phase is the expected number of divisions at the attractor ("lifetime") prior to crossing a threshold to senescence. The telomerase regulates stability by creating an effective potential barrier that statistically separates the shortest lifetime from the next shortest. The random walk has to overcome the barrier in order to extend the range of the first regime. The model explains how random telomere dynamics underlies the extension of cell survival time.
2201.11717
Cristiano Capone
Cristiano Capone, Cosimo Lupo, Paolo Muratore, Pier Stanislao Paolucci
Burst-dependent plasticity and dendritic amplification support target-based learning and hierarchical imitation learning
9 pages, 3 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain can learn to solve a wide range of tasks with high temporal and energetic efficiency. However, most biological models are composed of simple single-compartment neurons and cannot achieve the state-of-the-art performance of artificial intelligence. We propose a multi-compartment model of a pyramidal neuron, in which bursts and dendritic input segregation make it possible to plausibly support biological target-based learning. In target-based learning, the internal solution of a problem (a spatio-temporal pattern of bursts in our case) is suggested to the network, bypassing the problems of error backpropagation and credit assignment. Finally, we show that this neuronal architecture naturally supports the orchestration of hierarchical imitation learning, enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks.
[ { "created": "Thu, 27 Jan 2022 18:17:45 GMT", "version": "v1" } ]
2022-01-28
[ [ "Capone", "Cristiano", "" ], [ "Lupo", "Cosimo", "" ], [ "Muratore", "Paolo", "" ], [ "Paolucci", "Pier Stanislao", "" ] ]
The brain can learn to solve a wide range of tasks with high temporal and energetic efficiency. However, most biological models are composed of simple single-compartment neurons and cannot achieve the state-of-the-art performance of artificial intelligence. We propose a multi-compartment model of a pyramidal neuron, in which bursts and dendritic input segregation make it possible to plausibly support biological target-based learning. In target-based learning, the internal solution of a problem (a spatio-temporal pattern of bursts in our case) is suggested to the network, bypassing the problems of error backpropagation and credit assignment. Finally, we show that this neuronal architecture naturally supports the orchestration of hierarchical imitation learning, enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks.
2106.13527
Kang Hao Cheong Prof
Tao Wen, Eugene V. Koonin, Kang Hao Cheong
Alternating active-dormitive strategy enables overtaking by disadvantaged prey through Parrondo's paradox
null
null
null
null
q-bio.PE nlin.CD
http://creativecommons.org/licenses/by-nc-sa/4.0/
Dormancy is a costly adaptive strategy that is widespread among living organisms inhabiting diverse environments. We explore mathematical models of predator-prey systems in order to assess the impact of prey dormancy on the competition between two types of prey: one perennially active (PA) and one capable of entering dormancy (dormitive). Both the active form and the dormant form of the dormitive prey are individually at a disadvantage compared to the PA prey and would go extinct due to their low growth rate, energy wasted on the production of dormant prey, and inability of the latter to grow autonomously. However, the dormitive prey can paradoxically outcompete the PA prey with superior traits, and even cause its extinction, by alternating between the two losing strategies. This outcome recapitulates the game-theoretic Parrondo's paradox, where two losing strategies combine to achieve a winning outcome. We observed higher fitness of the dormitive prey in rich environments because a large predator population in a rich environment cannot be supported by the prey without adopting an evasive strategy, that is, dormancy. In such environments, populations experience large-scale fluctuations, which can be survived by dormitive but not by PA prey. Dormancy of the prey appears to be a natural evolutionary response to self-destructive over-predation that stabilizes evolving predator-prey systems through Parrondo's paradox.
[ { "created": "Fri, 25 Jun 2021 09:37:06 GMT", "version": "v1" } ]
2021-06-28
[ [ "Wen", "Tao", "" ], [ "Koonin", "Eugene V.", "" ], [ "Cheong", "Kang Hao", "" ] ]
Dormancy is a costly adaptive strategy that is widespread among living organisms inhabiting diverse environments. We explore mathematical models of predator-prey systems in order to assess the impact of prey dormancy on the competition between two types of prey: one perennially active (PA) and one capable of entering dormancy (dormitive). Both the active form and the dormant form of the dormitive prey are individually at a disadvantage compared to the PA prey and would go extinct due to their low growth rate, energy wasted on the production of dormant prey, and inability of the latter to grow autonomously. However, the dormitive prey can paradoxically outcompete the PA prey with superior traits, and even cause its extinction, by alternating between the two losing strategies. This outcome recapitulates the game-theoretic Parrondo's paradox, where two losing strategies combine to achieve a winning outcome. We observed higher fitness of the dormitive prey in rich environments because a large predator population in a rich environment cannot be supported by the prey without adopting an evasive strategy, that is, dormancy. In such environments, populations experience large-scale fluctuations, which can be survived by dormitive but not by PA prey. Dormancy of the prey appears to be a natural evolutionary response to self-destructive over-predation that stabilizes evolving predator-prey systems through Parrondo's paradox.
1907.08946
Misha Tsodyks
Antonios Georgiou, Mikhail Katkov, Misha Tsodyks
Retroactive Interference Model of Power-Law Forgetting
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Memory and forgetting constitute two sides of the same coin, and although the first has been rigorously investigated, the latter is often overlooked. A number of experiments in psychology and experimental neuroscience have described the properties of forgetting in humans and animals, showing that forgetting exhibits a power-law relationship with time. These results indicate a counter-intuitive property of forgetting, namely that old memories are more stable than younger ones. We have devised a phenomenological model that is based on the principle of retroactive interference, driven by a multi-dimensional valence measure for acquired memories. The model has only one free integer parameter and can be solved analytically. We performed recognition experiments with long streams of words, obtaining a good match to a five-dimensional version of the model.
[ { "created": "Sun, 21 Jul 2019 09:49:23 GMT", "version": "v1" } ]
2019-07-23
[ [ "Georgiou", "Antonios", "" ], [ "Katkov", "Mikhail", "" ], [ "Tsodyks", "Misha", "" ] ]
Memory and forgetting constitute two sides of the same coin, and although the first has been rigorously investigated, the latter is often overlooked. A number of experiments under the realm of psychology and experimental neuroscience have described the properties of forgetting in humans and animals, showing that forgetting exhibits a power-law relationship with time. These results indicate a counter-intuitive property of forgetting, namely that old memories are more stable than younger ones. We have devised a phenomenological model that is based on the principle of retroactive interference, driven by a multi-dimensional valence measure for acquired memories. The model has only one free integer parameter and can be solved analytically. We performed recognition experiments with long streams of words, finding a good match to a five-dimensional version of the model.
1210.0495
Andr\'as Szil\'agyi
D\'aniel Gy\"orffy, P\'eter Z\'avodszky and Andr\'as Szil\'agyi
"Pull moves" for rectangular lattice polymer models are not fully reversible
4 pages, 2 figures. To be published in IEEE/ACM Transactions on Computational Biology and Bioinformatics
IEEE/ACM Trans Comput Biol Bioinform., vol. 9(6), pp. 1847-49, 2012
10.1109/TCBB.2012.129
null
q-bio.BM cond-mat.soft physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
"Pull moves" is a popular move set for lattice polymer model simulations. We show that the proof given for its reversibility earlier is flawed, and some moves are irreversible, which leads to biases in the parameters estimated from the simulations. We show how to make the move set fully reversible.
[ { "created": "Mon, 1 Oct 2012 18:23:48 GMT", "version": "v1" } ]
2013-01-04
[ [ "Györffy", "Dániel", "" ], [ "Závodszky", "Péter", "" ], [ "Szilágyi", "András", "" ] ]
"Pull moves" is a popular move set for lattice polymer model simulations. We show that the proof given for its reversibility earlier is flawed, and some moves are irreversible, which leads to biases in the parameters estimated from the simulations. We show how to make the move set fully reversible.
1907.01504
Nikolay Koshev
Nikolay Koshev, Nikolay Yavich, Mikhail Malovichko, Ekaterina Skidchenko, Maxim Fedorov
FEM-based Scalp-to-Cortex data mapping via the solution of the Cauchy problem
16 pages, 6 figures
null
null
null
q-bio.NC physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an approach and a numerical algorithm for pre-processing of electroencephalography (EEG) data, enabling the generation of an accurate mapping of the potential from the measurement area (the scalp) to the brain surface. The algorithm is based on the solution of the ill-posed Cauchy problem for Laplace's equation using a tetrahedral finite-element linear approximation. Application of the proposed algorithm substantially increases the spatial resolution of the EEG technique, making it comparable with the much more complicated electrocorticography (ECoG) method.
[ { "created": "Fri, 21 Jun 2019 12:47:12 GMT", "version": "v1" } ]
2019-07-03
[ [ "Koshev", "Nikolay", "" ], [ "Yavich", "Nikolay", "" ], [ "Malovichko", "Mikhail", "" ], [ "Skidchenko", "Ekaterina", "" ], [ "Fedorov", "Maxim", "" ] ]
We propose an approach and a numerical algorithm for pre-processing of electroencephalography (EEG) data, enabling the generation of an accurate mapping of the potential from the measurement area (the scalp) to the brain surface. The algorithm is based on the solution of the ill-posed Cauchy problem for Laplace's equation using a tetrahedral finite-element linear approximation. Application of the proposed algorithm substantially increases the spatial resolution of the EEG technique, making it comparable with the much more complicated electrocorticography (ECoG) method.
1605.01096
Daniel Toker
Daniel Toker and Friedrich Sommer
Moving Past the Minimum Information Partition: How To Quickly and Accurately Calculate Integrated Information
21 pages, 9 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An outstanding challenge with the Integrated Information Theory of Consciousness (IIT) is to find a way of rapidly and accurately calculating integrated information from neural data. A number of measures of integrated information based on time series data have been proposed, but most measures require finding the Minimum Information Partition of a network, which is computationally expensive and not practical for real brain data. Here, we introduce a novel partition, the Maximum Modularity Partition, across which to quickly calculate integrated information. We also introduce a novel detection task on simulated data to evaluate the performance of integrated information measures across different partitions. We show that integrated information can be reliably and quickly calculated across the Maximum Modularity Partition, as well as the previously proposed atomic partition, even in relatively large networks, which constitutes an advance in researchers' ability to empirically test the predictions of IIT in real brain data.
[ { "created": "Tue, 3 May 2016 21:26:17 GMT", "version": "v1" } ]
2016-05-05
[ [ "Toker", "Daniel", "" ], [ "Sommer", "Friedrich", "" ] ]
An outstanding challenge with the Integrated Information Theory of Consciousness (IIT) is to find a way of rapidly and accurately calculating integrated information from neural data. A number of measures of integrated information based on time series data have been proposed, but most measures require finding the Minimum Information Partition of a network, which is computationally expensive and not practical for real brain data. Here, we introduce a novel partition, the Maximum Modularity Partition, across which to quickly calculate integrated information. We also introduce a novel detection task on simulated data to evaluate the performance of integrated information measures across different partitions. We show that integrated information can be reliably and quickly calculated across the Maximum Modularity Partition, as well as the previously proposed atomic partition, even in relatively large networks, which constitutes an advance in researchers' ability to empirically test the predictions of IIT in real brain data.
2010.03277
Niklas Kolbe
Anne Dietrich, Niklas Kolbe, Nikolaos Sfakianakis and Christina Surulescu
Multiscale modeling of glioma invasion: from receptor binding to flux-limited macroscopic PDEs
28 pages, 7 figures
null
null
null
q-bio.TO math.AP q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel approach to modeling cell migration in an anisotropic environment with biochemical heterogeneity and interspecies interactions, using as a paradigm glioma invasion in brain tissue under the influence of hypoxia-triggered angiogenesis. The multiscale procedure links single-cell and mesoscopic dynamics with population level behavior, leading on the macroscopic scale to flux-limited glioma diffusion and multiple taxis. We verify the non-negativity of regular solutions (provided they exist) to the obtained macroscopic PDE-ODE system and perform numerical simulations to illustrate the solution behavior under several scenarios.
[ { "created": "Wed, 7 Oct 2020 08:58:44 GMT", "version": "v1" }, { "created": "Tue, 13 Apr 2021 11:04:32 GMT", "version": "v2" } ]
2021-04-14
[ [ "Dietrich", "Anne", "" ], [ "Kolbe", "Niklas", "" ], [ "Sfakianakis", "Nikolaos", "" ], [ "Surulescu", "Christina", "" ] ]
We propose a novel approach to modeling cell migration in an anisotropic environment with biochemical heterogeneity and interspecies interactions, using as a paradigm glioma invasion in brain tissue under the influence of hypoxia-triggered angiogenesis. The multiscale procedure links single-cell and mesoscopic dynamics with population level behavior, leading on the macroscopic scale to flux-limited glioma diffusion and multiple taxis. We verify the non-negativity of regular solutions (provided they exist) to the obtained macroscopic PDE-ODE system and perform numerical simulations to illustrate the solution behavior under several scenarios.
2005.05784
Mengjia Xu
Mengjia Xu, David Lopez Sanz, Pilar Garces, Fernando Maestu, Quanzheng Li, Dimitrios Pantazis
A Graph Gaussian Embedding Method for Predicting Alzheimer's Disease Progression with MEG Brain Networks
null
null
null
null
q-bio.NC cs.LG eess.SP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Characterizing the subtle changes of functional brain networks associated with the pathological cascade of Alzheimer's disease (AD) is important for early diagnosis and prediction of disease progression prior to clinical symptoms. We developed a new deep learning method, termed multiple graph Gaussian embedding model (MG2G), which can learn highly informative network features by mapping high-dimensional resting-state brain networks into a low-dimensional latent space. These latent distribution-based embeddings enable a quantitative characterization of subtle and heterogeneous brain connectivity patterns at different regions and can be used as input to traditional classifiers for various downstream graph analytic tasks, such as AD early stage prediction, and statistical evaluation of between-group significant alterations across brain regions. We used MG2G to detect the intrinsic latent dimensionality of MEG brain networks, predict the progression of patients with mild cognitive impairment (MCI) to AD, and identify brain regions with network alterations related to MCI.
[ { "created": "Fri, 8 May 2020 02:29:24 GMT", "version": "v1" }, { "created": "Tue, 10 Nov 2020 21:00:39 GMT", "version": "v2" } ]
2020-11-12
[ [ "Xu", "Mengjia", "" ], [ "Sanz", "David Lopez", "" ], [ "Garces", "Pilar", "" ], [ "Maestu", "Fernando", "" ], [ "Li", "Quanzheng", "" ], [ "Pantazis", "Dimitrios", "" ] ]
Characterizing the subtle changes of functional brain networks associated with the pathological cascade of Alzheimer's disease (AD) is important for early diagnosis and prediction of disease progression prior to clinical symptoms. We developed a new deep learning method, termed multiple graph Gaussian embedding model (MG2G), which can learn highly informative network features by mapping high-dimensional resting-state brain networks into a low-dimensional latent space. These latent distribution-based embeddings enable a quantitative characterization of subtle and heterogeneous brain connectivity patterns at different regions and can be used as input to traditional classifiers for various downstream graph analytic tasks, such as AD early stage prediction, and statistical evaluation of between-group significant alterations across brain regions. We used MG2G to detect the intrinsic latent dimensionality of MEG brain networks, predict the progression of patients with mild cognitive impairment (MCI) to AD, and identify brain regions with network alterations related to MCI.
1405.2044
Fernando Antoneli Jr
Jean P. Zukurov, Sieberth do Nascimento-Brito, Angela C. Volpini, Guilherme Oliveira, Luiz Mario R. Janini and Fernando Antoneli
Estimation of genetic diversity in viral populations from next generation sequencing data with extremely deep coverage
30 pages, 5 figures, 2 tables, Tanden is written in C# (Microsoft), runs on the Windows operating system, and can be downloaded from: http://tanden.url.ph
Algorithms for Molecular Biology, 2016, 11:2
10.1186/s13015-016-0064-x
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a method and discuss its computational implementation as an integrated tool for the analysis of viral genetic diversity on data generated by high-throughput sequencing. Most methods for viral diversity estimation proposed so far are intended to take advantage of the longer reads produced by some NGS platforms in order to estimate a population of haplotypes. Our goal here is to take advantage of distinct virtues of a certain kind of NGS platform - the platform SOLiD (Life Technologies) is an example - that has not received much attention due to the short length of its reads, which renders haplotype estimation very difficult. However, this kind of platform has a very low error rate and extremely deep coverage per site, and our method is designed to take advantage of these characteristics. We propose to measure the populational genetic diversity through a family of multinomial probability distributions indexed by the sites of the virus genome, each one representing the populational distribution of the diversity per site. The implementation of the method focuses on two main optimization strategies: a read mapping/alignment procedure that aims at recovering the maximum possible number of short reads; and the estimation of the multinomial parameters through a Bayesian approach, which, unlike simple frequency counting, allows one to take into account the prior information of the control population within the inference of a posterior experimental condition and provides a natural way to separate signal from noise, since it automatically furnishes Bayesian confidence intervals. The methods described in this paper have been implemented as an integrated tool called Tanden (Tool for Analysis of Diversity in Viral Populations).
[ { "created": "Thu, 8 May 2014 18:58:12 GMT", "version": "v1" }, { "created": "Fri, 19 Sep 2014 17:11:20 GMT", "version": "v2" }, { "created": "Tue, 9 Dec 2014 12:24:46 GMT", "version": "v3" }, { "created": "Fri, 24 Apr 2015 13:27:13 GMT", "version": "v4" } ]
2016-03-22
[ [ "Zukurov", "Jean P.", "" ], [ "Nascimento-Brito", "Sieberth do", "" ], [ "Volpini", "Angela C.", "" ], [ "Oliveira", "Guilherme", "" ], [ "Janini", "Luiz Mario R.", "" ], [ "Antoneli", "Fernando", "" ] ]
In this paper we propose a method and discuss its computational implementation as an integrated tool for the analysis of viral genetic diversity on data generated by high-throughput sequencing. Most methods for viral diversity estimation proposed so far are intended to take advantage of the longer reads produced by some NGS platforms in order to estimate a population of haplotypes. Our goal here is to take advantage of distinct virtues of a certain kind of NGS platform - the platform SOLiD (Life Technologies) is an example - that has not received much attention due to the short length of its reads, which renders haplotype estimation very difficult. However, this kind of platform has a very low error rate and extremely deep coverage per site, and our method is designed to take advantage of these characteristics. We propose to measure the populational genetic diversity through a family of multinomial probability distributions indexed by the sites of the virus genome, each one representing the populational distribution of the diversity per site. The implementation of the method focuses on two main optimization strategies: a read mapping/alignment procedure that aims at recovering the maximum possible number of short reads; and the estimation of the multinomial parameters through a Bayesian approach, which, unlike simple frequency counting, allows one to take into account the prior information of the control population within the inference of a posterior experimental condition and provides a natural way to separate signal from noise, since it automatically furnishes Bayesian confidence intervals. The methods described in this paper have been implemented as an integrated tool called Tanden (Tool for Analysis of Diversity in Viral Populations).
1512.05039
Trinh Xuan Hoang
Trinh X. Hoang, Hoa Lan Trinh, Achille Giacometti, Rudolf Podgornik, Jayanth R. Banavar and Amos Maritan
Phase diagram of the ground states of DNA condensates
5 pages, 5 figures, to appear in PRE Rapid Communication
Phys. Rev. E (Rapid Communications) 92, 060701(R) (2015)
10.1103/PhysRevE.92.060701
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phase diagram of the ground states of DNA in a bad solvent is studied for a semi-flexible polymer model with a generalized local elastic bending potential characterized by a nonlinearity parameter $x$ and effective self-attraction promoting compaction. $x=1$ corresponds to the worm-like chain model. Surprisingly, the phase diagram as well as the transition lines between the ground states are found to be a function of $x$. The model provides a simple explanation for the results of prior experimental and computational studies and makes predictions for the specific geometries of the ground states. The results underscore the impact of the form of the microscopic bending energy at macroscopic observable scales.
[ { "created": "Wed, 16 Dec 2015 03:43:12 GMT", "version": "v1" } ]
2016-01-28
[ [ "Hoang", "Trinh X.", "" ], [ "Trinh", "Hoa Lan", "" ], [ "Giacometti", "Achille", "" ], [ "Podgornik", "Rudolf", "" ], [ "Banavar", "Jayanth R.", "" ], [ "Maritan", "Amos", "" ] ]
Phase diagram of the ground states of DNA in a bad solvent is studied for a semi-flexible polymer model with a generalized local elastic bending potential characterized by a nonlinearity parameter $x$ and effective self-attraction promoting compaction. $x=1$ corresponds to the worm-like chain model. Surprisingly, the phase diagram as well as the transition lines between the ground states are found to be a function of $x$. The model provides a simple explanation for the results of prior experimental and computational studies and makes predictions for the specific geometries of the ground states. The results underscore the impact of the form of the microscopic bending energy at macroscopic observable scales.
2208.09559
Chengyu Liu
Chengyu Liu, Wei Wang
Neural network facilitated ab initio derivation of linear formula: A case study on formulating the relationship between DNA motifs and gene expression
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Developing models with high interpretability and even deriving formulas to quantify relationships between biological data is an emerging need. We propose here a framework for ab initio derivation of sequence motifs and linear formula using a new approach based on the interpretable neural network model called contextual regression model. We showed that this linear model could predict gene expression levels using promoter sequences with a performance comparable to deep neural network models. We uncovered a list of 300 motifs with important regulatory roles on gene expression and showed that they also had significant contributions to cell-type specific gene expression in 154 diverse cell types. This work illustrates the possibility of deriving formulas to represent biology laws that may not be easily elucidated. (https://github.com/Wang-lab-UCSD/Motif_Finding_Contextual_Regression)
[ { "created": "Fri, 19 Aug 2022 22:29:30 GMT", "version": "v1" } ]
2022-08-23
[ [ "Liu", "Chengyu", "" ], [ "Wang", "Wei", "" ] ]
Developing models with high interpretability and even deriving formulas to quantify relationships between biological data is an emerging need. We propose here a framework for ab initio derivation of sequence motifs and linear formula using a new approach based on the interpretable neural network model called contextual regression model. We showed that this linear model could predict gene expression levels using promoter sequences with a performance comparable to deep neural network models. We uncovered a list of 300 motifs with important regulatory roles on gene expression and showed that they also had significant contributions to cell-type specific gene expression in 154 diverse cell types. This work illustrates the possibility of deriving formulas to represent biology laws that may not be easily elucidated. (https://github.com/Wang-lab-UCSD/Motif_Finding_Contextual_Regression)
2405.09748
Okezue Bell
Okezue Bell, Anthony Bell
A Mathematical Reconstruction of Endothelial Cell Networks
14 pages, 9 figures
null
null
null
q-bio.CB math.CO math.GR q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Endothelial cells form the linchpin of vascular and lymphatic systems, creating intricate networks that are pivotal for angiogenesis, controlling vessel permeability, and maintaining tissue homeostasis. Despite their critical roles, there is no rigorous mathematical framework to represent the connectivity structure of endothelial networks. Here, we develop a pioneering mathematical formalism called $\pi$-graphs to model the multi-type junction connectivity of endothelial networks. We define $\pi$-graphs as abstract objects consisting of endothelial cells and their junction sets, and introduce the key notion of $\pi$-isomorphism that captures when two $\pi$-graphs have the same connectivity structure. We prove several propositions relating the $\pi$-graph representation to traditional graph-theoretic representations, showing that $\pi$-isomorphism implies isomorphism of the corresponding unnested endothelial graphs, but not vice versa. We also introduce a temporal dimension to the $\pi$-graph formalism and explore the evolution of topological invariants in spatial embeddings of $\pi$-graphs. Finally, we outline a topological framework to represent the spatial embedding of $\pi$-graphs into geometric spaces. The $\pi$-graph formalism provides a novel tool for quantitative analysis of endothelial network connectivity and its relation to function, with the potential to yield new insights into vascular physiology and pathophysiology.
[ { "created": "Thu, 16 May 2024 01:14:13 GMT", "version": "v1" } ]
2024-05-17
[ [ "Bell", "Okezue", "" ], [ "Bell", "Anthony", "" ] ]
Endothelial cells form the linchpin of vascular and lymphatic systems, creating intricate networks that are pivotal for angiogenesis, controlling vessel permeability, and maintaining tissue homeostasis. Despite their critical roles, there is no rigorous mathematical framework to represent the connectivity structure of endothelial networks. Here, we develop a pioneering mathematical formalism called $\pi$-graphs to model the multi-type junction connectivity of endothelial networks. We define $\pi$-graphs as abstract objects consisting of endothelial cells and their junction sets, and introduce the key notion of $\pi$-isomorphism that captures when two $\pi$-graphs have the same connectivity structure. We prove several propositions relating the $\pi$-graph representation to traditional graph-theoretic representations, showing that $\pi$-isomorphism implies isomorphism of the corresponding unnested endothelial graphs, but not vice versa. We also introduce a temporal dimension to the $\pi$-graph formalism and explore the evolution of topological invariants in spatial embeddings of $\pi$-graphs. Finally, we outline a topological framework to represent the spatial embedding of $\pi$-graphs into geometric spaces. The $\pi$-graph formalism provides a novel tool for quantitative analysis of endothelial network connectivity and its relation to function, with the potential to yield new insights into vascular physiology and pathophysiology.
q-bio/0401025
Alan McKane
Barbara Drossel, Alan McKane and Christopher Quince
The impact of non-linear functional responses on the long-term evolution of food web structure
16 pages, 7 figures
null
null
null
q-bio.PE cond-mat.stat-mech
null
We investigate the long-term web structure emerging in evolutionary food web models when different types of functional responses are used. We find that large and complex webs with several trophic layers arise only if the population dynamics is such that it allows predators to focus on their best prey species. This can be achieved using modified Lotka-Volterra or Holling/Beddington functional responses with effective couplings that depend on the predator's efficiency at exploiting the prey, or a ratio-dependent functional response with adaptive foraging. In contrast, if standard Lotka-Volterra or Holling/Beddington functional responses are used, long-term evolution generates webs with almost all species being basal, and with additionally many links between these species. Interestingly, in all cases studied, a large proportion of weak links result naturally from the evolution of the food webs.
[ { "created": "Mon, 19 Jan 2004 15:07:55 GMT", "version": "v1" } ]
2007-05-23
[ [ "Drossel", "Barbara", "" ], [ "McKane", "Alan", "" ], [ "Quince", "Christopher", "" ] ]
We investigate the long-term web structure emerging in evolutionary food web models when different types of functional responses are used. We find that large and complex webs with several trophic layers arise only if the population dynamics is such that it allows predators to focus on their best prey species. This can be achieved using modified Lotka-Volterra or Holling/Beddington functional responses with effective couplings that depend on the predator's efficiency at exploiting the prey, or a ratio-dependent functional response with adaptive foraging. In contrast, if standard Lotka-Volterra or Holling/Beddington functional responses are used, long-term evolution generates webs with almost all species being basal, and with additionally many links between these species. Interestingly, in all cases studied, a large proportion of weak links result naturally from the evolution of the food webs.
1107.1507
Carla Goldman
Lucas W. Rossi and Carla Goldman
Jamming of molecular motors as a tool for transport cargos along microtubules
null
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The hopping model for cargo transport by molecular motors introduced in Refs. goldman1, goldman2 is extended here in order to incorporate the movement of cargo-motor complexes. In this context, the hopping process expresses the possibility for cargo to be exchanged between neighboring motors on a microtubule where the transport takes place. Jamming of motors is essential for cargos to execute long-range movement in this way. Results from computer simulations performed using the extended model indicate that cargo may execute bidirectional movement in the presence of motors of a single polarity, confirming previous analytical results. Moreover, these results suggest the existence of a balance between cargo hopping and the movement of the complex that may control the efficiency of cargo transfer and cargo delivery. Considerations about the energy involved in the transport process show that the model presented here offers a considerable advantage over other models in the literature in which cargo movement is restricted to the movement of cargo-motor complexes.
[ { "created": "Thu, 7 Jul 2011 20:09:58 GMT", "version": "v1" }, { "created": "Tue, 1 Nov 2011 20:07:03 GMT", "version": "v2" }, { "created": "Thu, 26 Jan 2012 15:47:49 GMT", "version": "v3" } ]
2012-01-27
[ [ "Rossi", "Lucas W.", "" ], [ "Goldman", "Carla", "" ] ]
The hopping model for cargo transport by molecular motors introduced in Refs. goldman1, goldman2 is extended here in order to incorporate the movement of cargo-motor complexes. In this context, the hopping process expresses the possibility for cargo to be exchanged between neighboring motors on a microtubule where the transport takes place. Jamming of motors is essential for cargos to execute long-range movement in this way. Results from computer simulations performed using the extended model indicate that cargo may execute bidirectional movement in the presence of motors of a single polarity, confirming previous analytical results. Moreover, these results suggest the existence of a balance between cargo hopping and the movement of the complex that may control the efficiency of cargo transfer and cargo delivery. Considerations about the energy involved in the transport process show that the model presented here offers a considerable advantage over other models in the literature in which cargo movement is restricted to the movement of cargo-motor complexes.
q-bio/0506002
D. Allan Drummond
D. Allan Drummond, Jesse D. Bloom, Christoph Adami, Claus O. Wilke, Frances H. Arnold
Why highly expressed proteins evolve slowly
40 pages, 3 figures, with supporting information
Proc. Nat'l. Acad. Sci. USA 102(40):14338-14343 (2005)
10.1073/pnas.0504070102
null
q-bio.PE q-bio.GN
null
Much recent work has explored molecular and population-genetic constraints on the rate of protein sequence evolution. The best predictor of evolutionary rate is expression level, for reasons which have remained unexplained. Here, we hypothesize that selection to reduce the burden of protein misfolding will favor protein sequences with increased robustness to translational missense errors. Pressure for translational robustness increases with expression level and constrains sequence evolution. Using several sequenced yeast genomes, global expression and protein abundance data, and sets of paralogs traceable to an ancient whole-genome duplication in yeast, we rule out several confounding effects and show that expression level explains roughly half the variation in Saccharomyces cerevisiae protein evolutionary rates. We examine causes for expression's dominant role and find that genome-wide tests favor the translational robustness explanation over existing hypotheses that invoke constraints on function or translational efficiency. Our results suggest that proteins evolve at rates largely unrelated to their functions, and can explain why highly expressed proteins evolve slowly across the tree of life.
[ { "created": "Thu, 2 Jun 2005 17:20:47 GMT", "version": "v1" }, { "created": "Fri, 12 Aug 2005 04:32:03 GMT", "version": "v2" } ]
2007-05-23
[ [ "Drummond", "D. Allan", "" ], [ "Bloom", "Jesse D.", "" ], [ "Adami", "Christoph", "" ], [ "Wilke", "Claus O.", "" ], [ "Arnold", "Frances H.", "" ] ]
Much recent work has explored molecular and population-genetic constraints on the rate of protein sequence evolution. The best predictor of evolutionary rate is expression level, for reasons which have remained unexplained. Here, we hypothesize that selection to reduce the burden of protein misfolding will favor protein sequences with increased robustness to translational missense errors. Pressure for translational robustness increases with expression level and constrains sequence evolution. Using several sequenced yeast genomes, global expression and protein abundance data, and sets of paralogs traceable to an ancient whole-genome duplication in yeast, we rule out several confounding effects and show that expression level explains roughly half the variation in Saccharomyces cerevisiae protein evolutionary rates. We examine causes for expression's dominant role and find that genome-wide tests favor the translational robustness explanation over existing hypotheses that invoke constraints on function or translational efficiency. Our results suggest that proteins evolve at rates largely unrelated to their functions, and can explain why highly expressed proteins evolve slowly across the tree of life.
1507.08734
Ruriko Yoshida
Jing Xi and Jin Xie and Ruriko Yoshida and Stefan Forcey
Stochastic safety radius on Neighbor-Joining method and Balanced Minimal Evolution on small trees
6 figures. 12 pages
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A distance-based method to reconstruct a phylogenetic tree with $n$ leaves takes a distance matrix, $n \times n$ symmetric matrix with $0$s in the diagonal, as its input and reconstructs a tree with $n$ leaves using tools in combinatorics. A safety radius is a radius from a tree metric (a distance matrix realizing a true tree) within which the input distance matrices must all lie in order to satisfy a precise combinatorial condition under which the distance-based method is guaranteed to return a correct tree. A stochastic safety radius is a safety radius under which the distance-based method is guaranteed to return a correct tree within a certain probability. In this paper we investigated stochastic safety radii for the neighbor-joining (NJ) method and balanced minimal evolution (BME) method for $n = 5$.
[ { "created": "Fri, 31 Jul 2015 02:47:11 GMT", "version": "v1" } ]
2015-08-03
[ [ "Xi", "Jing", "" ], [ "Xie", "Jin", "" ], [ "Yoshida", "Ruriko", "" ], [ "Forcey", "Stefan", "" ] ]
A distance-based method to reconstruct a phylogenetic tree with $n$ leaves takes a distance matrix, an $n \times n$ symmetric matrix with $0$s on the diagonal, as its input and reconstructs a tree with $n$ leaves using tools from combinatorics. A safety radius is a radius from a tree metric (a distance matrix realizing a true tree) within which the input distance matrices must all lie in order to satisfy a precise combinatorial condition under which the distance-based method is guaranteed to return a correct tree. A stochastic safety radius is a safety radius under which the distance-based method is guaranteed to return a correct tree with a certain probability. In this paper we investigate stochastic safety radii for the neighbor-joining (NJ) method and the balanced minimal evolution (BME) method for $n = 5$.
0907.5539
Leah B. Shaw
Leah B. Shaw, Ira B. Schwartz
Enhanced vaccine control of epidemics in adaptive networks
null
null
10.1103/PhysRevE.81.046120
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study vaccine control for disease spread on an adaptive network modeling disease avoidance behavior. Control is implemented by adding Poisson distributed vaccination of susceptibles. We show that vaccine control is much more effective in adaptive networks than in static networks due to an interaction between the adaptive network rewiring and the vaccine application. Disease extinction rates using vaccination are computed, and orders of magnitude less vaccine application is needed to drive the disease to extinction in an adaptive network than in a static one.
[ { "created": "Fri, 31 Jul 2009 14:12:44 GMT", "version": "v1" } ]
2015-05-13
[ [ "Shaw", "Leah B.", "" ], [ "Schwartz", "Ira B.", "" ] ]
We study vaccine control for disease spread on an adaptive network modeling disease avoidance behavior. Control is implemented by adding Poisson distributed vaccination of susceptibles. We show that vaccine control is much more effective in adaptive networks than in static networks due to an interaction between the adaptive network rewiring and the vaccine application. Disease extinction rates using vaccination are computed, and orders of magnitude less vaccine application is needed to drive the disease to extinction in an adaptive network than in a static one.
1808.04860
Mohsen Nabian
Mohsen Nabian, Uichiro Narusawa
Quantification of Alveolar Recruitment for the Optimization of Mechanical Ventilation using Quasi-static Pressure Volume Curve
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quasi-static, pulmonary pressure-volume (P-V) curves over an inflation-deflation cycle are analyzed using a respiratory system model (RSM), which had been developed for quantitative characterization of the mechanical behavior of the total respiratory system. Optimum mechanical ventilation setting of Positive End Expiratory Pressure (PEEP) for total alveolar recruitment is quantified based on the existing P-V curves of healthy and injured animal models. Our analytical predictions may contribute to the optimization of mechanical ventilation settings for the Acute Respiratory Distress Syndrome (ARDS) patients.
[ { "created": "Sat, 11 Aug 2018 02:05:27 GMT", "version": "v1" } ]
2018-08-16
[ [ "Nabian", "Mohsen", "" ], [ "Narusawa", "Uichiro", "" ] ]
Quasi-static, pulmonary pressure-volume (P-V) curves over an inflation-deflation cycle are analyzed using a respiratory system model (RSM), which had been developed for quantitative characterization of the mechanical behavior of the total respiratory system. Optimum mechanical ventilation setting of Positive End Expiratory Pressure (PEEP) for total alveolar recruitment is quantified based on the existing P-V curves of healthy and injured animal models. Our analytical predictions may contribute to the optimization of mechanical ventilation settings for the Acute Respiratory Distress Syndrome (ARDS) patients.
1210.2140
Sheng-Yong Xu
Jiongwei Xue and Shengyong Xu
Natural electromagnetic waveguide structures based on myelin sheath in the neural system
24 pages, 7 figures
null
null
null
q-bio.NC cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The saltatory propagation of action potentials on myelinated axons is conventionally explained by a mechanism employing local-circuit ionic current flows between nodes of Ranvier. Under this framework, the myelin sheath, with up to 100 layers of membrane, serves only as an insulating shell. The speed of action potentials is measured to be as fast as 100 m/s on myelinated axons, yet ions move in fluids at just 100 nm/s in a 1 V/m electric field. We show here that action potentials, in the form of electromagnetic (EM) pulses, can propagate in natural EM waveguide structures formed by the myelin sheath immersed in fluids. The propagation time is mainly spent on triggering EM pulses at nodes of Ranvier. The result clearly reveals the evolution of axons from the unmyelinated to the myelinated, which has remarkably enhanced propagation efficiency by increasing the thickness of the myelin sheath.
[ { "created": "Mon, 8 Oct 2012 03:41:45 GMT", "version": "v1" } ]
2012-10-09
[ [ "Xue", "Jiongwei", "" ], [ "Xu", "Shengyong", "" ] ]
The saltatory propagation of action potentials on myelinated axons is conventionally explained by a mechanism employing local-circuit ionic current flows between nodes of Ranvier. Under this framework, the myelin sheath, with up to 100 layers of membrane, serves only as an insulating shell. The speed of action potentials is measured to be as fast as 100 m/s on myelinated axons, yet ions move in fluids at just 100 nm/s in a 1 V/m electric field. We show here that action potentials, in the form of electromagnetic (EM) pulses, can propagate in natural EM waveguide structures formed by the myelin sheath immersed in fluids. The propagation time is mainly spent on triggering EM pulses at nodes of Ranvier. The result clearly reveals the evolution of axons from the unmyelinated to the myelinated, which has remarkably enhanced propagation efficiency by increasing the thickness of the myelin sheath.
1701.01355
Catherine Matias
Vincent Miele, Catherine Matias (LPMA)
Revealing the hidden structure of dynamic ecological networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent technological advances and long-term data studies provide interaction data that can be modelled through dynamic networks, i.e., a sequence of different snapshots of an evolving ecological network. Most often time is the parameter along which these networks evolve, but any other one-dimensional gradient (temperature, altitude, depth, humidity, ...) could be considered. Here we propose a statistical tool to analyse the underlying structure of these networks and follow its evolution dynamics (either in time or any other one-dimensional factor). It consists in extracting the main features of these networks and summarising them into a high-level view. We analyse a dynamic animal contact network and a seasonal food web, and in both cases we show that our approach allows for the identification of a backbone organisation as well as interesting temporal variations at the individual level. Our method, implemented in the R package dynsbm, can handle the largest ecological datasets and is a versatile and promising tool for ecologists who study dynamic interactions.
[ { "created": "Thu, 5 Jan 2017 15:34:48 GMT", "version": "v1" } ]
2017-01-06
[ [ "Miele", "Vincent", "", "LPMA" ], [ "Matias", "Catherine", "", "LPMA" ] ]
Recent technological advances and long-term data studies provide interaction data that can be modelled through dynamic networks, i.e., a sequence of different snapshots of an evolving ecological network. Most often time is the parameter along which these networks evolve, but any other one-dimensional gradient (temperature, altitude, depth, humidity, ...) could be considered. Here we propose a statistical tool to analyse the underlying structure of these networks and follow its evolution dynamics (either in time or any other one-dimensional factor). It consists in extracting the main features of these networks and summarising them into a high-level view. We analyse a dynamic animal contact network and a seasonal food web, and in both cases we show that our approach allows for the identification of a backbone organisation as well as interesting temporal variations at the individual level. Our method, implemented in the R package dynsbm, can handle the largest ecological datasets and is a versatile and promising tool for ecologists who study dynamic interactions.
1611.00259
Fevzi Buyukkilic
F.Buyukk{\i}l{\i}\c{c}, Z.Ok Bayrakdar, D.Demirhan
A study on the evolution of a community population by cumulative and fractional calculus approaches
21 pages, 4 figures, research manuscript, biodynamics, cumulative approaches, fractional calculus
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, in our globalized world, local and inter-country movements of population have increased. This situation makes it important for host countries to make accurate predictions of the future population of their native as well as immigrant people. Knowledge of the attained accumulated population is necessary for future planning concerning education, health, jobs, housing, safety requirements, etc. In this work, historically well-known formulas of the population dynamics of a community are revisited in the framework of compound growth and fractional calculus to obtain more realistic relations. Within this context, for a time t, the population evolution of a society with two different components is calculated. Concomitant relations are developed to compare the native population with the immigrant population that comes into existence when a colonial population joins at each time interval. Eventually, the case where the native population becomes equal to the immigrant population at time t is discussed. Moreover, the differential equation of a probability function for the equilibrium state of the accumulated population is obtained. It is seen that the equilibrium solution can be written as a series in terms of Bernoulli numbers. In the calculation of the population dynamics, the Mittag-Leffler (M-L) function replaces the exponential function.
[ { "created": "Mon, 31 Oct 2016 14:41:03 GMT", "version": "v1" } ]
2016-11-02
[ [ "Buyukkılıç", "F.", "" ], [ "Bayrakdar", "Z. Ok", "" ], [ "Demirhan", "D.", "" ] ]
Nowadays, in our globalized world, local and inter-country movements of population have increased. This situation makes it important for host countries to make accurate predictions of the future population of their native as well as immigrant people. Knowledge of the attained accumulated population is necessary for future planning concerning education, health, jobs, housing, safety requirements, etc. In this work, historically well-known formulas of the population dynamics of a community are revisited in the framework of compound growth and fractional calculus to obtain more realistic relations. Within this context, for a time t, the population evolution of a society with two different components is calculated. Concomitant relations are developed to compare the native population with the immigrant population that comes into existence when a colonial population joins at each time interval. Eventually, the case where the native population becomes equal to the immigrant population at time t is discussed. Moreover, the differential equation of a probability function for the equilibrium state of the accumulated population is obtained. It is seen that the equilibrium solution can be written as a series in terms of Bernoulli numbers. In the calculation of the population dynamics, the Mittag-Leffler (M-L) function replaces the exponential function.
1810.01747
Gard Spreemann
Jean-Baptiste Bardin, Gard Spreemann, Kathryn Hess
Topological exploration of artificial neuronal network dynamics
null
null
10.1162/netn_a_00080
null
q-bio.NC math.AT
http://creativecommons.org/licenses/by/4.0/
One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
[ { "created": "Wed, 3 Oct 2018 14:11:24 GMT", "version": "v1" }, { "created": "Mon, 28 Jan 2019 10:28:30 GMT", "version": "v2" } ]
2019-02-08
[ [ "Bardin", "Jean-Baptiste", "" ], [ "Spreemann", "Gard", "" ], [ "Hess", "Kathryn", "" ] ]
One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
q-bio/0610047
Michele Caselle
A. Re, D. Cora, A. M. Puliti, M. Caselle and I. Sbrana
Correlated fragile site expression allows the identification of candidate fragile genes involved in immunity and associated with carcinogenesis
18 pages, accepted for publication in BMC Bioinformatics
BMC Bioinformatics 2006, 7:413
null
DFTT 23/2006
q-bio.GN
null
Common fragile sites (CFS) are specific regions in the human genome that are particularly prone to genomic instability under conditions of replicative stress. Several investigations support the view that common fragile sites play a role in carcinogenesis. We discuss a genome-wide approach based on graph theory and the Gene Ontology vocabulary for the functional characterization of common fragile sites and for the identification of genes that contribute to tumour cell biology. CFS were assembled in a network based on a simple measure of correlation among common fragile site patterns of expression. By applying robust measurements to capture in quantitative terms the non-triviality of the network, we identified several topological features clearly indicating departure from the Erdos-Renyi random graph model. The most important outcome was the presence of an unexpectedly large connected component far below the percolation threshold. Most of the best characterized common fragile sites belonged to this connected component. By filtering this connected component with Gene Ontology, statistically significant shared functional features were detected. Common fragile sites were found to be enriched for genes associated with the immune response and with mechanisms involved in tumour progression such as extracellular space remodeling and angiogenesis. Our results support the hypothesis that fragile sites serve a function; we propose that fragility is linked to a coordinated regulation of fragile gene expression.
[ { "created": "Wed, 25 Oct 2006 10:08:50 GMT", "version": "v1" } ]
2007-05-23
[ [ "Re", "A.", "" ], [ "Cora", "D.", "" ], [ "Puliti", "A. M.", "" ], [ "Caselle", "M.", "" ], [ "Sbrana", "I.", "" ] ]
Common fragile sites (CFS) are specific regions in the human genome that are particularly prone to genomic instability under conditions of replicative stress. Several investigations support the view that common fragile sites play a role in carcinogenesis. We discuss a genome-wide approach based on graph theory and the Gene Ontology vocabulary for the functional characterization of common fragile sites and for the identification of genes that contribute to tumour cell biology. CFS were assembled in a network based on a simple measure of correlation among common fragile site patterns of expression. By applying robust measurements to capture in quantitative terms the non-triviality of the network, we identified several topological features clearly indicating departure from the Erdos-Renyi random graph model. The most important outcome was the presence of an unexpectedly large connected component far below the percolation threshold. Most of the best characterized common fragile sites belonged to this connected component. By filtering this connected component with Gene Ontology, statistically significant shared functional features were detected. Common fragile sites were found to be enriched for genes associated with the immune response and with mechanisms involved in tumour progression such as extracellular space remodeling and angiogenesis. Our results support the hypothesis that fragile sites serve a function; we propose that fragility is linked to a coordinated regulation of fragile gene expression.
1909.10007
Tilo Schwalger
Tilo Schwalger and Anton V. Chizhov
Mind the Last Spike -- Firing Rate Models for Mesoscopic Populations of Spiking Neurons
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dominant modeling framework for understanding cortical computations is that of heuristic firing rate models. Despite their success, these models fall short of capturing spike synchronization effects, linking to biophysical parameters, and describing finite-size fluctuations. In this opinion article, we propose that the refractory density method (RDM), also known as age-structured population dynamics or quasi-renewal theory, yields a powerful theoretical framework to build rate-based models for mesoscopic neural populations from realistic neuron dynamics at the microscopic level. We review recent advances achieved by the RDM to obtain efficient population density equations for networks of generalized integrate-and-fire (GIF) neurons -- a class of neuron models that has been successfully fitted to various cell types. The theory not only predicts the nonstationary dynamics of large populations of neurons but also permits an extension to finite-size populations and a systematic reduction to low-dimensional rate dynamics. The new types of rate models will allow a re-examination of models of cortical computations under biological constraints.
[ { "created": "Sun, 22 Sep 2019 13:57:44 GMT", "version": "v1" } ]
2019-09-24
[ [ "Schwalger", "Tilo", "" ], [ "Chizhov", "Anton V.", "" ] ]
The dominant modeling framework for understanding cortical computations is that of heuristic firing rate models. Despite their success, these models fall short of capturing spike synchronization effects, linking to biophysical parameters, and describing finite-size fluctuations. In this opinion article, we propose that the refractory density method (RDM), also known as age-structured population dynamics or quasi-renewal theory, yields a powerful theoretical framework to build rate-based models for mesoscopic neural populations from realistic neuron dynamics at the microscopic level. We review recent advances achieved by the RDM to obtain efficient population density equations for networks of generalized integrate-and-fire (GIF) neurons -- a class of neuron models that has been successfully fitted to various cell types. The theory not only predicts the nonstationary dynamics of large populations of neurons but also permits an extension to finite-size populations and a systematic reduction to low-dimensional rate dynamics. The new types of rate models will allow a re-examination of models of cortical computations under biological constraints.
1303.3520
Tobias Marschall
Tobias Marschall and Alexander Sch\"onhuth
Sensitive Long-Indel-Aware Alignment of Sequencing Reads
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tremendous advances in high-throughput sequencing technologies have made population-scale sequencing, as performed in the 1000 Genomes project and the Genome of the Netherlands project, possible. Next-generation sequencing has allowed genome-wide discovery of variations beyond single-nucleotide polymorphisms (SNPs), in particular of structural variations (SVs) such as deletions, insertions, duplications, translocations, inversions, and even more complex rearrangements. Here, we design a read aligner with special emphasis on the following properties: (1) high sensitivity, i.e. finding all (reasonable) alignments; (2) the ability to find (long) indels; (3) statistically sound alignment scores; and (4) runtime fast enough to be applied to whole-genome data. We compare performance to BWA, bowtie2, and stampy, and find that our method is especially advantageous on reads containing larger indels.
[ { "created": "Thu, 14 Mar 2013 17:31:13 GMT", "version": "v1" } ]
2013-03-15
[ [ "Marschall", "Tobias", "" ], [ "Schönhuth", "Alexander", "" ] ]
The tremendous advances in high-throughput sequencing technologies have made population-scale sequencing, as performed in the 1000 Genomes project and the Genome of the Netherlands project, possible. Next-generation sequencing has allowed genome-wide discovery of variations beyond single-nucleotide polymorphisms (SNPs), in particular of structural variations (SVs) such as deletions, insertions, duplications, translocations, inversions, and even more complex rearrangements. Here, we design a read aligner with special emphasis on the following properties: (1) high sensitivity, i.e. finding all (reasonable) alignments; (2) the ability to find (long) indels; (3) statistically sound alignment scores; and (4) runtime fast enough to be applied to whole-genome data. We compare performance to BWA, bowtie2, and stampy, and find that our method is especially advantageous on reads containing larger indels.
2112.11681
Swadesh Pal
Swadesh Pal and Roderick Melnik
Nonlocal Models in the Analysis of Brain Neurodegenerative Protein Dynamics with Application to Alzheimer's Disease
15 pages, 9 figures, 4 tables
null
null
null
q-bio.NC math.DS
http://creativecommons.org/licenses/by-nc-nd/4.0/
It is well known that today nearly one in six of the world's population has to deal with neurodegenerative disorders. While a number of medical devices have been developed for the detection, prevention, and treatment of such disorders, some fundamentals of the progression of the associated diseases are in urgent need of further clarification. In this paper, we focus on Alzheimer's disease, where it is believed that concentration changes in amyloid-beta and tau proteins play a central role in its onset and development. A multiscale model is proposed to analyze the propagation of these concentrations in the brain connectome. In particular, we consider a modified heterodimer model for the protein-protein interactions. Higher toxic concentrations of amyloid-beta and tau proteins destroy brain cells. We have studied these propagations for primary, secondary, and mixed tauopathies. We model the damage to a brain cell by the nonlocal contributions of the toxic loads present in the brain cells. With the help of rigorous analysis, we check the stability behaviour of the stationary points corresponding to the homogeneous system. After integrating the brain connectome data into the developed model, we see that the spreading patterns of the toxic concentrations are the same for the whole brain, but their concentrations differ across regions. Also, the time to propagate the damage differs in each region of the brain connectome.
[ { "created": "Wed, 22 Dec 2021 06:11:11 GMT", "version": "v1" } ]
2021-12-23
[ [ "Pal", "Swadesh", "" ], [ "Melnik", "Roderick", "" ] ]
It is well known that today nearly one in six of the world's population has to deal with neurodegenerative disorders. While a number of medical devices have been developed for the detection, prevention, and treatment of such disorders, some fundamentals of the progression of the associated diseases are in urgent need of further clarification. In this paper, we focus on Alzheimer's disease, where it is believed that concentration changes in amyloid-beta and tau proteins play a central role in its onset and development. A multiscale model is proposed to analyze the propagation of these concentrations in the brain connectome. In particular, we consider a modified heterodimer model for the protein-protein interactions. Higher toxic concentrations of amyloid-beta and tau proteins destroy brain cells. We have studied these propagations for primary, secondary, and mixed tauopathies. We model the damage to a brain cell by the nonlocal contributions of the toxic loads present in the brain cells. With the help of rigorous analysis, we check the stability behaviour of the stationary points corresponding to the homogeneous system. After integrating the brain connectome data into the developed model, we see that the spreading patterns of the toxic concentrations are the same for the whole brain, but their concentrations differ across regions. Also, the time to propagate the damage differs in each region of the brain connectome.
2306.00236
Yuzhu Chen
Yuzhu Chen, David Saintillan, and Padmini Rangamani
Cell motility modes are selected by the interplay of mechanosensitive adhesion and membrane tension
21 pages, 8 figures
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
The initiation of directional cell motion requires symmetry breaking that can happen with or without external stimuli. During cell crawling, forces generated by the cytoskeleton and their transmission through mechanosensitive adhesions to the extracellular substrate play a crucial role. In a recently proposed 1D model (Sens, PNAS 2020), a mechanical feedback loop between force-sensitive adhesions and cell tension was shown to be sufficient to explain spontaneous symmetry breaking and multiple motility patterns through stick-slip dynamics, without the need to account for signaling networks or active polar gels. We extended this model to 2D to study the interplay between cell shape and mechanics during crawling. Through a local force balance along a deformable boundary, we show that the membrane tension coupled with shape change can regulate the spatiotemporal evolution of the stochastic binding of mechanosensitive adhesions. Linear stability analysis identified the unstable parameter regimes where spontaneous symmetry breaking can take place. Using simulations to solve the fully coupled nonlinear system of equations, we show that, starting from a randomly perturbed circular shape, this instability can lead to keratocyte-like shapes. Simulations predict that different adhesion kinetics and membrane tension can result in different cell motility modes including gliding, zigzag, rotating, and sometimes chaotic movements. Thus, using a minimal model of cell motility, we identify that the interplay between adhesions and tension can select emergent motility modes.
[ { "created": "Wed, 31 May 2023 23:16:26 GMT", "version": "v1" } ]
2023-06-02
[ [ "Chen", "Yuzhu", "" ], [ "Saintillan", "David", "" ], [ "Rangamani", "Padmini", "" ] ]
The initiation of directional cell motion requires symmetry breaking that can happen with or without external stimuli. During cell crawling, forces generated by the cytoskeleton and their transmission through mechanosensitive adhesions to the extracellular substrate play a crucial role. In a recently proposed 1D model (Sens, PNAS 2020), a mechanical feedback loop between force-sensitive adhesions and cell tension was shown to be sufficient to explain spontaneous symmetry breaking and multiple motility patterns through stick-slip dynamics, without the need to account for signaling networks or active polar gels. We extended this model to 2D to study the interplay between cell shape and mechanics during crawling. Through a local force balance along a deformable boundary, we show that the membrane tension coupled with shape change can regulate the spatiotemporal evolution of the stochastic binding of mechanosensitive adhesions. Linear stability analysis identified the unstable parameter regimes where spontaneous symmetry breaking can take place. Using simulations to solve the fully coupled nonlinear system of equations, we show that, starting from a randomly perturbed circular shape, this instability can lead to keratocyte-like shapes. Simulations predict that different adhesion kinetics and membrane tension can result in different cell motility modes including gliding, zigzag, rotating, and sometimes chaotic movements. Thus, using a minimal model of cell motility, we identify that the interplay between adhesions and tension can select emergent motility modes.
0807.2893
Joshua Milstein
J.N. Milstein, F. Mormann, I. Fried, C. Koch
Neuronal Shot Noise and Brownian $1/f^2$ Behavior in the Local Field Potential
null
null
10.1371/journal.pone.0004338
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate that human electrophysiological recordings of the local field potential (LFP) from intracranial electrodes, acquired from a variety of cerebral regions, show a ubiquitous $1/f^2$ scaling within the power spectrum. We develop a quantitative model that treats the generation of these fields in an analogous way to that of electronic shot noise, and use this model to specifically address the cause of this $1/f^2$ Brownian noise. The model admits two analytically tractable solutions, both displaying Brownian noise: 1) uncorrelated cells that display sharp initial activity, whose extracellular fields slowly decay and 2) rapidly firing, temporally correlated cells that generate UP-DOWN states.
[ { "created": "Fri, 18 Jul 2008 01:45:45 GMT", "version": "v1" } ]
2015-05-13
[ [ "Milstein", "J. N.", "" ], [ "Mormann", "F.", "" ], [ "Fried", "I.", "" ], [ "Koch", "C.", "" ] ]
We demonstrate that human electrophysiological recordings of the local field potential (LFP) from intracranial electrodes, acquired from a variety of cerebral regions, show a ubiquitous $1/f^2$ scaling within the power spectrum. We develop a quantitative model that treats the generation of these fields in an analogous way to that of electronic shot noise, and use this model to specifically address the cause of this $1/f^2$ Brownian noise. The model yields two analytically tractable solutions, both displaying Brownian noise: 1) uncorrelated cells that display sharp initial activity, whose extracellular fields slowly decay, and 2) rapidly firing, temporally correlated cells that generate UP-DOWN states.
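The $1/f^2$ (Brownian) scaling described in this record can be checked numerically: integrating white noise yields Brownian noise, whose periodogram falls off with a log-log slope near $-2$. The sketch below is an illustrative NumPy check (the signal length and fitting band are assumptions, not taken from the paper):

```python
import numpy as np

# Brownian noise = cumulative sum of white noise; its power spectrum
# falls off as 1/f^2, i.e. a log-log slope near -2.
rng = np.random.default_rng(1)
n = 2 ** 16
brownian = np.cumsum(rng.standard_normal(n))

spectrum = np.abs(np.fft.rfft(brownian)) ** 2
freqs = np.fft.rfftfreq(n)  # sample spacing d=1, so f in [0, 0.5]

# Fit the slope over an intermediate band, avoiding the DC bin
# and the Nyquist edge where the discrete spectrum deviates.
band = (freqs > 1e-3) & (freqs < 1e-1)
slope, _ = np.polyfit(np.log(freqs[band]), np.log(spectrum[band]), 1)
print(round(slope, 2))  # close to -2 for Brownian noise
```

Averaging the fit over thousands of frequency bins keeps the periodogram's point-wise scatter from biasing the estimated exponent.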
1410.5377
Sebastian Maerkl
Francesca Volpetti, Jose Garcia-Cordero, Sebastian Maerkl
A microfluidic platform for high-throughput multiplexed protein quantitation
null
null
10.1371/journal.pone.0117744
null
q-bio.QM physics.bio-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a high-throughput microfluidic platform capable of quantitating up to 384 biomarkers in 4 distinct samples by immunoassay. The microfluidic device contains 384 unit cells, which can be individually programmed with pairs of capture and detection antibody. Samples are quantitated in each unit cell by four independent MITOMI detection areas, allowing four samples to be analyzed in parallel for a total of 1,536 assays per device. We show that the device can be pre-assembled and stored for weeks at elevated temperature and we performed proof-of-concept experiments simultaneously quantitating IL-6, IL-1$\beta$, TNF-$\alpha$, PSA, and GFP. Finally, we show that the platform can be used to identify functional antibody combinations by screening 64 antibody combinations requiring up to 384 unique assays per device.
[ { "created": "Mon, 20 Oct 2014 18:06:05 GMT", "version": "v1" } ]
2015-06-23
[ [ "Volpetti", "Francesca", "" ], [ "Garcia-Cordero", "Jose", "" ], [ "Maerkl", "Sebastian", "" ] ]
We present a high-throughput microfluidic platform capable of quantitating up to 384 biomarkers in 4 distinct samples by immunoassay. The microfluidic device contains 384 unit cells, which can be individually programmed with pairs of capture and detection antibody. Samples are quantitated in each unit cell by four independent MITOMI detection areas, allowing four samples to be analyzed in parallel for a total of 1,536 assays per device. We show that the device can be pre-assembled and stored for weeks at elevated temperature and we performed proof-of-concept experiments simultaneously quantitating IL-6, IL-1$\beta$, TNF-$\alpha$, PSA, and GFP. Finally, we show that the platform can be used to identify functional antibody combinations by screening 64 antibody combinations requiring up to 384 unique assays per device.
1812.06594
Sebastian Mathias Keller
Sebastian Mathias Keller, Maxim Samarin, Antonia Meyer, Vitalii Kosak (Cozak), Ute Gschwandtner, Peter Fuhr, Volker Roth
Computational EEG in Personalized Medicine: A study in Parkinson's Disease
Machine Learning for Health (ML4H) Workshop at NeurIPS 2018 arXiv:1811.07216
null
null
null
q-bio.NC cs.LG eess.SP q-bio.QM stat.ML
http://creativecommons.org/licenses/by/4.0/
Recordings of electrical brain activity carry information about a person's cognitive health. For recording EEG signals, a very common setting is for a subject to be at rest with their eyes closed. Analysis of these recordings often involves a dimensionality reduction step in which electrodes are grouped into 10 or more regions (depending on the number of electrodes available). Then an average over each group is taken, which serves as a feature in subsequent evaluation. Currently, the most prominent features used in clinical practice are based on spectral power densities. In our work we consider a simplified grouping of electrodes into two regions only. In addition to spectral features we introduce a secondary, non-redundant view on brain activity through the lens of Tsallis Entropy $S_{q=2}$. We further take EEG measurements not only in an eyes closed (ec) but also in an eyes open (eo) state. For our cohort of healthy controls (HC) and individuals suffering from Parkinson's disease (PD), the question we are asking is the following: How well can one discriminate between HC and PD within this simplified, binary grouping? This question is motivated by the commercial availability of inexpensive and easy-to-use portable EEG devices. If enough information is retained in this binary grouping, then such simple devices could potentially be used as personal monitoring tools, as standard screening tools by general practitioners, or as digital biomarkers for easy long-term monitoring during neurological studies.
[ { "created": "Sun, 2 Dec 2018 15:15:44 GMT", "version": "v1" } ]
2018-12-18
[ [ "Keller", "Sebastian Mathias", "" ], [ "Samarin", "Maxim", "" ], [ "Meyer", "Antonia", "" ], [ "Kosak", "Vitalii", "", "Cozak" ], [ "Gschwandtner", "Ute", "" ], [ "Fuhr", "Peter", "" ], [ "Roth", "Volker", "" ] ]
Recordings of electrical brain activity carry information about a person's cognitive health. For recording EEG signals, a very common setting is for a subject to be at rest with their eyes closed. Analysis of these recordings often involves a dimensionality reduction step in which electrodes are grouped into 10 or more regions (depending on the number of electrodes available). Then an average over each group is taken, which serves as a feature in subsequent evaluation. Currently, the most prominent features used in clinical practice are based on spectral power densities. In our work we consider a simplified grouping of electrodes into two regions only. In addition to spectral features we introduce a secondary, non-redundant view on brain activity through the lens of Tsallis Entropy $S_{q=2}$. We further take EEG measurements not only in an eyes closed (ec) but also in an eyes open (eo) state. For our cohort of healthy controls (HC) and individuals suffering from Parkinson's disease (PD), the question we are asking is the following: How well can one discriminate between HC and PD within this simplified, binary grouping? This question is motivated by the commercial availability of inexpensive and easy-to-use portable EEG devices. If enough information is retained in this binary grouping, then such simple devices could potentially be used as personal monitoring tools, as standard screening tools by general practitioners, or as digital biomarkers for easy long-term monitoring during neurological studies.
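The Tsallis entropy $S_{q=2}$ used as a feature in this record reduces to $1 - \sum_i p_i^2$. A minimal sketch of computing it from a signal's amplitude histogram (the binning scheme and test signals here are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def tsallis_entropy_q2(signal, bins=16):
    """Tsallis entropy S_{q=2} = 1 - sum(p_i^2), estimated from the
    signal's amplitude histogram (bin count is an assumption)."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

rng = np.random.default_rng(0)
uniform_like = rng.uniform(size=10_000)   # broad amplitude distribution
constant_like = np.zeros(10_000)          # degenerate distribution

print(tsallis_entropy_q2(uniform_like))   # high: mass spread across bins
print(tsallis_entropy_q2(constant_like))  # 0.0: all mass in one bin
```

Like the Shannon entropy, $S_{q=2}$ is zero for a degenerate distribution and grows as probability mass spreads out, but it weights rare events differently, which is what makes it a non-redundant companion to spectral features.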
1612.02108
Stuart Sevier
Stuart A. Sevier, Herbert Levine
Mechanical Properties of Transcription
null
Phys. Rev. Lett. 118, 268101 (2017)
10.1103/PhysRevLett.118.268101
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently the physical characterization of a number of biological processes has proven indispensable for a full understanding of natural phenomena. One such example is the mechanical properties of transcription, which have been shown to have significant effects in gene expression. In this letter we introduce a simple description of the basic physical elements of transcription where RNA elongation, RNA polymerase rotation and DNA supercoiling are coupled. The resulting framework describes the relative amount of RNA polymerase rotation and DNA supercoiling that occurs during RNA elongation. Asymptotic behavior is derived and can be used to experimentally extract unknown mechanical parameters of transcription. Incorporation of mechanical limits to RNA polymerase is accomplished yielding an equation of motion for DNA supercoiling and RNA elongation with transcriptional stalling. Important implications for gene expression, chromatin structure and genome organization are discussed.
[ { "created": "Wed, 7 Dec 2016 04:06:59 GMT", "version": "v1" }, { "created": "Thu, 9 Feb 2017 16:40:45 GMT", "version": "v2" } ]
2017-07-05
[ [ "Sevier", "Stuart A.", "" ], [ "Levine", "Herbert", "" ] ]
Recently the physical characterization of a number of biological processes has proven indispensable for a full understanding of natural phenomena. One such example is the mechanical properties of transcription, which have been shown to have significant effects in gene expression. In this letter we introduce a simple description of the basic physical elements of transcription where RNA elongation, RNA polymerase rotation and DNA supercoiling are coupled. The resulting framework describes the relative amount of RNA polymerase rotation and DNA supercoiling that occurs during RNA elongation. Asymptotic behavior is derived and can be used to experimentally extract unknown mechanical parameters of transcription. Incorporation of mechanical limits to RNA polymerase is accomplished yielding an equation of motion for DNA supercoiling and RNA elongation with transcriptional stalling. Important implications for gene expression, chromatin structure and genome organization are discussed.
2111.03456
Younes Bouhadjar
Younes Bouhadjar, Dirk J. Wouters, Markus Diesmann, Tom Tetzlaff
Sequence learning, prediction, and replay in networks of spiking neurons
35 pages, 18 figures, 3 tables, 1 video
null
10.1371/journal.pcbi.1010233
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on electrophysiological and behavioral data.
[ { "created": "Fri, 5 Nov 2021 12:32:17 GMT", "version": "v1" }, { "created": "Tue, 19 Jul 2022 18:38:35 GMT", "version": "v2" } ]
2022-07-21
[ [ "Bouhadjar", "Younes", "" ], [ "Wouters", "Dirk J.", "" ], [ "Diesmann", "Markus", "" ], [ "Tetzlaff", "Tom", "" ] ]
Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on electrophysiological and behavioral data.
2212.00023
Tianyu Wu
Tianyu Wu, Yang Tang
Random Copolymer inverse design system orienting on Accurate discovering of Antimicrobial peptide-mimetic copolymers
We have decided to make substantial modifications to this paper, and we would like to add further support for the final generated results to demonstrate their soundness. We therefore wish to withdraw this work temporarily and will release the final version in the near future. Thank you
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antimicrobial resistance is one of the biggest health problems, especially in the current period of the COVID-19 pandemic. Owing to their unique membrane-destruction bactericidal mechanism, antimicrobial peptide-mimetic copolymers are attracting increasing attention, and it is urgent to find more potential candidates with broad-spectrum antibacterial efficacy and low toxicity. Artificial intelligence has shown significant performance on small-molecule or biotech drugs; however, the higher dimensionality of the polymer space and the limited experimental data restrict the application of existing methods to copolymer design. Herein, we develop a universal random copolymer inverse design system via multi-modal copolymer representation learning, knowledge distillation and reinforcement learning. Our system realizes high-precision antimicrobial activity prediction with few-shot data by extracting various chemical information from multi-modal copolymer representations. By pre-training a scaffold-decorator generative model via knowledge distillation, the copolymer space is greatly contracted to the vicinity of existing data for exploration. Thus, our reinforcement learning algorithm can adapt to customized generation on specific scaffolds and to requirements on properties or structures. We apply our system to collected antimicrobial peptide-mimetic copolymer data, and we discover candidate copolymers with desired properties.
[ { "created": "Wed, 30 Nov 2022 14:29:50 GMT", "version": "v1" }, { "created": "Thu, 8 Dec 2022 01:52:20 GMT", "version": "v2" } ]
2022-12-09
[ [ "Wu", "Tianyu", "" ], [ "Tang", "Yang", "" ] ]
Antimicrobial resistance is one of the biggest health problems, especially in the current period of the COVID-19 pandemic. Owing to their unique membrane-destruction bactericidal mechanism, antimicrobial peptide-mimetic copolymers are attracting increasing attention, and it is urgent to find more potential candidates with broad-spectrum antibacterial efficacy and low toxicity. Artificial intelligence has shown significant performance on small-molecule or biotech drugs; however, the higher dimensionality of the polymer space and the limited experimental data restrict the application of existing methods to copolymer design. Herein, we develop a universal random copolymer inverse design system via multi-modal copolymer representation learning, knowledge distillation and reinforcement learning. Our system realizes high-precision antimicrobial activity prediction with few-shot data by extracting various chemical information from multi-modal copolymer representations. By pre-training a scaffold-decorator generative model via knowledge distillation, the copolymer space is greatly contracted to the vicinity of existing data for exploration. Thus, our reinforcement learning algorithm can adapt to customized generation on specific scaffolds and to requirements on properties or structures. We apply our system to collected antimicrobial peptide-mimetic copolymer data, and we discover candidate copolymers with desired properties.
1808.01058
Jyothi Swaroop Guntupalli
Dileep George, Alexander Lavin, J. Swaroop Guntupalli, David Mely, Nick Hay, Miguel Lazaro-Gredilla
Cortical Microcircuits from a Generative Vision Model
null
null
null
null
q-bio.NC cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Understanding the information processing roles of cortical circuits is an outstanding problem in neuroscience and artificial intelligence. The theoretical setting of Bayesian inference has been suggested as a framework for understanding cortical computation. Based on a recently published generative model for visual inference (George et al., 2017), we derive a family of anatomically instantiated and functional cortical circuit models. In contrast to simplistic models of Bayesian inference, the underlying generative model's representational choices are validated with real-world tasks that required efficient inference and strong generalization. The cortical circuit model is derived by systematically comparing the computational requirements of this model with known anatomical constraints. The derived model suggests precise functional roles for the feedforward, feedback and lateral connections observed in different laminae and columns, and assigns a computational role for the path through the thalamus.
[ { "created": "Fri, 3 Aug 2018 01:20:08 GMT", "version": "v1" } ]
2018-08-06
[ [ "George", "Dileep", "" ], [ "Lavin", "Alexander", "" ], [ "Guntupalli", "J. Swaroop", "" ], [ "Mely", "David", "" ], [ "Hay", "Nick", "" ], [ "Lazaro-Gredilla", "Miguel", "" ] ]
Understanding the information processing roles of cortical circuits is an outstanding problem in neuroscience and artificial intelligence. The theoretical setting of Bayesian inference has been suggested as a framework for understanding cortical computation. Based on a recently published generative model for visual inference (George et al., 2017), we derive a family of anatomically instantiated and functional cortical circuit models. In contrast to simplistic models of Bayesian inference, the underlying generative model's representational choices are validated with real-world tasks that required efficient inference and strong generalization. The cortical circuit model is derived by systematically comparing the computational requirements of this model with known anatomical constraints. The derived model suggests precise functional roles for the feedforward, feedback and lateral connections observed in different laminae and columns, and assigns a computational role for the path through the thalamus.
q-bio/0404039
Walton Gutierrez
Walton R. Gutierrez
Site Model of Allometric Scaling and Fractal Distribution Networks of Organs
9 pages, 1 fig., 3 tables
null
null
null
q-bio.TO q-bio.QM
null
At the basic geometric level, the distribution networks carrying vital materials in living organisms, and the units such as the nephrons and alveoli, form a scaling structure named here the site model. This unified view of the allometry of the kidney and lung of mammals is in agreement with the existing data, and in some cases, improves the predictions of the previous fractal model of the lung. Allometric scaling of surfaces and volumes of relevant organ parts are derived from a few simple propositions about the collective characteristics of the sites. New hypotheses and predictions are formulated, some verified by the data, while others remain to be tested, such as the scaling of the number of capillaries and the site model description of other organs.
[ { "created": "Tue, 27 Apr 2004 17:17:26 GMT", "version": "v1" } ]
2007-05-23
[ [ "Gutierrez", "Walton R.", "" ] ]
At the basic geometric level, the distribution networks carrying vital materials in living organisms, and the units such as the nephrons and alveoli, form a scaling structure named here the site model. This unified view of the allometry of the kidney and lung of mammals is in agreement with the existing data, and in some cases, improves the predictions of the previous fractal model of the lung. Allometric scaling of surfaces and volumes of relevant organ parts are derived from a few simple propositions about the collective characteristics of the sites. New hypotheses and predictions are formulated, some verified by the data, while others remain to be tested, such as the scaling of the number of capillaries and the site model description of other organs.
q-bio/0405004
Chris Adami
Christoph Adami (KGI, Caltech)
Information theory in molecular biology
29 pages, 7 figs
Physics of Life Reviews 1 (2004) 3-22
10.1016/j.plrev.2004.01.002
null
q-bio.BM q-bio.PE
null
This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon's theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design.
[ { "created": "Wed, 5 May 2004 20:52:48 GMT", "version": "v1" } ]
2007-05-23
[ [ "Adami", "Christoph", "", "KGI, Caltech" ] ]
This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon's theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design.
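The ensemble view of sequence information described in this record is commonly made concrete as per-site Shannon entropy over an alignment of symbolic sequences: conserved sites carry zero entropy, variable sites more. A minimal sketch with a toy alignment (the alignment itself is an illustrative assumption):

```python
import numpy as np

# Toy ensemble (alignment) of symbolic sequences; columns are sites.
alignment = [
    "ACGT",
    "ACGA",
    "ACCT",
    "ACGT",
]

def site_entropy(column):
    """Shannon entropy in bits of one alignment column."""
    _, counts = np.unique(list(column), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

columns = ["".join(seq[i] for seq in alignment) for i in range(4)]
entropies = [site_entropy(c) for c in columns]
print(entropies)  # fully conserved sites give 0 bits; variable sites more
```

In the framework reviewed here, the information a site carries is quantified relative to the maximal entropy of the alphabet, so low-entropy (conserved) columns are the informative ones.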
1805.09497
Kazuhiro Takemoto
Midori Iida, Kazuhiro Takemoto
A network biology-based approach to evaluating the effect of environmental contaminants on human interactome and diseases
35 pages, 12 figures
Ecotoxicology and Environmental Safety 160, 316-327 (2018)
10.1016/j.ecoenv.2018.05.065
null
q-bio.MN physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Environmental contaminant exposure can pose significant risks to human health. Therefore, evaluating the impact of this exposure is of great importance; however, it is often difficult because both the molecular mechanism of disease and the mode of action of the contaminants are complex. We used network biology techniques to quantitatively assess the impact of environmental contaminants on the human interactome and diseases with a particular focus on seven major contaminant categories: persistent organic pollutants (POPs), dioxins, polycyclic aromatic hydrocarbons (PAHs), pesticides, perfluorochemicals (PFCs), metals, and pharmaceutical and personal care products (PPCPs). We integrated publicly available data on toxicogenomics, the diseasome, protein-protein interactions (PPIs), and gene essentiality and found that a few contaminants were targeted to many genes, and a few genes were targeted by many contaminants. The contaminant targets were hub proteins in the human PPI network, whereas the target proteins in most categories did not contain abundant essential proteins. Generally, contaminant targets and disease-associated proteins were closely associated with the PPI network, and the closeness of the associations depended on the disease type and chemical category. Network biology techniques were used to identify environmental contaminants with broad effects on the human interactome and contaminant-sensitive biomarkers. Moreover, this method enabled us to quantify the relationship between environmental contaminants and human diseases, which was supported by epidemiological and experimental evidence. These methods and findings have facilitated the elucidation of the complex relationship between environmental exposure and adverse health outcomes.
[ { "created": "Thu, 24 May 2018 03:43:37 GMT", "version": "v1" } ]
2018-06-01
[ [ "Iida", "Midori", "" ], [ "Takemoto", "Kazuhiro", "" ] ]
Environmental contaminant exposure can pose significant risks to human health. Therefore, evaluating the impact of this exposure is of great importance; however, it is often difficult because both the molecular mechanism of disease and the mode of action of the contaminants are complex. We used network biology techniques to quantitatively assess the impact of environmental contaminants on the human interactome and diseases with a particular focus on seven major contaminant categories: persistent organic pollutants (POPs), dioxins, polycyclic aromatic hydrocarbons (PAHs), pesticides, perfluorochemicals (PFCs), metals, and pharmaceutical and personal care products (PPCPs). We integrated publicly available data on toxicogenomics, the diseasome, protein-protein interactions (PPIs), and gene essentiality and found that a few contaminants were targeted to many genes, and a few genes were targeted by many contaminants. The contaminant targets were hub proteins in the human PPI network, whereas the target proteins in most categories did not contain abundant essential proteins. Generally, contaminant targets and disease-associated proteins were closely associated with the PPI network, and the closeness of the associations depended on the disease type and chemical category. Network biology techniques were used to identify environmental contaminants with broad effects on the human interactome and contaminant-sensitive biomarkers. Moreover, this method enabled us to quantify the relationship between environmental contaminants and human diseases, which was supported by epidemiological and experimental evidence. These methods and findings have facilitated the elucidation of the complex relationship between environmental exposure and adverse health outcomes.
1802.04794
Joshua Goldwyn
Joshua H Goldwyn, Bradley R Slabe, Joseph B Travers, David Terman
Gain control with A-type potassium current: IA as a switch between divisive and subtractive inhibition
20 pages, 11 figures
null
10.1371/journal.pcbi.1006292
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons process information by transforming barrages of synaptic inputs into spiking activity. Synaptic inhibition suppresses the output firing activity of a neuron, and is commonly classified as having a subtractive or divisive effect on a neuron's output firing activity. Subtractive inhibition can narrow the range of inputs that evoke spiking activity by eliminating responses to non-preferred inputs. Divisive inhibition is a form of gain control: it modifies firing rates while preserving the range of inputs that evoke firing activity. Since these two "modes" of inhibition have distinct impacts on neural coding, it is important to understand the biophysical mechanisms that distinguish these response profiles. We use simulations and mathematical analysis of a neuron model to find the specific conditions for which inhibitory inputs have subtractive or divisive effects. We identify a novel role for the A-type Potassium current (IA). In our model, this fast-activating, slowly-inactivating outward current acts as a switch between subtractive and divisive inhibition. If IA is strong (large maximal conductance) and fast (activates on a time-scale similar to spike initiation), then inhibition has a subtractive effect on neural firing. In contrast, if IA is weak or insufficiently fast-activating, then inhibition has a divisive effect on neural firing. We explain these findings using dynamical systems methods to define how a spike threshold condition depends on synaptic inputs and IA. Our findings suggest that neurons can "self-regulate" the gain control effects of inhibition via combinations of synaptic plasticity and/or modulation of the conductance and kinetics of A-type Potassium channels. This novel role for IA would add flexibility to neurons and networks, and may relate to recent observations of divisive inhibitory effects on neurons in the nucleus of the solitary tract.
[ { "created": "Tue, 13 Feb 2018 18:56:57 GMT", "version": "v1" } ]
2018-09-05
[ [ "Goldwyn", "Joshua H", "" ], [ "Slabe", "Bradley R", "" ], [ "Travers", "Joseph B", "" ], [ "Terman", "David", "" ] ]
Neurons process information by transforming barrages of synaptic inputs into spiking activity. Synaptic inhibition suppresses the output firing activity of a neuron, and is commonly classified as having a subtractive or divisive effect on a neuron's output firing activity. Subtractive inhibition can narrow the range of inputs that evoke spiking activity by eliminating responses to non-preferred inputs. Divisive inhibition is a form of gain control: it modifies firing rates while preserving the range of inputs that evoke firing activity. Since these two "modes" of inhibition have distinct impacts on neural coding, it is important to understand the biophysical mechanisms that distinguish these response profiles. We use simulations and mathematical analysis of a neuron model to find the specific conditions for which inhibitory inputs have subtractive or divisive effects. We identify a novel role for the A-type Potassium current (IA). In our model, this fast-activating, slowly-inactivating outward current acts as a switch between subtractive and divisive inhibition. If IA is strong (large maximal conductance) and fast (activates on a time-scale similar to spike initiation), then inhibition has a subtractive effect on neural firing. In contrast, if IA is weak or insufficiently fast-activating, then inhibition has a divisive effect on neural firing. We explain these findings using dynamical systems methods to define how a spike threshold condition depends on synaptic inputs and IA. Our findings suggest that neurons can "self-regulate" the gain control effects of inhibition via combinations of synaptic plasticity and/or modulation of the conductance and kinetics of A-type Potassium channels. This novel role for IA would add flexibility to neurons and networks, and may relate to recent observations of divisive inhibitory effects on neurons in the nucleus of the solitary tract.
1306.3063
Robert Ekblom
Robert Ekblom, Dennis Hasselquist, Stein Are Sæther, Peder Fiske, John Atle Kålås, Mats Grahn, Jacob Höglund
Humoral immunocompetence in relation to condition, size, asymmetry and MHC class II variation in great snipe (Gallinago media) males
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
1. In recent years many studies have investigated the relationships between different aspects of the immune system and ecology in various organisms. Yet, it remains unclear why individuals differ in their ability to mount an immune response against various antigens (often referred to as 'immunocompetence'). Different kinds of trade-offs may be involved and costs of mounting the immune response often lead to condition-dependent effects. 2. We investigated how variation in condition, morphology and genetic variables influenced the amount of antibodies produced against two novel antigens in a migrating bird, the great snipe (Gallinago media). 3. We found no evidence for condition dependence of the antibody response and no effect of MHC genetics. There was, however, a weak negative correlation between body size and the amount of antibody production, which may indicate a trade-off between growth and immune response in this species.
[ { "created": "Thu, 13 Jun 2013 09:38:20 GMT", "version": "v1" } ]
2013-06-14
[ [ "Ekblom", "Robert", "" ], [ "Hasselquist", "Dennis", "" ], [ "Sæther", "Stein Are", "" ], [ "Fiske", "Peder", "" ], [ "Kålås", "John Atle", "" ], [ "Grahn", "Mats", "" ], [ "Höglund", "Jacob", "" ] ]
1. In recent years many studies have investigated the relationships between different aspects of the immune system and ecology in various organisms. Yet, it remains unclear why individuals differ in their ability to mount an immune response against various antigens (often referred to as 'immunocompetence'). Different kinds of trade-offs may be involved, and the costs of mounting the immune response often lead to condition-dependent effects. 2. We investigated how variation in condition, morphology and genetic variables influenced the amount of antibodies produced against two novel antigens in a migrating bird, the great snipe (Gallinago media). 3. We found no evidence for condition dependence of the antibody response and no effect of MHC genetics. There was, however, a weak negative correlation between body size and the amount of antibody production, which may indicate a trade-off between growth and immune response in this species.
1305.0119
Steffen Rulands
Steffen Rulands, Alejandro Zielinski, Erwin Frey
Global attractors and extinction dynamics of cyclically competing species
18 pages, 12 figures
Phys. Rev. E 87, 052710 (2013)
10.1103/PhysRevE.87.052710
null
q-bio.PE nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transitions to absorbing states are of fundamental importance in non-equilibrium physics as well as ecology. In ecology, absorbing states correspond to the extinction of species. We here study the spatial population dynamics of three cyclically interacting species. The interaction scheme comprises both direct competition between species as in the cyclic Lotka-Volterra model, and separated selection and reproduction processes as in the May-Leonard model. We show that the dynamic processes leading to the transient maintenance of biodiversity are closely linked to attractors of the nonlinear dynamics for the overall species' concentrations. The characteristics of these global attractors change qualitatively at certain threshold values of the mobility, and depend on the relative strength of the different types of competition between species. They give information about the scaling of extinction times with the system size and thereby the stability of biodiversity. We define an effective free energy as the negative logarithm of the probability to find the system in a specific global state before reaching one of the absorbing states. The global attractors then correspond to minima of this effective energy landscape and determine the most probable values for the species' global concentrations. As in equilibrium thermodynamics, qualitative changes in the effective free energy landscape indicate and characterize the underlying non-equilibrium phase transitions. We provide the complete phase diagrams for the population dynamics, and give a comprehensive analysis of the spatio-temporal dynamics and routes to extinction in the respective phases.
[ { "created": "Wed, 1 May 2013 08:06:30 GMT", "version": "v1" } ]
2014-05-27
[ [ "Rulands", "Steffen", "" ], [ "Zielinski", "Alejandro", "" ], [ "Frey", "Erwin", "" ] ]
Transitions to absorbing states are of fundamental importance in non-equilibrium physics as well as ecology. In ecology, absorbing states correspond to the extinction of species. We here study the spatial population dynamics of three cyclically interacting species. The interaction scheme comprises both direct competition between species as in the cyclic Lotka-Volterra model, and separated selection and reproduction processes as in the May-Leonard model. We show that the dynamic processes leading to the transient maintenance of biodiversity are closely linked to attractors of the nonlinear dynamics for the overall species' concentrations. The characteristics of these global attractors change qualitatively at certain threshold values of the mobility, and depend on the relative strength of the different types of competition between species. They give information about the scaling of extinction times with the system size and thereby the stability of biodiversity. We define an effective free energy as the negative logarithm of the probability to find the system in a specific global state before reaching one of the absorbing states. The global attractors then correspond to minima of this effective energy landscape and determine the most probable values for the species' global concentrations. As in equilibrium thermodynamics, qualitative changes in the effective free energy landscape indicate and characterize the underlying non-equilibrium phase transitions. We provide the complete phase diagrams for the population dynamics, and give a comprehensive analysis of the spatio-temporal dynamics and routes to extinction in the respective phases.
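The effective free energy defined in the abstract above is simply the negative logarithm of the probability of finding the system in a given global state, so it can be estimated directly from simulation output. A minimal sketch, using a synthetic stand-in trajectory (a beta-distributed concentration) rather than the paper's lattice simulations:

```python
import numpy as np

# Illustrative only: estimate an effective free energy F = -log P from how
# often a trajectory visits each coarse-grained global state.
rng = np.random.default_rng(2)
traj = rng.beta(4, 4, size=100_000)        # hypothetical concentration trajectory
counts, edges = np.histogram(traj, bins=20, range=(0, 1))
P = counts / counts.sum()                  # empirical state probabilities
# assign infinite free energy to unvisited states, -log P elsewhere
F = np.where(P > 0, -np.log(np.where(P > 0, P, 1.0)), np.inf)
# minima of F mark the global attractors (most probable concentrations)
print(edges[np.argmin(F)])
```

For a symmetric Beta(4, 4) trajectory, the free-energy minimum sits at the central bin, mirroring how the paper reads off attractors as minima of the effective energy landscape.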
1610.08664
Luigi Leonardo Palese
Luigi Leonardo Palese
A random version of principal component analysis in data clustering
18 pages, 6 figures, 2 tables
Comput. Biol. Chem. 73 (2018) 57-64
10.1016/j.compbiolchem.2018.01.009
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Principal component analysis (PCA) is a widespread technique for data analysis that relies on the covariance-correlation matrix of the analyzed data. However, to work properly with high-dimensional data, PCA imposes severe mathematical constraints on the minimum number of different replicates or samples that must be included in the analysis. Here we show that a modified algorithm works not only on well-dimensioned datasets, but also on degenerate ones.
[ { "created": "Thu, 27 Oct 2016 08:52:19 GMT", "version": "v1" } ]
2018-10-18
[ [ "Palese", "Luigi Leonardo", "" ] ]
Principal component analysis (PCA) is a widespread technique for data analysis that relies on the covariance-correlation matrix of the analyzed data. However, to work properly with high-dimensional data, PCA imposes severe mathematical constraints on the minimum number of different replicates or samples that must be included in the analysis. Here we show that a modified algorithm works not only on well-dimensioned datasets, but also on degenerate ones.
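As an illustrative aside (this is not the paper's modified random algorithm), an SVD-based PCA already handles the degenerate case the abstract mentions, where there are far fewer samples than variables, because it never forms the full covariance matrix:

```python
import numpy as np

def pca_svd(X, n_components):
    """PCA via SVD; works even when n_samples < n_features (degenerate case)."""
    Xc = X - X.mean(axis=0)                              # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)    # thin SVD
    scores = U[:, :n_components] * S[:n_components]      # projected samples
    explained = (S ** 2) / (S ** 2).sum()                # variance fractions
    return scores, Vt[:n_components], explained

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 500))          # 10 samples, 500 variables
scores, components, explained = pca_svd(X, 2)
print(scores.shape, components.shape)   # (10, 2) (2, 500)
```

The shapes and data here are hypothetical; the point is only that the thin SVD sidesteps the sample-count constraint discussed in the abstract.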
1505.01206
Changshuai Wei
Changshuai Wei, Daniel J. Schaid, Qing Lu
Trees Assembling Mann-Whitney Approach for Detecting Genome-wide Joint Association among Low Marginal Effect Loci
null
Genet Epidemiol. 2013 Jan;37(1):84-91
10.1002/gepi.21693
null
q-bio.QM stat.CO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Common complex diseases are likely influenced by the interplay of hundreds, or even thousands, of genetic variants. Converging evidence shows that genetic variants with low marginal effects (LME) play an important role in disease development. Despite their potential significance, discovering LME genetic variants and assessing their joint association on high-dimensional data (e.g., genome-wide association studies) remain a great challenge. To facilitate joint association analysis among a large ensemble of LME genetic variants, we proposed a computationally efficient and powerful approach, which we call Trees Assembling Mann-Whitney (TAMW). Through simulation studies and an empirical data application, we found that TAMW outperformed multifactor dimensionality reduction (MDR) and the likelihood-ratio-based Mann-Whitney approach (LRMW) when the underlying complex disease involves multiple LME loci and their interactions. For instance, in a simulation with 20 interacting LME loci, TAMW attained a higher power (power=0.931) than both MDR (power=0.599) and LRMW (power=0.704). In an empirical study of 29 known Crohn's disease (CD) loci, TAMW also identified a stronger joint association with CD than those detected by MDR and LRMW. Finally, we applied TAMW to the Wellcome Trust CD GWAS to conduct a genome-wide analysis. The analysis of 459K single nucleotide polymorphisms was completed in 40 hours using parallel computing, and revealed a joint association predisposing to CD (p-value=2.763e-19). Further analysis of the newly discovered association suggested that 13 genes, such as ATG16L1 and LACC1, may play an important role in CD pathophysiological and etiological processes.
[ { "created": "Tue, 5 May 2015 22:14:28 GMT", "version": "v1" } ]
2015-05-07
[ [ "Wei", "Changshuai", "" ], [ "Schaid", "Daniel J.", "" ], [ "Lu", "Qing", "" ] ]
Common complex diseases are likely influenced by the interplay of hundreds, or even thousands, of genetic variants. Converging evidence shows that genetic variants with low marginal effects (LME) play an important role in disease development. Despite their potential significance, discovering LME genetic variants and assessing their joint association on high-dimensional data (e.g., genome-wide association studies) remain a great challenge. To facilitate joint association analysis among a large ensemble of LME genetic variants, we proposed a computationally efficient and powerful approach, which we call Trees Assembling Mann-Whitney (TAMW). Through simulation studies and an empirical data application, we found that TAMW outperformed multifactor dimensionality reduction (MDR) and the likelihood-ratio-based Mann-Whitney approach (LRMW) when the underlying complex disease involves multiple LME loci and their interactions. For instance, in a simulation with 20 interacting LME loci, TAMW attained a higher power (power=0.931) than both MDR (power=0.599) and LRMW (power=0.704). In an empirical study of 29 known Crohn's disease (CD) loci, TAMW also identified a stronger joint association with CD than those detected by MDR and LRMW. Finally, we applied TAMW to the Wellcome Trust CD GWAS to conduct a genome-wide analysis. The analysis of 459K single nucleotide polymorphisms was completed in 40 hours using parallel computing, and revealed a joint association predisposing to CD (p-value=2.763e-19). Further analysis of the newly discovered association suggested that 13 genes, such as ATG16L1 and LACC1, may play an important role in CD pathophysiological and etiological processes.
q-bio/0511030
Radhakrishnan Nagarajan
Radhakrishnan Nagarajan, Meenakshi Upreti
Correlation Statistics for cDNA Microarray Image Analysis
25 Pages, 8 Figures
null
null
null
q-bio.GN q-bio.QM
null
In this report, the correlation of the pixels comprising a microarray spot is investigated. Subsequently, correlation statistics, namely Pearson correlation and Spearman rank correlation, are used to segment the foreground and background intensity of microarray spots. The performance of correlation-based segmentation is compared to clustering-based (PAM, k-means) and seeded-region-growing techniques (SPOT). It is shown that correlation-based segmentation is useful in flagging poorly hybridized spots, thus minimizing false positives. The present study also raises the intriguing question of whether a change in correlation can be an indicator of differential gene expression.
[ { "created": "Wed, 16 Nov 2005 15:25:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Nagarajan", "Radhakrishnan", "" ], [ "Upreti", "Meenakshi", "" ] ]
In this report, the correlation of the pixels comprising a microarray spot is investigated. Subsequently, correlation statistics, namely Pearson correlation and Spearman rank correlation, are used to segment the foreground and background intensity of microarray spots. The performance of correlation-based segmentation is compared to clustering-based (PAM, k-means) and seeded-region-growing techniques (SPOT). It is shown that correlation-based segmentation is useful in flagging poorly hybridized spots, thus minimizing false positives. The present study also raises the intriguing question of whether a change in correlation can be an indicator of differential gene expression.
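The two correlation statistics named in the abstract can be sketched directly. The red/green channel arrays below are synthetic stand-ins for the paired pixel intensities of a spot, not data from the paper:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation of two equal-length vectors."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)  # no-ties ranks
    return pearson(rank(x), rank(y))

# hypothetical well-hybridized spot: both channels track a common signal
rng = np.random.default_rng(1)
signal = rng.uniform(100, 1000, size=64)         # 64 pixels in the spot
red = signal + rng.normal(0, 10, size=64)
green = 0.8 * signal + rng.normal(0, 10, size=64)
print(round(pearson(red, green), 3), round(spearman(red, green), 3))
```

A well-hybridized spot yields pixel correlations near 1 in both statistics; a poorly hybridized spot, where one channel is mostly noise, would not, which is the flagging criterion the abstract describes.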
2007.14464
Manqing Ma
Xueming Liu, Daqing Li, Manqing Ma, Boleslaw K. Szymanski, H Eugene Stanley, Jianxi Gao
Network resilience
Review chapter
Physics Reports 471:1-108, Aug. 12, 2022
10.1016/j.physrep.2022.04.002
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many systems on our planet are known to shift abruptly and irreversibly from one state to another when they are forced across a "tipping point," such as mass extinctions in ecological networks, cascading failures in infrastructure systems, and social convention changes in human and animal networks. Such a regime shift demonstrates a system's resilience, which characterizes the ability of a system to adjust its activity to retain its basic functionality in the face of internal disturbances or external environmental changes. In the past 50 years, attention was given almost exclusively to low-dimensional systems and to the calibration of their resilience functions and early-warning indicators, without consideration of the interactions between components. Only in recent years, taking advantage of network theory and abundant real data sets, have network scientists directed their interest to real-world complex networked multidimensional systems, their resilience functions, and early-warning indicators. This report is devoted to a comprehensive review of the resilience function and regime shift of complex systems in different domains, such as ecology, biology, social systems, and infrastructure. We cover the related research on empirical observations, experimental studies, mathematical modeling, and theoretical analysis. We also discuss some ambiguous definitions, such as robustness, resilience, and stability.
[ { "created": "Sun, 26 Jul 2020 18:12:45 GMT", "version": "v1" }, { "created": "Sun, 10 Apr 2022 00:02:00 GMT", "version": "v2" } ]
2022-05-23
[ [ "Liu", "Xueming", "" ], [ "Li", "Daqing", "" ], [ "Ma", "Manqing", "" ], [ "Szymanski", "Boleslaw K.", "" ], [ "Stanley", "H Eugene", "" ], [ "Gao", "Jianxi", "" ] ]
Many systems on our planet are known to shift abruptly and irreversibly from one state to another when they are forced across a "tipping point," such as mass extinctions in ecological networks, cascading failures in infrastructure systems, and social convention changes in human and animal networks. Such a regime shift demonstrates a system's resilience, which characterizes the ability of a system to adjust its activity to retain its basic functionality in the face of internal disturbances or external environmental changes. In the past 50 years, attention was given almost exclusively to low-dimensional systems and to the calibration of their resilience functions and early-warning indicators, without consideration of the interactions between components. Only in recent years, taking advantage of network theory and abundant real data sets, have network scientists directed their interest to real-world complex networked multidimensional systems, their resilience functions, and early-warning indicators. This report is devoted to a comprehensive review of the resilience function and regime shift of complex systems in different domains, such as ecology, biology, social systems, and infrastructure. We cover the related research on empirical observations, experimental studies, mathematical modeling, and theoretical analysis. We also discuss some ambiguous definitions, such as robustness, resilience, and stability.
1910.06860
Augustine Okolie
Augustine Okolie and Johannes M\"uller
Exact and approximate formulas for contact tracing on random trees
24 pages, 9 figures
Mathematical Biosciences, 108320 (2020)
10.1016/j.mbs.2020.108320
null
q-bio.PE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a stochastic susceptible-infected-recovered (SIR) model with contact tracing on random trees and on the configuration model. On a rooted tree, where initially all individuals are susceptible apart from the root, which is infected, we are able to find exact formulas for the distribution of the infectious period. To this end, we show how to extend the existing theory for contact tracing in homogeneously mixing populations to trees. Based on these formulas, we discuss the influence of randomness in the tree and of the basic reproduction number. We find the well-known results for the homogeneously mixing case as a limit of the present model (tree-shaped contact graph). Furthermore, we develop approximate mean-field equations for the dynamics on trees, and - using the message-passing method - also for the configuration model. The interpretation and implications of the results are discussed.
[ { "created": "Tue, 15 Oct 2019 15:23:34 GMT", "version": "v1" }, { "created": "Wed, 16 Oct 2019 05:46:48 GMT", "version": "v2" }, { "created": "Sat, 22 Feb 2020 10:14:45 GMT", "version": "v3" } ]
2020-02-25
[ [ "Okolie", "Augustine", "" ], [ "Müller", "Johannes", "" ] ]
We consider a stochastic susceptible-infected-recovered (SIR) model with contact tracing on random trees and on the configuration model. On a rooted tree, where initially all individuals are susceptible apart from the root, which is infected, we are able to find exact formulas for the distribution of the infectious period. To this end, we show how to extend the existing theory for contact tracing in homogeneously mixing populations to trees. Based on these formulas, we discuss the influence of randomness in the tree and of the basic reproduction number. We find the well-known results for the homogeneously mixing case as a limit of the present model (tree-shaped contact graph). Furthermore, we develop approximate mean-field equations for the dynamics on trees, and - using the message-passing method - also for the configuration model. The interpretation and implications of the results are discussed.
0801.4556
German Andres Enciso
Winfried Just and German Enciso
Large attractors in cooperative bi-quadratic Boolean networks. Part II
14 pages
null
null
null
q-bio.MN q-bio.QM
null
Boolean networks have been the object of much attention, especially since S. Kauffman proposed them in the 1960s as models for gene regulatory networks. These systems are characterized by being defined on a Boolean state space and by simultaneous updating at discrete time steps. Of particular importance for biological applications are networks in which the indegree for each variable is bounded by a fixed constant, as was stressed by Kauffman in his original papers. An important question is which conditions on the network topology can rule out exponentially long periodic orbits in the system. In this paper we consider cooperative systems, i.e. systems with positive feedback interconnections among all variables, which in a continuous setting guarantees a very stable dynamics. In Part I of this paper we presented a construction that shows that for an arbitrary constant 0<c<2 and sufficiently large n there exist n-dimensional Boolean cooperative networks in which both the indegree and outdegree of each variable are bounded by two (bi-quadratic networks) and which nevertheless contain periodic orbits of length at least c^n. In this part, we prove an inverse result showing that for sufficiently large n and for 0<c<2 sufficiently close to 2, any n-dimensional cooperative, bi-quadratic Boolean network with a cycle of length at least c^n must have a large proportion of variables with indegree 1. Such systems therefore share a structural similarity to the systems constructed in Part I.
[ { "created": "Tue, 29 Jan 2008 20:50:32 GMT", "version": "v1" } ]
2008-01-30
[ [ "Just", "Winfried", "" ], [ "Enciso", "German", "" ] ]
Boolean networks have been the object of much attention, especially since S. Kauffman proposed them in the 1960s as models for gene regulatory networks. These systems are characterized by being defined on a Boolean state space and by simultaneous updating at discrete time steps. Of particular importance for biological applications are networks in which the indegree for each variable is bounded by a fixed constant, as was stressed by Kauffman in his original papers. An important question is which conditions on the network topology can rule out exponentially long periodic orbits in the system. In this paper we consider cooperative systems, i.e. systems with positive feedback interconnections among all variables, which in a continuous setting guarantees a very stable dynamics. In Part I of this paper we presented a construction that shows that for an arbitrary constant 0<c<2 and sufficiently large n there exist n-dimensional Boolean cooperative networks in which both the indegree and outdegree of each variable are bounded by two (bi-quadratic networks) and which nevertheless contain periodic orbits of length at least c^n. In this part, we prove an inverse result showing that for sufficiently large n and for 0<c<2 sufficiently close to 2, any n-dimensional cooperative, bi-quadratic Boolean network with a cycle of length at least c^n must have a large proportion of variables with indegree 1. Such systems therefore share a structural similarity to the systems constructed in Part I.
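To make the objects in the abstract concrete, here is a hypothetical three-node cooperative Boolean network under synchronous updating (not the Part I construction): every update rule is monotone in its inputs, yet the system still admits a periodic orbit alongside its fixed points.

```python
from itertools import product

def step(state):
    """One synchronous update; each rule is monotone (cooperative)."""
    x, y, z = state
    return (y or z, x, x and y)

def cycle_length(state, max_steps=64):
    """Iterate until a state repeats; return the length of the cycle reached."""
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state)
    return None

# cycle length reached from every initial state of the 3-node network
lengths = {s: cycle_length(s) for s in product([False, True], repeat=3)}
print(max(lengths.values()))   # longest periodic orbit in this toy network
```

Here the all-False and all-True states are fixed points (cycle length 1), while (True, False, False) and (False, True, False) swap under the update, giving a 2-cycle; the paper's question is how long such orbits can get as the dimension n grows under the bi-quadratic degree bound.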
2402.15091
Chen Shen
Zehua Si, Zhixue He, Chen Shen and Jun Tanimoto
Mixed strategy approach destabilizes cooperation in finite populations with clustering coefficient
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Evolutionary game theory, encompassing discrete, continuous, and mixed strategies, is pivotal for understanding cooperation dynamics. Discrete strategies involve deterministic actions with a fixed probability of one, whereas continuous strategies employ intermediate probabilities to convey the extent of cooperation and emphasize expected payoffs. Mixed strategies, though akin to continuous ones, calculate immediate payoffs based on the action chosen at a given moment within intermediate probabilities. Although previous research has highlighted the distinct impacts of these strategic approaches on fostering cooperation, the reasons behind the differing levels of cooperation among these approaches have remained somewhat unclear. This study explores how these strategic approaches influence cooperation in the context of the prisoner's dilemma game, particularly in networked populations with varying clustering coefficients. Our research goes beyond existing studies by revealing that the differences in cooperation levels between these strategic approaches are not confined to finite populations; they also depend on the clustering coefficients of these populations. In populations with nonzero clustering coefficients, we observed varying degrees of stable cooperation for each strategic approach across multiple simulations, with mixed strategies showing the most variability, followed by continuous and discrete strategies. However, this variability in cooperation evolution decreased in populations with a clustering coefficient of zero, narrowing the differences in cooperation levels among the strategies. These findings suggest that in more realistic settings, the robustness of cooperation systems may be compromised, as the evolution of cooperation through mixed and continuous strategies introduces a degree of unpredictability.
[ { "created": "Fri, 23 Feb 2024 04:34:05 GMT", "version": "v1" } ]
2024-02-26
[ [ "Si", "Zehua", "" ], [ "He", "Zhixue", "" ], [ "Shen", "Chen", "" ], [ "Tanimoto", "Jun", "" ] ]
Evolutionary game theory, encompassing discrete, continuous, and mixed strategies, is pivotal for understanding cooperation dynamics. Discrete strategies involve deterministic actions with a fixed probability of one, whereas continuous strategies employ intermediate probabilities to convey the extent of cooperation and emphasize expected payoffs. Mixed strategies, though akin to continuous ones, calculate immediate payoffs based on the action chosen at a given moment within intermediate probabilities. Although previous research has highlighted the distinct impacts of these strategic approaches on fostering cooperation, the reasons behind the differing levels of cooperation among these approaches have remained somewhat unclear. This study explores how these strategic approaches influence cooperation in the context of the prisoner's dilemma game, particularly in networked populations with varying clustering coefficients. Our research goes beyond existing studies by revealing that the differences in cooperation levels between these strategic approaches are not confined to finite populations; they also depend on the clustering coefficients of these populations. In populations with nonzero clustering coefficients, we observed varying degrees of stable cooperation for each strategic approach across multiple simulations, with mixed strategies showing the most variability, followed by continuous and discrete strategies. However, this variability in cooperation evolution decreased in populations with a clustering coefficient of zero, narrowing the differences in cooperation levels among the strategies. These findings suggest that in more realistic settings, the robustness of cooperation systems may be compromised, as the evolution of cooperation through mixed and continuous strategies introduces a degree of unpredictability.
2011.10318
Vincent Huin
Sabiha Eddarkaoui (JPArc), M\'egane Homa (JPArc), Anne Loyens (JPArc), Emilie Faivre (JPArc), Vincent Deramecourt (JPArc), Claude-Alain Maurage, Luc Bu\'ee (JPArc), Vincent Huin (JPArc), Bernard Sablonni\`ere (JPArc)
The TMEM240 Protein, Mutated in SCA21, Is Expressed in Purkinje Cells and Synaptic Terminals
The Cerebellum, Springer, In press, Ahead of print
null
10.1007/s12311-020-01112-y
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A variety of missense mutations and a stop mutation in the gene coding for transmembrane protein 240 (TMEM240) have been reported to be the causative mutations of spinocerebellar ataxia 21 (SCA21). We aimed to investigate the expression of TMEM240 protein in mouse brain at the tissue, cellular, and subcellular levels. Immunofluorescence labeling showed TMEM240 to be expressed in various areas of the brain, with the highest levels in the hippocampus, isocortex, and cerebellum. In the cerebellum, TMEM240 was detected in the deep nuclei and the cerebellar cortex. The protein was expressed in all three layers of the cortex and various cerebellar neurons. TMEM240 was localized to climbing, mossy, and parallel fiber afferents projecting to Purkinje cells, as shown by coimmunostaining with VGLUT1 and VGLUT2. Co-immunostaining with synaptophysin, post-synaptic fractionation, and confirmatory electron microscopy showed TMEM240 to be localized to the post-synaptic side of synapses near the Purkinje-cell soma. Similar results were obtained in human cerebellar sections. These data suggest that TMEM240 may be involved in the organization of the cerebellar network, particularly in synaptic inputs converging on Purkinje cells. This study is the first to describe TMEM240 expression in the normal mouse brain.
[ { "created": "Fri, 20 Nov 2020 10:15:41 GMT", "version": "v1" } ]
2020-11-23
[ [ "Eddarkaoui", "Sabiha", "", "JPArc" ], [ "Homa", "Mégane", "", "JPArc" ], [ "Loyens", "Anne", "", "JPArc" ], [ "Faivre", "Emilie", "", "JPArc" ], [ "Deramecourt", "Vincent", "", "JPArc" ], [ "Maurage", "Claude-Alain", "", "JPArc" ], [ "Buée", "Luc", "", "JPArc" ], [ "Huin", "Vincent", "", "JPArc" ], [ "Sablonnière", "Bernard", "", "JPArc" ] ]
A variety of missense mutations and a stop mutation in the gene coding for transmembrane protein 240 (TMEM240) have been reported to be the causative mutations of spinocerebellar ataxia 21 (SCA21). We aimed to investigate the expression of TMEM240 protein in mouse brain at the tissue, cellular, and subcellular levels. Immunofluorescence labeling showed TMEM240 to be expressed in various areas of the brain, with the highest levels in the hippocampus, isocortex, and cerebellum. In the cerebellum, TMEM240 was detected in the deep nuclei and the cerebellar cortex. The protein was expressed in all three layers of the cortex and various cerebellar neurons. TMEM240 was localized to climbing, mossy, and parallel fiber afferents projecting to Purkinje cells, as shown by coimmunostaining with VGLUT1 and VGLUT2. Co-immunostaining with synaptophysin, post-synaptic fractionation, and confirmatory electron microscopy showed TMEM240 to be localized to the post-synaptic side of synapses near the Purkinje-cell soma. Similar results were obtained in human cerebellar sections. These data suggest that TMEM240 may be involved in the organization of the cerebellar network, particularly in synaptic inputs converging on Purkinje cells. This study is the first to describe TMEM240 expression in the normal mouse brain.
0809.1180
Patricia Faisca
P.F.N. Faisca and C. M. Gomes
On the relation between native geometry and conformational plasticity
Accepted in Biophysical Chemistry
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In protein folding the term plasticity refers to the number of alternative folding pathways encountered in response to free energy perturbations such as those induced by mutation. Here we explore the relation between folding plasticity and a gross, generic feature of the native geometry, namely, the relative number of local and non-local native contacts. The results from our study, which is based on Monte Carlo simulations of simple lattice proteins, show that folding to a structure that is rich in local contacts is considerably more plastic than folding to a native geometry characterized by having a very large number of long-range contacts (i.e., contacts between amino acids that are separated by more than 12 units of backbone distance). The smaller folding plasticity of `non-local' native geometries is probably a direct consequence of their higher folding cooperativity that renders the folding reaction more robust against single- and multiple-point mutations.
[ { "created": "Sat, 6 Sep 2008 15:45:16 GMT", "version": "v1" } ]
2008-09-09
[ [ "Faisca", "P. F. N.", "" ], [ "Gomes", "C. M.", "" ] ]
In protein folding the term plasticity refers to the number of alternative folding pathways encountered in response to free energy perturbations such as those induced by mutation. Here we explore the relation between folding plasticity and a gross, generic feature of the native geometry, namely, the relative number of local and non-local native contacts. The results from our study, which is based on Monte Carlo simulations of simple lattice proteins, show that folding to a structure that is rich in local contacts is considerably more plastic than folding to a native geometry characterized by having a very large number of long-range contacts (i.e., contacts between amino acids that are separated by more than 12 units of backbone distance). The smaller folding plasticity of `non-local' native geometries is probably a direct consequence of their higher folding cooperativity that renders the folding reaction more robust against single- and multiple-point mutations.
1905.12179
Tom Chou
Stephanie M. Lewkiewicz, Yao-Li Chuang, Tom Chou
Dynamics of T Cell Receptor Distributions Following Acute Thymic Atrophy and Resumption
22 pages, 7 figures
null
null
null
q-bio.PE q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Naive human T cells are produced in the thymus, which atrophies abruptly and severely in response to physical or psychological stress. To understand how an instance of stress affects the size and "diversity" of the peripheral naive T cell pool, we derive a mean-field autonomous ODE model of T cell replenishment that allows us to track the clone abundance distribution (the mean number of different TCRs each represented by a specific number of cells). We identify equilibrium solutions that arise at different rates of T cell production, and derive analytic approximations to the dominant eigenvalues and eigenvectors of the problem linearized about these equilibria. From the forms of the eigenvalues and eigenvectors, we estimate rates at which counts of clones of different sizes converge to and depart from equilibrium values--that is, how the number of clones of different sizes "adjust" to the changing rate of T cell production. Under most physiologically realistic realizations of our model, the dominant eigenvalue (representing the slowest dynamics of the clone abundance distribution) scales as a power law in the thymic output for low output levels, but saturates at higher T cell production rates. Our analysis provides a framework for quantitatively understanding how the clone abundance distributions evolve under small changes in the overall T cell production rate by the thymus.
[ { "created": "Wed, 29 May 2019 02:21:04 GMT", "version": "v1" } ]
2019-05-30
[ [ "Lewkiewicz", "Stephanie M.", "" ], [ "Chuang", "Yao-Li", "" ], [ "Chou", "Tom", "" ] ]
Naive human T cells are produced in the thymus, which atrophies abruptly and severely in response to physical or psychological stress. To understand how an instance of stress affects the size and "diversity" of the peripheral naive T cell pool, we derive a mean-field autonomous ODE model of T cell replenishment that allows us to track the clone abundance distribution (the mean number of different TCRs each represented by a specific number of cells). We identify equilibrium solutions that arise at different rates of T cell production, and derive analytic approximations to the dominant eigenvalues and eigenvectors of the problem linearized about these equilibria. From the forms of the eigenvalues and eigenvectors, we estimate rates at which counts of clones of different sizes converge to and depart from equilibrium values--that is, how the number of clones of different sizes "adjust" to the changing rate of T cell production. Under most physiologically realistic realizations of our model, the dominant eigenvalue (representing the slowest dynamics of the clone abundance distribution) scales as a power law in the thymic output for low output levels, but saturates at higher T cell production rates. Our analysis provides a framework for quantitatively understanding how the clone abundance distributions evolve under small changes in the overall T cell production rate by the thymus.
0911.0656
Randen Patterson
Gaurav Bhardwaj, Christine P. Wells, Reka Albert, Damian B. van Rossum, and Randen L. Patterson
Mapping Complex Networks: Exploring Boolean Modeling of Signal Transduction Pathways
19 pages, 5 Figures, 2 Tables
null
null
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we explored the utility of a descriptive and predictive bionetwork model for phospholipase C-coupled calcium signaling pathways, built with non-kinetic experimental information. Boolean models generated from these data yield oscillatory activity patterns for both the endoplasmic reticulum resident inositol-1,4,5-trisphosphate receptor (IP3R) and the plasma-membrane resident canonical transient receptor potential channel 3 (TRPC3). These results are specific as randomization of the Boolean operators ablates oscillatory pattern formation. Furthermore, knock-out simulations of the IP3R, TRPC3, and multiple other proteins recapitulate experimentally derived results. The potential of this approach can be observed by its ability to predict previously undescribed cellular phenotypes using in vitro experimental data. Indeed our cellular analysis of the developmental and calcium-regulatory protein, DANGER1a, confirms the counter-intuitive predictions from our Boolean models in two highly relevant cellular models. Based on these results, we theorize that with sufficient legacy knowledge and/or computational biology predictions, Boolean networks provide a robust method for predictive-modeling of any biological system.
[ { "created": "Tue, 3 Nov 2009 19:16:12 GMT", "version": "v1" }, { "created": "Mon, 14 Dec 2009 16:47:36 GMT", "version": "v2" } ]
2009-12-14
[ [ "Bhardwaj", "Gaurav", "" ], [ "Wells", "Christine P.", "" ], [ "Albert", "Reka", "" ], [ "van Rossum", "Damian B.", "" ], [ "Patterson", "Randen L.", "" ] ]
In this study, we explored the utility of a descriptive and predictive bionetwork model for phospholipase C-coupled calcium signaling pathways, built with non-kinetic experimental information. Boolean models generated from these data yield oscillatory activity patterns for both the endoplasmic reticulum resident inositol-1,4,5-trisphosphate receptor (IP3R) and the plasma-membrane resident canonical transient receptor potential channel 3 (TRPC3). These results are specific as randomization of the Boolean operators ablates oscillatory pattern formation. Furthermore, knock-out simulations of the IP3R, TRPC3, and multiple other proteins recapitulate experimentally derived results. The potential of this approach can be observed by its ability to predict previously undescribed cellular phenotypes using in vitro experimental data. Indeed our cellular analysis of the developmental and calcium-regulatory protein, DANGER1a, confirms the counter-intuitive predictions from our Boolean models in two highly relevant cellular models. Based on these results, we theorize that with sufficient legacy knowledge and/or computational biology predictions, Boolean networks provide a robust method for predictive-modeling of any biological system.
1910.00048
Joseph Rusinko
Sophia Huebler, Rachel Morris, Joseph Rusinko and Yifei Tao
Constructing Semi-Directed Level-1 Phylogenetic Networks from Quarnets
17 pages
null
null
null
q-bio.GN math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce two algorithms for reconstructing semi-directed level-1 phylogenetic networks from their complete set of 4-leaf subnetworks, known as quarnets. The first algorithm, the sequential method, begins with a single quarnet and adds on one leaf at a time until all leaves have been placed. The second algorithm, the cherry-blob method, functions similarly to cherry-picking algorithms for phylogenetic trees by identifying exterior network structures from the quarnets.
[ { "created": "Mon, 30 Sep 2019 18:33:27 GMT", "version": "v1" } ]
2019-10-02
[ [ "Huebler", "Sophia", "" ], [ "Morris", "Rachel", "" ], [ "Rusinko", "Joseph", "" ], [ "Tao", "Yifei", "" ] ]
We introduce two algorithms for reconstructing semi-directed level-1 phylogenetic networks from their complete set of 4-leaf subnetworks, known as quarnets. The first algorithm, the sequential method, begins with a single quarnet and adds on one leaf at a time until all leaves have been placed. The second algorithm, the cherry-blob method, functions similarly to cherry-picking algorithms for phylogenetic trees by identifying exterior network structures from the quarnets.
2007.00886
Qinbing Fu
Qinbing Fu and Shigang Yue
Modelling Drosophila Motion Vision Pathways for Decoding the Direction of Translating Objects Against Cluttered Moving Backgrounds
27 pages, 13 figures, to be included in a future issue of the journal Biological Cybernetics
null
10.1007/s00422-020-00841-x
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly \textit{Drosophila} motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.
[ { "created": "Thu, 2 Jul 2020 05:15:31 GMT", "version": "v1" } ]
2020-07-03
[ [ "Fu", "Qinbing", "" ], [ "Yue", "Shigang", "" ] ]
Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect a moving target in highly variable environments during flight, which are excellent paradigms to learn motion perception strategies. This paper investigates the fruit fly \textit{Drosophila} motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are on two aspects: 1) the proposed model articulates the forming of both direction-selective (DS) and direction-opponent (DO) responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; 2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics including combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors, and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as global responses of both the HS and VS systems with positive or negative output indicating preferred-direction (PD) or null-direction (ND) translation. The experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets embedded in cluttered moving backgrounds.
1504.07392
Giovanni Bussi
Francesco Di Palma, Sandro Bottaro, and Giovanni Bussi
Kissing loop interaction in adenine riboswitch: insights from umbrella sampling simulations
Accepted for publication on BMC Bioinformatics
BMC Bioinformatics 2015, 16(Suppl 9):S6
10.1186/1471-2105-16-S9-S6
null
q-bio.BM physics.bio-ph physics.chem-ph
http://creativecommons.org/licenses/by/3.0/
Riboswitches are cis-acting regulatory RNA elements prevalently located in the leader sequences of bacterial mRNA. An adenine sensing riboswitch cis-regulates the adenosine deaminase gene (add) in Vibrio vulnificus. The structural mechanism regulating its conformational changes upon ligand binding mostly remains to be elucidated. In this open framework it has been suggested that the ligand stabilizes the interaction of the distal "kissing loop" complex. Using accurate full-atom molecular dynamics with explicit solvent in combination with enhanced sampling techniques and advanced analysis methods it could be possible to provide a more detailed perspective on the formation of these tertiary contacts. In this work, we used umbrella sampling simulations to study the thermodynamics of the kissing loop complex in the presence and in the absence of the cognate ligand. We enforced the breaking/formation of the loop-loop interaction restraining the distance between the two loops. We also assessed the convergence of the results by using two alternative initialization protocols. A structural analysis was performed using a novel approach to analyze base contacts. Our simulations qualitatively indicated that the ligand could stabilize the kissing loop complex. We also compared with previously published simulation studies. Kissing complex stabilization given by the ligand was compatible with available experimental data. However, the dependence of its value on the initialization protocol of the umbrella sampling simulations posed some questions on the quantitative interpretation of the results and called for better converged enhanced sampling simulations.
[ { "created": "Tue, 28 Apr 2015 09:34:02 GMT", "version": "v1" } ]
2015-06-08
[ [ "Di Palma", "Francesco", "" ], [ "Bottaro", "Sandro", "" ], [ "Bussi", "Giovanni", "" ] ]
Riboswitches are cis-acting regulatory RNA elements prevalently located in the leader sequences of bacterial mRNA. An adenine sensing riboswitch cis-regulates the adenosine deaminase gene (add) in Vibrio vulnificus. The structural mechanism regulating its conformational changes upon ligand binding mostly remains to be elucidated. In this open framework it has been suggested that the ligand stabilizes the interaction of the distal "kissing loop" complex. Using accurate full-atom molecular dynamics with explicit solvent in combination with enhanced sampling techniques and advanced analysis methods it could be possible to provide a more detailed perspective on the formation of these tertiary contacts. In this work, we used umbrella sampling simulations to study the thermodynamics of the kissing loop complex in the presence and in the absence of the cognate ligand. We enforced the breaking/formation of the loop-loop interaction restraining the distance between the two loops. We also assessed the convergence of the results by using two alternative initialization protocols. A structural analysis was performed using a novel approach to analyze base contacts. Our simulations qualitatively indicated that the ligand could stabilize the kissing loop complex. We also compared with previously published simulation studies. Kissing complex stabilization given by the ligand was compatible with available experimental data. However, the dependence of its value on the initialization protocol of the umbrella sampling simulations posed some questions on the quantitative interpretation of the results and called for better converged enhanced sampling simulations.
1802.05820
Meng-Han Zhang
Meng-Han Zhang, Wu-Yun Pan, Shi Yan, Li Jin
Phonemic evidence reveals interwoven evolution of Chinese dialects
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Han Chinese experienced substantial population migrations and admixture in history, yet little is known about the evolutionary process of Chinese dialects. Here, we used phylogenetic approaches and admixture inference to explicitly decompose the underlying structure of the diversity of Chinese dialects, based on the total phoneme inventories of 140 dialect samples from seven traditional dialect groups: Mandarin, Wu, Xiang, Gan, Hakka, Min and Yue. We found a north-south gradient of phonemic differences in Chinese dialects induced from historical population migrations. We also quantified extensive horizontal language transfers among these dialects, corresponding to the complicated socio-genetic history in China. We finally identified that the middle latitude dialects of Xiang, Gan and Hakka were formed by admixture with the other four dialects. Accordingly, the middle-latitude areas in China were a linguistic melting pot of northern and southern Han populations. Our study provides a detailed phylogenetic and historical context against the family-tree model in China.
[ { "created": "Fri, 16 Feb 2018 02:07:23 GMT", "version": "v1" } ]
2018-02-19
[ [ "Zhang", "Meng-Han", "" ], [ "Pan", "Wu-Yun", "" ], [ "Yan", "Shi", "" ], [ "Jin", "Li", "" ] ]
Han Chinese experienced substantial population migrations and admixture in history, yet little is known about the evolutionary process of Chinese dialects. Here, we used phylogenetic approaches and admixture inference to explicitly decompose the underlying structure of the diversity of Chinese dialects, based on the total phoneme inventories of 140 dialect samples from seven traditional dialect groups: Mandarin, Wu, Xiang, Gan, Hakka, Min and Yue. We found a north-south gradient of phonemic differences in Chinese dialects induced from historical population migrations. We also quantified extensive horizontal language transfers among these dialects, corresponding to the complicated socio-genetic history in China. We finally identified that the middle latitude dialects of Xiang, Gan and Hakka were formed by admixture with the other four dialects. Accordingly, the middle-latitude areas in China were a linguistic melting pot of northern and southern Han populations. Our study provides a detailed phylogenetic and historical context against the family-tree model in China.
1405.6258
Mike Steel Prof.
Mike Steel and Joel D. Velasco
Axiomatic opportunities and obstacles for inferring a species tree from gene trees
19 pages, 2 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The reconstruction of a central tendency `species tree' from a large number of conflicting gene trees is a central problem in systematic biology. Moreover, it becomes particularly problematic when taxon coverage is patchy, so that not all taxa are present in every gene tree. Here, we list four apparently desirable properties that a method for estimating a species tree from gene trees could have (the strongest property states that building a species tree from input gene trees and then pruning leaves gives a tree that is the same as, or more resolved than, the tree obtained by first removing the taxa from the input trees and then building the species tree). We show that while it is technically possible to simultaneously satisfy these properties when taxon coverage is complete, they cannot all be satisfied in the more general supertree setting. In part two, we discuss a concordance-based consensus method based on Baum's `plurality clusters', and an extension to concordance supertrees.
[ { "created": "Sat, 24 May 2014 02:17:43 GMT", "version": "v1" } ]
2014-05-27
[ [ "Steel", "Mike", "" ], [ "Velasco", "Joel D.", "" ] ]
The reconstruction of a central tendency `species tree' from a large number of conflicting gene trees is a central problem in systematic biology. Moreover, it becomes particularly problematic when taxon coverage is patchy, so that not all taxa are present in every gene tree. Here, we list four apparently desirable properties that a method for estimating a species tree from gene trees could have (the strongest property states that building a species tree from input gene trees and then pruning leaves gives a tree that is the same as, or more resolved than, the tree obtained by first removing the taxa from the input trees and then building the species tree). We show that while it is technically possible to simultaneously satisfy these properties when taxon coverage is complete, they cannot all be satisfied in the more general supertree setting. In part two, we discuss a concordance-based consensus method based on Baum's `plurality clusters', and an extension to concordance supertrees.
2109.14724
Andrejs Tucs
Andrejs Tucs, Koji Tsuda, Adnan Sljoka
Probing conformational dynamics of antibodies with geometric simulations
Book chapter
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
This chapter describes the application of constrained geometric simulations for prediction of antibody structural dynamics. We utilize the constrained geometric simulation method FRODAN, which is a low computational complexity alternative to Molecular Dynamics (MD) simulations that can rapidly explore flexible motions in protein structures. FRODAN is highly suited for conformational dynamics analysis of large proteins, complexes, intrinsically disordered proteins and dynamics that occur on longer biologically relevant time scales which are normally inaccessible to classical MD simulations. This approach predicts protein dynamics at an all-atom scale while retaining realistic covalent bonding, maintaining dihedral angles in energetically good conformations and avoiding steric clashes, in addition to performing other geometric and stereochemical criteria checks. In this chapter, we apply FRODAN to showcase its applicability for probing functionally relevant dynamics of IgG2a, including large amplitude domain-domain motions and motions of complementarity determining region (CDR) loops. As was suggested in previous experimental studies, our simulations show that antibodies can explore a large range of conformational space.
[ { "created": "Wed, 29 Sep 2021 21:05:49 GMT", "version": "v1" } ]
2021-10-01
[ [ "Tucs", "Andrejs", "" ], [ "Tsuda", "Koji", "" ], [ "Sljoka", "Adnan", "" ] ]
This chapter describes the application of constrained geometric simulations for prediction of antibody structural dynamics. We utilize the constrained geometric simulation method FRODAN, which is a low computational complexity alternative to Molecular Dynamics (MD) simulations that can rapidly explore flexible motions in protein structures. FRODAN is highly suited for conformational dynamics analysis of large proteins, complexes, intrinsically disordered proteins and dynamics that occur on longer biologically relevant time scales which are normally inaccessible to classical MD simulations. This approach predicts protein dynamics at an all-atom scale while retaining realistic covalent bonding, maintaining dihedral angles in energetically good conformations and avoiding steric clashes, in addition to performing other geometric and stereochemical criteria checks. In this chapter, we apply FRODAN to showcase its applicability for probing functionally relevant dynamics of IgG2a, including large amplitude domain-domain motions and motions of complementarity determining region (CDR) loops. As was suggested in previous experimental studies, our simulations show that antibodies can explore a large range of conformational space.
2108.11331
Fatma Al-Musalhi
Kifah Al-Maqrashi, Fatma Al-Musalhi, Ibrahim M. Elmojtaba, Nasser Al-Salti
The Impact of Mobility between Rural Areas and Forests on the Spread of Zika
30 pages, 13 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
A mathematical model of Zika virus transmission incorporating human movement between rural areas and nearby forests is presented to investigate the role of human movement in the spread of Zika virus infections in human and mosquito populations. Proportions of both susceptible and infected humans living in rural areas are assumed to move to nearby forest areas. Direct, indirect and vertical transmission routes are incorporated for all populations. Mathematical analysis of the proposed model has been presented. The analysis starts with normalizing the proposed model. Positivity and boundedness of solutions to the normalized model have then been addressed. The basic reproduction number has been calculated using the next generation matrix method and its relation to the three routes of disease transmission has been presented. The sensitivity analysis of the basic reproduction number to all model parameters has been investigated. The analysis also includes existence and stability of disease free and endemic equilibrium points. Bifurcation analysis has also been carried out. Finally, numerical solutions to the normalized model have been obtained to confirm the theoretical results and to demonstrate the impact of human movement on disease transmission in human and mosquito populations.
[ { "created": "Tue, 24 Aug 2021 08:50:23 GMT", "version": "v1" } ]
2021-08-26
[ [ "Al-Maqrashi", "Kifah", "" ], [ "Al-Musalhi", "Fatma", "" ], [ "Elmojtaba", "Ibrahim M.", "" ], [ "Al-Salti", "Nasser", "" ] ]
A mathematical model of Zika virus transmission incorporating human movement between rural areas and nearby forests is presented to investigate the role of human movement in the spread of Zika virus infections in human and mosquito populations. Proportions of both susceptible and infected humans living in rural areas are assumed to move to nearby forest areas. Direct, indirect and vertical transmission routes are incorporated for all populations. Mathematical analysis of the proposed model has been presented. The analysis starts with normalizing the proposed model. Positivity and boundedness of solutions to the normalized model have then been addressed. The basic reproduction number has been calculated using the next generation matrix method and its relation to the three routes of disease transmission has been presented. The sensitivity analysis of the basic reproduction number to all model parameters has been investigated. The analysis also includes existence and stability of disease free and endemic equilibrium points. Bifurcation analysis has also been carried out. Finally, numerical solutions to the normalized model have been obtained to confirm the theoretical results and to demonstrate the impact of human movement on disease transmission in human and mosquito populations.
2407.12060
Wen-Juan Ma Prof. Dr.
Barbora Augstenov\'a, Wen-Juan Ma
Decoding Dmrt1: Insights into vertebrate sex determination and gonadal sex differentiation
40 pages, 4 figures, 2 tables, 1 supplementary table
null
null
null
q-bio.QM q-bio.CB q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Dmrt1 is pivotal in testis formation and function by interacting with genes crucial for Sertoli cell differentiation, such as Sox9. It represses female-determining pathways and ovarian formation by silencing Foxl2. Across 127 vertebrate species, Dmrt1 exhibits sexually dimorphic expression, prior to and during gonadal sex differentiation and in adult testes, implicating its role in master regulation of sex determination and gonadal sex differentiation. Dmrt1 emerges as a master sex-determining gene in one fish, frog, chicken and reptile. Recent studies suggest epigenetic regulation of Dmrt1 in its promoter methylation and transposable element insertion introducing epigenetic modification to cis-regulatory elements, alongside non-coding RNA involvement, in a wide spectrum of sex-determining mechanisms. Additionally, alternative splicing of Dmrt1 was found in all vertebrate groups except amphibians. Dmrt1 has evolved many lineage-specific isoforms (ranging from 2 to 10) but has no sex-specific splicing variants in any taxa, which is in sharp contrast to the constitutional sex-specific splicing of Dsx in insects. Future research should focus on understanding the molecular basis of environmental sex determination from a broader taxon, and the molecular basis of epigenetic regulation. It is also essential to understand why and how multiple alternative splicing variants of Dmrt1 evolve and the specific roles each isoform plays in sex determination and gonadal sex differentiation, as well as the significant differences in the molecular mechanisms and functions of alternative splicing between Dmrt1 in vertebrates and sex-specific splicing of Dsx in insects. Understanding the differences could provide deeper insights into the evolution of sex-determining mechanisms between vertebrates and insects.
[ { "created": "Mon, 15 Jul 2024 12:55:32 GMT", "version": "v1" }, { "created": "Thu, 8 Aug 2024 09:43:30 GMT", "version": "v2" } ]
2024-08-09
[ [ "Augstenová", "Barbora", "" ], [ "Ma", "Wen-Juan", "" ] ]
Dmrt1 is pivotal in testis formation and function by interacting with genes crucial for Sertoli cell differentiation, such as Sox9. It represses female-determining pathways and ovarian formation by silencing Foxl2. Across 127 vertebrate species, Dmrt1 exhibits sexually dimorphic expression, prior to and during gonadal sex differentiation and in adult testes, implicating its role in master regulation of sex determination and gonadal sex differentiation. Dmrt1 emerges as a master sex-determining gene in one fish, frog, chicken and reptile. Recent studies suggest epigenetic regulation of Dmrt1 in its promoter methylation and transposable element insertion introducing epigenetic modification to cis-regulatory elements, alongside non-coding RNA involvement, in a wide spectrum of sex-determining mechanisms. Additionally, alternative splicing of Dmrt1 was found in all vertebrate groups except amphibians. Dmrt1 has evolved many lineage-specific isoforms (ranging from 2 to 10) but has no sex-specific splicing variants in any taxa, which is in sharp contrast to the constitutional sex-specific splicing of Dsx in insects. Future research should focus on understanding the molecular basis of environmental sex determination from a broader taxon, and the molecular basis of epigenetic regulation. It is also essential to understand why and how multiple alternative splicing variants of Dmrt1 evolve and the specific roles each isoform plays in sex determination and gonadal sex differentiation, as well as the significant differences in the molecular mechanisms and functions of alternative splicing between Dmrt1 in vertebrates and sex-specific splicing of Dsx in insects. Understanding the differences could provide deeper insights into the evolution of sex-determining mechanisms between vertebrates and insects.
1512.05234
Inna Kuperstein
Inna Kuperstein
Deciphering cell signaling rewiring in human disorders
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The knowledge of cell molecular mechanisms implicated in human diseases is expanding and should be converted into guidelines for deciphering pathological cell signaling and suggesting appropriate treatment. The basic assumption is that during a pathological transformation, the cell does not create new signaling mechanisms, but rather it hijacks the existing molecular programs. This affects not only intracellular functions, but also a crosstalk between different cell types resulting in a new, yet pathological status of the system. There is a certain combination of molecular characteristics dictating specific cell signaling states that sustains the pathological disease status. Identifying and manipulating the key molecular players controlling these cell signaling states, and shifting the pathological status toward the desired healthy phenotype, are the major challenge for molecular biology of human diseases.
[ { "created": "Wed, 16 Dec 2015 16:26:35 GMT", "version": "v1" } ]
2015-12-17
[ [ "Kuperstein", "Inna", "" ] ]
The knowledge of cell molecular mechanisms implicated in human diseases is expanding and should be converted into guidelines for deciphering pathological cell signaling and suggesting appropriate treatment. The basic assumption is that during a pathological transformation, the cell does not create new signaling mechanisms, but rather it hijacks the existing molecular programs. This affects not only intracellular functions, but also a crosstalk between different cell types resulting in a new, yet pathological status of the system. There is a certain combination of molecular characteristics dictating specific cell signaling states that sustains the pathological disease status. Identifying and manipulating the key molecular players controlling these cell signaling states, and shifting the pathological status toward the desired healthy phenotype, are the major challenge for molecular biology of human diseases.
1610.03828
Gabriel Ocker
Gabriel Koch Ocker, Kre\v{s}imir Josi\'c, Eric Shea-Brown, Michael A. Buice
Linking structure and activity in nonlinear spiking networks
We were recently made aware of an error in this article: in Figure 13, we neglected several one-loop contributions to the two-point correlation. For the networks we studied here, these contributions are small (third order in the coupling strength). For further discussion, please see the correction note appended to the end of the article
PLoS Computational Biology 2017;13(6):e1005583
10.1371/journal.pcbi.1005583
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of {\it structure-driven activity} has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities---including those of different cell types---combine with connectivity to shape population activity and function.
[ { "created": "Wed, 12 Oct 2016 19:07:45 GMT", "version": "v1" }, { "created": "Fri, 10 Mar 2017 20:09:45 GMT", "version": "v2" }, { "created": "Tue, 25 Feb 2020 23:31:10 GMT", "version": "v3" } ]
2020-02-27
[ [ "Ocker", "Gabriel Koch", "" ], [ "Josić", "Krešimir", "" ], [ "Shea-Brown", "Eric", "" ], [ "Buice", "Michael A.", "" ] ]
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of {\it structure-driven activity} has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities---including those of different cell types---combine with connectivity to shape population activity and function.