Column          Type    Statistics
id              string  length 9–13
submitter       string  length 4–48
authors         string  length 4–9.62k
title           string  length 4–343
comments        string  length 2–480
journal-ref     string  length 9–309
doi             string  length 12–138
report-no       string  277 distinct values
categories      string  length 8–87
license         string  9 distinct values
orig_abstract   string  length 27–3.76k
versions        list    length 1–15
update_date     string  length 10–10
authors_parsed  list    length 1–147
abstract        string  length 24–3.75k
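Read as a flattened table, each record below is a run of 15 field values in the schema order above. A minimal loader sketch for records shaped this way — the "null" sentinel and the JSON-encoded list fields are assumptions based on how the rows render, and hyphens in column names are mapped to underscores:

```python
import json

# The 15 columns, in the order they appear in each flattened record
# (hyphens in the original column names mapped to underscores).
COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal_ref",
    "doi", "report_no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def parse_record(values):
    """Map one flat run of 15 field values onto the schema.

    Assumptions: the literal string "null" marks a missing value, and the
    versions / authors_parsed fields are JSON-encoded lists.
    """
    if len(values) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} fields, got {len(values)}")
    record = {}
    for name, raw in zip(COLUMNS, values):
        if raw == "null":
            record[name] = None                 # missing field
        elif name in ("versions", "authors_parsed"):
            record[name] = json.loads(raw)      # list-valued field
        else:
            record[name] = raw
    return record
```

Applied to the first record below, `comments` and `report-no` come out as `None` and `versions` as a list of version dicts.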
1403.6779
Leon Avery
Leon Avery
A model of the effect of uncertainty on the C. elegans L2/L2d decision
null
PLoS ONE 9(7): e100580
10.1371/journal.pone.0100580
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At the end of the first larval stage, the C. elegans larva chooses between two developmental pathways, an L2 committed to reproductive development and an L2d, which has the option of undergoing reproductive development or entering the dauer diapause. I develop a quantitative model of this choice using mathematical tools developed for pricing financial options. The model predicts that the optimal decision must take into account not only the expected potential for reproductive growth, but also the uncertainty in that expected potential. Because the L2d has more flexibility than the L2, it is favored in unpredictable environments. I estimate that the ability to take uncertainty into account may increase reproductive value by as much as 5%, and discuss possible experimental tests for this ability.
[ { "created": "Wed, 26 Mar 2014 18:13:48 GMT", "version": "v1" }, { "created": "Fri, 16 May 2014 14:34:34 GMT", "version": "v2" }, { "created": "Fri, 25 Jul 2014 17:58:17 GMT", "version": "v3" } ]
2014-07-29
[ [ "Avery", "Leon", "" ] ]
At the end of the first larval stage, the C. elegans larva chooses between two developmental pathways, an L2 committed to reproductive development and an L2d, which has the option of undergoing reproductive development or entering the dauer diapause. I develop a quantitative model of this choice using mathematical tools developed for pricing financial options. The model predicts that the optimal decision must take into account not only the expected potential for reproductive growth, but also the uncertainty in that expected potential. Because the L2d has more flexibility than the L2, it is favored in unpredictable environments. I estimate that the ability to take uncertainty into account may increase reproductive value by as much as 5%, and discuss possible experimental tests for this ability.
2301.00548
Niv DeMalach
David Sampson Issaka, Or Gross, Itunuoluwa Ayilara, Talia Schabes, Niv DeMalach
Density-dependent and independent mechanisms jointly reduce species performance under nitrogen enrichment
null
null
10.1111/oik.09838
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Nitrogen (N) deposition is a primary driver of species loss in plant communities globally. However, the mechanisms by which high N availability causes species loss remain unclear. Many hypotheses for species loss with increasing N availability highlight density-dependent mechanisms, i.e., changes in species interactions. However, an alternative set of hypotheses highlights density-independent detrimental effects of nitrogen (e.g., N toxicity). We tested the role of density-dependent and density-independent mechanisms in reducing species performance. For this aim, we used 120 experimental plant communities composed of annual species growing together in containers under four fertilization treatments: (1) no nutrient addition, (2) all nutrients except N (P, K, and micronutrients), (3) low N, and (4) high N. Each fertilization treatment included two sowing densities to differentiate between the effects of competition (N * density interactions) and other detrimental effects of N. We focused on three performance attributes: the probability of reaching the reproduction period, biomass growth, and population growth. We found that individual biomass and population growth rates decreased with increasing sowing density in all nutrient treatments, implying that species interactions were predominantly negative. The common grass had a higher biomass and population growth under N enrichment, regardless of sowing density. In contrast, the legume showed a density-independent reduction in biomass growth with increasing N. Lastly, the small forb showed a density-dependent reduction in population growth, i.e., the decline occurred only under high density. Our results demonstrate that density-dependent and density-independent mechanisms operate simultaneously to reduce species performance under high N availability. Yet, their relative importance varies among species and life stages.
[ { "created": "Mon, 2 Jan 2023 07:24:22 GMT", "version": "v1" } ]
2023-04-18
[ [ "Issaka", "David Sampson", "" ], [ "Gross", "Or", "" ], [ "Ayilara", "Itunuoluwa", "" ], [ "Schabes", "Talia", "" ], [ "DeMalach", "Niv", "" ] ]
Nitrogen (N) deposition is a primary driver of species loss in plant communities globally. However, the mechanisms by which high N availability causes species loss remain unclear. Many hypotheses for species loss with increasing N availability highlight density-dependent mechanisms, i.e., changes in species interactions. However, an alternative set of hypotheses highlights density-independent detrimental effects of nitrogen (e.g., N toxicity). We tested the role of density-dependent and density-independent mechanisms in reducing species performance. For this aim, we used 120 experimental plant communities composed of annual species growing together in containers under four fertilization treatments: (1) no nutrient addition, (2) all nutrients except N (P, K, and micronutrients), (3) low N, and (4) high N. Each fertilization treatment included two sowing densities to differentiate between the effects of competition (N * density interactions) and other detrimental effects of N. We focused on three performance attributes: the probability of reaching the reproduction period, biomass growth, and population growth. We found that individual biomass and population growth rates decreased with increasing sowing density in all nutrient treatments, implying that species interactions were predominantly negative. The common grass had a higher biomass and population growth under N enrichment, regardless of sowing density. In contrast, the legume showed a density-independent reduction in biomass growth with increasing N. Lastly, the small forb showed a density-dependent reduction in population growth, i.e., the decline occurred only under high density. Our results demonstrate that density-dependent and density-independent mechanisms operate simultaneously to reduce species performance under high N availability. Yet, their relative importance varies among species and life stages.
1902.00483
Stefan Bornholdt
Stefan Bornholdt and Stuart Kauffman
Ensembles, Dynamics, and Cell Types: Revisiting the Statistical Mechanics Perspective on Cellular Regulation
22 pages, article will be included in a special issue of J. Theor. Biol. dedicated to the memory of Prof. Rene Thomas
null
null
null
q-bio.MN cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic regulatory networks control ontogeny. For fifty years Boolean networks have served as models of such systems, ranging from ensembles of random Boolean networks as models for generic properties of gene regulation to working dynamical models of a growing number of sub-networks of real cells. At the same time, their statistical mechanics has been thoroughly studied. Here we recapitulate their original motivation in the context of current theoretical and empirical research. We discuss ensembles of random Boolean networks whose dynamical attractors model cell types. A sub-ensemble is the critical ensemble. There is now strong evidence that genetic regulatory networks are dynamically critical, and that evolution is exploring the critical sub-ensemble. The generic properties of this sub-ensemble predict essential features of cell differentiation. In particular, the number of attractors in such networks scales as the DNA content raised to the 0.63 power. Data on the number of cell types as a function of the DNA content per cell shows a scaling relationship of 0.88. Thus, the theory correctly predicts a power law relationship between the number of cell types and the DNA contents per cell, and a comparable slope. We discuss these new scaling values and show prospects for new research lines for Boolean networks as a base model for systems biology.
[ { "created": "Fri, 1 Feb 2019 17:58:35 GMT", "version": "v1" } ]
2019-02-04
[ [ "Bornholdt", "Stefan", "" ], [ "Kauffman", "Stuart", "" ] ]
Genetic regulatory networks control ontogeny. For fifty years Boolean networks have served as models of such systems, ranging from ensembles of random Boolean networks as models for generic properties of gene regulation to working dynamical models of a growing number of sub-networks of real cells. At the same time, their statistical mechanics has been thoroughly studied. Here we recapitulate their original motivation in the context of current theoretical and empirical research. We discuss ensembles of random Boolean networks whose dynamical attractors model cell types. A sub-ensemble is the critical ensemble. There is now strong evidence that genetic regulatory networks are dynamically critical, and that evolution is exploring the critical sub-ensemble. The generic properties of this sub-ensemble predict essential features of cell differentiation. In particular, the number of attractors in such networks scales as the DNA content raised to the 0.63 power. Data on the number of cell types as a function of the DNA content per cell shows a scaling relationship of 0.88. Thus, the theory correctly predicts a power law relationship between the number of cell types and the DNA contents per cell, and a comparable slope. We discuss these new scaling values and show prospects for new research lines for Boolean networks as a base model for systems biology.
1211.6644
Yuri Shestopaloff
Yu. K. Shestopaloff
General law of growth and replication. Growth equation and its applications
53 pages, 17 figures, 4 tables
Biophysical Reviews and Letters, 2012 Vol. 7, No. 1-2, p. 71-120
10.1142/S1793048012500051
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present significantly advanced studies of the previously introduced physical growth mechanism and unite it with biochemical growth factors. The obtained results allowed us to formulate a general growth law that governs the growth and evolutionary development of all living organisms, their organs, and systems. It was discovered that the growth cycle is predefined by the distribution of nutritional resources between maintenance needs and biomass production. This distribution is quantitatively defined by the growth ratio parameter, which depends on the geometry of an organism, the phase of growth and, indirectly, the organism's biochemical machinery. The amount of produced biomass, in turn, defines the composition of biochemical reactions. The changing amount of nutrients diverted to biomass production is what forces organisms to proceed through the whole growth and replication cycle. The growth law can be formulated as follows: the rate of growth is proportional to the influx of nutrients and the growth ratio. Considering the specific biochemical components of different organisms, we find the influxes of required nutrients and substitute them into the growth equation; we then compute growth curves for an amoeba, wild-type fission yeast, and a fission yeast mutant. In all cases, the predicted growth curves correspond very well to experimental data. The obtained results prove the validity and fundamental scientific value of the discovery.
[ { "created": "Wed, 28 Nov 2012 16:16:37 GMT", "version": "v1" } ]
2012-11-29
[ [ "Shestopaloff", "Yu. K.", "" ] ]
We present significantly advanced studies of the previously introduced physical growth mechanism and unite it with biochemical growth factors. The obtained results allowed us to formulate a general growth law that governs the growth and evolutionary development of all living organisms, their organs, and systems. It was discovered that the growth cycle is predefined by the distribution of nutritional resources between maintenance needs and biomass production. This distribution is quantitatively defined by the growth ratio parameter, which depends on the geometry of an organism, the phase of growth and, indirectly, the organism's biochemical machinery. The amount of produced biomass, in turn, defines the composition of biochemical reactions. The changing amount of nutrients diverted to biomass production is what forces organisms to proceed through the whole growth and replication cycle. The growth law can be formulated as follows: the rate of growth is proportional to the influx of nutrients and the growth ratio. Considering the specific biochemical components of different organisms, we find the influxes of required nutrients and substitute them into the growth equation; we then compute growth curves for an amoeba, wild-type fission yeast, and a fission yeast mutant. In all cases, the predicted growth curves correspond very well to experimental data. The obtained results prove the validity and fundamental scientific value of the discovery.
2101.00823
Philip Gerlee
Philip Gerlee, Julia Karlsson, Ingrid Fritzell, Thomas Brezicka, Armin Spreco, Toomas Timpka, Anna Jöud, Torbjörn Lundh
Predicting regional COVID-19 hospital admissions in Sweden using mobility data
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
The transmission of COVID-19 depends on social contacts, the rate of which has varied during the pandemic due to mandated and voluntary social distancing. Changes in transmission dynamics eventually affect hospital admissions, and we have used this connection to model and predict regional hospital admissions in Sweden during the COVID-19 pandemic. We use an SEIR model for each region in Sweden in which the infectivity is assumed to depend on mobility data, in terms of public transport utilisation and mobile phone usage. The results show that the model can capture the timing of the first wave and the beginning of the second wave of the pandemic. Further, we show that for two major regions of Sweden, models with public transport data outperform models using mobile phone usage. The model assumes a three-week delay from disease transmission to hospitalisation, which makes it possible to use current mobility data to predict future admissions.
[ { "created": "Mon, 4 Jan 2021 08:18:53 GMT", "version": "v1" } ]
2021-01-05
[ [ "Gerlee", "Philip", "" ], [ "Karlsson", "Julia", "" ], [ "Fritzell", "Ingrid", "" ], [ "Brezicka", "Thomas", "" ], [ "Spreco", "Armin", "" ], [ "Timpka", "Toomas", "" ], [ "Jöud", "Anna", "" ], [ "Lundh", "Torbjörn", "" ] ]
The transmission of COVID-19 depends on social contacts, the rate of which has varied during the pandemic due to mandated and voluntary social distancing. Changes in transmission dynamics eventually affect hospital admissions, and we have used this connection to model and predict regional hospital admissions in Sweden during the COVID-19 pandemic. We use an SEIR model for each region in Sweden in which the infectivity is assumed to depend on mobility data, in terms of public transport utilisation and mobile phone usage. The results show that the model can capture the timing of the first wave and the beginning of the second wave of the pandemic. Further, we show that for two major regions of Sweden, models with public transport data outperform models using mobile phone usage. The model assumes a three-week delay from disease transmission to hospitalisation, which makes it possible to use current mobility data to predict future admissions.
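The record above describes an SEIR model whose infectivity tracks mobility data. As an illustration only — not the authors' calibrated model — a forward-Euler SEIR sketch in which a hypothetical mobility series linearly scales the transmission rate:

```python
# Illustrative SEIR sketch with mobility-scaled infectivity. The parameter
# values and the linear scaling beta0 * mobility[t] are assumptions for the
# sketch, not the calibrated model from the paper.

def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of the normalised SEIR equations."""
    n = s + e + i + r
    d_exposed = beta * s * i / n * dt      # S -> E
    d_infectious = sigma * e * dt          # E -> I
    d_recovered = gamma * i * dt           # I -> R
    return (s - d_exposed,
            e + d_exposed - d_infectious,
            i + d_infectious - d_recovered,
            r + d_recovered)

def simulate(mobility, beta0=0.3, sigma=1 / 5, gamma=1 / 7, dt=1.0):
    """Infectious fraction over time; infectivity on day t is beta0 * mobility[t]."""
    s, e, i, r = 0.999, 0.0, 0.001, 0.0
    infectious = []
    for m in mobility:
        s, e, i, r = seir_step(s, e, i, r, beta0 * m, sigma, gamma, dt)
        infectious.append(i)
    return infectious
```

With a sustained high-mobility series the effective reproduction number exceeds one and an epidemic wave appears; with low mobility the initial seed decays.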
1901.03596
Francisco Herrerías-Azcué
Francisco Herrerías-Azcué, Vicente Pérez-Muñuzuri and Tobias Galla
Motion, fixation probability and the choice of an evolutionary process
null
null
10.1371/journal.pcbi.1007238
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different evolutionary models are known to make disparate predictions for the success of an invading mutant in some situations. For example, some evolutionary mechanics lead to amplification of selection in structured populations, while others suppress it. Here, we use computer simulations to study evolutionary populations moved by flows, and show how the speed of this motion impacts the fixation probability of an invading mutant. Flows of different speeds interpolate between evolutionary dynamics on fixed heterogeneous graphs and in well-stirred populations. We find that the motion has an active role in amplifying or suppressing selection, accomplished by fragmenting and reconnecting the interaction graph. While increasing flow speeds suppress selection for most evolutionary models, we identify characteristic responses to flow for the different update rules we test. We suggest these responses as a potential aid for choosing the most suitable update rule for a given biological system.
[ { "created": "Fri, 11 Jan 2019 14:36:52 GMT", "version": "v1" } ]
2020-07-01
[ [ "Herrerías-Azcué", "Francisco", "" ], [ "Pérez-Muñuzuri", "Vicente", "" ], [ "Galla", "Tobias", "" ] ]
Different evolutionary models are known to make disparate predictions for the success of an invading mutant in some situations. For example, some evolutionary mechanics lead to amplification of selection in structured populations, while others suppress it. Here, we use computer simulations to study evolutionary populations moved by flows, and show how the speed of this motion impacts the fixation probability of an invading mutant. Flows of different speeds interpolate between evolutionary dynamics on fixed heterogeneous graphs and in well-stirred populations. We find that the motion has an active role in amplifying or suppressing selection, accomplished by fragmenting and reconnecting the interaction graph. While increasing flow speeds suppress selection for most evolutionary models, we identify characteristic responses to flow for the different update rules we test. We suggest these responses as a potential aid for choosing the most suitable update rule for a given biological system.
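The abstract above compares update rules by the fixation probability of an invading mutant. As a well-mixed baseline (not the flow-driven model studied in the paper), the standard Moran process has the closed form rho = (1 - 1/r) / (1 - 1/r^N) for a single mutant of relative fitness r, which a short simulation of the embedded birth-death walk can check:

```python
import random

def moran_fixation_estimate(n, r, trials, seed=1):
    """Monte Carlo estimate of the fixation probability of one mutant of
    relative fitness r in a well-mixed Moran process of size n.

    Only steps that change the mutant count are simulated: conditioned on
    a change, the count increases with probability r / (1 + r).
    """
    rng = random.Random(seed)
    p_up = r / (1.0 + r)
    fixed = 0
    for _ in range(trials):
        k = 1  # start from a single mutant
        while 0 < k < n:
            k += 1 if rng.random() < p_up else -1
        fixed += k == n
    return fixed / trials

def moran_fixation_exact(n, r):
    """Closed-form fixation probability (1 - 1/r) / (1 - 1/r**n)."""
    return (1 - 1 / r) / (1 - 1 / r ** n)
```

For n = 20 and r = 2 the closed form gives essentially 1/2, and the estimate converges to it as trials grow.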
2301.01110
Jacob Rast
Jacob Rast
Causal Discovery for Gene Regulatory Network Prediction
null
null
null
null
q-bio.MN cs.AI
http://creativecommons.org/licenses/by/4.0/
Biological systems and processes are networks of complex nonlinear regulatory interactions between nucleic acids, proteins, and metabolites. A natural way in which to represent these interaction networks is through the use of a graph. In this formulation, each node represents a nucleic acid, protein, or metabolite and edges represent intermolecular interactions (inhibition, regulation, promotion, coexpression, etc.). In this work, a novel algorithm for the discovery of latent graph structures given experimental data is presented.
[ { "created": "Tue, 3 Jan 2023 14:11:00 GMT", "version": "v1" } ]
2023-01-04
[ [ "Rast", "Jacob", "" ] ]
Biological systems and processes are networks of complex nonlinear regulatory interactions between nucleic acids, proteins, and metabolites. A natural way in which to represent these interaction networks is through the use of a graph. In this formulation, each node represents a nucleic acid, protein, or metabolite and edges represent intermolecular interactions (inhibition, regulation, promotion, coexpression, etc.). In this work, a novel algorithm for the discovery of latent graph structures given experimental data is presented.
2110.03907
Albert Christian Soewongsono
Albert Ch. Soewongsono (1), Barbara R. Holland (1), Małgorzata M. O'Reilly (1) (School of Natural Sciences, Discipline of Mathematics, University of Tasmania)
The Shape of Phylogenies Under Phase-Type Distributed Times to Speciation and Extinction
32 pages, 14 figures, 2 tables
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Phylogenetic trees are widely used to understand the evolutionary history of organisms. Tree shapes provide information about macroevolutionary processes. However, macroevolutionary models are unreliable for inferring the true processes underlying empirical trees. Here, we propose a flexible and biologically plausible macroevolutionary model for phylogenetic trees where times to speciation or extinction events are drawn from a Coxian phase-type (PH) distribution. First, we show that different choices of parameters in our model lead to a range of tree balances as measured by Aldous' $\beta$ statistic. In particular, we demonstrate that it is possible to find parameters that correspond well to empirical tree balance. Next, we provide a natural extension of the $\beta$ statistic to sets of trees. This extension produces less biased estimates of $\beta$ compared to using the median $\beta$ values from individual trees. Furthermore, we derive a likelihood expression for the probability of observing any tree with branch lengths under a model with speciation but no extinction. Finally, we illustrate the application of our model by performing both absolute and relative goodness-of-fit tests for two large empirical phylogenies (squamates and angiosperms) that compare models with Coxian PH distributed times to speciation with models that assume exponential or Weibull distributed waiting times. In our numerical analysis, we found that, in most cases, models assuming a Coxian PH distribution provided the best fit.
[ { "created": "Fri, 8 Oct 2021 06:01:36 GMT", "version": "v1" } ]
2021-10-11
[ [ "Soewongsono", "Albert Ch.", "" ], [ "Holland", "Barbara R.", "" ], [ "O'Reilly", "Małgorzata M.", "" ] ]
Phylogenetic trees are widely used to understand the evolutionary history of organisms. Tree shapes provide information about macroevolutionary processes. However, macroevolutionary models are unreliable for inferring the true processes underlying empirical trees. Here, we propose a flexible and biologically plausible macroevolutionary model for phylogenetic trees where times to speciation or extinction events are drawn from a Coxian phase-type (PH) distribution. First, we show that different choices of parameters in our model lead to a range of tree balances as measured by Aldous' $\beta$ statistic. In particular, we demonstrate that it is possible to find parameters that correspond well to empirical tree balance. Next, we provide a natural extension of the $\beta$ statistic to sets of trees. This extension produces less biased estimates of $\beta$ compared to using the median $\beta$ values from individual trees. Furthermore, we derive a likelihood expression for the probability of observing any tree with branch lengths under a model with speciation but no extinction. Finally, we illustrate the application of our model by performing both absolute and relative goodness-of-fit tests for two large empirical phylogenies (squamates and angiosperms) that compare models with Coxian PH distributed times to speciation with models that assume exponential or Weibull distributed waiting times. In our numerical analysis, we found that, in most cases, models assuming a Coxian PH distribution provided the best fit.
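The model above draws times to speciation or extinction from a Coxian phase-type distribution. A minimal sampler sketch, under an assumed parameterisation (per-phase exponential rates plus per-phase absorption probabilities, with the last phase always absorbing):

```python
import random

def sample_coxian(rates, exit_probs, rng):
    """Sample one waiting time from a Coxian phase-type distribution.

    rates[i] is the exponential rate of phase i; after leaving phase i the
    process absorbs with probability exit_probs[i], otherwise it moves on
    to phase i + 1 (the last phase always absorbs). This parameterisation
    is an assumption for the sketch, not the paper's notation.
    """
    t = 0.0
    for i, lam in enumerate(rates):
        t += rng.expovariate(lam)  # time spent in phase i
        if i == len(rates) - 1 or rng.random() < exit_probs[i]:
            break                  # absorbed: return total elapsed time
    return t
```

Two sanity checks: a single phase reduces to an exponential (mean 1/rate), and two mandatory phases give an Erlang-2 (mean is the sum of the phase means).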
1509.01663
Louxin Zhang
Louxin Zhang
On Tree Based Phylogenetic Networks
17 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. Reticulation-visible networks and child-sibling networks are all tree-based. In this work, we present a simple necessary and sufficient condition for tree-based networks and prove that there is a universal tree-based network for each set of species such that every phylogenetic tree on the same species is a base of this network. The existence of a universal tree-based network implies that for any given set of phylogenetic trees (resp. clusters) on the same species there exists a tree-based network that displays all of them.
[ { "created": "Sat, 5 Sep 2015 04:50:39 GMT", "version": "v1" }, { "created": "Tue, 8 Sep 2015 10:23:02 GMT", "version": "v2" } ]
2015-09-09
[ [ "Zhang", "Louxin", "" ] ]
A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. Reticulation-visible networks and child-sibling networks are all tree-based. In this work, we present a simple necessary and sufficient condition for tree-based networks and prove that there is a universal tree-based network for each set of species such that every phylogenetic tree on the same species is a base of this network. The existence of a universal tree-based network implies that for any given set of phylogenetic trees (resp. clusters) on the same species there exists a tree-based network that displays all of them.
0707.1295
Riccardo Zecchina
Carlo Baldassi, Alfredo Braunstein, Nicolas Brunel, Riccardo Zecchina
Efficient supervised learning in networks with binary synapses
10 pages, 4 figures
PNAS 104, 11079-11084 (2007)
10.1073/pnas.0700324104
null
q-bio.NC cond-mat.stat-mech cs.NE q-bio.QM
null
Recent experimental studies indicate that synaptic changes induced by neuronal activity are discrete jumps between a small number of stable states. Learning in systems with discrete synapses is known to be a computationally hard problem. Here, we study a neurobiologically plausible on-line learning algorithm that derives from Belief Propagation algorithms. We show that it performs remarkably well in a model neuron with binary synapses, and a finite number of 'hidden' states per synapse, that has to learn a random classification task. Such a system is able to learn a number of associations close to the theoretical limit, in a time which is sublinear in system size. This is, to our knowledge, the first on-line algorithm able to efficiently achieve a finite number of patterns learned per binary synapse. Furthermore, we show that performance is optimal for a finite number of hidden states, which becomes very small for sparse coding. The algorithm is similar to the standard 'perceptron' learning algorithm, with an additional rule for synaptic transitions, which occur only if a currently presented pattern is 'barely correct'. In this case, the synaptic changes are meta-plastic only (a change in hidden states and not in the actual synaptic state), stabilizing the synapse in its current state. Finally, we show that a system with two visible states and K hidden states is much more robust to noise than a system with K visible states. We suggest this rule is sufficiently simple to be easily implemented by neurobiological systems or in hardware.
[ { "created": "Mon, 9 Jul 2007 16:23:55 GMT", "version": "v1" } ]
2009-11-13
[ [ "Baldassi", "Carlo", "" ], [ "Braunstein", "Alfredo", "" ], [ "Brunel", "Nicolas", "" ], [ "Zecchina", "Riccardo", "" ] ]
Recent experimental studies indicate that synaptic changes induced by neuronal activity are discrete jumps between a small number of stable states. Learning in systems with discrete synapses is known to be a computationally hard problem. Here, we study a neurobiologically plausible on-line learning algorithm that derives from Belief Propagation algorithms. We show that it performs remarkably well in a model neuron with binary synapses, and a finite number of 'hidden' states per synapse, that has to learn a random classification task. Such a system is able to learn a number of associations close to the theoretical limit, in a time which is sublinear in system size. This is, to our knowledge, the first on-line algorithm able to efficiently achieve a finite number of patterns learned per binary synapse. Furthermore, we show that performance is optimal for a finite number of hidden states, which becomes very small for sparse coding. The algorithm is similar to the standard 'perceptron' learning algorithm, with an additional rule for synaptic transitions, which occur only if a currently presented pattern is 'barely correct'. In this case, the synaptic changes are meta-plastic only (a change in hidden states and not in the actual synaptic state), stabilizing the synapse in its current state. Finally, we show that a system with two visible states and K hidden states is much more robust to noise than a system with K visible states. We suggest this rule is sufficiently simple to be easily implemented by neurobiological systems or in hardware.
1511.00921
Étienne Fodor
Étienne Fodor, Wylie W. Ahmed, Maria Almonacid, Matthias Bussonnier, Nir S. Gov, Marie-Hélène Verlhac, Timo Betz, Paolo Visco, Frédéric van Wijland
Nonequilibrium dissipation in living oocytes
5 pages, 2 figures
EPL 116, 30008 (2016)
10.1209/0295-5075/116/30008
null
q-bio.SC cond-mat.soft cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living organisms are inherently out-of-equilibrium systems. We employ new developments in stochastic energetics and rely on a minimal microscopic model to predict the amount of mechanical energy dissipated by such dynamics. Our model includes complex rheological effects and nonequilibrium stochastic forces. By performing active microrheology and tracking micron-sized vesicles in the cytoplasm of living oocytes, we provide unprecedented measurements of the spectrum of dissipated energy. We show that our model is fully consistent with the experimental data, and we use it to offer predictions for the injection and dissipation energy scales involved in active fluctuations.
[ { "created": "Tue, 3 Nov 2015 14:31:13 GMT", "version": "v1" }, { "created": "Thu, 22 Dec 2016 14:42:27 GMT", "version": "v2" } ]
2016-12-23
[ [ "Fodor", "Étienne", "" ], [ "Ahmed", "Wylie W.", "" ], [ "Almonacid", "Maria", "" ], [ "Bussonnier", "Matthias", "" ], [ "Gov", "Nir S.", "" ], [ "Verlhac", "Marie-Hélène", "" ], [ "Betz", "Timo", "" ], [ "Visco", "Paolo", "" ], [ "van Wijland", "Frédéric", "" ] ]
Living organisms are inherently out-of-equilibrium systems. We employ new developments in stochastic energetics and rely on a minimal microscopic model to predict the amount of mechanical energy dissipated by such dynamics. Our model includes complex rheological effects and nonequilibrium stochastic forces. By performing active microrheology and tracking micron-sized vesicles in the cytoplasm of living oocytes, we provide unprecedented measurements of the spectrum of dissipated energy. We show that our model is fully consistent with the experimental data, and we use it to offer predictions for the injection and dissipation energy scales involved in active fluctuations.
2008.03165
Giannis Koutsou
Constantia Alexandrou, Vangelis Harmandaris, Anastasios Irakleous, Giannis Koutsou, and Nikos Savva
Modeling the evolution of COVID-19 via compartmental and particle-based approaches: application to the Cyprus case
Changes in v2: Updated to match published version; 21 pages, 8 figures, 1 table
PLOS ONE 16(5): e0250709 (2021)
10.1371/journal.pone.0250709 10.17605/OSF.IO/BP79H
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present two different approaches for modeling the spread of the COVID-19 pandemic. Both approaches are based on the population classes susceptible, exposed, infectious, quarantined, and recovered and allow for an arbitrary number of subgroups with different infection rates and different levels of testing. The first model is derived from a set of ordinary differential equations that incorporates the rates at which population transitions take place among classes. The other is a particle model, which is a specific case of crowd simulation model, in which the disease is transmitted through particle collisions and infection rates are varied by adjusting the particle velocities. The parameters of these two models are tuned using information on COVID-19 from the literature and country-specific data, including the effect of restrictions as they were imposed and lifted. We demonstrate the applicability of both models using data from Cyprus, for which we find that both models yield very similar results, giving confidence in the predictions.
[ { "created": "Fri, 7 Aug 2020 13:16:49 GMT", "version": "v1" }, { "created": "Mon, 10 May 2021 15:41:17 GMT", "version": "v2" } ]
2021-05-11
[ [ "Alexandrou", "Constantia", "" ], [ "Harmandaris", "Vangelis", "" ], [ "Irakleous", "Anastasios", "" ], [ "Koutsou", "Giannis", "" ], [ "Savva", "Nikos", "" ] ]
We present two different approaches for modeling the spread of the COVID-19 pandemic. Both approaches are based on the population classes susceptible, exposed, infectious, quarantined, and recovered and allow for an arbitrary number of subgroups with different infection rates and different levels of testing. The first model is derived from a set of ordinary differential equations that incorporates the rates at which population transitions take place among classes. The other is a particle model, which is a specific case of a crowd simulation model, in which the disease is transmitted through particle collisions and infection rates are varied by adjusting the particle velocities. The parameters of these two models are tuned using information on COVID-19 from the literature and country-specific data, including the effect of restrictions as they were imposed and lifted. We demonstrate the applicability of both models using data from Cyprus, for which we find that both models yield very similar results, giving confidence in the predictions.
q-bio/0502014
Edward Lyman
Edward Lyman, F. Marty Ytreberg, and Daniel M. Zuckerman
Resolution exchange simulation
revised manuscript: 4.2 pages, 3 figures
Phys. Rev. Lett. v.96:028105(2006)
10.1103/PhysRevLett.96.028105
null
q-bio.BM physics.bio-ph
null
We extend replica exchange simulation in two ways, and apply our approaches to biomolecules. The first generalization permits exchange simulation between models of differing resolution -- i.e., between detailed and coarse-grained models. Such ``resolution exchange'' can be applied to molecular systems or spin systems. The second extension is to ``pseudo-exchange'' simulations, which require little CPU usage for most levels of the exchange ladder and also substantially reduces the need for overlap between levels. Pseudo exchanges can be used in either replica or resolution exchange simulations. We perform efficient, converged simulations of a 50-atom peptide to illustrate the new approaches.
[ { "created": "Sun, 13 Feb 2005 19:52:26 GMT", "version": "v1" }, { "created": "Mon, 20 Jun 2005 19:47:55 GMT", "version": "v2" }, { "created": "Mon, 15 Aug 2005 17:28:36 GMT", "version": "v3" }, { "created": "Tue, 22 Nov 2005 21:42:39 GMT", "version": "v4" } ]
2009-11-11
[ [ "Lyman", "Edward", "" ], [ "Ytreberg", "F. Marty", "" ], [ "Zuckerman", "Daniel M.", "" ] ]
We extend replica exchange simulation in two ways, and apply our approaches to biomolecules. The first generalization permits exchange simulation between models of differing resolution -- i.e., between detailed and coarse-grained models. Such ``resolution exchange'' can be applied to molecular systems or spin systems. The second extension is to ``pseudo-exchange'' simulations, which require little CPU usage for most levels of the exchange ladder and also substantially reduces the need for overlap between levels. Pseudo exchanges can be used in either replica or resolution exchange simulations. We perform efficient, converged simulations of a 50-atom peptide to illustrate the new approaches.
2310.16908
Arvid Ernst Gollwitzer
Maximilian-David Rumpf, Mohammed Alser, Arvid E. Gollwitzer, Joel Lindegger, Nour Almadhoun, Can Firtina, Serghei Mangul, Onur Mutlu
SequenceLab: A Comprehensive Benchmark of Computational Methods for Comparing Genomic Sequences
null
null
null
null
q-bio.GN cs.AR q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Computational complexity is a key limitation of genomic analyses. Thus, over the last 30 years, researchers have proposed numerous fast heuristic methods that provide computational relief. Comparing genomic sequences is one of the most fundamental computational steps in most genomic analyses. Due to its high computational complexity, optimized exact and heuristic algorithms are still being developed. We find that these methods are highly sensitive to the underlying data, its quality, and various hyperparameters. Despite their wide use, no in-depth analysis has been performed, potentially falsely discarding genetic sequences from further analysis and unnecessarily inflating computational costs. We provide the first analysis and benchmark of this heterogeneity. We deliver an actionable overview of the 11 most widely used state-of-the-art methods for comparing genomic sequences. We also inform readers about their advantages and downsides using thorough experimental evaluation and different real datasets from all major manufacturers (i.e., Illumina, ONT, and PacBio). SequenceLab is publicly available at https://github.com/CMU-SAFARI/SequenceLab.
[ { "created": "Wed, 25 Oct 2023 18:17:46 GMT", "version": "v1" }, { "created": "Sun, 12 Nov 2023 16:07:25 GMT", "version": "v2" }, { "created": "Sun, 7 Jan 2024 16:04:16 GMT", "version": "v3" }, { "created": "Sun, 21 Jan 2024 15:14:32 GMT", "version": "v4" } ]
2024-01-23
[ [ "Rumpf", "Maximilian-David", "" ], [ "Alser", "Mohammed", "" ], [ "Gollwitzer", "Arvid E.", "" ], [ "Lindegger", "Joel", "" ], [ "Almadhoun", "Nour", "" ], [ "Firtina", "Can", "" ], [ "Mangul", "Serghei", "" ], [ "Mutlu", "Onur", "" ] ]
Computational complexity is a key limitation of genomic analyses. Thus, over the last 30 years, researchers have proposed numerous fast heuristic methods that provide computational relief. Comparing genomic sequences is one of the most fundamental computational steps in most genomic analyses. Due to its high computational complexity, optimized exact and heuristic algorithms are still being developed. We find that these methods are highly sensitive to the underlying data, its quality, and various hyperparameters. Despite their wide use, no in-depth analysis has been performed, potentially falsely discarding genetic sequences from further analysis and unnecessarily inflating computational costs. We provide the first analysis and benchmark of this heterogeneity. We deliver an actionable overview of the 11 most widely used state-of-the-art methods for comparing genomic sequences. We also inform readers about their advantages and downsides using thorough experimental evaluation and different real datasets from all major manufacturers (i.e., Illumina, ONT, and PacBio). SequenceLab is publicly available at https://github.com/CMU-SAFARI/SequenceLab.
1809.01127
Roman Kaplan
Roman Kaplan, Leonid Yavits and Ran Ginosar
RASSA: Resistive Pre-Alignment Accelerator for Approximate DNA Long Read Mapping
null
null
null
null
q-bio.GN cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA read mapping is a computationally expensive bioinformatics task, required for genome assembly and consensus polishing. It requires to find the best-fitting location for each DNA read on a long reference sequence. A novel resistive approximate similarity search accelerator, RASSA, exploits charge distribution and parallel in-memory processing to reflect a mismatch count between DNA sequences. RASSA implementation of DNA long read pre-alignment outperforms the state-of-the-art solution, minimap2, by 16-77x with comparable accuracy and provides two orders of magnitude higher throughput than GateKeeper, a short-read pre-alignment hardware architecture implemented in FPGA.
[ { "created": "Sun, 2 Sep 2018 17:33:47 GMT", "version": "v1" }, { "created": "Sun, 7 Oct 2018 08:04:12 GMT", "version": "v2" }, { "created": "Mon, 28 Jan 2019 12:58:32 GMT", "version": "v3" } ]
2019-01-29
[ [ "Kaplan", "Roman", "" ], [ "Yavits", "Leonid", "" ], [ "Ginosar", "Ran", "" ] ]
DNA read mapping is a computationally expensive bioinformatics task, required for genome assembly and consensus polishing. It requires to find the best-fitting location for each DNA read on a long reference sequence. A novel resistive approximate similarity search accelerator, RASSA, exploits charge distribution and parallel in-memory processing to reflect a mismatch count between DNA sequences. RASSA implementation of DNA long read pre-alignment outperforms the state-of-the-art solution, minimap2, by 16-77x with comparable accuracy and provides two orders of magnitude higher throughput than GateKeeper, a short-read pre-alignment hardware architecture implemented in FPGA.
q-bio/0412009
Krzysztof Kulakowski
M. J. Krawczyk and K. Kulakowski
Off-lattice simulation of the solid phase DNA amplification
8 pages, 5 figures
Comp. Phys. Commun. 170 (2005) 131
10.1016/j.cpc.2005.03.108
null
q-bio.BM q-bio.QM
null
Recent simulations of the solid phase DNA amplification (SPA) by J.-F. Mercier et al (Biophys. J. 85 (2003) 2075) are generalized to include two kinds of primers and the off-lattice character of the primer distribution on the surface. The sigmoidal character of the primer occupation by DNA, observed experimentally, is reproduced in the simulation. We discuss the influence of two parameters on the efficiency of the amplification process: the initial density p_0 of the occupied primers from the interfacial amplification and the ratio r of the molecule length to the average distance between primers. The number of cycles till the saturation decreases with p_0 roughly as p_0^{-0.26}. For r=1.5, the number of occupied primers is reduced by a factor two, when compared to the case of longer molecules. Below r=1.4, the effectiveness of SPA is reduced by a factor 100.
[ { "created": "Sun, 5 Dec 2004 20:59:59 GMT", "version": "v1" } ]
2009-11-10
[ [ "Krawczyk", "M. J.", "" ], [ "Kulakowski", "K.", "" ] ]
Recent simulations of the solid phase DNA amplification (SPA) by J.-F. Mercier et al (Biophys. J. 85 (2003) 2075) are generalized to include two kinds of primers and the off-lattice character of the primer distribution on the surface. The sigmoidal character of the primer occupation by DNA, observed experimentally, is reproduced in the simulation. We discuss the influence of two parameters on the efficiency of the amplification process: the initial density p_0 of the occupied primers from the interfacial amplification and the ratio r of the molecule length to the average distance between primers. The number of cycles till the saturation decreases with p_0 roughly as p_0^{-0.26}. For r=1.5, the number of occupied primers is reduced by a factor two, when compared to the case of longer molecules. Below r=1.4, the effectiveness of SPA is reduced by a factor 100.
2104.07059
SueYeon Chung
SueYeon Chung, L. F. Abbott
Neural population geometry: An approach for understanding biological and artificial neural networks
8 pages
Current Opinion in Neurobiology, Volume 70, October 2021, Pages 137-144
10.1016/j.conb.2021.10.010
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in experimental neuroscience have transformed our ability to explore the structure and function of neural circuits. At the same time, advances in machine learning have unleashed the remarkable computational power of artificial neural networks (ANNs). While these two fields have different tools and applications, they present a similar challenge: namely, understanding how information is embedded and processed through high-dimensional representations to solve complex tasks. One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry. We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks: representation untangling in perception, a geometric theory of classification capacity, disentanglement and abstraction in cognitive systems, topological representations underlying cognitive maps, dynamic untangling in motor systems, and a dynamical approach to cognition. Together, these findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor underlying task implementation. Importantly, geometric descriptions are applicable across sensory modalities, brain regions, network architectures and timescales. Thus, neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks, bridging the gap between single neurons, populations and behavior.
[ { "created": "Wed, 14 Apr 2021 18:10:34 GMT", "version": "v1" }, { "created": "Sat, 17 Apr 2021 03:30:26 GMT", "version": "v2" }, { "created": "Sat, 20 Nov 2021 02:42:15 GMT", "version": "v3" } ]
2021-11-23
[ [ "Chung", "SueYeon", "" ], [ "Abbott", "L. F.", "" ] ]
Advances in experimental neuroscience have transformed our ability to explore the structure and function of neural circuits. At the same time, advances in machine learning have unleashed the remarkable computational power of artificial neural networks (ANNs). While these two fields have different tools and applications, they present a similar challenge: namely, understanding how information is embedded and processed through high-dimensional representations to solve complex tasks. One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry. We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks: representation untangling in perception, a geometric theory of classification capacity, disentanglement and abstraction in cognitive systems, topological representations underlying cognitive maps, dynamic untangling in motor systems, and a dynamical approach to cognition. Together, these findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor underlying task implementation. Importantly, geometric descriptions are applicable across sensory modalities, brain regions, network architectures and timescales. Thus, neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks, bridging the gap between single neurons, populations and behavior.
2106.08150
Robin Kobus
Robin Kobus (1), Andr\'e M\"uller (1), Daniel J\"unger (1), Christian Hundt (2) and Bertil Schmidt (1) ((1) Johannes Gutenberg University Mainz, Germany, (2) NVIDIA AI Technology Center Luxembourg)
MetaCache-GPU: Ultra-Fast Metagenomic Classification
11 pages. To be published in ICPP 2021
null
null
null
q-bio.GN cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cost of DNA sequencing has dropped exponentially over the past decade, making genomic data accessible to a growing number of scientists. In bioinformatics, localization of short DNA sequences (reads) within large genomic sequences is commonly facilitated by constructing index data structures which allow for efficient querying of substrings. Recent metagenomic classification pipelines annotate reads with taxonomic labels by analyzing their $k$-mer histograms with respect to a reference genome database. CPU-based index construction is often performed in a preprocessing phase due to the relatively high cost of building irregular data structures such as hash maps. However, the rapidly growing amount of available reference genomes establishes the need for index construction and querying at interactive speeds. In this paper, we introduce MetaCache-GPU -- an ultra-fast metagenomic short read classifier specifically tailored to fit the characteristics of CUDA-enabled accelerators. Our approach employs a novel hash table variant featuring efficient minhash fingerprinting of reads for locality-sensitive hashing and their rapid insertion using warp-aggregated operations. Our performance evaluation shows that MetaCache-GPU is able to build large reference databases in a matter of seconds, enabling instantaneous operability, while popular CPU-based tools such as Kraken2 require over an hour for index construction on the same data. In the context of an ever-growing number of reference genomes, MetaCache-GPU is the first metagenomic classifier that makes analysis pipelines with on-demand composition of large-scale reference genome sets practical. The source code is publicly available at https://github.com/muellan/metacache .
[ { "created": "Mon, 14 Jun 2021 14:31:07 GMT", "version": "v1" } ]
2021-06-16
[ [ "Kobus", "Robin", "" ], [ "Müller", "André", "" ], [ "Jünger", "Daniel", "" ], [ "Hundt", "Christian", "" ], [ "Schmidt", "Bertil", "" ] ]
The cost of DNA sequencing has dropped exponentially over the past decade, making genomic data accessible to a growing number of scientists. In bioinformatics, localization of short DNA sequences (reads) within large genomic sequences is commonly facilitated by constructing index data structures which allow for efficient querying of substrings. Recent metagenomic classification pipelines annotate reads with taxonomic labels by analyzing their $k$-mer histograms with respect to a reference genome database. CPU-based index construction is often performed in a preprocessing phase due to the relatively high cost of building irregular data structures such as hash maps. However, the rapidly growing amount of available reference genomes establishes the need for index construction and querying at interactive speeds. In this paper, we introduce MetaCache-GPU -- an ultra-fast metagenomic short read classifier specifically tailored to fit the characteristics of CUDA-enabled accelerators. Our approach employs a novel hash table variant featuring efficient minhash fingerprinting of reads for locality-sensitive hashing and their rapid insertion using warp-aggregated operations. Our performance evaluation shows that MetaCache-GPU is able to build large reference databases in a matter of seconds, enabling instantaneous operability, while popular CPU-based tools such as Kraken2 require over an hour for index construction on the same data. In the context of an ever-growing number of reference genomes, MetaCache-GPU is the first metagenomic classifier that makes analysis pipelines with on-demand composition of large-scale reference genome sets practical. The source code is publicly available at https://github.com/muellan/metacache .
2008.11546
Yu Li
Yu Li
Towards Structured Prediction in Bioinformatics with Deep Learning
PhD dissertatation
null
null
null
q-bio.QM cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using machine learning, especially deep learning, to facilitate biological research is a fascinating research direction. However, in addition to the standard classification or regression problems, in bioinformatics, we often need to predict more complex structured targets, such as 2D images and 3D molecular structures. The above complex prediction tasks are referred to as structured prediction. Structured prediction is more complicated than the traditional classification but has much broader applications, considering that most of the original bioinformatics problems have complex output objects. Due to the properties of those structured prediction problems, such as having problem-specific constraints and dependency within the labeling space, the straightforward application of existing deep learning models can lead to unsatisfactory results. Here, we argue that the following ideas can help resolve structured prediction problems in bioinformatics. Firstly, we can combine deep learning with other classic algorithms, such as probabilistic graphical models, which model the problem structure explicitly. Secondly, we can design the problem-specific deep learning architectures or methods by considering the structured labeling space and problem constraints, either explicitly or implicitly. We demonstrate our ideas with six projects from four bioinformatics subfields, including sequencing analysis, structure prediction, function annotation, and network analysis. The structured outputs cover 1D signals, 2D images, 3D structures, hierarchical labeling, and heterogeneous networks. With the help of the above ideas, all of our methods can achieve SOTA performance on the corresponding problems. The success of these projects motivates us to extend our work towards other more challenging but important problems, such as health-care problems, which can directly benefit people's health and wellness.
[ { "created": "Tue, 25 Aug 2020 02:52:18 GMT", "version": "v1" } ]
2020-08-31
[ [ "Li", "Yu", "" ] ]
Using machine learning, especially deep learning, to facilitate biological research is a fascinating research direction. However, in addition to the standard classification or regression problems, in bioinformatics, we often need to predict more complex structured targets, such as 2D images and 3D molecular structures. The above complex prediction tasks are referred to as structured prediction. Structured prediction is more complicated than the traditional classification but has much broader applications, considering that most of the original bioinformatics problems have complex output objects. Due to the properties of those structured prediction problems, such as having problem-specific constraints and dependency within the labeling space, the straightforward application of existing deep learning models can lead to unsatisfactory results. Here, we argue that the following ideas can help resolve structured prediction problems in bioinformatics. Firstly, we can combine deep learning with other classic algorithms, such as probabilistic graphical models, which model the problem structure explicitly. Secondly, we can design the problem-specific deep learning architectures or methods by considering the structured labeling space and problem constraints, either explicitly or implicitly. We demonstrate our ideas with six projects from four bioinformatics subfields, including sequencing analysis, structure prediction, function annotation, and network analysis. The structured outputs cover 1D signals, 2D images, 3D structures, hierarchical labeling, and heterogeneous networks. With the help of the above ideas, all of our methods can achieve SOTA performance on the corresponding problems. The success of these projects motivates us to extend our work towards other more challenging but important problems, such as health-care problems, which can directly benefit people's health and wellness.
1706.06481
Hyun Youk
Eduardo P. Olimpio, Yiteng Dang, Hyun Youk
Statistical dynamics of spatial-order formation by communicating cells
null
iScience 2, 27-40 (2018)
10.1016/j.isci.2018.03.013
null
q-bio.QM cond-mat.stat-mech nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Communicating cells can coordinate their gene expressions to form spatial patterns. 'Secrete-and-sense cells' secrete and sense the same molecule to do so and are ubiquitous. Here we address why and how these cells, from disordered beginnings, can form spatial order through a statistical mechanics-type framework for cellular communication. Classifying cellular lattices by 'macrostate' variables - 'spatial order parameter' and average gene-expression level - reveals a conceptual picture: cellular lattices act as particles rolling down on 'pseudo-energy landscapes' shaped by a 'Hamiltonian' for cellular communication. Particles rolling down represent cells' spatial order increasing. Particles trapped on the landscapes represent metastable spatial configurations. The gradient of the Hamiltonian and a 'trapping probability' determine the particle's equation of motion. This framework is extendable to more complex forms of cellular communication.
[ { "created": "Fri, 16 Jun 2017 15:19:49 GMT", "version": "v1" }, { "created": "Sun, 23 Jul 2017 21:00:05 GMT", "version": "v2" }, { "created": "Wed, 2 Aug 2017 00:12:41 GMT", "version": "v3" }, { "created": "Wed, 1 Nov 2017 16:47:53 GMT", "version": "v4" } ]
2018-06-05
[ [ "Olimpio", "Eduardo P.", "" ], [ "Dang", "Yiteng", "" ], [ "Youk", "Hyun", "" ] ]
Communicating cells can coordinate their gene expressions to form spatial patterns. 'Secrete-and-sense cells' secrete and sense the same molecule to do so and are ubiquitous. Here we address why and how these cells, from disordered beginnings, can form spatial order through a statistical mechanics-type framework for cellular communication. Classifying cellular lattices by 'macrostate' variables - 'spatial order parameter' and average gene-expression level - reveals a conceptual picture: cellular lattices act as particles rolling down on 'pseudo-energy landscapes' shaped by a 'Hamiltonian' for cellular communication. Particles rolling down represent cells' spatial order increasing. Particles trapped on the landscapes represent metastable spatial configurations. The gradient of the Hamiltonian and a 'trapping probability' determine the particle's equation of motion. This framework is extendable to more complex forms of cellular communication.
2106.15244
Malcolm Hillebrand
M Hillebrand, G Kalosakas, A R Bishop, Ch Skokos
Bubble lifetimes in DNA gene promoters and their mutations affecting transcription
6 pages, 4 figures
null
10.1063/5.0060335
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relative lifetimes of inherent double stranded DNA openings with lengths up to ten base pairs are presented for different gene promoters and corresponding mutants that either increase or decrease transcriptional activity, in the framework of the Peyrard-Bishop-Dauxois model. Extensive microcanonical simulations are used, with energies corresponding to physiological temperature. The bubble lifetime profiles along the DNA sequences demonstrate a significant reduction of the average lifetime at the mutation sites when the mutated promoter decreases transcription, while a corresponding enhancement of the bubble lifetime is observed in the case of mutations leading to increased transcription. The relative difference of bubble lifetimes between the mutated and the wild type promoters at the position of mutation varies from 20% to more than 30% as the bubble length is decreasing.
[ { "created": "Tue, 29 Jun 2021 10:56:05 GMT", "version": "v1" } ]
2021-09-15
[ [ "Hillebrand", "M", "" ], [ "Kalosakas", "G", "" ], [ "Bishop", "A R", "" ], [ "Skokos", "Ch", "" ] ]
Relative lifetimes of inherent double stranded DNA openings with lengths up to ten base pairs are presented for different gene promoters and corresponding mutants that either increase or decrease transcriptional activity, in the framework of the Peyrard-Bishop-Dauxois model. Extensive microcanonical simulations are used, with energies corresponding to physiological temperature. The bubble lifetime profiles along the DNA sequences demonstrate a significant reduction of the average lifetime at the mutation sites when the mutated promoter decreases transcription, while a corresponding enhancement of the bubble lifetime is observed in the case of mutations leading to increased transcription. The relative difference of bubble lifetimes between the mutated and the wild type promoters at the position of mutation varies from 20% to more than 30% as the bubble length is decreasing.
2208.13675
Thomas Schmidt
Melanie Biafora, Thomas Schmidt
Juggling too many balls at once: Qualitatively different effects when measuring priming and masking in single, dual, and triple tasks
v1: initial upload. v2: adds arxiv reference. Manuscript is under review, still subject to changes
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Dissociation paradigms examine dissociations between indirect measures of prime processing and direct measures of prime awareness. It is debated whether direct measures should be objective or subjective, and whether these measures should be obtained on the same or separate trials. In two metacontrast experiments, we measured prime discrimination, PAS ratings, and response priming either separately or in multiple tasks. Single tasks show the fastest responses in priming and therefore most likely meet the assumption of feedforward processing as assumed under Rapid-Chase Theory. Similarly, dual tasks allow for a fast response activation by the prime; nevertheless, prolonged responses and slower errors occur more often. In contrast, triple tasks have a negative effect on response activation: responses are massively slowed and fast prime-locked errors are lost. Moreover, decreasing priming effects and prime identification performance result in a loss of a double dissociation. Here, a necessary condition for unconscious response priming, feedforward processing, is violated.
[ { "created": "Mon, 29 Aug 2022 15:24:48 GMT", "version": "v1" }, { "created": "Tue, 30 Aug 2022 12:22:04 GMT", "version": "v2" } ]
2022-08-31
[ [ "Biafora", "Melanie", "" ], [ "Schmidt", "Thomas", "" ] ]
Dissociation paradigms examine dissociations between indirect measures of prime processing and direct measures of prime awareness. It is debated whether direct measures should be objective or subjective, and whether these measures should be obtained on the same or separate trials. In two metacontrast experiments, we measured prime discrimination, PAS ratings, and response priming either separately or in multiple tasks. Single tasks show the fastest responses in priming and therefore most likely meet the assumption of feedforward processing as assumed under Rapid-Chase Theory. Similarly, dual tasks allow for a fast response activation by the prime; nevertheless, prolonged responses and slower errors occur more often. In contrast, triple tasks have a negative effect on response activation: responses are massively slowed and fast prime-locked errors are lost. Moreover, decreasing priming effects and prime identification performance result in a loss of a double dissociation. Here, a necessary condition for unconscious response priming, feedforward processing, is violated.
1311.2554
Christoph Adami
B. Patra, Y. Kon, G. Yadav, A.W. Sevold, J. P. Frumkin, R. R. Vallabhajosyula, A. Hintze, B. {\O}stman, J. Schossau, A. Bhan, B. Marzolf, J. K. Tamashiro, A. Kaur, N. S. Baliga, E. J. Grayhack, C. Adami, D. J. Galas, A. Raval, E. M. Phizicky, and A. Ray
A genome wide dosage suppressor network reveals genetic robustness and a novel mechanism for Huntington's disease
42 pages, 2 tables, 6 Figures. Supplementary Tables S1-S12 and Supplementary Figures S1-S8 at http://dx.doi.org/10.6084/m9.figshare.844761
Nucleic Acids Research 45 (2017) 255-270
10.1093/nar/gkw1148
null
q-bio.MN q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutational robustness is the extent to which an organism has evolved to withstand the effects of deleterious mutations. We explored the extent of mutational robustness in the budding yeast by genome wide dosage suppressor analysis of 53 conditional lethal mutations in cell division cycle and RNA synthesis related genes, revealing 660 suppressor interactions of which 642 are novel. This collection has several distinctive features, including high co-occurrence of mutant-suppressor pairs within protein modules, highly correlated functions between the pairs, and higher diversity of functions among the co-suppressors than previously observed. Dosage suppression of essential genes encoding RNA polymerase subunits and chromosome cohesion complex suggests a surprising degree of functional plasticity of macromolecular complexes and the existence of degenerate pathways for circumventing potentially lethal mutations. The utility of dosage-suppressor networks is illustrated by the discovery of a novel connection between chromosome cohesion-condensation pathways involving homologous recombination, and Huntington's disease.
[ { "created": "Mon, 11 Nov 2013 20:00:14 GMT", "version": "v1" } ]
2020-02-04
[ [ "Patra", "B.", "" ], [ "Kon", "Y.", "" ], [ "Yadav", "G.", "" ], [ "Sevold", "A. W.", "" ], [ "Frumkin", "J. P.", "" ], [ "Vallabhajosyula", "R. R.", "" ], [ "Hintze", "A.", "" ], [ "Østman", "B.", "" ], [ "Schossau", "J.", "" ], [ "Bhan", "A.", "" ], [ "Marzolf", "B.", "" ], [ "Tamashiro", "J. K.", "" ], [ "Kaur", "A.", "" ], [ "Baliga", "N. S.", "" ], [ "Grayhack", "E. J.", "" ], [ "Adami", "C.", "" ], [ "Galas", "D. J.", "" ], [ "Raval", "A.", "" ], [ "Phizicky", "E. M.", "" ], [ "Ray", "A.", "" ] ]
Mutational robustness is the extent to which an organism has evolved to withstand the effects of deleterious mutations. We explored the extent of mutational robustness in the budding yeast by genome wide dosage suppressor analysis of 53 conditional lethal mutations in cell division cycle and RNA synthesis related genes, revealing 660 suppressor interactions of which 642 are novel. This collection has several distinctive features, including high co-occurrence of mutant-suppressor pairs within protein modules, highly correlated functions between the pairs, and higher diversity of functions among the co-suppressors than previously observed. Dosage suppression of essential genes encoding RNA polymerase subunits and chromosome cohesion complex suggests a surprising degree of functional plasticity of macromolecular complexes and the existence of degenerate pathways for circumventing potentially lethal mutations. The utility of dosage-suppressor networks is illustrated by the discovery of a novel connection between chromosome cohesion-condensation pathways involving homologous recombination, and Huntington's disease.
2005.04937
Christopher Overton
Christopher E. Overton, Helena B. Stage, Shazaad Ahmad, Jacob Curran-Sebastian, Paul Dark, Rajenki Das, Elizabeth Fearon, Timothy Felton, Martyn Fyles, Nick Gent, Ian Hall, Thomas House, Hugo Lewkowicz, Xiaoxi Pang, Lorenzo Pellis, Robert Sawko, Andrew Ustianowski, Bindu Vekaria, Luke Webb
Using statistics and mathematical modelling to understand infectious disease outbreaks: COVID-19 as an example
null
Infectious Disease Modelling, Volume 5 (2020), 409-441
10.1016/j.idm.2020.06.008
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
During an infectious disease outbreak, biases in the data and complexities of the underlying dynamics pose significant challenges in mathematically modelling the outbreak and designing policy. Motivated by the ongoing response to COVID-19, we provide a toolkit of statistical and mathematical models beyond the simple SIR-type differential equation models for analysing the early stages of an outbreak and assessing interventions. In particular, we focus on parameter estimation in the presence of known biases in the data, and the effect of non-pharmaceutical interventions in enclosed subpopulations, such as households and care homes. We illustrate these methods by applying them to the COVID-19 pandemic.
[ { "created": "Mon, 11 May 2020 09:06:43 GMT", "version": "v1" } ]
2020-09-22
[ [ "Overton", "Christopher E.", "" ], [ "Stage", "Helena B.", "" ], [ "Ahmad", "Shazaad", "" ], [ "Curran-Sebastian", "Jacob", "" ], [ "Dark", "Paul", "" ], [ "Das", "Rajenki", "" ], [ "Fearon", "Elizabeth", "" ], [ "Felton", "Timothy", "" ], [ "Fyles", "Martyn", "" ], [ "Gent", "Nick", "" ], [ "Hall", "Ian", "" ], [ "House", "Thomas", "" ], [ "Lewkowicz", "Hugo", "" ], [ "Pang", "Xiaoxi", "" ], [ "Pellis", "Lorenzo", "" ], [ "Sawko", "Robert", "" ], [ "Ustianowski", "Andrew", "" ], [ "Vekaria", "Bindu", "" ], [ "Webb", "Luke", "" ] ]
During an infectious disease outbreak, biases in the data and complexities of the underlying dynamics pose significant challenges in mathematically modelling the outbreak and designing policy. Motivated by the ongoing response to COVID-19, we provide a toolkit of statistical and mathematical models beyond the simple SIR-type differential equation models for analysing the early stages of an outbreak and assessing interventions. In particular, we focus on parameter estimation in the presence of known biases in the data, and the effect of non-pharmaceutical interventions in enclosed subpopulations, such as households and care homes. We illustrate these methods by applying them to the COVID-19 pandemic.
1711.04950
Jeroen Van Boxtel
Jeroen J.A. van Boxtel
Modelling stochastic resonance in humans: the influence of lapse rate
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adding noise to a sensory signal generally decreases human performance. However, noise can improve performance too, due to a process called stochastic resonance (SR). This paradoxical effect may be exploited in psychophysical experiments, to provide additional insights into how the sensory system deals with noise. Here, I develop a model for stochastic resonance to study the influence of noise on human perception, in which the biological parameter of `lapse rate' was included. I show that the inclusion of lapse rate allows for the occurrence of stochastic resonance in terms of the performance metric d'. At the same time, I show that high levels of lapse rate cause stochastic resonance to disappear. It is also shown that noise generated in the brain (i.e., internal noise) may obscure any effect of stochastic resonance in experimental settings. I further relate the model to a standard equivalent noise model, the linear amplifier model, and show that the lapse rate can function to scale the threshold versus noise (TvN) curve, similar to the efficiency parameter in equivalent noise (EN) models. Therefore, lapse rate provides a psychophysical explanation for reduced efficiency in EN paradigms. Furthermore, I note that ignoring lapse rate may lead to an overestimation of internal noise in equivalent noise paradigms. Overall, describing stochastic resonance in terms of signal detection theory, with the inclusion of lapse rate, may provide valuable new insights into how human performance depends on internal and external noise.
[ { "created": "Tue, 14 Nov 2017 04:50:23 GMT", "version": "v1" } ]
2017-11-15
[ [ "van Boxtel", "Jeroen J. A.", "" ] ]
Adding noise to a sensory signal generally decreases human performance. However, noise can improve performance too, due to a process called stochastic resonance (SR). This paradoxical effect may be exploited in psychophysical experiments, to provide additional insights into how the sensory system deals with noise. Here, I develop a model for stochastic resonance to study the influence of noise on human perception, in which the biological parameter of `lapse rate' was included. I show that the inclusion of lapse rate allows for the occurrence of stochastic resonance in terms of the performance metric d'. At the same time, I show that high levels of lapse rate cause stochastic resonance to disappear. It is also shown that noise generated in the brain (i.e., internal noise) may obscure any effect of stochastic resonance in experimental settings. I further relate the model to a standard equivalent noise model, the linear amplifier model, and show that the lapse rate can function to scale the threshold versus noise (TvN) curve, similar to the efficiency parameter in equivalent noise (EN) models. Therefore, lapse rate provides a psychophysical explanation for reduced efficiency in EN paradigms. Furthermore, I note that ignoring lapse rate may lead to an overestimation of internal noise in equivalent noise paradigms. Overall, describing stochastic resonance in terms of signal detection theory, with the inclusion of lapse rate, may provide valuable new insights into how human performance depends on internal and external noise.
2005.02261
D K K Vamsi
Bishal Chhetri, D. K. K. Vamsi, Vijay M. Bhagat, Ananth V. S., Bhanu Prakash, Roshan Mandale, Swapna Muthusamy, Carani B Sanjeevi
Crucial Inflammatory Mediators and Efficacy of Drug Interventions in Pneumonia Inflated COVID-19: An Invivo Mathematical Modelling Study
50 pages, 37 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19, the disease caused by the virus SARS-CoV-2, has been declared a pandemic by the WHO. Currently, over 210 countries and territories have been affected. Careful, well-designed drugs and vaccines for the total elimination of this virus seem to be the need of the hour. In this context, in vivo mathematical modelling studies can be extremely helpful in understanding the efficacy of drug interventions. These studies can also help understand the role of the crucial inflammatory mediators and the behaviour of the immune response towards this novel coronavirus. Motivated by these facts, in this paper, we study the in vivo dynamics of COVID-19. The results obtained here are in line with some of the clinical findings for COVID-19. This in vivo modelling study involving the crucial biomarkers of COVID-19 is the first of its kind, and the results obtained from it can be helpful to researchers, epidemiologists, clinicians and doctors who are working in this field.
[ { "created": "Sun, 3 May 2020 19:17:29 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2020 15:18:05 GMT", "version": "v2" } ]
2020-10-07
[ [ "Chhetri", "Bishal", "" ], [ "Vamsi", "D. K. K.", "" ], [ "Bhagat", "Vijay M.", "" ], [ "S.", "Ananth V.", "" ], [ "Prakash", "Bhanu", "" ], [ "Mandale", "Roshan", "" ], [ "Muthusamy", "Swapna", "" ], [ "Sanjeevi", "Carani B", "" ] ]
COVID-19, the disease caused by the virus SARS-CoV-2, has been declared a pandemic by the WHO. Currently, over 210 countries and territories have been affected. Careful, well-designed drugs and vaccines for the total elimination of this virus seem to be the need of the hour. In this context, in vivo mathematical modelling studies can be extremely helpful in understanding the efficacy of drug interventions. These studies can also help understand the role of the crucial inflammatory mediators and the behaviour of the immune response towards this novel coronavirus. Motivated by these facts, in this paper, we study the in vivo dynamics of COVID-19. The results obtained here are in line with some of the clinical findings for COVID-19. This in vivo modelling study involving the crucial biomarkers of COVID-19 is the first of its kind, and the results obtained from it can be helpful to researchers, epidemiologists, clinicians and doctors who are working in this field.
2007.11957
Kok Yew Ng Dr
Ton Duc Do, Meei Mei Gui and Kok Yew Ng
Assessing the effects of time-dependent restrictions and control actions to flatten the curve of COVID-19 in Kazakhstan
35 pages, 7 figures, To appear in PeerJ
PeerJ 2021
10.7717/peerj.10806
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the assessment of time-dependent national-level restrictions and control actions and their effects in fighting the COVID-19 pandemic. By analysing the transmission dynamics during the first wave of COVID-19 in the country, the effectiveness of the various levels of control actions taken to flatten the curve can be better quantified and understood. This in turn can help the relevant authorities to better plan for and control the subsequent waves of the pandemic. To achieve this, a deterministic population model for the pandemic is first developed to take into consideration the time-dependent characteristics of the model parameters, especially the ever-evolving value of the reproduction number, which is one of the critical measures used to describe the transmission dynamics of this pandemic. The reproduction number alongside other key parameters of the model can then be estimated by fitting the model to real-world data using numerical optimisation techniques or by inducing ad-hoc control actions as recorded in the news platforms. In this paper, the model is verified using a case study based on the data from the first wave of COVID-19 in the Republic of Kazakhstan. The model is fitted to provide estimates for two settings in simulations: time-invariant and time-varying (with bounded constraints) parameters. Finally, some forecasts are made using four scenarios with time-dependent control measures so as to determine which would reflect the actual situations better.
[ { "created": "Tue, 21 Jul 2020 10:45:03 GMT", "version": "v1" }, { "created": "Mon, 24 Aug 2020 11:15:15 GMT", "version": "v2" }, { "created": "Tue, 12 Jan 2021 10:40:20 GMT", "version": "v3" } ]
2021-02-04
[ [ "Do", "Ton Duc", "" ], [ "Gui", "Meei Mei", "" ], [ "Ng", "Kok Yew", "" ] ]
This paper presents the assessment of time-dependent national-level restrictions and control actions and their effects in fighting the COVID-19 pandemic. By analysing the transmission dynamics during the first wave of COVID-19 in the country, the effectiveness of the various levels of control actions taken to flatten the curve can be better quantified and understood. This in turn can help the relevant authorities to better plan for and control the subsequent waves of the pandemic. To achieve this, a deterministic population model for the pandemic is first developed to take into consideration the time-dependent characteristics of the model parameters, especially the ever-evolving value of the reproduction number, which is one of the critical measures used to describe the transmission dynamics of this pandemic. The reproduction number alongside other key parameters of the model can then be estimated by fitting the model to real-world data using numerical optimisation techniques or by inducing ad-hoc control actions as recorded in the news platforms. In this paper, the model is verified using a case study based on the data from the first wave of COVID-19 in the Republic of Kazakhstan. The model is fitted to provide estimates for two settings in simulations: time-invariant and time-varying (with bounded constraints) parameters. Finally, some forecasts are made using four scenarios with time-dependent control measures so as to determine which would reflect the actual situations better.
2204.01700
Xin Gao
Xin Gao, Jianwei Li, Dianjie Li
Modeling COVID-19 vaccine-induced immunological memory development and its links to antibody level and infectiousness
23 pages, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19 vaccines have proven to be effective against SARS-CoV-2 infection. However, the dynamics of vaccine-induced immunological memory development and neutralizing antibody generation are not fully understood, limiting vaccine development and vaccination regimen determination. Herein, we constructed a mathematical model to characterize the vaccine-induced immune response based on fitting viral infection and vaccination datasets. With the example of CoronaVac, we revealed the association between vaccine-induced immunological memory development and neutralizing antibody levels. The establishment of intact immunological memory requires more than 6 months after the first and second doses, after which a booster shot can induce high levels of neutralizing antibodies. By introducing the maximum viral load and recovery time after viral infection, we quantitatively studied the protective effect of vaccines against viral infection. Accordingly, we optimized the vaccination regimen, including dose and vaccination timing, and predicted the effect of the fourth dose. Last, by combining the viral transmission model, we showed the suppression of virus transmission by vaccination, which may be instructive for the development of public health policies.
[ { "created": "Tue, 5 Apr 2022 09:53:38 GMT", "version": "v1" } ]
2022-04-06
[ [ "Gao", "Xin", "" ], [ "Li", "Jianwei", "" ], [ "Li", "Dianjie", "" ] ]
COVID-19 vaccines have proven to be effective against SARS-CoV-2 infection. However, the dynamics of vaccine-induced immunological memory development and neutralizing antibody generation are not fully understood, limiting vaccine development and vaccination regimen determination. Herein, we constructed a mathematical model to characterize the vaccine-induced immune response based on fitting viral infection and vaccination datasets. With the example of CoronaVac, we revealed the association between vaccine-induced immunological memory development and neutralizing antibody levels. The establishment of intact immunological memory requires more than 6 months after the first and second doses, after which a booster shot can induce high levels of neutralizing antibodies. By introducing the maximum viral load and recovery time after viral infection, we quantitatively studied the protective effect of vaccines against viral infection. Accordingly, we optimized the vaccination regimen, including dose and vaccination timing, and predicted the effect of the fourth dose. Last, by combining the viral transmission model, we showed the suppression of virus transmission by vaccination, which may be instructive for the development of public health policies.
1105.1483
Gyorgy Korniss
Lauren O'Malley, G. Korniss, Sai Satya Praveen Mungara, and Thomas Caraco
Spatial competition and the dynamics of rarity in a temporally varying environment
The original article is available at www.evolutionary-ecology.com/issues/v12/n03/ccar2546.pdf
Evolutionary Ecology Research 12: 279-305 (2010)
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an endogenous timescale set by invasion in a constant environment, we introduced periodic temporal variation in competitive superiority by alternating the species' propagation rates. By manipulating habitat size and introduction rate, we simulated environments where successful invasion proceeds through growth of many spatial clusters, and where invasion can occur only as a single-cluster process. In the multi-cluster invasion regime, rapid environmental variation produced spatial mixing of the species and non-equilibrium coexistence. The dynamics' dominant response effectively averaged environmental fluctuation, so that each species could avoid competitive exclusion. Increasing the environment's half-period to match the population-dynamic timescale let the (initially) more abundant resident repeatedly repel the invader. Periodic transition in propagation-rate advantage rarely interrupted the exclusion process when the more abundant species had competitive advantage. However, at infrequent and randomly occurring times, the rare species could invade and reverse the density pattern by rapidly eroding the resident's preemption of space. In the single-cluster invasion regime, environmental variation occurring faster than the population-dynamic timescale prohibited successful invasion; the first species to reach its stationary density (calculated for a constant environment) continued to repel the other during long simulations. When the endogenous and exogenous timescales matched, the species randomly reversed roles of resident and invader; the waiting times for reversal of abundances indicate stochastic resonance. For both invasion regimes, environmental fluctuation occurring much slower than the endogenous dynamics produced symmetric limit cycles, alternations of the constant-environment pattern.
[ { "created": "Sun, 8 May 2011 00:26:21 GMT", "version": "v1" } ]
2011-05-10
[ [ "O'Malley", "Lauren", "" ], [ "Korniss", "G.", "" ], [ "Mungara", "Sai Satya Praveen", "" ], [ "Caraco", "Thomas", "" ] ]
Given an endogenous timescale set by invasion in a constant environment, we introduced periodic temporal variation in competitive superiority by alternating the species' propagation rates. By manipulating habitat size and introduction rate, we simulated environments where successful invasion proceeds through growth of many spatial clusters, and where invasion can occur only as a single-cluster process. In the multi-cluster invasion regime, rapid environmental variation produced spatial mixing of the species and non-equilibrium coexistence. The dynamics' dominant response effectively averaged environmental fluctuation, so that each species could avoid competitive exclusion. Increasing the environment's half-period to match the population-dynamic timescale let the (initially) more abundant resident repeatedly repel the invader. Periodic transition in propagation-rate advantage rarely interrupted the exclusion process when the more abundant species had competitive advantage. However, at infrequent and randomly occurring times, the rare species could invade and reverse the density pattern by rapidly eroding the resident's preemption of space. In the single-cluster invasion regime, environmental variation occurring faster than the population-dynamic timescale prohibited successful invasion; the first species to reach its stationary density (calculated for a constant environment) continued to repel the other during long simulations. When the endogenous and exogenous timescales matched, the species randomly reversed roles of resident and invader; the waiting times for reversal of abundances indicate stochastic resonance. For both invasion regimes, environmental fluctuation occurring much slower than the endogenous dynamics produced symmetric limit cycles, alternations of the constant-environment pattern.
1511.08260
Nancy (Xin Ru) Wang
Nancy X. R. Wang, Jared D. Olson, Jeffrey G. Ojemann, Rajesh P.N. Rao, Bingni W. Brunton
Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations
null
Frontiers in human neuroscience 2016
10.3389/fnhum.2016.00165
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Most ongoing efforts have focused on training decoders on specific, stereotyped tasks in laboratory settings. Implementing brain-computer interfaces (BCIs) in natural settings requires adaptive strategies and scalable algorithms that require minimal supervision. Here we propose an unsupervised approach to decoding neural states from human brain recordings acquired in a naturalistic context. We demonstrate our approach on continuous long-term electrocorticographic (ECoG) data recorded over many days from the brain surface of subjects in a hospital room, with simultaneous audio and video recordings. We first discovered clusters in high-dimensional ECoG recordings and then annotated coherent clusters using speech and movement labels extracted automatically from audio and video recordings. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Our results show that our unsupervised approach can discover distinct behaviors from ECoG data, including moving, speaking and resting. We verify the accuracy of our approach by comparing to manual annotations. Projecting the discovered cluster centers back onto the brain, this technique opens the door to automated functional brain mapping in natural settings.
[ { "created": "Thu, 26 Nov 2015 01:02:03 GMT", "version": "v1" }, { "created": "Tue, 8 Dec 2015 06:44:07 GMT", "version": "v2" } ]
2018-01-22
[ [ "Wang", "Nancy X. R.", "" ], [ "Olson", "Jared D.", "" ], [ "Ojemann", "Jeffrey G.", "" ], [ "Rao", "Rajesh P. N.", "" ], [ "Brunton", "Bingni W.", "" ] ]
Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Most ongoing efforts have focused on training decoders on specific, stereotyped tasks in laboratory settings. Implementing brain-computer interfaces (BCIs) in natural settings requires adaptive strategies and scalable algorithms that require minimal supervision. Here we propose an unsupervised approach to decoding neural states from human brain recordings acquired in a naturalistic context. We demonstrate our approach on continuous long-term electrocorticographic (ECoG) data recorded over many days from the brain surface of subjects in a hospital room, with simultaneous audio and video recordings. We first discovered clusters in high-dimensional ECoG recordings and then annotated coherent clusters using speech and movement labels extracted automatically from audio and video recordings. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Our results show that our unsupervised approach can discover distinct behaviors from ECoG data, including moving, speaking and resting. We verify the accuracy of our approach by comparing to manual annotations. Projecting the discovered cluster centers back onto the brain, this technique opens the door to automated functional brain mapping in natural settings.
2104.05923
Farshad Shirani
Farshad Shirani and Judith R. Miller
Competition, Trait Variance Dynamics, and the Evolution of a Species' Range
null
Bulletin of Mathematical Biology, vol. 84, no. 3, 2022
10.1007/s11538-022-00990-z
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geographic ranges of communities of species evolve in response to environmental, ecological, and evolutionary forces. Understanding the effects of these forces on species' range dynamics is a major goal of spatial ecology. Previous mathematical models have jointly captured the dynamic changes in species' population distributions and the selective evolution of fitness-related phenotypic traits in the presence of an environmental gradient. These models inevitably include some unrealistic assumptions, and biologically reasonable ranges of values for their parameters are not easy to specify. As a result, simulations of the seminal models of this type can lead to markedly different conclusions about the behavior of such populations, including the possibility of maladaptation setting stable range boundaries. Here, we harmonize such results by developing and simulating a continuum model of range evolution in a community of species that interact competitively while diffusing over an environmental gradient. Our model extends existing models by incorporating both competition and freely changing intraspecific trait variance. Simulations of this model predict a spatial profile of species' trait variance that is consistent with experimental measurements available in the literature. Moreover, they reaffirm interspecific competition as an effective factor in limiting species' ranges, even when trait variance is not artificially constrained. These theoretical results can inform the design of, as yet rare, empirical studies to clarify the evolutionary causes of range stabilization.
[ { "created": "Tue, 13 Apr 2021 03:54:00 GMT", "version": "v1" }, { "created": "Thu, 25 Nov 2021 03:12:37 GMT", "version": "v2" } ]
2022-02-02
[ [ "Shirani", "Farshad", "" ], [ "Miller", "Judith R.", "" ] ]
Geographic ranges of communities of species evolve in response to environmental, ecological, and evolutionary forces. Understanding the effects of these forces on species' range dynamics is a major goal of spatial ecology. Previous mathematical models have jointly captured the dynamic changes in species' population distributions and the selective evolution of fitness-related phenotypic traits in the presence of an environmental gradient. These models inevitably include some unrealistic assumptions, and biologically reasonable ranges of values for their parameters are not easy to specify. As a result, simulations of the seminal models of this type can lead to markedly different conclusions about the behavior of such populations, including the possibility of maladaptation setting stable range boundaries. Here, we harmonize such results by developing and simulating a continuum model of range evolution in a community of species that interact competitively while diffusing over an environmental gradient. Our model extends existing models by incorporating both competition and freely changing intraspecific trait variance. Simulations of this model predict a spatial profile of species' trait variance that is consistent with experimental measurements available in the literature. Moreover, they reaffirm interspecific competition as an effective factor in limiting species' ranges, even when trait variance is not artificially constrained. These theoretical results can inform the design of, as yet rare, empirical studies to clarify the evolutionary causes of range stabilization.
1511.01062
Edward Rusu
Edward Rusu
Network Models in Epidemiology: Considering Discrete and Continuous Dynamics
11 pages, 11 figures, matlab code
null
null
null
q-bio.PE math.DS q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discrete and Continuous Dynamics is the first in a series of articles on Network Models for Epidemiology. This project began in the Fall quarter of 2014 in my continuous modeling course. Since then, it has taken off and turned into a series of articles, which I hope to compile into a single report. The purpose of the report is to explore mathematical epidemiology. In this article, we discuss the historical approach to disease modeling with compartmental models. We discuss the issues and benefits of using network models. We build a discrete dynamical system to describe infection and recovery of individuals in the population. Lastly, we detail the computational scheme for iterating this model.
[ { "created": "Mon, 19 Oct 2015 23:51:02 GMT", "version": "v1" } ]
2015-11-04
[ [ "Rusu", "Edward", "" ] ]
Discrete and Continuous Dynamics is the first in a series of articles on Network Models for Epidemiology. This project began in the Fall quarter of 2014 in my continuous modeling course. Since then, it has taken off and turned into a series of articles, which I hope to compile into a single report. The purpose of the report is to explore mathematical epidemiology. In this article, we discuss the historical approach to disease modeling with compartmental models. We discuss the issues and benefits of using network models. We build a discrete dynamical system to describe infection and recovery of individuals in the population. Lastly, we detail the computational scheme for iterating this model.
2012.06482
Yannick Drif
Yannick Drif, Benjamin Roche (IRD), Pierre Valade
Cons{\'e}quences du changement climatique pour les maladies {\`a} transmission vectorielle et impact en assurance de personnes
in French
null
null
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Climate change, which is largely linked to human activities, is already having a considerable impact on our societies. Based on current trends, climate change is expected to accelerate in the coming decades. Beyond its impact on the pace of natural disasters (floods, hurricanes, etc.), climate change may have catastrophic consequences for human life and health. One of the concerns is the increase in the transmission of viruses spread by mosquitoes. Indeed, rising temperatures have a direct positive impact on the viability of mosquitoes in ecosystems, leading to their abundance and thus the risk of exposure of human populations to these pathogens. This study quantifies the consequences of global warming on the risk of epidemics of viruses transmitted by the Aedes albopictus mosquito in metropolitan France. This mosquito, which is a vector for the Dengue, Chikungunya and Zika viruses, among others, arrived in mainland France in 2004 and has since spread throughout the country. Thanks to the association previously established between the probability of the presence of the mosquito and the average temperature, combined with a mathematical model, the probability of an epidemic and the number of people who could be infected and die during a season in each department are estimated. While there is a high degree of heterogeneity in metropolitan France, nearly 2,000 deaths per year could be expected by 2040.
[ { "created": "Fri, 13 Nov 2020 15:05:14 GMT", "version": "v1" } ]
2020-12-14
[ [ "Drif", "Yannick", "", "IRD" ], [ "Roche", "Benjamin", "", "IRD" ], [ "Valade", "Pierre", "" ] ]
Climate change, which is largely linked to human activities, is already having a considerable impact on our societies. Based on current trends, climate change is expected to accelerate in the coming decades. Beyond its impact on the pace of natural disasters (floods, hurricanes, etc.), climate change may have catastrophic consequences for human life and health. One of the concerns is the increase in the transmission of viruses spread by mosquitoes. Indeed, rising temperatures have a direct positive impact on the viability of mosquitoes in ecosystems, leading to their abundance and thus to the risk of exposure of human populations to these pathogens. This study quantifies the consequences of global warming on the risk of epidemics of viruses transmitted by the mosquito Aedes albopictus in metropolitan France. This mosquito, which is a vector for the Dengue, Chikungunya and Zika viruses, among others, arrived in mainland France in 2004 and has since spread throughout the country. Using a previously established association between the probability of mosquito presence and the average temperature, combined with a mathematical model, the probability of an epidemic and the number of people who could be infected and die during a season are estimated for each department. Although there is a high degree of heterogeneity across metropolitan France, nearly 2,000 deaths per year could be expected by 2040.
2112.02097
Claire Nedellec
Anfu Tang (LISN), Claire Nédellec, Pierre Zweigenbaum (LISN), Louise Deléger, Robert Bossy
Global alignment for relation extraction in Microbiology
null
Junior Conference on Data Science and Engineering, Feb 2021, Orsay, France
null
null
q-bio.OT cs.CL cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a method to extract relations from texts based on global alignment and syntactic information. Combined with an SVM, this method is shown to perform comparably to, or even better than, an LSTM on two relation extraction (RE) tasks.
[ { "created": "Thu, 25 Nov 2021 10:19:05 GMT", "version": "v1" } ]
2021-12-07
[ [ "Tang", "Anfu", "", "LISN" ], [ "Nédellec", "Claire", "", "LISN" ], [ "Zweigenbaum", "Pierre", "", "LISN" ], [ "Deléger", "Louise", "" ], [ "Bossy", "Robert", "" ] ]
We investigate a method to extract relations from texts based on global alignment and syntactic information. Combined with an SVM, this method is shown to perform comparably to, or even better than, an LSTM on two relation extraction (RE) tasks.
2311.12570
Frederikke Isa Marin
Frederikke Isa Marin, Felix Teufel, Marc Horlacher, Dennis Madsen, Dennis Pultz, Ole Winther, Wouter Boomsma
BEND: Benchmarking DNA Language Models on biologically meaningful tasks
9 pages, 1 figure, 3 tables, code available at https://github.com/frederikkemarin/BEND, to be published in ICLR 2024
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
The genome sequence contains the blueprint for governing cellular processes. While the availability of genomes has vastly increased over the last decades, experimental annotation of the various functional, non-coding and regulatory elements encoded in the DNA sequence remains both expensive and challenging. This has sparked interest in unsupervised language modeling of genomic DNA, a paradigm that has seen great success for protein sequence data. Although various DNA language models have been proposed, evaluation tasks often differ between individual works, and might not fully recapitulate the fundamental challenges of genome annotation, including the length, scale and sparsity of the data. In this study, we introduce BEND, a Benchmark for DNA language models, featuring a collection of realistic and biologically meaningful downstream tasks defined on the human genome. We find that embeddings from current DNA LMs can approach performance of expert methods on some tasks, but only capture limited information about long-range features. BEND is available at https://github.com/frederikkemarin/BEND.
[ { "created": "Tue, 21 Nov 2023 12:34:00 GMT", "version": "v1" }, { "created": "Sat, 25 Nov 2023 07:24:40 GMT", "version": "v2" }, { "created": "Mon, 11 Mar 2024 09:49:06 GMT", "version": "v3" }, { "created": "Tue, 9 Apr 2024 09:35:08 GMT", "version": "v4" } ]
2024-04-10
[ [ "Marin", "Frederikke Isa", "" ], [ "Teufel", "Felix", "" ], [ "Horlacher", "Marc", "" ], [ "Madsen", "Dennis", "" ], [ "Pultz", "Dennis", "" ], [ "Winther", "Ole", "" ], [ "Boomsma", "Wouter", "" ] ]
The genome sequence contains the blueprint for governing cellular processes. While the availability of genomes has vastly increased over the last decades, experimental annotation of the various functional, non-coding and regulatory elements encoded in the DNA sequence remains both expensive and challenging. This has sparked interest in unsupervised language modeling of genomic DNA, a paradigm that has seen great success for protein sequence data. Although various DNA language models have been proposed, evaluation tasks often differ between individual works, and might not fully recapitulate the fundamental challenges of genome annotation, including the length, scale and sparsity of the data. In this study, we introduce BEND, a Benchmark for DNA language models, featuring a collection of realistic and biologically meaningful downstream tasks defined on the human genome. We find that embeddings from current DNA LMs can approach performance of expert methods on some tasks, but only capture limited information about long-range features. BEND is available at https://github.com/frederikkemarin/BEND.
2009.04438
Varsha Subramanyan
Anushka Halder, Arinnia Anto, Varsha Subramanyan, Moitrayee Bhattacharyya, Smitha Vishveshwara, Saraswathi Vishveshwara
Surveying the side-chain network approach to protein structure and dynamics: The SARS-CoV-2 spike protein as an illustrative case
35 pages, 6 figures
Front Mol Biosci . 2020 Dec 18;7:596945
10.3389/fmolb.2020.596945
null
q-bio.BM cond-mat.other physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network theory-based approaches provide valuable insights into the variations in global structural connectivity between differing dynamical states of proteins. Our objective is to review network-based analyses to elucidate such variations, especially in the context of subtle conformational changes. We present technical details of the construction and analyses of protein structure networks, encompassing both the non-covalent connectivity and dynamics. We examine the selection of optimal criteria for connectivity based on the physical concept of percolation. We highlight the advantages of using side-chain-based network metrics in contrast to backbone measurements. As an illustrative example, we apply the described network approach to investigate the global conformational change between the closed and partially open states of the SARS-CoV-2 spike protein. This conformational change in the spike protein is crucial for coronavirus entry and fusion into human cells. Our analysis reveals global structural reorientations between the two states of the spike protein despite small changes between the two states at the backbone level. We also observe some differences at strategic locations in the structures, correlating with their functions, asserting the advantages of the side-chain network analysis. Finally, we present a view of allostery as a subtle, synergistic global change between the ligand and the receptor, the incorporation of which would enhance drug design strategies.
[ { "created": "Wed, 9 Sep 2020 17:33:16 GMT", "version": "v1" }, { "created": "Sat, 31 Oct 2020 05:42:26 GMT", "version": "v2" } ]
2021-12-15
[ [ "Halder", "Anushka", "" ], [ "Anto", "Arinnia", "" ], [ "Subramanyan", "Varsha", "" ], [ "Bhattacharyya", "Moitrayee", "" ], [ "Vishveshwara", "Smitha", "" ], [ "Vishveshwara", "Saraswathi", "" ] ]
Network theory-based approaches provide valuable insights into the variations in global structural connectivity between differing dynamical states of proteins. Our objective is to review network-based analyses to elucidate such variations, especially in the context of subtle conformational changes. We present technical details of the construction and analyses of protein structure networks, encompassing both the non-covalent connectivity and dynamics. We examine the selection of optimal criteria for connectivity based on the physical concept of percolation. We highlight the advantages of using side-chain-based network metrics in contrast to backbone measurements. As an illustrative example, we apply the described network approach to investigate the global conformational change between the closed and partially open states of the SARS-CoV-2 spike protein. This conformational change in the spike protein is crucial for coronavirus entry and fusion into human cells. Our analysis reveals global structural reorientations between the two states of the spike protein despite small changes between the two states at the backbone level. We also observe some differences at strategic locations in the structures, correlating with their functions, asserting the advantages of the side-chain network analysis. Finally, we present a view of allostery as a subtle, synergistic global change between the ligand and the receptor, the incorporation of which would enhance drug design strategies.
2402.01056
Sushrut Thorat
Lotta Piefke, Adrien Doerig, Tim Kietzmann, Sushrut Thorat
Computational characterization of the role of an attention schema in controlling visuospatial attention
7 pages, 3 figures; Accepted at CogSci 2024
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
How does the brain control attention? The Attention Schema Theory suggests that the brain explicitly models its state of attention, termed an attention schema, for its control. However, it remains unclear under which circumstances an attention schema is computationally useful, and whether it can emerge in a learning system without hard-wiring. To address these questions, we trained a reinforcement learning agent with attention to track and catch a ball in a noisy environment. Crucially, the agent had additional resources that it could freely use. We asked under which conditions these additional resources develop an attention schema to track attention. We found that the more uncertain the agent was about the location of its attentional window, the more it benefited from these additional resources, which developed an attention schema. Together, these results indicate that an attention schema emerges in simple learning systems where attention is important and difficult to track.
[ { "created": "Thu, 1 Feb 2024 23:03:55 GMT", "version": "v1" }, { "created": "Wed, 8 May 2024 14:56:29 GMT", "version": "v2" } ]
2024-05-09
[ [ "Piefke", "Lotta", "" ], [ "Doerig", "Adrien", "" ], [ "Kietzmann", "Tim", "" ], [ "Thorat", "Sushrut", "" ] ]
How does the brain control attention? The Attention Schema Theory suggests that the brain explicitly models its state of attention, termed an attention schema, for its control. However, it remains unclear under which circumstances an attention schema is computationally useful, and whether it can emerge in a learning system without hard-wiring. To address these questions, we trained a reinforcement learning agent with attention to track and catch a ball in a noisy environment. Crucially, the agent had additional resources that it could freely use. We asked under which conditions these additional resources develop an attention schema to track attention. We found that the more uncertain the agent was about the location of its attentional window, the more it benefited from these additional resources, which developed an attention schema. Together, these results indicate that an attention schema emerges in simple learning systems where attention is important and difficult to track.
1601.01358
Leyla Isik
Leyla Isik, Andrea Tacchetti, and Tomaso Poggio
Fast, invariant representation for human action in the visual system
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the brain regions involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discounts changes in 3D viewpoint relative to when it initially discriminates between actions. We measure the latency difference between invariant and non-invariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. Our results show no difference in decoding latency or temporal profile between invariant and non-invariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time, and that both form and motion information are crucial for fast, invariant action recognition.
[ { "created": "Thu, 7 Jan 2016 00:28:06 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2017 14:46:56 GMT", "version": "v2" } ]
2017-08-16
[ [ "Isik", "Leyla", "" ], [ "Tacchetti", "Andrea", "" ], [ "Poggio", "Tomaso", "" ] ]
Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the brain regions involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discounts changes in 3D viewpoint relative to when it initially discriminates between actions. We measure the latency difference between invariant and non-invariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. Our results show no difference in decoding latency or temporal profile between invariant and non-invariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time, and that both form and motion information are crucial for fast, invariant action recognition.
1712.00683
Peter Helfer
Peter Helfer and Thomas R. Shultz
Coupled feedback loops maintain synaptic long-term potentiation: A computational model of PKMzeta synthesis and AMPA receptor trafficking
null
null
10.1371/journal.pcbi.1006147
null
q-bio.NC q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In long-term potentiation (LTP), one of the most studied types of neural plasticity, synaptic strength is persistently increased in response to stimulation. Although a number of different proteins have been implicated in the sub-cellular molecular processes underlying induction and maintenance of LTP, the precise mechanisms remain unknown. A particular challenge is to demonstrate that a proposed molecular mechanism can provide the level of stability needed to maintain memories for months or longer, in spite of the fact that many of the participating molecules have much shorter life spans. Here we present a computational model that combines simulations of several biochemical reactions that have been suggested in the LTP literature and show that the resulting system does exhibit the required stability. At the core of the model are two interlinked feedback loops of molecular reactions, one involving the atypical protein kinase PKMζ and its messenger RNA, the other involving PKMζ and GluA2-containing AMPA receptors. We demonstrate that robust bistability - stable equilibria both in the synapse's potentiated and unpotentiated states - can arise from a set of simple molecular reactions. The model is able to account for a wide range of empirical results, including induction and maintenance of late-phase LTP, cellular memory reconsolidation and the effects of different pharmaceutical interventions.
[ { "created": "Sat, 2 Dec 2017 23:54:01 GMT", "version": "v1" }, { "created": "Wed, 28 Mar 2018 20:55:55 GMT", "version": "v2" }, { "created": "Mon, 30 Apr 2018 23:12:38 GMT", "version": "v3" } ]
2019-06-11
[ [ "Helfer", "Peter", "" ], [ "Shultz", "Thomas R.", "" ] ]
In long-term potentiation (LTP), one of the most studied types of neural plasticity, synaptic strength is persistently increased in response to stimulation. Although a number of different proteins have been implicated in the sub-cellular molecular processes underlying induction and maintenance of LTP, the precise mechanisms remain unknown. A particular challenge is to demonstrate that a proposed molecular mechanism can provide the level of stability needed to maintain memories for months or longer, in spite of the fact that many of the participating molecules have much shorter life spans. Here we present a computational model that combines simulations of several biochemical reactions that have been suggested in the LTP literature and show that the resulting system does exhibit the required stability. At the core of the model are two interlinked feedback loops of molecular reactions, one involving the atypical protein kinase PKMζ and its messenger RNA, the other involving PKMζ and GluA2-containing AMPA receptors. We demonstrate that robust bistability - stable equilibria both in the synapse's potentiated and unpotentiated states - can arise from a set of simple molecular reactions. The model is able to account for a wide range of empirical results, including induction and maintenance of late-phase LTP, cellular memory reconsolidation and the effects of different pharmaceutical interventions.
1609.04136
Chi Xue
Chi Xue and Nigel Goldenfeld
Stochastic predator-prey dynamics of transposons in the human genome
null
null
10.1103/PhysRevLett.117.208101
null
q-bio.PE physics.bio-ph q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transposable elements, or transposons, are DNA sequences that can jump from site to site in the genome during the life cycle of a cell, usually encoding the very enzymes which perform their excision. However, some transposons are parasitic, relying on the enzymes produced by the regular transposons. In this case, we show that a stochastic model, which takes into account the small copy numbers of the transposons in a cell, predicts noise-induced predator-prey oscillations with a characteristic time scale that is much longer than the cell replication time, indicating that the state of the predator-prey oscillator is stored in the genome and transmitted to successive generations. Our work demonstrates the important role of number fluctuations in the expression of mobile genetic elements, and shows explicitly how ecological concepts can be applied to the dynamics and fluctuations of living genomes.
[ { "created": "Wed, 14 Sep 2016 04:50:41 GMT", "version": "v1" }, { "created": "Fri, 7 Oct 2016 02:43:10 GMT", "version": "v2" } ]
2016-11-16
[ [ "Xue", "Chi", "" ], [ "Goldenfeld", "Nigel", "" ] ]
Transposable elements, or transposons, are DNA sequences that can jump from site to site in the genome during the life cycle of a cell, usually encoding the very enzymes which perform their excision. However, some transposons are parasitic, relying on the enzymes produced by the regular transposons. In this case, we show that a stochastic model, which takes into account the small copy numbers of the transposons in a cell, predicts noise-induced predator-prey oscillations with a characteristic time scale that is much longer than the cell replication time, indicating that the state of the predator-prey oscillator is stored in the genome and transmitted to successive generations. Our work demonstrates the important role of number fluctuations in the expression of mobile genetic elements, and shows explicitly how ecological concepts can be applied to the dynamics and fluctuations of living genomes.
2304.08770
Benjamin Zoller
Po-Ta Chen, Michal Levo, Benjamin Zoller, Thomas Gregor
Gene activity fully predicts transcriptional bursting dynamics
null
null
null
null
q-bio.MN physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Transcription commonly occurs in bursts, with alternating productive (ON) and quiescent (OFF) periods, governing mRNA production rates. Yet, how transcription is regulated through bursting dynamics remains unresolved. Here, we conduct real-time measurements of endogenous transcriptional bursting with single-mRNA sensitivity. Leveraging the diverse transcriptional activities in early fly embryos, we uncover stringent relationships between bursting parameters. Specifically, we find that the durations of ON and OFF periods are linked. Regardless of the developmental stage or body-axis position, gene activity levels predict individual alleles' average ON and OFF periods. Lowly transcribing alleles predominantly modulate OFF periods (burst frequency), while highly transcribing alleles primarily tune ON periods (burst size). These relationships persist even under perturbations of cis-regulatory elements or trans-factors and account for bursting dynamics measured in other species. Our results suggest a novel mechanistic constraint governing bursting dynamics rather than a modular control of distinct parameters by distinct regulatory processes.
[ { "created": "Tue, 18 Apr 2023 06:58:46 GMT", "version": "v1" }, { "created": "Mon, 2 Oct 2023 13:45:00 GMT", "version": "v2" }, { "created": "Fri, 28 Jun 2024 15:47:06 GMT", "version": "v3" } ]
2024-07-01
[ [ "Chen", "Po-Ta", "" ], [ "Levo", "Michal", "" ], [ "Zoller", "Benjamin", "" ], [ "Gregor", "Thomas", "" ] ]
Transcription commonly occurs in bursts, with alternating productive (ON) and quiescent (OFF) periods, governing mRNA production rates. Yet, how transcription is regulated through bursting dynamics remains unresolved. Here, we conduct real-time measurements of endogenous transcriptional bursting with single-mRNA sensitivity. Leveraging the diverse transcriptional activities in early fly embryos, we uncover stringent relationships between bursting parameters. Specifically, we find that the durations of ON and OFF periods are linked. Regardless of the developmental stage or body-axis position, gene activity levels predict individual alleles' average ON and OFF periods. Lowly transcribing alleles predominantly modulate OFF periods (burst frequency), while highly transcribing alleles primarily tune ON periods (burst size). These relationships persist even under perturbations of cis-regulatory elements or trans-factors and account for bursting dynamics measured in other species. Our results suggest a novel mechanistic constraint governing bursting dynamics rather than a modular control of distinct parameters by distinct regulatory processes.
1012.3623
Woodrow L Shew
Woodrow L. Shew, Hongdian Yang, Shan Yu, Rajarshi Roy, Dietmar Plenz
Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches
null
The Journal of Neuroscience, January 5, 2011 31(01)
null
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The repertoire of neural activity patterns that a cortical network can produce constrains the network's ability to transfer and process information. Here, we measured activity patterns obtained from multi-site local field potential (LFP) recordings in cortex cultures, urethane-anesthetized rats, and awake macaque monkeys. First, we quantified the information capacity of the pattern repertoire of ongoing and stimulus-evoked activity using Shannon entropy. Next, we quantified the efficacy of information transmission between stimulus and response using mutual information. By systematically changing the ratio of excitation/inhibition (E/I) in vitro and in a network model, we discovered that both information capacity and information transmission are maximized at a particular intermediate E/I, at which ongoing activity emerges as neuronal avalanches. Next, we used our in vitro and model results to correctly predict in vivo information capacity and interactions between neuronal groups during ongoing activity. Close agreement between our experiments and model suggests that neuronal avalanches and peak information capacity arise due to criticality and are general properties of cortical networks with balanced E/I.
[ { "created": "Thu, 16 Dec 2010 14:37:47 GMT", "version": "v1" } ]
2010-12-17
[ [ "Shew", "Woodrow L.", "" ], [ "Yang", "Hongdian", "" ], [ "Yu", "Shan", "" ], [ "Roy", "Rajarshi", "" ], [ "Plenz", "Dietmar", "" ] ]
The repertoire of neural activity patterns that a cortical network can produce constrains the network's ability to transfer and process information. Here, we measured activity patterns obtained from multi-site local field potential (LFP) recordings in cortex cultures, urethane-anesthetized rats, and awake macaque monkeys. First, we quantified the information capacity of the pattern repertoire of ongoing and stimulus-evoked activity using Shannon entropy. Next, we quantified the efficacy of information transmission between stimulus and response using mutual information. By systematically changing the ratio of excitation/inhibition (E/I) in vitro and in a network model, we discovered that both information capacity and information transmission are maximized at a particular intermediate E/I, at which ongoing activity emerges as neuronal avalanches. Next, we used our in vitro and model results to correctly predict in vivo information capacity and interactions between neuronal groups during ongoing activity. Close agreement between our experiments and model suggests that neuronal avalanches and peak information capacity arise due to criticality and are general properties of cortical networks with balanced E/I.
1410.1549
Cristian Micheletti
Cristian Micheletti, Marco Di Stefano and Henri Orland
The unknotted strands of life: knots are very rare in RNA structures
7 pages, 5 figures, 1 table
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ongoing effort to detect and characterize physical entanglement in biopolymers has so far established that knots are present in many globular proteins and also abound in viral DNA packaged inside bacteriophages. RNA molecules, on the other hand, have not yet been systematically screened for the occurrence of physical knots. We have accordingly undertaken the systematic profiling of the ~6,000 RNA structures present in the Protein Data Bank. The search identified no more than three deeply knotted RNA molecules. These are ribosomal RNAs solved by cryo-EM and consist of about 3,000 nucleotides. Compared to the case of proteins and viral DNA, the observed incidence of RNA knots is therefore practically negligible. This suggests that either evolutionary selection, or thermodynamic and kinetic folding mechanisms act towards minimizing the entanglement of RNA to an extent that is unparalleled by other types of biomolecules. The properties of the three observed RNA knotting patterns provide valuable clues for designing RNA sequences capable of self-tying in a twist-knot fold.
[ { "created": "Mon, 6 Oct 2014 20:00:32 GMT", "version": "v1" } ]
2014-10-08
[ [ "Micheletti", "Cristian", "" ], [ "Di Stefano", "Marco", "" ], [ "Orland", "Henri", "" ] ]
The ongoing effort to detect and characterize physical entanglement in biopolymers has so far established that knots are present in many globular proteins and also abound in viral DNA packaged inside bacteriophages. RNA molecules, on the other hand, have not yet been systematically screened for the occurrence of physical knots. We have accordingly undertaken the systematic profiling of the ~6,000 RNA structures present in the Protein Data Bank. The search identified no more than three deeply knotted RNA molecules. These are ribosomal RNAs solved by cryo-EM and consist of about 3,000 nucleotides. Compared to the case of proteins and viral DNA, the observed incidence of RNA knots is therefore practically negligible. This suggests that either evolutionary selection, or thermodynamic and kinetic folding mechanisms act towards minimizing the entanglement of RNA to an extent that is unparalleled by other types of biomolecules. The properties of the three observed RNA knotting patterns provide valuable clues for designing RNA sequences capable of self-tying in a twist-knot fold.
0710.4269
Anirvan M. Sengupta
Madalena Chave, Eduardo D. Sontag and Anirvan M. Sengupta
Shape, size and robustness: feasible regions in the parameter space of biochemical networks
38 pages, 6 figures
null
null
null
q-bio.MN
null
The concept of robustness of regulatory networks has been closely related to the nature of the interactions among genes, and the capability of pattern maintenance or reproducibility. Defining this robustness property is a challenging task, but mathematical models have often associated it with the volume of the space of admissible parameters. Not only the volume of the space but also its topology and geometry contain information on essential aspects of the network, including feasible pathways, switching between two parallel pathways or distinct/disconnected active regions of parameters. A general method is presented here to characterize the space of admissible parameters, by writing it as a semi-algebraic set, and then theoretically analyzing its topology and geometry, as well as volume. This method provides a more objective and complete measure of the robustness of a developmental module. As an illustration, the segment polarity gene network is analyzed.
[ { "created": "Tue, 23 Oct 2007 13:55:12 GMT", "version": "v1" } ]
2007-10-24
[ [ "Chave", "Madalena", "" ], [ "Sontag", "Eduardo D.", "" ], [ "Sengupta", "Anirvan M.", "" ] ]
The concept of robustness of regulatory networks has been closely related to the nature of the interactions among genes, and the capability of pattern maintenance or reproducibility. Defining this robustness property is a challenging task, but mathematical models have often associated it with the volume of the space of admissible parameters. Not only the volume of the space but also its topology and geometry contain information on essential aspects of the network, including feasible pathways, switching between two parallel pathways or distinct/disconnected active regions of parameters. A general method is presented here to characterize the space of admissible parameters, by writing it as a semi-algebraic set, and then theoretically analyzing its topology and geometry, as well as volume. This method provides a more objective and complete measure of the robustness of a developmental module. As an illustration, the segment polarity gene network is analyzed.
1412.2368
Guowei Wei
Bao Wang and Guo-Wei Wei
Objective-oriented Persistent Homology
13 figures and 96 references
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Persistent homology provides a new approach for the topological simplification of big data via measuring the life time of intrinsic topological features in a filtration process and has found its success in scientific and engineering applications. However, such a success is essentially limited to qualitative data characterization, identification and analysis (CIA). In this work, we outline a general protocol to construct objective-oriented persistent homology methods. The minimization of the objective functional leads to a Laplace-Beltrami operator which generates a multiscale representation of the initial data and offers an objective-oriented filtration process. The resulting differential geometry based objective-oriented persistent homology is able to preserve desirable geometric features in the evolutionary filtration and enhances the corresponding topological persistence. The consistency between Laplace-Beltrami flow based filtration and Euclidean distance based filtration is confirmed on the Vietoris-Rips complex for a large number of numerical tests. The convergence and reliability of the present Laplace-Beltrami flow based cubical complex filtration approach are analyzed over various spatial and temporal mesh sizes. The efficiency and robustness of the present method are verified by more than 500 fullerene molecules. It is shown that the proposed persistent homology based quantitative model offers good predictions of total curvature energies for ten types of fullerene isomers. The present work offers the first example to design objective-oriented persistent homology to enhance or preserve desirable features in the original data during the filtration process and then automatically detect or extract the corresponding topological traits from the data.
[ { "created": "Sun, 7 Dec 2014 16:56:13 GMT", "version": "v1" } ]
2014-12-09
[ [ "Wang", "Bao", "" ], [ "Wei", "Guo-Wei", "" ] ]
Persistent homology provides a new approach for the topological simplification of big data via measuring the life time of intrinsic topological features in a filtration process and has found its success in scientific and engineering applications. However, such a success is essentially limited to qualitative data characterization, identification and analysis (CIA). In this work, we outline a general protocol to construct objective-oriented persistent homology methods. The minimization of the objective functional leads to a Laplace-Beltrami operator which generates a multiscale representation of the initial data and offers an objective-oriented filtration process. The resulting differential geometry based objective-oriented persistent homology is able to preserve desirable geometric features in the evolutionary filtration and enhances the corresponding topological persistence. The consistency between Laplace-Beltrami flow based filtration and Euclidean distance based filtration is confirmed on the Vietoris-Rips complex for a large number of numerical tests. The convergence and reliability of the present Laplace-Beltrami flow based cubical complex filtration approach are analyzed over various spatial and temporal mesh sizes. The efficiency and robustness of the present method are verified by more than 500 fullerene molecules. It is shown that the proposed persistent homology based quantitative model offers good predictions of total curvature energies for ten types of fullerene isomers. The present work offers the first example to design objective-oriented persistent homology to enhance or preserve desirable features in the original data during the filtration process and then automatically detect or extract the corresponding topological traits from the data.
2210.09470
Wei-Hsiang Lin
Wei-Hsiang Lin
Biomass transfer on autocatalytic reaction network: a delay differential equation formulation
Error in the text
null
null
null
q-bio.MN cond-mat.soft math.DS
http://creativecommons.org/licenses/by-nc-nd/4.0/
For a biological system to grow, the biomass must be incorporated, transferred, and accumulated into the underlying reaction network. There are two perspectives for studying growth dynamics of reaction networks: one way is to focus on each node in the networks and study its associated influxes and effluxes. The other way is to focus on a fraction of biomass and study its trajectory along the reaction pathways. The former perspective (analogous to the "Eulerian representation" in fluid mechanics) has been studied extensively, while the latter perspective (analogous to the "Lagrangian representation" in fluid mechanics) has not been systematically explored. In this work, I characterized the biomass transfer process for autocatalytic, growing systems with scalable reaction fluxes. Under balanced growth, the long-term growth dynamics of the systems are described by delay differential equations (DDEs). The kernel function of the DDE serves as a unique pattern for the catalytic delay of a reaction network, and in the frequency domain the delay spectrum provides a geometric interpretation of the long-term growth rate. The DDE formulation provides a clear intuition on how autocatalytic reaction pathways lead to system growth; it also enables us to classify and compare reaction networks with different network structures.
[ { "created": "Mon, 17 Oct 2022 23:17:24 GMT", "version": "v1" }, { "created": "Thu, 2 Mar 2023 02:13:29 GMT", "version": "v2" } ]
2023-03-03
[ [ "Lin", "Wei-Hsiang", "" ] ]
For a biological system to grow, the biomass must be incorporated, transferred, and accumulated into the underlying reaction network. There are two perspectives for studying growth dynamics of reaction networks: one way is to focus on each node in the networks and study its associated influxes and effluxes. The other way is to focus on a fraction of biomass and study its trajectory along the reaction pathways. The former perspective (analogous to the "Eulerian representation" in fluid mechanics) has been studied extensively, while the latter perspective (analogous to the "Lagrangian representation" in fluid mechanics) has not been systematically explored. In this work, I characterized the biomass transfer process for autocatalytic, growing systems with scalable reaction fluxes. Under balanced growth, the long-term growth dynamics of the systems are described by delay differential equations (DDEs). The kernel function of the DDE serves as a unique pattern for the catalytic delay of a reaction network, and in the frequency domain the delay spectrum provides a geometric interpretation of the long-term growth rate. The DDE formulation provides a clear intuition on how autocatalytic reaction pathways lead to system growth; it also enables us to classify and compare reaction networks with different network structures.
1206.5771
Christoph Adami
Lars Marstaller, Arend Hintze, and Christoph Adami
The evolution of representation in simple cognitive networks
36 pages, 10 figures, one Table
Neural Computation 25 (2013) 2079-2107
10.1162/NECO_a_00475
null
q-bio.NC cs.NE q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Representations are internal models of the environment that can provide guidance to a behaving agent, even in the absence of sensory information. It is not clear how representations are developed and whether or not they are necessary or even essential for intelligent behavior. We argue here that the ability to represent relevant features of the environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure R. To measure how R changes over time, we evolve two types of networks---an artificial neural network and a network of hidden Markov gates---to solve a categorization task using a genetic algorithm. We find that the capacity to represent increases during evolutionary adaptation, and that agents form representations of their environment during their lifetime. This ability allows the agents to act on sensorial inputs in the context of their acquired representations and enables complex and context-dependent behavior. We examine which concepts (features of the environment) our networks are representing, how the representations are logically encoded in the networks, and how they form as an agent behaves to solve a task. We conclude that R should be able to quantify the representations within any cognitive system, and should be predictive of an agent's long-term adaptive success.
[ { "created": "Mon, 25 Jun 2012 19:03:04 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2013 17:27:01 GMT", "version": "v2" } ]
2013-08-07
[ [ "Marstaller", "Lars", "" ], [ "Hintze", "Arend", "" ], [ "Adami", "Christoph", "" ] ]
Representations are internal models of the environment that can provide guidance to a behaving agent, even in the absence of sensory information. It is not clear how representations are developed and whether or not they are necessary or even essential for intelligent behavior. We argue here that the ability to represent relevant features of the environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure R. To measure how R changes over time, we evolve two types of networks---an artificial neural network and a network of hidden Markov gates---to solve a categorization task using a genetic algorithm. We find that the capacity to represent increases during evolutionary adaptation, and that agents form representations of their environment during their lifetime. This ability allows the agents to act on sensorial inputs in the context of their acquired representations and enables complex and context-dependent behavior. We examine which concepts (features of the environment) our networks are representing, how the representations are logically encoded in the networks, and how they form as an agent behaves to solve a task. We conclude that R should be able to quantify the representations within any cognitive system, and should be predictive of an agent's long-term adaptive success.
1911.09959
Sarra Ghanjeti
Sarra Ghanjeti
Alignment of Protein-Protein Interaction Networks
57 pages, in French, 9 figures
null
null
null
q-bio.QM q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
PPI network alignment aims to find topological and functional similarities between networks of different species. Several alignment approaches have been proposed. Each of these approaches relies on a different alignment method and uses different biological information during the alignment process, such as the topological structure of the networks and the sequence similarities between the proteins, but few of them integrate the functional similarities between proteins. In this context, we present our algorithm PPINA (Protein-Protein Interaction Network Aligner), which is an extension of the NETAL algorithm. The latter aligns two networks based on the sequence, functional and network topology similarity of the proteins. PPINA has been tested on real PPI networks. The results show that PPINA has outperformed other alignment algorithms, providing biologically meaningful results.
[ { "created": "Fri, 22 Nov 2019 10:32:31 GMT", "version": "v1" } ]
2019-11-25
[ [ "Ghanjeti", "Sarra", "" ] ]
PPI network alignment aims to find topological and functional similarities between networks of different species. Several alignment approaches have been proposed. Each of these approaches relies on a different alignment method and uses different biological information during the alignment process, such as the topological structure of the networks and the sequence similarities between the proteins, but few of them integrate the functional similarities between proteins. In this context, we present our algorithm PPINA (Protein-Protein Interaction Network Aligner), which is an extension of the NETAL algorithm. The latter aligns two networks based on the sequence, functional and network topology similarity of the proteins. PPINA has been tested on real PPI networks. The results show that PPINA has outperformed other alignment algorithms, providing biologically meaningful results.
2303.08245
Chenyu Wu
C. Wu, E.B. Gunnarsson, E.M. Myklebust, A. K\"ohn-Luque, D.S. Tadele, J.M. Enserink, A. Frigessi, J. Foo, K. Leder
Using birth-death processes to infer tumor subpopulation structure from live-cell imaging drug screening data
36 pages, 14 figures. v2: 1. Rearranged paper and figures. 2. Modified the figures to make them easier to access; results unchanged. 3. Revised the argument in section 3 and section 4; results unchanged. 4. Revised the abstract
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Tumor heterogeneity is a complex and widely recognized trait that poses significant challenges in developing effective cancer therapies. In particular, many tumors harbor a variety of subpopulations with distinct therapeutic response characteristics. Characterizing this heterogeneity by determining the subpopulation structure within a tumor enables more precise and successful treatment strategies. In our prior work, we developed PhenoPop, a computational framework for unravelling the drug-response subpopulation structure within a tumor from bulk high-throughput drug screening data. However, the deterministic nature of the underlying models driving PhenoPop restricts the model fit and the information it can extract from the data. As an advancement, we propose a stochastic model based on the linear birth-death process to address this limitation. Our model can formulate a dynamic variance along the horizon of the experiment so that the model uses more information from the data to provide a more robust estimation. In addition, the newly proposed model can be readily adapted to situations where the experimental data exhibits a positive time correlation. We test our model on simulated data (in silico) and experimental data (in vitro), which supports our argument about its advantages.
[ { "created": "Tue, 14 Mar 2023 21:39:19 GMT", "version": "v1" }, { "created": "Tue, 13 Jun 2023 17:07:15 GMT", "version": "v2" } ]
2023-06-14
[ [ "Wu", "C.", "" ], [ "Gunnarsson", "E. B.", "" ], [ "Myklebust", "E. M.", "" ], [ "Köhn-Luque", "A.", "" ], [ "Tadele", "D. S.", "" ], [ "Enserink", "J. M.", "" ], [ "Frigessi", "A.", "" ], [ "Foo", "J.", "" ], [ "Leder", "K.", "" ] ]
Tumor heterogeneity is a complex and widely recognized trait that poses significant challenges in developing effective cancer therapies. In particular, many tumors harbor a variety of subpopulations with distinct therapeutic response characteristics. Characterizing this heterogeneity by determining the subpopulation structure within a tumor enables more precise and successful treatment strategies. In our prior work, we developed PhenoPop, a computational framework for unravelling the drug-response subpopulation structure within a tumor from bulk high-throughput drug screening data. However, the deterministic nature of the underlying models driving PhenoPop restricts the model fit and the information it can extract from the data. As an advancement, we propose a stochastic model based on the linear birth-death process to address this limitation. Our model can formulate a dynamic variance along the horizon of the experiment so that the model uses more information from the data to provide a more robust estimation. In addition, the newly proposed model can be readily adapted to situations where the experimental data exhibits a positive time correlation. We test our model on simulated data (in silico) and experimental data (in vitro), which supports our argument about its advantages.
2007.08523
Paolo Pin
Matteo Bizzarri, Fabrizio Panebianco, Paolo Pin
Epidemic dynamics with homophily, vaccination choices, and pseudoscience attitudes
null
null
null
null
q-bio.PE econ.GN physics.soc-ph q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We interpret attitudes towards science and pseudosciences as cultural traits that diffuse in society through communication efforts exerted by agents. We present a tractable model that allows us to study the interaction among the diffusion of an epidemic, vaccination choices, and the dynamics of cultural traits. We apply it to study the impact of homophily between pro-vaxxers and anti-vaxxers on the total number of cases (the cumulative infection). We show that, during the outbreak of a disease, homophily has the direct effect of decreasing the speed of recovery. Hence, it may increase the number of cases and make the disease endemic. The dynamics of the shares of the two cultural traits in the population is crucial in determining the sign of the total effect on the cumulative infection: more homophily is beneficial if agents are not too flexible in changing their cultural trait, is detrimental otherwise.
[ { "created": "Thu, 16 Jul 2020 09:23:02 GMT", "version": "v1" }, { "created": "Fri, 18 Sep 2020 08:56:06 GMT", "version": "v2" }, { "created": "Fri, 11 Jun 2021 08:04:31 GMT", "version": "v3" } ]
2021-06-14
[ [ "Bizzarri", "Matteo", "" ], [ "Panebianco", "Fabrizio", "" ], [ "Pin", "Paolo", "" ] ]
We interpret attitudes towards science and pseudosciences as cultural traits that diffuse in society through communication efforts exerted by agents. We present a tractable model that allows us to study the interaction among the diffusion of an epidemic, vaccination choices, and the dynamics of cultural traits. We apply it to study the impact of homophily between pro-vaxxers and anti-vaxxers on the total number of cases (the cumulative infection). We show that, during the outbreak of a disease, homophily has the direct effect of decreasing the speed of recovery. Hence, it may increase the number of cases and make the disease endemic. The dynamics of the shares of the two cultural traits in the population is crucial in determining the sign of the total effect on the cumulative infection: more homophily is beneficial if agents are not too flexible in changing their cultural trait, is detrimental otherwise.
1911.04040
Petter Holme
Naoki Masuda, Petter Holme
Small inter-event times govern epidemic spreading on temporal networks
null
Phys. Rev. Research 2, 023163 (2020)
10.1103/PhysRevResearch.2.023163
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Just like the degrees of human and animal interaction networks, the distribution of the times between interactions is known to often be right-skewed and fat-tailed. Both these distributions affect epidemic dynamics strongly, but, as we show in this Letter, for very different reasons. Whereas the high degrees of the tail are critical for facilitating epidemics, it is the small interevent times that control the dynamics of epidemics. We investigate this effect both analytically and numerically for different versions of the Susceptible-Infected-Recovered model on different types of networks.
[ { "created": "Mon, 11 Nov 2019 02:34:50 GMT", "version": "v1" }, { "created": "Thu, 9 Apr 2020 05:54:12 GMT", "version": "v2" } ]
2020-05-14
[ [ "Masuda", "Naoki", "" ], [ "Holme", "Petter", "" ] ]
Just like the degrees of human and animal interaction networks, the distribution of the times between interactions is known to often be right-skewed and fat-tailed. Both these distributions affect epidemic dynamics strongly, but, as we show in this Letter, for very different reasons. Whereas the high degrees of the tail are critical for facilitating epidemics, it is the small interevent times that control the dynamics of epidemics. We investigate this effect both analytically and numerically for different versions of the Susceptible-Infected-Recovered model on different types of networks.
1811.12153
Diego Fasoli
Diego Fasoli, Stefano Panzeri
Stationary-State Statistics of a Binary Neural Network Model with Quenched Disorder
30 pages, 6 figures, 2 supplemental Python scripts
null
10.3390/e21070630
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the statistical properties of the stationary firing-rate states of a neural network model with quenched disorder. The model has arbitrary size, discrete-time evolution equations and binary firing rates, while the topology and the strength of the synaptic connections are randomly generated from known, generally arbitrary, probability distributions. We derived semi-analytical expressions of the occurrence probability of the stationary states and the mean multistability diagram of the model, in terms of the distribution of the synaptic connections and of the external stimuli to the network. Our calculations rely on the probability distribution of the bifurcation points of the stationary states with respect to the external stimuli, which can be calculated in terms of the permanent of special matrices, according to extreme value theory. While our semi-analytical expressions are exact for any size of the network and for any distribution of the synaptic connections, we also specialized our calculations to the case of statistically-homogeneous multi-population networks. In the specific case of this network topology, we calculated analytically the permanent, obtaining a compact formula that outperforms the Balasubramanian-Bax-Franklin-Glynn algorithm by several orders of magnitude. To conclude, by applying the Fisher-Tippett-Gnedenko theorem, we derived asymptotic expressions of the stationary-state statistics of multi-population networks in the large-network-size limit, in terms of the Gumbel (double exponential) distribution. We also provide a Python implementation of our formulas and some examples of the results generated by the code.
[ { "created": "Thu, 29 Nov 2018 14:11:24 GMT", "version": "v1" } ]
2019-07-24
[ [ "Fasoli", "Diego", "" ], [ "Panzeri", "Stefano", "" ] ]
We study the statistical properties of the stationary firing-rate states of a neural network model with quenched disorder. The model has arbitrary size, discrete-time evolution equations and binary firing rates, while the topology and the strength of the synaptic connections are randomly generated from known, generally arbitrary, probability distributions. We derived semi-analytical expressions of the occurrence probability of the stationary states and the mean multistability diagram of the model, in terms of the distribution of the synaptic connections and of the external stimuli to the network. Our calculations rely on the probability distribution of the bifurcation points of the stationary states with respect to the external stimuli, which can be calculated in terms of the permanent of special matrices, according to extreme value theory. While our semi-analytical expressions are exact for any size of the network and for any distribution of the synaptic connections, we also specialized our calculations to the case of statistically-homogeneous multi-population networks. In the specific case of this network topology, we calculated analytically the permanent, obtaining a compact formula that outperforms the Balasubramanian-Bax-Franklin-Glynn algorithm by several orders of magnitude. To conclude, by applying the Fisher-Tippett-Gnedenko theorem, we derived asymptotic expressions of the stationary-state statistics of multi-population networks in the large-network-size limit, in terms of the Gumbel (double exponential) distribution. We also provide a Python implementation of our formulas and some examples of the results generated by the code.
2001.11437
Vu Anh Truong Nguyen
Vu AT Nguyen and Dervis Can Vural
Theoretical guidelines for editing ecological communities
10 pages, 8 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Having control over species abundances and community resilience is of great interest for experimental, agricultural, industrial and conservation purposes. Here, we theoretically explore the possibility of manipulating ecological communities by modifying pairwise interactions. Specifically, we establish which interaction values should be modified, and by how much, in order to alter the composition or resilience of a community towards a favorable direction. While doing so, we also take into account the experimental difficulties in making such modifications by including in our optimization process, a cost parameter, which penalizes large modifications. In addition to prescribing what changes should be made to interspecies interactions given some modification cost, our approach also serves to establish the limits of community control, i.e. how well can one approach an ecological goal at best, even when not constrained by cost.
[ { "created": "Thu, 30 Jan 2020 16:36:40 GMT", "version": "v1" } ]
2020-01-31
[ [ "Nguyen", "Vu AT", "" ], [ "Vural", "Dervis Can", "" ] ]
Having control over species abundances and community resilience is of great interest for experimental, agricultural, industrial and conservation purposes. Here, we theoretically explore the possibility of manipulating ecological communities by modifying pairwise interactions. Specifically, we establish which interaction values should be modified, and by how much, in order to alter the composition or resilience of a community towards a favorable direction. While doing so, we also take into account the experimental difficulties in making such modifications by including in our optimization process, a cost parameter, which penalizes large modifications. In addition to prescribing what changes should be made to interspecies interactions given some modification cost, our approach also serves to establish the limits of community control, i.e. how well can one approach an ecological goal at best, even when not constrained by cost.
1108.2840
Miguel \'A. Carreira-Perpi\~n\'an
Miguel \'A. Carreira-Perpi\~n\'an, Geoffrey J. Goodhill
Generalised elastic nets
52 pages, 16 figures. Original manuscript dated August 14, 2003 and not updated since. Current authors' email addresses: mcarreira-perpinan@ucmerced.edu, g.goodhill@uq.edu.au
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The elastic net was introduced as a heuristic algorithm for combinatorial optimisation and has been applied, among other problems, to biological modelling. It has an energy function which trades off a fitness term against a tension term. In the original formulation of the algorithm the tension term was implicitly based on a first-order derivative. In this paper we generalise the elastic net model to an arbitrary quadratic tension term, e.g. derived from a discretised differential operator, and give an efficient learning algorithm. We refer to these as generalised elastic nets (GENs). We give a theoretical analysis of the tension term for 1D nets with periodic boundary conditions, and show that the model is sensitive to the choice of finite difference scheme that represents the discretised derivative. We illustrate some of these issues in the context of cortical map models, by relating the choice of tension term to a cortical interaction function. In particular, we prove that this interaction takes the form of a Mexican hat for the original elastic net, and of progressively more oscillatory Mexican hats for higher-order derivatives. The results apply not only to generalised elastic nets but also to other methods using discrete differential penalties, and are expected to be useful in other areas, such as data analysis, computer graphics and optimisation problems.
[ { "created": "Sun, 14 Aug 2011 03:47:14 GMT", "version": "v1" } ]
2011-08-16
[ [ "Carreira-Perpiñán", "Miguel Á.", "" ], [ "Goodhill", "Geoffrey J.", "" ] ]
The elastic net was introduced as a heuristic algorithm for combinatorial optimisation and has been applied, among other problems, to biological modelling. It has an energy function which trades off a fitness term against a tension term. In the original formulation of the algorithm the tension term was implicitly based on a first-order derivative. In this paper we generalise the elastic net model to an arbitrary quadratic tension term, e.g. derived from a discretised differential operator, and give an efficient learning algorithm. We refer to these as generalised elastic nets (GENs). We give a theoretical analysis of the tension term for 1D nets with periodic boundary conditions, and show that the model is sensitive to the choice of finite difference scheme that represents the discretised derivative. We illustrate some of these issues in the context of cortical map models, by relating the choice of tension term to a cortical interaction function. In particular, we prove that this interaction takes the form of a Mexican hat for the original elastic net, and of progressively more oscillatory Mexican hats for higher-order derivatives. The results apply not only to generalised elastic nets but also to other methods using discrete differential penalties, and are expected to be useful in other areas, such as data analysis, computer graphics and optimisation problems.
2304.09566
Giuseppe de Vito
Giuseppe de Vito, Lapo Turrini, Chiara Fornetto, Elena Trabalzini, Pietro Ricci, Duccio Fanelli, Francesco Vanzi, Francesco Saverio Pavone
Brain-wide functional imaging to highlight differences between the diurnal and nocturnal neuronal activity in zebrafish larvae
22 pages, 8 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most living organisms show highly conserved physiological changes following a 24-hour cycle which goes by the name of circadian rhythm. Among experimental models, the effects of light-dark cycle have been recently investigated in the larval zebrafish. Owing to its small size and transparency, this vertebrate enables optical access to the entire brain. Indeed, the combination of this organism with light-sheet imaging grants high spatio-temporal resolution volumetric recording of neuronal activity. This imaging technique, in its multiphoton variant, allows functional investigations without unwanted visual stimulation. Here, we employed a custom two-photon light-sheet microscope to study brain-wide differences in neuronal activity between diurnal and nocturnal periods in larval zebrafish assessed at the transition between day and night. We describe for the first time an activity increase in the low frequency domain of the pretectum and a frequency-localized activity decrease of the anterior rhombencephalic turning region during the nocturnal period. Moreover, our data confirm a nocturnal reduction in habenular activity. Furthermore, brain-wide detrended fluctuation analysis revealed a nocturnal decrease in the self-affinity of the neuronal signals in parts of the dorsal thalamus and the medulla oblongata and an increase in the pretectum. Our data show that brain-wide nonlinear light-sheet imaging represents a useful tool to investigate circadian rhythm effects on neuronal activity.
[ { "created": "Wed, 19 Apr 2023 11:09:50 GMT", "version": "v1" }, { "created": "Fri, 25 Aug 2023 17:16:27 GMT", "version": "v2" } ]
2023-08-28
[ [ "de Vito", "Giuseppe", "" ], [ "Turrini", "Lapo", "" ], [ "Fornetto", "Chiara", "" ], [ "Trabalzini", "Elena", "" ], [ "Ricci", "Pietro", "" ], [ "Fanelli", "Duccio", "" ], [ "Vanzi", "Francesco", "" ], [ "Pavone", "Francesco Saverio", "" ] ]
Most living organisms show highly conserved physiological changes following a 24-hour cycle which goes by the name of circadian rhythm. Among experimental models, the effects of light-dark cycle have been recently investigated in the larval zebrafish. Owing to its small size and transparency, this vertebrate enables optical access to the entire brain. Indeed, the combination of this organism with light-sheet imaging grants high spatio-temporal resolution volumetric recording of neuronal activity. This imaging technique, in its multiphoton variant, allows functional investigations without unwanted visual stimulation. Here, we employed a custom two-photon light-sheet microscope to study brain-wide differences in neuronal activity between diurnal and nocturnal periods in larval zebrafish assessed at the transition between day and night. We describe for the first time an activity increase in the low frequency domain of the pretectum and a frequency-localized activity decrease of the anterior rhombencephalic turning region during the nocturnal period. Moreover, our data confirm a nocturnal reduction in habenular activity. Furthermore, brain-wide detrended fluctuation analysis revealed a nocturnal decrease in the self-affinity of the neuronal signals in parts of the dorsal thalamus and the medulla oblongata and an increase in the pretectum. Our data show that brain-wide nonlinear light-sheet imaging represents a useful tool to investigate circadian rhythm effects on neuronal activity.
2007.09466
Sergio L\'opez Bernal
Sergio L\'opez Bernal, Alberto Huertas Celdr\'an, Lorenzo Fern\'andez Maim\'o, Michael Taynnan Barros, Sasitharan Balasubramaniam, Gregorio Mart\'inez P\'erez
Cyberattacks on Miniature Brain Implants to Disrupt Spontaneous Neural Signaling
null
null
null
null
q-bio.NC cs.CR
http://creativecommons.org/publicdomain/zero/1.0/
Brain-Computer Interfaces (BCI) arose as systems that merge computing systems with the human brain to facilitate recording, stimulation, and inhibition of neural activity. Over the years, the development of BCI technologies has shifted towards miniaturization of devices that can be seamlessly embedded into the brain and can target single-neuron or small-population sensing and control. We present a motivating example highlighting vulnerabilities of two promising micron-scale BCI technologies, demonstrating the lack of security and privacy principles in existing solutions. This situation opens the door to a novel family of cyberattacks, called neuronal cyberattacks, affecting neuronal signaling. This paper defines the first two neural cyberattacks, Neuronal Flooding (FLO) and Neuronal Scanning (SCA), each of which can affect the natural activity of neurons. This work implements these attacks in a neuronal simulator to determine their impact on spontaneous neuronal behavior, defining three metrics: number of spikes, percentage of shifts, and dispersion of spikes. Several experiments demonstrate that both cyberattacks produce a reduction in spikes compared to spontaneous behavior, generating a rise in temporal shifts and an increase in dispersion. SCA has a higher impact than FLO on the metrics based on the number of spikes and on dispersion, whereas FLO is slightly more damaging in terms of the percentage of shifts. Nevertheless, the intrinsic behavior of each attack leads to differences in how they alter neuronal signaling. FLO is suited to producing an immediate impact on neuronal activity, whereas SCA is more effective at damaging neural signaling in the long term.
[ { "created": "Sat, 18 Jul 2020 16:25:46 GMT", "version": "v1" }, { "created": "Thu, 10 Sep 2020 14:55:42 GMT", "version": "v2" } ]
2020-09-11
[ [ "Bernal", "Sergio López", "" ], [ "Celdrán", "Alberto Huertas", "" ], [ "Maimó", "Lorenzo Fernández", "" ], [ "Barros", "Michael Taynnan", "" ], [ "Balasubramaniam", "Sasitharan", "" ], [ "Pérez", "Gregorio Martínez", "" ] ]
Brain-Computer Interfaces (BCI) arose as systems that merge computing systems with the human brain to facilitate recording, stimulation, and inhibition of neural activity. Over the years, the development of BCI technologies has shifted towards miniaturization of devices that can be seamlessly embedded into the brain and can target single-neuron or small-population sensing and control. We present a motivating example highlighting vulnerabilities of two promising micron-scale BCI technologies, demonstrating the lack of security and privacy principles in existing solutions. This situation opens the door to a novel family of cyberattacks, called neuronal cyberattacks, affecting neuronal signaling. This paper defines the first two neural cyberattacks, Neuronal Flooding (FLO) and Neuronal Scanning (SCA), each of which can affect the natural activity of neurons. This work implements these attacks in a neuronal simulator to determine their impact on spontaneous neuronal behavior, defining three metrics: number of spikes, percentage of shifts, and dispersion of spikes. Several experiments demonstrate that both cyberattacks produce a reduction in spikes compared to spontaneous behavior, generating a rise in temporal shifts and an increase in dispersion. SCA has a higher impact than FLO on the metrics based on the number of spikes and on dispersion, whereas FLO is slightly more damaging in terms of the percentage of shifts. Nevertheless, the intrinsic behavior of each attack leads to differences in how they alter neuronal signaling. FLO is suited to producing an immediate impact on neuronal activity, whereas SCA is more effective at damaging neural signaling in the long term.
1505.02928
Srinandan Dasmahapatra
An Nguyen, Adam Prugel-Bennett and Srinandan Dasmahapatra
A Low Dimensional Approximation For Competence In Bacillus Subtilis
12 pages, to be published in IEEE/ACM Transactions on Computational Biology and Bioinformatics
null
10.1109/TCBB.2015.2440275
null
q-bio.QM physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The behaviour of a high dimensional stochastic system described by a Chemical Master Equation (CME) depends on many parameters, rendering explicit simulation an inefficient method for exploring the properties of such models. Capturing their behaviour by low-dimensional models makes analysis of system behaviour tractable. In this paper, we present low dimensional models for the noise-induced excitable dynamics in Bacillus subtilis, whereby a key protein ComK, which drives a complex chain of reactions leading to bacterial competence, gets expressed rapidly in large quantities (competent state) before subsiding to low levels of expression (vegetative state). These rapid reactions suggest the application of an adiabatic approximation of the dynamics of the regulatory model that, however, leads to competence durations that are incorrect by a factor of 2. We apply a modified version of an iterative functional procedure that faithfully approximates the time-course of the trajectories in terms of a 2-dimensional model involving proteins ComK and ComS. Furthermore, in order to describe the bimodal bivariate marginal probability distribution obtained from the Gillespie simulations of the CME, we introduce a tunable multiplicative noise term in a 2-dimensional Langevin model whose stationary state is described by the time-independent solution of the corresponding Fokker-Planck equation.
[ { "created": "Tue, 12 May 2015 09:30:47 GMT", "version": "v1" } ]
2016-11-17
[ [ "Nguyen", "An", "" ], [ "Prugel-Bennett", "Adam", "" ], [ "Dasmahapatra", "Srinandan", "" ] ]
The behaviour of a high dimensional stochastic system described by a Chemical Master Equation (CME) depends on many parameters, rendering explicit simulation an inefficient method for exploring the properties of such models. Capturing their behaviour by low-dimensional models makes analysis of system behaviour tractable. In this paper, we present low dimensional models for the noise-induced excitable dynamics in Bacillus subtilis, whereby a key protein ComK, which drives a complex chain of reactions leading to bacterial competence, gets expressed rapidly in large quantities (competent state) before subsiding to low levels of expression (vegetative state). These rapid reactions suggest the application of an adiabatic approximation of the dynamics of the regulatory model that, however, leads to competence durations that are incorrect by a factor of 2. We apply a modified version of an iterative functional procedure that faithfully approximates the time-course of the trajectories in terms of a 2-dimensional model involving proteins ComK and ComS. Furthermore, in order to describe the bimodal bivariate marginal probability distribution obtained from the Gillespie simulations of the CME, we introduce a tunable multiplicative noise term in a 2-dimensional Langevin model whose stationary state is described by the time-independent solution of the corresponding Fokker-Planck equation.
2306.14707
Danko Georgiev
Danko D. Georgiev
Causal potency of consciousness in the physical world
47 pages, 7 figures. International Journal of Modern Physics B (2023)
International Journal of Modern Physics B 2024; 38 (19): 2450256
10.1142/s0217979224502564
null
q-bio.NC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of the human mind through natural selection mandates that our conscious experiences are causally potent in order to leave a tangible impact upon the surrounding physical world. Any attempt to construct a functional theory of the conscious mind within the framework of classical physics, however, inevitably leads to causally impotent conscious experiences in direct contradiction to evolution theory. Here, we derive several rigorous theorems that identify the origin of the latter impasse in the mathematical properties of ordinary differential equations employed in combination with the alleged functional production of the mind by the brain. Then, we demonstrate that a mind--brain theory consistent with causally potent conscious experiences is provided by modern quantum physics, in which the unobservable conscious mind is reductively identified with the quantum state of the brain and the observable brain is constructed by the physical measurement of quantum brain observables. The resulting quantum stochastic dynamics obtained from sequential quantum measurements of the brain is governed by stochastic differential equations, which permit genuine free will exercised through sequential conscious choices of future courses of action. Thus, quantum reductionism provides a solid theoretical foundation for the causal potency of consciousness, free will and cultural transmission.
[ { "created": "Mon, 26 Jun 2023 13:55:33 GMT", "version": "v1" } ]
2024-05-10
[ [ "Georgiev", "Danko D.", "" ] ]
The evolution of the human mind through natural selection mandates that our conscious experiences are causally potent in order to leave a tangible impact upon the surrounding physical world. Any attempt to construct a functional theory of the conscious mind within the framework of classical physics, however, inevitably leads to causally impotent conscious experiences in direct contradiction to evolution theory. Here, we derive several rigorous theorems that identify the origin of the latter impasse in the mathematical properties of ordinary differential equations employed in combination with the alleged functional production of the mind by the brain. Then, we demonstrate that a mind--brain theory consistent with causally potent conscious experiences is provided by modern quantum physics, in which the unobservable conscious mind is reductively identified with the quantum state of the brain and the observable brain is constructed by the physical measurement of quantum brain observables. The resulting quantum stochastic dynamics obtained from sequential quantum measurements of the brain is governed by stochastic differential equations, which permit genuine free will exercised through sequential conscious choices of future courses of action. Thus, quantum reductionism provides a solid theoretical foundation for the causal potency of consciousness, free will and cultural transmission.
2212.07695
Vittorio Lippi
Vittorio Lippi, Christoph Maurer, Thomas Mergner
Human body-sway steady-state responses to small amplitude tilts and translations of the support surface -- Effects of superposition of the two stimuli
10 pages, 7 figures
Gait & Posture, Volume 100, 2023, Pages 139-148, ISSN 0966-6362
10.1016/j.gaitpost.2022.12.003
null
q-bio.NC stat.AP
http://creativecommons.org/licenses/by/4.0/
Upright stance tested with a superposition of support surface tilt and translation. Steady state response is characterized by frequency response function. Interaction between two stimuli absent in most of the cases. Larger stimuli may create interaction. Simulations suggest that the observed effects can be due to joint stiffness modulation.
[ { "created": "Thu, 15 Dec 2022 10:13:32 GMT", "version": "v1" } ]
2022-12-16
[ [ "Lippi", "Vittorio", "" ], [ "Maurer", "Christoph", "" ], [ "Mergner", "Thomas", "" ] ]
Upright stance tested with a superposition of support surface tilt and translation. Steady state response is characterized by frequency response function. Interaction between two stimuli absent in most of the cases. Larger stimuli may create interaction. Simulations suggest that the observed effects can be due to joint stiffness modulation.
2012.08671
Aaron Wang
Wei Cheng, Ghulam Murtaza, Aaron Wang
SimpleChrome: Encoding of Combinatorial Effects for Predicting Gene Expression
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Due to recent breakthroughs in state-of-the-art DNA sequencing technology, genomics data sets have become ubiquitous. The emergence of large-scale data sets provides great opportunities for better understanding of genomics, especially gene regulation. Although each cell in the human body contains the same set of DNA information, gene expression controls the functions of these cells by either turning genes on or off, known as gene expression levels. There are two important factors that control the expression level of each gene: (1) Gene regulation such as histone modifications can directly regulate gene expression. (2) Neighboring genes that are functionally related to or interact with each other can also affect gene expression level. Previous efforts have tried to address the former using an attention-based model. However, addressing the second problem requires the incorporation of all potentially related gene information into the model. Though modern machine learning and deep learning models have been able to capture gene expression signals when applied to moderately sized data, they have struggled to recover the underlying signals of the data due to the nature of the data's higher dimensionality. To remedy this issue, we present SimpleChrome, a deep learning model that learns the latent histone modification representations of genes. The features learned from the model allow us to better understand the combinatorial effects of cross-gene interactions and direct gene regulation on the target gene expression. The results of this paper show outstanding improvements in the predictive capabilities of downstream models and greatly relax the need for a large data set to learn a robust, generalized neural network. These results have immediate downstream effects in epigenomics research and drug development.
[ { "created": "Tue, 15 Dec 2020 23:30:36 GMT", "version": "v1" }, { "created": "Thu, 17 Dec 2020 05:58:21 GMT", "version": "v2" } ]
2020-12-18
[ [ "Cheng", "Wei", "" ], [ "Murtaza", "Ghulam", "" ], [ "Wang", "Aaron", "" ] ]
Due to recent breakthroughs in state-of-the-art DNA sequencing technology, genomics data sets have become ubiquitous. The emergence of large-scale data sets provides great opportunities for better understanding of genomics, especially gene regulation. Although each cell in the human body contains the same set of DNA information, gene expression controls the functions of these cells by either turning genes on or off, known as gene expression levels. There are two important factors that control the expression level of each gene: (1) Gene regulation such as histone modifications can directly regulate gene expression. (2) Neighboring genes that are functionally related to or interact with each other can also affect gene expression level. Previous efforts have tried to address the former using an attention-based model. However, addressing the second problem requires the incorporation of all potentially related gene information into the model. Though modern machine learning and deep learning models have been able to capture gene expression signals when applied to moderately sized data, they have struggled to recover the underlying signals of the data due to the nature of the data's higher dimensionality. To remedy this issue, we present SimpleChrome, a deep learning model that learns the latent histone modification representations of genes. The features learned from the model allow us to better understand the combinatorial effects of cross-gene interactions and direct gene regulation on the target gene expression. The results of this paper show outstanding improvements in the predictive capabilities of downstream models and greatly relax the need for a large data set to learn a robust, generalized neural network. These results have immediate downstream effects in epigenomics research and drug development.
1309.7414
Liane Gabora
Liane Gabora
The Beer Can Theory of Creativity
25 pages
In P. Bentley & D. Corne (Eds.), Creative Evolutionary Systems (pp. 147-161). San Francisco: Morgan Kauffman. (2000)
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This chapter explores the cognitive mechanisms underlying the emergence and evolution of cultural novelty. Section Two summarizes the rationale for viewing the process by which the fruits of the mind take shape as they spread from one individual to another as a form of evolution, and briefly discusses a computer model of this process. Section Three presents theoretical and empirical evidence that the sudden proliferation of human culture approximately two million years ago began with the capacity for creativity: that is, the ability to generate novelty strategically and contextually. The next two sections take a closer look at the creative process. Section Four examines the mechanisms underlying the fluid, associative thought that constitutes the inspirational component of creativity. Section Five explores how that initial flicker of inspiration crystallizes into a solid, workable idea as it gets mulled over in light of the various constraints and affordances of the world into which it will be born. Finally, Section Six wraps things up with a few speculative thoughts about the overall unfolding of this evolutionary process.
[ { "created": "Sat, 28 Sep 2013 03:07:53 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2019 20:01:35 GMT", "version": "v2" }, { "created": "Tue, 9 Jul 2019 20:04:24 GMT", "version": "v3" } ]
2019-07-11
[ [ "Gabora", "Liane", "" ] ]
This chapter explores the cognitive mechanisms underlying the emergence and evolution of cultural novelty. Section Two summarizes the rationale for viewing the process by which the fruits of the mind take shape as they spread from one individual to another as a form of evolution, and briefly discusses a computer model of this process. Section Three presents theoretical and empirical evidence that the sudden proliferation of human culture approximately two million years ago began with the capacity for creativity: that is, the ability to generate novelty strategically and contextually. The next two sections take a closer look at the creative process. Section Four examines the mechanisms underlying the fluid, associative thought that constitutes the inspirational component of creativity. Section Five explores how that initial flicker of inspiration crystallizes into a solid, workable idea as it gets mulled over in light of the various constraints and affordances of the world into which it will be born. Finally, Section Six wraps things up with a few speculative thoughts about the overall unfolding of this evolutionary process.
1706.03013
Genki Ichinose
Genki Ichinose, Yoshiki Satotani, Hiroki Sayama
How mutation alters fitness of cooperation in networked evolutionary games
6 pages, 5 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperation is ubiquitous in every level of living organisms. It is known that spatial (network) structure is a viable mechanism for cooperation to evolve. Until recently, it has been difficult to predict whether cooperation can evolve at a network (population) level. To address this problem, Pinheiro et al. proposed a numerical metric, called Average Gradient of Selection (AGoS) in 2012. AGoS can characterize and forecast the evolutionary fate of cooperation at a population level. However, stochastic mutation of strategies was not considered in the analysis of AGoS. Here we analyzed the evolution of cooperation using AGoS where mutation may occur to strategies of individuals in networks. Our analyses revealed that mutation always has a negative effect on the evolution of cooperation regardless of the fraction of cooperators and network structures. Moreover, we found that mutation affects the fitness of cooperation differently on different social network structures.
[ { "created": "Fri, 9 Jun 2017 15:57:02 GMT", "version": "v1" } ]
2017-06-12
[ [ "Ichinose", "Genki", "" ], [ "Satotani", "Yoshiki", "" ], [ "Sayama", "Hiroki", "" ] ]
Cooperation is ubiquitous in every level of living organisms. It is known that spatial (network) structure is a viable mechanism for cooperation to evolve. Until recently, it has been difficult to predict whether cooperation can evolve at a network (population) level. To address this problem, Pinheiro et al. proposed a numerical metric, called Average Gradient of Selection (AGoS) in 2012. AGoS can characterize and forecast the evolutionary fate of cooperation at a population level. However, stochastic mutation of strategies was not considered in the analysis of AGoS. Here we analyzed the evolution of cooperation using AGoS where mutation may occur to strategies of individuals in networks. Our analyses revealed that mutation always has a negative effect on the evolution of cooperation regardless of the fraction of cooperators and network structures. Moreover, we found that mutation affects the fitness of cooperation differently on different social network structures.
2307.06732
Robert Rosenbaum
Vicky Zhu and Robert Rosenbaum
Learning fixed points of recurrent neural networks by reparameterizing the network model
null
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by/4.0/
In computational neuroscience, fixed points of recurrent neural networks are commonly used to model neural responses to static or slowly changing stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. A natural approach is to use gradient descent on the Euclidean space of synaptic weights. We show that this approach can lead to poor learning performance due, in part, to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We show that these learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
[ { "created": "Thu, 13 Jul 2023 13:09:11 GMT", "version": "v1" }, { "created": "Thu, 27 Jul 2023 09:23:48 GMT", "version": "v2" } ]
2023-07-28
[ [ "Zhu", "Vicky", "" ], [ "Rosenbaum", "Robert", "" ] ]
In computational neuroscience, fixed points of recurrent neural networks are commonly used to model neural responses to static or slowly changing stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. A natural approach is to use gradient descent on the Euclidean space of synaptic weights. We show that this approach can lead to poor learning performance due, in part, to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We show that these learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.
1101.1858
Kate Inasaridze
Ketevan Inasaridze, Vera Bzhalava
Dual-task Coordination in Children and Adolescents with Attention Deficit Hyperactivity Disorder (ADHD)
31 pages, 9 figures, 7 tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The deficit of executive functioning was found to be associated with attention deficit hyperactivity disorder (ADHD) in general and its subtypes. One of the important functions of the central executive is the ability to coordinate two tasks simultaneously. The study aimed at defining the dual-task performance characteristics in healthy children and adolescents on the computerised and the paper and pencil dual-task methods; investigating the effect of task difficulty on dual-task performance in ADHD in comparison to age and years of education matched healthy controls; testing if the paper and pencil version of the dual-task method gives the same results in ADHD and healthy controls; investigating whether dual-task functioning in ADHD is defined by deficits in general motor functioning and comorbidity factors. The study investigated dual-task functioning in 91 typically developing controls and 91 children with ADHD, aged 6-16 years. It was found that: (1) dual-task coordination is available in children and adolescents with ADHD in general and in its subtypes and is not significantly different from the performance of age and years of education matched healthy controls; (2) increasing the task difficulty in the dual-task paradigm does not disproportionately affect children and adolescents with ADHD in comparison to age and years of education matched healthy controls; (3) the paper and pencil version of the dual-task method gives the same results in ADHD and healthy controls as the computerised version; (4) dual-task functioning in ADHD in general and in its subtypes is not defined by general motor functioning, while in healthy controls dual-task performance is associated with the general motor functioning level; (5) dual-task functioning in ADHD in general and in its subtypes is not defined by comorbidity factors.
[ { "created": "Mon, 10 Jan 2011 16:08:00 GMT", "version": "v1" } ]
2011-01-11
[ [ "Inasaridze", "Ketevan", "" ], [ "Bzhalava", "Vera", "" ] ]
The deficit of executive functioning was found to be associated with attention deficit hyperactivity disorder (ADHD) in general and its subtypes. One of the important functions of the central executive is the ability to coordinate two tasks simultaneously. The study aimed at defining the dual-task performance characteristics in healthy children and adolescents on the computerised and the paper and pencil dual-task methods; investigating the effect of task difficulty on dual-task performance in ADHD in comparison to age and years of education matched healthy controls; testing if the paper and pencil version of the dual-task method gives the same results in ADHD and healthy controls; investigating whether dual-task functioning in ADHD is defined by deficits in general motor functioning and comorbidity factors. The study investigated dual-task functioning in 91 typically developing controls and 91 children with ADHD, aged 6-16 years. It was found that: (1) dual-task coordination is available in children and adolescents with ADHD in general and in its subtypes and is not significantly different from the performance of age and years of education matched healthy controls; (2) increasing the task difficulty in the dual-task paradigm does not disproportionately affect children and adolescents with ADHD in comparison to age and years of education matched healthy controls; (3) the paper and pencil version of the dual-task method gives the same results in ADHD and healthy controls as the computerised version; (4) dual-task functioning in ADHD in general and in its subtypes is not defined by general motor functioning, while in healthy controls dual-task performance is associated with the general motor functioning level; (5) dual-task functioning in ADHD in general and in its subtypes is not defined by comorbidity factors.
1510.09155
Momoko Hayamizu
Momoko Hayamizu, Hiroshi Endo, and Kenji Fukumizu
A characterization of minimum spanning tree-like metric spaces
9 pages, 2 figures
null
null
null
q-bio.QM cs.DM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed a surge of biological interest in the minimum spanning tree (MST) problem for its relevance to automatic model construction using the distances between data points. Despite the increasing use of MST algorithms for this purpose, the goodness-of-fit of an MST to the data is often elusive because no quantitative criteria have been developed to measure it. Motivated by this, we provide a necessary and sufficient condition to ensure that a metric space on n points can be represented by a fully labeled tree on n vertices, and thereby determine when an MST preserves all pairwise distances between points in a finite metric space.
[ { "created": "Fri, 30 Oct 2015 16:57:08 GMT", "version": "v1" } ]
2015-11-02
[ [ "Hayamizu", "Momoko", "" ], [ "Endo", "Hiroshi", "" ], [ "Fukumizu", "Kenji", "" ] ]
Recent years have witnessed a surge of biological interest in the minimum spanning tree (MST) problem for its relevance to automatic model construction using the distances between data points. Despite the increasing use of MST algorithms for this purpose, the goodness-of-fit of an MST to the data is often elusive because no quantitative criteria have been developed to measure it. Motivated by this, we provide a necessary and sufficient condition to ensure that a metric space on n points can be represented by a fully labeled tree on n vertices, and thereby determine when an MST preserves all pairwise distances between points in a finite metric space.
1509.01697
Alan D. Rendall
Dorothea M\"ohring and Alan D. Rendall
Overload breakdown in models for photosynthesis
null
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many models of the Calvin cycle of photosynthesis it is observed that there are solutions where concentrations of key substances belonging to the cycle tend to zero at late times, a phenomenon known as overload breakdown. In this paper we prove theorems about the existence and non-existence of solutions of this type and obtain information on which concentrations tend to zero when overload breakdown occurs. As a starting point we take a model of Pettersson and Ryde-Pettersson which seems to be prone to overload breakdown and a modification of it due to Poolman which was intended to avoid this effect.
[ { "created": "Sat, 5 Sep 2015 12:46:35 GMT", "version": "v1" } ]
2015-09-08
[ [ "Möhring", "Dorothea", "" ], [ "Rendall", "Alan D.", "" ] ]
In many models of the Calvin cycle of photosynthesis it is observed that there are solutions where concentrations of key substances belonging to the cycle tend to zero at late times, a phenomenon known as overload breakdown. In this paper we prove theorems about the existence and non-existence of solutions of this type and obtain information on which concentrations tend to zero when overload breakdown occurs. As a starting point we take a model of Pettersson and Ryde-Pettersson which seems to be prone to overload breakdown and a modification of it due to Poolman which was intended to avoid this effect.
q-bio/0311039
Alexander Kraskov
Alexander Kraskov, Harald St\"ogbauer, Ralph G. Andrzejak, and Peter Grassberger
Hierarchical Clustering Based on Mutual Information
11 pages, 5 figures
null
null
null
q-bio.QM cs.CC physics.bio-ph
null
Motivation: Clustering is a frequently used concept in a variety of bioinformatics applications. We present a new method for hierarchical clustering of data, called the mutual information clustering (MIC) algorithm. It uses mutual information (MI) as a similarity measure and exploits its grouping property: the MI between three objects X, Y, and Z is equal to the sum of the MI between X and Y, plus the MI between Z and the combined object (XY). Results: We use this both in the Shannon (probabilistic) version of information theory, where the "objects" are probability distributions represented by random samples, and in the Kolmogorov (algorithmic) version, where the "objects" are symbol sequences. We apply our method to the construction of mammal phylogenetic trees from mitochondrial DNA sequences and we reconstruct the fetal ECG from the output of independent components analysis (ICA) applied to the ECG of a pregnant woman. Availability: The programs for estimation of MI and for clustering (probabilistic version) are available at http://www.fz-juelich.de/nic/cs/software
[ { "created": "Fri, 28 Nov 2003 17:04:26 GMT", "version": "v1" }, { "created": "Mon, 1 Dec 2003 07:37:34 GMT", "version": "v2" } ]
2007-05-23
[ [ "Kraskov", "Alexander", "" ], [ "Stögbauer", "Harald", "" ], [ "Andrzejak", "Ralph G.", "" ], [ "Grassberger", "Peter", "" ] ]
Motivation: Clustering is a frequently used concept in a variety of bioinformatics applications. We present a new method for hierarchical clustering of data called the mutual information clustering (MIC) algorithm. It uses mutual information (MI) as a similarity measure and exploits its grouping property: the MI between three objects X, Y, and Z is equal to the sum of the MI between X and Y, plus the MI between Z and the combined object (XY). Results: We use this both in the Shannon (probabilistic) version of information theory, where the "objects" are probability distributions represented by random samples, and in the Kolmogorov (algorithmic) version, where the "objects" are symbol sequences. We apply our method to the construction of mammal phylogenetic trees from mitochondrial DNA sequences, and we reconstruct the fetal ECG from the output of independent component analysis (ICA) applied to the ECG of a pregnant woman. Availability: The programs for estimation of MI and for clustering (probabilistic version) are available at http://www.fz-juelich.de/nic/cs/software
q-bio/0511002
Simone Pigolotti
S. Pigolotti, A. Flammini, M.Marsili, and A.Maritan
Species lifetime distribution for simple models of ecologies
19 pages, 2 figures
PNAS (2005) 102: pp. 15747-15751
10.1073/pnas.0502648102
null
q-bio.PE
null
Interpretation of empirical results based on taxa lifetime distributions shows apparently conflicting results. Species' lifetimes are reported to be exponentially distributed, whereas higher-order taxa, such as families or genera, follow a broader distribution, compatible with power-law decay. We show that both of these observations are consistent with a simple evolutionary model that does not require specific assumptions on species interaction. The model provides a zero-order description of the dynamics of ecological communities, and its species lifetime distribution can be computed exactly. Different behaviors are found: an initial $t^{-3/2}$ power law, emerging from a random-walk type of dynamics, which crosses over to a steeper $t^{-2}$ branching-process-like regime and is finally cut off by an exponential decay which becomes weaker and weaker as the total population increases. Sampling effects can also be taken into account and shown to be relevant: if species in the fossil record were sampled according to the Fisher log-series distribution, lifetimes should be distributed according to a $t^{-1}$ power law. Such variability of behaviors in a simple model, combined with the scarcity of the data available, casts serious doubt on the possibility of validating theories of evolution on the basis of species lifetime data.
[ { "created": "Wed, 2 Nov 2005 15:15:26 GMT", "version": "v1" } ]
2009-11-11
[ [ "Pigolotti", "S.", "" ], [ "Flammini", "A.", "" ], [ "Marsili", "M.", "" ], [ "Maritan", "A.", "" ] ]
Interpretation of empirical results based on taxa lifetime distributions shows apparently conflicting results. Species' lifetimes are reported to be exponentially distributed, whereas higher-order taxa, such as families or genera, follow a broader distribution, compatible with power-law decay. We show that both of these observations are consistent with a simple evolutionary model that does not require specific assumptions on species interaction. The model provides a zero-order description of the dynamics of ecological communities, and its species lifetime distribution can be computed exactly. Different behaviors are found: an initial $t^{-3/2}$ power law, emerging from a random-walk type of dynamics, which crosses over to a steeper $t^{-2}$ branching-process-like regime and is finally cut off by an exponential decay which becomes weaker and weaker as the total population increases. Sampling effects can also be taken into account and shown to be relevant: if species in the fossil record were sampled according to the Fisher log-series distribution, lifetimes should be distributed according to a $t^{-1}$ power law. Such variability of behaviors in a simple model, combined with the scarcity of the data available, casts serious doubt on the possibility of validating theories of evolution on the basis of species lifetime data.
0908.0657
Ranjith Padinhateeri
Padinhateeri Ranjith, Kirone Mallick, Jean-Francois Joanny, David Lacoste
Role of ATP-hydrolysis in the dynamics of a single actin filament
To appear in Biophysical Journal (2010)
Biophys. J, 98, 1418 (2010)
10.1016/j.bpj.2009.12.4306
null
q-bio.BM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the stochastic dynamics of growth and shrinkage of single actin filaments taking into account insertion, removal, and ATP hydrolysis of subunits either according to the vectorial mechanism or to the random mechanism. In a previous work, we developed a model for a single actin or microtubule filament where hydrolysis occurred according to the vectorial mechanism: the filament could grow only from one end, and was in contact with a reservoir of monomers. Here we extend this approach in several ways, by including the dynamics of both ends and by comparing two possible mechanisms of ATP hydrolysis. Our emphasis is mainly on two possible limiting models for the mechanism of hydrolysis within a single filament, namely the vectorial or the random model. We propose a set of experiments to test the nature of the precise mechanism of hydrolysis within actin filaments.
[ { "created": "Wed, 5 Aug 2009 12:39:40 GMT", "version": "v1" }, { "created": "Thu, 7 Jan 2010 07:04:51 GMT", "version": "v2" } ]
2015-05-13
[ [ "Ranjith", "Padinhateeri", "" ], [ "Mallick", "Kirone", "" ], [ "Joanny", "Jean-Francois", "" ], [ "Lacoste", "David", "" ] ]
We study the stochastic dynamics of growth and shrinkage of single actin filaments taking into account insertion, removal, and ATP hydrolysis of subunits either according to the vectorial mechanism or to the random mechanism. In a previous work, we developed a model for a single actin or microtubule filament where hydrolysis occurred according to the vectorial mechanism: the filament could grow only from one end, and was in contact with a reservoir of monomers. Here we extend this approach in several ways, by including the dynamics of both ends and by comparing two possible mechanisms of ATP hydrolysis. Our emphasis is mainly on two possible limiting models for the mechanism of hydrolysis within a single filament, namely the vectorial or the random model. We propose a set of experiments to test the nature of the precise mechanism of hydrolysis within actin filaments.
1805.05453
Marina Voinova V
Marina V Voinova
Modeling water transport processes in dialysis
60 pages, review
null
null
null
q-bio.TO cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical modeling is an important theoretical tool that provides researchers with a quantification of the permeability of dialyzing systems in renal replacement therapy. In this paper we give a short review of the most successful theoretical approaches and refer to the corresponding experimental methods for studying these phenomena in both biological and synthetic filters in dialysis. Two levels of modeling of fluid and solute transport are considered in the review: thermodynamic and kinetic modeling of hemodialysis and peritoneal dialysis. A brief account of hindered diffusion across cake layers formed due to the fouling of membrane filters is also given.
[ { "created": "Fri, 11 May 2018 16:34:45 GMT", "version": "v1" } ]
2018-05-16
[ [ "Voinova", "Marina V", "" ] ]
Mathematical modeling is an important theoretical tool that provides researchers with a quantification of the permeability of dialyzing systems in renal replacement therapy. In this paper we give a short review of the most successful theoretical approaches and refer to the corresponding experimental methods for studying these phenomena in both biological and synthetic filters in dialysis. Two levels of modeling of fluid and solute transport are considered in the review: thermodynamic and kinetic modeling of hemodialysis and peritoneal dialysis. A brief account of hindered diffusion across cake layers formed due to the fouling of membrane filters is also given.
2403.18862
Sebastien Dam
S\'ebastien Dam (UR, Inria, CNRS, IRISA, EMPENN), Jean-Marie Batail (CHGR), Gabriel H Robert (UR, Inria, CNRS, IRISA, EMPENN, CHGR), Dominique Drapier (CHGR), Pierre Maurel (UR, Inria, CNRS, IRISA, EMPENN), Julie Coloigner (UR, Inria, CNRS, IRISA, EMPENN)
Structural Brain Connectivity and Treatment Improvement in Mood Disorder
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The treatment of depressive episodes is well established, with clearly demonstrated effectiveness of antidepressants and psychotherapies. However, more than one-third of depressed patients do not respond to treatment. Identifying the brain structural basis of treatment-resistant depression could prevent useless pharmacological prescriptions, adverse events, and lost therapeutic opportunities. Methods: Using diffusion magnetic resonance imaging, we performed structural connectivity analyses on a cohort of 154 patients with mood disorder (MD) and 77 sex- and age-matched healthy control (HC) participants. To assess illness improvement, the MD patients went through two clinical interviews, at baseline and at 6-month follow-up, and were classified based on the Clinical Global Impression-Improvement score into improved or not-improved. First, threshold-free network-based statistics were conducted to measure the differences in regional network architecture. Second, nonparametric permutation tests were performed on topological metrics based on graph theory to examine differences in connectome organization. Results: The threshold-free network-based statistics revealed impaired connections involving regions of the basal ganglia in MD patients compared to HC. A significant increase of local efficiency and clustering coefficient was found in the lingual gyrus, insula and amygdala in the MD group. Compared with the not-improved patients, the improved patients displayed significantly reduced network integration and segregation, predominantly in the default-mode regions, including the precuneus, middle temporal lobe and rostral anterior cingulate. Conclusions: This study highlights the involvement of regions belonging to the basal ganglia, the fronto-limbic network and the default mode network, leading to a better understanding of MD disease and its unfavorable outcome.
[ { "created": "Fri, 22 Mar 2024 08:20:00 GMT", "version": "v1" } ]
2024-03-29
[ [ "Dam", "Sébastien", "", "UR, Inria, CNRS, IRISA, EMPENN" ], [ "Batail", "Jean-Marie", "", "CHGR" ], [ "Robert", "Gabriel H", "", "UR, Inria, CNRS, IRISA, EMPENN, CHGR" ], [ "Drapier", "Dominique", "", "CHGR" ], [ "Maurel", "Pierre", "", "UR, Inria, CNRS, IRISA, EMPENN" ], [ "Coloigner", "Julie", "", "UR, Inria, CNRS, IRISA, EMPENN" ] ]
Background: The treatment of depressive episodes is well established, with clearly demonstrated effectiveness of antidepressants and psychotherapies. However, more than one-third of depressed patients do not respond to treatment. Identifying the brain structural basis of treatment-resistant depression could prevent useless pharmacological prescriptions, adverse events, and lost therapeutic opportunities. Methods: Using diffusion magnetic resonance imaging, we performed structural connectivity analyses on a cohort of 154 patients with mood disorder (MD) and 77 sex- and age-matched healthy control (HC) participants. To assess illness improvement, the MD patients went through two clinical interviews, at baseline and at 6-month follow-up, and were classified based on the Clinical Global Impression-Improvement score into improved or not-improved. First, threshold-free network-based statistics were conducted to measure the differences in regional network architecture. Second, nonparametric permutation tests were performed on topological metrics based on graph theory to examine differences in connectome organization. Results: The threshold-free network-based statistics revealed impaired connections involving regions of the basal ganglia in MD patients compared to HC. A significant increase of local efficiency and clustering coefficient was found in the lingual gyrus, insula and amygdala in the MD group. Compared with the not-improved patients, the improved patients displayed significantly reduced network integration and segregation, predominantly in the default-mode regions, including the precuneus, middle temporal lobe and rostral anterior cingulate. Conclusions: This study highlights the involvement of regions belonging to the basal ganglia, the fronto-limbic network and the default mode network, leading to a better understanding of MD disease and its unfavorable outcome.
1906.05584
Antonio de Candia
S. Scarpetta, A. de Candia
Information capacity of a network of spiking neurons
Accepted for publication in Physica A
null
10.1016/j.physa.2019.123681
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a model of spiking neurons, with recurrent connections that result from learning a set of spatio-temporal patterns with a spike-timing dependent plasticity rule and a global inhibition. We investigate the ability of the network to store and selectively replay multiple patterns of spikes, with a combination of spatial population and phase-of-spike code. Each neuron in a pattern is characterized by a binary variable determining if the neuron is active in the pattern, and a phase-lag variable representing the spike-timing order among the active units. After the learning stage, we study the dynamics of the network induced by a brief cue stimulation, and verify that the network is able to selectively replay the pattern correctly and persistently. We calculate the information capacity of the network, defined as the maximum number of patterns that can be encoded in the network times the number of bits carried by each pattern, normalized by the number of synapses, and find that it can reach a value $\alpha_\text{max}\simeq 0.27$, similar to the one of sequence processing neural networks, and almost double of the capacity of the static Hopfield model. We study the dependence of the capacity on the global inhibition, connection strength (or neuron threshold) and fraction of neurons participating to the patterns. The results show that a dual population and temporal coding can be optimal for the capacity of an associative memory.
[ { "created": "Thu, 13 Jun 2019 09:56:33 GMT", "version": "v1" }, { "created": "Sun, 24 Nov 2019 21:34:39 GMT", "version": "v2" } ]
2020-04-22
[ [ "Scarpetta", "S.", "" ], [ "de Candia", "A.", "" ] ]
We study a model of spiking neurons, with recurrent connections that result from learning a set of spatio-temporal patterns with a spike-timing dependent plasticity rule and a global inhibition. We investigate the ability of the network to store and selectively replay multiple patterns of spikes, with a combination of spatial population and phase-of-spike code. Each neuron in a pattern is characterized by a binary variable determining if the neuron is active in the pattern, and a phase-lag variable representing the spike-timing order among the active units. After the learning stage, we study the dynamics of the network induced by a brief cue stimulation, and verify that the network is able to selectively replay the pattern correctly and persistently. We calculate the information capacity of the network, defined as the maximum number of patterns that can be encoded in the network times the number of bits carried by each pattern, normalized by the number of synapses, and find that it can reach a value $\alpha_\text{max}\simeq 0.27$, similar to the one of sequence processing neural networks, and almost double of the capacity of the static Hopfield model. We study the dependence of the capacity on the global inhibition, connection strength (or neuron threshold) and fraction of neurons participating to the patterns. The results show that a dual population and temporal coding can be optimal for the capacity of an associative memory.
1905.10441
Japan Patel
Japan K. Patel, Richard Vasques, Barry D. Ganapol
Towards a Multiphysics Model for Tumor Response to Combined-Hyperthermia-Radiotherapy Treatment
10 pages, 3 figures, submitted to ANS Topical Meeting
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a multiphysics-based model to predict the response of localized tumors to combined-hyperthermia-radiotherapy (CHR) treatment. This procedure combines hyperthermia (tumor heating) with standard radiotherapy to improve the efficacy of the overall treatment. In addition to directly killing tumor cells, tumor heating alters several parameters within the tumor microenvironment. This leads to radiosensitization, which improves the performance of radiotherapy while reducing the side-effects of excess radiation in the surrounding normal tissue. Existing tools to model this kind of treatment consider each of the physics separately. The model presented in this paper accounts for the synergy between hyperthermia and radiotherapy, providing a more realistic and holistic approach to simulating CHR treatment. Our model couples radiation transport and heat transfer with cell population dynamics.
[ { "created": "Fri, 10 May 2019 20:46:27 GMT", "version": "v1" }, { "created": "Wed, 3 Jul 2019 18:36:12 GMT", "version": "v2" } ]
2019-07-05
[ [ "Patel", "Japan K.", "" ], [ "Vasques", "Richard", "" ], [ "Ganapol", "Barry D.", "" ] ]
We develop a multiphysics-based model to predict the response of localized tumors to combined-hyperthermia-radiotherapy (CHR) treatment. This procedure combines hyperthermia (tumor heating) with standard radiotherapy to improve the efficacy of the overall treatment. In addition to directly killing tumor cells, tumor heating alters several parameters within the tumor microenvironment. This leads to radiosensitization, which improves the performance of radiotherapy while reducing the side-effects of excess radiation in the surrounding normal tissue. Existing tools to model this kind of treatment consider each of the physics separately. The model presented in this paper accounts for the synergy between hyperthermia and radiotherapy, providing a more realistic and holistic approach to simulating CHR treatment. Our model couples radiation transport and heat transfer with cell population dynamics.
1501.00421
Arianna Bianchi
Arianna Bianchi, Kevin J. Painter, Jonathan A. Sherratt
A Mathematical Model for Lymphangiogenesis in Normal and Diabetic Wounds
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several studies suggest that one possible cause of impaired wound healing is failed or insufficient lymphangiogenesis, that is, the formation of new lymphatic capillaries. Although many mathematical models have been developed to describe the formation of blood capillaries (angiogenesis), very few have been proposed for the regeneration of the lymphatic network. Moreover, lymphangiogenesis is markedly distinct from angiogenesis, occurring at different times and in a different manner. Here a model of five ordinary differential equations is presented to describe the formation of lymphatic capillaries following a skin wound. The variables represent different cell densities and growth factor concentrations, and where possible the parameters are estimated from experimental and clinical data. The system is then solved numerically and the results are compared with the available biological literature. Finally, a parameter sensitivity analysis of the model is taken as a starting point for suggesting new therapeutic approaches targeting the enhancement of lymphangiogenesis in diabetic wounds. The work provides a deeper understanding of the phenomenon in question, clarifying the main factors involved. In particular, the balance between TGF-$\beta$ and VEGF levels, rather than their absolute values, is identified as crucial to effective lymphangiogenesis. In addition, the results indicate lowering the macrophage-mediated activation of TGF-$\beta$ and increasing the basal lymphatic endothelial cell growth rate, \emph{inter alia}, as potential treatments. It is hoped the findings of this paper may be considered in the development of future experiments investigating novel lymphangiogenic therapies.
[ { "created": "Fri, 2 Jan 2015 15:03:47 GMT", "version": "v1" } ]
2015-01-05
[ [ "Bianchi", "Arianna", "" ], [ "Painter", "Kevin J.", "" ], [ "Sherratt", "Jonathan A.", "" ] ]
Several studies suggest that one possible cause of impaired wound healing is failed or insufficient lymphangiogenesis, that is, the formation of new lymphatic capillaries. Although many mathematical models have been developed to describe the formation of blood capillaries (angiogenesis), very few have been proposed for the regeneration of the lymphatic network. Moreover, lymphangiogenesis is markedly distinct from angiogenesis, occurring at different times and in a different manner. Here a model of five ordinary differential equations is presented to describe the formation of lymphatic capillaries following a skin wound. The variables represent different cell densities and growth factor concentrations, and where possible the parameters are estimated from experimental and clinical data. The system is then solved numerically and the results are compared with the available biological literature. Finally, a parameter sensitivity analysis of the model is taken as a starting point for suggesting new therapeutic approaches targeting the enhancement of lymphangiogenesis in diabetic wounds. The work provides a deeper understanding of the phenomenon in question, clarifying the main factors involved. In particular, the balance between TGF-$\beta$ and VEGF levels, rather than their absolute values, is identified as crucial to effective lymphangiogenesis. In addition, the results indicate lowering the macrophage-mediated activation of TGF-$\beta$ and increasing the basal lymphatic endothelial cell growth rate, \emph{inter alia}, as potential treatments. It is hoped the findings of this paper may be considered in the development of future experiments investigating novel lymphangiogenic therapies.
0811.0115
German Andres Enciso
Winfried Just, German Enciso
Extremely chaotic Boolean networks
10 pages for the main article, 33 pages for detailed proofs of the main results, 4 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is an increasingly important problem to study conditions on the structure of a network that guarantee a given behavior for its underlying dynamical system. In this paper we report that a Boolean network may fall within the chaotic regime, even under the simultaneous assumption of several conditions which in randomized studies have been separately shown to correlate with ordered behavior. These properties include using at most two inputs for every variable, using biased and canalyzing regulatory functions, and restricting the number of negative feedback loops. We also prove for n-dimensional Boolean networks that if in addition the number of outputs for each variable is bounded and there exist periodic orbits of length c^n for c sufficiently close to 2, any network with these properties must have a large proportion of variables that simply copy previous values of other variables. Such systems share a structural similarity to a relatively small Turing machine acting on one or several tapes.
[ { "created": "Sat, 1 Nov 2008 21:57:49 GMT", "version": "v1" } ]
2008-11-04
[ [ "Just", "Winfried", "" ], [ "Enciso", "German", "" ] ]
It is an increasingly important problem to study conditions on the structure of a network that guarantee a given behavior for its underlying dynamical system. In this paper we report that a Boolean network may fall within the chaotic regime, even under the simultaneous assumption of several conditions which in randomized studies have been separately shown to correlate with ordered behavior. These properties include using at most two inputs for every variable, using biased and canalyzing regulatory functions, and restricting the number of negative feedback loops. We also prove for n-dimensional Boolean networks that if in addition the number of outputs for each variable is bounded and there exist periodic orbits of length c^n for c sufficiently close to 2, any network with these properties must have a large proportion of variables that simply copy previous values of other variables. Such systems share a structural similarity to a relatively small Turing machine acting on one or several tapes.
0908.3037
Luis David Garcia-Puente
Elena Dimitrova, Luis David Garcia-Puente, Franziska Hinkelmann, Abdul S. Jarrah, Reinhard Laubenbacher, Brandilyn Stigler, Michael Stillman, and Paola Vera-Licona
Parameter estimation for Boolean models of biological networks
Web interface of the software is available at http://polymath.vbi.vt.edu/polynome/
Theoretical Computer Science 412 (2011) 2816-2826
10.1016/j.tcs.2010.04.034
null
q-bio.MN q-bio.OT q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Boolean networks have long been used as models of molecular networks and play an increasingly important role in systems biology. This paper describes a software package, Polynome, offered as a web service, that helps users construct Boolean network models based on experimental data and biological input. The key feature is a discrete analog of parameter estimation for continuous models. With only experimental data as input, the software can be used as a tool for reverse-engineering of Boolean network models from experimental time course data.
[ { "created": "Fri, 21 Aug 2009 01:13:13 GMT", "version": "v1" } ]
2019-07-10
[ [ "Dimitrova", "Elena", "" ], [ "Garcia-Puente", "Luis David", "" ], [ "Hinkelmann", "Franziska", "" ], [ "Jarrah", "Abdul S.", "" ], [ "Laubenbacher", "Reinhard", "" ], [ "Stigler", "Brandilyn", "" ], [ "Stillman", "Michael", "" ], [ "Vera-Licona", "Paola", "" ] ]
Boolean networks have long been used as models of molecular networks and play an increasingly important role in systems biology. This paper describes a software package, Polynome, offered as a web service, that helps users construct Boolean network models based on experimental data and biological input. The key feature is a discrete analog of parameter estimation for continuous models. With only experimental data as input, the software can be used as a tool for reverse-engineering of Boolean network models from experimental time course data.
0711.3456
Eugene Shakhnovich
Muyoung Heo, Konstantin B. Zeldovich, Eugene I. Shakhnovich
Emergence of clonal selection and affinity maturation in an ab initio microscopic model of immunity
null
null
null
null
q-bio.BM q-bio.PE
null
Mechanisms of immunity, and of host-pathogen interactions in general, are among the most fundamental problems of medicine, ecology, and evolution studies. Here, we present a microscopic, protein-level, sequence-based model of the immune system, with explicitly defined interactions between host and pathogen proteins. Simulations of this model show that possible outcomes of the infection (extinction of cells, survival with complete elimination of viruses, or chronic infection with continuous coexistence of cells and viruses) crucially depend on the mutation rates of the viral and immunoglobulin proteins. Infection is always lethal if the virus mutation rate exceeds a certain threshold. Potent immunoglobulins are discovered in this model via clonal selection and affinity maturation. Surviving cells acquire lasting immunity against subsequent infection by the same virus strain. As a second line of defense, cells develop apoptosis-like behavior by reducing their lifetimes to eliminate viruses. These results demonstrate the feasibility of microscopic sequence-based models of the immune system, where the population dynamics of the evolving B-cells is explicitly tied to the molecular properties of their proteins.
[ { "created": "Wed, 21 Nov 2007 20:46:58 GMT", "version": "v1" } ]
2007-11-22
[ [ "Heo", "Muyoung", "" ], [ "Zeldovich", "Konstantin B.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
Mechanisms of immunity, and of host-pathogen interactions in general, are among the most fundamental problems of medicine, ecology, and evolution studies. Here, we present a microscopic, protein-level, sequence-based model of the immune system, with explicitly defined interactions between host and pathogen proteins. Simulations of this model show that possible outcomes of the infection (extinction of cells, survival with complete elimination of viruses, or chronic infection with continuous coexistence of cells and viruses) crucially depend on the mutation rates of the viral and immunoglobulin proteins. Infection is always lethal if the virus mutation rate exceeds a certain threshold. Potent immunoglobulins are discovered in this model via clonal selection and affinity maturation. Surviving cells acquire lasting immunity against subsequent infection by the same virus strain. As a second line of defense, cells develop apoptosis-like behavior by reducing their lifetimes to eliminate viruses. These results demonstrate the feasibility of microscopic sequence-based models of the immune system, where the population dynamics of the evolving B-cells is explicitly tied to the molecular properties of their proteins.
q-bio/0504029
Pablo Echenique
J. L. Alonso and Pablo Echenique
A physically meaningful method for the comparison of potential energy functions
30 pages, 7 figures, LaTeX, BibTeX. v2: A misspelling in the author's name has been corrected. v3: A new application of the method has been added at the end of section 9 and minor modifications have also been made in other sections. v4: Journal reference and minor corrections added
J. Comp. Chem. 27 (2006) 238-252
10.1002/jcc.20337
null
q-bio.QM cond-mat.soft q-bio.BM
null
In the study of the conformational behavior of complex systems, such as proteins, several related statistical measures are commonly used to compare two different potential energy functions. Among them, Pearson's correlation coefficient r has no units and allows only semi-quantitative statements to be made. Those that do have units of energy and whose value may be compared to a physically relevant scale, such as the root mean square deviation (RMSD), the mean error of the energies (ER), the standard deviation of the error (SDER) or the mean absolute error (AER), overestimate the distance between potentials. Moreover, their precise statistical meaning is far from clear. In this article, a new measure of the distance between potential energy functions is defined which overcomes the aforementioned difficulties. In addition, its precise physical meaning is discussed, the important issue of its additivity is investigated and some possible applications are proposed. Finally, two of these applications are illustrated with practical examples: the study of the van der Waals energy, as implemented in CHARMM, in the Trp-Cage protein (PDB code 1L2Y) and the comparison of different levels of theory in the ab initio study of the Ramachandran map of the model peptide HCO-L-Ala-NH2.
[ { "created": "Thu, 21 Apr 2005 12:05:58 GMT", "version": "v1" }, { "created": "Wed, 27 Apr 2005 16:03:26 GMT", "version": "v2" }, { "created": "Thu, 14 Jul 2005 10:39:15 GMT", "version": "v3" }, { "created": "Mon, 18 Jul 2005 09:28:25 GMT", "version": "v4" }, { "created": "Wed, 7 Dec 2005 12:43:53 GMT", "version": "v5" } ]
2007-12-19
[ [ "Alonso", "J. L.", "" ], [ "Echenique", "Pablo", "" ] ]
In the study of the conformational behavior of complex systems, such as proteins, several related statistical measures are commonly used to compare two different potential energy functions. Among them, Pearson's correlation coefficient r has no units and allows only semi-quantitative statements to be made. Those that do have units of energy and whose value may be compared to a physically relevant scale, such as the root mean square deviation (RMSD), the mean error of the energies (ER), the standard deviation of the error (SDER) or the mean absolute error (AER), overestimate the distance between potentials. Moreover, their precise statistical meaning is far from clear. In this article, a new measure of the distance between potential energy functions is defined which overcomes the aforementioned difficulties. In addition, its precise physical meaning is discussed, the important issue of its additivity is investigated and some possible applications are proposed. Finally, two of these applications are illustrated with practical examples: the study of the van der Waals energy, as implemented in CHARMM, in the Trp-Cage protein (PDB code 1L2Y) and the comparison of different levels of theory in the ab initio study of the Ramachandran map of the model peptide HCO-L-Ala-NH2.
0805.4087
Stefan Braunewell
Stefan Braunewell, Stefan Bornholdt
Reliability of regulatory networks and its evolution
11 pages, 12 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of reliability of the dynamics in biological regulatory networks is studied in the framework of a generalized Boolean network model with continuous timing and noise. Using well-known artificial genetic networks such as the repressilator, we discuss concepts of reliability of rhythmic attractors. In a simple evolution process we investigate how overall network structure affects the reliability of the dynamics. In the course of the evolution, networks are selected for reliable dynamics. We find that most networks can be easily evolved towards reliable functioning while preserving the original function.
[ { "created": "Tue, 27 May 2008 10:17:21 GMT", "version": "v1" } ]
2008-05-28
[ [ "Braunewell", "Stefan", "" ], [ "Bornholdt", "Stefan", "" ] ]
The problem of reliability of the dynamics in biological regulatory networks is studied in the framework of a generalized Boolean network model with continuous timing and noise. Using well-known artificial genetic networks such as the repressilator, we discuss concepts of reliability of rhythmic attractors. In a simple evolution process we investigate how overall network structure affects the reliability of the dynamics. In the course of the evolution, networks are selected for reliable dynamics. We find that most networks can be easily evolved towards reliable functioning while preserving the original function.
2101.08346
Neda Shafiee
Neda Shafiee, Mahsa Dadar, Simon Ducharme, D. Louis Collins
Automatic prediction of cognitive and functional decline can significantly decrease the number of subjects required for clinical trials in early Alzheimer's disease
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
INTRODUCTION: Heterogeneity in the progression of Alzheimer's disease makes it challenging to predict the rate of cognitive and functional decline for individual patients. Tools for short-term prediction could help enrich clinical trial designs and focus prevention strategies on the most at-risk patients. METHOD: We built a prognostic model using baseline cognitive scores and MRI-based features to determine which subjects with mild cognitive impairment remained stable and which functionally declined (measured by a two-point increase in CDR-SB) over 2 and 3-year follow-up periods, periods typical of the length of clinical trials. RESULTS: Combining both sets of features yields 77% accuracy (81% sensitivity and 75% specificity) to predict cognitive decline at 2 years (74% accuracy at 3 years with 75% sensitivity and 73% specificity). Using this tool to select trial participants yields a 3.8-fold decrease in the required sample size for a 2-year study (2.8-fold decrease for a 3-year study) for a hypothesized 25% treatment effect to reduce cognitive decline. DISCUSSION: This cohort enrichment tool could accelerate treatment development by increasing power in clinical trials.
[ { "created": "Wed, 20 Jan 2021 22:31:25 GMT", "version": "v1" } ]
2021-01-22
[ [ "Shafiee", "Neda", "" ], [ "Dadar", "Mahsa", "" ], [ "Ducharme", "Simon", "" ], [ "Collins", "D. Louis", "" ] ]
INTRODUCTION: Heterogeneity in the progression of Alzheimer's disease makes it challenging to predict the rate of cognitive and functional decline for individual patients. Tools for short-term prediction could help enrich clinical trial designs and focus prevention strategies on the most at-risk patients. METHOD: We built a prognostic model using baseline cognitive scores and MRI-based features to determine which subjects with mild cognitive impairment remained stable and which functionally declined (measured by a two-point increase in CDR-SB) over 2 and 3-year follow-up periods, periods typical of the length of clinical trials. RESULTS: Combining both sets of features yields 77% accuracy (81% sensitivity and 75% specificity) to predict cognitive decline at 2 years (74% accuracy at 3 years with 75% sensitivity and 73% specificity). Using this tool to select trial participants yields a 3.8-fold decrease in the required sample size for a 2-year study (2.8-fold decrease for a 3-year study) for a hypothesized 25% treatment effect to reduce cognitive decline. DISCUSSION: This cohort enrichment tool could accelerate treatment development by increasing power in clinical trials.
2307.02287
Cyril Rauch
Cyril Rauch, Panagiota Kyratzi and Andras Paldi
Genomic Informational Field Theory (GIFT) to characterize genotypes involved in large phenotypic fluctuations
51 pages (Main Text: pages 1-35 inc. references. Appendices: pages 35-51), 4 figures in the main text, 2 figures in the appendices
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Based on the normal distribution and its properties, i.e., average and variance, Fisher's works have provided a conceptual framework to identify genotype-phenotype associations. While Fisher's intuition has proved fruitful over the past century, the current demands for higher mapping precision have led to the formulation of a new genotype-phenotype association method, a.k.a. GIFT (Genomic Informational Field Theory). Not only is the method more powerful in extracting information from genotype and phenotype datasets, GIFT can also deal with any phenotype distribution density function. Here we apply GIFT to a hypothetical Cauchy-distributed phenotype. As opposed to the normal distribution, which restricts fluctuations to a finite variance defined by the bulk of the distribution, the Cauchy distribution embraces large phenotypic fluctuations; as a result, averages and variances of Cauchy-distributed phenotypes cannot be defined mathematically. While classic genotype-phenotype association methods (GWAS) are unable to function without a proper average and variance, it is demonstrated here that GIFT can associate genotype to phenotype in this case. As phenotypic plasticity, i.e., phenotypic fluctuation, is central to surviving sudden environmental changes, applying GIFT determines in this case the unique characteristic of the genotype that permits the evolution of biallelic organisms to take place.
[ { "created": "Wed, 5 Jul 2023 13:40:53 GMT", "version": "v1" } ]
2023-07-06
[ [ "Rauch", "Cyril", "" ], [ "Kyratzi", "Panagiota", "" ], [ "Paldi", "Andras", "" ] ]
Based on the normal distribution and its properties, i.e., average and variance, Fisher's works have provided a conceptual framework to identify genotype-phenotype associations. While Fisher's intuition has proved fruitful over the past century, the current demands for higher mapping precision have led to the formulation of a new genotype-phenotype association method, a.k.a. GIFT (Genomic Informational Field Theory). Not only is the method more powerful in extracting information from genotype and phenotype datasets, GIFT can also deal with any phenotype distribution density function. Here we apply GIFT to a hypothetical Cauchy-distributed phenotype. As opposed to the normal distribution, which restricts fluctuations to a finite variance defined by the bulk of the distribution, the Cauchy distribution embraces large phenotypic fluctuations; as a result, averages and variances of Cauchy-distributed phenotypes cannot be defined mathematically. While classic genotype-phenotype association methods (GWAS) are unable to function without a proper average and variance, it is demonstrated here that GIFT can associate genotype to phenotype in this case. As phenotypic plasticity, i.e., phenotypic fluctuation, is central to surviving sudden environmental changes, applying GIFT determines in this case the unique characteristic of the genotype that permits the evolution of biallelic organisms to take place.
2309.11513
Zohreh Shams
Zohreh Shams
Gene Expression Patterns of CsZCD and Apocarotenoid Accumulation during Saffron Stigma Development
null
null
10.47191/ijpbms/v3-i9-04
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Crocus sativus L., otherwise known as saffron, is a highly prized plant due to its unique triploid capability and elongated stigmas, which contribute to its status as the most costly spice globally. The color and taste properties of saffron are linked to carotenoid elements including cis- and trans-crocin, picrocrocin, and safranal. In the research carried out, we dedicated our attention to the gene CsZCD, an important player in the formation of apocarotenoids. Through the application of real-time polymerase chain reaction to RNA purified from saffron stigmas at various growth phases, it was determined that the peak expression of the CsZCD gene coincided with the red stage, which is associated with the highest concentration of apocarotenoids. The data showed a 2.69-fold enhancement in CsZCD gene expression during the red phase, whereas 0.90-fold and 0.69-fold reductions were noted at the stages characterized by orange and yellow hues, respectively. A noteworthy observation was that CsZCD's expression was three times that of the CsTUB gene. Additionally, relative to CsTUB, CsLYC displayed 0.7-fold and 0.3-fold expression. Our investigation provides insight into the governance of CsZCD during stigma maturation and its possible influence on the fluctuation in apocarotenoid content. These discoveries carry significance for the industrial production of saffron spice and underscore the importance of additional studies on pivotal genes participating in the synthesis of apocarotenoids.
[ { "created": "Fri, 15 Sep 2023 16:10:41 GMT", "version": "v1" } ]
2023-09-22
[ [ "Shams", "Zohreh", "" ] ]
Crocus sativus L., otherwise known as saffron, is a highly prized plant due to its unique triploid capability and elongated stigmas, which contribute to its status as the most costly spice globally. The color and taste properties of saffron are linked to carotenoid elements including cis- and trans-crocin, picrocrocin, and safranal. In the research carried out, we dedicated our attention to the gene CsZCD, an important player in the formation of apocarotenoids. Through the application of real-time polymerase chain reaction to RNA purified from saffron stigmas at various growth phases, it was determined that the peak expression of the CsZCD gene coincided with the red stage, which is associated with the highest concentration of apocarotenoids. The data showed a 2.69-fold enhancement in CsZCD gene expression during the red phase, whereas 0.90-fold and 0.69-fold reductions were noted at the stages characterized by orange and yellow hues, respectively. A noteworthy observation was that CsZCD's expression was three times that of the CsTUB gene. Additionally, relative to CsTUB, CsLYC displayed 0.7-fold and 0.3-fold expression. Our investigation provides insight into the governance of CsZCD during stigma maturation and its possible influence on the fluctuation in apocarotenoid content. These discoveries carry significance for the industrial production of saffron spice and underscore the importance of additional studies on pivotal genes participating in the synthesis of apocarotenoids.
2402.17621
Natasha K. Dudek
Natasha K. Dudek, Mariam Chakhvadze, Saba Kobakhidze, Omar Kantidze, Yuriy Gankin
Supervised machine learning for microbiomics: bridging the gap between current and best practices
25 pages, 5 figures
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Machine learning (ML) is set to accelerate innovations in clinical microbiomics, such as in disease diagnostics and prognostics. This will require high-quality, reproducible, interpretable workflows whose predictive capabilities meet or exceed the high thresholds set for clinical tools by regulatory agencies. Here, we capture a snapshot of current practices in the application of supervised ML to microbiomics data, through an in-depth analysis of 100 peer-reviewed journal articles published in 2021-2022. We apply a data-driven approach to steer discussion of the merits of varied approaches to experimental design, including key considerations such as how to mitigate the effects of small dataset size while avoiding data leakage. We further provide guidance on how to avoid common experimental design pitfalls that can hurt model performance, trustworthiness, and reproducibility. Discussion is accompanied by an interactive online tutorial that demonstrates foundational principles of ML experimental design, tailored to the microbiomics community. Formalizing community best practices for supervised ML in microbiomics is an important step towards improving the success and efficiency of clinical research, to the benefit of patients and other stakeholders.
[ { "created": "Tue, 27 Feb 2024 15:49:26 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2024 16:39:05 GMT", "version": "v2" } ]
2024-07-25
[ [ "Dudek", "Natasha K.", "" ], [ "Chakhvadze", "Mariam", "" ], [ "Kobakhidze", "Saba", "" ], [ "Kantidze", "Omar", "" ], [ "Gankin", "Yuriy", "" ] ]
Machine learning (ML) is set to accelerate innovations in clinical microbiomics, such as in disease diagnostics and prognostics. This will require high-quality, reproducible, interpretable workflows whose predictive capabilities meet or exceed the high thresholds set for clinical tools by regulatory agencies. Here, we capture a snapshot of current practices in the application of supervised ML to microbiomics data, through an in-depth analysis of 100 peer-reviewed journal articles published in 2021-2022. We apply a data-driven approach to steer discussion of the merits of varied approaches to experimental design, including key considerations such as how to mitigate the effects of small dataset size while avoiding data leakage. We further provide guidance on how to avoid common experimental design pitfalls that can hurt model performance, trustworthiness, and reproducibility. Discussion is accompanied by an interactive online tutorial that demonstrates foundational principles of ML experimental design, tailored to the microbiomics community. Formalizing community best practices for supervised ML in microbiomics is an important step towards improving the success and efficiency of clinical research, to the benefit of patients and other stakeholders.
2203.00628
Zhenyu Yang
Zhenyu Yang, Zongsheng Hu, Hangjie Ji, Kyle Lafata, Scott Floyd, Fang-Fang Yin, Chunhao Wang
A Neural Ordinary Differential Equation Model for Visualizing Deep Neural Network Behaviors in Multi-Parametric MRI based Glioma Segmentation
30 pages, 7 figures, 2 tables
null
null
null
q-bio.QM cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network (DNN) behavior during multi-parametric MRI (mp-MRI) based glioma segmentation as a method to enhance deep learning explainability. Methods: By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we designed a novel deep learning model, neural ODE, in which deep feature extraction was governed by an ODE without explicit expression. The dynamics of 1) MR images after interactions with DNN and 2) segmentation formation can be visualized after solving ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate the utilization of each MRI by DNN towards the final segmentation results. The proposed neural ODE model was demonstrated using 369 glioma patients with a 4-modality mp-MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT). The key MR modalities with significant utilization by DNN were identified based on ACC analysis. Segmentation results by DNN using only the key MR modalities were compared to the ones using all 4 MR modalities. Results: All neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all 4 MR modalities, Dice coefficient of ET (0.784->0.775), TC (0.760->0.758), and WT (0.841->0.837) using the key modalities only had minimal differences without significance. Conclusion: The neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep learning applications.
[ { "created": "Tue, 1 Mar 2022 17:16:41 GMT", "version": "v1" }, { "created": "Wed, 23 Mar 2022 23:26:25 GMT", "version": "v2" } ]
2022-03-25
[ [ "Yang", "Zhenyu", "" ], [ "Hu", "Zongsheng", "" ], [ "Ji", "Hangjie", "" ], [ "Lafata", "Kyle", "" ], [ "Floyd", "Scott", "" ], [ "Yin", "Fang-Fang", "" ], [ "Wang", "Chunhao", "" ] ]
Purpose: To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network (DNN) behavior during multi-parametric MRI (mp-MRI) based glioma segmentation as a method to enhance deep learning explainability. Methods: By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we designed a novel deep learning model, neural ODE, in which deep feature extraction was governed by an ODE without explicit expression. The dynamics of 1) MR images after interactions with DNN and 2) segmentation formation can be visualized after solving ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate the utilization of each MRI by DNN towards the final segmentation results. The proposed neural ODE model was demonstrated using 369 glioma patients with a 4-modality mp-MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT). The key MR modalities with significant utilization by DNN were identified based on ACC analysis. Segmentation results by DNN using only the key MR modalities were compared to the ones using all 4 MR modalities. Results: All neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all 4 MR modalities, Dice coefficient of ET (0.784->0.775), TC (0.760->0.758), and WT (0.841->0.837) using the key modalities only had minimal differences without significance. Conclusion: The neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep learning applications.
1504.00525
Jean-Baptiste Masson dr.
Mohamed El Beheiry, Maxime Dahan and Jean-Baptiste Masson
InferenceMAP: Mapping of Single-Molecule Dynamics with Bayesian Inference
56 pages
null
10.1016/j.bpj.2014.11.2580
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-particle tracking (SPT) grants unprecedented insight into cellular function at the molecular scale [1]. Throughout the cell, the movement of single molecules is generally heterogeneous and complex. Hence, there is an imperative to understand the multi-scale nature of single-molecule dynamics in biological systems. We have previously shown that with high-density SPT, spatial maps of the parameters that dictate molecule motion can be generated to intricately describe cellular environments [2,3,4]. To date, however, there exist no publicly available tools that reconcile trajectory data to generate the aforementioned maps. We address this void in the SPT community with InferenceMAP: an interactive software package that uses a powerful Bayesian method to map the dynamic cellular space experienced by individual biomolecules.
[ { "created": "Thu, 2 Apr 2015 12:32:19 GMT", "version": "v1" } ]
2015-06-24
[ [ "Beheiry", "Mohamed El", "" ], [ "Dahan", "Maxime", "" ], [ "Masson", "Jean-Baptiste", "" ] ]
Single-particle tracking (SPT) grants unprecedented insight into cellular function at the molecular scale [1]. Throughout the cell, the movement of single molecules is generally heterogeneous and complex. Hence, there is an imperative to understand the multi-scale nature of single-molecule dynamics in biological systems. We have previously shown that with high-density SPT, spatial maps of the parameters that dictate molecule motion can be generated to intricately describe cellular environments [2,3,4]. To date, however, there exist no publicly available tools that reconcile trajectory data to generate the aforementioned maps. We address this void in the SPT community with InferenceMAP: an interactive software package that uses a powerful Bayesian method to map the dynamic cellular space experienced by individual biomolecules.
1303.0103
Jonathan Crofts
Jonathan J Crofts and Ernesto Estrada
A statistical mechanics description of environmental variability in metabolic networks
null
null
null
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many of the chemical reactions that take place within a living cell are irreversible. Due to evolutionary pressures, the number of allowable reactions within these systems is highly constrained and thus the resulting metabolic networks display considerable asymmetry. In this paper, we explore possible evolutionary factors pertaining to the reduced symmetry observed in these networks, and demonstrate the important role environmental variability plays in shaping their structural organization. Interpreting the returnability index as an equilibrium constant for a reaction network in equilibrium with a hypothetical reference system enables us to quantify the extent to which a metabolic network is in disequilibrium. Further, by introducing a new directed centrality measure via an extension of the subgraph centrality metric to directed networks, we are able to characterise individual metabolites by their participation within metabolic pathways. To demonstrate these ideas, we study 116 metabolic networks of bacteria. In particular, we find that the equilibrium constant for the metabolic networks decreases significantly in line with variability in bacterial habitats, supporting the view that environmental variability promotes disequilibrium within these biochemical reaction systems.
[ { "created": "Fri, 1 Mar 2013 07:18:09 GMT", "version": "v1" } ]
2013-03-04
[ [ "Crofts", "Jonathan J", "" ], [ "Estrada", "Ernesto", "" ] ]
Many of the chemical reactions that take place within a living cell are irreversible. Due to evolutionary pressures, the number of allowable reactions within these systems is highly constrained and thus the resulting metabolic networks display considerable asymmetry. In this paper, we explore possible evolutionary factors pertaining to the reduced symmetry observed in these networks, and demonstrate the important role environmental variability plays in shaping their structural organization. Interpreting the returnability index as an equilibrium constant for a reaction network in equilibrium with a hypothetical reference system enables us to quantify the extent to which a metabolic network is in disequilibrium. Further, by introducing a new directed centrality measure via an extension of the subgraph centrality metric to directed networks, we are able to characterise individual metabolites by their participation within metabolic pathways. To demonstrate these ideas, we study 116 metabolic networks of bacteria. In particular, we find that the equilibrium constant for the metabolic networks decreases significantly in line with variability in bacterial habitats, supporting the view that environmental variability promotes disequilibrium within these biochemical reaction systems.
1612.02744
Igor Kaufman
I.Kh. Kaufman, O.A. Fedorenko, D.G. Luchinsky, W.A.T. Gibby, S.K. Roberts, P.V.E. McClintock, R.S. Eisenberg
Ionic Coulomb blockade and anomalous mole fraction effect in NaChBac bacterial ion channels
12 pages, 5 figures, 32 references, submitted to EPJ
null
null
null
q-bio.SC physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report an experimental study of the influences of the fixed charge and bulk ionic concentrations on the conduction of biological ion channels, and we consider the results within the framework of the ionic Coulomb blockade model of permeation and selectivity. Voltage clamp recordings were used to investigate the Na$^+$/Ca$^{2+}$ anomalous mole fraction effect (AMFE) exhibited by the bacterial sodium channel NaChBac and its mutants. Site-directed mutagenesis was used to study the effect of either increasing or decreasing the fixed charge in their selectivity filters for comparison with the predictions of the Coulomb blockade model. The model was found to describe well some aspects of the experimental (divalent blockade and AMFE) and simulated (discrete multi-ion conduction and occupancy band) phenomena, including a concentration-dependent shift of the Coulomb staircase. These results substantially extend the understanding of ion channel selectivity and may also be applicable to biomimetic nanopores with charged walls.
[ { "created": "Thu, 8 Dec 2016 17:59:51 GMT", "version": "v1" } ]
2016-12-09
[ [ "Kaufman", "I. Kh.", "" ], [ "Fedorenko", "O. A.", "" ], [ "Luchinsky", "D. G.", "" ], [ "Gibby", "W. A. T.", "" ], [ "Roberts", "S. K.", "" ], [ "McClintock", "P. V. E.", "" ], [ "Eisenberg", "R. S.", "" ] ]
We report an experimental study of the influences of the fixed charge and bulk ionic concentrations on the conduction of biological ion channels, and we consider the results within the framework of the ionic Coulomb blockade model of permeation and selectivity. Voltage clamp recordings were used to investigate the Na$^+$/Ca$^{2+}$ anomalous mole fraction effect (AMFE) exhibited by the bacterial sodium channel NaChBac and its mutants. Site-directed mutagenesis was used to study the effect of either increasing or decreasing the fixed charge in their selectivity filters for comparison with the predictions of the Coulomb blockade model. The model was found to describe well some aspects of the experimental (divalent blockade and AMFE) and simulated (discrete multi-ion conduction and occupancy band) phenomena, including a concentration-dependent shift of the Coulomb staircase. These results substantially extend the understanding of ion channel selectivity and may also be applicable to biomimetic nanopores with charged walls.
1801.09372
Tzvetomir Tzvetanov
Christian Beste and Daniel Kaping and Tzvetomir Tzvetanov
Extension of the non-parametric cluster-based time-frequency statistics to the full time windows and to single condition tests
14 pages, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Oscillatory processes are central to understanding the neural bases of cognition and behaviour. To analyse these processes, time-frequency (TF) decomposition methods are applied and non-parametric cluster-based statistical procedures are used for comparing two or more conditions. While this combination is a powerful method, it has two drawbacks: one is the unreliable estimation of signals outside the cone-of-influence, and the second relates to the length of the time-frequency window used for the analysis. Both impose constraints on the non-parametric statistical procedure for inferring an effect in the TF domain. Here we extend the method to reliably infer oscillatory differences within the full TF map and to test single conditions. We show that it can be applied in small time windows irrespective of the cone-of-influence, and we further develop its application to the single-condition case for testing the hypothesis of the presence or absence of time-varying signals. We present tests of this new method on real EEG and behavioural data and show that its sensitivity in single-condition tests is at least as good as classic Fourier analysis. Statistical inference in the full TF map is available and efficient in detecting differences between conditions as well as the presence of a time-varying signal in a single condition.
[ { "created": "Mon, 29 Jan 2018 06:27:08 GMT", "version": "v1" } ]
2018-01-30
[ [ "Beste", "Christian", "" ], [ "Kaping", "Daniel", "" ], [ "Tzvetanov", "Tzvetomir", "" ] ]
Oscillatory processes are central to understanding the neural bases of cognition and behaviour. To analyse these processes, time-frequency (TF) decomposition methods are applied and non-parametric cluster-based statistical procedures are used for comparing two or more conditions. While this combination is a powerful method, it has two drawbacks: one is the unreliable estimation of signals outside the cone-of-influence, and the second relates to the length of the time-frequency window used for the analysis. Both impose constraints on the non-parametric statistical procedure for inferring an effect in the TF domain. Here we extend the method to reliably infer oscillatory differences within the full TF map and to test single conditions. We show that it can be applied in small time windows irrespective of the cone-of-influence, and we further develop its application to the single-condition case for testing the hypothesis of the presence or absence of time-varying signals. We present tests of this new method on real EEG and behavioural data and show that its sensitivity in single-condition tests is at least as good as classic Fourier analysis. Statistical inference in the full TF map is available and efficient in detecting differences between conditions as well as the presence of a time-varying signal in a single condition.
2208.05770
Qiang Li
Qiang Li, Greg Ver Steeg, Jesus Malo
Functional Connectivity via Total Correlation: Analytical results in Visual Areas
31 pages, 14 figures, Accepted to Neurocomputing Journal
Neurocomputing 2023, 127143
10.1016/j.neucom.2023.127143
null
q-bio.NC math.PR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent studies invoke the superiority of the multivariate Total Correlation concept over the conventional pairwise measures of functional connectivity in biological networks. Those seminal works certainly show that empirical measures of Total Correlation lead to connectivity patterns that differ from what is obtained using the most popular measure, linear correlation, or its higher-order, nonlinear alternative, Mutual Information. However, they do not provide analytical results that explain the differences beyond the obvious multivariate versus bivariate definitions. Moreover, the accuracy of the empirical estimators could not be addressed directly because no controlled scenario with a known analytical result was provided either. This point is critical because empirical estimation of information theory measures is always challenging. As opposed to previous empirical approaches, in this work we present analytical results to prove the advantages of Total Correlation over Mutual Information to describe functional connectivity. In particular, we do it in neural networks for early vision (retina-LGN-cortex) which are realistic but simple enough to get analytical results. The presented analytical setting is also useful to check empirical estimates of Total Correlation. Therefore, once a certain estimate can be trusted, one can explore the behavior with natural signals, where the analytical results (which assume Gaussian signals) may not be valid. In this regard, as applications (a) we explore the effect of connectivity and feedback in the analytical retina-LGN-cortex network with natural images, and (b) we assess the functional connectivity in visual areas V1-V2-V3-V4 from actual fMRI recordings.
[ { "created": "Thu, 11 Aug 2022 12:01:26 GMT", "version": "v1" }, { "created": "Mon, 11 Dec 2023 14:23:33 GMT", "version": "v2" } ]
2023-12-25
[ [ "Li", "Qiang", "" ], [ "Steeg", "Greg Ver", "" ], [ "Malo", "Jesus", "" ] ]
Recent studies invoke the superiority of the multivariate Total Correlation concept over the conventional pairwise measures of functional connectivity in biological networks. Those seminal works certainly show that empirical measures of Total Correlation lead to connectivity patterns that differ from what is obtained using the most popular measure, linear correlation, or its higher-order and nonlinear alternative, Mutual Information. However, they do not provide analytical results that explain the differences beyond the obvious multivariate versus bivariate definitions. Moreover, the accuracy of the empirical estimators could not be addressed directly because no controlled scenario with a known analytical result was provided either. This point is critical because empirical estimation of information theory measures is always challenging. As opposed to previous empirical approaches, in this work we present analytical results to prove the advantages of Total Correlation over Mutual Information to describe the functional connectivity. In particular, we do it in neural networks for early vision (retina-LGN-cortex) which are realistic but simple enough to get analytical results. The presented analytical setting is also useful to check empirical estimates of Total Correlation. Therefore, once a certain estimate can be trusted, one can explore the behavior with natural signals, where the analytical results (that assume Gaussian signals) may not be valid. In this regard, as applications (a) we explore the effect of connectivity and feedback in the analytical retina-LGN-cortex network with natural images, and (b) we assess the functional connectivity in visual areas V1-V2-V3-V4 from actual fMRI recordings.
2211.11346
Kazuyoshi Tsutsumi
Kazuyoshi Tsutsumi and Ernst Niebur
Hierarchically Modular Dynamical Neural Network Relaxing in a Warped Space: Basic Model and its Characteristics
44 pages, 22 EPS figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a hierarchically modular, dynamical neural network model whose architecture minimizes a specifically designed energy function and defines its temporal characteristics. The model has an internal and an external space that are connected with a layered internetwork that consists of a pair of forward and backward subnets composed of static neurons (with an instantaneous time-course). Dynamical neurons with large time constants in the internal space determine the overall time-course. The model offers a framework in which state variables in the network relax in a warped space, due to the cooperation between dynamic and static neurons. We assume that the system operates in either a learning or an association mode, depending on the presence or absence of feedback paths and input ports. In the learning mode, synaptic weights in the internetwork are modified by strong inputs corresponding to repetitive neuronal bursting, which represents sinusoidal or quasi-sinusoidal waves in the short-term average density of nerve impulses or in the membrane potential. A two-dimensional mapping relationship can be formed by employing signals with different frequencies based on the same mechanism as Lissajous curves. In the association mode, the speed of convergence to a goal point greatly varies with the mapping relationship of the previously trained internetwork, and owing to this property, the convergence trajectory in the two-dimensional model with the non-linear mapping internetwork cannot go straight but instead must curve. We further introduce a constrained association mode with a given target trajectory and elucidate that in the internal space, an output trajectory is generated, which is mapped from the external space according to the inverse of the mapping relationship of the forward subnet.
[ { "created": "Mon, 21 Nov 2022 10:53:46 GMT", "version": "v1" } ]
2022-11-22
[ [ "Tsutsumi", "Kazuyoshi", "" ], [ "Niebur", "Ernst", "" ] ]
We propose a hierarchically modular, dynamical neural network model whose architecture minimizes a specifically designed energy function and defines its temporal characteristics. The model has an internal and an external space that are connected with a layered internetwork that consists of a pair of forward and backward subnets composed of static neurons (with an instantaneous time-course). Dynamical neurons with large time constants in the internal space determine the overall time-course. The model offers a framework in which state variables in the network relax in a warped space, due to the cooperation between dynamic and static neurons. We assume that the system operates in either a learning or an association mode, depending on the presence or absence of feedback paths and input ports. In the learning mode, synaptic weights in the internetwork are modified by strong inputs corresponding to repetitive neuronal bursting, which represents sinusoidal or quasi-sinusoidal waves in the short-term average density of nerve impulses or in the membrane potential. A two-dimensional mapping relationship can be formed by employing signals with different frequencies based on the same mechanism as Lissajous curves. In the association mode, the speed of convergence to a goal point greatly varies with the mapping relationship of the previously trained internetwork, and owing to this property, the convergence trajectory in the two-dimensional model with the non-linear mapping internetwork cannot go straight but instead must curve. We further introduce a constrained association mode with a given target trajectory and elucidate that in the internal space, an output trajectory is generated, which is mapped from the external space according to the inverse of the mapping relationship of the forward subnet.
1810.00613
Joana Ribeiro
Joana P.C. Ribeiro, Bjarki {\TH}. Elvarsson, Erla Sturlud\'ottir and Gunnar Stef\'ansson
An overview of the marine food web in Icelandic waters using Ecopath with Ecosim
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fishing activities have broad impacts that affect, although not exclusively, the targeted stocks. These impacts affect predators and prey of the harvested species, as well as the whole ecosystem it inhabits. Ecosystem models can be used to study the interactions that occur within a system, including those between different organisms and those between fisheries and targeted species. Trophic web models like Ecopath with Ecosim (EwE) can handle fishing fleets as a top predator, with top-down impact on harvested organisms. The aim of this study was to better understand the Icelandic marine ecosystem and the interactions within. This was done by constructing an EwE model of Icelandic waters. The model was run from 1984 to 2013 and was fitted to time series of biomass estimates, landings data and mean annual temperature. The final model was chosen by selecting the model with the lowest Akaike information criterion. A skill assessment was performed using the Pearson's correlation coefficient, the coefficient of determination, the modelling efficiency and the reliability index to evaluate the model performance. The model performed satisfactorily when simulating previously estimated biomass and known landings. Most of the groups with time series were estimated to have top-down control over their prey. These are harvested species with direct and/or indirect links to lower trophic levels and future fishing policies should take this into account. This model could be used as a tool to investigate how such policies could impact the marine ecosystem in Icelandic waters.
[ { "created": "Mon, 1 Oct 2018 10:45:08 GMT", "version": "v1" }, { "created": "Fri, 1 Feb 2019 13:54:13 GMT", "version": "v2" } ]
2019-02-04
[ [ "Ribeiro", "Joana P. C.", "" ], [ "Elvarsson", "Bjarki Þ.", "" ], [ "Sturludóttir", "Erla", "" ], [ "Stefánsson", "Gunnar", "" ] ]
Fishing activities have broad impacts that affect, although not exclusively, the targeted stocks. These impacts affect predators and prey of the harvested species, as well as the whole ecosystem it inhabits. Ecosystem models can be used to study the interactions that occur within a system, including those between different organisms and those between fisheries and targeted species. Trophic web models like Ecopath with Ecosim (EwE) can handle fishing fleets as a top predator, with top-down impact on harvested organisms. The aim of this study was to better understand the Icelandic marine ecosystem and the interactions within. This was done by constructing an EwE model of Icelandic waters. The model was run from 1984 to 2013 and was fitted to time series of biomass estimates, landings data and mean annual temperature. The final model was chosen by selecting the model with the lowest Akaike information criterion. A skill assessment was performed using the Pearson's correlation coefficient, the coefficient of determination, the modelling efficiency and the reliability index to evaluate the model performance. The model performed satisfactorily when simulating previously estimated biomass and known landings. Most of the groups with time series were estimated to have top-down control over their prey. These are harvested species with direct and/or indirect links to lower trophic levels and future fishing policies should take this into account. This model could be used as a tool to investigate how such policies could impact the marine ecosystem in Icelandic waters.
2301.07568
Abbi Abdel-Rehim
Abbi Abdel-Rehim, Oghenejokpeme Orhobor, Hang Lou, Hao Ni and Ross D. King
Beating the Best: Improving on AlphaFold2 at Protein Structure Prediction
12 pages
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
The goal of the Protein Structure Prediction (PSP) problem is to predict a protein's 3D structure (conformation) from its amino acid sequence. The problem has been a 'holy grail' of science since the Nobel prize-winning work of Anfinsen demonstrated that protein conformation was determined by sequence. A recent and important step towards this goal was the development of AlphaFold2, currently the best PSP method. AlphaFold2 is probably the highest profile application of AI to science. Both AlphaFold2 and RoseTTAFold (another impressive PSP method) have been published and placed in the public domain (code & models). Stacking is a form of ensemble machine learning (ML) in which multiple baseline models are first learnt, then a meta-model is learnt using the outputs of the baseline-level models to form a model that outperforms the base models. Stacking has been successful in many applications. We developed the ARStack PSP method by stacking AlphaFold2 and RoseTTAFold. ARStack significantly outperforms AlphaFold2. We rigorously demonstrate this using two sets of non-homologous proteins, and a test set of protein structures published after that of AlphaFold2 and RoseTTAFold. As more high quality prediction methods are published it is likely that ensemble methods will increasingly outperform any single method.
[ { "created": "Wed, 18 Jan 2023 14:39:34 GMT", "version": "v1" }, { "created": "Mon, 23 Jan 2023 09:54:01 GMT", "version": "v2" } ]
2023-01-24
[ [ "Abdel-Rehim", "Abbi", "" ], [ "Orhobor", "Oghenejokpeme", "" ], [ "Lou", "Hang", "" ], [ "Ni", "Hao", "" ], [ "King", "Ross D.", "" ] ]
The goal of the Protein Structure Prediction (PSP) problem is to predict a protein's 3D structure (conformation) from its amino acid sequence. The problem has been a 'holy grail' of science since the Nobel prize-winning work of Anfinsen demonstrated that protein conformation was determined by sequence. A recent and important step towards this goal was the development of AlphaFold2, currently the best PSP method. AlphaFold2 is probably the highest profile application of AI to science. Both AlphaFold2 and RoseTTAFold (another impressive PSP method) have been published and placed in the public domain (code & models). Stacking is a form of ensemble machine learning (ML) in which multiple baseline models are first learnt, then a meta-model is learnt using the outputs of the baseline-level models to form a model that outperforms the base models. Stacking has been successful in many applications. We developed the ARStack PSP method by stacking AlphaFold2 and RoseTTAFold. ARStack significantly outperforms AlphaFold2. We rigorously demonstrate this using two sets of non-homologous proteins, and a test set of protein structures published after that of AlphaFold2 and RoseTTAFold. As more high quality prediction methods are published it is likely that ensemble methods will increasingly outperform any single method.
1411.0291
Li Zhaoping
Li Zhaoping and Li Zhe
Primary visual cortex as a saliency map: parameter-free prediction of behavior from V1 physiology
11 figures, 66 pages
PLoS Comput Biol 11(10): e1004375 (2015)
10.1371/journal.pcbi.1004375
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis' first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton's location can be measured by the shortness of the reaction time in a visual search task to find the singleton. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The predicted distribution matches the experimentally observed distribution in all six human observers. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functional roles like visual decoding or endogenous attention.
[ { "created": "Sun, 2 Nov 2014 18:21:39 GMT", "version": "v1" } ]
2015-10-08
[ [ "Zhaoping", "Li", "" ], [ "Zhe", "Li", "" ] ]
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis' first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton's location can be measured by the shortness of the reaction time in a visual search task to find the singleton. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The predicted distribution matches the experimentally observed distribution in all six human observers. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functional roles like visual decoding or endogenous attention.
q-bio/0402046
Giulia Menconi
Giulia Menconi
Sublinear Growth of Information in DNA Sequences
30 pages, 13 figures, submitted (Oct. 2003)
null
null
null
q-bio.GN cond-mat.stat-mech physics.data-an
null
We introduce a novel method to analyse complete genomes and recognise some distinctive features by means of an adaptive compression algorithm, which is not DNA-oriented. We study the Information Content as a function of the number of symbols encoded by the algorithm. Preliminary results are shown concerning regions having a sublinear type of information growth, which is strictly connected to the presence of highly repetitive subregions that might be supposed to have a regulatory function within the genome.
[ { "created": "Fri, 27 Feb 2004 21:30:59 GMT", "version": "v1" } ]
2007-05-23
[ [ "Menconi", "Giulia", "" ] ]
We introduce a novel method to analyse complete genomes and recognise some distinctive features by means of an adaptive compression algorithm, which is not DNA-oriented. We study the Information Content as a function of the number of symbols encoded by the algorithm. Preliminary results are shown concerning regions having a sublinear type of information growth, which is strictly connected to the presence of highly repetitive subregions that might be supposed to have a regulatory function within the genome.
1907.05395
Prasanna Parvathaneni
Prasanna Parvathaneni, Shunxing Bao, Vishwesh Nath, Neil D. Woodward, Daniel O. Claassen, Carissa J. Cascio, David H. Zald, Yuankai Huo, Bennett A. Landman, Ilwoo Lyu
Cortical Surface Parcellation using Spherical Convolutional Neural Networks
null
null
null
null
q-bio.NC eess.IV q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features with high processing time on a single subject (2-3 hours). Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation as parcel boundaries are not fully matched to the geometric features. In this context, a choice of training features is important for accurate cortical parcellation. To utilize the networks efficiently, we propose cortical parcellation-specific input data from an irregular and complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use their resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the capability of the networks, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains for 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in less than a minute.
[ { "created": "Thu, 11 Jul 2019 17:20:00 GMT", "version": "v1" } ]
2019-07-12
[ [ "Parvathaneni", "Prasanna", "" ], [ "Bao", "Shunxing", "" ], [ "Nath", "Vishwesh", "" ], [ "Woodward", "Neil D.", "" ], [ "Claassen", "Daniel O.", "" ], [ "Cascio", "Carissa J.", "" ], [ "Zald", "David H.", "" ], [ "Huo", "Yuankai", "" ], [ "Landman", "Bennett A.", "" ], [ "Lyu", "Ilwoo", "" ] ]
We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features with high processing time on a single subject (2-3 hours). Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation as parcel boundaries are not fully matched to the geometric features. In this context, a choice of training features is important for accurate cortical parcellation. To utilize the networks efficiently, we propose cortical parcellation-specific input data from an irregular and complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use their resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the capability of the networks, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains for 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in less than a minute.
0705.3895
Apoorva Patel
Apoorva D. Patel
Towards Understanding the Origin of Genetic Languages
(v1) 33 pages, contributed chapter to "Quantum Aspects of Life", edited by D. Abbott, P. Davies and A. Pati, (v2) published version with some editing
null
10.1142/9781848162556_0010
null
q-bio.GN cs.IT math.IT physics.bio-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular biology is a nanotechnology that works--it has worked for billions of years and in an amazing variety of circumstances. At its core is a system for acquiring, processing and communicating information that is universal, from viruses and bacteria to human beings. Advances in genetics and experience in designing computers have taken us to a stage where we can understand the optimisation principles at the root of this system, from the availability of basic building blocks to the execution of tasks. The languages of DNA and proteins are argued to be the optimal solutions to the information processing tasks they carry out. The analysis also suggests simpler predecessors to these languages, and provides fascinating clues about their origin. Obviously, a comprehensive unraveling of the puzzle of life would have a lot to say about what we may design or convert ourselves into.
[ { "created": "Sat, 26 May 2007 13:01:20 GMT", "version": "v1" }, { "created": "Tue, 28 Oct 2008 11:37:41 GMT", "version": "v2" } ]
2016-12-21
[ [ "Patel", "Apoorva D.", "" ] ]
Molecular biology is a nanotechnology that works--it has worked for billions of years and in an amazing variety of circumstances. At its core is a system for acquiring, processing and communicating information that is universal, from viruses and bacteria to human beings. Advances in genetics and experience in designing computers have taken us to a stage where we can understand the optimisation principles at the root of this system, from the availability of basic building blocks to the execution of tasks. The languages of DNA and proteins are argued to be the optimal solutions to the information processing tasks they carry out. The analysis also suggests simpler predecessors to these languages, and provides fascinating clues about their origin. Obviously, a comprehensive unraveling of the puzzle of life would have a lot to say about what we may design or convert ourselves into.
q-bio/0407028
Arnaud Buhot
A. Buhot and A. Halperin
The Effects of Stacking on the Configurations and Elasticity of Single Stranded Nucleic Acids
4 pages and 2 figures. Accepted in Phys. Rev. E Rapid Comm
Phys. Rev. E 70 020902(R) (2004)
10.1103/PhysRevE.70.020902
null
q-bio.BM cond-mat.stat-mech
null
Stacking interactions in single stranded nucleic acids give rise to configurations of an annealed rod-coil multiblock copolymer. Theoretical analysis identifies the resulting signatures for long homopolynucleotides: A non-monotonic dependence of size on temperature, corresponding effects on cyclization and a plateau in the extension force law. Explicit numerical results for poly(dA) and poly(rU) are presented.
[ { "created": "Wed, 21 Jul 2004 13:50:44 GMT", "version": "v1" } ]
2009-11-10
[ [ "Buhot", "A.", "" ], [ "Halperin", "A.", "" ] ]
Stacking interactions in single stranded nucleic acids give rise to configurations of an annealed rod-coil multiblock copolymer. Theoretical analysis identifies the resulting signatures for long homopolynucleotides: A non-monotonic dependence of size on temperature, corresponding effects on cyclization and a plateau in the extension force law. Explicit numerical results for poly(dA) and poly(rU) are presented.
2403.19844
Prakash Chourasia
Sarwan Ali, Prakash Chourasia, Murray Patterson
Expanding Chemical Representation with k-mers and Fragment-based Fingerprints for Molecular Fingerprinting
12 Pages, 3 tables, Accepted at SimBig2023
SimBig2023
null
null
q-bio.BM cs.LG physics.chem-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
This study introduces a novel approach, combining substruct counting, $k$-mers, and Daylight-like fingerprints, to expand the representation of chemical structures in SMILES strings. The integrated method generates comprehensive molecular embeddings that enhance discriminative power and information content. Experimental evaluations demonstrate its superiority over traditional Morgan fingerprinting, MACCS, and Daylight fingerprint alone, improving chemoinformatics tasks such as drug classification. The proposed method offers a more informative representation of chemical structures, advancing molecular similarity analysis and facilitating applications in molecular design and drug discovery. It presents a promising avenue for molecular structure analysis and design, with significant potential for practical implementation.
[ { "created": "Thu, 28 Mar 2024 21:36:07 GMT", "version": "v1" } ]
2024-04-01
[ [ "Ali", "Sarwan", "" ], [ "Chourasia", "Prakash", "" ], [ "Patterson", "Murray", "" ] ]
This study introduces a novel approach, combining substruct counting, $k$-mers, and Daylight-like fingerprints, to expand the representation of chemical structures in SMILES strings. The integrated method generates comprehensive molecular embeddings that enhance discriminative power and information content. Experimental evaluations demonstrate its superiority over traditional Morgan fingerprinting, MACCS, and Daylight fingerprint alone, improving chemoinformatics tasks such as drug classification. The proposed method offers a more informative representation of chemical structures, advancing molecular similarity analysis and facilitating applications in molecular design and drug discovery. It presents a promising avenue for molecular structure analysis and design, with significant potential for practical implementation.
0908.1310
Denis Semenov A.
Denis A. Semenov
Reasons underlying certain tendencies in the data on the frequency of codon usage
3 pages
null
null
null
q-bio.OT q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tendencies described in this work were revealed in the course of examination of adenine and uracil distribution in the mRNA encoding sequence. The study also discusses the usage of codons occupied by the amino acid arginine in the table of the universal genetic code. All of the described tendencies are qualitative, so neither sophisticated methods nor cumbersome calculations are necessary to reveal and interpret them.
[ { "created": "Mon, 10 Aug 2009 12:25:04 GMT", "version": "v1" } ]
2009-08-11
[ [ "Semenov", "Denis A.", "" ] ]
The tendencies described in this work were revealed in the course of examination of adenine and uracil distribution in the mRNA encoding sequence. The study also discusses the usage of codons occupied by the amino acid arginine in the table of the universal genetic code. All of the described tendencies are qualitative, so neither sophisticated methods nor cumbersome calculations are necessary to reveal and interpret them.
1902.07283
Hamid Behjat
Sevil Maghsadhagh, Anders Eklund, Hamid Behjat
Graph Spectral Characterization of Brain Cortical Morphology
arXiv admin note: substantial text overlap with arXiv:1810.10339
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain cortical layer has a convoluted morphology that is unique to each individual. Characterization of the cortical morphology is necessary in longitudinal studies of structural brain change, as well as in discriminating individuals in health and disease. A method for encoding the cortical morphology in the form of a graph is presented. The design of graphs that encode the global cerebral hemisphere cortices as well as localized cortical regions is proposed. Spectral metrics derived from these graphs are then studied and proposed as descriptors of cortical morphology. As proof-of-concept of their applicability in characterizing cortical morphology, the metrics are studied in the context of hemispheric asymmetry as well as gender dependent discrimination of cortical morphology.
[ { "created": "Tue, 19 Feb 2019 21:04:26 GMT", "version": "v1" } ]
2019-02-21
[ [ "Maghsadhagh", "Sevil", "" ], [ "Eklund", "Anders", "" ], [ "Behjat", "Hamid", "" ] ]
The human brain cortical layer has a convoluted morphology that is unique to each individual. Characterization of the cortical morphology is necessary in longitudinal studies of structural brain change, as well as in discriminating individuals in health and disease. A method for encoding the cortical morphology in the form of a graph is presented. The design of graphs that encode the global cerebral hemisphere cortices as well as localized cortical regions is proposed. Spectral metrics derived from these graphs are then studied and proposed as descriptors of cortical morphology. As proof-of-concept of their applicability in characterizing cortical morphology, the metrics are studied in the context of hemispheric asymmetry as well as gender dependent discrimination of cortical morphology.