Dataset columns (type and observed size range):
- id: string, length 9-13
- submitter: string, length 4-48
- authors: string, length 4-9.62k
- title: string, length 4-343
- comments: string, length 2-480
- journal-ref: string, length 9-309
- doi: string, length 12-138
- report-no: string, 277 distinct values
- categories: string, length 8-87
- license: string, 9 distinct values
- orig_abstract: string, length 27-3.76k
- versions: list, length 1-15
- update_date: string, length 10-10
- authors_parsed: list, length 1-147
- abstract: string, length 24-3.75k
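The schema above implies a simple flat layout: the 15 fields repeat in a fixed order, one record after another. A minimal sketch of grouping such a dump back into records (the helper name `rows_to_records` and the one-value-per-line layout are assumptions for illustration, not part of any dataset API):

```python
# Group a flat sequence of field values into records following the
# 15-column schema listed above. The helper and layout are assumed
# for illustration; only the column names come from the schema.
COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def rows_to_records(flat_values):
    """Zip successive groups of len(COLUMNS) values into dicts."""
    n = len(COLUMNS)
    if len(flat_values) % n != 0:
        raise ValueError("value count is not a multiple of the column count")
    return [dict(zip(COLUMNS, flat_values[i:i + n]))
            for i in range(0, len(flat_values), n)]

# Example: one record built from placeholder values.
record = rows_to_records([f"value_{c}" for c in COLUMNS])[0]
print(record["doi"])   # -> value_doi
```

Fields holding `null` stay as literal strings under this sketch; a real loader would also parse the JSON-like `versions` and `authors_parsed` columns.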
1709.00374
David Pascucci Ph.D.
David Pascucci, Clayton Hickey, Jorge Jovicich and Massimo Turatto
Independent circuits in basal ganglia and cortex for the processing of reward and precision feedback
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to understand human decision making it is necessary to understand how the brain uses feedback to guide goal-directed behavior. The ventral striatum (VS) appears to be a key structure in this function, responding strongly to explicit reward feedback. However, recent results have also shown striatal activity following correct task performance even in the absence of feedback. This raises the possibility that, in addition to processing external feedback, the dopamine-centered reward circuit might regulate endogenous reinforcement signals, like those triggered by satisfaction in accurate task performance. Here we use functional magnetic resonance imaging (fMRI) to test this idea. Participants completed a simple task that garnered both reward feedback and feedback about the precision of performance. Importantly, the design was such that we could manipulate information about the precision of performance within different levels of reward magnitude. Using parametric modulation and functional connectivity analysis we identified brain regions sensitive to each of these signals. Our results show a double dissociation: frontal and posterior cingulate regions responded to explicit reward but were insensitive to task precision, whereas the dorsal striatum - and putamen in particular - was insensitive to reward but responded strongly to precision feedback in reward-present trials. Both types of feedback activated the VS, and sensitivity in this structure to precision feedback was predicted by personality traits related to approach behavior and reward responsiveness. Our findings shed new light on the role of specific brain regions in integrating different sources of feedback to guide goal-directed behavior.
[ { "created": "Fri, 1 Sep 2017 15:55:36 GMT", "version": "v1" } ]
2017-09-04
[ [ "Pascucci", "David", "" ], [ "Hickey", "Clayton", "" ], [ "Jovicich", "Jorge", "" ], [ "Turatto", "Massimo", "" ] ]
In order to understand human decision making it is necessary to understand how the brain uses feedback to guide goal-directed behavior. The ventral striatum (VS) appears to be a key structure in this function, responding strongly to explicit reward feedback. However, recent results have also shown striatal activity following correct task performance even in the absence of feedback. This raises the possibility that, in addition to processing external feedback, the dopamine-centered reward circuit might regulate endogenous reinforcement signals, like those triggered by satisfaction in accurate task performance. Here we use functional magnetic resonance imaging (fMRI) to test this idea. Participants completed a simple task that garnered both reward feedback and feedback about the precision of performance. Importantly, the design was such that we could manipulate information about the precision of performance within different levels of reward magnitude. Using parametric modulation and functional connectivity analysis we identified brain regions sensitive to each of these signals. Our results show a double dissociation: frontal and posterior cingulate regions responded to explicit reward but were insensitive to task precision, whereas the dorsal striatum - and putamen in particular - was insensitive to reward but responded strongly to precision feedback in reward-present trials. Both types of feedback activated the VS, and sensitivity in this structure to precision feedback was predicted by personality traits related to approach behavior and reward responsiveness. Our findings shed new light on the role of specific brain regions in integrating different sources of feedback to guide goal-directed behavior.
0707.3469
Claus O. Wilke
Igor M. Rouzine, Eric Brunet, Claus O. Wilke
The traveling wave approach to asexual evolution: Muller's ratchet and speed of adaptation
50 pages, 8 figures
null
10.1016/j.tpb.2007.10.004
null
q-bio.PE
null
We use traveling-wave theory to derive expressions for the rate of accumulation of deleterious mutations under Muller's ratchet and the speed of adaptation under positive selection in asexual populations. Traveling-wave theory is a semi-deterministic description of an evolving population, where the bulk of the population is modeled using deterministic equations, but the class of the highest-fitness genotypes, whose evolution over time determines loss or gain of fitness in the population, is given proper stochastic treatment. We derive improved methods to model the highest-fitness class (the stochastic edge) for both Muller's ratchet and adaptive evolution, and calculate analytic correction terms that compensate for inaccuracies which arise when treating discrete fitness classes as a continuum. We show that traveling wave theory makes excellent predictions for the rate of mutation accumulation in the case of Muller's ratchet, and makes good predictions for the speed of adaptation in a very broad parameter range. We predict the adaptation rate to grow logarithmically in the population size until the population size is extremely large.
[ { "created": "Mon, 23 Jul 2007 23:47:54 GMT", "version": "v1" }, { "created": "Wed, 10 Oct 2007 14:40:16 GMT", "version": "v2" } ]
2007-12-19
[ [ "Rouzine", "Igor M.", "" ], [ "Brunet", "Eric", "" ], [ "Wilke", "Claus O.", "" ] ]
We use traveling-wave theory to derive expressions for the rate of accumulation of deleterious mutations under Muller's ratchet and the speed of adaptation under positive selection in asexual populations. Traveling-wave theory is a semi-deterministic description of an evolving population, where the bulk of the population is modeled using deterministic equations, but the class of the highest-fitness genotypes, whose evolution over time determines loss or gain of fitness in the population, is given proper stochastic treatment. We derive improved methods to model the highest-fitness class (the stochastic edge) for both Muller's ratchet and adaptive evolution, and calculate analytic correction terms that compensate for inaccuracies which arise when treating discrete fitness classes as a continuum. We show that traveling-wave theory makes excellent predictions for the rate of mutation accumulation in the case of Muller's ratchet, and makes good predictions for the speed of adaptation in a very broad parameter range. We predict the adaptation rate to grow logarithmically in the population size until the population size is extremely large.
2006.01497
Simon Childs
Simon Childs
Quantification of the South African Lockdown Regimes, for the SARS-CoV-2 Pandemic, and the Levels of Immunity They Require to Work
15 pages, 4 figures, 2 tables
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research quantifies the various South African lockdown regimes, for the SARS-CoV-2 pandemic, in terms of the basic reproduction number, $r_0$. It further calculates the levels of immunity required for these selfsame lockdown regimes to begin to work and predicts perceived values, should infections have been underestimated by a factor of 10. The latter results are compelling. The first, level-5 lockdown was a valiant attempt to contain the highly infectious, SARS-CoV-2 virus, based on a limited knowledge. Its basic reproduction number ($r_0 = 1.93$) never came anywhere close to the requirement of being less than unity. Obviously, it could be anticipated that the same would apply for subsequent, lower levels of lockdown. The basic reproduction number for the level-3 lockdown was found to be $2.34$ and that of the level-4 lockdown, $1.69$.The suggestion is therefore that the level-4 lockdown might have been marginally `smarter' than the `harder', level-5 lockdown, although its basic reproduction number may merely reflect an adjustment by the public to the new normal, or the ever-present error associated with data sets, in general. The pandemic's basic reproduction number was calculated to be $3.16$, in the Swedish context. The lockdowns therefore served to ensure that the medical system was not overwhelmed, bought it valuable time to prepare and provided useful data. The lockdowns nonetheless failed significantly in meeting any objective to curtail the pandemic.
[ { "created": "Tue, 2 Jun 2020 09:53:59 GMT", "version": "v1" }, { "created": "Sat, 6 Jun 2020 09:27:36 GMT", "version": "v2" }, { "created": "Thu, 2 Jul 2020 14:57:11 GMT", "version": "v3" }, { "created": "Tue, 1 Sep 2020 08:50:48 GMT", "version": "v4" } ]
2020-09-02
[ [ "Childs", "Simon", "" ] ]
This research quantifies the various South African lockdown regimes, for the SARS-CoV-2 pandemic, in terms of the basic reproduction number, $r_0$. It further calculates the levels of immunity required for these selfsame lockdown regimes to begin to work and predicts perceived values, should infections have been underestimated by a factor of 10. The latter results are compelling. The first, level-5 lockdown was a valiant attempt to contain the highly infectious, SARS-CoV-2 virus, based on limited knowledge. Its basic reproduction number ($r_0 = 1.93$) never came anywhere close to the requirement of being less than unity. Obviously, it could be anticipated that the same would apply for subsequent, lower levels of lockdown. The basic reproduction number for the level-3 lockdown was found to be $2.34$ and that of the level-4 lockdown, $1.69$. The suggestion is therefore that the level-4 lockdown might have been marginally `smarter' than the `harder', level-5 lockdown, although its basic reproduction number may merely reflect an adjustment by the public to the new normal, or the ever-present error associated with data sets, in general. The pandemic's basic reproduction number was calculated to be $3.16$, in the Swedish context. The lockdowns therefore served to ensure that the medical system was not overwhelmed, bought it valuable time to prepare and provided useful data. The lockdowns nonetheless failed significantly in meeting any objective to curtail the pandemic.
2103.11775
Alexander Mathis
Sébastien B. Hausmann and Alessandro Marin Vargas and Alexander Mathis and Mackenzie W. Mathis
Measuring and modeling the motor system with machine learning
null
Current Opinion in Neurobiology 2021
10.1016/j.conb.2021.04.004
null
q-bio.QM cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The utility of machine learning in understanding the motor system is promising a revolution in how to collect, measure, and analyze data. The field of movement science already elegantly incorporates theory and engineering principles to guide experimental work, and in this review we discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems. We also give our perspective on new avenues where markerless motion capture combined with biomechanical modeling and neural networks could be a new platform for hypothesis-driven research.
[ { "created": "Mon, 22 Mar 2021 12:42:16 GMT", "version": "v1" } ]
2021-09-16
[ [ "Hausmann", "Sébastien B.", "" ], [ "Vargas", "Alessandro Marin", "" ], [ "Mathis", "Alexander", "" ], [ "Mathis", "Mackenzie W.", "" ] ]
The utility of machine learning for understanding the motor system promises a revolution in how we collect, measure, and analyze data. The field of movement science already elegantly incorporates theory and engineering principles to guide experimental work, and in this review we discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems. We also give our perspective on new avenues where markerless motion capture combined with biomechanical modeling and neural networks could be a new platform for hypothesis-driven research.
1904.01988
Christoph Zechner
Lorenzo Duso and Christoph Zechner
Path mutual information for a class of biochemical reaction networks
6 pages, 2 figures
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living cells encode and transmit information in the temporal dynamics of biochemical components. Gaining a detailed understanding of the input-output relationship in biological systems therefore requires quantitative measures that capture the interdependence between complete time trajectories of biochemical components. Mutual information provides such a measure but its calculation in the context of stochastic reaction networks is associated with mathematical challenges. Here we show how to estimate the mutual information between complete paths of two molecular species that interact with each other through biochemical reactions. We demonstrate our approach using three simple case studies.
[ { "created": "Wed, 3 Apr 2019 13:06:53 GMT", "version": "v1" } ]
2019-04-04
[ [ "Duso", "Lorenzo", "" ], [ "Zechner", "Christoph", "" ] ]
Living cells encode and transmit information in the temporal dynamics of biochemical components. Gaining a detailed understanding of the input-output relationship in biological systems therefore requires quantitative measures that capture the interdependence between complete time trajectories of biochemical components. Mutual information provides such a measure but its calculation in the context of stochastic reaction networks is associated with mathematical challenges. Here we show how to estimate the mutual information between complete paths of two molecular species that interact with each other through biochemical reactions. We demonstrate our approach using three simple case studies.
1202.1447
Filipe Tostevin
Filipe Tostevin, Wiet de Ronde, Pieter Rein ten Wolde
Reliability of frequency- and amplitude-decoding in gene regulation
5 pages, 3 figures
Phys. Rev. Lett. 108, 108104 (2012)
10.1103/PhysRevLett.108.108104
null
q-bio.SC q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In biochemical signaling, information is often encoded in oscillatory signals. However, the advantages of such a coding strategy over an amplitude encoding scheme of constant signals remain unclear. Here we study the dynamics of a simple model gene promoter in response to oscillating and constant transcription factor signals. We find that in biologically-relevant parameter regimes an oscillating input can produce a more constant protein level than a constant input. Our results suggest that oscillating signals may be used to minimize noise in gene regulation.
[ { "created": "Tue, 7 Feb 2012 15:30:01 GMT", "version": "v1" } ]
2012-03-09
[ [ "Tostevin", "Filipe", "" ], [ "de Ronde", "Wiet", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
In biochemical signaling, information is often encoded in oscillatory signals. However, the advantages of such a coding strategy over an amplitude encoding scheme of constant signals remain unclear. Here we study the dynamics of a simple model gene promoter in response to oscillating and constant transcription factor signals. We find that in biologically-relevant parameter regimes an oscillating input can produce a more constant protein level than a constant input. Our results suggest that oscillating signals may be used to minimize noise in gene regulation.
2009.14533
Izaak Neri
Olivier Dauloudet, Izaak Neri, Jean-Charles Walter, Jérôme Dorignac, Frédéric Geniet, Andrea Parmeggiani
Modelling the effect of ribosome mobility on the rate of protein synthesis
article, 16 pages, 5 figures
Eur. Phys. J. E 44, 19 (2021)
10.1140/epje/s10189-021-00019-8
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translation is one of the main steps in the synthesis of proteins. It consists of ribosomes that translate sequences of nucleotides encoded on mRNA into polypeptide sequences of amino acids. Ribosomes bound to mRNA move unidirectionally, while unbound ribosomes diffuse in the cytoplasm. It has been hypothesized that finite diffusion of ribosomes plays an important role in ribosome recycling and that mRNA circularization enhances the efficiency of translation. In order to estimate the effect of cytoplasmic diffusion on the rate of translation, we consider a Totally Asymmetric Simple Exclusion Process (TASEP) coupled to a finite diffusive reservoir, which we call the Ribosome Transport model with Diffusion (RTD). In this model, we derive an analytical expression for the rate of protein synthesis as a function of the diffusion constant of ribosomes, which is corroborated with results from continuous-time Monte Carlo simulations. Using a wide range of biological relevant parameters, we conclude that diffusion in biological cells is fast enough so that it does not play a role in controlling the rate of translation initiation.
[ { "created": "Wed, 30 Sep 2020 09:33:14 GMT", "version": "v1" }, { "created": "Mon, 17 Jan 2022 17:22:18 GMT", "version": "v2" } ]
2022-01-19
[ [ "Dauloudet", "Olivier", "" ], [ "Neri", "Izaak", "" ], [ "Walter", "Jean-Charles", "" ], [ "Dorignac", "Jérôme", "" ], [ "Geniet", "Frédéric", "" ], [ "Parmeggiani", "Andrea", "" ] ]
Translation is one of the main steps in the synthesis of proteins. It consists of ribosomes that translate sequences of nucleotides encoded on mRNA into polypeptide sequences of amino acids. Ribosomes bound to mRNA move unidirectionally, while unbound ribosomes diffuse in the cytoplasm. It has been hypothesized that finite diffusion of ribosomes plays an important role in ribosome recycling and that mRNA circularization enhances the efficiency of translation. In order to estimate the effect of cytoplasmic diffusion on the rate of translation, we consider a Totally Asymmetric Simple Exclusion Process (TASEP) coupled to a finite diffusive reservoir, which we call the Ribosome Transport model with Diffusion (RTD). In this model, we derive an analytical expression for the rate of protein synthesis as a function of the diffusion constant of ribosomes, which is corroborated with results from continuous-time Monte Carlo simulations. Using a wide range of biologically relevant parameters, we conclude that diffusion in biological cells is fast enough so that it does not play a role in controlling the rate of translation initiation.
1305.3075
Bernard Ycart
Bernard Ycart
Fluctuation analysis: can estimates be trusted?
null
null
10.1371/journal.pone.0080958
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The estimation of mutation probabilities and relative fitnesses in fluctuation analysis is based on the unrealistic hypothesis that the single-cell times to division are exponentially distributed. Using the classical Luria-Delbr\"{u}ck distribution outside its modelling hypotheses induces an important bias on the estimation of the relative fitness. The model is extended here to any division time distribution. Mutant counts follow a generalization of the Luria-Delbr\"{u}ck distribution, which depends on the mean number of mutations, the relative fitness of normal cells compared to mutants, and the division time distribution of mutant cells. Empirical probability generating function techniques yield precise estimates both of the mean number of mutations and the relative fitness of normal cells compared to mutants. In the case where no information is available on the division time distribution, it is shown that the estimation procedure using constant division times yields more reliable results. Numerical results both on observed and simulated data are reported.
[ { "created": "Tue, 14 May 2013 09:34:53 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2013 14:15:16 GMT", "version": "v2" }, { "created": "Tue, 10 Sep 2013 15:26:58 GMT", "version": "v3" } ]
2014-03-05
[ [ "Ycart", "Bernard", "" ] ]
The estimation of mutation probabilities and relative fitnesses in fluctuation analysis is based on the unrealistic hypothesis that the single-cell times to division are exponentially distributed. Using the classical Luria-Delbrück distribution outside its modelling hypotheses induces an important bias on the estimation of the relative fitness. The model is extended here to any division time distribution. Mutant counts follow a generalization of the Luria-Delbrück distribution, which depends on the mean number of mutations, the relative fitness of normal cells compared to mutants, and the division time distribution of mutant cells. Empirical probability generating function techniques yield precise estimates both of the mean number of mutations and the relative fitness of normal cells compared to mutants. In the case where no information is available on the division time distribution, it is shown that the estimation procedure using constant division times yields more reliable results. Numerical results both on observed and simulated data are reported.
1012.0091
Ricardo Honorato-Zimmer
Ricardo Honorato-Zimmer, Bryan Reynaert, Ignacio Vergara and Tomas Perez-Acle
Conan: a platform for complex network analysis
5 pages and 1 figure
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conan is a C++ library created for the accurate and efficient modelling, inference and analysis of complex networks. It implements the generation and modification of graphs according to several published models, as well as the unexpensive computation of global and local network properties. Other features include network inference and community detection. Furthermore, Conan provides a Python interface to facilitate the use of the library and its integration in currently existing applications. Conan is available at http://github.com/rhz/conan/.
[ { "created": "Wed, 1 Dec 2010 04:15:47 GMT", "version": "v1" } ]
2010-12-02
[ [ "Honorato-Zimmer", "Ricardo", "" ], [ "Reynaert", "Bryan", "" ], [ "Vergara", "Ignacio", "" ], [ "Perez-Acle", "Tomas", "" ] ]
Conan is a C++ library created for the accurate and efficient modelling, inference and analysis of complex networks. It implements the generation and modification of graphs according to several published models, as well as the inexpensive computation of global and local network properties. Other features include network inference and community detection. Furthermore, Conan provides a Python interface to facilitate the use of the library and its integration in currently existing applications. Conan is available at http://github.com/rhz/conan/.
2211.08718
Konstantin Sorokin
Konstantin Sorokin, Anton Ayzenberg, Konstantin Anokhin, Vladimir Sotskov, Maxim Beketov, Andrey Zaitsew, Robert Drynkin
Topology of cognitive maps
27 pages, 23 figures
null
null
null
q-bio.NC cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In present paper we discuss several approaches to reconstructing the topology of the physical space from neural activity data of CA1 fields in mice hippocampus, in particular, having Cognitome theory of brain function in mind. In our experiments, animals were placed in different new environments and discovered these moving freely while their physical and neural activity was recorded. We test possible approaches to identifying place cell groups out of the observed CA1 neurons. We also test and discuss various methods of dimension reduction and topology reconstruction. In particular, two main strategies we focus on are the Nerve theorem and point cloud-based methods. Conclusions on the results of reconstruction are supported with illustrations and mathematical background which is also briefly discussed.
[ { "created": "Wed, 16 Nov 2022 07:19:26 GMT", "version": "v1" }, { "created": "Mon, 5 Dec 2022 21:59:58 GMT", "version": "v2" } ]
2022-12-07
[ [ "Sorokin", "Konstantin", "" ], [ "Ayzenberg", "Anton", "" ], [ "Anokhin", "Konstantin", "" ], [ "Sotskov", "Vladimir", "" ], [ "Beketov", "Maxim", "" ], [ "Zaitsew", "Andrey", "" ], [ "Drynkin", "Robert", "" ] ]
In the present paper we discuss several approaches to reconstructing the topology of physical space from neural activity data recorded in CA1 fields of the mouse hippocampus, in particular with the Cognitome theory of brain function in mind. In our experiments, animals were placed in new environments and explored them freely while their physical and neural activity was recorded. We test possible approaches to identifying place cell groups among the observed CA1 neurons. We also test and discuss various methods of dimension reduction and topology reconstruction. In particular, we focus on two main strategies: the Nerve theorem and point cloud-based methods. Conclusions on the results of reconstruction are supported with illustrations and with the mathematical background, which is also briefly discussed.
1109.3178
Thierry Rabilloud
Thierry Rabilloud (LCBM, BBSI), Cécile Lelong (LCBM)
Two-dimensional gel electrophoresis in proteomics: A tutorial
null
Journal of Proteomics 74, 10 (2011) 1829-41
10.1016/j.jprot.2011.05.040
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two-dimensional electrophoresis of proteins has preceded, and accompanied, the birth of proteomics. Although it is no longer the only experimental scheme used in modern proteomics, it still has distinct features and advantages. The purpose of this tutorial paper is to guide the reader through the history of the field, then through the main steps of the process, from sample preparation to in-gel detection of proteins, commenting the constraints and caveats of the technique. Then the limitations and positive features of two-dimensional electrophoresis are discussed (e.g. its unique ability to separate complete proteins and its easy interfacing with immunoblotting techniques), so that the optimal type of applications of this technique in current and future proteomics can be perceived. This is illustrated by a detailed example taken from the literature and commented in detail. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP 2).
[ { "created": "Wed, 14 Sep 2011 19:39:19 GMT", "version": "v1" } ]
2011-09-15
[ [ "Rabilloud", "Thierry", "", "LCBM, BBSI" ], [ "Lelong", "Cécile", "", "LCBM" ] ]
Two-dimensional electrophoresis of proteins has preceded, and accompanied, the birth of proteomics. Although it is no longer the only experimental scheme used in modern proteomics, it still has distinct features and advantages. The purpose of this tutorial paper is to guide the reader through the history of the field, then through the main steps of the process, from sample preparation to in-gel detection of proteins, commenting on the constraints and caveats of the technique. Then the limitations and positive features of two-dimensional electrophoresis are discussed (e.g. its unique ability to separate complete proteins and its easy interfacing with immunoblotting techniques), so that the optimal type of applications of this technique in current and future proteomics can be perceived. This is illustrated by a detailed example taken from the literature, commented on in detail. This Tutorial is part of the International Proteomics Tutorial Programme (IPTP 2).
0709.4019
Pierre Bongrand
Anne Pierres (AC), Anil Prakasam, Dominique Touchard (AC), Anne-Marie Benoliel (AC), Pierre Bongrand (AC), Deborah Leckband
Dissecting Subsecond Cadherin Bound States Reveals an Efficient Way for Cells to Achieve Ultrafast Probing of their Environment
null
FEBS letters 581 (2007) 1841
null
null
q-bio.CB physics.bio-ph
null
Cells continuously probe their environment with membrane receptors, achieving subsecond adaptation of their behaviour [1-3]. Recently, several receptors, including cadherins, were found to bind ligands with a lifetime of order of one second. Here we show at the single molecule level that homotypic C-cadherin association involves transient intermediates lasting less than a few tens of milliseconds. Further, these intermediates transitionned towards more stable states with a kinetic rate displaying exponential decrease with piconewton forces. These features enable cells to detect ligands or measure surrounding mechanical behaviour within a fraction of a second, much more rapidly than was previously thought.
[ { "created": "Tue, 25 Sep 2007 19:31:20 GMT", "version": "v1" } ]
2007-09-26
[ [ "Pierres", "Anne", "", "AC" ], [ "Prakasam", "Anil", "", "AC" ], [ "Touchard", "Dominique", "", "AC" ], [ "Benoliel", "Anne-Marie", "", "AC" ], [ "Bongrand", "Pierre", "", "AC" ], [ "Leckband", "Deborah", "" ] ]
Cells continuously probe their environment with membrane receptors, achieving subsecond adaptation of their behaviour [1-3]. Recently, several receptors, including cadherins, were found to bind ligands with a lifetime on the order of one second. Here we show at the single-molecule level that homotypic C-cadherin association involves transient intermediates lasting less than a few tens of milliseconds. Further, these intermediates transitioned towards more stable states with a kinetic rate displaying exponential decrease with piconewton forces. These features enable cells to detect ligands or measure surrounding mechanical behaviour within a fraction of a second, much more rapidly than was previously thought.
1405.3669
Konstantin Klemm
Haleh Ebadi, Konstantin Klemm
Boolean networks with veto functions
7 pages, 3 figures, 3 tables, v2: minor revision
Phys. Rev. E 90, 022815 (2014)
10.1103/PhysRevE.90.022815
null
q-bio.MN cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Boolean networks are discrete dynamical systems for modeling regulation and signaling in living cells. We investigate a particular class of Boolean functions with inhibiting inputs exerting a veto (forced zero) on the output. We give analytical expressions for the sensitivity of these functions and provide evidence for their role in natural systems. In an intracellular signal transduction network [Helikar et al., PNAS (2008)], the functions with veto are over-represented by a factor exceeding the over-representation of threshold functions and canalyzing functions in the same system. In Boolean networks for control of the yeast cell cycle [Fangting Li et al., PNAS (2004), Davidich et al., PLoS One (2009)], none or minimal changes to the wiring diagrams are necessary to formulate their dynamics in terms of the veto functions introduced here.
[ { "created": "Wed, 14 May 2014 20:01:56 GMT", "version": "v1" }, { "created": "Wed, 27 Aug 2014 10:48:01 GMT", "version": "v2" } ]
2014-09-05
[ [ "Ebadi", "Haleh", "" ], [ "Klemm", "Konstantin", "" ] ]
Boolean networks are discrete dynamical systems for modeling regulation and signaling in living cells. We investigate a particular class of Boolean functions with inhibiting inputs exerting a veto (forced zero) on the output. We give analytical expressions for the sensitivity of these functions and provide evidence for their role in natural systems. In an intracellular signal transduction network [Helikar et al., PNAS (2008)], the functions with veto are over-represented by a factor exceeding the over-representation of threshold functions and canalyzing functions in the same system. In Boolean networks for control of the yeast cell cycle [Fangting Li et al., PNAS (2004), Davidich et al., PLoS One (2009)], none or minimal changes to the wiring diagrams are necessary to formulate their dynamics in terms of the veto functions introduced here.
1303.3997
Heng Li
Heng Li
Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM
3 pages and 1 color figure
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Summary: BWA-MEM is a new alignment algorithm for aligning sequence reads or long query sequences against a large reference genome such as human. It automatically chooses between local and end-to-end alignments, supports paired-end reads and performs chimeric alignment. The algorithm is robust to sequencing errors and applicable to a wide range of sequence lengths from 70bp to a few megabases. For mapping 100bp sequences, BWA-MEM shows better performance than several state-of-art read aligners to date. Availability and implementation: BWA-MEM is implemented as a component of BWA, which is available at http://github.com/lh3/bwa. Contact: hengli@broadinstitute.org
[ { "created": "Sat, 16 Mar 2013 17:20:52 GMT", "version": "v1" }, { "created": "Sun, 26 May 2013 22:19:49 GMT", "version": "v2" } ]
2013-05-28
[ [ "Li", "Heng", "" ] ]
Summary: BWA-MEM is a new alignment algorithm for aligning sequence reads or long query sequences against a large reference genome such as the human genome. It automatically chooses between local and end-to-end alignments, supports paired-end reads and performs chimeric alignment. The algorithm is robust to sequencing errors and applicable to a wide range of sequence lengths, from 70bp to a few megabases. For mapping 100bp sequences, BWA-MEM shows better performance than several state-of-the-art read aligners to date. Availability and implementation: BWA-MEM is implemented as a component of BWA, which is available at http://github.com/lh3/bwa. Contact: hengli@broadinstitute.org
2011.06280
Almaz Tesfay
Almaz Tesfay, Tareq Saeed, Anwar Zeb, Daniel Tesfay, Anas Khalaf, James Brannan
Dynamics of a Stochastic COVID-19 Epidemic Model with Jump-Diffusion
18 pages, 13 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/publicdomain/zero/1.0/
For a stochastic COVID-19 model with jump-diffusion, we prove the existence and uniqueness of the global positive solution. We also investigate some conditions for the extinction and persistence of the disease. We calculate the threshold of the stochastic epidemic system which determines the extinction or permanence of the disease at different intensities of the stochastic noises. This threshold is denoted by $\xi$ which depends on the white and jump noises. The effects of these noises on the dynamics of the model are studied. The numerical experiments show that the random perturbation introduced in the stochastic model suppresses disease outbreaks as compared to its deterministic counterpart. In other words, the impact of the noises on the extinction and persistence is high. When the noise is large or small, our numerical findings show that the COVID-19 vanishes from the population if $\xi <1;$ whereas the epidemic can't go out of control if $\xi >1.$ From this, we observe that white noise and jump noise have a significant effect on the spread of COVID-19 infection, i.e., we can conclude that the stochastic model is more realistic than the deterministic one. Finally, to illustrate this phenomenon, we put some numerical simulations.
[ { "created": "Thu, 12 Nov 2020 09:47:29 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2020 09:12:29 GMT", "version": "v2" }, { "created": "Tue, 17 Nov 2020 12:21:00 GMT", "version": "v3" }, { "created": "Sun, 22 Nov 2020 04:26:48 GMT", "version": "v4" }, { "c...
2021-04-16
[ [ "Tesfay", "Almaz", "" ], [ "Saeed", "Tareq", "" ], [ "Zeb", "Anwar", "" ], [ "Tesfay", "Daniel", "" ], [ "Khalaf", "Anas", "" ], [ "Brannan", "James", "" ] ]
For a stochastic COVID-19 model with jump-diffusion, we prove the existence and uniqueness of the global positive solution. We also investigate conditions for the extinction and persistence of the disease. We calculate the threshold of the stochastic epidemic system, which determines the extinction or permanence of the disease at different intensities of the stochastic noises. This threshold, denoted by $\xi$, depends on the white and jump noises. The effects of these noises on the dynamics of the model are studied. The numerical experiments show that the random perturbation introduced in the stochastic model suppresses disease outbreaks compared to its deterministic counterpart. In other words, the impact of the noises on extinction and persistence is high. Whether the noise is large or small, our numerical findings show that COVID-19 vanishes from the population if $\xi <1$, whereas the epidemic persists if $\xi >1$. From this, we observe that white noise and jump noise have a significant effect on the spread of COVID-19 infection, i.e., we can conclude that the stochastic model is more realistic than the deterministic one. Finally, to illustrate this phenomenon, we provide some numerical simulations.
2003.05133
Bulent Karas\"ozen
B\"ulent Karas\"ozen
Model Order Reduction in Neuroscience
14 pages, no figures
Handbook Of Model Order Reduction, Volume 3, 2020
10.1515/9783110499001
null
q-bio.NC cs.NA math.NA
http://creativecommons.org/publicdomain/zero/1.0/
The human brain contains approximately $10^9$ neurons, each with approximately $10^3$ connections, synapses, with other neurons. Most sensory, cognitive and motor functions of our brains depend on the interaction of a large population of neurons. In recent years, many technologies are developed for recording large numbers of neurons either sequentially or simultaneously. An increase in computational power and algorithmic developments have enabled advanced analyses of neuronal population parallel to the rapid growth of quantity and complexity of the recorded neuronal activity. Recent studies made use of dimensionality and model order reduction techniques to extract coherent features which are not apparent at the level of individual neurons. It has been observed that the neuronal activity evolves on low-dimensional subspaces. The aim of model reduction of large-scale neuronal networks is an accurate and fast prediction of patterns and their propagation in different areas of the brain. Spatiotemporal features of the brain activity are identified on low dimensional subspaces with methods such as dynamic mode decomposition (DMD), proper orthogonal decomposition (POD), discrete empirical interpolation (DEIM) and combined parameter and state reduction. In this paper, we give an overview of the currently used dimensionality reduction and model order reduction techniques in neuroscience. This work will be featured as a chapter in the upcoming Handbook on Model Order Reduction,(P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W. H. A. Schilders, L. M. Silveira, eds, to appear on DE GRUYTER)
[ { "created": "Wed, 11 Mar 2020 06:31:10 GMT", "version": "v1" }, { "created": "Sat, 18 Apr 2020 02:21:36 GMT", "version": "v2" } ]
2021-08-30
[ [ "Karasözen", "Bülent", "" ] ]
The human brain contains approximately $10^9$ neurons, each with approximately $10^3$ connections (synapses) with other neurons. Most sensory, cognitive and motor functions of our brains depend on the interaction of a large population of neurons. In recent years, many technologies have been developed for recording large numbers of neurons either sequentially or simultaneously. Increases in computational power and algorithmic developments have enabled advanced analyses of neuronal populations, in parallel with the rapid growth in the quantity and complexity of the recorded neuronal activity. Recent studies have made use of dimensionality and model order reduction techniques to extract coherent features which are not apparent at the level of individual neurons. It has been observed that neuronal activity evolves on low-dimensional subspaces. The aim of model reduction of large-scale neuronal networks is the accurate and fast prediction of patterns and their propagation in different areas of the brain. Spatiotemporal features of the brain activity are identified on low-dimensional subspaces with methods such as dynamic mode decomposition (DMD), proper orthogonal decomposition (POD), discrete empirical interpolation (DEIM) and combined parameter and state reduction. In this paper, we give an overview of the dimensionality reduction and model order reduction techniques currently used in neuroscience. This work is featured as a chapter in the Handbook on Model Order Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W. H. A. Schilders, L. M. Silveira, eds., DE GRUYTER).
1909.05573
Jean-Louis Milan
Jean-Louis Milan (ISM), Sandrine Lavenus (IMN), Paul Pilet (LIOAD), Guy Louarn (IMN), Sylvie Wendling (ISM), Dominique Heymann, Pierre Layrolle, Patrick Chabrand (ISM)
Computational model combined with in vitro experiments to analyse mechanotransduction during mesenchymal stem cell adhesion
null
European Cells and Materials, 2013, 25, pp.97-113
10.22203/eCM.v025a07
null
q-bio.CB physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The shape that stem cells reach at the end of adhesion process influences their differentiation. Rearrangement of cytoskeleton and modification of intracellular tension may activate mechanotransduction pathways controlling cell commitment. In the present study, the mechanical signals involved in cell adhesion were computed in in vitro stem cells of different shapes using a single cell model, the so-called Cytoskeleton Divided Medium (CDM) model. In the CDM model, the filamentous cytoskeleton and nucleoskeleton networks were represented as a mechanical system of multiple tensile and compressive interactions between the nodes of a divided medium. The results showed that intracellular tonus, focal adhesion forces as well as nuclear deformation increased with cell spreading. The cell model was also implemented to simulate the adhesion process of a cell that spreads on protein-coated substrate by emitting filopodia and creating new distant focal adhesion points. As a result, the cell model predicted cytoskeleton reorganisation and reinforcement during cell spreading. The present model quantitatively computed the evolution of certain elements of mechanotransduction and may be a powerful tool for understanding cell mechanobiology and designing biomaterials with specific surface properties to control cell adhesion and differentiation.
[ { "created": "Thu, 12 Sep 2019 11:26:54 GMT", "version": "v1" } ]
2019-09-13
[ [ "Milan", "Jean-Louis", "", "ISM" ], [ "Lavenus", "Sandrine", "", "IMN" ], [ "Pilet", "Paul", "", "LIOAD" ], [ "Louarn", "Guy", "", "IMN" ], [ "Wendling", "Sylvie", "", "ISM" ], [ "Heymann", "Dominique", ...
The shape that stem cells reach at the end of the adhesion process influences their differentiation. Rearrangement of the cytoskeleton and modification of intracellular tension may activate mechanotransduction pathways controlling cell commitment. In the present study, the mechanical signals involved in cell adhesion were computed in in vitro stem cells of different shapes using a single-cell model, the so-called Cytoskeleton Divided Medium (CDM) model. In the CDM model, the filamentous cytoskeleton and nucleoskeleton networks were represented as a mechanical system of multiple tensile and compressive interactions between the nodes of a divided medium. The results showed that intracellular tonus, focal adhesion forces and nuclear deformation increased with cell spreading. The cell model was also implemented to simulate the adhesion process of a cell that spreads on a protein-coated substrate by emitting filopodia and creating new distant focal adhesion points. As a result, the cell model predicted cytoskeleton reorganisation and reinforcement during cell spreading. The present model quantitatively computed the evolution of certain elements of mechanotransduction and may be a powerful tool for understanding cell mechanobiology and for designing biomaterials with specific surface properties to control cell adhesion and differentiation.
1412.6068
Christina Schenk
Alfio Borz\`i, Juri Merger, Jonas M\"uller, Achim Rosch, Christina Schenk, Dominik Schmidt, Stephan Schmidt, Volker Schulz, Kai Velten, Christian von Wallbrunn and Michael Z\"anglein
Novel model for wine fermentation including the yeast dying phase
6 pages, 2 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel model for wine fermentation including a death phase for yeast and the influence of oxygen on the process. A model for the inclusion of the yeast dying phase is derived and compared to a model taken from the literature. The modeling ability of the several models is analyzed by comparing their simulation results.
[ { "created": "Wed, 17 Dec 2014 15:58:24 GMT", "version": "v1" } ]
2014-12-19
[ [ "Borzì", "Alfio", "" ], [ "Merger", "Juri", "" ], [ "Müller", "Jonas", "" ], [ "Rosch", "Achim", "" ], [ "Schenk", "Christina", "" ], [ "Schmidt", "Dominik", "" ], [ "Schmidt", "Stephan", "" ], [ "S...
This paper presents a novel model for wine fermentation that includes a death phase for yeast and the influence of oxygen on the process. A model incorporating the yeast dying phase is derived and compared to a model taken from the literature. The modeling ability of these models is analyzed by comparing their simulation results.
1903.00504
Vijay Balasubramanian
Serena Bradde, Armita Nourmohammad, Sidhartha Goyal, Vijay Balasubramanian
The size of the immune repertoire of bacteria
9 pages, 5 figures
null
10.1073/pnas.1903666117
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Some bacteria and archaea possess an immune system, based on the CRISPR-Cas mechanism, that confers adaptive immunity against phage. In such species, individual bacteria maintain a "cassette" of viral DNA elements called spacers as a memory of past infections. The typical cassette contains a few dozen spacers. Given that bacteria can have very large genomes, and since having more spacers should confer a better memory, it is puzzling that so little genetic space would be devoted by bacteria to their adaptive immune system. Here, we identify a fundamental trade-off between the size of the bacterial immune repertoire and effectiveness of response to a given threat, and show how this tradeoff imposes a limit on the optimal size of the CRISPR cassette.
[ { "created": "Fri, 1 Mar 2019 19:31:44 GMT", "version": "v1" } ]
2022-06-08
[ [ "Bradde", "Serena", "" ], [ "Nourmohammad", "Armita", "" ], [ "Goyal", "Sidhartha", "" ], [ "Balasubramanian", "Vijay", "" ] ]
Some bacteria and archaea possess an immune system, based on the CRISPR-Cas mechanism, that confers adaptive immunity against phage. In such species, individual bacteria maintain a "cassette" of viral DNA elements called spacers as a memory of past infections. The typical cassette contains a few dozen spacers. Given that bacteria can have very large genomes, and since having more spacers should confer a better memory, it is puzzling that so little genetic space would be devoted by bacteria to their adaptive immune system. Here, we identify a fundamental trade-off between the size of the bacterial immune repertoire and effectiveness of response to a given threat, and show how this tradeoff imposes a limit on the optimal size of the CRISPR cassette.
1406.0094
Garri Davydyan
Garri Davydyan
Conception of Biologic System: Basis Functional Elements and Metric Properties
17 pages, 11 figures, Journal of Complex Systems, Volume 2014, Article ID 693938
null
10.1155/2014/693938
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A notion of biologic system or just a system implies a functional wholeness of comprising system components. Positive and negative feedback are the examples of how the idea to unite anatomical elements in the whole functional structure was successfully used in practice to explain regulatory mechanisms in biology and medicine. There are numerous examples of functional and metabolic pathways which are not regulated by feedback loops and have a structure of reciprocal relationships. Expressed in the matrix form positive feedback, negative feedback, and reciprocal links represent three basis elements of a Lie algebra sl(2,R)of a special linear group SL(2,R). It is proposed that the mathematical group structure can be realized through the three regulatory elements playing a role of a functional basis of biologic systems. The structure of the basis elements endows the space of biological variables with indefinite metric. Metric structure resembles Minkowski's space-time (+, -, -) making the carrier spaces of biologic variables and the space of transformations inhomogeneous. It endows biologic systems with a rich functional structure, giving the regulatory elements special differentiating features to form steady autonomous subsystems reducible to one-dimensional components.
[ { "created": "Sat, 31 May 2014 17:47:10 GMT", "version": "v1" } ]
2014-06-03
[ [ "Davydyan", "Garri", "" ] ]
The notion of a biologic system implies a functional wholeness of the comprising system components. Positive and negative feedback are examples of how the idea of uniting anatomical elements into a whole functional structure was successfully used in practice to explain regulatory mechanisms in biology and medicine. There are numerous examples of functional and metabolic pathways which are not regulated by feedback loops and instead have a structure of reciprocal relationships. Expressed in matrix form, positive feedback, negative feedback, and reciprocal links represent three basis elements of the Lie algebra sl(2,R) of the special linear group SL(2,R). It is proposed that this mathematical group structure can be realized through the three regulatory elements playing the role of a functional basis of biologic systems. The structure of the basis elements endows the space of biological variables with an indefinite metric. The metric structure resembles Minkowski's space-time (+, -, -), making the carrier spaces of biologic variables and the space of transformations inhomogeneous. It endows biologic systems with a rich functional structure, giving the regulatory elements special differentiating features to form steady autonomous subsystems reducible to one-dimensional components.
2303.17776
Marissa Connor
Marissa Connor, Bruno Olshausen, Christopher Rozell
Learning Internal Representations of 3D Transformations from 2D Projected Inputs
null
null
null
null
q-bio.NC cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When interacting in a three dimensional world, humans must estimate 3D structure from visual inputs projected down to two dimensional retinal images. It has been shown that humans use the persistence of object shape over motion-induced transformations as a cue to resolve depth ambiguity when solving this underconstrained problem. With the aim of understanding how biological vision systems may internally represent 3D transformations, we propose a computational model, based on a generative manifold model, which can be used to infer 3D structure from the motion of 2D points. Our model can also learn representations of the transformations with minimal supervision, providing a proof of concept for how humans may develop internal representations on a developmental or evolutionary time scale. Focused on rotational motion, we show how our model infers depth from moving 2D projected points, learns 3D rotational transformations from 2D training stimuli, and compares to human performance on psychophysical structure-from-motion experiments.
[ { "created": "Fri, 31 Mar 2023 02:43:01 GMT", "version": "v1" } ]
2023-04-03
[ [ "Connor", "Marissa", "" ], [ "Olshausen", "Bruno", "" ], [ "Rozell", "Christopher", "" ] ]
When interacting in a three-dimensional world, humans must estimate 3D structure from visual inputs projected down to two-dimensional retinal images. It has been shown that humans use the persistence of object shape over motion-induced transformations as a cue to resolve depth ambiguity when solving this underconstrained problem. With the aim of understanding how biological vision systems may internally represent 3D transformations, we propose a computational model, based on a generative manifold model, which can be used to infer 3D structure from the motion of 2D points. Our model can also learn representations of the transformations with minimal supervision, providing a proof of concept for how humans may develop internal representations on a developmental or evolutionary time scale. Focusing on rotational motion, we show how our model infers depth from moving 2D projected points, learns 3D rotational transformations from 2D training stimuli, and compares to human performance on psychophysical structure-from-motion experiments.
2407.09954
\'Elisabeth Remy
Nadine Ben Boina, Brigitte Moss\'e, Ana\"is Baudot, \'Elisabeth Remy
Refining Boolean models with the partial most permissive scheme
null
null
null
null
q-bio.MN cs.DM
http://creativecommons.org/publicdomain/zero/1.0/
Motivation: In systems biology, modelling strategies aim to decode how molecular components interact to generate dynamical behaviour. Boolean modelling is more and more used, but the description of the dynamics from two-levels components may be too limited to capture certain dynamical properties. %However, in Boolean models, the description of the dynamics may be too limited to capture certain dynamical properties. Multivalued logical models can overcome this limitation by allowing more than two levels for each component. However, multivaluing a Boolean model is challenging. Results: We present MRBM, a method for efficiently identifying the components of a Boolean model to be multivalued in order to capture specific fixed-point reachabilities in the asynchronous dynamics. To this goal, we defined a new updating scheme locating reachability properties in the most permissive dynamics. MRBM is supported by mathematical demonstrations and illustrated on a toy model and on two models of stem cell differentiation.
[ { "created": "Sat, 13 Jul 2024 17:32:27 GMT", "version": "v1" } ]
2024-07-16
[ [ "Boina", "Nadine Ben", "" ], [ "Mossé", "Brigitte", "" ], [ "Baudot", "Anaïs", "" ], [ "Remy", "Élisabeth", "" ] ]
Motivation: In systems biology, modelling strategies aim to decode how molecular components interact to generate dynamical behaviour. Boolean modelling is increasingly used, but describing the dynamics with two-level components may be too limited to capture certain dynamical properties. Multivalued logical models can overcome this limitation by allowing more than two levels for each component. However, multivaluing a Boolean model is challenging. Results: We present MRBM, a method for efficiently identifying the components of a Boolean model to be multivalued in order to capture specific fixed-point reachabilities in the asynchronous dynamics. To this end, we defined a new updating scheme locating reachability properties in the most permissive dynamics. MRBM is supported by mathematical demonstrations and illustrated on a toy model and on two models of stem cell differentiation.
2012.05827
Junhyuk Kang
Anjana Jayaraman, Junhyuk Kang, Megan A. Jamiolkowski, William R. Wagner, Brian J. Kirby, and James F. Antaki
Synergistic effect of shear and ADP on platelet growth on ZTA and Ti6Al4V surfaces
Disagreements with data interpretation and analysis
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuous-flow ventricular assist devices (VADs) have been an increasingly common, life-saving therapy for advanced heart-failure patients, but elevate the risk of thrombosis due to a combination of non-physiological hemodynamics and synthetic biomaterials. Limited work has been done to address platelet adhesion and aggregation on artificial surfaces under flow with sub-threshold concentrations of weak agonists. We perfused a blood analog containing hemoglobin-depleted red blood cells and fluorescently labeled platelets across a titanium alloy (Ti6Al4V) and zirconia-toughened alumina (ZTA) surface at shear rates of 400 and 1000 s-1. Upstream of the specimen, sub-threshold concentrations of ADP were uniformly introduced at concentrations of 0, 5, and 10 nM. Time-lapse videos of depositing platelets were recorded, and the percentage of the surface covered was quantified. Surface coverage percentages at 400 s-1and 1000 s-1were compared for each concentration of ADP and material surface combination. We observed a threshold concentration of ADP that expedites platelet deposition that is dependent on both shear and material surface chemistry. Additionally, we observed embolization when thrombus areas exceeded 300{\mu}m2, which was dependent on the combination of shear, ADP concentration, and material surface. This work is the first to simultaneously examine the three key contributing factors leading to thrombotic events. Our findings assist in considering alternative material choices constituting VADs and the need to address material reactivity in assessing antiplatelet agent tests.
[ { "created": "Thu, 10 Dec 2020 16:55:59 GMT", "version": "v1" }, { "created": "Tue, 6 Jul 2021 17:31:40 GMT", "version": "v2" } ]
2021-07-07
[ [ "Jayaraman", "Anjana", "" ], [ "Kang", "Junhyuk", "" ], [ "Jamiolkowski", "Megan A.", "" ], [ "Wagner", "William R.", "" ], [ "Kirby", "Brian J.", "" ], [ "Antaki", "James F.", "" ] ]
Continuous-flow ventricular assist devices (VADs) have become an increasingly common, life-saving therapy for advanced heart-failure patients, but they elevate the risk of thrombosis due to a combination of non-physiological hemodynamics and synthetic biomaterials. Limited work has been done to address platelet adhesion and aggregation on artificial surfaces under flow with sub-threshold concentrations of weak agonists. We perfused a blood analog containing hemoglobin-depleted red blood cells and fluorescently labeled platelets across titanium alloy (Ti6Al4V) and zirconia-toughened alumina (ZTA) surfaces at shear rates of 400 and 1000 s^-1. Upstream of the specimen, sub-threshold concentrations of ADP were uniformly introduced at 0, 5, and 10 nM. Time-lapse videos of depositing platelets were recorded, and the percentage of the surface covered was quantified. Surface coverage percentages at 400 s^-1 and 1000 s^-1 were compared for each combination of ADP concentration and material surface. We observed a threshold concentration of ADP that expedites platelet deposition, dependent on both shear and material surface chemistry. Additionally, we observed embolization when thrombus areas exceeded 300 µm^2, which was dependent on the combination of shear, ADP concentration, and material surface. This work is the first to simultaneously examine the three key contributing factors leading to thrombotic events. Our findings assist in considering alternative material choices for VADs and highlight the need to address material reactivity when assessing antiplatelet agents.
1404.2685
Tatiana T. Marquez-Lago
Faustino S\'anchez-Gardu\~no, Pedro Miramontes and Tatiana T. Marquez-Lago
Role reversal in a predator-prey interaction
23 pages, 7 figures
Royal Society Open Science 1: 140186, 2014
10.1098/rsos.140186
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predator-prey relationships are one of the most studied interactions in population ecology. However, little attention has been paid to the possibility of role exchange between species once determined as predators and preys, despite firm field evidence of such phenomena in the nature. In this paper, we build a model capable of reproducing the main phenomenological features of one reported predator-prey role-reversal system, and present results for both the homogeneous and the space explicit cases. We find that, depending on the choice of parameters, our role-reversal dynamical system exhibits excitable-like behaviour, generating waves of species' concentrations that propagate through space.
[ { "created": "Thu, 10 Apr 2014 04:19:26 GMT", "version": "v1" } ]
2014-10-31
[ [ "Sánchez-Garduño", "Faustino", "" ], [ "Miramontes", "Pedro", "" ], [ "Marquez-Lago", "Tatiana T.", "" ] ]
Predator-prey relationships are among the most studied interactions in population ecology. However, little attention has been paid to the possibility of role exchange between species once determined as predators and prey, despite firm field evidence of such phenomena in nature. In this paper, we build a model capable of reproducing the main phenomenological features of one reported predator-prey role-reversal system, and present results for both the homogeneous and the spatially explicit cases. We find that, depending on the choice of parameters, our role-reversal dynamical system exhibits excitable-like behaviour, generating waves of species' concentrations that propagate through space.
1502.00201
Carsten Lemmen
Carsten Lemmen
Cultural and Demic Diffusion of First Farmers, Herders, and their Innovations Across Eurasia
9 pages, 3 figures, revised version submitted to Documenta Prehistoric
null
10.4312/dp.42.5
null
q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
Was the spread of agropastoralism from the Eurasian founder regions dominated by demic or by cultural diffusion? This study employs a mathematical model of regional sociocultural development that includes different diffusion processes, local innovation and societal adaptation. Simulations hindcast the emergence and expansion of agropastoral life style in 294 regions of Eurasia and North Africa. Different scenarios for demic and diffusive exchange processes between adjacent regions are contrasted and the spatiotemporal pattern of diffusive events is evaluated. This study supports from a modeling perspective the hypothesis that there is no simple or exclusive demic or cultural diffusion, but that in most regions of Eurasia a combination of demic and cultural processes were important. Furthermore, we demonstrate the strong spatial and temporal variability in the balance of spread processes. Each region shows sometimes more demic, and at other times more cultural diffusion. Only few, possibly environmentally marginal, areas show a dominance of demic diffusion. This study affirms that diffusion processes should be investigated in a diachronic fashion and not from a time-integrated perspective.
[ { "created": "Sun, 1 Feb 2015 05:14:26 GMT", "version": "v1" }, { "created": "Mon, 12 Oct 2015 08:44:48 GMT", "version": "v2" } ]
2017-02-24
[ [ "Lemmen", "Carsten", "" ] ]
Was the spread of agropastoralism from the Eurasian founder regions dominated by demic or by cultural diffusion? This study employs a mathematical model of regional sociocultural development that includes different diffusion processes, local innovation and societal adaptation. Simulations hindcast the emergence and expansion of the agropastoral lifestyle in 294 regions of Eurasia and North Africa. Different scenarios for demic and diffusive exchange processes between adjacent regions are contrasted, and the spatiotemporal pattern of diffusive events is evaluated. This study supports, from a modeling perspective, the hypothesis that there is no simple or exclusive demic or cultural diffusion, but that in most regions of Eurasia a combination of demic and cultural processes was important. Furthermore, we demonstrate the strong spatial and temporal variability in the balance of spread processes. Each region shows sometimes more demic, and at other times more cultural, diffusion. Only a few, possibly environmentally marginal, areas show a dominance of demic diffusion. This study affirms that diffusion processes should be investigated in a diachronic fashion and not from a time-integrated perspective.
1101.3474
Gang Fang
Gang Fang, Michael Steinbach, Chad L. Myers, Vipin Kumar
Integration of Differential Gene-combination Search and Gene Set Enrichment Analysis: A General Approach
null
null
null
null
q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene Set Enrichment Analysis (GSEA) and its variations aim to discover collections of genes that show moderate but coordinated differences in expression. However, such techniques may be ineffective if many individual genes in a phenotype-related gene set have weak discriminative power. A potential solution is to search for combinations of genes that are highly differentiating even when individual genes are not. Although such techniques have been developed, these approaches have not been used with GSEA to any significant degree because of the large number of potential gene combinations and the heterogeneity of measures that assess the differentiation provided by gene groups of different sizes. To integrate the search for differentiating gene combinations and GSEA, we propose a general framework with two key components: (A) a procedure that reduces the number of scores to be handled by GSEA to the number of genes by summarizing the scores of the gene combinations involving a particular gene in a single score, and (B) a procedure to integrate the heterogeneous scores from combinations of different sizes and from different gene combination measures by mapping the scores to p-values. Experiments on four gene expression data sets demonstrate that the integration of GSEA and gene combination search can enhance the power of traditional GSEA by discovering gene sets that include genes with weak individual differentiation but strong joint discriminative power. Also, gene sets discovered by the integrative framework share several common biological processes and improve the consistency of the results among three lung cancer data sets.
[ { "created": "Tue, 18 Jan 2011 15:13:30 GMT", "version": "v1" } ]
2011-01-19
[ [ "Fang", "Gang", "" ], [ "Steinbach", "Michael", "" ], [ "Myers", "Chad L.", "" ], [ "Kumar", "Vipin", "" ] ]
Gene Set Enrichment Analysis (GSEA) and its variations aim to discover collections of genes that show moderate but coordinated differences in expression. However, such techniques may be ineffective if many individual genes in a phenotype-related gene set have weak discriminative power. A potential solution is to search for combinations of genes that are highly differentiating even when individual genes are not. Although such techniques have been developed, these approaches have not been used with GSEA to any significant degree because of the large number of potential gene combinations and the heterogeneity of measures that assess the differentiation provided by gene groups of different sizes. To integrate the search for differentiating gene combinations and GSEA, we propose a general framework with two key components: (A) a procedure that reduces the number of scores to be handled by GSEA to the number of genes by summarizing the scores of the gene combinations involving a particular gene in a single score, and (B) a procedure to integrate the heterogeneous scores from combinations of different sizes and from different gene combination measures by mapping the scores to p-values. Experiments on four gene expression data sets demonstrate that the integration of GSEA and gene combination search can enhance the power of traditional GSEA by discovering gene sets that include genes with weak individual differentiation but strong joint discriminative power. Also, gene sets discovered by the integrative framework share several common biological processes and improve the consistency of the results among three lung cancer data sets.
q-bio/0501038
Benjamin Lindner
Benjamin Lindner and Andre Longtin
Comment on: "Characterization of subthreshold voltage fluctuations in neuronal membranes" by M. Rudolph and A. Destexhe
22 pages, 6 figures; new results have been added including a criticism of a recent result for the probability density of the membrane voltage by Rudolph and Destexhe
null
null
null
q-bio.NC
null
In two recent papers, Rudolph and Destexhe (Neural Comp. {\bf 15}, 2577-2618, 2003; Neural Comp. in press, 2005) studied a leaky integrator model (i.e. an RC-circuit) driven by correlated (``colored'') Gaussian conductance noise and Gaussian current noise. In the first paper they derived an expression for the stationary probability density of the membrane voltage; in the second paper this expression was modified to cover a larger parameter regime. Here we show by standard analysis of solvable limit cases (white-noise limit of additive and multiplicative noise sources; only slow multiplicative noise; only additive noise) and by numerical simulations that their first result does not hold for the general colored-noise case and uncover the errors made in the derivation of a Fokker-Planck equation for the probability density. Furthermore, we demonstrate analytically (including an exact integral expression for the time-dependent mean value of the voltage) and by comparison to simulation results, that the extended expression for the probability density works much better but still does not solve exactly the full colored-noise problem. We also show that at stronger synaptic input the stationary mean value of the linear voltage model may diverge and give an exact condition relating the system parameters for which this takes place.
[ { "created": "Fri, 28 Jan 2005 19:47:27 GMT", "version": "v1" }, { "created": "Wed, 15 Jun 2005 12:33:30 GMT", "version": "v2" } ]
2007-05-23
[ [ "Lindner", "Benjamin", "" ], [ "Longtin", "Andre", "" ] ]
In two recent papers, Rudolph and Destexhe (Neural Comp. {\bf 15}, 2577-2618, 2003; Neural Comp. in press, 2005) studied a leaky integrator model (i.e. an RC-circuit) driven by correlated (``colored'') Gaussian conductance noise and Gaussian current noise. In the first paper they derived an expression for the stationary probability density of the membrane voltage; in the second paper this expression was modified to cover a larger parameter regime. Here we show by standard analysis of solvable limit cases (white-noise limit of additive and multiplicative noise sources; only slow multiplicative noise; only additive noise) and by numerical simulations that their first result does not hold for the general colored-noise case and uncover the errors made in the derivation of a Fokker-Planck equation for the probability density. Furthermore, we demonstrate analytically (including an exact integral expression for the time-dependent mean value of the voltage) and by comparison to simulation results, that the extended expression for the probability density works much better but still does not solve exactly the full colored-noise problem. We also show that at stronger synaptic input the stationary mean value of the linear voltage model may diverge and give an exact condition relating the system parameters for which this takes place.
2211.15669
Yeojin Kim
Yeojin Kim, Hyunju Lee
PINNet: a deep neural network with pathway prior knowledge for Alzheimer's disease
null
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification of Alzheimer's Disease (AD)-related transcriptomic signatures from blood is important for early diagnosis of the disease. Deep learning techniques are potent classifiers for AD diagnosis, but most have been unable to identify biomarkers because of their lack of interpretability. To address these challenges, we propose a pathway information-based neural network (PINNet) to predict AD patients and analyze blood and brain transcriptomic signatures using an interpretable deep learning model. PINNet is a deep neural network (DNN) model with pathway prior knowledge from either the Gene Ontology or Kyoto Encyclopedia of Genes and Genomes databases. A backpropagation-based model interpretation method was then applied to reveal essential pathways and genes for predicting AD. We compared the performance of PINNet with that of a DNN model without pathway information. PINNet outperformed or performed similarly to the DNN without pathway information using blood and brain gene expressions, respectively. Moreover, PINNet considers more AD-related genes as essential features than the DNN without pathway information in the learning process. Pathway analysis of protein-protein interaction modules of highly contributing genes showed that AD-related genes in blood were enriched with cell migration, PI3K-Akt, MAPK signaling, and apoptosis. The pathways enriched in the brain module included cell migration, PI3K-Akt, MAPK signaling, apoptosis, protein ubiquitination, and T-cell activation. Collectively, with prior knowledge about pathways, PINNet reveals essential pathways related to AD.
[ { "created": "Sun, 27 Nov 2022 05:00:26 GMT", "version": "v1" } ]
2022-11-30
[ [ "Kim", "Yeojin", "" ], [ "Lee", "Hyunju", "" ] ]
Identification of Alzheimer's Disease (AD)-related transcriptomic signatures from blood is important for early diagnosis of the disease. Deep learning techniques are potent classifiers for AD diagnosis, but most have been unable to identify biomarkers because of their lack of interpretability. To address these challenges, we propose a pathway information-based neural network (PINNet) to predict AD patients and analyze blood and brain transcriptomic signatures using an interpretable deep learning model. PINNet is a deep neural network (DNN) model with pathway prior knowledge from either the Gene Ontology or Kyoto Encyclopedia of Genes and Genomes databases. A backpropagation-based model interpretation method was then applied to reveal essential pathways and genes for predicting AD. We compared the performance of PINNet with that of a DNN model without pathway information. PINNet outperformed or performed similarly to the DNN without pathway information using blood and brain gene expressions, respectively. Moreover, PINNet considers more AD-related genes as essential features than the DNN without pathway information in the learning process. Pathway analysis of protein-protein interaction modules of highly contributing genes showed that AD-related genes in blood were enriched with cell migration, PI3K-Akt, MAPK signaling, and apoptosis. The pathways enriched in the brain module included cell migration, PI3K-Akt, MAPK signaling, apoptosis, protein ubiquitination, and T-cell activation. Collectively, with prior knowledge about pathways, PINNet reveals essential pathways related to AD.
q-bio/0609029
Changbong Hyeon
Changbong Hyeon and D. Thirumalai
Mechanical unfolding of RNA: From hairpins to structures with internal multiloops
26 pages, 6 figures, Biophys. J. (in press)
Biophys. J. vol 92 (2007) 731-743
10.1529/biophysj.106.093062
null
q-bio.BM cond-mat.soft physics.bio-ph q-bio.QM
null
Mechanical unfolding of RNA structures, ranging from hairpins to ribozymes, using laser optical tweezer (LOT) experiments has begun to reveal the features of the energy landscape that cannot be easily explored using conventional experiments. Upon application of constant force ($f$), RNA hairpins undergo cooperative transitions from folded to unfolded states whereas subdomains of ribozymes unravel one at a time. Here, we use a self-organized polymer (SOP) model and Brownian dynamics simulations to probe mechanical unfolding at constant force and constant-loading rate of four RNA structures of varying complexity. Our work shows (i) the response of RNA to force is largely determined by the native structure; (ii) only by probing mechanical unfolding over a wide range of forces can the underlying energy landscape be fully explored.
[ { "created": "Wed, 20 Sep 2006 03:31:10 GMT", "version": "v1" } ]
2009-11-13
[ [ "Hyeon", "Changbong", "" ], [ "Thirumalai", "D.", "" ] ]
Mechanical unfolding of RNA structures, ranging from hairpins to ribozymes, using laser optical tweezer (LOT) experiments has begun to reveal the features of the energy landscape that cannot be easily explored using conventional experiments. Upon application of constant force ($f$), RNA hairpins undergo cooperative transitions from folded to unfolded states whereas subdomains of ribozymes unravel one at a time. Here, we use a self-organized polymer (SOP) model and Brownian dynamics simulations to probe mechanical unfolding at constant force and constant-loading rate of four RNA structures of varying complexity. Our work shows (i) the response of RNA to force is largely determined by the native structure; (ii) only by probing mechanical unfolding over a wide range of forces can the underlying energy landscape be fully explored.
1406.4006
Javier Buldu
D. Papo, M. Zanin, J.A. Pineda-Pardo, S. Boccaletti and J.M. Buld\'u
Functional brain networks: great expectations, hard times, and the big leap forward
12 pages, no figures. Philosophical Transactions of the Royal Society B, in press
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many physical and biological systems can be studied using complex network theory, a new statistical physics understanding of graph theory. The recent application of complex network theory to the study of functional brain networks generated great enthusiasm as it allows addressing hitherto non-standard issues in the field, such as efficiency of brain functioning or vulnerability to damage. However, in spite of its high degree of generality, the theory was originally designed to describe systems profoundly different from the brain. We discuss some important caveats in the wholesale application of existing tools and concepts to a field they were not originally designed to describe. At the same time, we argue that complex network theory has not yet been taken full advantage of, as many of its important aspects are yet to make their appearance in the neuroscience literature. Finally, we propose that, rather than simply borrowing from an existing theory, functional neural networks can inspire a fundamental reformulation of complex network theory, to account for its exquisitely complex functioning mode.
[ { "created": "Mon, 16 Jun 2014 13:19:38 GMT", "version": "v1" } ]
2014-06-17
[ [ "Papo", "D.", "" ], [ "Zanin", "M.", "" ], [ "Pineda-Pardo", "J. A.", "" ], [ "Boccaletti", "S.", "" ], [ "Buldú", "J. M.", "" ] ]
Many physical and biological systems can be studied using complex network theory, a new statistical physics understanding of graph theory. The recent application of complex network theory to the study of functional brain networks generated great enthusiasm as it allows addressing hitherto non-standard issues in the field, such as efficiency of brain functioning or vulnerability to damage. However, in spite of its high degree of generality, the theory was originally designed to describe systems profoundly different from the brain. We discuss some important caveats in the wholesale application of existing tools and concepts to a field they were not originally designed to describe. At the same time, we argue that complex network theory has not yet been taken full advantage of, as many of its important aspects are yet to make their appearance in the neuroscience literature. Finally, we propose that, rather than simply borrowing from an existing theory, functional neural networks can inspire a fundamental reformulation of complex network theory, to account for its exquisitely complex functioning mode.
2209.00584
Josinaldo Menezes
J. Menezes, S. Batista, E. Rangel
Spatial organisation plasticity reduces disease infection risk in rock-paper-scissors models
8 pages, 8 figures
Biosystems 221, 104777 (2022)
10.1016/j.biosystems.2022.104777
null
q-bio.PE nlin.AO nlin.PS physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a three-species cyclic game system where organisms face a contagious disease whose virulence may be changed by a pathogen mutation. As a responsive defence strategy, organisms' mobility is restricted to reduce disease dissemination in the system. The impact of the collective self-preservation strategy on the disease infection risk is investigated by performing stochastic simulations of the spatial version of the rock-paper-scissors game. Our outcomes show that the mobility control strategy induces plasticity in the spatial patterns, with groups of organisms of the same species inhabiting spatial domains whose characteristic length scales depend on the level of dispersal restrictions. The spatial organisation plasticity allows the ecosystems to adapt to minimise the individuals' disease contamination risk if an eventual pathogen alters the disease virulence. We discover that if a pathogen mutation makes the disease more transmissible or less lethal, the organisms benefit more if the mobility is not strongly restricted, thus forming large spatial domains. Conversely, the benefits of protecting against a pathogen causing a less contagious or deadlier disease are maximised if the average size of groups of individuals of the same species is significantly limited. Our findings may help biologists understand the effects of dispersal control as a conservation strategy in ecosystems affected by epidemic outbreaks.
[ { "created": "Thu, 1 Sep 2022 16:55:14 GMT", "version": "v1" } ]
2022-12-27
[ [ "Menezes", "J.", "" ], [ "Batista", "S.", "" ], [ "Rangel", "E.", "" ] ]
We study a three-species cyclic game system where organisms face a contagious disease whose virulence may be changed by a pathogen mutation. As a responsive defence strategy, organisms' mobility is restricted to reduce disease dissemination in the system. The impact of the collective self-preservation strategy on the disease infection risk is investigated by performing stochastic simulations of the spatial version of the rock-paper-scissors game. Our outcomes show that the mobility control strategy induces plasticity in the spatial patterns, with groups of organisms of the same species inhabiting spatial domains whose characteristic length scales depend on the level of dispersal restrictions. The spatial organisation plasticity allows the ecosystems to adapt to minimise the individuals' disease contamination risk if an eventual pathogen alters the disease virulence. We discover that if a pathogen mutation makes the disease more transmissible or less lethal, the organisms benefit more if the mobility is not strongly restricted, thus forming large spatial domains. Conversely, the benefits of protecting against a pathogen causing a less contagious or deadlier disease are maximised if the average size of groups of individuals of the same species is significantly limited. Our findings may help biologists understand the effects of dispersal control as a conservation strategy in ecosystems affected by epidemic outbreaks.
q-bio/0610008
Jorge Gon\c{c}alves
Jorge Goncalves and Sean Warnick
Dynamical Structure Functions for the Estimation of LTI Networks with Limited Information
7 pages, 2 figures
null
null
null
q-bio.MN q-bio.OT
null
This research explores the role and representation of network structure for LTI systems. We demonstrate that transfer functions contain no structural information without more assumptions being made about the system, assumptions that we believe are unreasonable when dealing with truly complex systems. We then introduce Dynamical Structure Functions as an alternative, graphical-model based representation of LTI systems that contains both dynamical and structural information about the system. We use Dynamical Structure to prove necessary and sufficient conditions for estimating structure from data, and demonstrate, for example, the danger of attempting to use steady-state information to estimate network structure.
[ { "created": "Tue, 3 Oct 2006 11:51:38 GMT", "version": "v1" } ]
2009-09-29
[ [ "Goncalves", "Jorge", "" ], [ "Warnick", "Sean", "" ] ]
This research explores the role and representation of network structure for LTI systems. We demonstrate that transfer functions contain no structural information without more assumptions being made about the system, assumptions that we believe are unreasonable when dealing with truly complex systems. We then introduce Dynamical Structure Functions as an alternative, graphical-model based representation of LTI systems that contains both dynamical and structural information about the system. We use Dynamical Structure to prove necessary and sufficient conditions for estimating structure from data, and demonstrate, for example, the danger of attempting to use steady-state information to estimate network structure.
1506.06446
Huw Ogilvie
Huw A. Ogilvie, Joseph Heled, Dong Xie and Alexei J. Drummond
Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods
null
Syst Biol (2016) 65 (3): 381-396
10.1093/sysbio/syv118
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behaviour of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent.
[ { "created": "Mon, 22 Jun 2015 02:27:28 GMT", "version": "v1" }, { "created": "Tue, 22 Sep 2015 10:06:48 GMT", "version": "v2" }, { "created": "Tue, 6 Oct 2015 02:34:27 GMT", "version": "v3" } ]
2016-06-13
[ [ "Ogilvie", "Huw A.", "" ], [ "Heled", "Joseph", "" ], [ "Xie", "Dong", "" ], [ "Drummond", "Alexei J.", "" ] ]
Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behaviour of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent.
0904.0636
Thierry Rabilloud
Pierre Lescuyer, Mireille Chevallet (BBSI), Sylvie Luche (BBSI), Thierry Rabilloud (BBSI)
Organelle proteomics
null
Curr Protoc Protein Sci Chapter 24 (2006) Unit 24.2
10.1002/0471140864.ps2402s44
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This unit describes strategies for studying the proteomes of organelles, which is one example of targeted proteomics. It relies heavily on previously published units dealing with organelle preparation, protein solubilization, and proteomics techniques. A specific commentary for organelle proteomics is provided. Specific protocols for the isolation of nuclei from various sources (cell cultures, tissues) are also provided.
[ { "created": "Fri, 3 Apr 2009 18:53:00 GMT", "version": "v1" } ]
2009-04-06
[ [ "Lescuyer", "Pierre", "", "BBSI" ], [ "Chevallet", "Mireille", "", "BBSI" ], [ "Luche", "Sylvie", "", "BBSI" ], [ "Rabilloud", "Thierry", "", "BBSI" ] ]
This unit describes strategies for studying the proteomes of organelles, which is one example of targeted proteomics. It relies heavily on previously published units dealing with organelle preparation, protein solubilization, and proteomics techniques. A specific commentary for organelle proteomics is provided. Specific protocols for the isolation of nuclei from various sources (cell cultures, tissues) are also provided.
q-bio/0510017
Can Ozan Tan Mr.
Stacy L. Ozesmi, Uygar Ozesmi and Can Ozan Tan
Methodological Issues in Building, Training, and Testing Artificial Neural Networks
22 pages, 2 figures. Presented in ISEI3 (2002). Ecological Modelling in press
Ecological Modelling, 195:83-93. 2006
10.1016/j.ecolmodel.2005.11.012
null
q-bio.PE q-bio.QM
null
We review the use of artificial neural networks, particularly the feedforward multilayer perceptron with back-propagation for training (MLP), in ecological modelling. Overtraining on data, or giving only vague references to how it was avoided, is the major problem. Various methods can be used to determine when to stop training in artificial neural networks: 1) early stopping based on cross-validation, 2) stopping after an analyst-defined error is reached or after the error levels off, 3) use of a test data set. We do not recommend the third method as the test data set is then not independent of model development. Many studies used the testing data to optimize the model and training. Although this method may give the best model for that set of data it does not give generalizability or improve understanding of the study system. The importance of an independent data set cannot be overemphasized as we found dramatic differences in model accuracy assessed with prediction accuracy on the training data set, as estimated with bootstrapping, and from use of an independent data set. The comparison of the artificial neural network with a general linear model (GLM) as a standard procedure is recommended because a GLM may perform as well or better than the MLP. MLP models should not be treated as black box models but instead techniques such as sensitivity analyses, input variable relevances, neural interpretation diagrams, randomization tests, and partial derivatives should be used to make the model more transparent, and further our ecological understanding, which is an important goal of the modelling process. Based on our experience we discuss how to build a MLP model and how to optimize the parameters and architecture.
[ { "created": "Fri, 7 Oct 2005 19:17:27 GMT", "version": "v1" } ]
2011-07-29
[ [ "Ozesmi", "Stacy L.", "" ], [ "Ozesmi", "Uygar", "" ], [ "Tan", "Can Ozan", "" ] ]
We review the use of artificial neural networks, particularly the feedforward multilayer perceptron with back-propagation for training (MLP), in ecological modelling. Overtraining on data, or giving only vague references to how it was avoided, is the major problem. Various methods can be used to determine when to stop training in artificial neural networks: 1) early stopping based on cross-validation, 2) stopping after an analyst-defined error is reached or after the error levels off, 3) use of a test data set. We do not recommend the third method as the test data set is then not independent of model development. Many studies used the testing data to optimize the model and training. Although this method may give the best model for that set of data it does not give generalizability or improve understanding of the study system. The importance of an independent data set cannot be overemphasized as we found dramatic differences in model accuracy assessed with prediction accuracy on the training data set, as estimated with bootstrapping, and from use of an independent data set. The comparison of the artificial neural network with a general linear model (GLM) as a standard procedure is recommended because a GLM may perform as well or better than the MLP. MLP models should not be treated as black box models but instead techniques such as sensitivity analyses, input variable relevances, neural interpretation diagrams, randomization tests, and partial derivatives should be used to make the model more transparent, and further our ecological understanding, which is an important goal of the modelling process. Based on our experience we discuss how to build a MLP model and how to optimize the parameters and architecture.
1211.1737
Sonal Singhal
Sonal Singhal
De novo genomic analyses for non-model organisms: an evaluation of methods across a multi-species data set
5 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput sequencing (HTS) is revolutionizing biological research by enabling scientists to quickly and cheaply query variation at a genomic scale. Despite the increasing ease of obtaining such data, using these data effectively still poses notable challenges, especially for those working with organisms without a high-quality reference genome. For every stage of analysis - from assembly to annotation to variant discovery - researchers have to distinguish technical artifacts from the biological realities of their data before they can make inferences. In this work, I explore these challenges by generating a large de novo comparative transcriptomic data set for a clade of lizards and constructing a pipeline to analyze these data. Then, using a combination of novel metrics and an externally validated variant data set, I test the efficacy of my approach, identify areas of improvement, and propose ways to minimize these errors. I find that with careful data curation, HTS can be a powerful tool for generating genomic data for non-model organisms.
[ { "created": "Thu, 8 Nov 2012 00:42:04 GMT", "version": "v1" } ]
2012-11-09
[ [ "Singhal", "Sonal", "" ] ]
High-throughput sequencing (HTS) is revolutionizing biological research by enabling scientists to quickly and cheaply query variation at a genomic scale. Despite the increasing ease of obtaining such data, using these data effectively still poses notable challenges, especially for those working with organisms without a high-quality reference genome. For every stage of analysis - from assembly to annotation to variant discovery - researchers have to distinguish technical artifacts from the biological realities of their data before they can make inferences. In this work, I explore these challenges by generating a large de novo comparative transcriptomic data set for a clade of lizards and constructing a pipeline to analyze these data. Then, using a combination of novel metrics and an externally validated variant data set, I test the efficacy of my approach, identify areas of improvement, and propose ways to minimize these errors. I find that with careful data curation, HTS can be a powerful tool for generating genomic data for non-model organisms.
1603.02266
Sara Garbarino Dr
Fabrice Delbary, Sara Garbarino and Valentina Vivaldi
Compartmental analysis of dynamic nuclear medicine data: models and identifiability
null
Inverse Problems 32 (2016), 125010
10.1088/0266-5611/32/12/125010
null
q-bio.QM math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compartmental models based on tracer mass balance are extensively used in clinical and pre-clinical nuclear medicine in order to obtain quantitative information on tracer metabolism in biological tissue. This paper is the first of a series of two that deal with the problem of tracer coefficient estimation via compartmental modelling in an inverse problem framework. Specifically, here we discuss the identifiability problem for a general n-dimensional compartmental system and provide uniqueness results in the case of two-compartment and three-compartment models. The second paper will utilize this framework in order to show how non-linear regularization schemes can be applied to obtain numerical estimates of the tracer coefficients in the case of nuclear medicine data corresponding to brain, liver and kidney physiology.
[ { "created": "Mon, 7 Mar 2016 12:14:49 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2016 17:29:23 GMT", "version": "v2" } ]
2018-08-21
[ [ "Delbary", "Fabrice", "" ], [ "Garbarino", "Sara", "" ], [ "Vivaldi", "Valentina", "" ] ]
Compartmental models based on tracer mass balance are extensively used in clinical and pre-clinical nuclear medicine in order to obtain quantitative information on tracer metabolism in biological tissue. This paper is the first of a series of two that deal with the problem of tracer coefficient estimation via compartmental modelling in an inverse problem framework. Specifically, here we discuss the identifiability problem for a general n-dimensional compartmental system and provide uniqueness results in the case of two-compartment and three-compartment models. The second paper will utilize this framework in order to show how non-linear regularization schemes can be applied to obtain numerical estimates of the tracer coefficients in the case of nuclear medicine data corresponding to brain, liver and kidney physiology.
2210.12158
Hansheng Xue
Hansheng Xue, Vaibhav Rajan, Yu Lin
Graph Coloring via Neural Networks for Haplotype Assembly and Viral Quasispecies Reconstruction
Accepted by NeurIPS 2022
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding genetic variation, e.g., through mutations, in organisms is crucial to unravel their effects on the environment and human health. A fundamental characterization can be obtained by solving the haplotype assembly problem, which yields the variation across multiple copies of chromosomes. Variations among fast-evolving viruses that lead to different strains (called quasispecies) are also deciphered with similar approaches. In both these cases, high-throughput sequencing technologies that provide oversampled mixtures of large noisy fragments (reads) of genomes are used to infer constituent components (haplotypes or quasispecies). The problem is harder for polyploid species where there are more than two copies of chromosomes. State-of-the-art neural approaches to solve this NP-hard problem do not adequately model relations among the reads that are important for deconvolving the input signal. We address this problem by developing a new method, called NeurHap, that combines graph representation learning with combinatorial optimization. Our experiments demonstrate substantially better performance of NeurHap on real and synthetic datasets compared to competing approaches.
[ { "created": "Fri, 21 Oct 2022 12:53:09 GMT", "version": "v1" } ]
2022-10-25
[ [ "Xue", "Hansheng", "" ], [ "Rajan", "Vaibhav", "" ], [ "Lin", "Yu", "" ] ]
Understanding genetic variation, e.g., through mutations, in organisms is crucial to unravel their effects on the environment and human health. A fundamental characterization can be obtained by solving the haplotype assembly problem, which yields the variation across multiple copies of chromosomes. Variations among fast-evolving viruses that lead to different strains (called quasispecies) are also deciphered with similar approaches. In both these cases, high-throughput sequencing technologies that provide oversampled mixtures of large noisy fragments (reads) of genomes are used to infer constituent components (haplotypes or quasispecies). The problem is harder for polyploid species where there are more than two copies of chromosomes. State-of-the-art neural approaches to solve this NP-hard problem do not adequately model relations among the reads that are important for deconvolving the input signal. We address this problem by developing a new method, called NeurHap, that combines graph representation learning with combinatorial optimization. Our experiments demonstrate substantially better performance of NeurHap on real and synthetic datasets compared to competing approaches.
1301.6937
Marco M\"oller
Marco M\"oller and Barbara Drossel
Formation of the frozen core in critical Boolean Networks
null
null
10.1088/1367-2630/14/2/023051
null
q-bio.MN cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate numerically and analytically the formation of the frozen core in critical random Boolean networks with biased functions. We demonstrate that a previously used efficient algorithm for obtaining the frozen core, which starts from the nodes with constant functions, fails when the number of inputs per node exceeds 4. We present computer simulation data for the process of formation of the frozen core and its robustness, and we show that several important features of the data can be derived by using a mean-field calculation.
[ { "created": "Tue, 29 Jan 2013 14:40:27 GMT", "version": "v1" } ]
2013-02-14
[ [ "Möller", "Marco", "" ], [ "Drossel", "Barbara", "" ] ]
We investigate numerically and analytically the formation of the frozen core in critical random Boolean networks with biased functions. We demonstrate that a previously used efficient algorithm for obtaining the frozen core, which starts from the nodes with constant functions, fails when the number of inputs per node exceeds 4. We present computer simulation data for the process of formation of the frozen core and its robustness, and we show that several important features of the data can be derived by using a mean-field calculation.
1806.00716
Andrea Perna
Andrea Perna, Giulio Facchini and Jean Louis Deneubourg
Weber's law based perception and the stability of animal groups
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Group living animals form aggregations and flocks that remain cohesive in spite of internal movements of individuals. This is possible because individual group members repeatedly adjust their position and motion in response to the position and motion of other group members. Here we develop a theoretical approach to address the question, what general features -- if any -- underlie the interaction rules that mediate group stability in animals of all species? We do so by considering how the spatial organisation of a group would change in the complete absence of interactions. Without interactions, a group would disperse in a way that can be easily characterised in terms of Fick's diffusion equations. We can hence address the inverse theoretical problem of finding the individual-level interaction responses that are required to counterbalance diffusion and to preserve group stability. We show that an individual-level response to neighbour densities in the form of Weber's law (a 'universal' law describing the functioning of the sensory systems of animals of all species) results in an 'anti-diffusion' term at the group level. On short time scales, this anti-diffusion restores the initial group configuration in a way which is reminiscent of methods for image deblurring in image processing. We also show that any non-homogeneous, spatial density distribution can be preserved over time if individual movement patterns have the form of a Weber's law response. Weber's law describes the fundamental functioning of perceptual systems. Our study indicates that it is also a necessary -- but not sufficient -- feature of collective interactions in stable animal groups.
[ { "created": "Sun, 3 Jun 2018 00:19:59 GMT", "version": "v1" }, { "created": "Mon, 10 Sep 2018 17:04:08 GMT", "version": "v2" }, { "created": "Thu, 18 Apr 2019 22:05:27 GMT", "version": "v3" } ]
2019-04-22
[ [ "Perna", "Andrea", "" ], [ "Facchini", "Giulio", "" ], [ "Deneubourg", "Jean Louis", "" ] ]
Group living animals form aggregations and flocks that remain cohesive in spite of internal movements of individuals. This is possible because individual group members repeatedly adjust their position and motion in response to the position and motion of other group members. Here we develop a theoretical approach to address the question, what general features -- if any -- underlie the interaction rules that mediate group stability in animals of all species? We do so by considering how the spatial organisation of a group would change in the complete absence of interactions. Without interactions, a group would disperse in a way that can be easily characterised in terms of Fick's diffusion equations. We can hence address the inverse theoretical problem of finding the individual-level interaction responses that are required to counterbalance diffusion and to preserve group stability. We show that an individual-level response to neighbour densities in the form of Weber's law (a 'universal' law describing the functioning of the sensory systems of animals of all species) results in an 'anti-diffusion' term at the group level. On short time scales, this anti-diffusion restores the initial group configuration in a way which is reminiscent of methods for image deblurring in image processing. We also show that any non-homogeneous, spatial density distribution can be preserved over time if individual movement patterns have the form of a Weber's law response. Weber's law describes the fundamental functioning of perceptual systems. Our study indicates that it is also a necessary -- but not sufficient -- feature of collective interactions in stable animal groups.
2007.07734
Agnieszka Pregowska
Agnieszka Pregowska
Signal Fluctuations and the Information Transmission Rates in Binary Communication Channels
11 pages, 3 figures
null
10.3390/e23010092
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the nervous system, information is conveyed by sequences of action potentials (spike trains). As MacKay and McCulloch proposed, spike trains can be represented as bit sequences coming from Information Sources. Previously, we studied relations between the Information Transmission Rates (ITR) carried by the spikes, their correlations, and frequencies. Here, we concentrate on the problem of how spike fluctuations affect the ITR. The Information Theory Method developed by Shannon is applied. Information Sources are modeled as stationary stochastic processes; we assume such sources to be two-state Markov processes. As a measure of spike-train fluctuations, we consider the Standard Deviation SD, which in fact measures the average fluctuation of spikes around the average spike frequency. We found that the character of the relation between ITR and signal fluctuations strongly depends on the parameter s, which is the sum of the transition probabilities from the no-spike state to the spike state and vice versa. It turned out that for smaller s (s<1) the quotient ITR/SD has a maximum and can tend to zero depending on the transition probabilities, while for s large enough (1<s) ITR/SD is bounded away from 0 for each s. Similar behavior was observed when we replaced the Shannon entropy terms in the Markov entropy formula by their polynomial approximations. We also show that the quotient of ITR by the Variance behaves in a completely different way. We show that for a large transition parameter s the Information Transmission Rate divided by SD will never decrease to 0. Specifically, for 1<s<1.7 the ITR will always be above the level of fluctuations, i.e. SD<ITR, independently of the transition probabilities which form this s. We conclude that in a more noisy environment, to get appropriate reliability and efficiency of transmission, Information Sources with a higher tendency of transition from the no-spike state to the spike state and vice versa should be applied.
[ { "created": "Wed, 15 Jul 2020 15:07:04 GMT", "version": "v1" }, { "created": "Wed, 26 Aug 2020 14:11:19 GMT", "version": "v2" } ]
2021-02-03
[ [ "Pregowska", "Agnieszka", "" ] ]
In the nervous system, information is conveyed by sequences of action potentials (spike trains). As MacKay and McCulloch proposed, spike trains can be represented as bit sequences coming from Information Sources. Previously, we studied relations between the Information Transmission Rates (ITR) carried by the spikes, their correlations, and frequencies. Here, we concentrate on the problem of how spike fluctuations affect the ITR. The Information Theory Method developed by Shannon is applied. Information Sources are modeled as stationary stochastic processes; we assume such sources to be two-state Markov processes. As a measure of spike-train fluctuations, we consider the Standard Deviation SD, which in fact measures the average fluctuation of spikes around the average spike frequency. We found that the character of the relation between ITR and signal fluctuations strongly depends on the parameter s, which is the sum of the transition probabilities from the no-spike state to the spike state and vice versa. It turned out that for smaller s (s<1) the quotient ITR/SD has a maximum and can tend to zero depending on the transition probabilities, while for s large enough (1<s) ITR/SD is bounded away from 0 for each s. Similar behavior was observed when we replaced the Shannon entropy terms in the Markov entropy formula by their polynomial approximations. We also show that the quotient of ITR by the Variance behaves in a completely different way. We show that for a large transition parameter s the Information Transmission Rate divided by SD will never decrease to 0. Specifically, for 1<s<1.7 the ITR will always be above the level of fluctuations, i.e. SD<ITR, independently of the transition probabilities which form this s. We conclude that in a more noisy environment, to get appropriate reliability and efficiency of transmission, Information Sources with a higher tendency of transition from the no-spike state to the spike state and vice versa should be applied.
2105.09411
Susmita Sadhu
Susmita Sadhu
Analysis of long-term transients and detection of early warning signs of major population changes in a two-timescale ecosystem
null
null
10.1007/s00285-022-01805-4
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
Identifying early warning signs of sudden population changes and mechanisms leading to regime shifts are highly desirable in population biology. In this paper, a two-trophic ecosystem comprising two species of predators, competing for their common prey, with explicit interference competition is considered. With proper rescaling, the model is portrayed as a singularly perturbed system with fast prey dynamics and slow dynamics of the predators. In a parameter regime near singular Hopf bifurcation, chaotic mixed-mode oscillations (MMOs), featuring concatenation of small and large amplitude oscillations are observed as long-lasting transients before the system approaches its asymptotic state. To analyze the dynamical cause that initiates a large amplitude oscillation in an MMO orbit, the model is reduced to a suitable normal form near the singular-Hopf point. The normal form possesses a separatrix surface that separates two different types of oscillations. A large amplitude oscillation is initiated if a trajectory moves from the "inner" to the "outer side" of this surface. A set of conditions on the normal form variables are obtained to determine whether a trajectory would exhibit another cycle of MMO dynamics before experiencing a regime shift (i.e. approaching its asymptotic state). These conditions can serve as early warning signs for a sudden population shift in an ecosystem.
[ { "created": "Wed, 19 May 2021 21:45:13 GMT", "version": "v1" } ]
2022-09-23
[ [ "Sadhu", "Susmita", "" ] ]
Identifying early warning signs of sudden population changes and mechanisms leading to regime shifts are highly desirable in population biology. In this paper, a two-trophic ecosystem comprising two species of predators, competing for their common prey, with explicit interference competition is considered. With proper rescaling, the model is portrayed as a singularly perturbed system with fast prey dynamics and slow dynamics of the predators. In a parameter regime near singular Hopf bifurcation, chaotic mixed-mode oscillations (MMOs), featuring concatenation of small and large amplitude oscillations are observed as long-lasting transients before the system approaches its asymptotic state. To analyze the dynamical cause that initiates a large amplitude oscillation in an MMO orbit, the model is reduced to a suitable normal form near the singular-Hopf point. The normal form possesses a separatrix surface that separates two different types of oscillations. A large amplitude oscillation is initiated if a trajectory moves from the "inner" to the "outer side" of this surface. A set of conditions on the normal form variables are obtained to determine whether a trajectory would exhibit another cycle of MMO dynamics before experiencing a regime shift (i.e. approaching its asymptotic state). These conditions can serve as early warning signs for a sudden population shift in an ecosystem.
q-bio/0501039
Riccardo Boscolo
Riccardo Boscolo, Behnam A. Rezaei, P. Oscar Boykin, and Vwani P. Roychowdhury
Functionality Encoded In Topology? Discovering Macroscopic Regulatory Modules from Large-Scale Protein-DNA Interaction Networks
11 pages, 3 figures
null
null
null
q-bio.MN cond-mat.dis-nn
null
The promise of discovering a functional blueprint of a cellular system from large-scale and high-throughput sequence and experimental data is predicated on the belief that the same top-down investigative approach that proved successful in other biological problems (e.g. DNA sequencing) will be as effective when it comes to inferring more complex intracellular processes. The results in this paper address this fundamental issue in the specific context of transcription regulatory networks. Although simple recurring regulatory motifs have been identified in the past, due to the size and complexity of the connectivity structure, the subdivision of such networks into larger, and possibly inter-connected, regulatory modules is still under investigation. Specifically, it is unclear whether functionally well-characterized transcriptional sub-networks can be identified by solely analyzing the connectivity structure of the overall network topology. In this paper, we show that transcriptional regulatory networks can be systematically partitioned into communities whose members are consistently functionally related. We applied the partitioning method to the transcriptional regulatory networks of the yeast Saccharomyces cerevisiae; the resulting communities of gene and transcriptional regulators can be associated to distinct functional units, such as amino acid metabolism, cell cycle regulation, protein biosynthesis and localization, DNA replication and maintenance, lipid catabolism, stress response and so on. Moreover, the observation of inter-community connectivity patterns provides a valuable tool for elucidating the inter-dependency between the discovered regulatory modules.
[ { "created": "Sun, 30 Jan 2005 19:09:38 GMT", "version": "v1" }, { "created": "Tue, 1 Mar 2005 06:46:23 GMT", "version": "v2" } ]
2007-05-23
[ [ "Boscolo", "Riccardo", "" ], [ "Rezaei", "Behnam A.", "" ], [ "Boykin", "P. Oscar", "" ], [ "Roychowdhury", "Vwani P.", "" ] ]
The promise of discovering a functional blueprint of a cellular system from large-scale and high-throughput sequence and experimental data is predicated on the belief that the same top-down investigative approach that proved successful in other biological problems (e.g. DNA sequencing) will be as effective when it comes to inferring more complex intracellular processes. The results in this paper address this fundamental issue in the specific context of transcription regulatory networks. Although simple recurring regulatory motifs have been identified in the past, due to the size and complexity of the connectivity structure, the subdivision of such networks into larger, and possibly inter-connected, regulatory modules is still under investigation. Specifically, it is unclear whether functionally well-characterized transcriptional sub-networks can be identified by solely analyzing the connectivity structure of the overall network topology. In this paper, we show that transcriptional regulatory networks can be systematically partitioned into communities whose members are consistently functionally related. We applied the partitioning method to the transcriptional regulatory networks of the yeast Saccharomyces cerevisiae; the resulting communities of gene and transcriptional regulators can be associated to distinct functional units, such as amino acid metabolism, cell cycle regulation, protein biosynthesis and localization, DNA replication and maintenance, lipid catabolism, stress response and so on. Moreover, the observation of inter-community connectivity patterns provides a valuable tool for elucidating the inter-dependency between the discovered regulatory modules.
0709.0346
Akira Kinjo
Akira R. Kinjo and Sanzo Miyazawa
On the optimal contact potential of proteins
5 pages, text only
Chemical Physics Letters, Vol. 451 pp. 132-135 (2008)
10.1016/j.cplett.2007.12.005
null
q-bio.BM physics.bio-ph physics.chem-ph
null
We analytically derive the lower bound of the total conformational energy of a protein structure by assuming that the total conformational energy is well approximated by the sum of sequence-dependent pairwise contact energies. The condition for the native structure achieving the lower bound leads to the contact energy matrix that is a scalar multiple of the native contact matrix, i.e., the so-called Go potential. We also derive spectral relations between contact matrix and energy matrix, and approximations related to one-dimensional protein structures. Implications for protein structure prediction are discussed.
[ { "created": "Tue, 4 Sep 2007 07:31:28 GMT", "version": "v1" }, { "created": "Thu, 3 Jan 2008 00:17:18 GMT", "version": "v2" } ]
2008-01-03
[ [ "Kinjo", "Akira R.", "" ], [ "Miyazawa", "Sanzo", "" ] ]
We analytically derive the lower bound of the total conformational energy of a protein structure by assuming that the total conformational energy is well approximated by the sum of sequence-dependent pairwise contact energies. The condition for the native structure achieving the lower bound leads to the contact energy matrix that is a scalar multiple of the native contact matrix, i.e., the so-called Go potential. We also derive spectral relations between contact matrix and energy matrix, and approximations related to one-dimensional protein structures. Implications for protein structure prediction are discussed.
0810.3834
Alfredo Iorio
Alfredo Iorio, Siddhartha Sen
Scar-Driven Shape-Changes of Virus Capsids
13 pages, 5 figures
Cent. Eur. J. Biol. 3(4) (2008) 380-387
10.2478/s11535-008-0038-1
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose that certain patterns (scars) -- theoretically and numerically predicted to be formed by electrons arranged on a sphere to minimize the repulsive Coulomb potential (the Thomson problem) and experimentally found in spherical crystals formed by self-assembled polystyrene beads (an instance of the {\it generalized} Thomson problem) -- could be relevant to extend the classic Caspar and Klug construction for icosahedrally-shaped virus capsids. The main idea is that scars could be produced on the capsid at an intermediate stage of its evolution and the release of the bending energy present in scars into stretching energy could allow for shape-changes. The conjecture can be tested in experiments and/or in numerical simulations.
[ { "created": "Tue, 21 Oct 2008 13:55:49 GMT", "version": "v1" } ]
2008-10-22
[ [ "Iorio", "Alfredo", "" ], [ "Sen", "Siddhartha", "" ] ]
We propose that certain patterns (scars) -- theoretically and numerically predicted to be formed by electrons arranged on a sphere to minimize the repulsive Coulomb potential (the Thomson problem) and experimentally found in spherical crystals formed by self-assembled polystyrene beads (an instance of the {\it generalized} Thomson problem) -- could be relevant to extend the classic Caspar and Klug construction for icosahedrally-shaped virus capsids. The main idea is that scars could be produced on the capsid at an intermediate stage of its evolution and the release of the bending energy present in scars into stretching energy could allow for shape-changes. The conjecture can be tested in experiments and/or in numerical simulations.
1603.05605
Lina Meinecke
Lina Meinecke
Multiscale modeling of diffusion in a crowded environment
null
null
null
null
q-bio.SC math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a multiscale approach to model diffusion in a crowded environment and its effect on the reaction rates. Diffusion in biological systems is often modeled by a discrete space jump process in order to capture the inherent noise of biological systems, which becomes important in the low copy number regime. To model diffusion in the crowded cell environment efficiently, we compute the jump rates in this mesoscopic model from local first exit times, which account for the microscopic positions of the crowding molecules, while the diffusing molecules jump on a coarser Cartesian grid. We then extract a macroscopic description from the resulting jump rates, where the excluded volume effect is modeled by a diffusion equation with space dependent diffusion coefficient. The crowding molecules can be of arbitrary shape and size and numerical experiments demonstrate that those factors together with the size of the diffusing molecule play a crucial role on the magnitude of the decrease in diffusive motion. When correcting the reaction rates for the altered diffusion we can show that molecular crowding either enhances or inhibits chemical reactions depending on local fluctuations of the obstacle density.
[ { "created": "Sat, 12 Mar 2016 10:06:54 GMT", "version": "v1" } ]
2016-03-18
[ [ "Meinecke", "Lina", "" ] ]
We present a multiscale approach to model diffusion in a crowded environment and its effect on the reaction rates. Diffusion in biological systems is often modeled by a discrete space jump process in order to capture the inherent noise of biological systems, which becomes important in the low copy number regime. To model diffusion in the crowded cell environment efficiently, we compute the jump rates in this mesoscopic model from local first exit times, which account for the microscopic positions of the crowding molecules, while the diffusing molecules jump on a coarser Cartesian grid. We then extract a macroscopic description from the resulting jump rates, where the excluded volume effect is modeled by a diffusion equation with space dependent diffusion coefficient. The crowding molecules can be of arbitrary shape and size and numerical experiments demonstrate that those factors together with the size of the diffusing molecule play a crucial role on the magnitude of the decrease in diffusive motion. When correcting the reaction rates for the altered diffusion we can show that molecular crowding either enhances or inhibits chemical reactions depending on local fluctuations of the obstacle density.
2003.06079
Heng Li
Heng Li, Xiaowen Feng, Chong Chu
The design and construction of reference pangenome graphs
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in sequencing technologies enable the assembly of individual genomes to reference quality. How to integrate multiple genomes from the same species and make the integrated representation accessible to biologists remains an open challenge. Here we propose a graph-based data model and associated formats to represent multiple genomes while preserving the coordinates of the linear reference genome. We implemented our ideas in the minigraph toolkit and demonstrate that we can efficiently construct a pangenome graph and compactly encode tens of thousands of structural variants missing from the current reference genome.
[ { "created": "Fri, 13 Mar 2020 01:26:00 GMT", "version": "v1" } ]
2020-03-16
[ [ "Li", "Heng", "" ], [ "Feng", "Xiaowen", "" ], [ "Chu", "Chong", "" ] ]
Recent advances in sequencing technologies enable the assembly of individual genomes to reference quality. How to integrate multiple genomes from the same species and make the integrated representation accessible to biologists remains an open challenge. Here we propose a graph-based data model and associated formats to represent multiple genomes while preserving the coordinates of the linear reference genome. We implemented our ideas in the minigraph toolkit and demonstrate that we can efficiently construct a pangenome graph and compactly encode tens of thousands of structural variants missing from the current reference genome.
2310.19198
Xiong Xiong
Xiong Xiong, Li Su, Jinguo Huang, Guixia Kang
Enhancing Motor Imagery Decoding in Brain Computer Interfaces using Riemann Tangent Space Mapping and Cross Frequency Coupling
22 pages, 7 figures
null
null
null
q-bio.QM cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: Motor Imagery (MI) serves as a crucial experimental paradigm within the realm of Brain Computer Interfaces (BCIs), aiming to decode motor intentions from electroencephalogram (EEG) signals. Method: Drawing inspiration from Riemannian geometry and Cross-Frequency Coupling (CFC), this paper introduces a novel approach termed Riemann Tangent Space Mapping using Dichotomous Filter Bank with Convolutional Neural Network (DFBRTS) to enhance the representation quality and decoding capability pertaining to MI features. DFBRTS first initiates the process by meticulously filtering EEG signals through a Dichotomous Filter Bank, structured in the fashion of a complete binary tree. Subsequently, it employs Riemann Tangent Space Mapping to extract salient EEG signal features within each sub-band. Finally, a lightweight convolutional neural network is employed for further feature extraction and classification, operating under the joint supervision of cross-entropy and center loss. To validate the efficacy, extensive experiments were conducted using DFBRTS on two well-established benchmark datasets: the BCI competition IV 2a (BCIC-IV-2a) dataset and the OpenBMI dataset. The performance of DFBRTS was benchmarked against several state-of-the-art MI decoding methods, alongside other Riemannian geometry-based MI decoding approaches. Results: DFBRTS significantly outperforms other MI decoding algorithms on both datasets, achieving a remarkable classification accuracy of 78.16% for four-class and 71.58% for two-class hold-out classification, as compared to the existing benchmarks.
[ { "created": "Sun, 29 Oct 2023 23:37:47 GMT", "version": "v1" } ]
2023-10-31
[ [ "Xiong", "Xiong", "" ], [ "Su", "Li", "" ], [ "Huang", "Jinguo", "" ], [ "Kang", "Guixia", "" ] ]
Objective: Motor Imagery (MI) serves as a crucial experimental paradigm within the realm of Brain Computer Interfaces (BCIs), aiming to decode motor intentions from electroencephalogram (EEG) signals. Method: Drawing inspiration from Riemannian geometry and Cross-Frequency Coupling (CFC), this paper introduces a novel approach termed Riemann Tangent Space Mapping using Dichotomous Filter Bank with Convolutional Neural Network (DFBRTS) to enhance the representation quality and decoding capability pertaining to MI features. DFBRTS first initiates the process by meticulously filtering EEG signals through a Dichotomous Filter Bank, structured in the fashion of a complete binary tree. Subsequently, it employs Riemann Tangent Space Mapping to extract salient EEG signal features within each sub-band. Finally, a lightweight convolutional neural network is employed for further feature extraction and classification, operating under the joint supervision of cross-entropy and center loss. To validate the efficacy, extensive experiments were conducted using DFBRTS on two well-established benchmark datasets: the BCI competition IV 2a (BCIC-IV-2a) dataset and the OpenBMI dataset. The performance of DFBRTS was benchmarked against several state-of-the-art MI decoding methods, alongside other Riemannian geometry-based MI decoding approaches. Results: DFBRTS significantly outperforms other MI decoding algorithms on both datasets, achieving a remarkable classification accuracy of 78.16% for four-class and 71.58% for two-class hold-out classification, as compared to the existing benchmarks.
1410.8391
Yoichiro Mori
Yoichiro Mori
A Multidomain Model for Ionic Electrodiffusion and Osmosis with an Application to Cortical Spreading Depression
submitted for publication, Aug. 28, 2014
null
null
null
q-bio.TO cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ionic electrodiffusion and osmotic water flow are central processes in many physiological systems. We formulate a system of partial differential equations that governs ion movement and water flow in biological tissue. A salient feature of this model is that it satisfies a free energy identity, ensuring the thermodynamic consistency of the model. A numerical scheme is developed for the model in one spatial dimension and is applied to a model of cortical spreading depression, a propagating breakdown of ionic and cell volume homeostasis in the brain.
[ { "created": "Thu, 30 Oct 2014 15:01:19 GMT", "version": "v1" } ]
2014-10-31
[ [ "Mori", "Yoichiro", "" ] ]
Ionic electrodiffusion and osmotic water flow are central processes in many physiological systems. We formulate a system of partial differential equations that governs ion movement and water flow in biological tissue. A salient feature of this model is that it satisfies a free energy identity, ensuring the thermodynamic consistency of the model. A numerical scheme is developed for the model in one spatial dimension and is applied to a model of cortical spreading depression, a propagating breakdown of ionic and cell volume homeostasis in the brain.
0811.2720
Nicola Pizzolato
Nicola Pizzolato, Davide Valenti, Dominique Persano Adorno, Bernardo Spagnolo
Evolutionary dynamics of imatinib-treated leukemic cells by stochastic approach
Submitted to Central European Journal of Physics
null
10.2478/s11534-009-0020-1
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolutionary dynamics of a system of cancerous cells in a model of chronic myeloid leukemia (CML) is investigated by a statistical approach. Cancer progression is explored by applying a Monte Carlo method to simulate the stochastic behavior of cell reproduction and death in a population of blood cells which can experience genetic mutations. In CML, front-line therapy is represented by the tyrosine kinase inhibitor imatinib, which strongly affects the reproduction of leukemic cells only. In this work, we analyze the effects of a targeted therapy on the evolutionary dynamics of normal, first-mutant and cancerous cell populations. Several scenarios of the evolutionary dynamics of imatinib-treated leukemic cells are described as a consequence of the efficacy of the different modeled therapies. We show how the patient response to the therapy changes when a high value of the mutation rate from healthy to cancerous cells is present. Our results are in agreement with clinical observations. Unfortunately, development of resistance to imatinib is observed in a proportion of patients, whose blood cells are characterized by an increasing number of genetic alterations. We find that the occurrence of resistance to the therapy can be related to a progressive increase of deleterious mutations.
[ { "created": "Mon, 17 Nov 2008 15:15:48 GMT", "version": "v1" } ]
2009-11-13
[ [ "Pizzolato", "Nicola", "" ], [ "Valenti", "Davide", "" ], [ "Adorno", "Dominique Persano", "" ], [ "Spagnolo", "Bernardo", "" ] ]
The evolutionary dynamics of a system of cancerous cells in a model of chronic myeloid leukemia (CML) is investigated by a statistical approach. Cancer progression is explored by applying a Monte Carlo method to simulate the stochastic behavior of cell reproduction and death in a population of blood cells which can experience genetic mutations. In CML, front-line therapy is represented by the tyrosine kinase inhibitor imatinib, which strongly affects the reproduction of leukemic cells only. In this work, we analyze the effects of a targeted therapy on the evolutionary dynamics of normal, first-mutant and cancerous cell populations. Several scenarios of the evolutionary dynamics of imatinib-treated leukemic cells are described as a consequence of the efficacy of the different modeled therapies. We show how the patient response to the therapy changes when a high value of the mutation rate from healthy to cancerous cells is present. Our results are in agreement with clinical observations. Unfortunately, development of resistance to imatinib is observed in a proportion of patients, whose blood cells are characterized by an increasing number of genetic alterations. We find that the occurrence of resistance to the therapy can be related to a progressive increase of deleterious mutations.
1505.04295
Yue Wang
Yi Fu, Jun Ruan, Guoqiang Yu, Douglas A. Levine, Niya Wang, Ie-Ming Shih, Zhen Zhang, Robert Clarke, and Yue Wang
BACOM2: a Java tool for detecting normal cell contamination of copy number in heterogeneous tumor
9 pages, 6 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a cross-platform, open-source Java application (BACOM2) with a graphical user interface (GUI); users can also use an XML file to set the algorithm model parameters, file paths, and the dataset of paired samples. BACOM2 implements the new entire pipeline of copy number change analysis for heterogeneous cancer tissues, including extraction of raw copy number signals from CEL files of paired samples, attenuation correction, identification of balanced AB-genotype loci, copy number detection and segmentation, global baseline calculation and absolute normalization, differentiation of deletion types, estimation of the normal tissue fraction, and correction of normal tissue contamination. BACOM2 focuses on common tools for data preparation and absolute normalization in copy number analysis of heterogeneous cancer tissues. The software provides an additional choice for scientists who require a user-friendly, high-speed, cross-platform computing environment for large copy number data analysis.
[ { "created": "Sat, 16 May 2015 17:12:25 GMT", "version": "v1" } ]
2015-05-19
[ [ "Fu", "Yi", "" ], [ "Ruan", "Jun", "" ], [ "Yu", "Guoqiang", "" ], [ "Levine", "Douglas A.", "" ], [ "Wang", "Niya", "" ], [ "Shih", "Ie-Ming", "" ], [ "Zhang", "Zhen", "" ], [ "Clarke", "Robert...
We develop a cross-platform, open-source Java application (BACOM2) with a graphical user interface (GUI); users can also use an XML file to set the algorithm model parameters, file paths, and the dataset of paired samples. BACOM2 implements the new entire pipeline of copy number change analysis for heterogeneous cancer tissues, including extraction of raw copy number signals from CEL files of paired samples, attenuation correction, identification of balanced AB-genotype loci, copy number detection and segmentation, global baseline calculation and absolute normalization, differentiation of deletion types, estimation of the normal tissue fraction, and correction of normal tissue contamination. BACOM2 focuses on common tools for data preparation and absolute normalization in copy number analysis of heterogeneous cancer tissues. The software provides an additional choice for scientists who require a user-friendly, high-speed, cross-platform computing environment for large copy number data analysis.
2404.15776
Marek Mutwil
Eugene Koh, Rohan Shawn Sunil, Hilbert Yuen In Lam, Marek Mutwil
Harnessing Big Data and Artificial Intelligence to Study Plant Stress
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Life finds a way. For sessile organisms like plants, the need to adapt to changes in the environment is even more poignant. For humanity, the need to develop crops that can grow in diverse environments and feed our growing population is an existential one. The advent of the genomics era enabled the generation of high-throughput data and computational methods that serve as powerful hypothesis-generating tools to understand the genomic and gene functional basis of stress resilience. Today, the proliferation of artificial intelligence (AI) allows scientists to rapidly screen through high-throughput datasets to uncover elusive patterns and correlations, enabling us to create more performant models for prediction and hypothesis generation in plant biology. This review aims to provide an overview of the availability of large-scale data in plant stress research and discuss the application of AI tools on these large-scale datasets in a bid to develop more stress-resilient plants.
[ { "created": "Wed, 24 Apr 2024 09:55:17 GMT", "version": "v1" }, { "created": "Tue, 13 Aug 2024 22:48:29 GMT", "version": "v2" } ]
2024-08-15
[ [ "Koh", "Eugene", "" ], [ "Sunil", "Rohan Shawn", "" ], [ "Lam", "Hilbert Yuen In", "" ], [ "Mutwil", "Marek", "" ] ]
Life finds a way. For sessile organisms like plants, the need to adapt to changes in the environment is even more poignant. For humanity, the need to develop crops that can grow in diverse environments and feed our growing population is an existential one. The advent of the genomics era enabled the generation of high-throughput data and computational methods that serve as powerful hypothesis-generating tools to understand the genomic and gene functional basis of stress resilience. Today, the proliferation of artificial intelligence (AI) allows scientists to rapidly screen through high-throughput datasets to uncover elusive patterns and correlations, enabling us to create more performant models for prediction and hypothesis generation in plant biology. This review aims to provide an overview of the availability of large-scale data in plant stress research and discuss the application of AI tools on these large-scale datasets in a bid to develop more stress-resilient plants.
1812.11178
Pengfei Liu
Pengfei Liu
Drug cell line interaction prediction
null
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the phenotypic drug response on cancer cell lines plays a vital role in anti-cancer drug discovery and re-purposing. The Genomics of Drug Sensitivity in Cancer (GDSC) database provides open data for researchers in phenotypic screening to test their models and methods. Previously, most research in these areas starts from the fingerprints or features of drugs, instead of their structures. In this paper, we introduce a model for phenotypic screening, which is called twin Convolutional Neural Network for drugs in SMILES format (tCNNS). tCNNS is comprised of CNN input channels for drugs in SMILES format and cancer cell lines respectively. Our model achieves $0.84$ for the coefficient of determination ($R^2$) and $0.92$ for Pearson correlation ($R_p$), which are significantly better than previous works\cite{ammad2014integrative,haider2015copula,menden2013machine}. Besides these statistical metrics, tCNNS also provides some insights into phenotypic screening.
[ { "created": "Fri, 28 Dec 2018 09:14:16 GMT", "version": "v1" } ]
2019-01-01
[ [ "Liu", "Pengfei", "" ] ]
Understanding the phenotypic drug response on cancer cell lines plays a vital role in anti-cancer drug discovery and re-purposing. The Genomics of Drug Sensitivity in Cancer (GDSC) database provides open data for researchers in phenotypic screening to test their models and methods. Previously, most research in these areas starts from the fingerprints or features of drugs, instead of their structures. In this paper, we introduce a model for phenotypic screening, which is called twin Convolutional Neural Network for drugs in SMILES format (tCNNS). tCNNS is comprised of CNN input channels for drugs in SMILES format and cancer cell lines respectively. Our model achieves $0.84$ for the coefficient of determination ($R^2$) and $0.92$ for Pearson correlation ($R_p$), which are significantly better than previous works\cite{ammad2014integrative,haider2015copula,menden2013machine}. Besides these statistical metrics, tCNNS also provides some insights into phenotypic screening.
1612.03692
Daniel Hoffmann
Mohammadkarim Saeedghalati, Farnoush Farahpour, Bettina Budeus, Anja Lange, Astrid M. Westendorf, Marc Seifert, Ralf K\"uppers, Daniel Hoffmann
Quantitative Comparison of Abundance Structures of Generalized Communities: From B-Cell Receptor Repertoires to Microbiomes
null
null
10.1371/journal.pcbi.1005362
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The \emph{community}, the assemblage of organisms co-existing in a given space and time, has the potential to become one of the unifying concepts of biology, especially with the advent of high-throughput sequencing experiments that reveal genetic diversity exhaustively. In this spirit we show that a tool from community ecology, the Rank Abundance Distribution (RAD), can be turned by the new MaxRank normalization method into a generic, expressive descriptor for quantitative comparison of communities in many areas of biology. To illustrate the versatility of the method, we analyze RADs from various \emph{generalized communities}, i.e.\ assemblages of genetically diverse cells or organisms, including human B cells, gut microbiomes under antibiotic treatment and of different ages and countries of origin, and other human and environmental microbial communities. We show that normalized RADs enable novel quantitative approaches that help to understand structures and dynamics of complex generalized communities.
[ { "created": "Mon, 12 Dec 2016 14:12:20 GMT", "version": "v1" } ]
2017-05-31
[ [ "Saeedghalati", "Mohammadkarim", "" ], [ "Farahpour", "Farnoush", "" ], [ "Budeus", "Bettina", "" ], [ "Lange", "Anja", "" ], [ "Westendorf", "Astrid M.", "" ], [ "Seifert", "Marc", "" ], [ "Küppers", "Ralf", ...
The \emph{community}, the assemblage of organisms co-existing in a given space and time, has the potential to become one of the unifying concepts of biology, especially with the advent of high-throughput sequencing experiments that reveal genetic diversity exhaustively. In this spirit we show that a tool from community ecology, the Rank Abundance Distribution (RAD), can be turned by the new MaxRank normalization method into a generic, expressive descriptor for quantitative comparison of communities in many areas of biology. To illustrate the versatility of the method, we analyze RADs from various \emph{generalized communities}, i.e.\ assemblages of genetically diverse cells or organisms, including human B cells, gut microbiomes under antibiotic treatment and of different ages and countries of origin, and other human and environmental microbial communities. We show that normalized RADs enable novel quantitative approaches that help to understand structures and dynamics of complex generalized communities.
1609.06742
Peter Ashcroft
Peter Ashcroft, Cassandra E. R. Smith, Matthew Garrod, Tobias Galla
Effects of population growth on the success of invading mutants
12 pages, 10 figures
J. Theor. Biol. 420, 232--240 (2017)
10.1016/j.jtbi.2017.03.014
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding if and how mutants reach fixation in populations is an important question in evolutionary biology. We study the impact that population growth has on the success of mutants. To systematically understand the effects of growth we decouple competition from reproduction; competition follows a birth--death process and is governed by an evolutionary game, while growth is determined by an externally controlled branching rate. In stochastic simulations we find non-monotonic behaviour of the fixation probability of mutants as the speed of growth is varied; the right amount of growth can lead to a higher success rate. These results are observed in both coordination and coexistence game scenarios, and we find that the 'one-third law' for coordination games can break down in the presence of growth. We also propose a simplified description in terms of stochastic differential equations to approximate the individual-based model.
[ { "created": "Wed, 21 Sep 2016 20:40:21 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2017 16:27:49 GMT", "version": "v2" } ]
2017-03-29
[ [ "Ashcroft", "Peter", "" ], [ "Smith", "Cassandra E. R.", "" ], [ "Garrod", "Matthew", "" ], [ "Galla", "Tobias", "" ] ]
Understanding if and how mutants reach fixation in populations is an important question in evolutionary biology. We study the impact that population growth has on the success of mutants. To systematically understand the effects of growth we decouple competition from reproduction; competition follows a birth--death process and is governed by an evolutionary game, while growth is determined by an externally controlled branching rate. In stochastic simulations we find non-monotonic behaviour of the fixation probability of mutants as the speed of growth is varied; the right amount of growth can lead to a higher success rate. These results are observed in both coordination and coexistence game scenarios, and we find that the 'one-third law' for coordination games can break down in the presence of growth. We also propose a simplified description in terms of stochastic differential equations to approximate the individual-based model.
0903.4664
Tobias H. Jaeger
T. Jaeger
Neuronal Coding of pacemaker neurons - A random dynamical systems approach
12 pages, 3 figures
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The behaviour of neurons under the influence of periodic external input has been modelled very successfully by circle maps. The aim of this note is to extend certain aspects of this analysis to a much more general class of forcing processes. We apply results on the fibred rotation number of randomly forced circle maps to show the uniqueness of the asymptotic firing frequency of ergodically forced pacemaker neurons. The details of the analysis are carried out for the forced leaky integrate-and-fire model, but the results should also remain valid for a large class of further models.
[ { "created": "Thu, 26 Mar 2009 18:23:27 GMT", "version": "v1" } ]
2009-03-27
[ [ "Jaeger", "T.", "" ] ]
The behaviour of neurons under the influence of periodic external input has been modelled very successfully by circle maps. The aim of this note is to extend certain aspects of this analysis to a much more general class of forcing processes. We apply results on the fibred rotation number of randomly forced circle maps to show the uniqueness of the asymptotic firing frequency of ergodically forced pacemaker neurons. The details of the analysis are carried out for the forced leaky integrate-and-fire model, but the results should also remain valid for a large class of further models.
q-bio/0603029
Karen Luz Burgoa K. Luz-Burgoa
K. Luz-Burgoa, S. Moss de Oliveira, Veit Schw\"ammle and J. S. S\'a Martins
Thermodynamic behavior of a phase transition in a model for sympatric speciation
4 pages and 10 figures
null
10.1103/PhysRevE.74.021910
null
q-bio.PE q-bio.OT
null
We investigate the macroscopic effects of the ingredients that drive the origin of species through sympatric speciation. In our model, sympatric speciation is obtained as we tune up the strength of competition between individuals with different phenotypes. As a function of this control parameter, we can characterize, through the behavior of a macroscopic order parameter, a phase transition from a non-speciation to a speciation state of the system. The behavior of the first derivative of the order parameter with respect to the control parameter is consistent with a phase transition and exhibits a sharp peak at the transition point. For different resource distributions, the transition point is shifted, an effect similar to pressure in a PVT system. The inverse of the parameter related to sexual selection strength behaves like an external field in the system and, as such, is also a control parameter. The macroscopic effects of the biological parameters used in our model thus reveal fingerprints typical of thermodynamic quantities in a phase transition of an equilibrium physical system.
[ { "created": "Thu, 23 Mar 2006 17:54:09 GMT", "version": "v1" } ]
2009-11-13
[ [ "Luz-Burgoa", "K.", "" ], [ "de Oliveira", "S. Moss", "" ], [ "Schwämmle", "Veit", "" ], [ "Martins", "J. S. Sá", "" ] ]
We investigate the macroscopic effects of the ingredients that drive the origin of species through sympatric speciation. In our model, sympatric speciation is obtained as we tune up the strength of competition between individuals with different phenotypes. As a function of this control parameter, we can characterize, through the behavior of a macroscopic order parameter, a phase transition from a non-speciation to a speciation state of the system. The behavior of the first derivative of the order parameter with respect to the control parameter is consistent with a phase transition and exhibits a sharp peak at the transition point. For different resource distributions, the transition point is shifted, an effect similar to pressure in a PVT system. The inverse of the parameter related to sexual selection strength behaves like an external field in the system and, as such, is also a control parameter. The macroscopic effects of the biological parameters used in our model thus reveal fingerprints typical of thermodynamic quantities in a phase transition of an equilibrium physical system.
1309.7990
Bradly Alicea
Bradly Alicea
The Emergence of Animal Social Complexity: theoretical and biobehavioral evidence
22 pages, 4 figures. Working paper
null
null
null
q-bio.PE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper will introduce a theory of emergent animal social complexity using various results from computational models and empirical results. These results will be organized into a vertical model of social complexity. This will support the perspective that social complexity is in essence an emergent phenomenon while helping to answer two interrelated questions. The first of these involves how behavior is integrated at units of analysis larger than the individual organism. The second involves placing aggregate social events into the context of processes occurring within individual organisms over time (e.g. genomic and physiological processes). By using a complex systems perspective, five principles of social complexity can be identified. These principles suggest that lower-level mechanisms give rise to high-level mechanisms, ultimately resulting in metastable networks of social relations. These network structures then constrain lower-level phenomena ranging from transient, collective social groups to physiological regulatory mechanisms within individual organisms. In conclusion, the broader implications and drawbacks of applying the theory to a diversity of natural populations will be discussed.
[ { "created": "Mon, 30 Sep 2013 19:55:35 GMT", "version": "v1" }, { "created": "Thu, 17 Oct 2013 15:13:31 GMT", "version": "v2" } ]
2013-10-18
[ [ "Alicea", "Bradly", "" ] ]
This paper will introduce a theory of emergent animal social complexity using various results from computational models and empirical results. These results will be organized into a vertical model of social complexity. This will support the perspective that social complexity is in essence an emergent phenomenon while helping to answer two interrelated questions. The first of these involves how behavior is integrated at units of analysis larger than the individual organism. The second involves placing aggregate social events into the context of processes occurring within individual organisms over time (e.g. genomic and physiological processes). By using a complex systems perspective, five principles of social complexity can be identified. These principles suggest that lower-level mechanisms give rise to high-level mechanisms, ultimately resulting in metastable networks of social relations. These network structures then constrain lower-level phenomena ranging from transient, collective social groups to physiological regulatory mechanisms within individual organisms. In conclusion, the broader implications and drawbacks of applying the theory to a diversity of natural populations will be discussed.
2012.12968
Teddy Cai
Teddy X. Cai, Nathan H. Williamson, Velencia J. Witherspoon, Rea Ravin, and Peter J. Basser
A single-shot measurement of time-dependent diffusion over sub-millisecond timescales using static field gradient NMR
7 pages, 6 figures + Supplementary Material
Journal of Chemical Physics, Vol. 154, Iss. 11, 2021, Pages 111105
10.1063/5.0041354
null
q-bio.QM cond-mat.soft
http://creativecommons.org/licenses/by/4.0/
Time-dependent diffusion behavior is probed over sub-millisecond timescales in a single shot using an NMR static gradient, time-incremented echo train acquisition (SG-TIETA) framework. The method extends the Carr-Purcell-Meiboom-Gill (CPMG) cycle under a static field gradient by discretely incrementing the $\pi$-pulse spacings to simultaneously avoid off-resonance effects and probe a range of timescales ($50 - 500$ microseconds). Pulse spacings are optimized based on a derived ruleset. The remaining effects of pulse inaccuracy are examined and found to be consistent across pure liquids of different diffusivities: water, decane, and octanol-1. A pulse accuracy correction is developed. Instantaneous diffusivity, $D_{\mathrm{inst}}(t)$, curves (i.e., half of the time derivative of the mean-squared displacement in the gradient direction), are recovered from pulse accuracy-corrected SG-TIETA decays using a model-free, log-linear least squares inversion method validated by Monte Carlo simulations. A signal-averaged, 1-minute experiment is described. A flat $D_{\mathrm{inst}}(t)$ is measured on pure dodecamethylcyclohexasiloxane whereas decreasing $D_{\mathrm{inst}}(t)$ are measured on yeast suspensions, consistent with the expected short-time $D_{\mathrm{inst}}(t)$ behavior for confining microstructural barriers on the order of microns.
[ { "created": "Wed, 23 Dec 2020 20:49:55 GMT", "version": "v1" }, { "created": "Mon, 28 Dec 2020 14:45:36 GMT", "version": "v2" }, { "created": "Wed, 3 Mar 2021 13:36:30 GMT", "version": "v3" } ]
2021-03-18
[ [ "Cai", "Teddy X.", "" ], [ "Williamson", "Nathan H.", "" ], [ "Witherspoon", "Velencia J.", "" ], [ "Ravin", "Rea", "" ], [ "Basser", "Peter J.", "" ] ]
Time-dependent diffusion behavior is probed over sub-millisecond timescales in a single shot using an NMR static gradient, time-incremented echo train acquisition (SG-TIETA) framework. The method extends the Carr-Purcell-Meiboom-Gill (CPMG) cycle under a static field gradient by discretely incrementing the $\pi$-pulse spacings to simultaneously avoid off-resonance effects and probe a range of timescales ($50 - 500$ microseconds). Pulse spacings are optimized based on a derived ruleset. The remaining effects of pulse inaccuracy are examined and found to be consistent across pure liquids of different diffusivities: water, decane, and octanol-1. A pulse accuracy correction is developed. Instantaneous diffusivity, $D_{\mathrm{inst}}(t)$, curves (i.e., half of the time derivative of the mean-squared displacement in the gradient direction), are recovered from pulse accuracy-corrected SG-TIETA decays using a model-free, log-linear least squares inversion method validated by Monte Carlo simulations. A signal-averaged, 1-minute experiment is described. A flat $D_{\mathrm{inst}}(t)$ is measured on pure dodecamethylcyclohexasiloxane whereas decreasing $D_{\mathrm{inst}}(t)$ are measured on yeast suspensions, consistent with the expected short-time $D_{\mathrm{inst}}(t)$ behavior for confining microstructural barriers on the order of microns.
2407.12789
Laurens Engwegen
Laurens Engwegen, Daan Brinks, Wendelin B\"ohmer
Generalisation to unseen topologies: Towards control of biological neural network activity
null
null
null
null
q-bio.NC cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Novel imaging and neurostimulation techniques open doors for advancements in closed-loop control of activity in biological neural networks. This would allow for applications in the investigation of activity propagation, and for diagnosis and treatment of pathological behaviour. Due to the partially observable characteristics of activity propagation, through networks in which edges cannot be observed, and the dynamic nature of neuronal systems, there is a need for adaptive, generalisable control. In this paper, we introduce an environment that procedurally generates neuronal networks with different topologies to investigate this generalisation problem. Additionally, an existing transformer-based architecture is adjusted to evaluate the generalisation performance of a deep RL agent in the presented partially observable environment. The agent demonstrates the capability to generalise control from a limited number of training networks to unseen test networks.
[ { "created": "Mon, 17 Jun 2024 13:53:39 GMT", "version": "v1" } ]
2024-07-19
[ [ "Engwegen", "Laurens", "" ], [ "Brinks", "Daan", "" ], [ "Böhmer", "Wendelin", "" ] ]
Novel imaging and neurostimulation techniques open doors for advancements in closed-loop control of activity in biological neural networks. This would allow for applications in the investigation of activity propagation, and for diagnosis and treatment of pathological behaviour. Due to the partially observable characteristics of activity propagation, through networks in which edges cannot be observed, and the dynamic nature of neuronal systems, there is a need for adaptive, generalisable control. In this paper, we introduce an environment that procedurally generates neuronal networks with different topologies to investigate this generalisation problem. Additionally, an existing transformer-based architecture is adjusted to evaluate the generalisation performance of a deep RL agent in the presented partially observable environment. The agent demonstrates the capability to generalise control from a limited number of training networks to unseen test networks.
1505.06223
German Mi\~no Dr
German A. Mi\~no-Galaz and Gonzalo Gutierrez
Hydrogen bonds and asymmetrical heat diffusion in a-Helices. A Computational Analysis
null
null
10.1016/j.cplett.2015.06.041
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we report the heat rectifying capability of a-helices. Using molecular dynamics simulations we show an increased thermal diffusivity in the C-Terminal to N-Terminal direction of propagation. The origin of this effect seems to be a function of the particular orientation of the hydrogen bonds stabilizing these a-helices. Our results may be relevant for the design of thermal rectification devices for materials science and lend support to the role of normal length hydrogen bonds in the asymmetrical energy flow in proteins.
[ { "created": "Fri, 22 May 2015 21:00:30 GMT", "version": "v1" } ]
2017-01-04
[ [ "Miño-Galaz", "German A.", "" ], [ "Gutierrez", "Gonzalo", "" ] ]
In this work, we report the heat rectifying capability of a-helices. Using molecular dynamics simulations we show an increased thermal diffusivity in the C-Terminal to N-Terminal direction of propagation. The origin of this effect seems to be a function of the particular orientation of the hydrogen bonds stabilizing these a-helices. Our results may be relevant for the design of thermal rectification devices for materials science and lend support to the role of normal length hydrogen bonds in the asymmetrical energy flow in proteins.
0804.1443
Gareth Hughes
Gareth Hughes
Notes on the mathematical basis of the UK Non-Native Organism Risk Assessment Scheme
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Some problems with the mathematical analysis on which the UK Non-Native Organism Risk Assessment Scheme is based are outlined.
[ { "created": "Wed, 9 Apr 2008 09:43:56 GMT", "version": "v1" }, { "created": "Wed, 2 Jul 2008 08:17:41 GMT", "version": "v2" } ]
2008-07-02
[ [ "Hughes", "Gareth", "" ] ]
Some problems with the mathematical analysis on which the UK Non-Native Organism Risk Assessment Scheme is based are outlined.
1710.07836
Mike Steel Prof.
Joan Carles Pons, Charles Semple, Mike Steel
Tree-based networks: characterisations, metrics, and support trees
20 pages, 6 figures
null
null
null
q-bio.PE cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks generalise phylogenetic trees and allow for the accurate representation of the evolutionary history of a set of present-day species whose past includes reticulate events such as hybridisation and lateral gene transfer. One way to obtain such a network is by starting with a (rooted) phylogenetic tree $T$, called a base tree, and adding arcs between arcs of $T$. The phylogenetic networks that can be obtained in this way are called tree-based networks and include the prominent classes of tree-child and reticulation-visible networks. Initially defined for binary phylogenetic networks, tree-based networks naturally extend to arbitrary phylogenetic networks. In this paper, we generalise recent tree-based characterisations and associated proximity measures for binary phylogenetic networks to arbitrary phylogenetic networks. These characterisations are in terms of matchings in bipartite graphs, path partitions, and antichains. Some of the generalisations are straightforward to establish using the original approach, while others require a very different approach. Furthermore, for an arbitrary tree-based network $N$, we characterise the support trees of $N$, that is, the tree-based embeddings of $N$. We use this characterisation to give an explicit formula for the number of support trees of $N$ when $N$ is binary. This formula is written in terms of the components of a bipartite graph.
[ { "created": "Sat, 21 Oct 2017 18:18:10 GMT", "version": "v1" }, { "created": "Tue, 4 Sep 2018 10:46:55 GMT", "version": "v2" } ]
2018-09-05
[ [ "Pons", "Joan Carles", "" ], [ "Semple", "Charles", "" ], [ "Steel", "Mike", "" ] ]
Phylogenetic networks generalise phylogenetic trees and allow for the accurate representation of the evolutionary history of a set of present-day species whose past includes reticulate events such as hybridisation and lateral gene transfer. One way to obtain such a network is by starting with a (rooted) phylogenetic tree $T$, called a base tree, and adding arcs between arcs of $T$. The phylogenetic networks that can be obtained in this way are called tree-based networks and include the prominent classes of tree-child and reticulation-visible networks. Initially defined for binary phylogenetic networks, tree-based networks naturally extend to arbitrary phylogenetic networks. In this paper, we generalise recent tree-based characterisations and associated proximity measures for binary phylogenetic networks to arbitrary phylogenetic networks. These characterisations are in terms of matchings in bipartite graphs, path partitions, and antichains. Some of the generalisations are straightforward to establish using the original approach, while others require a very different approach. Furthermore, for an arbitrary tree-based network $N$, we characterise the support trees of $N$, that is, the tree-based embeddings of $N$. We use this characterisation to give an explicit formula for the number of support trees of $N$ when $N$ is binary. This formula is written in terms of the components of a bipartite graph.
1507.07317
Alain Destexhe
Jean-Marie Gomes, Claude Bedard, Silvana Valtcheva, Matthew Nelson, Vitalia Khokhlova, Pierre Pouget, Laurent Venance, Thierry Bal and Alain Destexhe
Intracellular impedance measurements reveal non-ohmic properties of the extracellular medium around neurons
32 pages, 9 figures
Biophysical Journal 110: 234-246, 2016
10.1016/j.bpj.2015.11.019
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The electrical properties of the extracellular space around neurons are important for understanding the genesis of extracellular potentials, as well as for localizing neuronal activity from extracellular recordings. However, the exact nature of these extracellular properties is still uncertain. We introduce a method to measure the impedance of the tissue that preserves the intact cell-medium interface, using whole-cell patch-clamp recordings in vivo and in vitro. We find that neural tissue has marked non-ohmic and frequency-filtering properties, which are not consistent with a resistive (ohmic) medium, as often assumed. In contrast, traditional metal electrodes provide very different results, more consistent with a resistive medium. The amplitude and phase profiles of the measured impedance are consistent with the contribution of ionic diffusion. We also show that such frequency-filtering properties may have an important impact on the genesis of local field potentials, as well as on the cable properties of neurons. The present results show non-ohmic properties of the extracellular medium around neurons, and suggest that source estimation methods, as well as the cable properties of neurons, which all assume an ohmic extracellular medium, may need to be re-evaluated.
[ { "created": "Mon, 27 Jul 2015 07:14:25 GMT", "version": "v1" }, { "created": "Sat, 31 Oct 2015 11:14:24 GMT", "version": "v2" }, { "created": "Fri, 13 Nov 2015 22:07:33 GMT", "version": "v3" } ]
2017-01-03
[ [ "Gomes", "Jean-Marie", "" ], [ "Bedard", "Claude", "" ], [ "Valtcheva", "Silvana", "" ], [ "Nelson", "Matthew", "" ], [ "Khokhlova", "Vitalia", "" ], [ "Pouget", "Pierre", "" ], [ "Venance", "Laurent", "" ...
The electrical properties of the extracellular space around neurons are important for understanding the genesis of extracellular potentials, as well as for localizing neuronal activity from extracellular recordings. However, the exact nature of these extracellular properties is still uncertain. We introduce a method to measure the impedance of the tissue that preserves the intact cell-medium interface, using whole-cell patch-clamp recordings in vivo and in vitro. We find that neural tissue has marked non-ohmic and frequency-filtering properties, which are not consistent with a resistive (ohmic) medium, as often assumed. In contrast, traditional metal electrodes provide very different results, more consistent with a resistive medium. The amplitude and phase profiles of the measured impedance are consistent with the contribution of ionic diffusion. We also show that such frequency-filtering properties may have an important impact on the genesis of local field potentials, as well as on the cable properties of neurons. The present results show non-ohmic properties of the extracellular medium around neurons, and suggest that source estimation methods, as well as the cable properties of neurons, which all assume an ohmic extracellular medium, may need to be re-evaluated.
1612.01606
Ricard Sole
Sergi Valverde, Jose Montoya, Lucas Joppa and Ricard Sole
Is nestedness in mutualistic networks an evolutionary spandrel?
8 pages, 4 figures. Submitted
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutualistic networks have been shown to involve complex patterns of interactions among animal and plant species. The architecture of these webs seems to underlie some of their robust and fragile behaviour. Recent work indicates that there is a strong correlation between the patterning of animal-plant interactions and their phylogenetic organisation. Here we show that this pattern, along with other reported regularities of mutualistic webs, can be properly explained by means of a very simple model of speciation and divergence. This model also predicts co-extinction dynamics under species loss consistent with the presence of an evolutionary signal. The agreement between observed and model networks suggests that some patterns displayed by real mutualistic webs might actually represent evolutionary spandrels.
[ { "created": "Tue, 6 Dec 2016 00:44:51 GMT", "version": "v1" } ]
2016-12-07
[ [ "Valverde", "Sergi", "" ], [ "Montoya", "Jose", "" ], [ "Joppa", "Lucas", "" ], [ "Sole", "Ricard", "" ] ]
Mutualistic networks have been shown to involve complex patterns of interactions among animal and plant species. The architecture of these webs seems to underlie some of their robust and fragile behaviour. Recent work indicates that there is a strong correlation between the patterning of animal-plant interactions and their phylogenetic organisation. Here we show that this pattern, along with other reported regularities of mutualistic webs, can be properly explained by means of a very simple model of speciation and divergence. This model also predicts co-extinction dynamics under species loss consistent with the presence of an evolutionary signal. The agreement between observed and model networks suggests that some patterns displayed by real mutualistic webs might actually represent evolutionary spandrels.
1901.00802
Richard P. Sear
Richard P. Sear
Diffusiophoresis in Cells: a General Non-Equilibrium, Non-Motor Mechanism for the Metabolism-Dependent Transport of Particles in Cells
8 pages, 2 figures
Phys. Rev. Lett. 122, 128101 (2019)
10.1103/PhysRevLett.122.128101
null
q-bio.SC cond-mat.soft
http://creativecommons.org/licenses/by/4.0/
The more we learn about the cytoplasm of cells, the more we realise that the cytoplasm is not uniform but instead is highly inhomogeneous. In any inhomogeneous solution, there are concentration gradients, and particles move either up or down these gradients due to a mechanism called diffusiophoresis. I estimate that inside metabolically active cells, the dynamics of particles can be strongly accelerated by diffusiophoresis, provided that they are at least tens of nanometres across. The dynamics of smaller objects, such as single proteins, are largely unaffected.
[ { "created": "Thu, 3 Jan 2019 16:09:58 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2019 17:27:01 GMT", "version": "v2" } ]
2019-04-03
[ [ "Sear", "Richard P.", "" ] ]
The more we learn about the cytoplasm of cells, the more we realise that the cytoplasm is not uniform but instead is highly inhomogeneous. In any inhomogeneous solution, there are concentration gradients, and particles move either up or down these gradients due to a mechanism called diffusiophoresis. I estimate that inside metabolically active cells, the dynamics of particles can be strongly accelerated by diffusiophoresis, provided that they are at least tens of nanometres across. The dynamics of smaller objects, such as single proteins, are largely unaffected.
q-bio/0505034
Peng-Ye Wang
Wei Li, Shuo-Xing Dou, Peng-Ye Wang
The histone octamer influences the wrapping direction of DNA on it: Brownian dynamics simulation of the nucleosome chirality
12 pages, 8 figures
Journal of Theoretical Biology 235 (2005) 365-372
null
null
q-bio.BM
null
In the eukaryotic nucleosome, DNA wraps around a histone octamer in a left-handed way. We study the process of chirality formation of the nucleosome with Brownian dynamics simulations. We model the histone octamer with a quantitatively adjustable chirality: left-handed, right-handed or non-chiral, and simulate the dynamical wrapping process of a DNA molecule on it. We find that the chirality of the nucleosome formed is strongly dependent on that of the histone octamer, and that different chiralities of the histone octamer induce different rotation directions during the wrapping of DNA. In addition, a very weak chirality of the histone octamer is sufficient for sustaining the correct chirality of the nucleosome formed. We also show that the chirality of a nucleosome may be broken at elevated temperature.
[ { "created": "Wed, 18 May 2005 07:57:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Li", "Wei", "" ], [ "Dou", "Shuo-Xing", "" ], [ "Wang", "Peng-Ye", "" ] ]
In the eukaryotic nucleosome, DNA wraps around a histone octamer in a left-handed way. We study the process of chirality formation of the nucleosome with Brownian dynamics simulations. We model the histone octamer with a quantitatively adjustable chirality: left-handed, right-handed or non-chiral, and simulate the dynamical wrapping process of a DNA molecule on it. We find that the chirality of the nucleosome formed is strongly dependent on that of the histone octamer, and that different chiralities of the histone octamer induce different rotation directions during the wrapping of DNA. In addition, a very weak chirality of the histone octamer is sufficient for sustaining the correct chirality of the nucleosome formed. We also show that the chirality of a nucleosome may be broken at elevated temperature.
1901.07945
Samuel Gershman
Samuel J. Gershman
What does the free energy principle tell us about the brain?
Accepted for publication in Neurons, Behavior, Data Analysis, and Theory
Neurons, Behavior, Data Analysis, and Theory, 2019
10.51628/001c.37270
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The free energy principle has been proposed as a unifying account of brain function. It is closely related to, and in some cases subsumes, earlier unifying ideas such as Bayesian inference, predictive coding, and active learning. This article clarifies these connections, teasing apart distinctive and shared predictions.
[ { "created": "Wed, 23 Jan 2019 15:23:00 GMT", "version": "v1" }, { "created": "Sun, 7 Jul 2019 16:31:46 GMT", "version": "v2" }, { "created": "Tue, 3 Sep 2019 15:30:20 GMT", "version": "v3" }, { "created": "Wed, 4 Sep 2019 08:16:30 GMT", "version": "v4" }, { "crea...
2022-11-30
[ [ "Gershman", "Samuel J.", "" ] ]
The free energy principle has been proposed as a unifying account of brain function. It is closely related to, and in some cases subsumes, earlier unifying ideas such as Bayesian inference, predictive coding, and active learning. This article clarifies these connections, teasing apart distinctive and shared predictions.
2103.08332
Fernando Palluzzi
Fernando Palluzzi and Mario Grassi
SEMgraph: An R Package for Causal Network Analysis of High-Throughput Data with Structural Equation Models
29 pages; 5 figures; original article; R package; CRAN stable version at: https://CRAN.R-project.org/package=SEMgraph; Development version available at https://github.com/fernandoPalluzzi/SEMgraph
null
10.1093/bioinformatics/btac567
null
q-bio.MN stat.AP
http://creativecommons.org/licenses/by/4.0/
With the advent of high-throughput sequencing (HTS) in molecular biology and medicine, the need for scalable statistical solutions for modeling complex biological systems has become critically important. The increasing number of platforms and possible experimental scenarios has raised the problem of integrating large amounts of new heterogeneous data with current knowledge, to test novel hypotheses and improve our comprehension of physiological processes and diseases. Although network theory provides a framework to represent biological systems and study their hidden properties, many algorithms still suffer from low reproducibility and robustness, dependence on user-defined setup, and poor interpretability. Here we discuss the R package SEMgraph, which combines network analysis and causal inference within the framework of structural equation modeling (SEM). It provides a fully automated toolkit that manages complex biological systems as multivariate networks, ensures robustness and reproducibility through data-driven evaluation of model architecture and perturbation, and is readily interpretable in terms of causal effects among system components. In addition, SEMgraph offers several functions for perturbed path finding, model reduction, and parallelization options for the analysis of large interaction networks.
[ { "created": "Mon, 15 Mar 2021 12:24:40 GMT", "version": "v1" }, { "created": "Mon, 9 Aug 2021 09:11:07 GMT", "version": "v2" }, { "created": "Sun, 19 Sep 2021 20:57:24 GMT", "version": "v3" }, { "created": "Mon, 3 Jan 2022 09:38:44 GMT", "version": "v4" } ]
2022-10-19
[ [ "Palluzzi", "Fernando", "" ], [ "Grassi", "Mario", "" ] ]
With the advent of high-throughput sequencing (HTS) in molecular biology and medicine, the need for scalable statistical solutions for modeling complex biological systems has become critically important. The increasing number of platforms and possible experimental scenarios has raised the problem of integrating large amounts of new heterogeneous data with current knowledge, to test novel hypotheses and improve our comprehension of physiological processes and diseases. Although network theory provides a framework to represent biological systems and study their hidden properties, many algorithms still suffer from low reproducibility and robustness, dependence on user-defined setup, and poor interpretability. Here we discuss the R package SEMgraph, which combines network analysis and causal inference within the framework of structural equation modeling (SEM). It provides a fully automated toolkit that manages complex biological systems as multivariate networks, ensures robustness and reproducibility through data-driven evaluation of model architecture and perturbation, and is readily interpretable in terms of causal effects among system components. In addition, SEMgraph offers several functions for perturbed path finding, model reduction, and parallelization options for the analysis of large interaction networks.
1603.02453
Yao-Gen Shu
Yong-Shun Song, Yao-Gen Shu, Xin Zhou, Zhong-Can Ou-Yang, Ming Li
Proofreading of DNA Polymerase: a new kinetic model with higher-order terminal effects
11 figures
J. Phys.: Condens. Matter 29 (2017) 025101
10.1088/0953-8984/29/2/025101
null
q-bio.SC cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fidelity of DNA replication by DNA polymerase (DNAP) has long been an important issue in biology. While numerous experiments have revealed details of the molecular structure and working mechanism of DNAP, which consists of both a polymerase site and an exonuclease (proofreading) site, there have been only a few theoretical studies of the fidelity issue. The first model that explicitly considered both sites was proposed in the 1970s, and its basic idea was widely adopted by later models. However, none of these models systematically and rigorously investigated the dominant factor in DNAP fidelity, i.e., the higher-order terminal effects through which the polymerization pathway and the proofreading pathway coordinate to achieve high fidelity. In this paper, we propose a new and comprehensive kinetic model of DNAP based on recent experimental observations, which includes previous models as special cases. We present a rigorous and unified treatment of the corresponding steady-state kinetic equations for terminal effects of any order, and derive analytical expressions for fidelity in terms of kinetic parameters under biologically relevant conditions. These expressions offer new insights into how the higher-order terminal effects contribute substantially to fidelity in an order-by-order way, and also show that the polymerization-and-proofreading mechanism is dominated by only a very few key parameters. We then apply these results to calculate the fidelity of some real DNAPs, obtaining good agreement with previous intuitive estimates given by experimentalists.
[ { "created": "Tue, 8 Mar 2016 10:06:53 GMT", "version": "v1" }, { "created": "Sat, 7 May 2016 13:00:27 GMT", "version": "v2" } ]
2016-11-29
[ [ "Song", "Yong-Shun", "" ], [ "Shu", "Yao-Gen", "" ], [ "Zhou", "Xin", "" ], [ "Ou-Yang", "Zhong-Can", "" ], [ "Li", "Ming", "" ] ]
The fidelity of DNA replication by DNA polymerase (DNAP) has long been an important issue in biology. While numerous experiments have revealed details of the molecular structure and working mechanism of DNAP, which consists of both a polymerase site and an exonuclease (proofreading) site, there have been only a few theoretical studies of the fidelity issue. The first model that explicitly considered both sites was proposed in the 1970s, and its basic idea was widely adopted by later models. However, none of these models systematically and rigorously investigated the dominant factor in DNAP fidelity, i.e., the higher-order terminal effects through which the polymerization pathway and the proofreading pathway coordinate to achieve high fidelity. In this paper, we propose a new and comprehensive kinetic model of DNAP based on recent experimental observations, which includes previous models as special cases. We present a rigorous and unified treatment of the corresponding steady-state kinetic equations for terminal effects of any order, and derive analytical expressions for fidelity in terms of kinetic parameters under biologically relevant conditions. These expressions offer new insights into how the higher-order terminal effects contribute substantially to fidelity in an order-by-order way, and also show that the polymerization-and-proofreading mechanism is dominated by only a very few key parameters. We then apply these results to calculate the fidelity of some real DNAPs, obtaining good agreement with previous intuitive estimates given by experimentalists.
2002.05866
Qiang Li
Qiang Li and Wei Feng
Trend and forecasting of the COVID-19 outbreak in China
12 figures, 10 pages
Journal of Infection 80 (2020) 469-496
10.1016/j.jinf.2020.02.014
Online 27-FEB-2020
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By using public data from Jan. 20 to Feb. 11, 2020, we perform data-driven analysis and forecasting of the COVID-19 epidemic in mainland China, especially Hubei province. Our results show that the turning points of daily infections are predicted to be Feb. 6 and Feb. 1, 2020, for Hubei and for China excluding Hubei, respectively. The epidemic in China is predicted to end after Mar. 10, 2020, and the total number of infections is predicted to be 51,600. The data trends reveal that the quick and active strategies taken by China to reduce human exposure have already had a positive impact on the control of the epidemic.
[ { "created": "Fri, 14 Feb 2020 04:21:19 GMT", "version": "v1" } ]
2020-03-30
[ [ "Li", "Qiang", "" ], [ "Feng", "Wei", "" ] ]
By using public data from Jan. 20 to Feb. 11, 2020, we perform data-driven analysis and forecasting of the COVID-19 epidemic in mainland China, especially Hubei province. Our results show that the turning points of daily infections are predicted to be Feb. 6 and Feb. 1, 2020, for Hubei and for China excluding Hubei, respectively. The epidemic in China is predicted to end after Mar. 10, 2020, and the total number of infections is predicted to be 51,600. The data trends reveal that the quick and active strategies taken by China to reduce human exposure have already had a positive impact on the control of the epidemic.
1610.04258
Kameron Harris
Kameron Decker Harris and Tatiana Dashevskiy and Joshua Mendoza and Alfredo J. Garcia III and Jan-Marino Ramirez and Eric Shea-Brown
Different roles for inhibition in the rhythm-generating respiratory network
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unraveling the interplay of excitation and inhibition within rhythm-generating networks remains a fundamental issue in neuroscience. We use a biophysical model to investigate the different roles of local and long-range inhibition in the respiratory network, a key component of which is the pre-B\"otzinger complex inspiratory microcircuit. Increasing inhibition within the microcircuit results in a limited number of out-of-phase neurons before rhythmicity and synchrony degenerate. Thus, unstructured local inhibition is destabilizing and cannot support the generation of more than one rhythm. A two-phase rhythm requires restructuring the network into two microcircuits coupled by long-range inhibition in the manner of a half-center. In this context, inhibition leads to greater stability of the two out-of-phase rhythms. We support our computational results with in vitro recordings from mouse pre-B\"otzinger complex. Partial excitation block leads to increased rhythmic variability, but this recovers following blockade of inhibition. Our results support the idea that local inhibition in the pre-B\"otzinger complex is present to allow for descending control of synchrony or robustness to adverse conditions like hypoxia. We conclude that the balance of inhibition and excitation determines the stability of rhythmogenesis, but with opposite roles within and between areas. These different inhibitory roles may apply to a variety of rhythmic behaviors that emerge in widespread pattern generating circuits of the nervous system.
[ { "created": "Thu, 13 Oct 2016 20:49:12 GMT", "version": "v1" }, { "created": "Mon, 12 Jun 2017 18:43:32 GMT", "version": "v2" } ]
2017-06-14
[ [ "Harris", "Kameron Decker", "" ], [ "Dashevskiy", "Tatiana", "" ], [ "Mendoza", "Joshua", "" ], [ "Garcia", "Alfredo J.", "III" ], [ "Ramirez", "Jan-Marino", "" ], [ "Shea-Brown", "Eric", "" ] ]
Unraveling the interplay of excitation and inhibition within rhythm-generating networks remains a fundamental issue in neuroscience. We use a biophysical model to investigate the different roles of local and long-range inhibition in the respiratory network, a key component of which is the pre-B\"otzinger complex inspiratory microcircuit. Increasing inhibition within the microcircuit results in a limited number of out-of-phase neurons before rhythmicity and synchrony degenerate. Thus, unstructured local inhibition is destabilizing and cannot support the generation of more than one rhythm. A two-phase rhythm requires restructuring the network into two microcircuits coupled by long-range inhibition in the manner of a half-center. In this context, inhibition leads to greater stability of the two out-of-phase rhythms. We support our computational results with in vitro recordings from mouse pre-B\"otzinger complex. Partial excitation block leads to increased rhythmic variability, but this recovers following blockade of inhibition. Our results support the idea that local inhibition in the pre-B\"otzinger complex is present to allow for descending control of synchrony or robustness to adverse conditions like hypoxia. We conclude that the balance of inhibition and excitation determines the stability of rhythmogenesis, but with opposite roles within and between areas. These different inhibitory roles may apply to a variety of rhythmic behaviors that emerge in widespread pattern generating circuits of the nervous system.
2109.00146
Ze Wang
Ze Wang
Resting state fMRI-based temporal coherence mapping
null
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Long-range temporal coherence (LRTC) is common in dynamic systems and is fundamental to their function. LRTC in the brain has been shown to be important for cognition. Assessing LRTC may provide critical information for understanding the potential underpinnings of brain organization, function, and cognition. To facilitate this overarching goal, we provide a method, named temporal coherence mapping (TCM), to explicitly quantify LRTC using resting state fMRI. TCM is based on correlation analysis of the transit states of the phase space reconstructed by temporal embedding. Several TCM properties are used to measure LRTC, including the averaged correlation, the anti-correlation, the ratio of correlation to anti-correlation, the mean coherent and incoherent durations, and the ratio between the coherent and incoherent times. TCM was first evaluated with simulations and then with the large Human Connectome Project dataset. Evaluation results showed that TCM metrics can successfully differentiate signals with different temporal coherence regardless of the parameters used to reconstruct the phase space. In the human brain, TCM metrics, except the ratio of coherent to incoherent time, showed high test-retest reproducibility; TCM metrics are related to age, sex, and total cognitive scores. In summary, TCM provides a first-of-its-kind tool to assess LRTC and the imbalance between coherence and incoherence; TCM properties are physiologically and cognitively meaningful.
[ { "created": "Wed, 1 Sep 2021 01:42:05 GMT", "version": "v1" }, { "created": "Tue, 9 Jul 2024 13:19:06 GMT", "version": "v2" } ]
2024-07-10
[ [ "Wang", "Ze", "" ] ]
Long-range temporal coherence (LRTC) is common in dynamic systems and is fundamental to their function. LRTC in the brain has been shown to be important for cognition. Assessing LRTC may provide critical information for understanding the potential underpinnings of brain organization, function, and cognition. To facilitate this overarching goal, we provide a method, named temporal coherence mapping (TCM), to explicitly quantify LRTC using resting state fMRI. TCM is based on correlation analysis of the transit states of the phase space reconstructed by temporal embedding. Several TCM properties are used to measure LRTC, including the averaged correlation, the anti-correlation, the ratio of correlation to anti-correlation, the mean coherent and incoherent durations, and the ratio between the coherent and incoherent times. TCM was first evaluated with simulations and then with the large Human Connectome Project dataset. Evaluation results showed that TCM metrics can successfully differentiate signals with different temporal coherence regardless of the parameters used to reconstruct the phase space. In the human brain, TCM metrics, except the ratio of coherent to incoherent time, showed high test-retest reproducibility; TCM metrics are related to age, sex, and total cognitive scores. In summary, TCM provides a first-of-its-kind tool to assess LRTC and the imbalance between coherence and incoherence; TCM properties are physiologically and cognitively meaningful.
2004.10396
Michael Small
Michael Small and David Cavanagh
Modelling strong control measures for epidemic propagation with networks -- A COVID-19 case study
Revised. 14 pages, 7 figures. Includes impact discussion and recovery predictions. In press in IEEE Access
null
10.1109/ACCESS.2020.3001298
null
q-bio.PE nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that precise knowledge of epidemic transmission parameters is not required to build an informative model of the spread of disease. We propose a detailed model of the topology of the contact network under various external control regimes and demonstrate that this is sufficient to capture the salient dynamical characteristics and to inform decisions. Contact between individuals in the community is characterised by a contact graph, whose structure is selected to mimic community control measures. Our model of city-level transmission of an infectious agent (SEIR model) characterises spread via (a) a scale-free contact network (no control); (b) a random graph (elimination of mass gatherings); and (c) a small-world lattice (partial to full lockdown -- "social" distancing). This model exhibits good qualitative agreement between simulation and data from the 2020 pandemic spread of coronavirus. Estimates of the relevant rate parameters of the SEIR model are obtained, and we demonstrate the robustness of our model predictions under uncertainty in those estimates. The social context and utility of this work is identified, contributing to a highly effective pandemic response in Western Australia.
[ { "created": "Wed, 22 Apr 2020 05:05:27 GMT", "version": "v1" }, { "created": "Thu, 23 Apr 2020 07:46:13 GMT", "version": "v2" }, { "created": "Wed, 27 May 2020 05:33:07 GMT", "version": "v3" }, { "created": "Mon, 8 Jun 2020 06:23:10 GMT", "version": "v4" } ]
2020-06-15
[ [ "Small", "Michael", "" ], [ "Cavanagh", "David", "" ] ]
We show that precise knowledge of epidemic transmission parameters is not required to build an informative model of the spread of disease. We propose a detailed model of the topology of the contact network under various external control regimes and demonstrate that this is sufficient to capture the salient dynamical characteristics and to inform decisions. Contact between individuals in the community is characterised by a contact graph, whose structure is selected to mimic community control measures. Our model of city-level transmission of an infectious agent (SEIR model) characterises spread via (a) a scale-free contact network (no control); (b) a random graph (elimination of mass gatherings); and (c) a small-world lattice (partial to full lockdown -- "social" distancing). This model exhibits good qualitative agreement between simulation and data from the 2020 pandemic spread of coronavirus. Estimates of the relevant rate parameters of the SEIR model are obtained, and we demonstrate the robustness of our model predictions under uncertainty in those estimates. The social context and utility of this work is identified, contributing to a highly effective pandemic response in Western Australia.
0707.2658
Arunava Goswami
Ayesha Rahman, Dipankar Seth, Nitai Debnath, C. Ulrichs, I. Mewis, R. L. Brahmachary and A. Goswami
Nanosilica mop up host lipids and fights baculovirus
null
null
null
null
q-bio.BM q-bio.MN
null
Various types of surface functionalized nanosilica (50-60 nm size with 3-10 nm inner pore size range) have been used to kill insect pests by sucking up cuticular lipids and breaking the water barrier. We have also utilized nanosilica for mopping up host lipids induced by the malarial parasite, P. gallinaceum, in poultry birds; VLDL cholesterol and serum triglycerides are brought back to the normal level with a concomitant check in parasite growth. While this work continues, we have explored another more convenient system, the silkworm (Bombyx mori), which is frequently decimated by a baculovirus, NPV, for which no antidote is known so far. Here, too, viral infection enhances host lipids. Eight different types of nanosilica were injected into the virus-infected silkworm (batches of 10 worms) after ensuring 100% survival up to cocoon formation in control larvae (injected with the same volume of ethanol, the medium of nanosilica). Of these 8, AL60102 and AL60106 have the most marked effect on the infected silkworm, both as prophylactic and pharmaceutical agents. Normal larvae injected with these nanosilica survive up to cocoon formation.
[ { "created": "Wed, 18 Jul 2007 07:31:27 GMT", "version": "v1" } ]
2007-07-19
[ [ "Rahman", "Ayesha", "" ], [ "Seth", "Dipankar", "" ], [ "Debnath", "Nitai", "" ], [ "Ulrichs", "C.", "" ], [ "Mewis", "I.", "" ], [ "Brahmachary", "R. L.", "" ], [ "Goswami", "A.", "" ] ]
Various types of surface functionalized nanosilica (50-60 nm size with 3-10 nm inner pore size range) have been used to kill insect pests by sucking up cuticular lipids and breaking the water barrier. We have also utilized nanosilica for mopping up host lipids induced by the malarial parasite, P. gallinaceum, in poultry birds; VLDL cholesterol and serum triglycerides are brought back to the normal level with a concomitant check in parasite growth. While this work continues, we have explored another more convenient system, the silkworm (Bombyx mori), which is frequently decimated by a baculovirus, NPV, for which no antidote is known so far. Here, too, viral infection enhances host lipids. Eight different types of nanosilica were injected into the virus-infected silkworm (batches of 10 worms) after ensuring 100% survival up to cocoon formation in control larvae (injected with the same volume of ethanol, the medium of nanosilica). Of these 8, AL60102 and AL60106 have the most marked effect on the infected silkworm, both as prophylactic and pharmaceutical agents. Normal larvae injected with these nanosilica survive up to cocoon formation.
2212.10602
Alexander Gorsky
Alexander Gorsky
Page time and the order parameter for a consciousness state
12 pages,minor changes, references added
null
null
null
q-bio.NC cond-mat.dis-nn hep-th
http://creativecommons.org/licenses/by/4.0/
In this short note, using the analogy with the recent resolution of the black hole information paradox, we conjecture the order parameter for the state of consciousness based on the notion of the Page curve and the Page time. The entanglement between the state of the brain and the time series of neuronal firing, as well as the non-orthogonality of the functional connectomes, play a key role.
[ { "created": "Sun, 18 Dec 2022 18:31:50 GMT", "version": "v1" }, { "created": "Mon, 6 Mar 2023 17:08:59 GMT", "version": "v2" } ]
2023-03-07
[ [ "Gorsky", "Alexander", "" ] ]
In this short note, using the analogy with the recent resolution of the black hole information paradox, we conjecture the order parameter for the state of consciousness based on the notion of the Page curve and the Page time. The entanglement between the state of the brain and the time series of neuronal firing, as well as the non-orthogonality of the functional connectomes, play a key role.
0710.3421
Jeremy Gunawardena
Aneil Mallavarapu, Matthew Thomson, Benjamin Ullian, Jeremy Gunawardena
Modular model building
24 pages, 6 figures
null
null
null
q-bio.MN
null
Mathematical models are increasingly used in both academia and the pharmaceutical industry to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling general properties to be specified independently of specific instances. These in turn require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enable libraries of modules to be created for generic biological processes, which can be instantiated and re-used repeatedly in different contexts with different components. We have developed a computational infrastructure to support this. We show here why these capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.
[ { "created": "Thu, 18 Oct 2007 01:14:42 GMT", "version": "v1" } ]
2007-10-19
[ [ "Mallavarapu", "Aneil", "" ], [ "Thomson", "Matthew", "" ], [ "Ullian", "Benjamin", "" ], [ "Gunawardena", "Jeremy", "" ] ]
Mathematical models are increasingly used in both academia and the pharmaceutical industry to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling general properties to be specified independently of specific instances. These in turn require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enable libraries of modules to be created for generic biological processes, which can be instantiated and re-used repeatedly in different contexts with different components. We have developed a computational infrastructure to support this. We show here why these capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.
2110.04624
Wengong Jin
Wengong Jin, Jeremy Wohlwend, Regina Barzilay, Tommi Jaakkola
Iterative Refinement Graph Neural Network for Antibody Sequence-Structure Co-design
Accepted to ICLR 2022
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antibodies are versatile proteins that bind to pathogens like viruses and stimulate the adaptive immune system. The specificity of antibody binding is determined by complementarity-determining regions (CDRs) at the tips of these Y-shaped proteins. In this paper, we propose a generative model to automatically design the CDRs of antibodies with enhanced binding specificity or neutralization capabilities. Previous generative approaches formulate protein design as a structure-conditioned sequence generation task, assuming the desired 3D structure is given a priori. In contrast, we propose to co-design the sequence and 3D structure of CDRs as graphs. Our model unravels a sequence autoregressively while iteratively refining its predicted global structure. The inferred structure in turn guides subsequent residue choices. For efficiency, we model the conditional dependence between residues inside and outside of a CDR in a coarse-grained manner. Our method achieves superior log-likelihood on the test set and outperforms previous baselines in designing antibodies capable of neutralizing the SARS-CoV-2 virus.
[ { "created": "Sat, 9 Oct 2021 18:23:32 GMT", "version": "v1" }, { "created": "Fri, 15 Oct 2021 19:48:56 GMT", "version": "v2" }, { "created": "Thu, 27 Jan 2022 22:29:40 GMT", "version": "v3" } ]
2022-01-31
[ [ "Jin", "Wengong", "" ], [ "Wohlwend", "Jeremy", "" ], [ "Barzilay", "Regina", "" ], [ "Jaakkola", "Tommi", "" ] ]
Antibodies are versatile proteins that bind to pathogens like viruses and stimulate the adaptive immune system. The specificity of antibody binding is determined by complementarity-determining regions (CDRs) at the tips of these Y-shaped proteins. In this paper, we propose a generative model to automatically design the CDRs of antibodies with enhanced binding specificity or neutralization capabilities. Previous generative approaches formulate protein design as a structure-conditioned sequence generation task, assuming the desired 3D structure is given a priori. In contrast, we propose to co-design the sequence and 3D structure of CDRs as graphs. Our model unravels a sequence autoregressively while iteratively refining its predicted global structure. The inferred structure in turn guides subsequent residue choices. For efficiency, we model the conditional dependence between residues inside and outside of a CDR in a coarse-grained manner. Our method achieves superior log-likelihood on the test set and outperforms previous baselines in designing antibodies capable of neutralizing the SARS-CoV-2 virus.
1709.00133
Ann Sizemore
Ann E. Sizemore, Elisabeth A. Karuza, Chad Giusti, Danielle S. Bassett
Knowledge gaps in the early growth of semantic networks
17 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the features of and mechanisms behind language learning can provide insights into the general process of knowledge acquisition. Recent methods from network science applied to language learning have advanced the field, particularly by noting associations between densely connected words and acquisition. However, the importance of sparse areas of the network, or knowledge gaps, remains unexplored. Here we create a semantic feature network in which words correspond to nodes and in which connections correspond to semantic similarity. We develop a new analytical approach built on principles of applied topology to query the prevalence of knowledge gaps, which we propose manifest as cavities within the network. We detect topological cavities of multiple dimensions in the growing semantic feature network of children ages 16 to 30 months. The pattern of cavity appearance matches that of a constrained null model, created by predefining the affinity of each node for connections. Furthermore, when word acquisition time is computed from children of mothers with differing levels of education, we find that despite variation at the word level, the global organization as measured by persistent homology remains comparable. We show that topological properties of a node correlate with filling in cavities better than simple lexical properties such as the length and frequency of the corresponding word. Finally, we show that the large-scale architecture of the semantic feature network is topologically accommodating to many node orders. We discuss the importance of topology in language learning, and we speculate that the formation and filling of knowledge gaps may be a robust feature of knowledge acquisition.
[ { "created": "Fri, 1 Sep 2017 02:30:24 GMT", "version": "v1" } ]
2017-09-04
[ [ "Sizemore", "Ann E.", "" ], [ "Karuza", "Elisabeth A.", "" ], [ "Giusti", "Chad", "" ], [ "Bassett", "Danielle S.", "" ] ]
Understanding the features of and mechanisms behind language learning can provide insights into the general process of knowledge acquisition. Recent methods from network science applied to language learning have advanced the field, particularly by noting associations between densely connected words and acquisition. However, the importance of sparse areas of the network, or knowledge gaps, remains unexplored. Here we create a semantic feature network in which words correspond to nodes and in which connections correspond to semantic similarity. We develop a new analytical approach built on principles of applied topology to query the prevalence of knowledge gaps, which we propose manifest as cavities within the network. We detect topological cavities of multiple dimensions in the growing semantic feature network of children ages 16 to 30 months. The pattern of cavity appearance matches that of a constrained null model, created by predefining the affinity of each node for connections. Furthermore, when word acquisition time is computed from children of mothers with differing levels of education, we find that despite variation at the word level, the global organization as measured by persistent homology remains comparable. We show that topological properties of a node correlate with filling in cavities better than simple lexical properties such as the length and frequency of the corresponding word. Finally, we show that the large-scale architecture of the semantic feature network is topologically accommodating to many node orders. We discuss the importance of topology in language learning, and we speculate that the formation and filling of knowledge gaps may be a robust feature of knowledge acquisition.
1105.2893
Sergei Kozyrev
S.V. Kozyrev, A.Yu. Khrennikov
Replica procedure for probabilistic algorithms as a model of gene duplication
null
Doklady Mathematics, 84 (2011) no. 2, 726--729
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present paper we propose to describe gene networks in biological systems using probabilistic algorithms. We describe gene duplication in the process of biological evolution by introducing a replica procedure for probabilistic algorithms. We construct examples of such a replica procedure for hidden Markov models. We introduce a family of hidden Markov models where the set of hidden states is a finite additive group with a p-adic metric and build the replica procedure for this family of Markovian models.
[ { "created": "Sat, 14 May 2011 12:07:53 GMT", "version": "v1" } ]
2012-08-01
[ [ "Kozyrev", "S. V.", "" ], [ "Khrennikov", "A. Yu.", "" ] ]
In the present paper we propose to describe gene networks in biological systems using probabilistic algorithms. We describe gene duplication in the process of biological evolution by introducing a replica procedure for probabilistic algorithms. We construct examples of such a replica procedure for hidden Markov models. We introduce a family of hidden Markov models where the set of hidden states is a finite additive group with a p-adic metric and build the replica procedure for this family of Markovian models.
2307.02721
Hiroshi Tamura
Hiroshi Tamura
Spontaneous segregation of visual information between parallel streams of a multi-stream convolutional neural network
37 pages, 9 figures, 2 tables
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Visual information is processed in hierarchically organized parallel pathways in the primate brain. In lower cortical areas, color information and shape information are processed in a parallel manner, while in higher cortical areas, various types of visual information, such as color, face, animate/inanimate, are processed in a parallel manner. In the present study, the possibility of spontaneous segregation of visual information in parallel streams was examined by constructing a convolutional neural network with parallel architecture in all of the convolutional layers. The results revealed that color information was segregated from shape information in most model instances. Deletion of the color-related stream decreased recognition accuracy in the inanimate category, whereas deletion of the shape-related stream decreased recognition accuracy in the animate category. The results suggest that properties of filters and functions of a stream are spontaneously segregated in parallel streams of neural networks.
[ { "created": "Thu, 6 Jul 2023 02:05:13 GMT", "version": "v1" } ]
2023-07-07
[ [ "Tamura", "Hiroshi", "" ] ]
Visual information is processed in hierarchically organized parallel pathways in the primate brain. In lower cortical areas, color information and shape information are processed in a parallel manner, while in higher cortical areas, various types of visual information, such as color, face, animate/inanimate, are processed in a parallel manner. In the present study, the possibility of spontaneous segregation of visual information in parallel streams was examined by constructing a convolutional neural network with parallel architecture in all of the convolutional layers. The results revealed that color information was segregated from shape information in most model instances. Deletion of the color-related stream decreased recognition accuracy in the inanimate category, whereas deletion of the shape-related stream decreased recognition accuracy in the animate category. The results suggest that properties of filters and functions of a stream are spontaneously segregated in parallel streams of neural networks.
1510.00427
Andre Peterson
Andre D. H. Peterson, Hamish Meffin, Mark J. Cook, David B. Grayden, Iven M.Y Mareels, Anthony N. Burkitt
A homotopic mapping between current-based and conductance-based synapses in a mesoscopic neural model of epilepsy
This is the submitted version
null
null
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Changes in brain states, as found in many neurological diseases such as epilepsy, are often described as bifurcations in mesoscopic neural models. Nearly all of these models rely on a mathematically convenient, but biophysically inaccurate, description of the synaptic input to neurons called current-based synapses. We develop a novel analytical framework to analyze the effects of a more biophysically realistic description, known as conductance-based synapses. These are implemented in a mesoscopic neural model and compared to the standard approximation via a single parameter homotopic mapping. A bifurcation analysis using the homotopy parameter demonstrates that if a more realistic synaptic coupling mechanism is used in this class of models, then a bifurcation or transition to an abnormal brain state does not occur in the same parameter space. We show that the more realistic coupling has additional mathematical parameters that require a fundamentally different biophysical mechanism to undergo a state transition. These results demonstrate the importance of incorporating more realistic synapses in mesoscopic neural models and challenge the accuracy of previous models, especially those describing brain state transitions such as epilepsy.
[ { "created": "Thu, 1 Oct 2015 21:27:37 GMT", "version": "v1" }, { "created": "Sun, 16 Dec 2018 03:11:08 GMT", "version": "v2" } ]
2018-12-19
[ [ "Peterson", "Andre D. H.", "" ], [ "Meffin", "Hamish", "" ], [ "Cook", "Mark J.", "" ], [ "Grayden", "David B.", "" ], [ "Mareels", "Iven M. Y", "" ], [ "Burkitt", "Anthony N.", "" ] ]
Changes in brain states, as found in many neurological diseases such as epilepsy, are often described as bifurcations in mesoscopic neural models. Nearly all of these models rely on a mathematically convenient, but biophysically inaccurate, description of the synaptic input to neurons called current-based synapses. We develop a novel analytical framework to analyze the effects of a more biophysically realistic description, known as conductance-based synapses. These are implemented in a mesoscopic neural model and compared to the standard approximation via a single parameter homotopic mapping. A bifurcation analysis using the homotopy parameter demonstrates that if a more realistic synaptic coupling mechanism is used in this class of models, then a bifurcation or transition to an abnormal brain state does not occur in the same parameter space. We show that the more realistic coupling has additional mathematical parameters that require a fundamentally different biophysical mechanism to undergo a state transition. These results demonstrate the importance of incorporating more realistic synapses in mesoscopic neural models and challenge the accuracy of previous models, especially those describing brain state transitions such as epilepsy.
1404.4197
William Harvey
William T. Harvey, Victoria Gregory, Donald J. Benton, James P. J. Hall, Rodney S. Daniels, Trevor Bedford, Daniel T. Haydon, Alan J. Hay, John W. McCauley, Richard Reeve
Identifying the genetic basis of antigenic change in influenza A(H1N1)
null
PLoS Pathogens (2016) 12(4): e1005526
10.1371/journal.ppat.1005526
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining phenotype from genetic data is a fundamental challenge. Influenza A viruses undergo rapid antigenic drift and identification of emerging antigenic variants is critical to the vaccine selection process. Using former seasonal influenza A(H1N1) viruses, hemagglutinin sequence and corresponding antigenic data were analyzed in combination with 3-D structural information. We attributed variation in hemagglutination inhibition to individual amino acid substitutions and quantified their antigenic impact, validating a subset experimentally using reverse genetics. Substitutions identified as low-impact were shown to be a critical component of influenza antigenic evolution and by including these, as well as the high-impact substitutions often focused on, the accuracy of predicting antigenic phenotypes of emerging viruses from genotype was doubled. The ability to quantify the phenotypic impact of specific amino acid substitutions should help refine techniques that predict the fitness and evolutionary success of variant viruses, leading to stronger theoretical foundations for selection of candidate vaccine viruses.
[ { "created": "Wed, 16 Apr 2014 10:41:32 GMT", "version": "v1" }, { "created": "Fri, 11 Sep 2015 20:19:35 GMT", "version": "v2" } ]
2016-04-12
[ [ "Harvey", "William T.", "" ], [ "Gregory", "Victoria", "" ], [ "Benton", "Donald J.", "" ], [ "Hall", "James P. J.", "" ], [ "Daniels", "Rodney S.", "" ], [ "Bedford", "Trevor", "" ], [ "Haydon", "Daniel T.", ...
Determining phenotype from genetic data is a fundamental challenge. Influenza A viruses undergo rapid antigenic drift and identification of emerging antigenic variants is critical to the vaccine selection process. Using former seasonal influenza A(H1N1) viruses, hemagglutinin sequence and corresponding antigenic data were analyzed in combination with 3-D structural information. We attributed variation in hemagglutination inhibition to individual amino acid substitutions and quantified their antigenic impact, validating a subset experimentally using reverse genetics. Substitutions identified as low-impact were shown to be a critical component of influenza antigenic evolution and by including these, as well as the high-impact substitutions often focused on, the accuracy of predicting antigenic phenotypes of emerging viruses from genotype was doubled. The ability to quantify the phenotypic impact of specific amino acid substitutions should help refine techniques that predict the fitness and evolutionary success of variant viruses, leading to stronger theoretical foundations for selection of candidate vaccine viruses.
2310.08801
Ying Chen
Yuhao Yao, Shufang Zhang, Boyao Wang, Gaofeng Zhao, Hong Deng, Ying Chen
Neural Dysfunction Underlying Working Memory Processing at Different Stages of the Illness Course in Schizophrenia: A Comparative Meta-analysis
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Schizophrenia (SCZ), as a chronic and persistent disorder, exhibits working memory deficits across various stages of the disorder, yet the neural mechanisms underlying these deficits remain elusive, with inconsistent neuroimaging findings. We aimed to compare the brain functional changes of working memory in patients at different stages: clinical high risk (CHR), first-episode psychosis (FEP), and long-term SCZ, using meta-analyses of functional magnetic resonance imaging (fMRI) studies. Following a systematic literature search, fifty-six whole-brain task-based fMRI studies (15 for CHR, 16 for FEP, 25 for long-term SCZ) were included. The separate and pooled neurofunctional mechanisms among CHR, FEP and long-term SCZ were generated by the Seed-based d Mapping toolbox. The CHR and FEP groups exhibited overlapping hypoactivation in the right inferior parietal lobule, right middle frontal gyrus, and left superior parietal lobule, indicating key lesion sites in the early phase of SCZ. Individuals with FEP showed lower activation in the left inferior parietal lobule than those with long-term SCZ, reflecting a possible recovery process or greater neural inefficiency. We concluded that SCZ represents a continuum in the early stage of illness progression, while the neural bases change inversely as the illness progresses to the long-term course.
[ { "created": "Fri, 13 Oct 2023 01:19:20 GMT", "version": "v1" } ]
2023-10-16
[ [ "Yao", "Yuhao", "" ], [ "Zhang", "Shufang", "" ], [ "Wang", "Boyao", "" ], [ "Zhao", "Gaofeng", "" ], [ "Deng", "Hong", "" ], [ "Chen", "Ying", "" ] ]
Schizophrenia (SCZ), as a chronic and persistent disorder, exhibits working memory deficits across various stages of the disorder, yet the neural mechanisms underlying these deficits remain elusive, with inconsistent neuroimaging findings. We aimed to compare the brain functional changes of working memory in patients at different stages: clinical high risk (CHR), first-episode psychosis (FEP), and long-term SCZ, using meta-analyses of functional magnetic resonance imaging (fMRI) studies. Following a systematic literature search, fifty-six whole-brain task-based fMRI studies (15 for CHR, 16 for FEP, 25 for long-term SCZ) were included. The separate and pooled neurofunctional mechanisms among CHR, FEP and long-term SCZ were generated by the Seed-based d Mapping toolbox. The CHR and FEP groups exhibited overlapping hypoactivation in the right inferior parietal lobule, right middle frontal gyrus, and left superior parietal lobule, indicating key lesion sites in the early phase of SCZ. Individuals with FEP showed lower activation in the left inferior parietal lobule than those with long-term SCZ, reflecting a possible recovery process or greater neural inefficiency. We concluded that SCZ represents a continuum in the early stage of illness progression, while the neural bases change inversely as the illness progresses to the long-term course.
2404.11842
Paul Constable
Mikhail Kulyabin, Aleksei Zhdanov, Andreas Maier, Lynne Loh, Jose J. Estevez, Paul A. Constable
Generating synthetic light-adapted electroretinogram waveforms using Artificial Intelligence to improve classification of retinal conditions in under-represented populations
null
null
10.1155/2024/1990419
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual electrophysiology is often used clinically to determine functional changes associated with retinal or neurological conditions. The full-field flash electroretinogram (ERG) assesses the global contribution of the outer and inner retinal layers initiated by the rod and cone pathways, depending on the state of retinal adaptation. Within clinical centers, reference normative data are used to compare with clinical cases that may be rare or underpowered within a specific demographic. To bolster either reference or case datasets, the application of synthetic ERG waveforms may offer benefits to disease classification and case-control studies. In this study, as a proof of concept, artificial intelligence (AI) is deployed to generate synthetic signals using Generative Adversarial Networks, up-scaling male participants within an ISCEV reference dataset containing 68 participants, with waveforms from the right and left eyes. Random Forest Classifiers further improved classification for sex within the group from a balanced accuracy of 0.72 to 0.83 with the added synthetic male waveforms. This is the first study to demonstrate the generation of synthetic ERG waveforms to improve machine learning classification modelling with electroretinogram waveforms.
[ { "created": "Thu, 18 Apr 2024 01:45:46 GMT", "version": "v1" }, { "created": "Wed, 26 Jun 2024 00:24:34 GMT", "version": "v2" } ]
2024-07-18
[ [ "Kulyabin", "Mikhail", "" ], [ "Zhdanov", "Aleksei", "" ], [ "Maier", "Andreas", "" ], [ "Loh", "Lynne", "" ], [ "Estevez", "Jose J.", "" ], [ "Constable", "Paul A.", "" ] ]
Visual electrophysiology is often used clinically to determine functional changes associated with retinal or neurological conditions. The full-field flash electroretinogram (ERG) assesses the global contribution of the outer and inner retinal layers initiated by the rod and cone pathways, depending on the state of retinal adaptation. Within clinical centers, reference normative data are used to compare with clinical cases that may be rare or underpowered within a specific demographic. To bolster either reference or case datasets, the application of synthetic ERG waveforms may offer benefits to disease classification and case-control studies. In this study, as a proof of concept, artificial intelligence (AI) is deployed to generate synthetic signals using Generative Adversarial Networks, up-scaling male participants within an ISCEV reference dataset containing 68 participants, with waveforms from the right and left eyes. Random Forest Classifiers further improved classification for sex within the group from a balanced accuracy of 0.72 to 0.83 with the added synthetic male waveforms. This is the first study to demonstrate the generation of synthetic ERG waveforms to improve machine learning classification modelling with electroretinogram waveforms.
2204.09680
Takeshi Ishida
Takeshi Ishida
Emergent simulation of cell-like shapes satisfying the conditions of life using lattice-type multiset chemical model
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
One of the great challenges in science is determining when, where, why, and how life first arose as well as the form taken by this life. In the present study, life was assumed to be (1) bounded, (2) replicating, (3) able to inherit information, and (4) able to metabolize energy. The various existing hypotheses provide little explanation of how these four conditions for life were established. Indeed, 'how' a chemical process that simultaneously satisfies all four conditions emerged after the materials for life were in place is not always clear. In this study, a 'multiset chemical lattice model', which allows virtual molecules of multiple types to be placed in each cell on a two-dimensional space, was considered. Using only the processes of molecular diffusion, reaction, and polymerization and modeling the chemical reactions of 15 types of molecules and 2 types of polymerized molecules and using the morphogenesis rule of the Turing model, the process of emergence of a cell-like form with the four conditions of life was modeled and demonstrated. Thus, in future research, this model will allow us to revisit and refine each of the hypotheses for the emergence of life.
[ { "created": "Thu, 21 Apr 2022 01:03:22 GMT", "version": "v1" }, { "created": "Tue, 2 Aug 2022 04:37:44 GMT", "version": "v2" }, { "created": "Mon, 12 Sep 2022 00:12:54 GMT", "version": "v3" } ]
2022-09-13
[ [ "Ishida", "Takeshi", "" ] ]
One of the great challenges in science is determining when, where, why, and how life first arose as well as the form taken by this life. In the present study, life was assumed to be (1) bounded, (2) replicating, (3) able to inherit information, and (4) able to metabolize energy. The various existing hypotheses provide little explanation of how these four conditions for life were established. Indeed, 'how' a chemical process that simultaneously satisfies all four conditions emerged after the materials for life were in place is not always clear. In this study, a 'multiset chemical lattice model', which allows virtual molecules of multiple types to be placed in each cell on a two-dimensional space, was considered. Using only the processes of molecular diffusion, reaction, and polymerization and modeling the chemical reactions of 15 types of molecules and 2 types of polymerized molecules and using the morphogenesis rule of the Turing model, the process of emergence of a cell-like form with the four conditions of life was modeled and demonstrated. Thus, in future research, this model will allow us to revisit and refine each of the hypotheses for the emergence of life.
1507.07693
Maddalena Dilucca
Maddalena Dilucca, Giulio Cimini, Andrea Semmoloni, Antonio Deiana, Andrea Giansanti
Codon Bias Patterns of $E.coli$'s Interacting Proteins
null
PLoS ONE 10(11): e0142127 (2015)
10.1371/journal.pone.0142127
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synonymous codons, i.e., DNA nucleotide triplets coding for the same amino acid, are used differently across the variety of living organisms. The biological meaning of this phenomenon, known as codon usage bias, is still controversial. In order to shed light on this point, we propose a new codon bias index, $CompAI$, that is based on the competition between cognate and near-cognate tRNAs during translation, without being tuned to the usage bias of highly expressed genes. We perform a genome-wide evaluation of codon bias for $E.coli$, comparing $CompAI$ with other widely used indices: $tAI$, $CAI$, and $Nc$. We show that $CompAI$ and $tAI$ capture similar information by being positively correlated with gene conservation, measured by ERI, and essentiality, whereas $CAI$ and $Nc$ appear to be less sensitive to evolutionary-functional parameters. Notably, the rate of variation of $tAI$ and $CompAI$ with ERI allows one to obtain sets of genes that consistently belong to specific clusters of orthologous genes (COGs). We also investigate the correlation of codon bias at the genomic level with the network features of protein-protein interactions in $E.coli$. We find that the most densely connected communities of the network share a similar level of codon bias (as measured by $CompAI$ and $tAI$). Conversely, a small difference in codon bias between two genes is, statistically, a prerequisite for the corresponding proteins to interact. Importantly, among all codon bias indices, $CompAI$ turns out to have the most coherent distribution over the communities of the interactome, pointing to the significance of competition among cognate and near-cognate tRNAs for explaining codon usage adaptation.
[ { "created": "Tue, 28 Jul 2015 09:22:23 GMT", "version": "v1" } ]
2015-11-17
[ [ "Dilucca", "Maddalena", "" ], [ "Cimini", "Giulio", "" ], [ "Semmoloni", "Andrea", "" ], [ "Deiana", "Antonio", "" ], [ "Giansanti", "Andrea", "" ] ]
Synonymous codons, i.e., DNA nucleotide triplets coding for the same amino acid, are used differently across the variety of living organisms. The biological meaning of this phenomenon, known as codon usage bias, is still controversial. In order to shed light on this point, we propose a new codon bias index, $CompAI$, that is based on the competition between cognate and near-cognate tRNAs during translation, without being tuned to the usage bias of highly expressed genes. We perform a genome-wide evaluation of codon bias for $E.coli$, comparing $CompAI$ with other widely used indices: $tAI$, $CAI$, and $Nc$. We show that $CompAI$ and $tAI$ capture similar information by being positively correlated with gene conservation, measured by ERI, and essentiality, whereas $CAI$ and $Nc$ appear to be less sensitive to evolutionary-functional parameters. Notably, the rate of variation of $tAI$ and $CompAI$ with ERI allows one to obtain sets of genes that consistently belong to specific clusters of orthologous genes (COGs). We also investigate the correlation of codon bias at the genomic level with the network features of protein-protein interactions in $E.coli$. We find that the most densely connected communities of the network share a similar level of codon bias (as measured by $CompAI$ and $tAI$). Conversely, a small difference in codon bias between two genes is, statistically, a prerequisite for the corresponding proteins to interact. Importantly, among all codon bias indices, $CompAI$ turns out to have the most coherent distribution over the communities of the interactome, pointing to the significance of competition among cognate and near-cognate tRNAs for explaining codon usage adaptation.
1702.05307
Magdalena Urszula Bogda\'nska
Magdalena U. Bogda\'nska, Marek Bodnar, Juan Belmonte-Beitia, Michael Murek, Philippe Schucht, J\"urgen Beck, V\'ictor M. P\'erez-Garc\'ia
A mathematical model of low grade gliomas treated with temozolomide and its therapeutical implications
41 pages, 8 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low grade gliomas (LGGs) are infiltrative and incurable primary brain tumours with typically slow evolution. These tumours usually occur in young and otherwise healthy patients, bringing controversies in treatment planning since aggressive treatment may lead to undesirable side effects. Thus, for management decisions it would be valuable to obtain early estimates of LGG growth potential. Here we propose a simple mathematical model of LGG growth and its response to chemotherapy which allows the growth of LGGs to be described in real patients. The model predicts, and our clinical data confirms, that the speed of response to chemotherapy is related to tumour aggressiveness. Moreover, we provide a formula for the time to radiological progression, which can possibly be used as a measure of tumour aggressiveness. Finally, we suggest that the response to a few chemotherapy cycles upon diagnosis might be used to predict tumour growth and to guide therapeutic actions on the basis of the findings.
[ { "created": "Fri, 17 Feb 2017 11:45:20 GMT", "version": "v1" } ]
2017-02-20
[ [ "Bogdańska", "Magdalena U.", "" ], [ "Bodnar", "Marek", "" ], [ "Belmonte-Beitia", "Juan", "" ], [ "Murek", "Michael", "" ], [ "Schucht", "Philippe", "" ], [ "Beck", "Jürgen", "" ], [ "Pérez-García", "Víctor M....
Low grade gliomas (LGGs) are infiltrative and incurable primary brain tumours with typically slow evolution. These tumours usually occur in young and otherwise healthy patients, bringing controversies in treatment planning since aggressive treatment may lead to undesirable side effects. Thus, for management decisions it would be valuable to obtain early estimates of LGG growth potential. Here we propose a simple mathematical model of LGG growth and its response to chemotherapy which allows the growth of LGGs to be described in real patients. The model predicts, and our clinical data confirms, that the speed of response to chemotherapy is related to tumour aggressiveness. Moreover, we provide a formula for the time to radiological progression, which can possibly be used as a measure of tumour aggressiveness. Finally, we suggest that the response to a few chemotherapy cycles upon diagnosis might be used to predict tumour growth and to guide therapeutic actions on the basis of the findings.
1405.6546
Jes\'us Fern\'andez-S\'anchez
Jes\'us Fern\'andez-S\'anchez and Marta Casanellas
Invariant versus classical quartet inference when evolution is heterogeneous across sites and lineages
32 pages; 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One reason why classical phylogenetic reconstruction methods fail to correctly infer the underlying topology is that they assume oversimplified models. In this paper we propose a topology reconstruction method consistent with the most general Markov model of nucleotide substitution, which can also deal with data coming from mixtures on the same topology. It is based on an idea of Eriksson on using phylogenetic invariants and provides a system of weights that can be used as input of quartet-based methods. We study its performance on real data and on a wide range of simulated 4-taxon data (both time-homogeneous and nonhomogeneous, with or without among-site rate heterogeneity, and with different branch length settings). We compare it to the classical methods of neighbor-joining (with paralinear distance), maximum likelihood (with different underlying models), and maximum parsimony. Our results show that this method is accurate and robust, has a similar performance to ML when data satisfies the assumptions of both methods, and outperforms all methods when these are based on inappropriate substitution models or when both long and short branches are present. If alignments are long enough, then it also outperforms other methods when some of its assumptions are violated.
[ { "created": "Mon, 26 May 2014 11:37:34 GMT", "version": "v1" }, { "created": "Mon, 27 Oct 2014 15:22:29 GMT", "version": "v2" } ]
2014-10-28
[ [ "Fernández-Sánchez", "Jesús", "" ], [ "Casanellas", "Marta", "" ] ]
One reason why classical phylogenetic reconstruction methods fail to correctly infer the underlying topology is that they assume oversimplified models. In this paper we propose a topology reconstruction method consistent with the most general Markov model of nucleotide substitution, which can also deal with data coming from mixtures on the same topology. It is based on an idea of Eriksson on using phylogenetic invariants and provides a system of weights that can be used as input of quartet-based methods. We study its performance on real data and on a wide range of simulated 4-taxon data (both time-homogeneous and nonhomogeneous, with or without among-site rate heterogeneity, and with different branch length settings). We compare it to the classical methods of neighbor-joining (with paralinear distance), maximum likelihood (with different underlying models), and maximum parsimony. Our results show that this method is accurate and robust, has a similar performance to ML when data satisfies the assumptions of both methods, and outperforms all methods when these are based on inappropriate substitution models or when both long and short branches are present. If alignments are long enough, then it also outperforms other methods when some of its assumptions are violated.
2402.12389
Tony Monnet
Andrian Kuch (PPRIME), Romain Tisserand (PPRIME), Fran\c{c}ois Durand, Tony Monnet (PPRIME), Jean-Fran\c{c}ois Debril
Postural adjustments preceding string release in trained archers
null
Journal of Sports Sciences, 2023, 41 (7), pp.677-685
10.1080/02640414.2023.2235154
null
q-bio.QM physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimal postural stability is required to perform in archery. Since the dynamic consequences of the string release may disturb the archer's postural equilibrium, archers should integrate these consequences into their motor program to optimize postural stability. This study aimed to characterize the postural strategy archers use to limit the potentially detrimental impact of the bow release on their postural stability and identify characteristics that may explain better performance. Six elite and seven sub-elite archers performed a series of 18 shots at 70 meters, standing on two force plates. Postural stability indicators were computed during the aiming and the shooting phase using the trajectory of the center of pressure. Two postural strategies were defined, according to whether postural adjustments were triggered before (early) or after (late) the string release. Both groups used anticipated postural adjustments, but elite archers triggered them before the string release more often and sooner. Scores differed between the two groups, but no differences were found between early and late shots. Trained archers seem to have finely integrated the dynamic consequences of their bow motion, triggering anticipated postural adjustments prior to the string release. However, it remains unclear whether this anticipation can positively influence the performance outcome.
[ { "created": "Wed, 14 Feb 2024 09:22:45 GMT", "version": "v1" } ]
2024-02-21
[ [ "Kuch", "Andrian", "", "PPRIME" ], [ "Tisserand", "Romain", "", "PPRIME" ], [ "Durand", "François", "", "PPRIME" ], [ "Monnet", "Tony", "", "PPRIME" ], [ "Debril", "Jean-François", "" ] ]
Optimal postural stability is required to perform in archery. Since the dynamic consequences of the string release may disturb the archer's postural equilibrium, archers should integrate these consequences into their motor program to optimize postural stability. This study aimed to characterize the postural strategy archers use to limit the potentially detrimental impact of the bow release on their postural stability and identify characteristics that may explain better performance. Six elite and seven sub-elite archers performed a series of 18 shots at 70 meters, standing on two force plates. Postural stability indicators were computed during the aiming and the shooting phase using the trajectory of the center of pressure. Two postural strategies were defined, according to whether postural adjustments were triggered before (early) or after (late) the string release. Both groups used anticipated postural adjustments, but elite archers triggered them before the string release more often and sooner. Scores differed between the two groups, but no differences were found between early and late shots. Trained archers seem to have finely integrated the dynamic consequences of their bow motion, triggering anticipated postural adjustments prior to the string release. However, it remains unclear whether this anticipation can positively influence the performance outcome.
1607.06720
Raj Mohanty
Remco Spanjaard, David Weaver, Shyamsunder Erramilli, Pritiraj Mohanty
Perspective: Melanoma diagnosis and monitoring: Sunrise for melanoma therapy but early detection remains in the shade
12 pages, 4 figures and 3 tables
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Melanoma is one of the most dangerous forms of cancer. The five-year survival rate is 98% if it is detected early. However, this rate plummets to 63% for regional disease and 17% when tumors have metastasized, that is, spread to distant sites. Furthermore, the incidence of melanoma has been rising by about 3% per year, whereas the incidence of cancers that are more common is decreasing. A handful of targeted therapies have recently become available that have finally shown real promise for treatment, but for reasons that remain unclear only a fraction of patients respond long term. These drugs often increase survival by only a few months in metastatic patient groups before relapse occurs. More effective treatment may be possible if a diagnosis can be made when the tumor burden is still low. Here, an overview of the current state-of-the-art is provided along with an argument for newer technologies towards early point-of-care diagnosis of melanoma.
[ { "created": "Fri, 18 Sep 2015 22:37:51 GMT", "version": "v1" }, { "created": "Mon, 25 Jul 2016 18:00:17 GMT", "version": "v2" } ]
2016-07-26
[ [ "Spanjaard", "Remco", "" ], [ "Weaver", "David", "" ], [ "Erramilli", "Shyamsunder", "" ], [ "Mohanty", "Pritiraj", "" ] ]
Melanoma is one of the most dangerous forms of cancer. The five-year survival rate is 98% if it is detected early. However, this rate plummets to 63% for regional disease and 17% when tumors have metastasized, that is, spread to distant sites. Furthermore, the incidence of melanoma has been rising by about 3% per year, whereas the incidence of cancers that are more common is decreasing. A handful of targeted therapies have recently become available that have finally shown real promise for treatment, but for reasons that remain unclear only a fraction of patients respond long term. These drugs often increase survival by only a few months in metastatic patient groups before relapse occurs. More effective treatment may be possible if a diagnosis can be made when the tumor burden is still low. Here, an overview of the current state-of-the-art is provided along with an argument for newer technologies towards early point-of-care diagnosis of melanoma.
1112.4438
Stefano Lonardi
Stefano Lonardi, Denisa Duma, Matthew Alpert, Francesca Cordero, Marco Beccuti, Prasanna R. Bhat, Yonghui Wu, Gianfranco Ciardo, Burair Alsaihati, Yaqin Ma, Steve Wanamaker, Josh Resnik, Timothy J. Close
Barcoding-free BAC Pooling Enables Combinatorial Selective Sequencing of the Barley Gene Space
null
null
null
null
q-bio.GN cs.CE cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new sequencing protocol that combines recent advances in combinatorial pooling design and second-generation sequencing technology to efficiently approach de novo selective genome sequencing. We show that combinatorial pooling is a cost-effective and practical alternative to exhaustive DNA barcoding when dealing with hundreds or thousands of DNA samples, such as genome-tiling gene-rich BAC clones. The novelty of the protocol hinges on the computational ability to efficiently compare hundreds of millions of short reads and assign them to the correct BAC clones so that the assembly can be carried out clone-by-clone. Experimental results on simulated data for the rice genome show that the deconvolution is extremely accurate (99.57% of the deconvoluted reads are assigned to the correct BAC), and the resulting BAC assemblies have very high quality (BACs are covered by contigs over about 77% of their length, on average). Experimental results on real data for a gene-rich subset of the barley genome confirm that the deconvolution is accurate (almost 70% of left/right pairs in paired-end reads are assigned to the same BAC, despite being processed independently) and the BAC assemblies have good quality (the average sum of all assembled contigs is about 88% of the estimated BAC length).
[ { "created": "Mon, 19 Dec 2011 19:06:57 GMT", "version": "v1" } ]
2011-12-20
[ [ "Lonardi", "Stefano", "" ], [ "Duma", "Denisa", "" ], [ "Alpert", "Matthew", "" ], [ "Cordero", "Francesca", "" ], [ "Beccuti", "Marco", "" ], [ "Bhat", "Prasanna R.", "" ], [ "Wu", "Yonghui", "" ], [ ...
We propose a new sequencing protocol that combines recent advances in combinatorial pooling design and second-generation sequencing technology to efficiently approach de novo selective genome sequencing. We show that combinatorial pooling is a cost-effective and practical alternative to exhaustive DNA barcoding when dealing with hundreds or thousands of DNA samples, such as genome-tiling gene-rich BAC clones. The novelty of the protocol hinges on the computational ability to efficiently compare hundreds of millions of short reads and assign them to the correct BAC clones so that the assembly can be carried out clone-by-clone. Experimental results on simulated data for the rice genome show that the deconvolution is extremely accurate (99.57% of the deconvoluted reads are assigned to the correct BAC), and the resulting BAC assemblies have very high quality (BACs are covered by contigs over about 77% of their length, on average). Experimental results on real data for a gene-rich subset of the barley genome confirm that the deconvolution is accurate (almost 70% of left/right pairs in paired-end reads are assigned to the same BAC, despite being processed independently) and the BAC assemblies have good quality (the average sum of all assembled contigs is about 88% of the estimated BAC length).
2401.13012
Sina Remmers
Sina Remmers
Ready for climate change? The importance of adaptive thermoregulatory flexibility for the Malagasy bat species Triaenops menamena
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The balance between energy intake and expenditure is essential for the survival of all organisms, and energy management is closely linked to ecology. Changes in environmental conditions can therefore be challenging, especially for an animal's physiology. Different strategies of thermoregulation have evolved, and heterothermy seems to be the most efficient way of saving energy. Daily torpor, a temporally controlled reduction of the metabolic rate and body temperature, is one form of heterothermy, and recent studies have revealed that this physiological strategy is used by many tropical and subtropical species. Yet, little is known about torpor in bats and their intraspecific thermoregulatory flexibility. Therefore, three populations of the Malagasy bat species Triaenops menamena were investigated to examine their metabolic rate, skin temperature, and related energy expenditure during normothermic and torpid states in the context of different microclimatic conditions. This study revealed significant physiological differences among these three populations along a gradient of fluctuation in environmental conditions. The greater the fluctuations in ambient temperature and humidity, the higher the general resting metabolic rate and the rate of its reduction, but the lower the torpid metabolic rate. This species shows highly adaptive physiological flexibility and is able to cope with unfavorable environmental conditions by using different strategies of thermoregulation and hypometabolism, which is beneficial in light of ongoing climatic changes.
[ { "created": "Tue, 23 Jan 2024 11:57:30 GMT", "version": "v1" } ]
2024-01-25
[ [ "Remmers", "Sina", "" ] ]
The balance between energy intake and expenditure is essential for the survival of all organisms, and energy management is closely linked to ecology. Changes in environmental conditions can therefore be challenging, especially for an animal's physiology. Different strategies of thermoregulation have evolved, and heterothermy seems to be the most efficient way of saving energy. Daily torpor, a temporally controlled reduction of the metabolic rate and body temperature, is one form of heterothermy, and recent studies have revealed that this physiological strategy is used by many tropical and subtropical species. Yet, little is known about torpor in bats and their intraspecific thermoregulatory flexibility. Therefore, three populations of the Malagasy bat species Triaenops menamena were investigated to examine their metabolic rate, skin temperature, and related energy expenditure during normothermic and torpid states in the context of different microclimatic conditions. This study revealed significant physiological differences among these three populations along a gradient of fluctuation in environmental conditions. The greater the fluctuations in ambient temperature and humidity, the higher the general resting metabolic rate and the rate of its reduction, but the lower the torpid metabolic rate. This species shows highly adaptive physiological flexibility and is able to cope with unfavorable environmental conditions by using different strategies of thermoregulation and hypometabolism, which is beneficial in light of ongoing climatic changes.
2109.11783
Pablo Villegas G\'ongora
Victor Buend\'ia, Pablo Villegas, Raffaella Burioni, Miguel A. Mu\~noz
The broad edge of synchronisation: Griffiths effects and collective phenomena in brain networks
13 pages, 2 figures. Accepted to be published in Philos. Trans. Royal Soc. A
Phil. Trans. R. Soc. A.380: 20200424 (2022)
10.1098/rsta.2020.0424
null
q-bio.NC cond-mat.dis-nn nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many of the amazing functional capabilities of the brain are collective properties stemming from the interactions of large sets of individual neurons. In particular, the most salient collective phenomena in brain activity are oscillations, which require the synchronous activation of many neurons. Here, we analyse parsimonious dynamical models of neural synchronisation running on top of synthetic networks that capture essential aspects of the actual brain anatomical connectivity such as a hierarchical-modular and core-periphery structure. These models reveal the emergence of complex collective states with intermediate and flexible levels of synchronisation, halfway in the synchronous-asynchronous spectrum. These states are best described as broad Griffiths-like phases, i.e. an extension of standard critical points that emerge in structurally heterogeneous systems. We analyse different routes (bifurcations) to synchronisation and stress the relevance of 'hybrid-type transitions' to generate rich dynamical patterns. Overall, our results illustrate the complex interplay between structure and dynamics, underlining key aspects leading to rich collective states needed to sustain brain functionality.
[ { "created": "Fri, 24 Sep 2021 07:24:14 GMT", "version": "v1" } ]
2022-06-29
[ [ "Buendía", "Victor", "" ], [ "Villegas", "Pablo", "" ], [ "Burioni", "Raffaella", "" ], [ "Muñoz", "Miguel A.", "" ] ]
Many of the amazing functional capabilities of the brain are collective properties stemming from the interactions of large sets of individual neurons. In particular, the most salient collective phenomena in brain activity are oscillations, which require the synchronous activation of many neurons. Here, we analyse parsimonious dynamical models of neural synchronisation running on top of synthetic networks that capture essential aspects of the actual brain anatomical connectivity such as a hierarchical-modular and core-periphery structure. These models reveal the emergence of complex collective states with intermediate and flexible levels of synchronisation, halfway in the synchronous-asynchronous spectrum. These states are best described as broad Griffiths-like phases, i.e. an extension of standard critical points that emerge in structurally heterogeneous systems. We analyse different routes (bifurcations) to synchronisation and stress the relevance of 'hybrid-type transitions' to generate rich dynamical patterns. Overall, our results illustrate the complex interplay between structure and dynamics, underlining key aspects leading to rich collective states needed to sustain brain functionality.
2112.02592
Jessica Rogge
Mark Ralph Baker, Fleur Conway, Filippo Dal Ben, Elizabeth Lucinda Hawthorne, Licia Iacoviello, A Agodi, Saquib Mukhtar, Hai Phan, Yemurai Rabvukwa, Jessica R Rogge
A New Rapid Test to End COVID
3 pages
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite 93.1% to 95.8% of the UK adult population having been vaccinated and currently 83.5% to 89.8% of adults having received at least two doses (1), and despite many households testing twice a week with lateral flow tests (2), R at the time of writing is 0.9 to 1.1, with a growth rate range for England of between -1% and +1% (3). Furthermore, up to 30% of infected individuals are going on to experience Long Covid (4). The crisis is far from over and as new variants of concern like Omicron spread, the situation is not under control, even in the highly vaccinated and tested UK and far less so in many countries. The problem is likely to be replicated in other countries with currently low infection levels as isolation is eased in future, even if these countries reach a high level of vaccination. Additionally, concerns have been raised about the fall in immunity by 6 months after receiving the Pfizer and AstraZeneca vaccines (5), and the ability of Omicron to re-infect and cause illness in vaccinated people, with urgent booster jabs now being given to attempt to mitigate this. A solution to stop the spread of all variants of COVID-19 is needed now, and we present it here: CLDC, a rapid test that is 98%+ sensitive, low cost and scalable.
[ { "created": "Sun, 5 Dec 2021 15:12:10 GMT", "version": "v1" } ]
2021-12-07
[ [ "Baker", "Mark Ralph", "" ], [ "Conway", "Fleur", "" ], [ "Ben", "Filippo Dal", "" ], [ "Hawthorne", "Elizabeth Lucinda", "" ], [ "Iacoviello", "Licia", "" ], [ "Agodi", "A", "" ], [ "Mukhtar", "Saquib", ""...
Despite 93.1% to 95.8% of the UK adult population having been vaccinated and currently 83.5% to 89.8% of adults having received at least two doses (1), and despite many households testing twice a week with lateral flow tests (2), R at the time of writing is 0.9 to 1.1, with a growth rate range for England of between -1% and +1% (3). Furthermore, up to 30% of infected individuals are going on to experience Long Covid (4). The crisis is far from over and as new variants of concern like Omicron spread, the situation is not under control, even in the highly vaccinated and tested UK and far less so in many countries. The problem is likely to be replicated in other countries with currently low infection levels as isolation is eased in future, even if these countries reach a high level of vaccination. Additionally, concerns have been raised about the fall in immunity by 6 months after receiving the Pfizer and AstraZeneca vaccines (5), and the ability of Omicron to re-infect and cause illness in vaccinated people, with urgent booster jabs now being given to attempt to mitigate this. A solution to stop the spread of all variants of COVID-19 is needed now, and we present it here: CLDC, a rapid test that is 98%+ sensitive, low cost and scalable.
q-bio/0602012
Jose Vilar
Leonor Saiz and Jose M. G. Vilar
In vivo evidence of alternative loop geometries in DNA-protein complexes
3 pages, 2 figures
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph
null
The in vivo free energy of looping double-stranded DNA by the lac repressor has a remarkable behavior whose origins are not fully understood. In addition to the intrinsic periodicity of the DNA double helix, the in vivo free energy has an oscillatory component of about half the helical period and oscillates asymmetrically with an amplitude significantly smaller than predicted by current theories. Here, we show that the in vivo behavior is accurately accounted for by the simultaneous presence of two distinct conformations of looped DNA. Our analysis reveals that these two conformations have different optimal free energies and phases and that they behave distinctly in the presence of key architectural proteins.
[ { "created": "Fri, 10 Feb 2006 23:00:31 GMT", "version": "v1" } ]
2007-05-23
[ [ "Saiz", "Leonor", "" ], [ "Vilar", "Jose M. G.", "" ] ]
The in vivo free energy of looping double-stranded DNA by the lac repressor has a remarkable behavior whose origins are not fully understood. In addition to the intrinsic periodicity of the DNA double helix, the in vivo free energy has an oscillatory component of about half the helical period and oscillates asymmetrically with an amplitude significantly smaller than predicted by current theories. Here, we show that the in vivo behavior is accurately accounted for by the simultaneous presence of two distinct conformations of looped DNA. Our analysis reveals that these two conformations have different optimal free energies and phases and that they behave distinctly in the presence of key architectural proteins.
1608.08995
Justin Yeakel
Justin D. Yeakel, Christopher P. Kempes, Sidney Redner
The dynamics of starvation and recovery
13 pages, 5 figures, 1 Supplement, 2 Supplementary figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The eco-evolutionary dynamics of species are fundamentally linked to the energetic constraints of its constituent individuals. Of particular importance is the interplay between reproduction and the dynamics of starvation and recovery. To elucidate this interplay, we introduce a nutritional state-structured model that incorporates two classes of consumer: nutritionally replete, reproducing consumers, and undernourished, non-reproducing consumers. We obtain strong constraints on starvation and recovery rates by deriving allometric scaling relationships and find that population dynamics are typically driven to a steady state. Moreover, these rates fall within a 'refuge' in parameter space, where the probability of population extinction is minimized. We also show that our model provides a natural framework to predict maximum mammalian body size by determining the relative stability of an otherwise homogeneous population to a competing population with altered percent body fat. This framework provides a principled mechanism for a selective driver of Cope's rule.
[ { "created": "Wed, 31 Aug 2016 18:53:01 GMT", "version": "v1" }, { "created": "Fri, 2 Sep 2016 22:32:47 GMT", "version": "v2" }, { "created": "Fri, 10 Mar 2017 02:11:28 GMT", "version": "v3" }, { "created": "Mon, 13 Mar 2017 16:17:45 GMT", "version": "v4" }, { "cr...
2017-11-13
[ [ "Yeakel", "Justin D.", "" ], [ "Kempes", "Christopher P.", "" ], [ "Redner", "Sidney", "" ] ]
The eco-evolutionary dynamics of species are fundamentally linked to the energetic constraints of their constituent individuals. Of particular importance is the interplay between reproduction and the dynamics of starvation and recovery. To elucidate this interplay, we introduce a nutritional state-structured model that incorporates two classes of consumer: nutritionally replete, reproducing consumers, and undernourished, non-reproducing consumers. We obtain strong constraints on starvation and recovery rates by deriving allometric scaling relationships and find that population dynamics are typically driven to a steady state. Moreover, these rates fall within a 'refuge' in parameter space, where the probability of population extinction is minimized. We also show that our model provides a natural framework to predict maximum mammalian body size by determining the relative stability of an otherwise homogeneous population to a competing population with altered percent body fat. This framework provides a principled mechanism for a selective driver of Cope's rule.
2002.01475
Alexey Shipunov
Alexey Shipunov
Ripeline and Rmanual speed up biological research and reporting
3 figures
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/publicdomain/zero/1.0/
The emergence of R, a freely available data analysis environment, brought to the researcher in any science field a set of well-concerted instruments of immense power and low cost. In botany and zoology, these instruments could be used, for example, to speed up work in two distant but related fields: analysis of DNA markers and preparation of natural history manuals. Both of these tasks require a significant amount of monotonous work, which could be automated with software. I developed "Ripeline" and "Rmanual," two highly customizable R-based applications, designed with a goal of simplicity, reproducibility, and effectiveness. Ripeline is a pipeline that allows for a continuously updated analysis of multiple DNA markers. Rmanual is a "living book" which allows the creation and continuous update of manuals and checklists. Compared with more traditional ways of DNA marker analysis and manual preparation, Ripeline and Rmanual allow for a significant reduction of the time usually spent on repetitive tasks. They also provide tools which can be used in a broad spectrum of further applications.
[ { "created": "Tue, 4 Feb 2020 11:47:37 GMT", "version": "v1" } ]
2020-02-06
[ [ "Shipunov", "Alexey", "" ] ]
The emergence of R, a freely available data analysis environment, brought to the researcher in any science field a set of well-concerted instruments of immense power and low cost. In botany and zoology, these instruments could be used, for example, to speed up work in two distant but related fields: analysis of DNA markers and preparation of natural history manuals. Both of these tasks require a significant amount of monotonous work, which could be automated with software. I developed "Ripeline" and "Rmanual," two highly customizable R-based applications, designed with a goal of simplicity, reproducibility, and effectiveness. Ripeline is a pipeline that allows for a continuously updated analysis of multiple DNA markers. Rmanual is a "living book" which allows the creation and continuous update of manuals and checklists. Compared with more traditional ways of DNA marker analysis and manual preparation, Ripeline and Rmanual allow for a significant reduction of the time usually spent on repetitive tasks. They also provide tools which can be used in a broad spectrum of further applications.
1709.06563
William Holmes
Jennifer S. Trueblood, William R. Holmes, Adam C. Seegmiller, Jonathan Douds, Margaret Compton, Megan Woodruff, Wenrui Huang, Charles Stratton, Quentin Eichbaum
The Impact of Speed and Bias on the Cognitive Processes of Experts and Novices in Medical Image Decision-making
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training individuals to make accurate decisions from medical images is a critical component of education in diagnostic pathology. We describe a joint experimental and computational modeling approach to examine the similarities and differences in the cognitive processes of novice participants and experienced participants (pathology residents and pathology faculty) in cancer cell image identification. For this study we collected a bank of hundreds of digital images that were identified by cell type and classified by difficulty by a panel of expert hematopathologists. The key manipulations in our study included examining the speed-accuracy tradeoff as well as the impact of prior expectations on decisions. In addition, our study examined individual differences in decision-making by comparing task performance to domain-general visual ability (as measured using the Novel Object Memory Test (NOMT); Richler et al., 2017). Using Signal Detection Theory (SDT) and the Diffusion Decision Model (DDM), we found many similarities between experts and novices in our task. While experts tended to have better discriminability, the two groups responded similarly to time pressure (i.e., reduced caution under speed instructions in the DDM) and to the introduction of a probabilistic cue (i.e., increased response bias in the DDM). These results have important implications for training in this area as well as using novice participants in research on medical image perception and decision-making.
[ { "created": "Tue, 19 Sep 2017 17:34:35 GMT", "version": "v1" }, { "created": "Fri, 18 May 2018 15:11:43 GMT", "version": "v2" } ]
2018-05-21
[ [ "Trueblood", "Jennifer S.", "" ], [ "Holmes", "William R.", "" ], [ "Seegmiller", "Adam C.", "" ], [ "Douds", "Jonathan", "" ], [ "Compton", "Margaret", "" ], [ "Woodruff", "Megan", "" ], [ "Huang", "Wenrui", ...
Training individuals to make accurate decisions from medical images is a critical component of education in diagnostic pathology. We describe a joint experimental and computational modeling approach to examine the similarities and differences in the cognitive processes of novice participants and experienced participants (pathology residents and pathology faculty) in cancer cell image identification. For this study we collected a bank of hundreds of digital images that were identified by cell type and classified by difficulty by a panel of expert hematopathologists. The key manipulations in our study included examining the speed-accuracy tradeoff as well as the impact of prior expectations on decisions. In addition, our study examined individual differences in decision-making by comparing task performance to domain-general visual ability (as measured using the Novel Object Memory Test (NOMT); Richler et al., 2017). Using Signal Detection Theory (SDT) and the Diffusion Decision Model (DDM), we found many similarities between experts and novices in our task. While experts tended to have better discriminability, the two groups responded similarly to time pressure (i.e., reduced caution under speed instructions in the DDM) and to the introduction of a probabilistic cue (i.e., increased response bias in the DDM). These results have important implications for training in this area as well as using novice participants in research on medical image perception and decision-making.
2305.18088
Imra Aqeel
Imra Aqeel, and Abdul Majid
Drug Repurposing Targeting COVID-19 3CL Protease using Molecular Docking and Machine Learning Regression Approach
26 Pages
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The COVID-19 pandemic has initiated a global health emergency, with an exigent need for effective cures. Increasingly, drug repurposing is emerging as a promising solution, as it saves time, cost, and labor. However, the number of drug candidates that have been identified for repurposing in the treatment of COVID-19 is still insufficient, so more effective and thorough drug exploration strategies are required. In this study, we combined molecular docking with machine learning regression approaches to find prospective therapeutic candidates for COVID-19 treatment. We screened 5903 approved drugs for inhibitory potential against the main protease 3CL of SARS-CoV-2, which is responsible for viral replication. Molecular docking is used to calculate the binding affinities of these drugs to the main protease 3CL. We employed several machine learning regression approaches for QSAR modeling to identify potential drugs with high binding affinities. Our results demonstrated that the Decision Tree Regression (DTR) model, with the best R2 and RMSE scores, is the most suitable model for exploring the potential drugs. We shortlisted six favorable drugs. These drugs have novel repurposing potential, except for one antiviral ZINC203757351 compound that has already been identified in other studies. We further examined the physicochemical and pharmacokinetic properties of these most potent drugs and their best binding interactions with the specific target protease 3CLpro. Our findings contribute to the larger goal of finding effective cures for COVID-19, which is an acute global health challenge. The outcomes of our study provide valuable insights into potential therapeutic candidates for COVID-19 treatment.
[ { "created": "Thu, 25 May 2023 05:34:39 GMT", "version": "v1" }, { "created": "Tue, 6 Jun 2023 22:11:29 GMT", "version": "v2" }, { "created": "Sat, 24 Jun 2023 11:55:09 GMT", "version": "v3" }, { "created": "Thu, 20 Jul 2023 06:29:28 GMT", "version": "v4" }, { "cr...
2024-06-25
[ [ "Aqeel", "Imra", "" ], [ "Majid", "Abdul", "" ] ]
The COVID-19 pandemic has initiated a global health emergency, with an exigent need for effective cures. Increasingly, drug repurposing is emerging as a promising solution, as it saves time, cost, and labor. However, the number of drug candidates that have been identified for repurposing in the treatment of COVID-19 is still insufficient, so more effective and thorough drug exploration strategies are required. In this study, we combined molecular docking with machine learning regression approaches to find prospective therapeutic candidates for COVID-19 treatment. We screened 5903 approved drugs for inhibitory potential against the main protease 3CL of SARS-CoV-2, which is responsible for viral replication. Molecular docking is used to calculate the binding affinities of these drugs to the main protease 3CL. We employed several machine learning regression approaches for QSAR modeling to identify potential drugs with high binding affinities. Our results demonstrated that the Decision Tree Regression (DTR) model, with the best R2 and RMSE scores, is the most suitable model for exploring the potential drugs. We shortlisted six favorable drugs. These drugs have novel repurposing potential, except for one antiviral ZINC203757351 compound that has already been identified in other studies. We further examined the physicochemical and pharmacokinetic properties of these most potent drugs and their best binding interactions with the specific target protease 3CLpro. Our findings contribute to the larger goal of finding effective cures for COVID-19, which is an acute global health challenge. The outcomes of our study provide valuable insights into potential therapeutic candidates for COVID-19 treatment.