Dataset schema, with each field's type and observed length or size range:

Field            Type            Observed range
id               stringlengths   9–13 characters
submitter        stringlengths   4–48 characters
authors          stringlengths   4–9.62k characters
title            stringlengths   4–343 characters
comments         stringlengths   2–480 characters
journal-ref      stringlengths   9–309 characters
doi              stringlengths   12–138 characters
report-no        stringclasses   277 distinct values
categories       stringlengths   8–87 characters
license          stringclasses   9 distinct values
orig_abstract    stringlengths   27–3.76k characters
versions         listlengths     1–15 entries
update_date      stringlengths   10–10 characters
authors_parsed   listlengths     1–147 entries
abstract         stringlengths   24–3.75k characters
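Each record below lists its field values in this schema order, one value per line, with null marking an empty field. As a minimal sketch of how such records could be inspected programmatically, the snippet below assumes the rows have been exported to a JSON Lines file; the filename arxiv_qbio_sample.jsonl is hypothetical, while the field names and the shapes of the versions and authors_parsed fields follow the records shown here.

```python
import json

# Minimal sketch: read a hypothetical JSON Lines export of the records below
# (one JSON object per line, with the fields listed in the schema table above).
with open("arxiv_qbio_sample.jsonl", "r", encoding="utf-8") as fh:
    records = [json.loads(line) for line in fh if line.strip()]

for rec in records[:3]:
    # 'authors_parsed' holds [last_name, first_name, suffix] triples.
    names = ", ".join(f"{first} {last}".strip() for last, first, _ in rec["authors_parsed"])
    # 'versions' holds {"created": ..., "version": ...} dicts; the first entry is v1.
    first_submitted = rec["versions"][0]["created"]
    print(f"{rec['id']}: {rec['title']}")
    print(f"  categories: {rec['categories']}")
    print(f"  authors:    {names}")
    print(f"  submitted:  {first_submitted}  (last update: {rec['update_date']})")
```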
2006.07209
Thomas Wieland
Thomas Wieland
Change points in the spread of COVID-19 question the effectiveness of nonpharmaceutical interventions in Germany
Updated with German COVID-19 case data (RKI) from June 28, 2020
Saf. Sci. 131 (2020) 104924
10.1016/j.ssci.2020.104924
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aims: Nonpharmaceutical interventions against the spread of SARS-CoV-2 in Germany included the cancellation of mass events (from March 8), closures of schools and child day care facilities (from March 16) as well as a "lockdown" (from March 23). This study attempts to assess the effectiveness of these interventions by revealing their impact on infections over time. Methods: Dates of infections were estimated from official German case data by incorporating the incubation period and an empirical reporting delay. Exponential growth models for infections and reproduction numbers were estimated and investigated with respect to change points in the time series. Results: A significant decline of daily and cumulative infections as well as reproduction numbers is found at March 8 (CI [7, 9]), March 10 (CI [9, 11]) and March 3 (CI [2, 4]), respectively. Further declines and stabilizations are found at the end of March. There is also a change point in new infections at April 19 (CI [18, 20]), but daily infections still show negative growth. From March 19 (CI [18, 20]), the reproduction numbers fluctuate at a level below one. Conclusions: The decline of infections in early March 2020 can be attributed to relatively small interventions and voluntary behavioural changes. Additional effects of later interventions cannot be detected clearly. Relaxations of measures did not induce a renewed increase in infections. Thus, the effectiveness of most German interventions remains questionable. Moreover, the assessment of interventions is impeded by the estimation of true infection dates and the influence of test volume.
[ { "created": "Fri, 12 Jun 2020 14:13:32 GMT", "version": "v1" }, { "created": "Mon, 29 Jun 2020 21:52:35 GMT", "version": "v2" }, { "created": "Mon, 6 Jul 2020 11:54:59 GMT", "version": "v3" } ]
2020-08-27
[ [ "Wieland", "Thomas", "" ] ]
Aims: Nonpharmaceutical interventions against the spread of SARS-CoV-2 in Germany included the cancellation of mass events (from March 8), closures of schools and child day care facilities (from March 16) as well as a "lockdown" (from March 23). This study attempts to assess the effectiveness of these interventions by revealing their impact on infections over time. Methods: Dates of infections were estimated from official German case data by incorporating the incubation period and an empirical reporting delay. Exponential growth models for infections and reproduction numbers were estimated and investigated with respect to change points in the time series. Results: A significant decline of daily and cumulative infections as well as reproduction numbers is found at March 8 (CI [7, 9]), March 10 (CI [9, 11]) and March 3 (CI [2, 4]), respectively. Further declines and stabilizations are found at the end of March. There is also a change point in new infections at April 19 (CI [18, 20]), but daily infections still show negative growth. From March 19 (CI [18, 20]), the reproduction numbers fluctuate at a level below one. Conclusions: The decline of infections in early March 2020 can be attributed to relatively small interventions and voluntary behavioural changes. Additional effects of later interventions cannot be detected clearly. Relaxations of measures did not induce a renewed increase in infections. Thus, the effectiveness of most German interventions remains questionable. Moreover, the assessment of interventions is impeded by the estimation of true infection dates and the influence of test volume.
1902.09625
Pedro Constantino
Pedro H. Constantino and Yiannis N. Kaznessis
Moment Closure Stability Analysis of Stochastic Reaction Networks with Oscillatory Dynamics
29 pages, 10 figures
null
null
null
q-bio.MN cond-mat.stat-mech
http://creativecommons.org/licenses/by-nc-sa/4.0/
Biochemical reactions with oscillatory behavior play an essential role in synthetic biology at the microscopic scale. Although a robust stability theory for deterministic chemical oscillators in the macroscopic limit exists, the dynamical stability of stochastic oscillators is an object of ongoing research. The Chemical Master Equation along with kinetic Monte Carlo simulations constitutes the most accurate approach to modeling microscopic systems. However, because of the challenges of solving the fully probabilistic model, most studies in stability analysis have focused on the description of externally disturbed oscillators. Here we apply the Maximum Entropy Principle as a closure criterion for moment equations of oscillatory networks and perform the stability analysis of the internally disturbed Brusselator network. In particular, we discuss the effects of kinetic and size parameters on the dynamics of this stochastic oscillatory system with intrinsic noise. Our numerical experiments reveal that changes in kinetic parameters lead to phenomenological and dynamical Hopf bifurcations, while reduced system sizes in the oscillatory region can reverse the stochastic Hopf dynamical bifurcations at the ensemble level. This is a unique feature of the stochastic dynamics of oscillatory systems, with an unknown parallel in the macroscopic limit.
[ { "created": "Mon, 25 Feb 2019 21:25:37 GMT", "version": "v1" } ]
2019-02-27
[ [ "Constantino", "Pedro H.", "" ], [ "Kaznessis", "Yiannis N.", "" ] ]
Biochemical reactions with oscillatory behavior play an essential role in synthetic biology at the microscopic scale. Although a robust stability theory for deterministic chemical oscillators in the macroscopic limit exists, the dynamical stability of stochastic oscillators is an object of ongoing research. The Chemical Master Equation along with kinetic Monte Carlo simulations constitutes the most accurate approach to modeling microscopic systems. However, because of the challenges of solving the fully probabilistic model, most studies in stability analysis have focused on the description of externally disturbed oscillators. Here we apply the Maximum Entropy Principle as a closure criterion for moment equations of oscillatory networks and perform the stability analysis of the internally disturbed Brusselator network. In particular, we discuss the effects of kinetic and size parameters on the dynamics of this stochastic oscillatory system with intrinsic noise. Our numerical experiments reveal that changes in kinetic parameters lead to phenomenological and dynamical Hopf bifurcations, while reduced system sizes in the oscillatory region can reverse the stochastic Hopf dynamical bifurcations at the ensemble level. This is a unique feature of the stochastic dynamics of oscillatory systems, with an unknown parallel in the macroscopic limit.
2405.16524
Polina Lakrisenko
Polina Lakrisenko, Dilan Pathirana, Daniel Weindl, Jan Hasenauer
Exploration of methods for computing sensitivities in ODE models at dynamic and steady states
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Estimating parameters of dynamic models from experimental data is a challenging, and often computationally-demanding task. It requires a large number of model simulations and objective function gradient computations, if gradient-based optimization is used. The gradient depends on derivatives of the state variables with respect to parameters, also called state sensitivities, which are expensive to compute. In many cases, steady-state computation is a part of model simulation, either due to steady-state data or an assumption that the system is at steady state at the initial time point. Various methods are available for steady-state and gradient computation. Yet, the most efficient pair of methods (one for steady states, one for gradients) for a particular model is often not clear. Moreover, depending on the model and the available data, some methods may not be applicable or sufficiently robust. In order to facilitate the selection of methods, we explore six method pairs for computing the steady state and sensitivities at steady state using six real-world problems. The method pairs involve numerical integration or Newton's method to compute the steady-state, and -- for both forward and adjoint sensitivity analysis -- numerical integration or a tailored method to compute the sensitivities at steady-state. Our evaluation shows that the two method pairs that combine numerical integration for the steady-state with a tailored method for the sensitivities at steady-state were the most robust, and amongst the most computationally-efficient. We also observed that while Newton's method for steady-state computation yields a substantial speedup compared to numerical integration, it may lead to a large number of simulation failures. Overall, our study provides a concise overview across current methods for computing sensitivities at steady state, guiding modelers to choose the right methods.
[ { "created": "Sun, 26 May 2024 11:21:05 GMT", "version": "v1" } ]
2024-05-28
[ [ "Lakrisenko", "Polina", "" ], [ "Pathirana", "Dilan", "" ], [ "Weindl", "Daniel", "" ], [ "Hasenauer", "Jan", "" ] ]
Estimating parameters of dynamic models from experimental data is a challenging, and often computationally-demanding task. It requires a large number of model simulations and objective function gradient computations, if gradient-based optimization is used. The gradient depends on derivatives of the state variables with respect to parameters, also called state sensitivities, which are expensive to compute. In many cases, steady-state computation is a part of model simulation, either due to steady-state data or an assumption that the system is at steady state at the initial time point. Various methods are available for steady-state and gradient computation. Yet, the most efficient pair of methods (one for steady states, one for gradients) for a particular model is often not clear. Moreover, depending on the model and the available data, some methods may not be applicable or sufficiently robust. In order to facilitate the selection of methods, we explore six method pairs for computing the steady state and sensitivities at steady state using six real-world problems. The method pairs involve numerical integration or Newton's method to compute the steady-state, and -- for both forward and adjoint sensitivity analysis -- numerical integration or a tailored method to compute the sensitivities at steady-state. Our evaluation shows that the two method pairs that combine numerical integration for the steady-state with a tailored method for the sensitivities at steady-state were the most robust, and amongst the most computationally-efficient. We also observed that while Newton's method for steady-state computation yields a substantial speedup compared to numerical integration, it may lead to a large number of simulation failures. Overall, our study provides a concise overview across current methods for computing sensitivities at steady state, guiding modelers to choose the right methods.
2005.08443
Cameron Mura
Menuka Jaiswal, Saad Saleem, Yonghyeon Kweon, Eli J Draizen, Stella Veretnik, Cameron Mura, Philip E. Bourne
Deep Learning of Protein Structural Classes: Any Evidence for an 'Urfold'?
6 pages, 3 figures, 1 table; IEEE SIEDS conference submission
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent computational advances in the accurate prediction of protein three-dimensional (3D) structures from amino acid sequences now present a unique opportunity to decipher the interrelationships between proteins. This task entails--but is not equivalent to--a problem of 3D structure comparison and classification. Historically, protein domain classification has been a largely manual and subjective activity, relying upon various heuristics. Databases such as CATH represent significant steps towards a more systematic (and automatable) approach, yet there still remains much room for the development of more scalable and quantitative classification methods, grounded in machine learning. We suspect that re-examining these relationships via a Deep Learning (DL) approach may entail a large-scale restructuring of classification schemes, improved with respect to the interpretability of distant relationships between proteins. Here, we describe our training of DL models on protein domain structures (and their associated physicochemical properties) in order to evaluate classification properties at CATH's "homologous superfamily" (SF) level. To achieve this, we have devised and applied an extension of image-classification methods and image segmentation techniques, utilizing a convolutional autoencoder model architecture. Our DL architecture allows models to learn structural features that, in a sense, 'define' different homologous SFs. We evaluate and quantify pairwise 'distances' between SFs by building one model per SF and comparing the loss functions of the models. Hierarchical clustering on these distance matrices provides a new view of protein interrelationships--a view that extends beyond simple structural/geometric similarity, and towards the realm of structure/function properties.
[ { "created": "Mon, 18 May 2020 03:55:01 GMT", "version": "v1" } ]
2020-05-19
[ [ "Jaiswal", "Menuka", "" ], [ "Saleem", "Saad", "" ], [ "Kweon", "Yonghyeon", "" ], [ "Draizen", "Eli J", "" ], [ "Veretnik", "Stella", "" ], [ "Mura", "Cameron", "" ], [ "Bourne", "Philip E.", "" ] ]
Recent computational advances in the accurate prediction of protein three-dimensional (3D) structures from amino acid sequences now present a unique opportunity to decipher the interrelationships between proteins. This task entails--but is not equivalent to--a problem of 3D structure comparison and classification. Historically, protein domain classification has been a largely manual and subjective activity, relying upon various heuristics. Databases such as CATH represent significant steps towards a more systematic (and automatable) approach, yet there still remains much room for the development of more scalable and quantitative classification methods, grounded in machine learning. We suspect that re-examining these relationships via a Deep Learning (DL) approach may entail a large-scale restructuring of classification schemes, improved with respect to the interpretability of distant relationships between proteins. Here, we describe our training of DL models on protein domain structures (and their associated physicochemical properties) in order to evaluate classification properties at CATH's "homologous superfamily" (SF) level. To achieve this, we have devised and applied an extension of image-classification methods and image segmentation techniques, utilizing a convolutional autoencoder model architecture. Our DL architecture allows models to learn structural features that, in a sense, 'define' different homologous SFs. We evaluate and quantify pairwise 'distances' between SFs by building one model per SF and comparing the loss functions of the models. Hierarchical clustering on these distance matrices provides a new view of protein interrelationships--a view that extends beyond simple structural/geometric similarity, and towards the realm of structure/function properties.
1504.06732
Paul Moore
P. J. Moore
A predictive coding account of OCD
arXiv admin note: substantial text overlap with arXiv:1503.00999
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a predictive coding account of obsessive-compulsive disorder (OCD). We extend the predictive coding model to include the concept of a 'formal narrative', or temporal sequence of cognitive states inferred from sense data. We propose that human cognition uses a hierarchy of narratives to predict changes in the natural and social environment. Each layer in the hierarchy represents a distinct view of the world, but it also contributes to a global unitary perspective. We suggest that the global perspective remains intact in OCD but there is a dysfunction at a sub-linguistic level of cognition. The consequent failure of recognition is experienced as the external world being 'not just right', and its automatic correction is felt as compulsion. A wide variety of symptoms and some neuropsychological findings are thus explained by a single dysfunction. We conclude that the model provides a deeper explanation for behavioural observations than current models, and that it has potential for further development for application to neuropsychological data.
[ { "created": "Sat, 25 Apr 2015 14:40:07 GMT", "version": "v1" }, { "created": "Mon, 8 Jun 2015 14:15:16 GMT", "version": "v2" } ]
2015-06-09
[ [ "Moore", "P. J.", "" ] ]
This paper presents a predictive coding account of obsessive-compulsive disorder (OCD). We extend the predictive coding model to include the concept of a 'formal narrative', or temporal sequence of cognitive states inferred from sense data. We propose that human cognition uses a hierarchy of narratives to predict changes in the natural and social environment. Each layer in the hierarchy represents a distinct view of the world, but it also contributes to a global unitary perspective. We suggest that the global perspective remains intact in OCD but there is a dysfunction at a sub-linguistic level of cognition. The consequent failure of recognition is experienced as the external world being 'not just right', and its automatic correction is felt as compulsion. A wide variety of symptoms and some neuropsychological findings are thus explained by a single dysfunction. We conclude that the model provides a deeper explanation for behavioural observations than current models, and that it has potential for further development for application to neuropsychological data.
2003.09861
Giulia Giordano
Giulia Giordano, Franco Blanchini, Raffaele Bruno, Patrizio Colaneri, Alessandro Di Filippo, Angela Di Matteo, Marta Colaneri, and the COVID19 IRCCS San Matteo Pavia Task Force
A SIDARTHE Model of COVID-19 Epidemic in Italy
null
Nature Medicine, 26, pages 855-860 (2020)
10.1038/s41591-020-0883-7
null
q-bio.PE cs.SY eess.SY math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In late December 2019, a novel strain of coronavirus (SARS-CoV-2) causing a severe, potentially fatal respiratory syndrome (COVID-19) was identified in Wuhan, Hubei Province, China and is causing outbreaks in multiple countries worldwide, soon becoming a pandemic. Italy has now become the hardest-hit country outside Asia: on March 16, 2020, the Italian Civil Protection documented a total of 27980 confirmed cases and 2158 deaths of people tested positive for SARS-CoV-2. In the context of an emerging infectious disease outbreak, it is of paramount importance to predict the trend of the epidemic in order to plan an effective control strategy and to determine its impact. This paper proposes a new epidemic model that discriminates between infected individuals depending on whether they have been diagnosed and on the severity of their symptoms. The distinction between diagnosed and non-diagnosed individuals is important because non-diagnosed individuals are more likely to spread the infection than diagnosed ones, since the latter are typically isolated, and can explain misperceptions of the case fatality rate and of the seriousness of the epidemic phenomenon. Being able to predict the number of patients who will develop life-threatening symptoms is important since the disease frequently requires hospitalisation (and even Intensive Care Unit admission) and challenges the healthcare system capacity. We show how the basic reproduction number can be redefined in the new framework, thus capturing the potential for epidemic containment. Simulation results are compared with real data on the COVID-19 epidemic in Italy, to show the validity of the model and compare different possible predicted scenarios depending on the adopted countermeasures.
[ { "created": "Sun, 22 Mar 2020 11:17:18 GMT", "version": "v1" } ]
2021-08-23
[ [ "Giordano", "Giulia", "" ], [ "Blanchini", "Franco", "" ], [ "Bruno", "Raffaele", "" ], [ "Colaneri", "Patrizio", "" ], [ "Di Filippo", "Alessandro", "" ], [ "Di Matteo", "Angela", "" ], [ "Colaneri", "Marta", "" ], [ "Force", "the COVID19 IRCCS San Matteo Pavia Task", "" ] ]
In late December 2019, a novel strain of coronavirus (SARS-CoV-2) causing a severe, potentially fatal respiratory syndrome (COVID-19) was identified in Wuhan, Hubei Province, China and is causing outbreaks in multiple countries worldwide, soon becoming a pandemic. Italy has now become the hardest-hit country outside Asia: on March 16, 2020, the Italian Civil Protection documented a total of 27980 confirmed cases and 2158 deaths of people tested positive for SARS-CoV-2. In the context of an emerging infectious disease outbreak, it is of paramount importance to predict the trend of the epidemic in order to plan an effective control strategy and to determine its impact. This paper proposes a new epidemic model that discriminates between infected individuals depending on whether they have been diagnosed and on the severity of their symptoms. The distinction between diagnosed and non-diagnosed individuals is important because non-diagnosed individuals are more likely to spread the infection than diagnosed ones, since the latter are typically isolated, and can explain misperceptions of the case fatality rate and of the seriousness of the epidemic phenomenon. Being able to predict the number of patients who will develop life-threatening symptoms is important since the disease frequently requires hospitalisation (and even Intensive Care Unit admission) and challenges the healthcare system capacity. We show how the basic reproduction number can be redefined in the new framework, thus capturing the potential for epidemic containment. Simulation results are compared with real data on the COVID-19 epidemic in Italy, to show the validity of the model and compare different possible predicted scenarios depending on the adopted countermeasures.
2406.03197
Filippo Agnesi
Filippo Agnesi, Lucia Carlucci, Gia Burjanadze, Fabio Bernini, Khatia Gabisonia, John W Osborn, Silvestro Micera and Fabio A. Recchia
Complex hemodynamic responses to trans-vascular electrical stimulation of the renal nerve in anesthetized pigs
5 pages, 2 tables, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective of this study was to characterize hemodynamic changes during trans-vascular stimulation of the renal nerve and their dependence on stimulation parameters. We employed a stimulation catheter inserted in the right renal artery under fluoroscopic guidance, in pigs. Systolic, diastolic and pulse blood pressure and heart rate were recorded during stimulations delivered at different intravascular sites along the renal artery or while varying stimulation parameters (amplitude, frequency, and pulse width). Blood pressure changes during stimulation displayed a pattern more complex than previously described in the literature, with a series of negative and positive peaks over the first two minutes, followed by a steady-state elevation during the remainder of the stimulation. Pulse pressure and heart rate only showed transient responses, then they returned to baseline values despite constant stimulation. The amplitude of the evoked hemodynamic response was roughly linearly correlated with stimulation amplitude, frequency, and pulse width.
[ { "created": "Wed, 29 May 2024 14:36:47 GMT", "version": "v1" } ]
2024-06-06
[ [ "Agnesi", "Filippo", "" ], [ "Carlucci", "Lucia", "" ], [ "Burjanadze", "Gia", "" ], [ "Bernini", "Fabio", "" ], [ "Gabisonia", "Khatia", "" ], [ "Osborn", "John W", "" ], [ "Micera", "Silvestro", "" ], [ "Recchia", "Fabio A.", "" ] ]
The objective of this study was to characterize hemodynamic changes during trans-vascular stimulation of the renal nerve and their dependence on stimulation parameters. We employed a stimulation catheter inserted in the right renal artery under fluoroscopic guidance, in pigs. Systolic, diastolic and pulse blood pressure and heart rate were recorded during stimulations delivered at different intravascular sites along the renal artery or while varying stimulation parameters (amplitude, frequency, and pulse width). Blood pressure changes during stimulation displayed a pattern more complex than previously described in the literature, with a series of negative and positive peaks over the first two minutes, followed by a steady-state elevation during the remainder of the stimulation. Pulse pressure and heart rate only showed transient responses, then they returned to baseline values despite constant stimulation. The amplitude of the evoked hemodynamic response was roughly linearly correlated with stimulation amplitude, frequency, and pulse width.
1406.5197
Danielle Bassett
Shi Gu, Fabio Pasqualetti, Matthew Cieslak, Scott T. Grafton, Danielle S. Bassett
Controllability of Brain Networks
14 pages, 4 figures, supplementary materials
null
10.1038/ncomms9414
null
q-bio.NC cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cognitive function is driven by dynamic interactions between large-scale neural circuits or networks, enabling behavior. Fundamental principles constraining these dynamic network processes have remained elusive. Here we use network control theory to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure. Our results suggest that densely connected areas, particularly in the default mode system, facilitate the movement of the brain to many easily-reachable states. Weakly connected areas, particularly in cognitive control systems, facilitate the movement of the brain to difficult-to-reach states. Areas located on the boundary between network communities, particularly in attentional control systems, facilitate the integration or segregation of diverse cognitive systems. Our results suggest that structural network differences between the cognitive circuits dictate their distinct roles in controlling dynamic trajectories of brain network function.
[ { "created": "Thu, 19 Jun 2014 20:05:48 GMT", "version": "v1" } ]
2015-10-28
[ [ "Gu", "Shi", "" ], [ "Pasqualetti", "Fabio", "" ], [ "Cieslak", "Matthew", "" ], [ "Grafton", "Scott T.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Cognitive function is driven by dynamic interactions between large-scale neural circuits or networks, enabling behavior. Fundamental principles constraining these dynamic network processes have remained elusive. Here we use network control theory to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure. Our results suggest that densely connected areas, particularly in the default mode system, facilitate the movement of the brain to many easily-reachable states. Weakly connected areas, particularly in cognitive control systems, facilitate the movement of the brain to difficult-to-reach states. Areas located on the boundary between network communities, particularly in attentional control systems, facilitate the integration or segregation of diverse cognitive systems. Our results suggest that structural network differences between the cognitive circuits dictate their distinct roles in controlling dynamic trajectories of brain network function.
1010.4127
Pierre Sens
Pierre Sens and Matthew S. Turner
Microphase separation in nonequilibrium biomembranes
null
null
10.1103/PhysRevLett.106.238101
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microphase separation of membrane components is thought to play an important role in many physiological processes, from cell signaling to endocytosis and cellular trafficking. Here, we study how variations in the membrane composition can be driven by fluctuating forces. We show that the membrane steady state is not only controlled by the strength of the forces and how they couple to the membrane, but also by their dynamics: In a simple class of models this is captured by a single correlation time. We conclude that the coupling of membrane composition to normal mechanical forces, such as might be exerted by polymerizing cytoskeleton filaments, could play an important role in controlling the steady state of a cell membrane that exhibits transient microphase separation on lengthscales in the 10-100 nm regime.
[ { "created": "Wed, 20 Oct 2010 08:40:50 GMT", "version": "v1" } ]
2015-05-20
[ [ "Sens", "Pierre", "" ], [ "Turner", "Matthew S.", "" ] ]
Microphase separation of membrane components is thought to play an important role in many physiological processes, from cell signaling to endocytosis and cellular trafficking. Here, we study how variations in the membrane composition can be driven by fluctuating forces. We show that the membrane steady state is not only controlled by the strength of the forces and how they couple to the membrane, but also by their dynamics: In a simple class of models this is captured by a single correlation time. We conclude that the coupling of membrane composition to normal mechanical forces, such as might be exerted by polymerizing cytoskeleton filaments, could play an important role in controlling the steady state of a cell membrane that exhibits transient microphase separation on lengthscales in the 10-100 nm regime.
0708.0186
Eduardo Candelario-Jalil
E. Candelario-Jalil, H. Slawik, I. Ridelis, A. Waschbisch, R.S. Akundi, M. Hull, B.L. Fiebich
Regional distribution of the prostaglandin E2 receptor EP1 in the rat brain: accumulation in Purkinje cells of the cerebellum
null
Journal of Molecular Neuroscience 27(3): 303-310 (2005)
null
null
q-bio.TO
null
Prostaglandin E2 (PGE2) is a major prostanoid produced by the activity of cyclooxygenases (COX) in response to various physiological and pathological stimuli. PGE2 exerts its effects by activating four specific E-type prostanoid receptors (EP1, EP2, EP3, and EP4). In the present study, we analyzed the expression of the PGE2 receptor EP1 (mRNA and protein) in different regions of the adult rat brain (hippocampus, hypothalamus, striatum, prefrontal cerebral cortex, parietal cortex, brain stem, and cerebellum) using reverse transcription-polymerase chain reaction, Western blotting, and immunohistochemical methods. On a regional basis, levels of EP1 mRNA were the highest in parietal cortex and cerebellum. At the protein level, we found very strong expression of EP1 in cerebellum, as revealed by Western blotting experiments. Furthermore, the present study provides for the first time evidence that the EP1 receptor is highly expressed in the cerebellum, where the Purkinje cells displayed very high immunolabeling of their perikarya and dendrites, as observed in the immunohistochemical analysis. Results from the present study indicate that the EP1 prostanoid receptor is expressed in specific neuronal populations, which possibly determine the region-specific response to PGE2.
[ { "created": "Wed, 1 Aug 2007 16:02:56 GMT", "version": "v1" } ]
2007-08-02
[ [ "Candelario-Jalil", "E.", "" ], [ "Slawik", "H.", "" ], [ "Ridelis", "I.", "" ], [ "Waschbisch", "A.", "" ], [ "Akundi", "R. S.", "" ], [ "Hull", "M.", "" ], [ "Fiebich", "B. L.", "" ] ]
Prostaglandin E2 (PGE2) is a major prostanoid produced by the activity of cyclooxygenases (COX) in response to various physiological and pathological stimuli. PGE2 exerts its effects by activating four specific E-type prostanoid receptors (EP1, EP2, EP3, and EP4). In the present study, we analyzed the expression of the PGE2 receptor EP1 (mRNA and protein) in different regions of the adult rat brain (hippocampus, hypothalamus, striatum, prefrontal cerebral cortex, parietal cortex, brain stem, and cerebellum) using reverse transcription-polymerase chain reaction, Western blotting, and immunohistochemical methods. On a regional basis, levels of EP1 mRNA were the highest in parietal cortex and cerebellum. At the protein level, we found very strong expression of EP1 in cerebellum, as revealed by Western blotting experiments. Furthermore, the present study provides for the first time evidence that the EP1 receptor is highly expressed in the cerebellum, where the Purkinje cells displayed very high immunolabeling of their perikarya and dendrites, as observed in the immunohistochemical analysis. Results from the present study indicate that the EP1 prostanoid receptor is expressed in specific neuronal populations, which possibly determine the region-specific response to PGE2.
1211.6348
Fernando Fabian Montani
Fernando Montani, Elena Phoka, Mariela Portesi, Simon R. Schultz
Statistical modelling of higher-order correlations in pools of neural activity
42 pages, 12 Figures; Submitted to Physica A
Physica A 392 (2013) 3066-3086
10.1016/j.physa.2013.03.012
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simultaneous recordings from multiple neural units allow us to investigate the activity of very large neural ensembles. To understand how large ensembles of neurons process sensory information, it is necessary to develop suitable statistical models to describe the response variability of the recorded spike trains. Using the information geometry framework, it is possible to estimate higher-order correlations by assigning one interaction parameter to each degree of correlation, leading to a $(2^N-1)$-dimensional model for a population with $N$ neurons. However, this model suffers greatly from a combinatorial explosion, and the number of parameters to be estimated from the available sample size is the main reason this approach becomes intractable. To quantify the extent of higher-than-pairwise spike correlations in pools of multiunit activity, we use an information-geometric approach within the framework of the extended central limit theorem, considering all possible contributions from high-order spike correlations. The identification of a deformation parameter allows us to provide a statistical characterisation of the amount of high-order correlations in the case of a very large neural ensemble, significantly reducing the number of parameters, avoiding the sampling problem, and inferring the underlying dynamical properties of the network within pools of multiunit neural activity.
[ { "created": "Tue, 27 Nov 2012 16:26:15 GMT", "version": "v1" } ]
2013-05-30
[ [ "Montani", "Fernando", "" ], [ "Phoka", "Elena", "" ], [ "Portesi", "Mariela", "" ], [ "Schultz", "Simon R.", "" ] ]
Simultaneous recordings from multiple neural units allow us to investigate the activity of very large neural ensembles. To understand how large ensembles of neurons process sensory information, it is necessary to develop suitable statistical models to describe the response variability of the recorded spike trains. Using the information geometry framework, it is possible to estimate higher-order correlations by assigning one interaction parameter to each degree of correlation, leading to a $(2^N-1)$-dimensional model for a population with $N$ neurons. However, this model suffers greatly from a combinatorial explosion, and the number of parameters to be estimated from the available sample size is the main reason this approach becomes intractable. To quantify the extent of higher-than-pairwise spike correlations in pools of multiunit activity, we use an information-geometric approach within the framework of the extended central limit theorem, considering all possible contributions from high-order spike correlations. The identification of a deformation parameter allows us to provide a statistical characterisation of the amount of high-order correlations in the case of a very large neural ensemble, significantly reducing the number of parameters, avoiding the sampling problem, and inferring the underlying dynamical properties of the network within pools of multiunit neural activity.
1503.06726
Simon Tanaka Mr.
Simon Tanaka, David Sichau, Dagmar Iber
LBIBCell: A Cell-Based Simulation Environment for Morphogenetic Problems
15 pages, 23 pages with supplementary, 6 figures; Bioinformatics 2015
null
10.1093/bioinformatics/btv147
null
q-bio.QM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The simulation of morphogenetic problems requires the simultaneous and coupled simulation of signalling and tissue dynamics. A cellular resolution of the tissue domain is important to adequately describe the impact of cell-based events, such as cell division, cell-cell interactions, and spatially restricted signalling events. A tightly coupled cell-based mechano-regulatory simulation tool is therefore required. We developed an open-source software framework for morphogenetic problems. The environment offers core functionalities for the tissue and signalling models. In addition, the software offers great flexibility to add custom extensions and biologically motivated processes. Cells are represented as highly resolved, massless elastic polygons; the viscous properties of the tissue are modelled by a Newtonian fluid. The Immersed Boundary method is used to model the interaction between the viscous and elastic properties of the cells, thus extending on the IBCell model. The fluid and signalling processes are solved using the Lattice Boltzmann method. As application examples we simulate signalling-dependent tissue dynamics.
[ { "created": "Mon, 23 Mar 2015 16:59:14 GMT", "version": "v1" } ]
2015-03-29
[ [ "Tanaka", "Simon", "" ], [ "Sichau", "David", "" ], [ "Iber", "Dagmar", "" ] ]
The simulation of morphogenetic problems requires the simultaneous and coupled simulation of signalling and tissue dynamics. A cellular resolution of the tissue domain is important to adequately describe the impact of cell-based events, such as cell division, cell-cell interactions, and spatially restricted signalling events. A tightly coupled cell-based mechano-regulatory simulation tool is therefore required. We developed an open-source software framework for morphogenetic problems. The environment offers core functionalities for the tissue and signalling models. In addition, the software offers great flexibility to add custom extensions and biologically motivated processes. Cells are represented as highly resolved, massless elastic polygons; the viscous properties of the tissue are modelled by a Newtonian fluid. The Immersed Boundary method is used to model the interaction between the viscous and elastic properties of the cells, thus extending on the IBCell model. The fluid and signalling processes are solved using the Lattice Boltzmann method. As application examples we simulate signalling-dependent tissue dynamics.
q-bio/0401014
Mark Alber
Mark S. Alber, Yi Jiang and Maria A. Kiskowski
Lattice gas cellular automata model for rippling and aggregation in myxobacteria
null
null
10.1016/j.physd.2003.11.012
null
q-bio.QM q-bio.OT
null
A lattice-gas cellular automaton (LGCA) model is used to simulate rippling and aggregation in myxobacteria. An efficient way of representing cells of different cell size, shape and orientation is presented that may be easily extended to model later stages of fruiting body formation. This LGCA model is designed to investigate whether a refractory period, a minimum response time, a maximum oscillation period and non-linear dependence of reversals of cells on C-factor are necessary assumptions for rippling. It is shown that a refractory period of 2-3 minutes, a minimum response time of up to 1 minute and no maximum oscillation period best reproduce rippling in the experiments on Myxococcus xanthus. Non-linear dependence of reversals on C-factor is critical at high cell density. Quantitative simulations demonstrate that the increase in wavelength of ripples when a culture is diluted with non-signaling cells can be explained entirely by the decreased density of C-signaling cells. This result further supports the hypothesis that levels of C-signaling quantitatively depend on and modulate cell density. Analysis of the interpenetrating high density waves shows the presence of a phase shift analogous to the phase shift of interpenetrating solitons. Finally, a model for swarming, aggregation and early fruiting body formation is presented.
[ { "created": "Fri, 9 Jan 2004 22:21:11 GMT", "version": "v1" } ]
2009-11-10
[ [ "Alber", "Mark S.", "" ], [ "Jiang", "Yi", "" ], [ "Kiskowski", "Maria A.", "" ] ]
A lattice-gas cellular automaton (LGCA) model is used to simulate rippling and aggregation in myxobacteria. An efficient way of representing cells of different cell size, shape and orientation is presented that may be easily extended to model later stages of fruiting body formation. This LGCA model is designed to investigate whether a refractory period, a minimum response time, a maximum oscillation period and non-linear dependence of reversals of cells on C-factor are necessary assumptions for rippling. It is shown that a refractory period of 2-3 minutes, a minimum response time of up to 1 minute and no maximum oscillation period best reproduce rippling in the experiments on Myxococcus xanthus. Non-linear dependence of reversals on C-factor is critical at high cell density. Quantitative simulations demonstrate that the increase in wavelength of ripples when a culture is diluted with non-signaling cells can be explained entirely by the decreased density of C-signaling cells. This result further supports the hypothesis that levels of C-signaling quantitatively depend on and modulate cell density. Analysis of the interpenetrating high density waves shows the presence of a phase shift analogous to the phase shift of interpenetrating solitons. Finally, a model for swarming, aggregation and early fruiting body formation is presented.
1610.00121
Victor Solovyev
Victor Solovyev and Ramzan Umarov
Prediction of Prokaryotic and Eukaryotic Promoters Using Convolutional Deep Learning Neural Networks
12 pages, 4 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build predictive models for them. We trained the same CNN architecture on promoters of four very distant organisms: human, plant (Arabidopsis), and two bacteria (Escherichia coli and Mycoplasma pneumoniae). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives an excellent classification of promoter and non-promoter sequences (Sn=0.90, Sp=0.96, CC=0.84). The Bacillus subtilis promoter identification CNN model achieves Sn=0.91, Sp=0.95, and CC=0.86. For human and Arabidopsis promoters we employ CNNs to identify two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, the Sn/Sp/CC prediction accuracy reached 0.95/0.98/0.90 for TATA and 0.90/0.98/0.89 for non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models (implemented in the CNNProm program) demonstrate the ability of deep learning to capture complex promoter sequence characteristics and achieve significantly higher accuracy than previously developed promoter prediction programs. As the suggested approach does not require knowledge of any specific promoter features, it can easily be extended to identify promoters and other complex functional regions in the sequences of many other, and especially newly sequenced, genomes. The CNNProm program is available to run at the web server http://www.softberry.com.
[ { "created": "Sat, 1 Oct 2016 11:24:47 GMT", "version": "v1" } ]
2016-10-04
[ [ "Solovyev", "Victor", "" ], [ "Umarov", "Ramzan", "" ] ]
Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build predictive models for them. We trained the same CNN architecture on promoters of four very distant organisms: human, plant (Arabidopsis), and two bacteria (Escherichia coli and Mycoplasma pneumoniae). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives an excellent classification of promoter and non-promoter sequences (Sn=0.90, Sp=0.96, CC=0.84). The Bacillus subtilis promoter identification CNN model achieves Sn=0.91, Sp=0.95, and CC=0.86. For human and Arabidopsis promoters we employ CNNs to identify two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, the Sn/Sp/CC prediction accuracy reached 0.95/0.98/0.90 for TATA and 0.90/0.98/0.89 for non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models (implemented in the CNNProm program) demonstrate the ability of deep learning to capture complex promoter sequence characteristics and achieve significantly higher accuracy than previously developed promoter prediction programs. As the suggested approach does not require knowledge of any specific promoter features, it can easily be extended to identify promoters and other complex functional regions in the sequences of many other, and especially newly sequenced, genomes. The CNNProm program is available to run at the web server http://www.softberry.com.
1306.2167
Gergely J Szöllősi
Gergely J. Szöllősi and Wojciech Rosikiewicz and Bastien Boussau and Eric Tannier and Vincent Daubin
Efficient Exploration of the Space of Reconciled Gene Trees
Manuscript accepted pending revision in Systematic Biology
null
null
null
q-bio.PE q-bio.BM q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/3.0/
Gene trees record the combination of gene level events, such as duplication, transfer and loss, and species level events, such as speciation and extinction. Gene tree-species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree-species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of trees. We implement ALE in the context of a reconciliation model, which allows for the duplication, transfer and loss of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic topologies, branch lengths and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes, we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59% and 46% reductions in the mean numbers of duplications, transfers and losses.
[ { "created": "Mon, 10 Jun 2013 11:33:28 GMT", "version": "v1" } ]
2013-06-11
[ [ "Szöllősi", "Gergely J.", "" ], [ "Rosikiewicz", "Wojciech", "" ], [ "Boussau", "Bastien", "" ], [ "Tannier", "Eric", "" ], [ "Daubin", "Vincent", "" ] ]
Gene trees record the combination of gene level events, such as duplication, transfer and loss, and species level events, such as speciation and extinction. Gene tree-species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree-species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of trees. We implement ALE in the context of a reconciliation model, which allows for the duplication, transfer and loss of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic topologies, branch lengths and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes, we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59% and 46% reductions in the mean numbers of duplications, transfers and losses.
1204.3123
Mustafa Barasa
Barasa Mustafa, Mwangi Irungu Michael, Mutiso Muli Joshua, Kagasi Ambogo Esther, Ozwara Suba Hastings, Gicheru Muita Michael
Plasmodium knowlesi H strain pregnancy malaria immune responses in olive baboons (Papio anubis)
Five pages, six figures; This study was funded by the research capability strengthening WHO grant (Grant Number: A 50075) for malaria research in Africa under the Multilateral Initiative on Malaria/Special Programme for Research and Training in Tropical Diseases (WHO-MIM/TDR). The Institute of Primate Research (IPR) in Nairobi (Kenya) provided the baboons and laboratory facilities for the study
Int. J. Integ. Biol: Year 2010, Volume 10, Issue No. 1: 54 - 58
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximately 24 million pregnant women in Sub-Saharan Africa are at risk of suffering from pregnancy malaria complications. Mechanisms responsible for increased susceptibility to malaria in pregnant women are not fully understood. Baboons are susceptible to Plasmodium knowlesi and their reproductive physiology and host-pathogen interactions are similar to those in humans, making them attractive for development as a model for studying mechanisms underlying pregnancy malaria. This study exploited the susceptibility of baboons to Plasmodium knowlesi infection to characterize cytokine and peripheral blood mononuclear cell recall proliferation responses underlying the pathogenesis of pregnancy malaria in baboons infected with Plasmodium knowlesi. The pregnancies of three time-mated adult female baboons and their gestational levels were confirmed by ultrasonography. On the 150th day of gestation, the pregnant baboons together with four non-pregnant controls were infected with Plasmodium knowlesi H strain parasites. Collection of peripheral sera and mononuclear cells was then done on a weekly basis. Serum cytokine concentrations were measured by enzyme-linked immunosorbent assay (ELISA) using the respective enzyme-conjugated antibodies. Peripheral blood mononuclear cell recall proliferation assays were also done on a weekly basis. Results indicate that pregnancy malaria in this model is associated with suppression of interferon gamma and interleukin 6 (IL-6) responses. Tumour necrosis factor alpha responses were upregulated, while IL-4, IL-12 and recall proliferation responses were not different from controls. These data are to a great extent consistent with some findings from human studies, showing the feasibility of this model for studying mechanisms underlying pregnancy malaria.
[ { "created": "Sat, 14 Apr 2012 00:11:19 GMT", "version": "v1" }, { "created": "Tue, 17 Apr 2012 08:01:13 GMT", "version": "v2" } ]
2012-04-18
[ [ "Mustafa", "Barasa", "" ], [ "Michael", "Mwangi Irungu", "" ], [ "Joshua", "Mutiso Muli", "" ], [ "Esther", "Kagasi Ambogo", "" ], [ "Hastings", "Ozwara Suba", "" ], [ "Michael", "Gicheru Muita", "" ] ]
Approximately 24 million pregnant women in Sub-Saharan Africa are at risk of suffering from pregnancy malaria complications. Mechanisms responsible for increased susceptibility to malaria in pregnant women are not fully understood. Baboons are susceptible to Plasmodium knowlesi and their reproductive physiology and host-pathogen interactions are similar to those in humans, making them attractive for development as a model for studying mechanisms underlying pregnancy malaria. This study exploited the susceptibility of baboons to Plasmodium knowlesi infection to characterize cytokine and peripheral blood mononuclear cell recall proliferation responses underlying the pathogenesis of pregnancy malaria in baboons infected with Plasmodium knowlesi. The pregnancies of three time-mated adult female baboons and their gestational levels were confirmed by ultrasonography. On the 150th day of gestation, the pregnant baboons together with four non-pregnant controls were infected with Plasmodium knowlesi H strain parasites. Collection of peripheral sera and mononuclear cells was then done on a weekly basis. Serum cytokine concentrations were measured by enzyme-linked immunosorbent assay (ELISA) using the respective enzyme-conjugated antibodies. Peripheral blood mononuclear cell recall proliferation assays were also done on a weekly basis. Results indicate that pregnancy malaria in this model is associated with suppression of interferon gamma and interleukin 6 (IL-6) responses. Tumour necrosis factor alpha responses were upregulated, while IL-4, IL-12 and recall proliferation responses were not different from controls. These data are to a great extent consistent with some findings from human studies, showing the feasibility of this model for studying mechanisms underlying pregnancy malaria.
2006.02459
Slobodan Vucetic
Benjamin Seibold, Zivjena Vucetic, Slobodan Vucetic
Quantitative Relationship between Population Mobility and COVID-19 Growth Rate based on 14 Countries
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study develops a framework for quantification of the impact of changes in population mobility due to social distancing on the COVID-19 infection growth rate. Using the Susceptible-Infected-Recovered (SIR) epidemiological model we establish that under some mild assumptions the growth rate of COVID-19 deaths is a time-delayed approximation of the growth rate of COVID-19 infections. We then hypothesize that the growth rate of COVID-19 infections is a function of population mobility, which leads to a statistical model that predicts the growth rate of COVID-19 deaths as a delayed function of population mobility. The parameters of the statistical model directly reveal the growth rate of infections, the mobility-dependent transmission rate, the mobility-independent recovery rate, and the critical mobility, below which the COVID-19 growth rate becomes negative. We fitted the proposed statistical model on publicly available data from 14 countries where daily death counts exceeded 100 for more than 3 days as of May 6th, 2020. The publicly available Google Mobility Index (GMI) was used as a measure of population mobility at the country level. Our results show that the growth rate of COVID-19 deaths can be accurately estimated 20 days ahead as a quadratic function of the transit category of GMI (adjusted R-squared = 0.784). The estimated 95% confidence interval for the critical mobility is in the range between 36.1% and 47.6% of the pre-COVID-19 mobility. This result indicates that a significant reduction in population mobility is needed to reverse the growth of the COVID-19 epidemic. Moreover, the quantitative relationship established herein suggests that a readily available, population-level metric such as GMI can be a useful indicator of the course of the COVID-19 epidemic.
[ { "created": "Wed, 3 Jun 2020 18:10:38 GMT", "version": "v1" } ]
2020-06-05
[ [ "Seibold", "Benjamin", "" ], [ "Vucetic", "Zivjena", "" ], [ "Vucetic", "Slobodan", "" ] ]
This study develops a framework for quantification of the impact of changes in population mobility due to social distancing on the COVID-19 infection growth rate. Using the Susceptible-Infected-Recovered (SIR) epidemiological model we establish that under some mild assumptions the growth rate of COVID-19 deaths is a time-delayed approximation of the growth rate of COVID-19 infections. We then hypothesize that the growth rate of COVID-19 infections is a function of population mobility, which leads to a statistical model that predicts the growth rate of COVID-19 deaths as a delayed function of population mobility. The parameters of the statistical model directly reveal the growth rate of infections, the mobility-dependent transmission rate, the mobility-independent recovery rate, and the critical mobility, below which COVID-19 growth rate becomes negative. We fitted the proposed statistical model on publicly available data from 14 countries where daily death counts exceeded 100 for more than 3 days as of May 6th, 2020. The publicly available Google Mobility Index (GMI) was used as a measure of population mobility at the country level. Our results show that the growth rate of COVID-19 deaths can be accurately estimated 20 days ahead as a quadratic function of the transit category of GMI (adjusted R-squared = 0.784). The estimated 95% confidence interval for the critical mobility is in the range between 36.1% and 47.6% of the pre-COVID-19 mobility. This result indicates that a significant reduction in population mobility is needed to reverse the growth of COVID-19 epidemic. Moreover, the quantitative relationship established herein suggests that a readily available, population-level metric such as GMI can be a useful indicator of the course of COVID-19 epidemic.
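The central regression described in this abstract — the growth rate of deaths modelled as a delayed quadratic function of a mobility index, with the critical mobility read off as the root of the fitted quadratic — can be sketched compactly. The sketch below uses entirely synthetic placeholder arrays for deaths and the transit-mobility index (not the paper's data), fixes the 20-day delay mentioned in the abstract, and is only an illustration of the fitting step, not the authors' code.

```python
import numpy as np

# Hypothetical inputs: daily death counts and a transit-mobility index
# (percent of pre-COVID mobility), both over the same dates.
rng = np.random.default_rng(0)
days = np.arange(120)
mobility = 100 - 60 / (1 + np.exp(-(days - 30) / 5))        # synthetic mobility drop
true_growth = 5e-6 * (mobility - 42) * (mobility + 10)       # synthetic "true" relation
deaths = rng.poisson(50 * np.exp(np.cumsum(np.concatenate([[0], true_growth[:-1]]))))

# Growth rate of deaths, smoothed over a week to tame count noise.
log_d = np.log(np.convolve(deaths + 1, np.ones(7) / 7, mode="same"))
growth = np.gradient(log_d)

# Regress growth(t) on mobility(t - delay) with a quadratic polynomial.
delay = 20
m, g = mobility[:-delay], growth[delay:]
coeffs = np.polyfit(m, g, deg=2)                  # g ~ a*m^2 + b*m + c
pred = np.polyval(coeffs, m)
r2 = 1 - np.sum((g - pred) ** 2) / np.sum((g - g.mean()) ** 2)
print("R^2 =", round(r2, 3))

# Critical mobility: root of the fitted quadratic where the growth rate crosses zero.
roots = np.roots(coeffs)
print("candidate critical mobility:", roots[np.isreal(roots)].real)
```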
2109.12799
Cameron Zachreson
Cameron Zachreson, Freya M. Shearer, David J. Price, Michael J. Lydeamore, Jodie McVernon, James McCaw, and Nicholas Geard
COVID-19 in low-tolerance border quarantine systems: impact of the Delta variant of SARS-CoV-2
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In controlling transmission of COVID-19, the effectiveness of border quarantine strategies is a key concern for jurisdictions in which the local prevalence of disease and immunity is low. In settings with these characteristics, such as China, Australia, and New Zealand, rare outbreak events can lead to escalating epidemics and trigger the imposition of large-scale lockdown policies. Here, we examine to what degree vaccination status of incoming arrivals and the quarantine workforce can allow relaxation of quarantine requirements. To do so, we develop and apply a detailed model of COVID-19 disease progression and transmission taking into account nuanced timing factors. Key among these are disease incubation periods and the progression of infection detectability during incubation. Using the disease characteristics associated with the ancestral lineage of SARS-CoV-2 to benchmark the level of acceptable risk, we examine the performance of the border quarantine system for vaccinated arrivals. We examine disease transmission and vaccine efficacy parameters over a wide range, covering plausible values for the Delta variant currently circulating globally. Our results indicate a threshold in outbreak potential as a function of vaccine efficacy, with the time until an outbreak increasing by up to two orders of magnitude as vaccine efficacy against transmission increases from 70% to 90%. For parameters corresponding to the Delta variant, vaccination is able to maintain the capacity of quarantine systems to reduce case importation and outbreak risk, by counteracting the pathogen's increased infectiousness. To prevent outbreaks, heightened vaccination in border quarantine systems must be combined with mass vaccination. The ultimate success of these programs will depend sensitively on the efficacy of vaccines against viral transmission.
[ { "created": "Mon, 27 Sep 2021 05:09:32 GMT", "version": "v1" } ]
2021-09-28
[ [ "Zachreson", "Cameron", "" ], [ "Shearer", "Freya M.", "" ], [ "Price", "David J.", "" ], [ "Lydeamore", "Michael J.", "" ], [ "McVernon", "Jodie", "" ], [ "McCaw", "James", "" ], [ "Geard", "Nicholas", "" ] ]
In controlling transmission of COVID-19, the effectiveness of border quarantine strategies is a key concern for jurisdictions in which the local prevalence of disease and immunity is low. In settings with these characteristics, such as China, Australia, and New Zealand, rare outbreak events can lead to escalating epidemics and trigger the imposition of large-scale lockdown policies. Here, we examine to what degree vaccination status of incoming arrivals and the quarantine workforce can allow relaxation of quarantine requirements. To do so, we develop and apply a detailed model of COVID-19 disease progression and transmission taking into account nuanced timing factors. Key among these are disease incubation periods and the progression of infection detectability during incubation. Using the disease characteristics associated with the ancestral lineage of SARS-CoV-2 to benchmark the level of acceptable risk, we examine the performance of the border quarantine system for vaccinated arrivals. We examine disease transmission and vaccine efficacy parameters over a wide range, covering plausible values for the Delta variant currently circulating globally. Our results indicate a threshold in outbreak potential as a function of vaccine efficacy, with the time until an outbreak increasing by up to two orders of magnitude as vaccine efficacy against transmission increases from 70% to 90%. For parameters corresponding to the Delta variant, vaccination is able to maintain the capacity of quarantine systems to reduce case importation and outbreak risk, by counteracting the pathogen's increased infectiousness. To prevent outbreaks, heightened vaccination in border quarantine systems must be combined with mass vaccination. The ultimate success of these programs will depend sensitively on the efficacy of vaccines against viral transmission.
1302.1944
Anil Korkut
Anil Korkut, Wayne A Hendrickson
Stereochemistry of Polypeptide Conformation in Coarse Grained Analysis
12 pages, 3 figures, Postprint of book chapter submitted to the Biomolecular Forms and Functions, M. Bansal and N. Srinivasan, Eds. copyright (2013) [copyright World Scientific Publishing Company]
Biomolecular Forms and Functions, M. Bansal and N. Srinivasan, Eds. IISc Press - World Scientific Publishing, Singapore, pp. 136-147 (2013)
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The conformations available to polypeptides are determined by the interatomic forces acting on the peptide units, whereby backbone torsion angles are restricted as described by the Ramachandran plot. Although typical proteins are composed predominantly of {\alpha}-helices and {\beta}-sheets, they nevertheless adopt diverse tertiary structures, each folded as dictated by its unique amino-acid sequence. Despite such uniqueness, however, the functioning of many proteins involves changes between quite different conformations. The study of large-scale conformational changes, particularly in large systems, is facilitated by a coarse-grained representation such as that provided by virtually bonded C{\alpha} atoms. We have developed a virtual atom molecular mechanics (VAMM) force field to describe conformational dynamics in proteins and a VAMM-based algorithm for computing conformational transition pathways. Here we describe the stereochemical analysis of proteins in this coarse-grained representation, comparing the relevant plots in coarse-grained conformational space to the corresponding Ramachandran plots, having contoured each at levels determined statistically from residues in a large database. The distributions shown for an all-{\alpha} protein, two all-{\beta} proteins and one {\alpha}+{\beta} protein serve to relate the coarse-grained distributions to the familiar Ramachandran plot.
[ { "created": "Fri, 8 Feb 2013 04:59:36 GMT", "version": "v1" } ]
2013-02-11
[ [ "Korkut", "Anil", "" ], [ "Hendrickson", "Wayne A", "" ] ]
The conformations available to polypeptides are determined by the interatomic forces acting on the peptide units, whereby backbone torsion angles are restricted as described by the Ramachandran plot. Although typical proteins are composed predominantly of {\alpha}-helices and {\beta}-sheets, they nevertheless adopt diverse tertiary structures, each folded as dictated by its unique amino-acid sequence. Despite such uniqueness, however, the functioning of many proteins involves changes between quite different conformations. The study of large-scale conformational changes, particularly in large systems, is facilitated by a coarse-grained representation such as that provided by virtually bonded C{\alpha} atoms. We have developed a virtual atom molecular mechanics (VAMM) force field to describe conformational dynamics in proteins and a VAMM-based algorithm for computing conformational transition pathways. Here we describe the stereochemical analysis of proteins in this coarse-grained representation, comparing the relevant plots in coarse-grained conformational space to the corresponding Ramachandran plots, having contoured each at levels determined statistically from residues in a large database. The distributions shown for an all-{\alpha} protein, two all-{\beta} proteins and one {\alpha}+{\beta} protein serve to relate the coarse-grained distributions to the familiar Ramachandran plot.
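The coarse-grained analogue of a Ramachandran analysis described above rests on two quantities per residue: the virtual bond angle defined by three consecutive C-alpha atoms and the virtual torsion defined by four. A minimal sketch of that geometry is given below, using a parametrically generated helix-like C-alpha trace rather than real coordinates (a production analysis would read C-alpha positions from PDB files); it illustrates the geometry only, not the VAMM force field itself.

```python
import numpy as np

def virtual_bond_angle(a, b, c):
    """Angle (degrees) at C-alpha b formed by the virtual bonds a-b and b-c."""
    u, v = a - b, c - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def virtual_torsion(a, b, c, d):
    """Dihedral (degrees) about the virtual bond b-c, from four consecutive C-alpha atoms."""
    b1, b2, b3 = b - a, c - b, d - c
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Toy C-alpha trace: an idealized alpha-helix generated parametrically
# (radius ~2.3 A, rise ~1.5 A, ~100 degrees per residue).
t = np.arange(10)
calpha = np.stack([2.3 * np.cos(1.75 * t), 2.3 * np.sin(1.75 * t), 1.5 * t], axis=1)

# One (theta, gamma) pair per interior residue -- the coarse-grained
# counterpart of a (phi, psi) point on a Ramachandran plot.
for i in range(1, len(calpha) - 2):
    theta = virtual_bond_angle(calpha[i - 1], calpha[i], calpha[i + 1])
    gamma = virtual_torsion(calpha[i - 1], calpha[i], calpha[i + 1], calpha[i + 2])
    print(f"residue {i}: theta = {theta:6.1f} deg, gamma = {gamma:6.1f} deg")
```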
1810.11195
Tianming Wang
Hailong Dou, Haitao Yang, James L.D. Smith, Limin Feng, Tianming Wang, Jianping Ge
Prey selection of Amur tigers in relation to the spatiotemporal overlap with prey across the Sino-Russian border
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The endangered Amur tiger is confined primarily to a narrow area along the border with Russia in Northeast China. Little is known about the foraging strategies of this small subpopulation in Hunchun Nature Reserve on the Chinese side of the border; at this location, the prey base and land use patterns are distinctly different from those in the larger population of the Sikhote-Alin Mountains of Russia. Using dietary analysis of scats and camera-trapping data from Hunchun Nature Reserve, we assessed spatiotemporal overlap of tigers and their prey and identified prey selection patterns to enhance understanding of the ecological requirements of tigers in Northeast China. Results indicated that wild prey constituted 94.9% of the total biomass consumed by tigers; domestic livestock represented 5.1% of the diet. Two species, wild boar and sika deer, collectively represented 83% of the biomass consumed by tigers. Despite lower spatial overlap of tigers and wild boar compared to tigers and sika deer, tigers preferentially preyed on boar, likely facilitated by high temporal overlap in activity patterns. Tigers exhibited significant spatial overlap with sika deer, likely favoring a high level of tiger predation on this large-sized ungulate. However, tigers did not prefer roe deer (Capreolus pygargus) and showed a low spatial overlap with this species. Overall, our results suggest that tiger prey selection is determined by prey body size as well as by overlap in tiger and prey use of time or space. Also, we suggest that strategies designed to minimize livestock forays into forested lands may be important for decreasing livestock depredation by tigers. This study offers a framework to simultaneously integrate food habit analysis with the distribution of predators and prey through time and space to provide a comprehensive understanding of foraging strategies of large carnivores.
[ { "created": "Fri, 26 Oct 2018 05:56:36 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2019 09:33:09 GMT", "version": "v2" } ]
2019-03-29
[ [ "Dou", "Hailong", "" ], [ "Yang", "Haitao", "" ], [ "Smith", "James L. D.", "" ], [ "Feng", "Limin", "" ], [ "Wang", "Tianming", "" ], [ "Ge", "Jianping", "" ] ]
The endangered Amur tiger is confined primarily to a narrow area along the border with Russia in Northeast China. Little is known about the foraging strategies of this small subpopulation in Hunchun Nature Reserve on the Chinese side of the border; at this location, the prey base and land use patterns are distinctly different from those in the larger population of the Sikhote-Alin Mountains of Russia. Using dietary analysis of scats and camera-trapping data from Hunchun Nature Reserve, we assessed spatiotemporal overlap of tigers and their prey and identified prey selection patterns to enhance understanding of the ecological requirements of tigers in Northeast China. Results indicated that wild prey constituted 94.9% of the total biomass consumed by tigers; domestic livestock represented 5.1% of the diet. Two species, wild boar and sika deer, collectively represented 83% of the biomass consumed by tigers. Despite lower spatial overlap of tigers and wild boar compared to tigers and sika deer, tigers preferentially preyed on boar, likely facilitated by high temporal overlap in activity patterns. Tigers exhibited significant spatial overlap with sika deer, likely favoring a high level of tiger predation on this large-sized ungulate. However, tigers did not prefer roe deer (Capreolus pygargus) and showed a low spatial overlap with this species. Overall, our results suggest that tiger prey selection is determined by prey body size as well as by overlap in tiger and prey use of time or space. Also, we suggest that strategies designed to minimize livestock forays into forested lands may be important for decreasing livestock depredation by tigers. This study offers a framework to simultaneously integrate food habit analysis with the distribution of predators and prey through time and space to provide a comprehensive understanding of foraging strategies of large carnivores.
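The prey-selection and overlap calculations behind a study like this are conceptually simple. The sketch below uses Jacobs' selectivity index and a crude coefficient of temporal overlap between hourly activity histograms; both the choice of index and all numbers are illustrative assumptions, not the metrics or values reported in the paper.

```python
import numpy as np

def jacobs_index(used, available):
    """Jacobs' selectivity index D in [-1, 1]; positive values indicate preference.
    `used` and `available` are proportions for one prey species."""
    return (used - available) / (used + available - 2 * used * available)

# Hypothetical diet (proportion of biomass consumed) and availability
# (proportion of prey biomass on the landscape) -- placeholder numbers only.
diet = {"wild boar": 0.45, "sika deer": 0.38, "roe deer": 0.06, "livestock": 0.05}
avail = {"wild boar": 0.25, "sika deer": 0.40, "roe deer": 0.25, "livestock": 0.10}
for sp in diet:
    print(f"{sp:10s} D = {jacobs_index(diet[sp], avail[sp]):+.2f}")

# Temporal overlap from camera-trap detections: a simple discrete version of an
# overlap coefficient, computed from normalized hourly activity histograms.
def temporal_overlap(hist_a, hist_b):
    a = np.asarray(hist_a, float) / np.sum(hist_a)
    b = np.asarray(hist_b, float) / np.sum(hist_b)
    return np.minimum(a, b).sum()            # 1 = identical activity patterns

tiger_hours = np.random.default_rng(1).poisson(3, 24)   # placeholder detections per hour
boar_hours = np.random.default_rng(2).poisson(3, 24)
print("tiger-boar temporal overlap:", round(temporal_overlap(tiger_hours, boar_hours), 2))
```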
2203.09798
Francisco Rowe Dr
Niall Newsham, Francisco Rowe
Understanding the Trajectories of Population Decline Across Rural and Urban Europe: A Sequence Analysis
24 pages, 5 tables, 5 figures
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Population decline is projected to become widespread in Europe, with the continental population set to reverse its longstanding trajectory of growth within the next five years. This represents unfamiliar demographic territory. Despite this, literature on decline remains sparse and our understanding porous. Particular epistemological deficiencies stem from a lack of both cross-national and temporal analyses of population decline. This study seeks to address these gaps through the novel application of sequence and cluster analysis techniques to examine variations in population decline trajectories since 2000 in 696 sub-national areas across 33 European territories. The methodology allows for a holistic understanding of decline trajectories, capturing differences in the ordering, timing, magnitude and spatial structure of population decline. We identify a typology of population decline distinguishing seven distinct pathways to depopulation and chart their geographies. Results revealed differentiated pathways of depopulation in continental sub-regions, with consistent and rapid declines in the east, persistent but moderate declines in central Europe, accelerating declines in the south and decelerating population declines in the west. Results also revealed differentiated patterns of depopulation across the rural-urban continuum, with urban and populous areas experiencing deceleration in population decline, while population decline accelerates or stabilises in rural areas. Small and mid-sized areas displayed heterogeneous depopulation trajectories, highlighting the importance of local contextual factors in influencing trajectories of population decline.
[ { "created": "Fri, 18 Mar 2022 08:29:57 GMT", "version": "v1" }, { "created": "Tue, 10 May 2022 13:47:38 GMT", "version": "v2" } ]
2022-05-11
[ [ "Newsham", "Niall", "" ], [ "Rowe", "Francisco", "" ] ]
Population decline is projected to become widespread in Europe, with the continental population set to reverse its longstanding trajectory of growth within the next five years. This represents unfamiliar demographic territory. Despite this, literature on decline remains sparse and our understanding porous. Particular epistemological deficiencies stem from a lack of both cross-national and temporal analyses of population decline. This study seeks to address these gaps through the novel application of sequence and cluster analysis techniques to examine variations in population decline trajectories since 2000 in 696 sub-national areas across 33 European territories. The methodology allows for a holistic understanding of decline trajectories, capturing differences in the ordering, timing, magnitude and spatial structure of population decline. We identify a typology of population decline distinguishing seven distinct pathways to depopulation and chart their geographies. Results revealed differentiated pathways of depopulation in continental sub-regions, with consistent and rapid declines in the east, persistent but moderate declines in central Europe, accelerating declines in the south and decelerating population declines in the west. Results also revealed differentiated patterns of depopulation across the rural-urban continuum, with urban and populous areas experiencing deceleration in population decline, while population decline accelerates or stabilises in rural areas. Small and mid-sized areas displayed heterogeneous depopulation trajectories, highlighting the importance of local contextual factors in influencing trajectories of population decline.
1212.2617
Peter Richtarik
William Hulme, Peter Richt\'arik, Lynne McGuire and Alison Green
Optimal diagnostic tests for sporadic Creutzfeldt-Jakob disease based on support vector machine classification of RT-QuIC data
32 pages, 12 figures, 1 table
null
null
null
q-bio.QM cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we study numerical construction of optimal clinical diagnostic tests for detecting sporadic Creutzfeldt-Jakob disease (sCJD). A cerebrospinal fluid sample (CSF) from a suspected sCJD patient is subjected to a process which initiates the aggregation of a protein present only in cases of sCJD. This aggregation is indirectly observed in real-time at regular intervals, so that a longitudinal set of data is constructed that is then analysed for evidence of this aggregation. The best existing test is based solely on the final value of this set of data, which is compared against a threshold to conclude whether or not aggregation, and thus sCJD, is present. This test criterion was decided upon by analysing data from a total of 108 sCJD and non-sCJD samples, but this was done subjectively and there is no supporting mathematical analysis declaring this criterion to be exploiting the available data optimally. This paper addresses this deficiency, seeking to validate or improve the test primarily via support vector machine (SVM) classification. Besides this, we address a number of additional issues such as i) early stopping of the measurement process, ii) the possibility of detecting the particular type of sCJD and iii) the incorporation of additional patient data such as age, sex, disease duration and timing of CSF sampling into the construction of the test.
[ { "created": "Tue, 11 Dec 2012 20:33:16 GMT", "version": "v1" } ]
2012-12-12
[ [ "Hulme", "William", "" ], [ "Richtárik", "Peter", "" ], [ "McGuire", "Lynne", "" ], [ "Green", "Alison", "" ] ]
In this work we study numerical construction of optimal clinical diagnostic tests for detecting sporadic Creutzfeldt-Jakob disease (sCJD). A cerebrospinal fluid sample (CSF) from a suspected sCJD patient is subjected to a process which initiates the aggregation of a protein present only in cases of sCJD. This aggregation is indirectly observed in real-time at regular intervals, so that a longitudinal set of data is constructed that is then analysed for evidence of this aggregation. The best existing test is based solely on the final value of this set of data, which is compared against a threshold to conclude whether or not aggregation, and thus sCJD, is present. This test criterion was decided upon by analysing data from a total of 108 sCJD and non-sCJD samples, but this was done subjectively and there is no supporting mathematical analysis declaring this criterion to be exploiting the available data optimally. This paper addresses this deficiency, seeking to validate or improve the test primarily via support vector machine (SVM) classification. Besides this, we address a number of additional issues such as i) early stopping of the measurement process, ii) the possibility of detecting the particular type of sCJD and iii) the incorporation of additional patient data such as age, sex, disease duration and timing of CSF sampling into the construction of the test.
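At its core, the classification step described above amounts to fitting a support vector machine to feature vectors derived from each longitudinal RT-QuIC curve and comparing it against the existing end-point threshold test. The sketch below uses synthetic sigmoidal curves and two obvious features (final value and steepest slope); the study evaluates many more feature sets, kernels and stopping rules, so treat this only as a schematic of the approach.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
t = np.linspace(0, 90, 46)                     # hypothetical measurement times (hours)

def synthetic_curve(positive):
    """Sigmoidal aggregation signal for positive samples, flat noise otherwise."""
    if positive:
        onset = rng.uniform(20, 60)
        curve = 1.0 / (1.0 + np.exp(-(t - onset) / 4.0))
    else:
        curve = np.zeros_like(t)
    return curve + rng.normal(0, 0.05, t.size)

curves = [synthetic_curve(i < 54) for i in range(108)]   # 108 samples, half positive
y = np.array([1] * 54 + [0] * 54)

# Two simple per-curve features: the end-point (the basis of the existing
# threshold test) and the steepest observed slope.
X = np.array([[c[-1], np.max(np.diff(c))] for c in curves])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```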
q-bio/0703038
Zhong Li
Zhong Li, Aris Floratos, David Wang, Andrea Califano
A Pattern Discovery-Based Method for Detecting Multi-Locus Genetic Association
49 pages, 4 tables, 4 figures
null
null
null
q-bio.GN q-bio.PE
null
Methods to effectively detect multi-locus genetic association are becoming increasingly relevant in the genetic dissection of complex traits in humans. Current approaches typically consider a limited number of hypotheses, most of which are related to the effect of a single locus or of a relatively small number of neighboring loci on a chromosomal region. We have developed a novel method that is specifically designed to detect genetic association involving multiple disease-susceptibility loci, possibly on different chromosomes. Our approach relies on the efficient discovery of patterns comprising spatially unrestricted polymorphic markers and on the use of appropriate test statistics to evaluate pattern-trait association. Power calculations using multi-locus disease models demonstrate a significant gain in power when this method is used to detect multi-locus genetic association, compared to a standard single-marker analysis method. When analyzing a schizophrenia dataset, we confirmed a previously identified gene-gene interaction. In addition, a less conspicuous association involving different markers on the same two genes was also identified, implicating genetic heterogeneity.
[ { "created": "Fri, 16 Mar 2007 18:08:01 GMT", "version": "v1" } ]
2007-05-23
[ [ "Li", "Zhong", "" ], [ "Floratos", "Aris", "" ], [ "Wang", "David", "" ], [ "Califano", "Andrea", "" ] ]
Methods to effectively detect multi-locus genetic association are becoming increasingly relevant in the genetic dissection of complex traits in humans. Current approaches typically consider a limited number of hypotheses, most of which are related to the effect of a single locus or of a relatively small number of neighboring loci on a chromosomal region. We have developed a novel method that is specifically designed to detect genetic association involving multiple disease-susceptibility loci, possibly on different chromosomes. Our approach relies on the efficient discovery of patterns comprising spatially unrestricted polymorphic markers and on the use of appropriate test statistics to evaluate pattern-trait association. Power calculations using multi-locus disease models demonstrate a significant gain in power when this method is used to detect multi-locus genetic association, compared to a standard single-marker analysis method. When analyzing a schizophrenia dataset, we confirmed a previously identified gene-gene interaction. In addition, a less conspicuous association involving different markers on the same two genes was also identified, implicating genetic heterogeneity.
1808.07154
Ralf Schwamborn
R. Schwamborn, T. K. Mildenberger, M. H. Taylor
Assessing sources of uncertainty in length-based estimates of body growth in populations of fishes and macroinvertebrates with bootstrapped ELEFAN
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The determination of rates of body growth is the first step in many aquatic population studies and fisheries stock assessments. ELEFAN (Electronic LEngth Frequency ANalysis) is a widely used method to fit a growth curve to length-frequency distribution (LFD) data. However, up to now, it was not possible to assess its accuracy or the uncertainty inherent in this method, or to obtain confidence intervals for growth parameters within an unconstrained search space. In this study, experiments were conducted to assess the precision and accuracy of bootstrapped and single-fit ELEFAN-based curve fitting methods, using synthetic LFDs with known input parameters and a real data set of Abra alba shell lengths. The comparison of several types of bootstrap experiments and their outputs (95% confidence intervals and confidence contour plots) provided a first glimpse into the accuracy of modern ELEFAN-based fit methods. The main components of uncertainty (precision and reproducibility of fit algorithms, seed effects, sample size and matrix information content) could be assessed from partial bootstraps. Uncertainty was mainly determined by LFD matrix size, total number of non-zero bins and the sampling of large-sized individuals. A new pseudo-Rsquared index for the goodness-of-fit of VBGF models to LFD data is proposed. For a large, perfect synthetic data set, pseudo-RsquaredPhi was very high (88 to 100%), indicating an excellent fit of the VBGF model. The small Abra alba data set showed a low pseudo-RsquaredPhi, from 54% to 68%, indicating the need for more samples and a larger LFD data matrix. New, robust, bootstrap-based methods for curve fitting are presented and discussed. This study demonstrates a promising new path for length-based analyses of growth and mortality in natural populations, which are the basis for a new suite of methods that are included in the new fishboot package.
[ { "created": "Tue, 21 Aug 2018 22:35:04 GMT", "version": "v1" } ]
2018-08-23
[ [ "Schwamborn", "R.", "" ], [ "Mildenberger", "T. K.", "" ], [ "Taylor", "M. H.", "" ] ]
The determination of rates of body growth is the first step in many aquatic population studies and fisheries stock assessments. ELEFAN (Electronic LEngth Frequency ANalysis) is a widely used method to fit a growth curve to length-frequency distribution (LFD) data. However, up to now, it was not possible to assess its accuracy or the uncertainty inherent in this method, or to obtain confidence intervals for growth parameters within an unconstrained search space. In this study, experiments were conducted to assess the precision and accuracy of bootstrapped and single-fit ELEFAN-based curve fitting methods, using synthetic LFDs with known input parameters and a real data set of Abra alba shell lengths. The comparison of several types of bootstrap experiments and their outputs (95% confidence intervals and confidence contour plots) provided a first glimpse into the accuracy of modern ELEFAN-based fit methods. The main components of uncertainty (precision and reproducibility of fit algorithms, seed effects, sample size and matrix information content) could be assessed from partial bootstraps. Uncertainty was mainly determined by LFD matrix size, total number of non-zero bins and the sampling of large-sized individuals. A new pseudo-Rsquared index for the goodness-of-fit of VBGF models to LFD data is proposed. For a large, perfect synthetic data set, pseudo-RsquaredPhi was very high (88 to 100%), indicating an excellent fit of the VBGF model. The small Abra alba data set showed a low pseudo-RsquaredPhi, from 54% to 68%, indicating the need for more samples and a larger LFD data matrix. New, robust, bootstrap-based methods for curve fitting are presented and discussed. This study demonstrates a promising new path for length-based analyses of growth and mortality in natural populations, which are the basis for a new suite of methods that are included in the new fishboot package.
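The bootstrap logic at the heart of this study (refit the von Bertalanffy growth function on resampled data, report percentile confidence intervals, and score the fit) can be sketched briefly. The published implementation is in R (the fishboot and TropFishR ecosystems) and operates on length-frequency data without ages via ELEFAN; the Python sketch below substitutes a simplified length-at-age fit purely to illustrate the bootstrap-confidence-interval and goodness-of-fit ideas, so all data and the pseudo-R-squared formula are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def vbgf(age, linf, k, t0):
    """von Bertalanffy growth function."""
    return linf * (1.0 - np.exp(-k * (age - t0)))

# Synthetic length-at-age data (a stand-in for the length-frequency input of ELEFAN).
rng = np.random.default_rng(7)
age = rng.uniform(0.2, 6.0, 150)
length = vbgf(age, 25.0, 0.7, -0.1) + rng.normal(0, 1.2, age.size)

# Nonparametric bootstrap: refit the growth curve on resampled data and
# report percentile confidence intervals for each parameter.
boot = []
for _ in range(500):
    idx = rng.integers(0, age.size, age.size)
    try:
        p, _ = curve_fit(vbgf, age[idx], length[idx], p0=[20.0, 0.5, 0.0], maxfev=5000)
        boot.append(p)
    except RuntimeError:
        continue                                # skip non-converged resamples
boot = np.array(boot)
for name, col in zip(["Linf", "K", "t0"], boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{name}: 95% CI [{lo:.2f}, {hi:.2f}]")

# A crude goodness-of-fit index in the spirit of a pseudo-R-squared.
pred = vbgf(age, *np.median(boot, axis=0))
print("pseudo-R^2:", 1 - np.sum((length - pred) ** 2) / np.sum((length - length.mean()) ** 2))
```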
q-bio/0606005
Yoram Burak
Yoram Burak, Ted Brookings, and Ila Fiete
Triangular lattice neurons may implement an advanced numeral system to precisely encode rat position over large ranges
4 pages with one figure, and 2 pages of supplementary information
null
null
null
q-bio.NC
null
We argue by observation of the neural data that neurons in area dMEC of rats, which fire whenever the rat is on any vertex of a regular triangular lattice that tiles 2-d space, may be using an advanced numeral system to reversibly encode rat position. We interpret measured dMEC properties within the framework of a residue number system (RNS), and describe how RNS encoding -- which breaks the non-periodic variable of rat position into a set of narrowly distributed periodic variables -- allows a small set of cells to compactly represent and efficiently update rat position with high resolution over a large range. We show that the uniquely useful properties of RNS encoding still hold when the encoded and encoding quantities are relaxed to be real numbers with built-in uncertainties, and provide a numerical and functional estimate of the range and resolution of rat positions that can be uniquely encoded in dMEC. The use of a compact, `arithmetic-friendly' numeral system to encode a metric variable, as we propose is happening in dMEC, is qualitatively different from all previously identified examples of coding in the brain. We discuss the numerous neurobiological implications and predictions of our hypothesis.
[ { "created": "Sun, 4 Jun 2006 06:15:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Burak", "Yoram", "" ], [ "Brookings", "Ted", "" ], [ "Fiete", "Ila", "" ] ]
We argue by observation of the neural data that neurons in area dMEC of rats, which fire whenever the rat is on any vertex of a regular triangular lattice that tiles 2-d space, may be using an advanced numeral system to reversibly encode rat position. We interpret measured dMEC properties within the framework of a residue number system (RNS), and describe how RNS encoding -- which breaks the non-periodic variable of rat position into a set of narrowly distributed periodic variables -- allows a small set of cells to compactly represent and efficiently update rat position with high resolution over a large range. We show that the uniquely useful properties of RNS encoding still hold when the encoded and encoding quantities are relaxed to be real numbers with built-in uncertainties, and provide a numerical and functional estimate of the range and resolution of rat positions that can be uniquely encoded in dMEC. The use of a compact, `arithmetic-friendly' numeral system to encode a metric variable, as we propose is happening in dMEC, is qualitatively different from all previously identified examples of coding in the brain. We discuss the numerous neurobiological implications and predictions of our hypothesis.
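The residue-number-system idea proposed above can be made concrete with a handful of pairwise-coprime "grid periods": position is stored only as its remainder modulo each period, and the Chinese remainder theorem recovers the unique position over a range equal to the product of the periods. The moduli below are arbitrary illustrative choices, not measured dMEC lattice spacings, and the integer arithmetic ignores the real-valued phases and uncertainties treated in the paper.

```python
from math import prod

def rns_encode(x, moduli):
    """Store a position only through its phase (remainder) on each grid period."""
    return [x % m for m in moduli]

def rns_decode(residues, moduli):
    """Chinese remainder theorem: recover x modulo the product of the periods."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # modular inverse of Mi mod m (Python 3.8+)
    return x % M

moduli = [7, 11, 13, 17]                  # pairwise-coprime "grid periods"
position = 12345                          # must lie below 7*11*13*17 = 17017
code = rns_encode(position, moduli)
print("residues:", code)
print("decoded :", rns_decode(code, moduli))

# Updating position is local and cheap: moving by dx only shifts each phase.
dx = 42
shifted = [(r + dx) % m for r, m in zip(code, moduli)]
print("decoded after move:", rns_decode(shifted, moduli))
```

The key property the abstract exploits is visible here: a small set of narrowly ranged periodic variables represents a large non-periodic range, and position updates never require touching the full-range value.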
1203.5914
Martin Burger
Michael Moeller and Martin Burger and Peter Dieterich and Albrecht Schwab
A Framework for Automated Cell Tracking in Phase Contrast Microscopic Videos based on Normal Velocities
null
null
null
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel framework for the automated tracking of cells, with a particular focus on the challenging situation of phase contrast microscopic videos. Our framework is based on a topology preserving variational segmentation approach applied to normal velocity components obtained from optical flow computations, which appears to yield robust tracking and automated extraction of cell trajectories. In order to obtain improved trackings of local shape features we discuss an additional correction step based on active contours and the image Laplacian which we optimize for an example class of transformed renal epithelial (MDCK-F) cells. We also test the framework for human melanoma cells and murine neutrophil granulocytes that were seeded on different types of extracellular matrices. The results are validated with manual tracking results.
[ { "created": "Tue, 27 Mar 2012 10:05:19 GMT", "version": "v1" } ]
2012-03-28
[ [ "Moeller", "Michael", "" ], [ "Burger", "Martin", "" ], [ "Dieterich", "Peter", "" ], [ "Schwab", "Albrecht", "" ] ]
This paper introduces a novel framework for the automated tracking of cells, with a particular focus on the challenging situation of phase contrast microscopic videos. Our framework is based on a topology preserving variational segmentation approach applied to normal velocity components obtained from optical flow computations, which appears to yield robust tracking and automated extraction of cell trajectories. In order to obtain improved trackings of local shape features we discuss an additional correction step based on active contours and the image Laplacian which we optimize for an example class of transformed renal epithelial (MDCK-F) cells. We also test the framework for human melanoma cells and murine neutrophil granulocytes that were seeded on different types of extracellular matrices. The results are validated with manual tracking results.
1408.2298
Atsushi Kamimura
Atsushi Kamimura and Kunihiko Kaneko
Transition to diversification by competition for resources in catalytic reaction networks
18 pages, 13 figure, submitted for publication
null
null
null
q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
All life, including cells and artificial protocells, must integrate diverse molecules into a single unit in order to reproduce. Despite the expected pressure to evolve a simple system with the fastest replication speed, the mechanism by which the use of a great variety of components and the coexistence of diverse cell types with different compositions are achieved remains unknown. Here we show that the coexistence of such diverse compositions and cell types is the result of competition for a variety of limited resources. We find that a transition to diversity occurs in both chemical compositions and protocell types as the resource supply is decreased, when the maximum inflow and consumption of resources are balanced.
[ { "created": "Mon, 11 Aug 2014 03:24:41 GMT", "version": "v1" } ]
2014-08-12
[ [ "Kamimura", "Atsushi", "" ], [ "Kaneko", "Kunihiko", "" ] ]
All life, including cells and artificial protocells, must integrate diverse molecules into a single unit in order to reproduce. Despite the expected pressure to evolve a simple system with the fastest replication speed, the mechanism by which the use of a great variety of components and the coexistence of diverse cell types with different compositions are achieved remains unknown. Here we show that the coexistence of such diverse compositions and cell types is the result of competition for a variety of limited resources. We find that a transition to diversity occurs in both chemical compositions and protocell types as the resource supply is decreased, when the maximum inflow and consumption of resources are balanced.
2003.12017
Ganesh Kumar M
Ganesh Kumar M, Soman K.P, Gopalakrishnan E.A, Vijay Krishna Menon, Sowmya V
Prediction of number of cases expected and estimation of the final size of coronavirus epidemic in India using the logistic model and genetic algorithm
null
null
null
null
q-bio.PE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we apply the logistic growth regression model and a genetic algorithm to predict the number of coronavirus-infected cases that can be expected in the coming days in India, and also estimate the final size and peak time of the coronavirus epidemic in India.
[ { "created": "Thu, 26 Mar 2020 16:32:54 GMT", "version": "v1" } ]
2020-03-27
[ [ "M", "Ganesh Kumar", "" ], [ "P", "Soman K.", "" ], [ "A", "Gopalakrishnan E.", "" ], [ "Menon", "Vijay Krishna", "" ], [ "V", "Sowmya", "" ] ]
In this paper, we apply the logistic growth regression model and a genetic algorithm to predict the number of coronavirus-infected cases that can be expected in the coming days in India, and also estimate the final size and peak time of the coronavirus epidemic in India.
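The combination described above, a logistic growth model with its parameters found by an evolutionary search, is easy to prototype. The sketch below uses SciPy's differential evolution optimiser (an evolutionary method, not necessarily the authors' exact genetic algorithm) on a made-up cumulative case series, then reads off the final epidemic size K and the peak time, which for a logistic curve is the inflection point t0.

```python
import numpy as np
from scipy.optimize import differential_evolution

def logistic(t, K, r, t0):
    """Cumulative cases under logistic growth: final size K, rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Placeholder cumulative case counts (synthetic, not real Indian data).
t_obs = np.arange(40)
cases = logistic(t_obs, 12000, 0.18, 35) + np.random.default_rng(3).normal(0, 50, t_obs.size)

def loss(params):
    return np.mean((logistic(t_obs, *params) - cases) ** 2)

# Evolutionary search over (K, r, t0) within broad bounds.
result = differential_evolution(loss, bounds=[(1e3, 1e6), (0.01, 1.0), (0, 200)], seed=3)
K, r, t0 = result.x
print(f"estimated final size K = {K:.0f}")
print(f"estimated peak of daily new cases near t = {t0:.1f} days (inflection of the curve)")
```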
1805.09603
Chen Beer
Chen Beer, Omri Barak
One step back, two steps forward: interference and learning in recurrent neural networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network - a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning has major consequences for the learning trajectory and its final outcome. Specifically, we show that Least Mean Squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning, in a learning rule dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
[ { "created": "Thu, 24 May 2018 11:05:12 GMT", "version": "v1" }, { "created": "Wed, 30 Jan 2019 07:06:09 GMT", "version": "v2" }, { "created": "Thu, 2 May 2019 14:32:44 GMT", "version": "v3" } ]
2019-05-03
[ [ "Beer", "Chen", "" ], [ "Barak", "Omri", "" ] ]
Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network - a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning has major consequences for the learning trajectory and its final outcome. Specifically, we show that Least Mean Squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning, in a learning rule dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
1303.0256
Marcelo Tragtenberg Dr.
M. Girardi-Schappo, M. H. R. Tragtenberg, O. Kinouchi
A Brief History of Excitable Map-Based Neurons and Neural Networks
53 pages, 13 figures, submitted to Journal of Neuroscience Methods
Journal of Neuroscience Methods, Volume 220, Issue 2, 15 November 2013, Pages 116-130
10.1016/j.jneumeth.2013.07.014
null
q-bio.NC cond-mat.dis-nn nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This review gives a short historical account of the excitable maps approach for modeling neurons and neuronal networks. Some early models, due to Pasemann (1993), Chialvo (1995) and Kinouchi and Tragtenberg (1996), are compared with more recent proposals by Rulkov (2002) and Izhikevich (2003). We also review map-based schemes for electrical and chemical synapses and some recent findings as critical avalanches in map-based neural networks. We conclude with suggestions for further work in this area like more efficient maps, compartmental modeling and close dynamical comparison with conductance-based models.
[ { "created": "Fri, 1 Mar 2013 19:25:54 GMT", "version": "v1" } ]
2016-02-03
[ [ "Girardi-Schappo", "M.", "" ], [ "Tragtenberg", "M. H. R.", "" ], [ "Kinouchi", "O.", "" ] ]
This review gives a short historical account of the excitable maps approach for modeling neurons and neuronal networks. Some early models, due to Pasemann (1993), Chialvo (1995) and Kinouchi and Tragtenberg (1996), are compared with more recent proposals by Rulkov (2002) and Izhikevich (2003). We also review map-based schemes for electrical and chemical synapses and some recent findings as critical avalanches in map-based neural networks. We conclude with suggestions for further work in this area like more efficient maps, compartmental modeling and close dynamical comparison with conductance-based models.
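Among the maps surveyed in this review, Rulkov's two-dimensional map is a convenient concrete example: a fast, membrane-potential-like variable x driven by a slow variable y, producing spiking or bursting depending on the parameters. The sketch below iterates one such neuron; the parameter values are typical choices for bursting taken from common usage, not values prescribed by the review itself.

```python
import numpy as np

def rulkov_step(x, y, alpha=4.5, sigma=0.001, mu=0.001):
    """One iteration of the Rulkov map-based neuron (fast variable x, slow variable y)."""
    x_new = alpha / (1.0 + x * x) + y
    y_new = y - mu * (x - sigma)
    return x_new, y_new

# Iterate long enough for the slow variable to drive bursts of fast spikes.
x, y = -1.0, -3.0
trace = np.empty(3000)
for n in range(trace.size):
    x, y = rulkov_step(x, y)
    trace[n] = x

above = trace > 0.0                       # crude spike detection on the fast variable
print("fraction of iterations above threshold:", above.mean().round(3))
print("first 10 samples of x:", np.round(trace[:10], 3))
```

Because the whole neuron is two algebraic updates per time step, networks of thousands of such units can be simulated far more cheaply than conductance-based models, which is the main appeal discussed in the review.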
1605.06790
Marisa Eisenberg
Elizabeth C. Lee, Michael R. Kelly, Brad M. Ochocki, Segun M. Akinwumi, Karen E. S. Hamre, Joseph H. Tien, Marisa C. Eisenberg
Model distinguishability and inference robustness in mechanisms of cholera transmission and loss of immunity
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical models of cholera and waterborne disease vary widely in their structures, in terms of transmission pathways, loss of immunity, and other features. These differences may yield different predictions and parameter estimates from the same data. Given the increasing use of models to inform public health decision-making, it is important to assess distinguishability (whether models can be distinguished based on fit to data) and inference robustness (whether model inferences are robust to realistic variations in model structure). We examined the effects of uncertainty in model structure in epidemic cholera, testing a range of models based on known features of cholera epidemiology. We fit to simulated epidemic and long-term data, as well as data from the 2006 Angola epidemic. We evaluated model distinguishability based on data fit, and whether parameter values and forecasts can accurately be inferred from incidence data. In general, all models were able to successfully fit to all data sets, even if misspecified. However, in the long-term data, the best model fits were achieved when the loss of immunity form matched those of the model that simulated the data. Two transmission and reporting parameters were accurately estimated across all models, while the remaining showed broad variation across the different models and data sets. Forecasting efforts were not successful early, but once the epidemic peak had been achieved, most models showed similar accuracy. Our results suggest that we are unlikely to be able to infer mechanistic details from epidemic case data alone, underscoring the need for broader data collection. Nonetheless, with sufficient data, conclusions from forecasting and some parameter estimates were robust to variations in the model structure, and comparative modeling can help determine how variations in model structure affect conclusions drawn from models and data.
[ { "created": "Sun, 22 May 2016 13:35:40 GMT", "version": "v1" } ]
2016-05-24
[ [ "Lee", "Elizabeth C.", "" ], [ "Kelly", "Michael R.", "" ], [ "Ochocki", "Brad M.", "" ], [ "Akinwumi", "Segun M.", "" ], [ "Hamre", "Karen E. S.", "" ], [ "Tien", "Joseph H.", "" ], [ "Eisenberg", "Marisa C.", "" ] ]
Mathematical models of cholera and waterborne disease vary widely in their structures, in terms of transmission pathways, loss of immunity, and other features. These differences may yield different predictions and parameter estimates from the same data. Given the increasing use of models to inform public health decision-making, it is important to assess distinguishability (whether models can be distinguished based on fit to data) and inference robustness (whether model inferences are robust to realistic variations in model structure). We examined the effects of uncertainty in model structure in epidemic cholera, testing a range of models based on known features of cholera epidemiology. We fit to simulated epidemic and long-term data, as well as data from the 2006 Angola epidemic. We evaluated model distinguishability based on data fit, and whether parameter values and forecasts can accurately be inferred from incidence data. In general, all models were able to successfully fit to all data sets, even if misspecified. However, in the long-term data, the best model fits were achieved when the loss of immunity form matched those of the model that simulated the data. Two transmission and reporting parameters were accurately estimated across all models, while the remaining showed broad variation across the different models and data sets. Forecasting efforts were not successful early, but once the epidemic peak had been achieved, most models showed similar accuracy. Our results suggest that we are unlikely to be able to infer mechanistic details from epidemic case data alone, underscoring the need for broader data collection. Nonetheless, with sufficient data, conclusions from forecasting and some parameter estimates were robust to variations in the model structure, and comparative modeling can help determine how variations in model structure affect conclusions drawn from models and data.
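One of the standard structures compared in studies of this kind is the SIWR ("waterborne") extension of SIR, with both direct and water-mediated transmission. The sketch below integrates a generic variant of that model with arbitrary illustrative parameters; it shows the model family being discussed, not the specific parameterisations or loss-of-immunity forms fitted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def siwr(t, state, beta_i, beta_w, xi, nu, gamma):
    """SIWR: direct (beta_i) and waterborne (beta_w) transmission,
    shedding into the water compartment (xi), pathogen decay (nu), recovery (gamma)."""
    S, I, W, R = state
    infection = beta_i * S * I + beta_w * S * W
    dS = -infection
    dI = infection - gamma * I
    dW = xi * I - nu * W
    dR = gamma * I
    return [dS, dI, dW, dR]

# Arbitrary illustrative parameters and a nearly fully susceptible population.
params = (0.25, 0.35, 0.5, 0.5, 0.2)
sol = solve_ivp(siwr, (0, 200), [0.999, 0.001, 0.0, 0.0],
                args=params, max_step=0.5)

S, I, W, R = sol.y
peak_day = sol.t[np.argmax(I)]
print(f"epidemic peak near day {peak_day:.0f}, final size = {R[-1]:.2f}")
```

Structural distinguishability questions arise precisely because several such variants (with or without the W compartment, with different waning-immunity terms) can produce nearly identical incidence curves once their parameters are refit.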
1905.09888
Farzad Khalvati
Yucheng Zhang, Edrise M. Lobo-Mueller, Paul Karanicolas, Steven Gallinger, Masoom A. Haider, Farzad Khalvati
Prognostic Value of Transfer Learning Based Features in Resectable Pancreatic Ductal Adenocarcinoma
null
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pancreatic Ductal Adenocarcinoma (PDAC) is one of the most aggressive cancers with an extremely poor prognosis. Radiomics has shown prognostic ability in multiple types of cancer including PDAC. However, the prognostic value of traditional radiomics pipelines, which are based on hand-crafted radiomic features alone is limited. Convolutional neural networks (CNNs) have been shown to outperform these feature-based models in computer vision tasks. However, training a CNN from scratch needs a large sample size which is not feasible in most medical imaging studies. As an alternative solution, CNN-based transfer learning has shown potential for achieving reasonable performance using small datasets. In this work, we developed and validated a CNN-based transfer learning approach for prognostication of PDAC patients for overall survival using two independent resectable PDAC cohorts. The proposed deep transfer learning model for prognostication of PDAC achieved the area under the receiver operating characteristic curve of 0.74, which was significantly higher than that of the traditional radiomics model (0.56) as well as a CNN model trained from scratch (0.50). These results suggest that deep transfer learning may significantly improve prognosis performance using small datasets in medical imaging.
[ { "created": "Thu, 23 May 2019 19:35:41 GMT", "version": "v1" }, { "created": "Wed, 21 Aug 2019 15:16:00 GMT", "version": "v2" } ]
2019-08-22
[ [ "Zhang", "Yucheng", "" ], [ "Lobo-Mueller", "Edrise M.", "" ], [ "Karanicolas", "Paul", "" ], [ "Gallinger", "Steven", "" ], [ "Haider", "Masoom A.", "" ], [ "Khalvati", "Farzad", "" ] ]
Pancreatic Ductal Adenocarcinoma (PDAC) is one of the most aggressive cancers with an extremely poor prognosis. Radiomics has shown prognostic ability in multiple types of cancer including PDAC. However, the prognostic value of traditional radiomics pipelines, which are based on hand-crafted radiomic features alone is limited. Convolutional neural networks (CNNs) have been shown to outperform these feature-based models in computer vision tasks. However, training a CNN from scratch needs a large sample size which is not feasible in most medical imaging studies. As an alternative solution, CNN-based transfer learning has shown potential for achieving reasonable performance using small datasets. In this work, we developed and validated a CNN-based transfer learning approach for prognostication of PDAC patients for overall survival using two independent resectable PDAC cohorts. The proposed deep transfer learning model for prognostication of PDAC achieved the area under the receiver operating characteristic curve of 0.74, which was significantly higher than that of the traditional radiomics model (0.56) as well as a CNN model trained from scratch (0.50). These results suggest that deep transfer learning may significantly improve prognosis performance using small datasets in medical imaging.
1501.06149
Iain Johnston
Iain G. Johnston and Nick S. Jones
Closed-form stochastic solutions for non-equilibrium dynamics and inheritance of cellular components over many cell divisions
null
null
10.1098/rspa.2015.0050
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic dynamics govern many important processes in cellular biology, and an underlying theoretical approach describing these dynamics is desirable to address a wealth of questions in biology and medicine. Mathematical tools exist for treating several important examples of these stochastic processes, most notably gene expression, and random partitioning at single cell divisions or after a steady state has been reached. Comparatively little work exists exploring different and specific ways that repeated cell divisions can lead to stochastic inheritance of unequilibrated cellular populations. Here we introduce a mathematical formalism to describe cellular agents that are subject to random creation, replication, and/or degradation, and are inherited according to a range of random dynamics at cell divisions. We obtain closed-form generating functions describing systems at any time after any number of cell divisions for binomial partitioning and divisions provoking a deterministic or random, subtractive or additive change in copy number, and show that these solutions agree exactly with stochastic simulation. We apply this general formalism to several example problems involving the dynamics of mitochondrial DNA (mtDNA) during development and organismal lifetimes.
[ { "created": "Sun, 25 Jan 2015 12:56:04 GMT", "version": "v1" } ]
2016-02-17
[ [ "Johnston", "Iain G.", "" ], [ "Jones", "Nick S.", "" ] ]
Stochastic dynamics govern many important processes in cellular biology, and an underlying theoretical approach describing these dynamics is desirable to address a wealth of questions in biology and medicine. Mathematical tools exist for treating several important examples of these stochastic processes, most notably gene expression, and random partitioning at single cell divisions or after a steady state has been reached. Comparatively little work exists exploring different and specific ways that repeated cell divisions can lead to stochastic inheritance of unequilibrated cellular populations. Here we introduce a mathematical formalism to describe cellular agents that are subject to random creation, replication, and/or degradation, and are inherited according to a range of random dynamics at cell divisions. We obtain closed-form generating functions describing systems at any time after any number of cell divisions for binomial partitioning and divisions provoking a deterministic or random, subtractive or additive change in copy number, and show that these solutions agree exactly with stochastic simulation. We apply this general formalism to several example problems involving the dynamics of mitochondrial DNA (mtDNA) during development and organismal lifetimes.
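The setting treated analytically above, components created, replicated and degraded within a cell cycle and then partitioned binomially at division, is straightforward to simulate directly, which is also how closed-form results of this kind are typically checked. The Gillespie-style sketch below tracks copy number through several divisions for arbitrary rate constants; it reproduces the model class only, not the paper's generating-function solutions.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_lineage(n0=100, divisions=8, t_div=1.0, birth=0.7, death=0.2, immigration=0.5):
    """Gillespie simulation of immigration/birth/death within each cell cycle,
    with binomial partitioning of copies at every division."""
    n = n0
    history = [n]
    for _ in range(divisions):
        t = 0.0
        while True:
            rates = np.array([immigration, birth * n, death * n])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            if t > t_div:
                break
            event = rng.choice(3, p=rates / total)
            n += 1 if event in (0, 1) else -1
        n = rng.binomial(n, 0.5)          # random segregation to one daughter cell
        history.append(n)
    return history

# Copy-number statistics across many independent lineages.
runs = np.array([simulate_lineage() for _ in range(500)])
print("mean copy number per generation:", runs.mean(axis=0).round(1))
print("variance per generation:        ", runs.var(axis=0).round(1))
```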
1704.08585
G Ambika
K. P. Harikrishnan, Rinku Jacob, R. Misra, G. Ambika
Determining the minimum embedding dimension for state space reconstruction through recurrence networks
13 pages, 8 figures, submitted to Pramana( J Phys)
Indian Academy of Sciences Conference Series (2017) 1:1
10.29195/iascs.01.01.0004
null
q-bio.NC nlin.CD physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The analysis of observed time series from nonlinear systems is usually done by making a time-delay reconstruction to unfold the dynamics on a multi-dimensional state space. An important aspect of the analysis is the choice of the correct embedding dimension. The conventional procedure used for this is either the method of false nearest neighbors or the saturation of some invariant measure, such as the correlation dimension. Here we examine this issue from a complex network perspective and propose a recurrence network based measure to determine the acceptable minimum embedding dimension to be used for such analysis. The measure proposed here is based on the well known Kullback-Leibler divergence commonly used in information theory. We show that the measure is simple and direct to compute and gives accurate results for short time series. To show the significance of the measure in the analysis of practical data, we present the analysis of two EEG signals as examples.
[ { "created": "Wed, 26 Apr 2017 02:20:18 GMT", "version": "v1" } ]
2018-09-05
[ [ "Harikrishnan", "K. P.", "" ], [ "Jacob", "Rinku", "" ], [ "Misra", "R.", "" ], [ "Ambika", "G.", "" ] ]
The analysis of observed time series from nonlinear systems is usually done by making a time-delay reconstruction to unfold the dynamics on a multi-dimensional state space. An important aspect of the analysis is the choice of the correct embedding dimension. The conventional procedure used for this is either the method of false nearest neighbors or the saturation of some invariant measure, such as the correlation dimension. Here we examine this issue from a complex network perspective and propose a recurrence network based measure to determine the acceptable minimum embedding dimension to be used for such analysis. The measure proposed here is based on the well known Kullback-Leibler divergence commonly used in information theory. We show that the measure is simple and direct to compute and gives accurate results for short time series. To show the significance of the measure in the analysis of practical data, we present the analysis of two EEG signals as examples.
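The pipeline implied by this abstract — delay-embed the series, build a recurrence network, and compare a network property across embedding dimensions with a Kullback-Leibler-type divergence — can be prototyped compactly. The sketch below compares degree distributions of recurrence networks built at successive embedding dimensions; the paper's actual measure is defined differently in detail, and the noisy sine used here is only a placeholder signal, so this shows the general idea rather than the published method.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay reconstruction of a scalar series into `dim` dimensions."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

def degree_distribution(points, bins, recurrence_rate=0.1):
    """Degree histogram of the recurrence network built at a fixed recurrence rate."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    off_diag = ~np.eye(len(points), dtype=bool)
    eps = np.quantile(d[off_diag], recurrence_rate)   # threshold from the distance quantile
    degrees = ((d < eps) & off_diag).sum(axis=1)
    hist, _ = np.histogram(degrees, bins=bins, density=True)
    return hist + 1e-12                               # avoid zeros inside the divergence

def kl_divergence(p, q):
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Placeholder signal; an EEG trace or chaotic series would replace this in practice.
t = np.linspace(0, 60, 1500)
x = np.sin(t) + 0.1 * np.random.default_rng(5).normal(size=t.size)

bins = np.arange(0, 805, 5)
prev = None
for dim in range(1, 7):
    pts = delay_embed(x, dim, tau=15)[:800]   # same number of network nodes at every dimension
    hist = degree_distribution(pts, bins)
    if prev is not None:
        print(f"m={dim - 1} -> m={dim}: divergence = {kl_divergence(prev, hist):.4f}")
    prev = hist
# A working choice of embedding dimension is the smallest m beyond which this
# divergence between consecutive degree distributions stops changing appreciably.
```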
2303.07250
Amit Samadder
Amit Samadder, Arnab Chattopadhyay, Sabyasachi Bhattacharya
The balance between contamination and predation determine species existence in prey-predator dynamics with contaminated and uncontaminated prey
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In freshwater ecosystems, aquatic insects that ontogenetically shift their habitat from aquatic to terrestrial play vital roles as prey subsidies that move nutrients and energy from aquatic to terrestrial food webs. As a result, these subsidies negatively affect alternative terrestrial prey by enhancing predator density. However, these aquatic insects can also transport contamination, primarily produced in aquatic ecosystems, to the terrestrial community, which can reduce insectivore abundance and biomass, lower insectivore reproductive success, and increase predation pressure on alternative prey, with consequences for aquatic and terrestrial food webs. Motivated by this, here we consider a prey-predator model where the predator consumes contaminated and uncontaminated prey together. We find that, at a high level of contamination, the vulnerability of contaminated prey and predator is determined by predation preference. More specifically, a very low predation preference towards contaminated prey ensures predator persistence, whereas a higher preference excludes the predator from the system. Interestingly, either contaminated prey or the predator exists at intermediate predation preference due to bi-stability. Furthermore, when there is no contamination in one of the prey, the other prey cannot coexist due to apparent competition for a specific range of predation preferences. However, when sufficient contamination exists in one prey, the alternative uncontaminated prey coexists. With this, contamination also stabilizes and destabilizes the three-species dynamics. Our results also indicate that if the intensity of the contamination in predator reproduction is low, then the contaminated prey is more susceptible to the contamination.
[ { "created": "Mon, 13 Mar 2023 16:24:08 GMT", "version": "v1" }, { "created": "Tue, 14 Mar 2023 12:50:47 GMT", "version": "v2" } ]
2023-03-15
[ [ "Samadder", "Amit", "" ], [ "Chattopadhyay", "Arnab", "" ], [ "Bhattacharya", "Sabyasachi", "" ] ]
In freshwater ecosystems, aquatic insects that ontogenetically shift their habitat from aquatic to terrestrial play vital roles as prey subsidies that move nutrients and energy from aquatic to terrestrial food webs. As a result, these subsidies negatively affect alternative terrestrial prey by enhancing predator density. However, these aquatic insects can also transport contamination, primarily produced in aquatic ecosystems, to the terrestrial community, which can reduce insectivore abundance and biomass, lower insectivore reproductive success, and increase predation pressure on alternative prey, with consequences for aquatic and terrestrial food webs. Motivated by this, here we consider a prey-predator model where the predator consumes contaminated and uncontaminated prey together. We find that, at a high level of contamination, the vulnerability of contaminated prey and predator is determined by predation preference. More specifically, a very low predation preference towards contaminated prey ensures predator persistence, whereas a high preference excludes the predator from the system. Interestingly, either the contaminated prey or the predator exists at intermediate predation preference due to bistability. Furthermore, when there is no contamination in one of the prey, the other prey cannot coexist due to apparent competition for a specific range of predation preferences. However, when sufficient contamination exists in one prey, the alternative uncontaminated prey coexists. In addition, contamination can both stabilize and destabilize the three-species dynamics. Our results also indicate that if the impact of contamination on predator reproduction is low, then the contaminated prey is more susceptible to the contamination.
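Purely as orientation for the reader, a minimal skeleton of such a one-predator, two-prey system can be written as below; the notation and functional forms are illustrative assumptions on our part, not the authors' equations. Here $N_1$ is the contaminated prey, $N_2$ the uncontaminated prey, $P$ the predator, $\phi$ the predation preference towards contaminated prey, $\mu_c$ contamination-induced prey mortality, and $\theta$ the reproductive cost the predator pays for consuming contaminated prey:

    \begin{aligned}
    \dot{N}_1 &= r_1 N_1\Big(1 - \frac{N_1}{K_1}\Big) - \phi\, a N_1 P - \mu_c N_1,\\
    \dot{N}_2 &= r_2 N_2\Big(1 - \frac{N_2}{K_2}\Big) - (1-\phi)\, a N_2 P,\\
    \dot{P}   &= e\big[(1-\theta)\,\phi\, a N_1 + (1-\phi)\, a N_2\big]P - m P.
    \end{aligned}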
2003.11008
Emanuele Olivetti
Gabriele Amorosino, Denis Peruzzo, Pietro Astolfi, Daniela Redaelli, Paolo Avesani, Filippo Arrigoni, Emanuele Olivetti
Automatic Tissue Segmentation with Deep Learning in Patients with Congenital or Acquired Distortion of Brain Anatomy
null
null
null
null
q-bio.TO eess.IV
http://creativecommons.org/licenses/by/4.0/
Brains with complex distortion of cerebral anatomy present several challenges to automatic tissue segmentation methods of T1-weighted MR images. First, the very high variability in the morphology of the tissues can be incompatible with the prior knowledge embedded within the algorithms. Second, the availability of MR images of distorted brains is very scarce, so the methods in the literature have not addressed such cases so far. In this work, we present the first evaluation of state-of-the-art automatic tissue segmentation pipelines on T1-weighted images of brains with different severity of congenital or acquired brain distortion. We compare traditional pipelines and a deep learning model, i.e. a 3D U-Net trained on normal-appearing brains. Unsurprisingly, traditional pipelines completely fail to segment the tissues with strong anatomical distortion. Surprisingly, the 3D U-Net provides useful segmentations that can be a valuable starting point for manual refinement by experts/neuroradiologists.
[ { "created": "Tue, 24 Mar 2020 17:50:39 GMT", "version": "v1" } ]
2020-03-25
[ [ "Amorosino", "Gabriele", "" ], [ "Peruzzo", "Denis", "" ], [ "Astolfi", "Pietro", "" ], [ "Redaelli", "Daniela", "" ], [ "Avesani", "Paolo", "" ], [ "Arrigoni", "Filippo", "" ], [ "Olivetti", "Emanuele", "" ] ]
Brains with complex distortion of cerebral anatomy present several challenges to automatic tissue segmentation methods of T1-weighted MR images. First, the very high variability in the morphology of the tissues can be incompatible with the prior knowledge embedded within the algorithms. Second, the availability of MR images of distorted brains is very scarce, so the methods in the literature have not addressed such cases so far. In this work, we present the first evaluation of state-of-the-art automatic tissue segmentation pipelines on T1-weighted images of brains with different severity of congenital or acquired brain distortion. We compare traditional pipelines and a deep learning model, i.e. a 3D U-Net trained on normal-appearing brains. Unsurprisingly, traditional pipelines completely fail to segment the tissues with strong anatomical distortion. Surprisingly, the 3D U-Net provides useful segmentations that can be a valuable starting point for manual refinement by experts/neuroradiologists.
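For concreteness, the kind of building block a 3D U-Net stacks can be sketched in a few lines of PyTorch; the channel counts, normalization and pooling choices below are illustrative assumptions and do not reproduce the specific network evaluated in the paper.

    # Minimal sketch of one 3D U-Net encoder stage (illustrative only).
    import torch
    import torch.nn as nn

    class DoubleConv3d(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
                nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.block(x)

    stage = nn.Sequential(DoubleConv3d(1, 16), nn.MaxPool3d(2))
    t1_patch = torch.randn(1, 1, 64, 64, 64)   # one single-channel T1-weighted patch
    print(stage(t1_patch).shape)               # torch.Size([1, 16, 32, 32, 32])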
1706.03920
Bernhard Mehlig
M. Rafajlovic, D. Kleinhans, C. Gulliksson, J. Fries, D. Johansson, A. Ardehed, L. Sundqvist, R. T. Pereyra, B. Mehlig, P. R. Jonsson and K. Johannesson
Stochastic mechanisms forming large clones during colonisation of new areas
33 pages, 5 figures, supplementary figures
J. Evol. Biol. 30 (2017) 1544
10.1111/jeb.13124
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In species reproducing both sexually and asexually clones are often more common in recently established populations. Earlier studies have suggested that this pattern arises from natural selection favouring asexual recruitment in young populations. Alternatively, as we show here, this pattern may result from stochastic processes during species-range expansions. We model a dioecious species expanding into a new area in which all individuals are capable of both sexual and asexual reproduction, and all individuals have equal survival rates and dispersal distances. Even under conditions that eventually favour sexual recruitment, colonisation starts with an asexual wave. Long after colonisation is completed, a sexual wave erodes clonal dominance. If individuals reproduce more than one season, and with only local dispersal, a few large clones typically dominate for thousands of reproductive seasons. Adding occasional long-distance dispersal, more dominant clones emerge, but they persist for a shorter period of time. The general mechanism involved is simple: edge effects at the expansion front favour asexual (uniparental) recruitment where potential mates are rare. Specifically, our stochastic model makes detailed predictions different from a selection model, and comparing these with empirical data from a postglacially established seaweed species (Fucus radicans) shows that in this case a stochastic mechanism is strongly supported.
[ { "created": "Tue, 13 Jun 2017 06:45:57 GMT", "version": "v1" } ]
2017-12-06
[ [ "Rafajlovic", "M.", "" ], [ "Kleinhans", "D.", "" ], [ "Gulliksson", "C.", "" ], [ "Fries", "J.", "" ], [ "Johansson", "D.", "" ], [ "Ardehed", "A.", "" ], [ "Sundqvist", "L.", "" ], [ "Pereyra", "R. T.", "" ], [ "Mehlig", "B.", "" ], [ "Jonsson", "P. R.", "" ], [ "Johannesson", "K.", "" ] ]
In species reproducing both sexually and asexually clones are often more common in recently established populations. Earlier studies have suggested that this pattern arises from natural selection favouring asexual recruitment in young populations. Alternatively, as we show here, this pattern may result from stochastic processes during species-range expansions. We model a dioecious species expanding into a new area in which all individuals are capable of both sexual and asexual reproduction, and all individuals have equal survival rates and dispersal distances. Even under conditions that eventually favour sexual recruitment, colonisation starts with an asexual wave. Long after colonisation is completed, a sexual wave erodes clonal dominance. If individuals reproduce more than one season, and with only local dispersal, a few large clones typically dominate for thousands of reproductive seasons. Adding occasional long-distance dispersal, more dominant clones emerge, but they persist for a shorter period of time. The general mechanism involved is simple: edge effects at the expansion front favour asexual (uniparental) recruitment where potential mates are rare. Specifically, our stochastic model makes detailed predictions different from a selection model, and comparing these with empirical data from a postglacially established seaweed species (Fucus radicans) shows that in this case a stochastic mechanism is strongly supported.
1903.07276
Nancy Forde
Michael W.H. Kirkness, Kathrin Lehmann and Nancy R. Forde
Mechanics and Structural Stability of the Collagen Triple Helix
Review article
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The primary building block of the body is collagen, which is found in the extracellular matrix and in many stress-bearing tissues such as tendon and cartilage. It provides elasticity and support to cells and tissues while influencing biological pathways including cell signaling, motility and differentiation. Collagen's unique triple helical structure is thought to impart mechanical stability. However, detailed experimental studies on its molecular mechanics have been only recently emerging. Here, we review the treatment of the triple helix as a homogeneous flexible rod, including bend (standard worm-like chain model), twist, and stretch deformations, and the assumption of backbone linearity. Additionally, we discuss protein-specific properties of the triple helix including sequence dependence, and relate single-molecule mechanics to collagen's physiological context.
[ { "created": "Mon, 18 Mar 2019 07:00:50 GMT", "version": "v1" }, { "created": "Thu, 25 Jul 2019 18:59:36 GMT", "version": "v2" } ]
2019-07-29
[ [ "Kirkness", "Michael W. H.", "" ], [ "Lehmann", "Kathrin", "" ], [ "Forde", "Nancy R.", "" ] ]
The primary building block of the body is collagen, which is found in the extracellular matrix and in many stress-bearing tissues such as tendon and cartilage. It provides elasticity and support to cells and tissues while influencing biological pathways including cell signaling, motility and differentiation. Collagen's unique triple helical structure is thought to impart mechanical stability. However, detailed experimental studies on its molecular mechanics have been only recently emerging. Here, we review the treatment of the triple helix as a homogeneous flexible rod, including bend (standard worm-like chain model), twist, and stretch deformations, and the assumption of backbone linearity. Additionally, we discuss protein-specific properties of the triple helix including sequence dependence, and relate single-molecule mechanics to collagen's physiological context.
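For reference, the standard worm-like chain description mentioned above assigns the chain a bending energy governed by a single persistence length $\ell_p$, which also sets the decay of tangent-tangent correlations:

    E_{\mathrm{bend}} \;=\; \frac{k_B T\,\ell_p}{2}\int_0^L \left|\frac{\partial \hat{t}(s)}{\partial s}\right|^2 \mathrm{d}s,
    \qquad
    \langle \hat{t}(s)\cdot\hat{t}(0)\rangle \;=\; e^{-s/\ell_p},

where $\hat{t}(s)$ is the unit tangent at arc length $s$ and $L$ is the contour length. Extensions with twist and stretch moduli add analogous quadratic terms.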
2406.14246
Maren Philipps
Maren Philipps, Antonia K\"orner, Jakob Vanhoefer, Dilan Pathirana, Jan Hasenauer
Non-Negative Universal Differential Equations With Applications in Systems Biology
6 pages, This work has been submitted to IFAC for possible publication. Initial submission was March 18, 2024
null
null
null
q-bio.QM cs.LG math.DS stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Universal differential equations (UDEs) leverage the respective advantages of mechanistic models and artificial neural networks and combine them into one dynamic model. However, these hybrid models can suffer from unrealistic solutions, such as negative values for biochemical quantities. We present non-negative UDEs (nUDEs), a constrained UDE variant that guarantees non-negative values. Furthermore, we explore regularisation techniques to improve generalisation and interpretability of UDEs.
[ { "created": "Thu, 20 Jun 2024 12:14:09 GMT", "version": "v1" } ]
2024-06-21
[ [ "Philipps", "Maren", "" ], [ "Körner", "Antonia", "" ], [ "Vanhoefer", "Jakob", "" ], [ "Pathirana", "Dilan", "" ], [ "Hasenauer", "Jan", "" ] ]
Universal differential equations (UDEs) leverage the respective advantages of mechanistic models and artificial neural networks and combine them into one dynamic model. However, these hybrid models can suffer from unrealistic solutions, such as negative values for biochemical quantities. We present non-negative UDEs (nUDEs), a constrained UDE variant that guarantees non-negative values. Furthermore, we explore regularisation techniques to improve generalisation and interpretability of UDEs.
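One simple way to obtain a non-negativity guarantee is to let the learned term act multiplicatively on the state, so that zero states remain zero; the sketch below uses this construction as an illustrative assumption and is not the authors' nUDE formulation or code.

    # Illustrative sketch: x' = f_mech(x) + x * NN(x) keeps x >= 0 invariant.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = 0.1 * rng.normal(size=(8, 2)), np.zeros(8)
    W2, b2 = 0.1 * rng.normal(size=(2, 8)), np.zeros(2)

    def nn_term(x):                          # tiny untrained feed-forward net
        return W2 @ np.tanh(W1 @ x + b1) + b2

    def f_mech(x):                           # known mechanistic part: A <-> B
        k1, k2 = 1.0, 0.5
        return np.array([-k1 * x[0] + k2 * x[1], k1 * x[0] - k2 * x[1]])

    def rhs(x):
        return f_mech(x) + x * nn_term(x)    # multiplicative neural correction

    x, dt = np.array([1.0, 0.0]), 0.01
    for _ in range(1000):                    # explicit Euler, clamped as a guard
        x = np.maximum(x + dt * rhs(x), 0.0)
    print(x)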
1805.00757
Ricardo Ruiz Baier I
Ricardo Ruiz Baier, Alessio Gizzi, Alessandro Loppini, Christian Cherubini, Simonetta Filippi
Modelling thermo-electro-mechanical effects in orthotropic cardiac tissue
null
Communications in Computational Physics (2019)
10.4208/cicp.OA-2018-0253
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce a new mathematical model for the active contraction of cardiac muscle, featuring different thermo-electric and nonlinear conductivity properties. The passive hyperelastic response of the tissue is described by an orthotropic exponential model, whereas the ionic activity dictates active contraction incorporated through the concept of orthotropic active strain. We use a fully incompressible formulation, and the generated strain modifies directly the conductivity mechanisms in the medium through the pull-back transformation. We also investigate the influence of thermo-electric effects in the onset of multiphysics emergent spatiotemporal dynamics, using nonlinear diffusion. It turns out that these ingredients have a key role in reproducing pathological chaotic dynamics such as ventricular fibrillation during inflammatory events, for instance. The specific structure of the governing equations suggests to cast the problem in mixed-primal form and we write it in terms of Kirchhoff stress, displacements, solid pressure, electric potential, activation generation, and ionic variables. We also propose a new mixed-primal finite element method for its numerical approximation, and we use it to explore the properties of the model and to assess the importance of coupling terms, by means of a few computational experiments in 3D.
[ { "created": "Wed, 2 May 2018 12:06:57 GMT", "version": "v1" }, { "created": "Fri, 20 Jul 2018 10:18:16 GMT", "version": "v2" }, { "created": "Tue, 6 Nov 2018 15:49:40 GMT", "version": "v3" } ]
2019-05-02
[ [ "Baier", "Ricardo Ruiz", "" ], [ "Gizzi", "Alessio", "" ], [ "Loppini", "Alessandro", "" ], [ "Cherubini", "Christian", "" ], [ "Filippi", "Simonetta", "" ] ]
In this paper we introduce a new mathematical model for the active contraction of cardiac muscle, featuring different thermo-electric and nonlinear conductivity properties. The passive hyperelastic response of the tissue is described by an orthotropic exponential model, whereas the ionic activity dictates active contraction incorporated through the concept of orthotropic active strain. We use a fully incompressible formulation, and the generated strain modifies directly the conductivity mechanisms in the medium through the pull-back transformation. We also investigate the influence of thermo-electric effects in the onset of multiphysics emergent spatiotemporal dynamics, using nonlinear diffusion. It turns out that these ingredients have a key role in reproducing pathological chaotic dynamics such as ventricular fibrillation during inflammatory events, for instance. The specific structure of the governing equations suggests to cast the problem in mixed-primal form and we write it in terms of Kirchhoff stress, displacements, solid pressure, electric potential, activation generation, and ionic variables. We also propose a new mixed-primal finite element method for its numerical approximation, and we use it to explore the properties of the model and to assess the importance of coupling terms, by means of a few computational experiments in 3D.
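For readers unfamiliar with the active strain concept, the underlying kinematic decomposition, written here in a generic orthotropic form that may differ in detail from the paper's exact ansatz, splits the deformation gradient into an elastic and an activation part:

    \mathbf{F} = \mathbf{F}_E\,\mathbf{F}_A,\qquad
    \mathbf{F}_A = \mathbf{I} + \gamma_f\,\mathbf{f}_0\otimes\mathbf{f}_0
                 + \gamma_s\,\mathbf{s}_0\otimes\mathbf{s}_0
                 + \gamma_n\,\mathbf{n}_0\otimes\mathbf{n}_0,

where $\mathbf{f}_0,\mathbf{s}_0,\mathbf{n}_0$ are the fibre, sheet and normal directions and the activation functions $\gamma_f,\gamma_s,\gamma_n$ are driven by the ionic variables.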
1903.04925
Angana Chakraborty
Angana Chakraborty and Sanghamitra Bandyopadhyay
conLSH: Context based Locality Sensitive Hashing for Mapping of noisy SMRT Reads
arXiv admin note: text overlap with arXiv:1705.03933
null
null
null
q-bio.GN cs.DS cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single Molecule Real-Time (SMRT) sequencing is a recent advancement of next-generation sequencing technology developed by Pacific Biosciences (PacBio). It comes with an explosion of long and noisy reads, demanding cutting-edge research to get the most out of it. To deal with the high error probability of SMRT data, a novel contextual Locality Sensitive Hashing (conLSH) based algorithm is proposed in this article, which can effectively align the noisy SMRT reads to the reference genome. Here, sequences are hashed together based not only on their closeness, but also on similarity of context. The algorithm has an $\mathcal{O}(n^{\rho+1})$ space requirement, where $n$ is the number of sequences in the corpus and $\rho$ is a constant. The indexing time and querying time are bounded by $\mathcal{O}( \frac{n^{\rho+1} \cdot \ln n}{\ln \frac{1}{P_2}})$ and $\mathcal{O}(n^\rho)$ respectively, where $P_2 > 0$ is a probability value. This algorithm is particularly useful for retrieving similar sequences, a widely used task in biology. The proposed conLSH based aligner is compared with rHAT, popularly used for aligning SMRT reads, and is found to comprehensively beat it in speed as well as in memory requirements. In particular, it takes approximately $24.2\%$ less processing time, while saving about $70.3\%$ in peak memory requirement for the H. sapiens PacBio dataset.
[ { "created": "Mon, 11 Mar 2019 17:49:01 GMT", "version": "v1" } ]
2019-03-13
[ [ "Chakraborty", "Angana", "" ], [ "Bandyopadhyay", "Sanghamitra", "" ] ]
Single Molecule Real-Time (SMRT) sequencing is a recent advancement of next-generation sequencing technology developed by Pacific Biosciences (PacBio). It comes with an explosion of long and noisy reads, demanding cutting-edge research to get the most out of it. To deal with the high error probability of SMRT data, a novel contextual Locality Sensitive Hashing (conLSH) based algorithm is proposed in this article, which can effectively align the noisy SMRT reads to the reference genome. Here, sequences are hashed together based not only on their closeness, but also on similarity of context. The algorithm has an $\mathcal{O}(n^{\rho+1})$ space requirement, where $n$ is the number of sequences in the corpus and $\rho$ is a constant. The indexing time and querying time are bounded by $\mathcal{O}( \frac{n^{\rho+1} \cdot \ln n}{\ln \frac{1}{P_2}})$ and $\mathcal{O}(n^\rho)$ respectively, where $P_2 > 0$ is a probability value. This algorithm is particularly useful for retrieving similar sequences, a widely used task in biology. The proposed conLSH based aligner is compared with rHAT, popularly used for aligning SMRT reads, and is found to comprehensively beat it in speed as well as in memory requirements. In particular, it takes approximately $24.2\%$ less processing time, while saving about $70.3\%$ in peak memory requirement for the H. sapiens PacBio dataset.
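The core idea of hashing reads together by local context can be conveyed with a toy sketch; the context width, the sampled positions and the plain Python hash below are illustrative choices and not the published conLSH construction.

    # Toy sketch of context-aware locality sensitive hashing of reads.
    import random

    def conlsh_signature(seq, positions, context=1):
        # hash each sampled base together with its neighbours ("context"),
        # so reads with similar local contexts tend to collide
        parts = []
        for p in positions:
            lo, hi = max(0, p - context), min(len(seq), p + context + 1)
            parts.append(seq[lo:hi])
        return hash(tuple(parts))

    random.seed(1)
    reads = {"r1": "ACGTTGACCTGA", "r2": "ACGTTGACGTGA", "r3": "TTTTGGGGCCCC"}
    positions = sorted(random.sample(range(12), 4))

    buckets = {}
    for name, seq in reads.items():
        buckets.setdefault(conlsh_signature(seq, positions), []).append(name)
    print(buckets)   # similar reads r1 and r2 are likely to share a bucket

In the real aligner, several such signatures would be combined and the resulting buckets used to restrict candidate positions on the reference genome.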
2011.08632
Thomas G\"otz
Thomas G\"otz, Silja Mohrmann, Robert Rockenfeller, Moritz Sch\"afer and Karunia Putra Wijaya
Calculation of a local COVID-19 reproduction number for the northern Rhineland-Palatinate
16 pages, 23 figures, in German
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Since the beginning of the corona pandemic in March 2020, various parameters for describing the spread of the disease have been specified for Germany in addition to the daily infection figures (new infections and total infections), which are also used for political decisions. In addition to excess mortality and the weekly incidence, these include the doubling time $T_2$ and the reproduction number $R_t$. For the latter, various estimates can be found on the website of the Robert-Koch-Institute, see \cite{EstR:RKI}, which are calculated from the case numbers for all of Germany; local differences are not taken into account here. In the present article, the calculations of the RKI on a local level are examined using the example of northern Rhineland-Palatinate and its districts. Here, not the reporting date but the onset of illness is used as a reference for the calculation of $R_t$. For cases where the onset of illness is not known, an adjusted generalized extreme value distribution (GEV) is first fitted to the data for which the reporting delay (difference between the onset of illness and the reporting date) is available and examined for further characteristics such as local as well as demographic differences. This GEV distribution is then used to calculate the reporting delays of incomplete data points. The calculation of the daily value of $R_t$ between the end of February and the end of October showed a similar course of the reproductive rate compared to the nationwide figures. Expectably larger statistical fluctuations were observed in the summer, mainly due to lower case numbers. The values for northern Rhineland-Palatinate have been consistently above $1$ since about mid-September. The calculations can also be transferred to other regions and administrative districts.
[ { "created": "Tue, 17 Nov 2020 13:52:15 GMT", "version": "v1" } ]
2020-11-18
[ [ "Götz", "Thomas", "" ], [ "Mohrmann", "Silja", "" ], [ "Rockenfeller", "Robert", "" ], [ "Schäfer", "Moritz", "" ], [ "Wijaya", "Karunia Putra", "" ] ]
Since the beginning of the corona pandemic in March 2020, various parameters for describing the spread of the disease have been specified for Germany in addition to the daily infection figures (new infections and total infections), which are also used for political decisions. In addition to excess mortality and the weekly incidence, these include the doubling time $T_2$ and the reproduction number $R_t$. For the latter, various estimates can be found on the website of the Robert-Koch-Institute, see \cite{EstR:RKI}, which are calculated from the case numbers for all of Germany; local differences are not taken into account here. In the present article, the calculations of the RKI on a local level are examined using the example of northern Rhineland-Palatinate and its districts. Here, not the reporting date but the onset of illness is used as a reference for the calculation of $R_t$. For cases where the onset of illness is not known, an adjusted generalized extreme value distribution (GEV) is first fitted to the data for which the reporting delay (difference between the onset of illness and the reporting date) is available and examined for further characteristics such as local as well as demographic differences. This GEV distribution is then used to calculate the reporting delays of incomplete data points. The calculation of the daily value of $R_t$ between the end of February and the end of October showed a similar course of the reproductive rate compared to the nationwide figures. Expectably larger statistical fluctuations were observed in the summer, mainly due to lower case numbers. The values for northern Rhineland-Palatinate have been consistently above $1$ since about mid-September. The calculations can also be transferred to other regions and administrative districts.
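The imputation step for missing onset dates can be sketched as follows; the toy delay data, the use of scipy and the rounding rule are illustrative assumptions, not the authors' code.

    # Illustrative sketch: fit a generalized extreme value distribution to the
    # observed reporting delays (days between onset and report) and draw delays
    # for cases whose onset of illness is unknown.
    import numpy as np
    from scipy.stats import genextreme

    observed_delays = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 6, 7, 9, 12])  # toy data
    shape, loc, scale = genextreme.fit(observed_delays)

    n_missing = 5
    draws = genextreme.rvs(shape, loc=loc, scale=scale, size=n_missing, random_state=0)
    imputed_delays = np.clip(np.round(draws), 0, None).astype(int)
    print(imputed_delays)   # estimated onset = reporting date - imputed delay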
1808.04443
Chuanqi Tan
Chuanqi Tan, Fuchun Sun, Wenchang Zhang, Shaobo Liu and Chunfang Liu
Spatial and Spectral Features Fusion for EEG Classification during Motor Imagery in BCI
International Conference on Biomedical and Health Informatics (BHI 2017)
null
null
null
q-bio.QM eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A brain-computer interface (BCI) is the only way for some patients to communicate with the outside world, providing a direct control channel between the brain and external devices. As a non-invasive interface, scalp electroencephalography (EEG) has significant potential to be a major input signal for future BCI systems. Traditional methods focus on only a particular feature in the EEG signal, which limits the practical applications of EEG-based BCIs. In this paper, we propose an algorithm for EEG classification with the ability to fuse multiple features. First, we use the common spatial pattern (CSP) as the spatial feature and wavelet coefficients as the spectral feature. Second, we fuse these features in an orchestrated way to improve the classification accuracy. Our algorithm is applied to dataset IVa from BCI Competition III. The experimental results indicate that our algorithm performs better than traditional methods.
[ { "created": "Mon, 6 Aug 2018 07:52:06 GMT", "version": "v1" } ]
2018-08-15
[ [ "Tan", "Chuanqi", "" ], [ "Sun", "Fuchun", "" ], [ "Zhang", "Wenchang", "" ], [ "Liu", "Shaobo", "" ], [ "Liu", "Chunfang", "" ] ]
A brain-computer interface (BCI) is the only way for some patients to communicate with the outside world, providing a direct control channel between the brain and external devices. As a non-invasive interface, scalp electroencephalography (EEG) has significant potential to be a major input signal for future BCI systems. Traditional methods focus on only a particular feature in the EEG signal, which limits the practical applications of EEG-based BCIs. In this paper, we propose an algorithm for EEG classification with the ability to fuse multiple features. First, we use the common spatial pattern (CSP) as the spatial feature and wavelet coefficients as the spectral feature. Second, we fuse these features in an orchestrated way to improve the classification accuracy. Our algorithm is applied to dataset IVa from BCI Competition III. The experimental results indicate that our algorithm performs better than traditional methods.
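A compact sketch of the two feature families being fused is given below; the toy data, the choice of the db4 wavelet, and plain concatenation as the fusion step are illustrative stand-ins for the paper's actual pipeline.

    # Illustrative sketch: CSP spatial features plus wavelet spectral features.
    import numpy as np
    from scipy.linalg import eigh
    import pywt

    rng = np.random.default_rng(0)
    X1 = rng.standard_normal((20, 8, 256))   # trials x channels x samples, class 1
    X2 = rng.standard_normal((20, 8, 256))   # class 2

    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    _, W = eigh(C1, C1 + C2)                            # CSP filters via generalized eig
    csp_feat = np.log(np.var(W.T @ X1[0], axis=1))      # spatial features, one trial

    coeffs = pywt.wavedec(X1[0, 0], "db4", level=4)     # spectral features, one channel
    spectral_feat = np.concatenate([c[:4] for c in coeffs])

    fused = np.concatenate([csp_feat, spectral_feat])   # naive concatenation fusion
    print(fused.shape)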
1601.03412
Jana Gevertz
Ami B. Shah, Katarzyna A. Rejniak, Jana L. Gevertz
Limiting the Development of Anti-Cancer Drug Resistance in a Spatial Model of Micrometastases
25 pages, 8 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While chemoresistance in primary tumors is well-studied, much less is known about the influence of systemic chemotherapy on the development of drug resistance at metastatic sites. In this work, we use a hybrid spatial model of tumor response to a DNA damaging drug to study how the development of chemoresistance in micrometastases depends on the drug dosing schedule. We separately consider cell populations that harbor pre-existing resistance to the drug, and those that acquire resistance during the course of treatment. For each of these independent scenarios, we consider one hypothetical cell line that is responsive to metronomic chemotherapy, and another that with high probability cannot be eradicated by a metronomic protocol. Motivated by experimental work on ovarian cancer xenografts, we consider all possible combinations of a one week treatment protocol, repeated for three weeks, and constrained by the total weekly drug dose. Simulations reveal a small number of fractionated-dose protocols that are at least as effective as metronomic therapy in eradicating micrometastases with acquired resistance (weak or strong), while also being at least as effective on those that harbor weakly pre-existing resistant cells. Given the responsiveness of very different theoretical cell lines to these few fractionated-dose protocols, these may represent more effective ways to schedule chemotherapy with the goal of limiting metastatic tumor progression.
[ { "created": "Wed, 13 Jan 2016 21:19:32 GMT", "version": "v1" }, { "created": "Wed, 2 Mar 2016 21:41:24 GMT", "version": "v2" } ]
2016-03-04
[ [ "Shah", "Ami B.", "" ], [ "Rejniak", "Katarzyna A.", "" ], [ "Gevertz", "Jana L.", "" ] ]
While chemoresistance in primary tumors is well-studied, much less is known about the influence of systemic chemotherapy on the development of drug resistance at metastatic sites. In this work, we use a hybrid spatial model of tumor response to a DNA damaging drug to study how the development of chemoresistance in micrometastases depends on the drug dosing schedule. We separately consider cell populations that harbor pre-existing resistance to the drug, and those that acquire resistance during the course of treatment. For each of these independent scenarios, we consider one hypothetical cell line that is responsive to metronomic chemotherapy, and another that with high probability cannot be eradicated by a metronomic protocol. Motivated by experimental work on ovarian cancer xenografts, we consider all possible combinations of a one week treatment protocol, repeated for three weeks, and constrained by the total weekly drug dose. Simulations reveal a small number of fractionated-dose protocols that are at least as effective as metronomic therapy in eradicating micrometastases with acquired resistance (weak or strong), while also being at least as effective on those that harbor weakly pre-existing resistant cells. Given the responsiveness of very different theoretical cell lines to these few fractionated-dose protocols, these may represent more effective ways to schedule chemotherapy with the goal of limiting metastatic tumor progression.
1710.00869
Nick Wasylyshyn
Nick Wasylyshyn (1, 2), Brett Hemenway (2), Javier O. Garcia (1, 2), Christopher N. Cascio (2), Matthew Brook O'Donnell (2), C. Raymond Bingham (3), Bruce Simons-Morton (4), Jean M. Vettel (1, 2, 5), Emily B. Falk (2) ((1) US Army Research Laboratory, (2) University of Pennsylvania, (3) University of Michigan Transportation Research Institute, (4) Eunice Kennedy Shriver National Institute on Child Health and Human Development, (5) University of California Santa Barbara)
Global Brain Dynamics During Social Exclusion Predict Subsequent Behavioral Conformity
32 pages, 5 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Individuals react differently to social experiences; for example, people who are more sensitive to negative social experiences, such as being excluded, may be more likely to adapt their behavior to fit in with others. We examined whether functional brain connectivity during social exclusion in the fMRI scanner can be used to predict subsequent conformity to peer norms. Adolescent males (N = 57) completed a two-part study on teen driving risk: a social exclusion task (Cyberball) during an fMRI session and a subsequent driving simulator session in which they drove alone and in the presence of a peer who expressed risk-averse or risk-accepting driving norms. We computed the difference in functional connectivity between social exclusion and social inclusion from each node in the brain to nodes in two brain networks, one previously associated with mentalizing (medial prefrontal cortex, temporoparietal junction, precuneus, temporal poles) and another with social pain (anterior cingulate cortex, anterior insula). Using cross-validated machine learning, this measure of global network connectivity during exclusion predicts the extent of conformity to peer pressure during driving in the subsequent experimental session. These findings extend our understanding of how global neural dynamics guide social behavior, revealing functional network activity that captures individual differences.
[ { "created": "Mon, 2 Oct 2017 19:08:41 GMT", "version": "v1" } ]
2017-10-04
[ [ "Wasylyshyn", "Nick", "" ], [ "Hemenway", "Brett", "" ], [ "Garcia", "Javier O.", "" ], [ "Cascio", "Christopher N.", "" ], [ "O'Donnell", "Matthew Brook", "" ], [ "Bingham", "C. Raymond", "" ], [ "Simons-Morton", "Bruce", "" ], [ "Vettel", "Jean M.", "" ], [ "Falk", "Emily B.", "" ] ]
Individuals react differently to social experiences; for example, people who are more sensitive to negative social experiences, such as being excluded, may be more likely to adapt their behavior to fit in with others. We examined whether functional brain connectivity during social exclusion in the fMRI scanner can be used to predict subsequent conformity to peer norms. Adolescent males (N = 57) completed a two-part study on teen driving risk: a social exclusion task (Cyberball) during an fMRI session and a subsequent driving simulator session in which they drove alone and in the presence of a peer who expressed risk-averse or risk-accepting driving norms. We computed the difference in functional connectivity between social exclusion and social inclusion from each node in the brain to nodes in two brain networks, one previously associated with mentalizing (medial prefrontal cortex, temporoparietal junction, precuneus, temporal poles) and another with social pain (anterior cingulate cortex, anterior insula). Using cross-validated machine learning, this measure of global network connectivity during exclusion predicts the extent of conformity to peer pressure during driving in the subsequent experimental session. These findings extend our understanding of how global neural dynamics guide social behavior, revealing functional network activity that captures individual differences.
2309.07271
Yujiang Wang
Christopher Thornton, Mariella Panagiotopoulou, Fahmida A Chowdhury, Beate Diehl, John S Duncan, Sarah J Gascoigne, Guillermo Besne, Andrew W McEvoy, Anna Miserocchi, Billy C Smith, Jane de Tisi, Peter N Taylor, Yujiang Wang
Diminished circadian and ultradian rhythms of human brain activity in pathological tissue in vivo
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chronobiological rhythms, such as the circadian rhythm, have long been linked to neurological disorders, but it is currently unknown how pathological processes affect the expression of biological rhythms in the brain. Here, we use the unique opportunity of long-term, continuous intracranially recorded EEG from 38 patients (totalling 6338 hours) to delineate circadian (daily) and ultradian (minute to hourly) rhythms in different brain regions. We show that functional circadian and ultradian rhythms are diminished in pathological tissue, independent of regional variations. We further demonstrate that these diminished rhythms are persistent in time, regardless of load or occurrence of pathological events. These findings provide evidence that brain pathology is functionally associated with persistently diminished chronobiological rhythms in vivo in humans, independent of regional variations or pathological events. Future work interacting with, and restoring, these modulatory chronobiological rhythms may allow for novel therapies.
[ { "created": "Wed, 13 Sep 2023 19:21:16 GMT", "version": "v1" }, { "created": "Wed, 7 Aug 2024 17:55:13 GMT", "version": "v2" } ]
2024-08-08
[ [ "Thornton", "Christopher", "" ], [ "Panagiotopoulou", "Mariella", "" ], [ "Chowdhury", "Fahmida A", "" ], [ "Diehl", "Beate", "" ], [ "Duncan", "John S", "" ], [ "Gascoigne", "Sarah J", "" ], [ "Besne", "Guillermo", "" ], [ "McEvoy", "Andrew W", "" ], [ "Miserocchi", "Anna", "" ], [ "Smith", "Billy C", "" ], [ "de Tisi", "Jane", "" ], [ "Taylor", "Peter N", "" ], [ "Wang", "Yujiang", "" ] ]
Chronobiological rhythms, such as the circadian rhythm, have long been linked to neurological disorders, but it is currently unknown how pathological processes affect the expression of biological rhythms in the brain. Here, we use the unique opportunity of long-term, continuous intracranially recorded EEG from 38 patients (totalling 6338 hours) to delineate circadian (daily) and ultradian (minute to hourly) rhythms in different brain regions. We show that functional circadian and ultradian rhythms are diminished in pathological tissue, independent of regional variations. We further demonstrate that these diminished rhythms are persistent in time, regardless of load or occurrence of pathological events. These findings provide evidence that brain pathology is functionally associated with persistently diminished chronobiological rhythms in vivo in humans, independent of regional variations or pathological events. Future work interacting with, and restoring, these modulatory chronobiological rhythms may allow for novel therapies.
2303.14986
Niharika S. D'Souza
Niharika S. D'Souza and Archana Venkataraman
mSPD-NN: A Geometrically Aware Neural Framework for Biomarker Discovery from Functional Connectomics Manifolds
Accepted into IPMI 2023
null
null
null
q-bio.QM cs.LG cs.NE eess.SP q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Connectomics has emerged as a powerful tool in neuroimaging and has spurred recent advancements in statistical and machine learning methods for connectivity data. Despite connectomes inhabiting a matrix manifold, most analytical frameworks ignore the underlying data geometry. This is largely because simple operations, such as mean estimation, do not have easily computable closed-form solutions. We propose a geometrically aware neural framework for connectomes, i.e., the mSPD-NN, designed to estimate the geodesic mean of a collection of symmetric positive definite (SPD) matrices. The mSPD-NN is composed of bilinear fully connected layers with tied weights and utilizes a novel loss function to optimize the matrix-normal equation arising from Fr\'echet mean estimation. Via experiments on synthetic data, we demonstrate the efficacy of our mSPD-NN against common alternatives for SPD mean estimation, providing competitive performance in terms of scalability and robustness to noise. We illustrate the real-world flexibility of the mSPD-NN in multiple experiments on rs-fMRI data and demonstrate that it uncovers stable biomarkers associated with subtle network differences among patients with ADHD-ASD comorbidities and healthy controls.
[ { "created": "Mon, 27 Mar 2023 08:30:11 GMT", "version": "v1" } ]
2023-03-28
[ [ "D'Souza", "Niharika S.", "" ], [ "Venkataraman", "Archana", "" ] ]
Connectomics has emerged as a powerful tool in neuroimaging and has spurred recent advancements in statistical and machine learning methods for connectivity data. Despite connectomes inhabiting a matrix manifold, most analytical frameworks ignore the underlying data geometry. This is largely because simple operations, such as mean estimation, do not have easily computable closed-form solutions. We propose a geometrically aware neural framework for connectomes, i.e., the mSPD-NN, designed to estimate the geodesic mean of a collection of symmetric positive definite (SPD) matrices. The mSPD-NN is composed of bilinear fully connected layers with tied weights and utilizes a novel loss function to optimize the matrix-normal equation arising from Fr\'echet mean estimation. Via experiments on synthetic data, we demonstrate the efficacy of our mSPD-NN against common alternatives for SPD mean estimation, providing competitive performance in terms of scalability and robustness to noise. We illustrate the real-world flexibility of the mSPD-NN in multiple experiments on rs-fMRI data and demonstrate that it uncovers stable biomarkers associated with subtle network differences among patients with ADHD-ASD comorbidities and healthy controls.
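The quantity the network is trained to produce, a geodesic (Fréchet) mean of SPD matrices, has a classical log-Euclidean approximation that is useful as a baseline; the sketch below is that baseline, not the mSPD-NN itself, and the random SPD construction is purely illustrative.

    # Classical baseline (not the mSPD-NN): log-Euclidean mean of SPD matrices.
    import numpy as np
    from scipy.linalg import logm, expm

    def log_euclidean_mean(mats):
        logs = [np.real(logm(m)) for m in mats]   # discard numerical imaginary dust
        return np.real(expm(np.mean(logs, axis=0)))

    rng = np.random.default_rng(0)
    def random_spd(n=4):
        a = rng.standard_normal((n, n))
        return a @ a.T + n * np.eye(n)

    connectomes = [random_spd() for _ in range(10)]
    mean_spd = log_euclidean_mean(connectomes)
    print(np.linalg.eigvalsh(mean_spd))   # all eigenvalues positive, i.e. SPD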
1605.07793
Mark Leake
Adam J. M. Wollman, Aisha H. Syeda, Peter McGlynn, Mark C. Leake
Single-molecule observation of DNA replication repair pathways in E. coli
null
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The method of action of many antibiotics is to interfere with DNA replication - quinolones trap DNA gyrase and topoisomerase proteins onto DNA while metronidazole causes single and double stranded breaks in DNA. To understand how bacteria respond to these drugs, it is important to understand the repair processes utilised when DNA replication is blocked. We have used tandem lac operators inserted into the chromosome bound by fluorescently labelled lac repressors as a model protein block to replication in E. coli. We have used dual-colour, alternating-laser, single-molecule narrowfield microscopy to quantify the amount of operator at the block and simultaneously image fluorescently labelled DNA polymerase. We anticipate use of this system as a quantitative platform to study replication stalling and repair proteins.
[ { "created": "Wed, 25 May 2016 09:21:23 GMT", "version": "v1" } ]
2016-05-26
[ [ "Wollman", "Adam J. M.", "" ], [ "Syeda", "Aisha H.", "" ], [ "McGlynn", "Peter", "" ], [ "Leake", "Mark C.", "" ] ]
The method of action of many antibiotics is to interfere with DNA replication - quinolones trap DNA gyrase and topoisomerase proteins onto DNA while metronidazole causes single and double stranded breaks in DNA. To understand how bacteria respond to these drugs, it is important to understand the repair processes utilised when DNA replication is blocked. We have used tandem lac operators inserted into the chromosome bound by fluorescently labelled lac repressors as a model protein block to replication in E. coli. We have used dual-colour, alternating-laser, single-molecule narrowfield microscopy to quantify the amount of operator at the block and simultaneously image fluorescently labelled DNA polymerase. We anticipate use of this system as a quantitative platform to study replication stalling and repair proteins.
2311.17194
Qian Gao
Qian Gao (1), G. Gurdeniz (1), Giulia Pratico (1), Camilla T. Damsgaard (1), Eduvigis Roldan-Marin (2), M. Pilar Cano (3), Concepcion Sanchez-Moreno (2), Lars O. Dragsted (1) ((1) Department of Nutrition, Exercise and Sports, University of Copenhagen, Copenhagen, Denmark (2) Institute of Food Science, Technology and Nutrition (ICTAN), Spanish National Research Council (CSIC), Madrid, Spain (3) Department of Biotechnology and Food Microbiology, Institute of Food Science Research (CIAL) (CSIC-UAM), Madrid, Spain)
Identification of urinary biomarkers of food intake for onion by untargeted LC-MS metabolomics
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scope: Biomarkers of food intake (BFIs) are useful tools for objective assessment of food intake and compliance. The aim of this study was to discover and identify urinary BFIs for onion. Methods and results: In a randomized controlled cross-over trial, 6 overweight participants (age 24-62 years) consumed meals with 20 g/d onion powder or no onion for 2 weeks. Untargeted UPLC-qTOF-MS metabolic profiling analysis was performed on urine samples and the profiles were analysed by multilevel-PLSDA, modified PLS, and nearest shrunken centroid to select features associated with onion intake. Eight biomarkers were tentatively identified; six of them originated from S-substituted cysteine derivatives such as isoalliin and propiin, which are considered the most specific for onion intake. Most of the biomarkers were completely excreted within 24 hours and no accumulation was observed during 2 weeks indicating their ability to reflect only recent intake of onions. Receiver-operator curves were made to evaluate the performance of individual biomarkers for predicting onion intake. The area under the curve values for these biomarkers ranged from 0.81 to 1. Conclusion: Promising biomarkers of recent onion intake have been identified in human urine. Further studies with complex diets are needed to validate the robustness of these biomarkers.
[ { "created": "Tue, 28 Nov 2023 19:55:01 GMT", "version": "v1" } ]
2023-11-30
[ [ "Gao", "Qian", "" ], [ "Gurdeniz", "G.", "" ], [ "Pratico", "Giulia", "" ], [ "Damsgaard", "Camilla T.", "" ], [ "Roldan-Marin", "Eduvigis", "" ], [ "Cano", "M. Pilar", "" ], [ "Sanchez-Moreno", "Concepcion", "" ], [ "Dragsted", "Lars O.", "" ] ]
Scope: Biomarkers of food intake (BFIs) are useful tools for objective assessment of food intake and compliance. The aim of this study was to discover and identify urinary BFIs for onion. Methods and results: In a randomized controlled cross-over trial, 6 overweight participants (age 24-62 years) consumed meals with 20 g/d onion powder or no onion for 2 weeks. Untargeted UPLC-qTOF-MS metabolic profiling analysis was performed on urine samples and the profiles were analysed by multilevel-PLSDA, modified PLS, and nearest shrunken centroid to select features associated with onion intake. Eight biomarkers were tentatively identified; six of them originated from S-substituted cysteine derivatives such as isoalliin and propiin, which are considered the most specific for onion intake. Most of the biomarkers were completely excreted within 24 hours and no accumulation was observed during 2 weeks indicating their ability to reflect only recent intake of onions. Receiver-operator curves were made to evaluate the performance of individual biomarkers for predicting onion intake. The area under the curve values for these biomarkers ranged from 0.81 to 1. Conclusion: Promising biomarkers of recent onion intake have been identified in human urine. Further studies with complex diets are needed to validate the robustness of these biomarkers.
1808.04322
Dong Xu
Chao Fang, Yi Shang and Dong Xu
MUFold-BetaTurn: A Deep Dense Inception Network for Protein Beta-Turn Prediction
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Beta-turn prediction is useful in protein function studies and experimental design. Although recent approaches using machine-learning techniques such as SVM, neural networks, and K-NN have achieved good results for beta-turn prediction, there is still significant room for improvement. As previous predictors utilized features in a sliding window of 4-20 residues to capture interactions among sequentially neighboring residues, such feature engineering may result in incomplete or biased features, and neglect interactions among long-range residues. Deep neural networks provide a new opportunity to address these issues. Here, we proposed a deep dense inception network (DeepDIN) for beta-turn prediction, which takes advantage of the state-of-the-art deep neural network design of the DenseNet and the inception network. A test on a recent BT6376 benchmark shows that the DeepDIN outperformed the previous best BetaTPred3 significantly in both the overall prediction accuracy and the nine-type beta-turn classification. A tool, called MUFold-BetaTurn, was developed, which is the first beta-turn prediction tool utilizing deep neural networks. The tool can be downloaded at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldBetaTurn/download.html.
[ { "created": "Mon, 13 Aug 2018 16:28:50 GMT", "version": "v1" } ]
2018-08-14
[ [ "Fang", "Chao", "" ], [ "Shang", "Yi", "" ], [ "Xu", "Dong", "" ] ]
Beta-turn prediction is useful in protein function studies and experimental design. Although recent approaches using machine-learning techniques such as SVM, neural networks, and K-NN have achieved good results for beta-turn prediction, there is still significant room for improvement. As previous predictors utilized features in a sliding window of 4-20 residues to capture interactions among sequentially neighboring residues, such feature engineering may result in incomplete or biased features, and neglect interactions among long-range residues. Deep neural networks provide a new opportunity to address these issues. Here, we proposed a deep dense inception network (DeepDIN) for beta-turn prediction, which takes advantage of the state-of-the-art deep neural network design of the DenseNet and the inception network. A test on a recent BT6376 benchmark shows that the DeepDIN outperformed the previous best BetaTPred3 significantly in both the overall prediction accuracy and the nine-type beta-turn classification. A tool, called MUFold-BetaTurn, was developed, which is the first beta-turn prediction tool utilizing deep neural networks. The tool can be downloaded at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldBetaTurn/download.html.
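The flavour of a dense block with inception-style parallel convolutions over per-residue features can be sketched in PyTorch; kernel sizes, channel counts and the 1D formulation are illustrative assumptions and not the released DeepDIN architecture.

    # Illustrative sketch of a dense inception block over a residue sequence.
    import torch
    import torch.nn as nn

    class InceptionUnit(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.b1 = nn.Conv1d(in_ch, out_ch // 2, kernel_size=1)
            self.b3 = nn.Conv1d(in_ch, out_ch // 2, kernel_size=3, padding=1)

        def forward(self, x):                      # parallel branches, concatenated
            return torch.relu(torch.cat([self.b1(x), self.b3(x)], dim=1))

    class DenseInceptionBlock(nn.Module):
        def __init__(self, in_ch, growth, layers=3):
            super().__init__()
            self.units = nn.ModuleList(
                [InceptionUnit(in_ch + i * growth, growth) for i in range(layers)])

        def forward(self, x):
            for unit in self.units:                # dense connectivity
                x = torch.cat([x, unit(x)], dim=1)
            return x

    feats = torch.randn(2, 20, 100)                # batch, per-residue features, length
    print(DenseInceptionBlock(20, growth=16)(feats).shape)   # (2, 68, 100)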
1701.08443
Sebastian Schreiber
Sebastian J. Schreiber
A dynamical trichotomy for structured populations experiencing positive density-dependence in stochastic environments
null
(2017) In: Elaydi S., Hamaya Y., Matsunaga H., P\"otzsche C. (eds) Advances in Difference Equations and Discrete Dynamical Systems. ICDEA 2016. Springer Proceedings in Mathematics & Statistics, vol 212. Springer, Singapore
10.1007/978-981-10-6409-8_3
null
q-bio.PE math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Positive density-dependence occurs when individuals experience increased survivorship, growth, or reproduction with increased population densities. Mechanisms leading to these positive relationships include mate limitation, saturating predation risk, and cooperative breeding and foraging. Individuals within these populations may differ in age, size, or geographic location and thereby structure these populations. Here, I study structured population models accounting for positive density-dependence and environmental stochasticity i.e. random fluctuations in the demographic rates of the population. Under an accessibility assumption (roughly, stochastic fluctuations can lead to populations getting small and large), these models are shown to exhibit a dynamical trichotomy: (i) for all initial conditions, the population goes asymptotically extinct with probability one, (ii) for all positive initial conditions, the population persists and asymptotically exhibits unbounded growth, and (iii) for all positive initial conditions, there is a positive probability of asymptotic extinction and a complementary positive probability of unbounded growth. The main results are illustrated with applications to spatially structured populations with an Allee effect and age-structured populations experiencing mate limitation.
[ { "created": "Sun, 29 Jan 2017 22:21:11 GMT", "version": "v1" } ]
2019-02-12
[ [ "Schreiber", "Sebastian J.", "" ] ]
Positive density-dependence occurs when individuals experience increased survivorship, growth, or reproduction with increased population densities. Mechanisms leading to these positive relationships include mate limitation, saturating predation risk, and cooperative breeding and foraging. Individuals within these populations may differ in age, size, or geographic location and thereby structure these populations. Here, I study structured population models accounting for positive density-dependence and environmental stochasticity i.e. random fluctuations in the demographic rates of the population. Under an accessibility assumption (roughly, stochastic fluctuations can lead to populations getting small and large), these models are shown to exhibit a dynamical trichotomy: (i) for all initial conditions, the population goes asymptotically extinct with probability one, (ii) for all positive initial conditions, the population persists and asymptotically exhibits unbounded growth, and (iii) for all positive initial conditions, there is a positive probability of asymptotic extinction and a complementary positive probability of unbounded growth. The main results are illustrated with applications to spatially structured populations with an Allee effect and age-structured populations experiencing mate limitation.
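Schematically, and in our own notation rather than the paper's, the class of models in question can be written as

    n_{t+1} \;=\; A(\xi_{t+1},\, n_t)\, n_t,

where $n_t$ is the vector of stage or patch abundances, $\xi_{t+1}$ the random environment, and $A(\xi,n)$ a non-negative projection matrix whose entries are non-decreasing in $n$ (positive density-dependence). Mate limitation, for instance, can enter through a fecundity entry of the illustrative form $b(\xi)\,n_a/(h + n_a)$, with $n_a$ the adult abundance and $h$ a half-saturation constant.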
2212.03859
Keith Li Chambers
Keith L Chambers, Mary R Myerscough and Helen M Byrne
A new lipid-structured model to investigate the opposing effects of LDL and HDL on atherosclerotic plaque macrophages
44 pages, 13 figures
null
null
null
q-bio.CB q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Atherosclerotic plaques form in artery walls due to a chronic inflammatory response driven by lipid accumulation. A key component of the inflammatory response is the interaction between monocyte-derived macrophages and extracellular lipid. Although concentrations of low-density lipoprotein (LDL) and high-density lipoprotein (HDL) particles in the blood are known to affect plaque progression, their impact on the lipid load of plaque macrophages remains unexplored. In this paper, we develop a lipid-structured mathematical model to investigate the impact of blood LDL/HDL levels on plaque composition, and lipid distribution in plaque macrophages. A reduced subsystem, derived by summing the equations of the full model, describes the dynamics of biophysical quantities relating to plaque composition (e.g. total number of macrophages, total amount of intracellular lipid). We also derive a continuum approximation of the model to facilitate analysis of the macrophage lipid distribution. The results, which include time-dependent numerical solutions and asymptotic analysis of the unique steady state solution, indicate that plaque lipid content is sensitive to the influx of LDL relative to HDL capacity. The macrophage lipid distribution evolves in a wave-like manner towards an equilibrium profile which may be monotone decreasing, quasi-uniform or unimodal, attaining its maximum value at a non-zero lipid level. Our model also reveals that macrophage uptake may be severely impaired by lipid accumulation. We conclude that lipid accumulation in plaque macrophages may serve as a partial explanation for the defective uptake of apoptotic cells (efferocytosis) often reported in atherosclerotic plaques.
[ { "created": "Wed, 7 Dec 2022 18:57:59 GMT", "version": "v1" } ]
2022-12-08
[ [ "Chambers", "Keith L", "" ], [ "Myerscough", "Mary R", "" ], [ "Byrne", "Helen M", "" ] ]
Atherosclerotic plaques form in artery walls due to a chronic inflammatory response driven by lipid accumulation. A key component of the inflammatory response is the interaction between monocyte-derived macrophages and extracellular lipid. Although concentrations of low-density lipoprotein (LDL) and high-density lipoprotein (HDL) particles in the blood are known to affect plaque progression, their impact on the lipid load of plaque macrophages remains unexplored. In this paper, we develop a lipid-structured mathematical model to investigate the impact of blood LDL/HDL levels on plaque composition, and lipid distribution in plaque macrophages. A reduced subsystem, derived by summing the equations of the full model, describes the dynamics of biophysical quantities relating to plaque composition (e.g. total number of macrophages, total amount of intracellular lipid). We also derive a continuum approximation of the model to facilitate analysis of the macrophage lipid distribution. The results, which include time-dependent numerical solutions and asymptotic analysis of the unique steady state solution, indicate that plaque lipid content is sensitive to the influx of LDL relative to HDL capacity. The macrophage lipid distribution evolves in a wave-like manner towards an equilibrium profile which may be monotone decreasing, quasi-uniform or unimodal, attaining its maximum value at a non-zero lipid level. Our model also reveals that macrophage uptake may be severely impaired by lipid accumulation. We conclude that lipid accumulation in plaque macrophages may serve as a partial explanation for the defective uptake of apoptotic cells (efferocytosis) often reported in atherosclerotic plaques.
2111.07760
Rinaldo Schinazi
Rinaldo B. Schinazi
Evolutionary paths under catastrophes
null
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
We introduce a model to study the impact of catastrophes on evolutionary paths. If we do not allow catastrophes the number of changes in the maximum fitness of a population grows logarithmically with respect to time. Allowing catastrophes (no matter how rare) yields a drastically different behavior. When catastrophes are possible the number of changes in the maximum fitness of the population grows linearly with time. Moreover, the evolutionary paths are a lot less predictable when catastrophes are possible. Our results can be seen as supporting the hypothesis that catastrophes speed up evolution by disrupting dominant species and creating space for new species to emerge and evolve.
[ { "created": "Mon, 15 Nov 2021 13:58:35 GMT", "version": "v1" } ]
2021-11-16
[ [ "Schinazi", "Rinaldo B.", "" ] ]
We introduce a model to study the impact of catastrophes on evolutionary paths. If we do not allow catastrophes the number of changes in the maximum fitness of a population grows logarithmically with respect to time. Allowing catastrophes (no matter how rare) yields a drastically different behavior. When catastrophes are possible the number of changes in the maximum fitness of the population grows linearly with time. Moreover, the evolutionary paths are a lot less predictable when catastrophes are possible. Our results can be seen as supporting the hypothesis that catastrophes speed up evolution by disrupting dominant species and creating space for new species to emerge and evolve.
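The contrast between logarithmic and linear growth of the number of fitness records can be seen in a toy simulation; the catastrophe rule below (reset the incumbent maximum with a small probability each step) is an illustrative stand-in for the paper's model, not its definition.

    # Toy simulation: count changes of the running maximum fitness.
    import random

    def count_records(T, q_catastrophe, seed=0):
        random.seed(seed)
        best, records = float("-inf"), 0
        for _ in range(T):
            if random.random() < q_catastrophe:
                best = float("-inf")          # catastrophe removes the incumbent
            fitness = random.random()         # fitness of the newest type
            if fitness > best:
                best, records = fitness, records + 1
        return records

    for T in (10**3, 10**4, 10**5):
        print(T, count_records(T, 0.0), count_records(T, 0.01))
    # without catastrophes the count grows roughly like log(T);
    # with catastrophes it grows roughly linearly in T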
1803.05357
Yasuaki Kobayashi
Yasuaki Kobayashi, Yusuke Yasugahira, Hiroyuki Kitahata, Mika Watanabe, Ken Natsuga, Masaharu Nagayama
Interplay between epidermal stem cell dynamics and dermal deformations
10 pages, 8 figures
npj Computational Materials 4, 45 (2018)
10.1038/s41524-018-0101-z
null
q-bio.TO nlin.AO q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a particle-based model of self-replicating cells on a deformable substrate composed of the dermis and the basement membrane and investigate the relationship between dermal deformations and stem cell patterning on it. We show that our model reproduces the formation of dermal papillae, protuberances directing from the dermis to the epidermis, and the preferential stem cell distributions on the tips of the dermal papillae, which the basic buckling mechanism fails to explain. We argue that the cell-type-dependent adhesion strength of the cells to the basement membrane is a crucial factor in these patterns.
[ { "created": "Thu, 8 Mar 2018 07:50:29 GMT", "version": "v1" }, { "created": "Mon, 18 Jun 2018 08:41:56 GMT", "version": "v2" } ]
2018-08-22
[ [ "Kobayashi", "Yasuaki", "" ], [ "Yasugahira", "Yusuke", "" ], [ "Kitahata", "Hiroyuki", "" ], [ "Watanabe", "Mika", "" ], [ "Natsuga", "Ken", "" ], [ "Nagayama", "Masaharu", "" ] ]
We introduce a particle-based model of self-replicating cells on a deformable substrate composed of the dermis and the basement membrane and investigate the relationship between dermal deformations and stem cell patterning on it. We show that our model reproduces the formation of dermal papillae, protuberances extending from the dermis towards the epidermis, and the preferential stem cell distributions on the tips of the dermal papillae, which the basic buckling mechanism fails to explain. We argue that the cell-type-dependent adhesion strength of the cells to the basement membrane is a crucial factor in these patterns.
q-bio/0505030
Guido Tiana
A. Amatori, G. Tiana, L. Sutto, J.Ferkinghoff-Borg, A. Trovato and R. A. Broglia
Design of amino acid sequences to fold into C_alpha-model proteins
null
null
10.1063/1.1992447
null
q-bio.BM
null
In order to extend the results obtained with minimal lattice models to more realistic systems, we study a model where proteins are described as a chain of 20 kinds of structureless amino acids moving in a continuum space and interacting through a contact potential controlled by a 20x20 quenched random matrix. The goal of the present work is to design and characterize amino acid sequences folding to the SH3 conformation, a 60-residue recognition domain common to many regulatory proteins. We show that a number of sequences can fold, starting from a random conformation, to within a distance root mean square deviation (dRMSD) of 2.6A from the native state. Good folders are those sequences displaying in the native conformation an energy lower than a sequence-independent threshold energy.
[ { "created": "Mon, 16 May 2005 15:11:00 GMT", "version": "v1" } ]
2009-11-11
[ [ "Amatori", "A.", "" ], [ "Tiana", "G.", "" ], [ "Sutto", "L.", "" ], [ "Ferkinghoff-Borg", "J.", "" ], [ "Trovato", "A.", "" ], [ "Broglia", "R. A.", "" ] ]
In order to extend the results obtained with minimal lattice models to more realistic systems, we study a model where proteins are described as a chain of 20 kinds of structureless amino acids moving in a continuum space and interacting through a contact potential controlled by a 20x20 quenched random matrix. The goal of the present work is to design and characterize amino acid sequences folding to the SH3 conformation, a 60-residue recognition domain common to many regulatory proteins. We show that a number of sequences can fold, starting from a random conformation, to within a distance root mean square deviation (dRMSD) of 2.6A from the native state. Good folders are those sequences displaying in the native conformation an energy lower than a sequence-independent threshold energy.
1310.3693
Andrea De Martino
Daniele De Martino, Fabrizio Capuani, Matteo Mori, Andrea De Martino, Enzo Marinari
Counting and correcting thermodynamically infeasible flux cycles in genome-scale metabolic networks
10 pages; see http://chimera.roma1.infn.it/SYSBIO/ for supporting files
Metabolites 3:946 (2013)
10.3390/metabo3040946
null
q-bio.MN cond-mat.dis-nn cond-mat.stat-mech physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermodynamics constrains the flow of matter in a reaction network to occur through routes along which the Gibbs energy decreases, implying that viable steady-state flux patterns should be void of closed reaction cycles. Identifying and removing cycles in large reaction networks can unfortunately be a highly challenging task from a computational viewpoint. We propose here a method that accomplishes it by combining a relaxation algorithm and a Monte Carlo procedure to detect loops, with ad hoc rules (discussed in detail) to eliminate them. As test cases, we tackle (a) the problem of identifying infeasible cycles in the E. coli metabolic network and (b) the problem of correcting thermodynamic infeasibilities in the Flux-Balance-Analysis solutions for 15 human cell-type specific metabolic networks. Results for (a) are compared with previous analyses of the same issue, while results for (b) are weighed against alternative methods to retrieve thermodynamically viable flux patterns based on minimizing specific global quantities. Our method on one hand outperforms previous techniques and, on the other, corrects loopy solutions to Flux Balance Analysis. As a byproduct, it also turns out to be able to reveal possible inconsistencies in model reconstructions.
[ { "created": "Mon, 14 Oct 2013 14:26:35 GMT", "version": "v1" } ]
2013-10-15
[ [ "De Martino", "Daniele", "" ], [ "Capuani", "Fabrizio", "" ], [ "Mori", "Matteo", "" ], [ "De Martino", "Andrea", "" ], [ "Marinari", "Enzo", "" ] ]
Thermodynamics constrains the flow of matter in a reaction network to occur through routes along which the Gibbs energy decreases, implying that viable steady-state flux patterns should be void of closed reaction cycles. Identifying and removing cycles in large reaction networks can unfortunately be a highly challenging task from a computational viewpoint. We propose here a method that accomplishes it by combining a relaxation algorithm and a Monte Carlo procedure to detect loops, with ad hoc rules (discussed in detail) to eliminate them. As test cases, we tackle (a) the problem of identifying infeasible cycles in the E. coli metabolic network and (b) the problem of correcting thermodynamic infeasibilities in the Flux-Balance-Analysis solutions for 15 human cell-type specific metabolic networks. Results for (a) are compared with previous analyses of the same issue, while results for (b) are weighed against alternative methods to retrieve thermodynamically viable flux patterns based on minimizing specific global quantities. Our method on one hand outperforms previous techniques and, on the other, corrects loopy solutions to Flux Balance Analysis. As a byproduct, it also turns out to be able to reveal possible inconsistencies in model reconstructions.
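The sketch below illustrates, on a toy flux pattern, the kind of closed reaction cycles the abstract refers to: each active reaction is reduced to a directed edge between a single substrate and a single product, and cycles are found with networkx. This is a simplification for illustration only; it is not the relaxation-plus-Monte-Carlo procedure described in the paper, and the reaction and metabolite names are made up.

```python
import networkx as nx

# Toy flux pattern: each active internal reaction is reduced to a directed
# edge from its (single) substrate to its (single) product.  A closed
# directed cycle among active reactions is a thermodynamically infeasible loop.
active_reactions = {
    "R1": ("A", "B"),
    "R2": ("B", "C"),
    "R3": ("C", "A"),   # together with R1 and R2 this closes a loop
    "R4": ("C", "D"),   # part of an ordinary linear route
}

G = nx.DiGraph()
for rxn, (substrate, product) in active_reactions.items():
    G.add_edge(substrate, product, reaction=rxn)

cycles = list(nx.simple_cycles(G))
print("closed reaction cycles:", cycles if cycles else "none (loop-free flux pattern)")
```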
2310.09178
Simone Franchini Dr.
Giampiero Bardella, Simone Franchini, Liming Pan, Riccardo Balzan, Surabhi Ramawat, Emiliano Brunamonti, Pierpaolo Pani, and Stefano Ferraina
Neural activity in quarks language: Lattice Field Theory for a network of real neurons
79 pages, 20 figures
Entropy 2024, 26(6), 495
10.3390/e26060495
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Brain-computer interfaces have seen extraordinary developments in recent years, and a significant discrepancy now exists between the abundance of available data and the limited headway made in achieving a unified theoretical framework. This discrepancy becomes particularly pronounced when examining the collective neural activity at the micro- and meso-scale, where a coherent formalization that adequately describes neural interactions is still lacking. Here, we introduce a mathematical framework to analyze systems of natural neurons and interpret the related empirical observations in terms of lattice field theory, an established paradigm from theoretical particle physics and statistical mechanics. Our methods are tailored to interpret data from chronic neural interfaces, especially spike rasters from measurements of single-neuron activity, and generalize the maximum entropy model for neural networks so that the time evolution of the system is also taken into account. This is obtained by bridging particle physics and neuroscience, paving the way to particle physics-inspired models of the neocortex.
[ { "created": "Fri, 13 Oct 2023 15:14:23 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2023 09:27:45 GMT", "version": "v2" }, { "created": "Sun, 24 Mar 2024 03:29:11 GMT", "version": "v3" } ]
2024-06-07
[ [ "Bardella", "Giampiero", "" ], [ "Franchini", "Simone", "" ], [ "Pan", "Liming", "" ], [ "Balzan", "Riccardo", "" ], [ "Ramawat", "Surabhi", "" ], [ "Brunamonti", "Emiliano", "" ], [ "Pani", "Pierpaolo", "" ], [ "Ferraina", "Stefano", "" ] ]
Brain-computer interfaces have seen extraordinary developments in recent years, and a significant discrepancy now exists between the abundance of available data and the limited headway made in achieving a unified theoretical framework. This discrepancy becomes particularly pronounced when examining the collective neural activity at the micro- and meso-scale, where a coherent formalization that adequately describes neural interactions is still lacking. Here, we introduce a mathematical framework to analyze systems of natural neurons and interpret the related empirical observations in terms of lattice field theory, an established paradigm from theoretical particle physics and statistical mechanics. Our methods are tailored to interpret data from chronic neural interfaces, especially spike rasters from measurements of single-neuron activity, and generalize the maximum entropy model for neural networks so that the time evolution of the system is also taken into account. This is obtained by bridging particle physics and neuroscience, paving the way to particle physics-inspired models of the neocortex.
2004.12979
Patricio Arru\'e Pa
Patricio Arrue, Nima Toosizadeh, Hessam Babaee, Kaveh Laksari
Low-rank representation of head impact kinematics: A data-driven emulator
20 pages, 13 figures, 4 tables
Front. Bioeng. Biotechnol. (2020) 8:555493
10.3389/fbioe.2020.555493
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Head motion induced by impacts has been deemed one of the most important measures in brain injury prediction, given that the majority of brain injury metrics use head kinematics as input. Recently, researchers have focused on using fast approaches, such as machine learning, to approximate brain deformation in real time for early brain injury diagnosis. However, these approaches require a large number of kinematic measurements, and therefore data augmentation is required given the limited on-field measured data available. In this study we present a principal component analysis-based method that emulates an empirical low-rank substitution for head impact kinematics, while requiring low computational cost. In characterizing our existing data set of 537 head impacts, consisting of 6 degrees of freedom measurements, we found that only a few modes, e.g. 15 in the case of angular velocity, are sufficient for accurate reconstruction of the entire data set. Furthermore, these modes are predominantly low frequency, since over 70% to 90% of the angular velocity response can be captured by modes that have frequencies under 40Hz. We compared our proposed method against existing impact parametrization methods and showed significantly better performance in injury prediction using a range of kinematic-based metrics -- such as head injury criterion and rotational injury criterion (RIC) -- and brain tissue deformation metrics -- such as brain angle metric, maximum principal strain (MPS) and axonal fiber strains (FS). In all cases, our approach reproduced injury metrics similar to the ground truth measurements with no significant difference, whereas the existing methods obtained significantly different (p<0.01) values as well as poor injury classification sensitivity and specificity. This emulator will enable us to provide the necessary data augmentation to build a head impact kinematic data set of any size.
[ { "created": "Mon, 27 Apr 2020 17:47:59 GMT", "version": "v1" }, { "created": "Wed, 9 Dec 2020 21:59:11 GMT", "version": "v2" } ]
2020-12-11
[ [ "Arrue", "Patricio", "" ], [ "Toosizadeh", "Nima", "" ], [ "Babaee", "Hessam", "" ], [ "Laksari", "Kaveh", "" ] ]
Head motion induced by impacts has been deemed one of the most important measures in brain injury prediction, given that the majority of brain injury metrics use head kinematics as input. Recently, researchers have focused on using fast approaches, such as machine learning, to approximate brain deformation in real time for early brain injury diagnosis. However, these approaches require a large number of kinematic measurements, and therefore data augmentation is required given the limited on-field measured data available. In this study we present a principal component analysis-based method that emulates an empirical low-rank substitution for head impact kinematics, while requiring low computational cost. In characterizing our existing data set of 537 head impacts, consisting of 6 degrees of freedom measurements, we found that only a few modes, e.g. 15 in the case of angular velocity, are sufficient for accurate reconstruction of the entire data set. Furthermore, these modes are predominantly low frequency, since over 70% to 90% of the angular velocity response can be captured by modes that have frequencies under 40Hz. We compared our proposed method against existing impact parametrization methods and showed significantly better performance in injury prediction using a range of kinematic-based metrics -- such as head injury criterion and rotational injury criterion (RIC) -- and brain tissue deformation metrics -- such as brain angle metric, maximum principal strain (MPS) and axonal fiber strains (FS). In all cases, our approach reproduced injury metrics similar to the ground truth measurements with no significant difference, whereas the existing methods obtained significantly different (p<0.01) values as well as poor injury classification sensitivity and specificity. This emulator will enable us to provide the necessary data augmentation to build a head impact kinematic data set of any size.
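The following numpy sketch illustrates the low-rank idea on synthetic traces: a data matrix of angular-velocity-like signals is decomposed by PCA (via SVD) and reconstructed from the leading modes. The 537 impacts, the 15 retained modes and the sub-40Hz content echo the abstract, but the synthetic signals, sampling window and noise level are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 537 head-impact angular-velocity traces,
# each sampled at 100 time points (the real data are 6-DOF measurements).
n_impacts, n_samples = 537, 100
t = np.linspace(0.0, 0.1, n_samples)                     # 100 ms window
freqs = rng.uniform(5.0, 40.0, size=(n_impacts, 3))      # mostly < 40 Hz content
amps = rng.uniform(0.5, 2.0, size=(n_impacts, 3))
X = sum(a[:, None] * np.sin(2 * np.pi * f[:, None] * t)
        for a, f in zip(amps.T, freqs.T))
X += 0.05 * rng.standard_normal(X.shape)

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 15                                                   # number of retained modes
X_lowrank = X.mean(axis=0) + (U[:, :k] * s[:k]) @ Vt[:k]

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
err = np.linalg.norm(X - X_lowrank) / np.linalg.norm(X)
print(f"variance captured by {k} modes: {explained:.3f}")
print(f"relative reconstruction error: {err:.3e}")
```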
q-bio/0701040
Brigitte Gaillard
J.Y. Georges (DEPE-Iphc), A. Billes, S. Ferraroli (DEPE-Iphc), S. Fossette (DEPE-Iphc), J. Fretey, D. Gr\'emillet (DEPE-Iphc), Y. Le Maho (DEPE-Iphc), A. E. Myers, H. Tanaka, G. C. Hays
Meta-analysis of movements in Atlantic leatherback turtles during nesting season : conservation implications
null
null
null
null
q-bio.PE
null
Despite decades of conservation efforts on the nesting beaches, the critical status of leatherback turtles shows that their survival predominantly depends on our ability to reduce at-sea mortality. Although areas where leatherbacks meet fisheries have been identified during the long distance movements between two consecutive nesting seasons, hotspots of lethal interactions are still poorly defined within the nesting season, when individuals concentrate close to land. Here we report movements of satellite-tracked gravid leatherback turtles during the nesting season in Western Central Africa, South America and Caribbean Sea, accounting for about 70% of the world population. We show that during, and at the end of, the nesting season leatherback turtles have the propensity to remain over the continental shelf, yet sometimes perform extended movements and may even nest in neighbouring countries. Leatherbacks exploit coastal commercial fishing grounds and face substantial accidental capture by regional coastal fisheries (e.g. at least 10% in French Guiana). This emphasises the need for regional conservation strategies to be developed at the ocean scale, both at sea and on land, to ensure the survival of the last leatherback turtles.
[ { "created": "Thu, 25 Jan 2007 14:34:19 GMT", "version": "v1" } ]
2016-08-14
[ [ "Georges", "J. Y.", "", "DEPE-Iphc" ], [ "Billes", "A.", "", "DEPE-Iphc" ], [ "Ferraroli", "S.", "", "DEPE-Iphc" ], [ "Fossette", "S.", "", "DEPE-Iphc" ], [ "Fretey", "J.", "", "DEPE-Iphc" ], [ "Grémillet", "D.", "", "DEPE-Iphc" ], [ "Maho", "Y. Le", "", "DEPE-Iphc" ], [ "Myers", "A. E.", "" ], [ "Tanaka", "H.", "" ], [ "Hays", "G. C.", "" ] ]
Despite decades of conservation efforts on the nesting beaches, the critical status of leatherback turtles shows that their survival predominantly depends on our ability to reduce at-sea mortality. Although areas where leatherbacks meet fisheries have been identified during the long distance movements between two consecutive nesting seasons, hotspots of lethal interactions are still poorly defined within the nesting season, when individuals concentrate close to land. Here we report movements of satellite-tracked gravid leatherback turtles during the nesting season in Western Central Africa, South America and Caribbean Sea, accounting for about 70% of the world population. We show that during, and at the end of, the nesting season leatherback turtles have the propensity to remain over the continental shelf, yet sometimes perform extended movements and may even nest in neighbouring countries. Leatherbacks exploit coastal commercial fishing grounds and face substantial accidental capture by regional coastal fisheries (e.g. at least 10% in French Guiana). This emphasises the need for regional conservation strategies to be developed at the ocean scale, both at sea and on land, to ensure the survival of the last leatherback turtles.
2304.05908
Moo K. Chung
Moo K. Chung, Tahmineh Azizi, Jamie L. Hanson, Andrew L. Alexander, Richard J. Davidson, Seth D. Pollak
Altered Topological Structure of the Brain White Matter in Maltreated Children through Topological Data Analysis
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Childhood maltreatment may adversely affect brain development and consequently influence behavioral, emotional, and psychological patterns during adulthood. In this study, we propose an analytical pipeline for modeling the altered topological structure of brain white matter in maltreated and typically developing children. We perform topological data analysis (TDA) to assess the alteration in the global topology of the brain white-matter structural covariance network among children. We use persistent homology, an algebraic technique in TDA, to analyze topological features in the brain covariance networks constructed from structural magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI). We develop a novel framework for statistical inference based on the Wasserstein distance to assess the significance of the observed topological differences. Using these methods in comparing maltreated children to a typically developing control group, we find that maltreatment may increase homogeneity in white matter structures and thus induce higher correlations in the structural covariance; this is reflected in the topological profile. Our findings strongly suggest that TDA can be a valuable framework to model altered topological structures of the brain. The MATLAB codes and processed data used in this study can be found at https://github.com/laplcebeltrami/maltreated.
[ { "created": "Wed, 12 Apr 2023 15:25:01 GMT", "version": "v1" }, { "created": "Wed, 27 Sep 2023 18:07:05 GMT", "version": "v2" }, { "created": "Wed, 15 Nov 2023 02:12:50 GMT", "version": "v3" } ]
2023-11-16
[ [ "Chung", "Moo K.", "" ], [ "Azizi", "Tahmineh", "" ], [ "Hanson", "Jamie L.", "" ], [ "Alexander", "Andrew L.", "" ], [ "Davidson", "Richard J.", "" ], [ "Pollak", "Seth D.", "" ] ]
Childhood maltreatment may adversely affect brain development and consequently influence behavioral, emotional, and psychological patterns during adulthood. In this study, we propose an analytical pipeline for modeling the altered topological structure of brain white matter in maltreated and typically developing children. We perform topological data analysis (TDA) to assess the alteration in the global topology of the brain white-matter structural covariance network among children. We use persistent homology, an algebraic technique in TDA, to analyze topological features in the brain covariance networks constructed from structural magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI). We develop a novel framework for statistical inference based on the Wasserstein distance to assess the significance of the observed topological differences. Using these methods in comparing maltreated children to a typically developing control group, we find that maltreatment may increase homogeneity in white matter structures and thus induce higher correlations in the structural covariance; this is reflected in the topological profile. Our findings strongly suggest that TDA can be a valuable framework to model altered topological structures of the brain. The MATLAB codes and processed data used in this study can be found at https://github.com/laplcebeltrami/maltreated.
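A simplified sketch of the comparison described above: for each group-level correlation network, the 0-dimensional persistence "death" values under the distance filtration equal the minimum spanning tree edge weights (Kruskal merges correspond to deaths of connected components), and two networks are compared by a 1-Wasserstein distance between these sorted values, with significance assessed by permuting subjects. The synthetic data, the restriction to 0-dimensional features and the permutation count are assumptions; this is not the authors' MATLAB pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def zero_dim_deaths(corr):
    """0-dimensional persistence 'death' values of a correlation network under
    the filtration by distance 1 - corr: these are the minimum spanning tree
    edge weights (Kruskal merges = deaths of connected components)."""
    n = corr.shape[0]
    dist = 1.0 - corr
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(k):
        while parent[k] != k:
            parent[k] = parent[parent[k]]
            k = parent[k]
        return k
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)          # two components merge: one of them dies at w
    return np.sort(np.array(deaths))

def wasserstein_1d(a, b):
    """1-Wasserstein distance between two equal-size sets of real values."""
    return float(np.abs(np.sort(a) - np.sort(b)).sum())

# Synthetic stand-ins: group B has an extra shared component, i.e. higher correlations.
n_subjects, n_regions = 100, 20
A = rng.standard_normal((n_subjects, n_regions))
B = rng.standard_normal((n_subjects, n_regions)) + 0.5 * rng.standard_normal((n_subjects, 1))

obs = wasserstein_1d(zero_dim_deaths(np.corrcoef(A, rowvar=False)),
                     zero_dim_deaths(np.corrcoef(B, rowvar=False)))

# Permutation test: reshuffle subjects between the two groups.
pooled = np.vstack([A, B])
labels = np.array([0] * n_subjects + [1] * n_subjects)
null = []
for _ in range(500):
    perm = rng.permutation(labels)
    null.append(wasserstein_1d(
        zero_dim_deaths(np.corrcoef(pooled[perm == 0], rowvar=False)),
        zero_dim_deaths(np.corrcoef(pooled[perm == 1], rowvar=False))))
p_value = (np.sum(np.array(null) >= obs) + 1) / (len(null) + 1)
print(f"observed Wasserstein distance = {obs:.3f}, permutation p-value = {p_value:.3f}")
```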
1108.1775
Natalia Denesyuk
Natalia A. Denesyuk and D. Thirumalai
Crowding Promotes the Switch from Hairpin to Pseudoknot Conformation in Human Telomerase RNA
File "JACS_MAIN_archive_PDF_from_DOC.pdf" (PDF created from DOC) contains the main text of the paper File JACS_SI_archive.tex + 7 figures are the supplementary info
J. Am. Chem. Soc., 2011, 133 (31), pp 11858--11861
10.1021/ja2035128
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Formation of a pseudoknot in the conserved RNA core domain in the ribonucleoprotein human telomerase is required for function. In vitro experiments show that the pseudoknot (PK) is in equilibrium with an extended hairpin (HP) structure. We use molecular simulations of a coarse-grained model, which reproduces most of the salient features of the experimental melting profiles of PK and HP, to show that crowding enhances the stability of PK relative to HP in the wild type and in a mutant associated with dyskeratosis congenita. In monodisperse suspensions, small crowding particles increase the stability of compact structures to a greater extent than larger crowders. If the sizes of crowders in a binary mixture are smaller than the unfolded RNA, the increase in melting temperature due to the two components is additive. In a ternary mixture of crowders that are larger than the unfolded RNA, which mimics the composition of ribosome, large enzyme complexes and proteins in E. coli, the marginal increase in stability is entirely determined by the smallest component. We predict that crowding can partially restore telomerase activity in mutants, which dramatically decrease the PK stability.
[ { "created": "Mon, 8 Aug 2011 18:25:40 GMT", "version": "v1" } ]
2011-08-09
[ [ "Denesyuk", "Natalia A.", "" ], [ "Thirumalai", "D.", "" ] ]
Formation of a pseudoknot in the conserved RNA core domain in the ribonucleoprotein human telomerase is required for function. In vitro experiments show that the pseudoknot (PK) is in equilibrium with an extended hairpin (HP) structure. We use molecular simulations of a coarse-grained model, which reproduces most of the salient features of the experimental melting profiles of PK and HP, to show that crowding enhances the stability of PK relative to HP in the wild type and in a mutant associated with dyskeratosis congenita. In monodisperse suspensions, small crowding particles increase the stability of compact structures to a greater extent than larger crowders. If the sizes of crowders in a binary mixture are smaller than the unfolded RNA, the increase in melting temperature due to the two components is additive. In a ternary mixture of crowders that are larger than the unfolded RNA, which mimics the composition of ribosome, large enzyme complexes and proteins in E. coli, the marginal increase in stability is entirely determined by the smallest component. We predict that crowding can partially restore telomerase activity in mutants, which dramatically decrease the PK stability.
1703.10627
Mans Henningson
M{\aa}ns Henningson and Sebastian Illes
Analysis and Modelling of Subthreshold Neural Multi-electrode Array Data by Statistical Field Theory
27 pages, 13 figures
null
null
null
q-bio.NC cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-electrode arrays (MEA) are increasingly used to investigate spontaneous neuronal network activity. The recorded signals comprise several distinct components: Apart from artefacts without biological significance, one can distinguish between spikes (action potentials) and subthreshold fluctuations (local field potentials). Here we aim to develop a theoretical model that allows for a compact and robust characterization of subthreshold fluctuations in terms of a Gaussian statistical field theory in two spatial and one temporal dimension. What is usually referred to as the driving noise in the context of statistical physics is here interpreted as a representation of the neural activity. Spatial and temporal correlations of this activity give valuable information about the connectivity in the neural tissue. We apply our methods to a dataset obtained from MEA-measurements in an acute hippocampal brain slice from a rat. Our main finding is that the empirical correlation functions indeed obey the logarithmic behaviour that is a general feature of theoretical models of this kind. We also find a clear correlation between the activity and the occurrence of spikes. Another important insight is the importance of correctly separating out certain artefacts from the data before proceeding with the analysis.
[ { "created": "Thu, 30 Mar 2017 18:21:30 GMT", "version": "v1" } ]
2017-04-03
[ [ "Henningson", "Måns", "" ], [ "Illes", "Sebastian", "" ] ]
Multi-electrode arrays (MEA) are increasingly used to investigate spontaneous neuronal network activity. The recorded signals comprise several distinct components: Apart from artefacts without biological significance, one can distinguish between spikes (action potentials) and subthreshold fluctuations (local field potentials). Here we aim to develop a theoretical model that allows for a compact and robust characterization of subthreshold fluctuations in terms of a Gaussian statistical field theory in two spatial and one temporal dimension. What is usually referred to as the driving noise in the context of statistical physics is here interpreted as a representation of the neural activity. Spatial and temporal correlations of this activity give valuable information about the connectivity in the neural tissue. We apply our methods to a dataset obtained from MEA-measurements in an acute hippocampal brain slice from a rat. Our main finding is that the empirical correlation functions indeed obey the logarithmic behaviour that is a general feature of theoretical models of this kind. We also find a clear correlation between the activity and the occurrence of spikes. Another important insight is the importance of correctly separating out certain artefacts from the data before proceeding with the analysis.
1307.7302
Thomas Dean
Thomas Dean, Biafra Ahanonu, Mainak Chowdhury, Anjali Datta, Andre Esteva, Daniel Eth, Nobie Redmon, Oleg Rumyantsev, Ysis Tarter
On the Technology Prospects and Investment Opportunities for Scalable Neuroscience
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two major initiatives to accelerate research in the brain sciences have focused attention on developing a new generation of scientific instruments for neuroscience. These instruments will be used to record static (structural) and dynamic (behavioral) information at unprecedented spatial and temporal resolution and report out that information in a form suitable for computational analysis. We distinguish between recording - taking measurements of individual cells and the extracellular matrix - and reporting - transcoding, packaging and transmitting the resulting information for subsequent analysis - as these represent very different challenges as we scale the relevant technologies to support simultaneously tracking the many neurons that comprise neural circuits of interest. We investigate a diverse set of technologies with the purpose of anticipating their development over the span of the next 10 years and categorizing their impact in terms of short-term [1-2 years], medium-term [2-5 years] and longer-term [5-10 years] deliverables.
[ { "created": "Sat, 27 Jul 2013 20:25:00 GMT", "version": "v1" } ]
2013-07-30
[ [ "Dean", "Thomas", "" ], [ "Ahanonu", "Biafra", "" ], [ "Chowdhury", "Mainak", "" ], [ "Datta", "Anjali", "" ], [ "Esteva", "Andre", "" ], [ "Eth", "Daniel", "" ], [ "Redmon", "Nobie", "" ], [ "Rumyantsev", "Oleg", "" ], [ "Tarter", "Ysis", "" ] ]
Two major initiatives to accelerate research in the brain sciences have focused attention on developing a new generation of scientific instruments for neuroscience. These instruments will be used to record static (structural) and dynamic (behavioral) information at unprecedented spatial and temporal resolution and report out that information in a form suitable for computational analysis. We distinguish between recording - taking measurements of individual cells and the extracellular matrix - and reporting - transcoding, packaging and transmitting the resulting information for subsequent analysis - as these represent very different challenges as we scale the relevant technologies to support simultaneously tracking the many neurons that comprise neural circuits of interest. We investigate a diverse set of technologies with the purpose of anticipating their development over the span of the next 10 years and categorizing their impact in terms of short-term [1-2 years], medium-term [2-5 years] and longer-term [5-10 years] deliverables.
1604.06300
Serge Sheremet'ev
Serge Sheremet'ev, Xenia Chebotareva
Current and Cretaceous-Cenozoic diversification of Angiosperms
51 pp with 18 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Cretaceous-Cenozoic history of angiosperms led to a characteristic distribution of taxa at different levels (the number of species and genera in families, the species/genera ratio in families, the number of species in genera). In most cases, these distributions are satisfactorily described by a power law (Pareto distribution). In logarithmic coordinates a power function is a straight line. The empirical curves follow this line well enough, but on the right side of the graph (at small taxon volumes) there is a marked deviation of the theoretical from the empirical curves. This suggests that the small taxa should have larger volumes for full agreement with the theoretical curves. Modeling the ratios among genera and species in families showed that only in the case of a dynamic extinction factor is satisfactory agreement observed between the observed and calculated numbers of species over a wide range of iterations. This suggests that there was differential extinction of species during the evolution of angiosperms, implying that the rate of extinction had to be minimal in genera with a large number of species. On the contrary, extinction rates may increase by orders of magnitude as the number of species decreases. As a result, large genera became larger and small genera became smaller. The frequency distributions of species in the genera varied according to a power law. The initial divergence of taxa volumes, which led to their further division into large and small, could have been caused by the emergence and expansion of herbs with their functional and adaptive capabilities.
[ { "created": "Thu, 21 Apr 2016 13:37:11 GMT", "version": "v1" } ]
2016-04-22
[ [ "Sheremet'ev", "Serge", "" ], [ "Chebotareva", "Xenia", "" ] ]
The Cretaceous-Cenozoic history of angiosperms led to a characteristic distribution of taxa at different levels (the number of species and genera in families, the species/genera ratio in families, the number of species in genera). In most cases, these distributions are satisfactorily described by a power law (Pareto distribution). In logarithmic coordinates a power function is a straight line. The empirical curves follow this line well enough, but on the right side of the graph (at small taxon volumes) there is a marked deviation of the theoretical from the empirical curves. This suggests that the small taxa should have larger volumes for full agreement with the theoretical curves. Modeling the ratios among genera and species in families showed that only in the case of a dynamic extinction factor is satisfactory agreement observed between the observed and calculated numbers of species over a wide range of iterations. This suggests that there was differential extinction of species during the evolution of angiosperms, implying that the rate of extinction had to be minimal in genera with a large number of species. On the contrary, extinction rates may increase by orders of magnitude as the number of species decreases. As a result, large genera became larger and small genera became smaller. The frequency distributions of species in the genera varied according to a power law. The initial divergence of taxa volumes, which led to their further division into large and small, could have been caused by the emergence and expansion of herbs with their functional and adaptive capabilities.
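The sketch below shows how a Pareto-type (power-law) exponent can be estimated from species-per-genus counts, using both an approximate maximum-likelihood estimator and a log-log regression. The synthetic Zipf-distributed counts and the choice xmin = 1 are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic species-per-genus counts drawn from a Zipf (discrete power) law,
# standing in for an observed distribution of taxa volumes.
counts = rng.zipf(a=2.0, size=5000)

xmin = 1
x = counts[counts >= xmin]
# Continuous-approximation maximum-likelihood estimate of the exponent
# (Clauset-Shalizi-Newman style); a quick check rather than a full fit.
alpha_hat = 1.0 + len(x) / np.sum(np.log(x / (xmin - 0.5)))
print("estimated power-law exponent:", round(alpha_hat, 3))

# Log-log view: straight-line behaviour of the empirical frequencies
# (the regression slope is a cruder, more biased estimate of -alpha).
values, freq = np.unique(x, return_counts=True)
slope, intercept = np.polyfit(np.log(values), np.log(freq / freq.sum()), 1)
print("log-log regression slope:", round(slope, 3))
```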
2211.16638
Justin Yeakel
Taran Rallings, Christopher P. Kempes, Justin D. Yeakel
On the dynamics of mortality and the ephemeral nature of mammalian megafauna
10 pages, 5 figures, 1 table, 4 appendices, 8 supplementary figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Energy flow through consumer-resource interactions is largely determined by body size. Allometric relationships govern the dynamics of populations by impacting rates of reproduction, as well as alternative sources of mortality, which have differential impacts on smaller to larger organisms. Here we derive and investigate the timescales associated with four alternative sources of mortality for terrestrial mammals: mortality from starvation, mortality associated with aging, mortality from consumption by predators, and mortality introduced by anthropogenic subsidized harvest. The incorporation of these allometric relationships into a minimal consumer-resource model illuminates central constraints that may contribute to the structure of mammalian communities. Our framework reveals that while starvation largely impacts smaller-bodied species, the allometry of senescence is expected to be more difficult to observe. In contrast, external predation and subsidized harvest have greater impacts on the populations of larger-bodied species. Moreover, the inclusion of predation mortality reveals mass thresholds for mammalian herbivores, where dynamic instabilities may limit the feasibility of megafaunal populations. We show how these thresholds vary with alternative predator-prey mass relationships, which are not well understood within terrestrial systems. Finally, we use our framework to predict the harvest pressure required to induce mass-specific extinctions, which closely align with previous estimates of anthropogenic megafaunal exploitation in both paleontological and historical contexts. Together our results underscore the tenuous nature of megafaunal populations, and how different sources of mortality may contribute to their ephemeral nature over evolutionary time.
[ { "created": "Wed, 30 Nov 2022 00:08:45 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2023 21:46:54 GMT", "version": "v2" } ]
2023-08-28
[ [ "Rallings", "Taran", "" ], [ "Kempes", "Christopher P.", "" ], [ "Yeakel", "Justin D.", "" ] ]
Energy flow through consumer-resource interactions is largely determined by body size. Allometric relationships govern the dynamics of populations by impacting rates of reproduction, as well as alternative sources of mortality, which have differential impacts on smaller to larger organisms. Here we derive and investigate the timescales associated with four alternative sources of mortality for terrestrial mammals: mortality from starvation, mortality associated with aging, mortality from consumption by predators, and mortality introduced by anthropogenic subsidized harvest. The incorporation of these allometric relationships into a minimal consumer-resource model illuminates central constraints that may contribute to the structure of mammalian communities. Our framework reveals that while starvation largely impacts smaller-bodied species, the allometry of senescence is expected to be more difficult to observe. In contrast, external predation and subsidized harvest have greater impacts on the populations of larger-bodied species. Moreover, the inclusion of predation mortality reveals mass thresholds for mammalian herbivores, where dynamic instabilities may limit the feasibility of megafaunal populations. We show how these thresholds vary with alternative predator-prey mass relationships, which are not well understood within terrestrial systems. Finally, we use our framework to predict the harvest pressure required to induce mass-specific extinctions, which closely align with previous estimates of anthropogenic megafaunal exploitation in both paleontological and historical contexts. Together our results underscore the tenuous nature of megafaunal populations, and how different sources of mortality may contribute to their ephemeral nature over evolutionary time.
1804.06050
Jingyi Jessica Li
Wei Vivian Li, Jingyi Jessica Li
Modeling and analysis of RNA-seq data: a review from a statistical perspective
null
Quantitative Biology 6 (2018) 195-209
10.1007/s40484-018-0144-7
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Since the invention of next-generation RNA sequencing (RNA-seq) technologies, they have become a powerful tool to study the presence and quantity of RNA molecules in biological samples and have revolutionized transcriptomic studies. The analysis of RNA-seq data at four different levels (samples, genes, transcripts, and exons) involves multiple statistical and computational questions, some of which remain challenging to date. Results: We review RNA-seq analysis tools at the sample, gene, transcript, and exon levels from a statistical perspective. We also highlight the biological and statistical questions of greatest practical importance. Conclusion: The development of statistical and computational methods for analyzing RNA-seq data has made significant advances in the past decade. However, methods developed to answer the same biological question often rely on diverse statistical models and exhibit different performance under different scenarios. This review discusses and compares multiple commonly used statistical models regarding their assumptions, in the hope of helping users select appropriate methods as needed, as well as assisting developers with future method development.
[ { "created": "Tue, 17 Apr 2018 05:26:53 GMT", "version": "v1" }, { "created": "Sun, 29 Apr 2018 09:10:58 GMT", "version": "v2" }, { "created": "Tue, 1 May 2018 15:01:45 GMT", "version": "v3" } ]
2021-12-01
[ [ "Li", "Wei Vivian", "" ], [ "Li", "Jingyi Jessica", "" ] ]
Background: Since the invention of next-generation RNA sequencing (RNA-seq) technologies, they have become a powerful tool to study the presence and quantity of RNA molecules in biological samples and have revolutionized transcriptomic studies. The analysis of RNA-seq data at four different levels (samples, genes, transcripts, and exons) involves multiple statistical and computational questions, some of which remain challenging to date. Results: We review RNA-seq analysis tools at the sample, gene, transcript, and exon levels from a statistical perspective. We also highlight the biological and statistical questions of greatest practical importance. Conclusion: The development of statistical and computational methods for analyzing RNA-seq data has made significant advances in the past decade. However, methods developed to answer the same biological question often rely on diverse statistical models and exhibit different performance under different scenarios. This review discusses and compares multiple commonly used statistical models regarding their assumptions, in the hope of helping users select appropriate methods as needed, as well as assisting developers with future method development.
2101.00500
Akke Mats Houben
Akke Mats Houben
Signal anticipation and delay in excitable media: group delay of the FitzHugh-Nagumo model
9 pages, 6 figures
null
null
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An expression for the group delay of the FitzHugh-Nagumo model in response to low-amplitude input is obtained by linearisation of the cubic term of the voltage equation around its stable fixed point. It is found that a negative group delay exists for low frequencies, indicating that the evolution of slowly fluctuating signals is anticipated by the voltage dynamics. The effects of the group delay for different types of signals are shown numerically for the non-linearised FitzHugh-Nagumo model, and some observations are made about which aspects of a signal are anticipated.
[ { "created": "Sat, 2 Jan 2021 19:14:24 GMT", "version": "v1" } ]
2021-01-05
[ [ "Houben", "Akke Mats", "" ] ]
An expression for the group delay of the FitzHugh-Nagumo model in response to low-amplitude input is obtained by linearisation of the cubic term of the voltage equation around its stable fixed point. It is found that a negative group delay exists for low frequencies, indicating that the evolution of slowly fluctuating signals is anticipated by the voltage dynamics. The effects of the group delay for different types of signals are shown numerically for the non-linearised FitzHugh-Nagumo model, and some observations are made about which aspects of a signal are anticipated.
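A numerical sketch of the calculation described above: linearise the FitzHugh-Nagumo voltage equation around its stable fixed point, form the transfer function from input current to voltage, and take the group delay as minus the derivative of the unwrapped phase. The parameter values (a, b, eps, I0) are common illustrative choices, not necessarily those used in the paper; with these values the group delay does come out negative at low frequencies.

```python
import numpy as np

# FitzHugh-Nagumo model (illustrative parameters):
#   dv/dt = v - v**3/3 - w + I0 + u(t),   dw/dt = eps*(v + a - b*w)
a, b, eps, I0 = 0.7, 0.8, 0.08, 0.0

# Stable fixed point: v - v^3/3 - (v + a)/b + I0 = 0  <=>  v^3 + 3(1/b - 1)v + 3(a/b - I0) = 0
roots = np.roots([1.0, 0.0, 3.0 * (1.0 / b - 1.0), 3.0 * (a / b - I0)])
v0 = roots[np.argmin(np.abs(roots.imag))].real
J = np.array([[1.0 - v0 ** 2, -1.0],
              [eps,           -eps * b]])     # Jacobian of the linearised system

# Transfer function from the input u to the voltage v: H(iw) = [(iw*I - J)^(-1) e1]_0
omega = np.linspace(1e-4, 2.0, 4000)
e1 = np.array([1.0, 0.0])
H = np.array([np.linalg.solve(1j * w * np.eye(2) - J, e1)[0] for w in omega])

phase = np.unwrap(np.angle(H))
group_delay = -np.gradient(phase, omega)      # tau_g(w) = -d(arg H)/dw

print("fixed point v0 =", round(float(v0), 4))
print("group delay at the lowest frequency:", round(float(group_delay[0]), 3))
print("negative (anticipatory) at low frequency:", bool(group_delay[0] < 0))
```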
1612.07468
Sang Kwan Choi
Sang Kwan Choi, Chaiho Rim and Hwajin Um
RNA substructure as a random matrix ensemble
8 pages, 12 figures; v2: data set and figure added, comments added, references updated; v3: appendix and references added, few sentences including abstract paraphrased for clarification, remarks added in the conclusion; v4: published version
Phys. Rev. E 100, 062404 (2019)
10.1103/PhysRevE.100.062404
null
q-bio.QM cond-mat.stat-mech hep-th q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A combinatorial analysis of a certain abstraction of RNA structures is carried out to investigate their statistics. Our approach regards the backbone of secondary structures as an alternating sequence of paired and unpaired sets of nucleotides, which can be described by a random matrix model. We obtain the generating function of the structures using a Hermitian matrix model with Chebyshev polynomials of the second kind and analyze the statistics with respect to the number of stems. To match the experimental findings on the statistical behavior, we consider the structures in a grand canonical ensemble and find a fugacity value corresponding to an appropriate number of stems.
[ { "created": "Thu, 22 Dec 2016 07:25:43 GMT", "version": "v1" }, { "created": "Thu, 9 Feb 2017 04:55:00 GMT", "version": "v2" }, { "created": "Wed, 17 Jul 2019 02:17:11 GMT", "version": "v3" }, { "created": "Mon, 9 Mar 2020 08:44:44 GMT", "version": "v4" } ]
2020-03-10
[ [ "Choi", "Sang Kwan", "" ], [ "Rim", "Chaiho", "" ], [ "Um", "Hwajin", "" ] ]
A combinatorial analysis of a certain abstraction of RNA structures is carried out to investigate their statistics. Our approach regards the backbone of secondary structures as an alternating sequence of paired and unpaired sets of nucleotides, which can be described by a random matrix model. We obtain the generating function of the structures using a Hermitian matrix model with Chebyshev polynomials of the second kind and analyze the statistics with respect to the number of stems. To match the experimental findings on the statistical behavior, we consider the structures in a grand canonical ensemble and find a fugacity value corresponding to an appropriate number of stems.
1308.6014
Robert Rosenbaum
Robert Rosenbaum and Brent Doiron
Balanced networks of spiking neurons with spatially dependent recurrent connections
null
null
10.1103/PhysRevX.4.021039
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networks of model neurons with balanced recurrent excitation and inhibition produce irregular and asynchronous spiking activity. We extend the analysis of balanced networks to include the known dependence of connection probability on the spatial separation between neurons. In the continuum limit we derive that stable, balanced firing rate solutions require that the spatial spread of external inputs be broader than that of recurrent excitation, which in turn must be broader than or equal to that of recurrent inhibition. For finite size networks we investigate the pattern forming dynamics arising when balanced conditions are not satisfied. The spatiotemporal dynamics of balanced networks offer new challenges in the statistical mechanics of complex systems.
[ { "created": "Tue, 27 Aug 2013 23:39:34 GMT", "version": "v1" }, { "created": "Fri, 15 Nov 2013 18:14:06 GMT", "version": "v2" } ]
2014-06-02
[ [ "Rosenbaum", "Robert", "" ], [ "Doiron", "Brent", "" ] ]
Networks of model neurons with balanced recurrent excitation and inhibition produce irregular and asynchronous spiking activity. We extend the analysis of balanced networks to include the known dependence of connection probability on the spatial separation between neurons. In the continuum limit we derive that stable, balanced firing rate solutions require that the spatial spread of external inputs be broader than that of recurrent excitation, which in turn must be broader than or equal to that of recurrent inhibition. For finite size networks we investigate the pattern forming dynamics arising when balanced conditions are not satisfied. The spatiotemporal dynamics of balanced networks offer new challenges in the statistical mechanics of complex systems.
2103.08198
Li Xu
Li Xu, Denis Patterson, Ann Carla Staver, Simon Asher Levin, Jin Wang
Unifying deterministic and stochastic ecological dynamics via a landscape-flux approach
null
null
10.1073/pnas.2103779118
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a landscape-flux framework to investigate observed frequency distributions of vegetation and the stability of these ecological systems under fluctuations. The frequency distributions can characterize the population-potential landscape related to the stability of ecological states. We illustrate the practical utility of this approach by analyzing a forest-savanna model. Savanna and Forest states coexist under certain conditions, consistent with past theoretical work and empirical observations. However, a new Grassland state, unseen in the corresponding deterministic model, emerges as an alternative quasi-stable state under fluctuations, providing a novel theoretical basis for the appearance of widespread grasslands in some empirical analyses. The ecological dynamics are determined by both the population-potential landscape gradient and the steady-state probability flux. The flux quantifies the net input/output to the ecological system and therefore the degree of nonequilibriumness. Landscape and flux together determine the transitions between stable states characterized by dominant paths and switching rates. The intrinsic potential landscape admits a Lyapunov function, which provides a quantitative measure of global stability. We find that the average flux, entropy production rate, and free energy have significant changes near bifurcations under both finite and zero fluctuation. These may provide both dynamical and thermodynamic origins of the bifurcations. We identified the variances in observed frequency time traces, fluctuations and time irreversibility as kinematic measures for bifurcations. This new framework opens the way to characterize ecological systems globally, to uncover how they change among states, and to quantify the emergence of new quasi-stable states under stochastic fluctuations.
[ { "created": "Mon, 15 Mar 2021 08:09:31 GMT", "version": "v1" }, { "created": "Sat, 27 Mar 2021 06:18:34 GMT", "version": "v2" } ]
2022-10-12
[ [ "Xu", "Li", "" ], [ "Patterson", "Denis", "" ], [ "Staver", "Ann Carla", "" ], [ "Levin", "Simon Asher", "" ], [ "Wang", "Jin", "" ] ]
We develop a landscape-flux framework to investigate observed frequency distributions of vegetation and the stability of these ecological systems under fluctuations. The frequency distributions can characterize the population-potential landscape related to the stability of ecological states. We illustrate the practical utility of this approach by analyzing a forest-savanna model. Savanna and Forest states coexist under certain conditions, consistent with past theoretical work and empirical observations. However, a new Grassland state, unseen in the corresponding deterministic model, emerges as an alternative quasi-stable state under fluctuations, providing a novel theoretical basis for the appearance of widespread grasslands in some empirical analyses. The ecological dynamics are determined by both the population-potential landscape gradient and the steady-state probability flux. The flux quantifies the net input/output to the ecological system and therefore the degree of nonequilibriumness. Landscape and flux together determine the transitions between stable states characterized by dominant paths and switching rates. The intrinsic potential landscape admits a Lyapunov function, which provides a quantitative measure of global stability. We find that the average flux, entropy production rate, and free energy have significant changes near bifurcations under both finite and zero fluctuation. These may provide both dynamical and thermodynamic origins of the bifurcations. We identified the variances in observed frequency time traces, fluctuations and time irreversibility as kinematic measures for bifurcations. This new framework opens the way to characterize ecological systems globally, to uncover how they change among states, and to quantify the emergence of new quasi-stable states under stochastic fluctuations.
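As a generic illustration of the population-potential landscape idea (the flux part is not covered here), the sketch below simulates a toy one-dimensional bistable system with Euler-Maruyama, histograms the long-run occupancy to estimate the steady-state distribution, and sets U = -ln P_ss. The drift, noise level and integration settings are assumptions, and the toy model is not the forest-savanna model of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D bistable surrogate (NOT the forest-savanna model): two stable states
# near x = 0 and x = 1, separated by an unstable state at x = 0.4.
def drift(x):
    return x * (1.0 - x) * (x - 0.4)

dt, D, n_steps = 0.01, 0.01, 500_000
x = 0.5
samples = np.empty(n_steps)
for t in range(n_steps):                                   # Euler-Maruyama integration
    x += drift(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    x = float(np.clip(x, -0.2, 1.2))                       # keep the toy run in a finite window
    samples[t] = x

# Population-potential landscape from the long-run occupancy: U = -ln P_ss
hist, edges = np.histogram(samples, bins=100, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
with np.errstate(divide="ignore"):
    U = -np.log(hist)

for c, u in zip(centres[::10], U[::10]):
    print(f"x = {c:5.2f}   U = {u:6.2f}")
```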
2103.04162
Junqiu Wu
Ke Liu, Zekun Ni, Zhenyu Zhou, Suocheng Tan, Xun Zou, Haoming Xing, Xiangyan Sun, Qi Han, Junqiu Wu and Jie Fan
Molecular modeling with machine-learned universal potential functions
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular modeling is an important topic in drug discovery. Decades of research have led to the development of high quality scalable molecular force fields. In this paper, we show that neural networks can be used to train a universal approximator for energy potential functions. By incorporating a fully automated training process we have been able to train smooth, differentiable, and predictive potential functions on large-scale crystal structures. A variety of tests have also been performed to show the superiority and versatility of the machine-learned model.
[ { "created": "Sat, 6 Mar 2021 17:36:39 GMT", "version": "v1" }, { "created": "Mon, 19 Apr 2021 06:30:52 GMT", "version": "v2" } ]
2021-04-20
[ [ "Liu", "Ke", "" ], [ "Ni", "Zekun", "" ], [ "Zhou", "Zhenyu", "" ], [ "Tan", "Suocheng", "" ], [ "Zou", "Xun", "" ], [ "Xing", "Haoming", "" ], [ "Sun", "Xiangyan", "" ], [ "Han", "Qi", "" ], [ "Wu", "Junqiu", "" ], [ "Fan", "Jie", "" ] ]
Molecular modeling is an important topic in drug discovery. Decades of research have led to the development of high quality scalable molecular force fields. In this paper, we show that neural networks can be used to train a universal approximator for energy potential functions. By incorporating a fully automated training process we have been able to train smooth, differentiable, and predictive potential functions on large-scale crystal structures. A variety of tests have also been performed to show the superiority and versatility of the machine-learned model.
2311.09140
Ying-Cheng Lai
Shirin Panahi, Younghae Do, Alan Hastings, and Ying-Cheng Lai
Rate-induced tipping in complex high-dimensional ecological networks
8 pages, 5 figures
null
null
null
q-bio.PE math.DS nlin.AO
http://creativecommons.org/licenses/by/4.0/
In an ecosystem, environmental changes as a result of natural and human processes can cause some key parameters of the system to change with time. Depending on how fast such a parameter changes, a tipping point can occur. Existing works on rate-induced tipping, or R-tipping, offered a theoretical way to study this phenomenon but from a local dynamical point of view, revealing, e.g., the existence of a critical rate for some specific initial condition above which a tipping point will occur. As ecosystems are subject to constant disturbances and can drift away from their equilibrium point, it is necessary to study R-tipping from a global perspective in terms of the initial conditions in the entire relevant phase space region. In particular, we introduce the notion of the probability of R-tipping defined for initial conditions taken from the whole relevant phase space. Using a number of real-world, complex mutualistic networks as a paradigm, we discover a scaling law between this probability and the rate of parameter change and provide a geometric theory to explain the law. The real-world implication is that even a slow parameter change can lead to a system collapse with catastrophic consequences. In fact, to mitigate the environmental changes by merely slowing down the parameter drift may not always be effective: only when the rate of parameter change is reduced to practically zero would the tipping be avoided. Our global dynamics approach offers a more complete and physically meaningful way to understand the important phenomenon of R-tipping.
[ { "created": "Wed, 15 Nov 2023 17:32:08 GMT", "version": "v1" } ]
2023-11-16
[ [ "Panahi", "Shirin", "" ], [ "Do", "Younghae", "" ], [ "Hastings", "Alan", "" ], [ "Lai", "Ying-Cheng", "" ] ]
In an ecosystem, environmental changes as a result of natural and human processes can cause some key parameters of the system to change with time. Depending on how fast such a parameter changes, a tipping point can occur. Existing works on rate-induced tipping, or R-tipping, offered a theoretical way to study this phenomenon but from a local dynamical point of view, revealing, e.g., the existence of a critical rate for some specific initial condition above which a tipping point will occur. As ecosystems are subject to constant disturbances and can drift away from their equilibrium point, it is necessary to study R-tipping from a global perspective in terms of the initial conditions in the entire relevant phase space region. In particular, we introduce the notion of the probability of R-tipping defined for initial conditions taken from the whole relevant phase space. Using a number of real-world, complex mutualistic networks as a paradigm, we discover a scaling law between this probability and the rate of parameter change and provide a geometric theory to explain the law. The real-world implication is that even a slow parameter change can lead to a system collapse with catastrophic consequences. In fact, to mitigate the environmental changes by merely slowing down the parameter drift may not always be effective: only when the rate of parameter change is reduced to practically zero would the tipping be avoided. Our global dynamics approach offers a more complete and physically meaningful way to understand the important phenomenon of R-tipping.
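The sketch below estimates a probability of R-tipping over random initial conditions in the textbook one-dimensional prototype dx/dt = (x + lambda(t))^2 - 1, where lambda is ramped at a fixed rate (for this prototype the tracked state is lost once the rate exceeds 1). The prototype itself, the initial-condition window and the escape threshold are assumptions standing in for the high-dimensional mutualistic networks analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def tipping_probability(rate, n_ic=400, lam_max=8.0, dt=0.01, x_escape=50.0):
    """Fraction of random initial conditions that undergo R-tipping when the
    parameter lambda in dx/dt = (x + lambda)**2 - 1 is ramped at `rate`."""
    x = rng.uniform(-2.0, 1.0, size=n_ic)      # initial conditions across the basin at lambda = 0
    tipped = np.zeros(n_ic, dtype=bool)
    lam = 0.0
    while lam < lam_max:
        x = x + ((x + lam) ** 2 - 1.0) * dt    # explicit Euler step
        x = np.minimum(x, x_escape + 1.0)      # freeze trajectories that already escaped
        lam += rate * dt
        tipped |= (x + lam) > x_escape
    return float(tipped.mean())

# The probability over initial conditions rises with the rate of parameter
# change and saturates at 1 beyond the critical rate (= 1 for this prototype).
for rate in (0.2, 0.6, 0.9, 1.1, 1.5):
    print(f"rate = {rate:0.1f}   P(R-tipping) ~ {tipping_probability(rate):.2f}")
```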
2306.11536
Yu Takagi
Yu Takagi, Shinji Nishimoto
Improving visual image reconstruction from human brain activity using latent diffusion models via multiple decoded inputs
null
null
null
null
q-bio.NC cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
The integration of deep learning and neuroscience has been advancing rapidly, which has led to improvements in the analysis of brain activity and the understanding of deep learning models from a neuroscientific perspective. The reconstruction of visual experience from human brain activity is an area that has particularly benefited: the use of deep learning models trained on large amounts of natural images has greatly improved its quality, and approaches that combine the diverse information contained in visual experiences have proliferated rapidly in recent years. In this technical paper, by taking advantage of the simple and generic framework that we proposed (Takagi and Nishimoto, CVPR 2023), we examine the extent to which various additional decoding techniques affect the performance of visual experience reconstruction. Specifically, we combined our earlier work with the following three techniques: using decoded text from brain activity, nonlinear optimization for structural image reconstruction, and using decoded depth information from brain activity. We confirmed that these techniques contributed to improving accuracy over the baseline. We also discuss what researchers should consider when performing visual reconstruction using deep generative models trained on large datasets. Please check our webpage at https://sites.google.com/view/stablediffusion-with-brain/. Code is also available at https://github.com/yu-takagi/StableDiffusionReconstruction.
[ { "created": "Tue, 20 Jun 2023 13:48:02 GMT", "version": "v1" } ]
2023-06-21
[ [ "Takagi", "Yu", "" ], [ "Nishimoto", "Shinji", "" ] ]
The integration of deep learning and neuroscience has been advancing rapidly, which has led to improvements in the analysis of brain activity and the understanding of deep learning models from a neuroscientific perspective. The reconstruction of visual experience from human brain activity is an area that has particularly benefited: the use of deep learning models trained on large amounts of natural images has greatly improved its quality, and approaches that combine the diverse information contained in visual experiences have proliferated rapidly in recent years. In this technical paper, by taking advantage of the simple and generic framework that we proposed (Takagi and Nishimoto, CVPR 2023), we examine the extent to which various additional decoding techniques affect the performance of visual experience reconstruction. Specifically, we combined our earlier work with the following three techniques: using decoded text from brain activity, nonlinear optimization for structural image reconstruction, and using decoded depth information from brain activity. We confirmed that these techniques contributed to improving accuracy over the baseline. We also discuss what researchers should consider when performing visual reconstruction using deep generative models trained on large datasets. Please check our webpage at https://sites.google.com/view/stablediffusion-with-brain/. Code is also available at https://github.com/yu-takagi/StableDiffusionReconstruction.
2204.10073
Gabriel Palma
Gabriel R. Palma, Silvio S. Zocchi, Wesley A.C. Godoy and Jorge A. Wiendl
New confidence interval methods for Shannon index
17 pages
null
null
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
Several factors affect the structure of communities, including biological, physical and chemical phenomena, impacting the quantification of biodiversity, measured by diversity indexes such as Shannon's entropy. Once a point estimate is obtained, confidence interval methods, such as bootstrap methods, are often used. These methods, however, can perform quite differently, as many authors have shown over the last decade. Furthermore, problems such as the asymmetry of the distribution of estimates and the possibility of bias in the Shannon diversity index estimator can lead to incorrect recommendations to the research community. Thus, we propose two methods and compare their performance with that of seven others in the face of these problems. The first idea uses the credible interval (CI) method to build a bootstrap confidence interval. The second one starts by correcting the bias and then uses an asymptotic approach. To compare their performance, we considered 27 community structures representing scenarios with high dominance, high codominance or moderate dominance, with 4, 20 or 80 species and 10, 50 or 500 individuals. Then, we generated 1000 samples, built 95% confidence intervals, and calculated the percentage of times they included the community diversity index (coverage percentage) for each community structure. Our results showed that both proposed methods are feasible for estimating Shannon's diversity. The simulation study revealed that the bootstrap-t technique had the best performance, i.e., the best coverage percentage, among the methods compared. Finally, we illustrate the methodology by applying it to an original aphid and parasitoid species dataset. We recommend the bootstrap-t method when the community structure analysed is similar to the simulated ones. The methods also performed well in the high-dominance scenarios.
[ { "created": "Thu, 21 Apr 2022 13:08:35 GMT", "version": "v1" } ]
2022-04-22
[ [ "Palma", "Gabriel R.", "" ], [ "Zocchi", "Silvio S.", "" ], [ "Godoy", "Wesley A. C.", "" ], [ "Wiendl", "Jorge A.", "" ] ]
Several factors affect the structure of communities, including biological, physical and chemical phenomena, impacting the quantification of biodiversity, measured by diversity indexes such as Shannon's entropy. Once a point estimate is obtained, confidence interval methods, such as bootstrap methods, are often used. These methods, however, can perform quite differently, as many authors have shown over the last decade. Furthermore, problems such as the asymmetry of the distribution of estimates and the possibility of bias in the Shannon diversity index estimator can lead to incorrect recommendations to the research community. Thus, we propose two methods and compare their performance with that of seven others in the face of these problems. The first idea uses the credible interval (CI) method to build a bootstrap confidence interval. The second one starts by correcting the bias and then uses an asymptotic approach. To compare their performance, we considered 27 community structures representing scenarios with high dominance, high codominance or moderate dominance, with 4, 20 or 80 species and 10, 50 or 500 individuals. Then, we generated 1000 samples, built 95% confidence intervals, and calculated the percentage of times they included the community diversity index (coverage percentage) for each community structure. Our results showed that both proposed methods are feasible for estimating Shannon's diversity. The simulation study revealed that the bootstrap-t technique had the best performance, i.e., the best coverage percentage, among the methods compared. Finally, we illustrate the methodology by applying it to an original aphid and parasitoid species dataset. We recommend the bootstrap-t method when the community structure analysed is similar to the simulated ones. The methods also performed well in the high-dominance scenarios.
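To make the recommended bootstrap-t interval concrete, here is a minimal sketch for Shannon's index. The delta-method standard error used inside each resample is a standard choice assumed for brevity and may differ from the paper's exact estimator.

```python
# Minimal sketch of a bootstrap-t confidence interval for Shannon's index.
import numpy as np

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def shannon_se(counts):
    """Delta-method standard error of the plug-in Shannon estimator."""
    n = counts.sum()
    p = counts[counts > 0] / n
    h = -np.sum(p * np.log(p))
    return np.sqrt(max(np.sum(p * np.log(p) ** 2) - h ** 2, 0.0) / n)

def bootstrap_t_ci(counts, b=2000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    n = counts.sum()
    h_hat, se_hat = shannon(counts), shannon_se(counts)
    t_stats = []
    for _ in range(b):
        resample = rng.multinomial(n, counts / n)  # resample individuals
        se_b = shannon_se(resample)
        if se_b > 0:
            t_stats.append((shannon(resample) - h_hat) / se_b)
    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    return h_hat - hi * se_hat, h_hat - lo * se_hat  # bootstrap-t interval

counts = np.array([50, 30, 12, 8])  # toy 4-species community sample
print("H =", round(shannon(counts), 3), " 95% CI:", bootstrap_t_ci(counts))
```

Studentizing each resample before taking quantiles is what distinguishes bootstrap-t from the plain percentile bootstrap, and is the usual explanation for its better coverage.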
1305.0159
Anthony J Cox
Lilian Janin and Giovanna Rosone and Anthony J. Cox
Adaptive reference-free compression of sequence quality scores
Accepted paper for HiTSeq 2013, to appear in Bioinformatics. Bioinformatics should be considered the original place of publication of this work, please cite accordingly
null
null
null
q-bio.GN cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Rapid technological progress in DNA sequencing has stimulated interest in compressing the vast datasets that are now routinely produced. Relatively little attention has been paid to compressing the quality scores that are assigned to each sequence, even though these scores may be harder to compress than the sequences themselves. By aggregating a set of reads into a compressed index, we find that the majority of bases can be predicted from the sequence of bases that are adjacent to them and hence are likely to be less informative for variant calling or other applications. The quality scores for such bases are aggressively compressed, leaving a relatively small number at full resolution. Since our approach relies directly on redundancy present in the reads, it does not need a reference sequence and is therefore applicable to data from metagenomics and de novo experiments as well as to resequencing data. Results: We show that a conservative smoothing strategy affecting 75% of the quality scores above Q2 leads to an overall quality score compression of 1 bit per value with a negligible effect on variant calling. A compression of 0.68 bit per quality value is achieved using a more aggressive smoothing strategy, again with a very small effect on variant calling. Availability: Code to construct the BWT and LCP-array on large genomic data sets is part of the BEETL library, available as a github repository at http://git@github.com:BEETL/BEETL.git.
[ { "created": "Wed, 1 May 2013 12:51:10 GMT", "version": "v1" } ]
2013-05-02
[ [ "Janin", "Lilian", "" ], [ "Rosone", "Giovanna", "" ], [ "Cox", "Anthony J.", "" ] ]
Motivation: Rapid technological progress in DNA sequencing has stimulated interest in compressing the vast datasets that are now routinely produced. Relatively little attention has been paid to compressing the quality scores that are assigned to each sequence, even though these scores may be harder to compress than the sequences themselves. By aggregating a set of reads into a compressed index, we find that the majority of bases can be predicted from the sequence of bases that are adjacent to them and hence are likely to be less informative for variant calling or other applications. The quality scores for such bases are aggressively compressed, leaving a relatively small number at full resolution. Since our approach relies directly on redundancy present in the reads, it does not need a reference sequence and is therefore applicable to data from metagenomics and de novo experiments as well as to resequencing data. Results: We show that a conservative smoothing strategy affecting 75% of the quality scores above Q2 leads to an overall quality score compression of 1 bit per value with a negligible effect on variant calling. A compression of 0.68 bit per quality value is achieved using a more aggressive smoothing strategy, again with a very small effect on variant calling. Availability: Code to construct the BWT and LCP-array on large genomic data sets is part of the BEETL library, available as a github repository at http://git@github.com:BEETL/BEETL.git.
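The core smoothing idea, collapsing the quality scores of bases whose identity is predictable from their sequence context, can be sketched without the BWT machinery. The k-mer hash table below is a simplified stand-in for the paper's compressed index; K, the cutoff and the replacement quality are assumed values.

```python
# Simplified sketch: bases well predicted by their left context get their
# quality collapsed to one value; the rest keep full resolution.
from collections import Counter, defaultdict

K = 5            # left-context length (assumed)
CUTOFF = 0.9     # predictability threshold (assumed)
SMOOTHED_Q = 30  # replacement quality for predictable bases (assumed)

def build_context_index(reads):
    index = defaultdict(Counter)
    for seq in reads:
        for i in range(K, len(seq)):
            index[seq[i - K:i]][seq[i]] += 1   # context -> next-base counts
    return index

def smooth_qualities(seq, quals, index):
    out = list(quals)
    for i in range(K, len(seq)):
        ctx = index.get(seq[i - K:i])
        if ctx and ctx[seq[i]] / sum(ctx.values()) >= CUTOFF:
            out[i] = SMOOTHED_Q                # base predicted by context
    return out

reads = ["ACGTACGTACGT", "ACGTACGTTCGT", "ACGTACGTACGA"]
quals = [38, 35, 37, 40, 33, 36, 39, 32, 31, 30, 29, 28]
index = build_context_index(reads)
print(smooth_qualities(reads[0], quals, index))
```

The actual method scores predictability over the whole read set via a BWT-based index rather than a fixed-width hash, which is what makes it scale to full sequencing runs.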
1908.03686
Yiqiang Wang
Yiqiang Wang
Shuyi, A Name After Dendritic Cell-mediated Immunological Memory
6 pages; 2 figures, 13 references
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Immunological memory is a fundamental concept of modern immunology, traditionally believed to be mediated only by B and T lymphocytes that recognize antigen epitopes in a receptor-restricted manner. During the last decade, data have accumulated showing that monocytes and macrophages, the two main initiators of the innate immune response, also build up a "memory" of antigens they have encountered, though most of the publications concerned used different wording (i.e., "train" or "educate") to describe this feature. More recently, Hole et al. demonstrated a "memory-like" response of dendritic cells (DCs). In brief, when fungal-challenged mice developed a protective immune response, DCs isolated from those mice shortly afterwards (within 3 weeks) manifested a pro-inflammatory phenotype. Even after the mice were allowed to rest for 10 weeks, DCs from them still exhibited an enhanced immune activation profile in their transcriptome and cytokine production upon re-challenge with the same pathogens. Lastly, Hole showed that the "training", or memory-building, in DCs was achieved by histone modification. All the above findings in monocytes, macrophages or DCs emphasize the need to recheck the question of whether antigen-presenting cells (APCs) as a whole could be classified as a third class of cells that mediate immunological memory. In this essay, the author describes the effort he made in the late 1990s to identify dendritic cell-mediated memory, and how he named his daughter SHUYI to commemorate that hypothesis.
[ { "created": "Sat, 10 Aug 2019 04:14:24 GMT", "version": "v1" }, { "created": "Fri, 5 Nov 2021 01:59:28 GMT", "version": "v2" } ]
2023-09-27
[ [ "Wang", "Yiqiang", "" ] ]
Immunological memory is a fundamental concept of modern immunology, traditionally believed to be mediated only by B and T lymphocytes that recognize antigen epitopes in a receptor-restricted manner. During the last decade, data have accumulated showing that monocytes and macrophages, the two main initiators of the innate immune response, also build up a "memory" of antigens they have encountered, though most of the publications concerned used different wording (i.e., "train" or "educate") to describe this feature. More recently, Hole et al. demonstrated a "memory-like" response of dendritic cells (DCs). In brief, when fungal-challenged mice developed a protective immune response, DCs isolated from those mice shortly afterwards (within 3 weeks) manifested a pro-inflammatory phenotype. Even after the mice were allowed to rest for 10 weeks, DCs from them still exhibited an enhanced immune activation profile in their transcriptome and cytokine production upon re-challenge with the same pathogens. Lastly, Hole showed that the "training", or memory-building, in DCs was achieved by histone modification. All the above findings in monocytes, macrophages or DCs emphasize the need to recheck the question of whether antigen-presenting cells (APCs) as a whole could be classified as a third class of cells that mediate immunological memory. In this essay, the author describes the effort he made in the late 1990s to identify dendritic cell-mediated memory, and how he named his daughter SHUYI to commemorate that hypothesis.
q-bio/0607024
Claire Christensen
C. Christensen, A. Gupta, C.D. Maranas and R. Albert
Large-scale inference and graph theoretical analysis of gene-regulatory networks in B. subtilis
22 pages, 4 figures, accepted for publication in Physica A (2006)
null
10.1016/j.physa.2006.04.118
null
q-bio.MN cond-mat.stat-mech q-bio.SC
null
We present the methods and results of a two-stage modeling process that generates candidate gene-regulatory networks of the bacterium B. subtilis from experimentally obtained, yet mathematically underdetermined microchip array data. By employing a computational, linear correlative procedure to generate these networks, and by analyzing the networks from a graph theoretical perspective, we are able to verify the biological viability of our inferred networks, and we demonstrate that our networks' graph theoretical properties are remarkably similar to those of other biological systems. In addition, by comparing our inferred networks to those of a previous, noisier implementation of the linear inference process [17], we are able to identify trends in graph theoretical behavior that occur both in our networks and in their perturbed counterparts. These commonalities in behavior at multiple levels of complexity allow us to ascertain the levels of complexity at which our process is robust to noise.
[ { "created": "Tue, 18 Jul 2006 20:50:53 GMT", "version": "v1" } ]
2009-11-13
[ [ "Christensen", "C.", "" ], [ "Gupta", "A.", "" ], [ "Maranas", "C. D.", "" ], [ "Albert", "R.", "" ] ]
We present the methods and results of a two-stage modeling process that generates candidate gene-regulatory networks of the bacterium B. subtilis from experimentally obtained, yet mathematically underdetermined microchip array data. By employing a computational, linear correlative procedure to generate these networks, and by analyzing the networks from a graph theoretical perspective, we are able to verify the biological viability of our inferred networks, and we demonstrate that our networks' graph theoretical properties are remarkably similar to those of other biological systems. In addition, by comparing our inferred networks to those of a previous, noisier implementation of the linear inference process [17], we are able to identify trends in graph theoretical behavior that occur both in our networks and in their perturbed counterparts. These commonalities in behavior at multiple levels of complexity allow us to ascertain the levels of complexity at which our process is robust to noise.
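In the spirit of the linear correlative inference step described above, a minimal sketch: regress each gene's expression change on all genes' levels with ridge regularization (the data are underdetermined) and threshold the coefficients into a candidate network. The toy data, regularization and threshold are illustrative assumptions, not the paper's exact pipeline.

```python
# Ridge-regularized linear network inference on toy expression data,
# followed by a first graph-theoretical summary (edge count, degrees).
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples = 20, 8                  # far fewer samples than genes
X = rng.normal(size=(n_samples, n_genes))   # expression levels (toy data)
dX = rng.normal(size=(n_samples, n_genes))  # expression changes (toy data)

def infer_network(X, dX, ridge=1.0, thresh=0.3):
    """Solve dX ~ X @ W columnwise with ridge regularization."""
    g = X.shape[1]
    W = np.linalg.solve(X.T @ X + ridge * np.eye(g), X.T @ dX)
    A = (np.abs(W) > thresh).astype(int)    # adjacency of candidate network
    np.fill_diagonal(A, 0)                  # drop self-loops
    return A

A = infer_network(X, dX)
degrees = A.sum(axis=0) + A.sum(axis=1)
print("edges:", A.sum(), " mean degree:", degrees.mean())
```

The degree statistics computed at the end are the entry point for the kind of graph-theoretical comparison the abstract describes.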
2204.12526
Sakira Hassan
Syeda Sakira Hassan, Rahul Mangayil, Tommi Aho, Olli Yli-Harja, Matti Karp
Identification of feasible pathway information for c-di-GMP binding proteins in cellulose production
null
EMBEC & NBC 2017. EMBEC NBC 2017 2017. IFMBE Proceedings, vol 65. Springer, Singapore
null
null
q-bio.QM cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In this paper, we utilize a machine learning approach to identify the significant pathways for c-di-GMP signaling proteins. The dataset involves gene counts from 12 pathways and 5 essential c-di-GMP binding domains for 1024 bacterial genomes. Two approaches, the least absolute shrinkage and selection operator (Lasso) and random forests, have been applied for analyzing and modeling the dataset. Both approaches show that bacterial chemotaxis is the most essential pathway for c-di-GMP encoding domains. Though popular for feature selection, the Lasso method's strong regularization fails to associate any pathway with the MshE domain. Results from the analysis may help to understand and emphasize the supporting pathways involved in bacterial cellulose production. These findings demonstrate the need for a chassis to restrict behavior or functionality by deactivating selected pathways in cellulose production.
[ { "created": "Tue, 26 Apr 2022 18:22:13 GMT", "version": "v1" } ]
2022-04-28
[ [ "Hassan", "Syeda Sakira", "" ], [ "Mangayil", "Rahul", "" ], [ "Aho", "Tommi", "" ], [ "Yli-Harja", "Olli", "" ], [ "Karp", "Matti", "" ] ]
In this paper, we utilize a machine learning approach to identify the significant pathways for c-di-GMP signaling proteins. The dataset involves gene counts from 12 pathways and 5 essential c-di-GMP binding domains for 1024 bacterial genomes. Two approaches, the least absolute shrinkage and selection operator (Lasso) and random forests, have been applied for analyzing and modeling the dataset. Both approaches show that bacterial chemotaxis is the most essential pathway for c-di-GMP encoding domains. Though popular for feature selection, the Lasso method's strong regularization fails to associate any pathway with the MshE domain. Results from the analysis may help to understand and emphasize the supporting pathways involved in bacterial cellulose production. These findings demonstrate the need for a chassis to restrict behavior or functionality by deactivating selected pathways in cellulose production.
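A hedged sketch of the two analyses on toy data: Lasso coefficients and random-forest feature importances ranking 12 pathway gene-count features for one binding domain. The shapes mirror the description above, but the data, feature names and hyperparameters are simulated assumptions.

```python
# Lasso and random-forest pathway ranking on simulated gene-count data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
pathways = [f"pathway_{i}" for i in range(12)]        # 12 pathways, as above
X = rng.poisson(5.0, size=(1024, 12)).astype(float)   # counts per genome
y = 2.0 * X[:, 3] + rng.normal(scale=2.0, size=1024)  # toy domain response

lasso = Lasso(alpha=0.5).fit(X, y)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

for name, coef, imp in zip(pathways, lasso.coef_,
                           forest.feature_importances_):
    print(f"{name:12s}  lasso={coef:+.3f}  rf_importance={imp:.3f}")
```

On data like these, Lasso's L1 penalty zeroes out weak features entirely, which is the behavior the abstract points to when it notes that the strong regularization can fail to associate any pathway with a domain.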
1904.03290
Mike Steel Prof.
Mike Steel, Wim Hordijk, Stuart A. Kauffman
Dynamics of a birth-death process based on combinatorial innovation
21 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A feature of human creativity is the ability to take a subset of existing items (e.g. objects, ideas, or techniques) and combine them in various ways to give rise to new items, which, in turn, fuel further growth. Occasionally, some of these items may also disappear (extinction). We model this process by a simple stochastic birth--death model, with non-linear combinatorial terms in the growth coefficients to capture the propensity of subsets of items to give rise to new items. In its simplest form, this model involves just two parameters $(P, \alpha)$. This process exhibits a characteristic 'hockey-stick' behaviour: a long period of relatively little growth followed by a relatively sudden 'explosive' increase. We provide exact expressions for the mean and variance of this time to explosion and compare the results with simulations. We then generalise our results to allow for more general parameter assignments, and consider possible applications to data involving human productivity and creativity.
[ { "created": "Fri, 5 Apr 2019 21:38:14 GMT", "version": "v1" }, { "created": "Mon, 19 Aug 2019 20:16:43 GMT", "version": "v2" } ]
2019-08-21
[ [ "Steel", "Mike", "" ], [ "Hordijk", "Wim", "" ], [ "Kauffman", "Stuart A.", "" ] ]
A feature of human creativity is the ability to take a subset of existing items (e.g. objects, ideas, or techniques) and combine them in various ways to give rise to new items, which, in turn, fuel further growth. Occasionally, some of these items may also disappear (extinction). We model this process by a simple stochastic birth--death model, with non-linear combinatorial terms in the growth coefficients to capture the propensity of subsets of items to give rise to new items. In its simplest form, this model involves just two parameters $(P, \alpha)$. This process exhibits a characteristic 'hockey-stick' behaviour: a long period of relatively little growth followed by a relatively sudden 'explosive' increase. We provide exact expressions for the mean and variance of this time to explosion and compare the results with simulations. We then generalise our results to allow for more general parameter assignments, and consider possible applications to data involving human productivity and creativity.
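The hockey-stick behaviour can be reproduced qualitatively with a short Gillespie simulation. Keeping only a pairwise-combination term in the birth rate is an illustrative simplification of the paper's general subset-based model, and all parameter values are assumptions.

```python
# Gillespie-style simulation of a birth-death process whose birth rate has a
# non-linear combinatorial term: long near-linear growth, then an explosive
# phase once pairwise combinations dominate.
import numpy as np

def time_to_explosion(P=0.01, alpha=0.1, mu=0.9, m0=20, m_max=2000, seed=0):
    rng = np.random.default_rng(seed)
    m, t = m0, 0.0
    while 0 < m < m_max:
        birth = m + P * alpha * m * (m - 1) / 2.0  # combinatorial innovation
        death = mu * m                             # extinction of items
        total = birth + death
        t += rng.exponential(1.0 / total)          # Gillespie waiting time
        m += 1 if rng.random() < birth / total else -1
    return t if m >= m_max else np.inf             # inf: everything died out

times = [time_to_explosion(seed=s) for s in range(20)]
finite = [t for t in times if np.isfinite(t)]
print(f"explosions: {len(finite)}/20, "
      f"mean time to explosion ~ {np.mean(finite):.1f}")
```

Because the quadratic term only overtakes the linear one above a few hundred items, trajectories spend most of their lifetime in the slow phase, which is exactly the long handle of the hockey stick.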
0911.2387
Sylvain Hanneton
Sylvain Hanneton (NPSM), Ana\"is Varenne (NPSM)
Coaching the Wii: evaluation of a physical training experiment assisted by a video game
4 pages
"Haptic, Audio, Visual Environments and Games" (HAVE'09), Lecco : Italy (2009)
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aging or sedentary behavior can decrease motor capabilities, causing a loss of autonomy. Prevention or readaptation programs that involve the practice of physical activities can be valuable tools in fighting this phenomenon. "Serious" video games have the potential to help people train their bodies, mainly due to the immersion of the participant in a motivating interaction with virtual environments. We propose here to discuss the results of a preliminary study that evaluated a training program using the well-known WiiFit game and Wii balance board device in participants of different ages. Our results showed that participants were satisfied with the program and that they progressed in their level of performance. The most important observation of this study, however, was that the presence of a real human coach is necessary, in particular for senior participants, for security reasons but also to help them deal with difficulties in immersive situations.
[ { "created": "Thu, 12 Nov 2009 13:31:33 GMT", "version": "v1" } ]
2009-11-13
[ [ "Hanneton", "Sylvain", "", "NPSM" ], [ "Varenne", "Anaïs", "", "NPSM" ] ]
Aging or sedentary behavior can decrease motor capabilities, causing a loss of autonomy. Prevention or readaptation programs that involve the practice of physical activities can be valuable tools in fighting this phenomenon. "Serious" video games have the potential to help people train their bodies, mainly due to the immersion of the participant in a motivating interaction with virtual environments. We propose here to discuss the results of a preliminary study that evaluated a training program using the well-known WiiFit game and Wii balance board device in participants of different ages. Our results showed that participants were satisfied with the program and that they progressed in their level of performance. The most important observation of this study, however, was that the presence of a real human coach is necessary, in particular for senior participants, for security reasons but also to help them deal with difficulties in immersive situations.
2311.17785
Claire Guerrier
Claire Guerrier (LJAD, IRL CRM-CNRS), Tristan Dellazizzo Toth, Nicolas Galtier, Kurt Haas
An Algorithm Based on a Cable-Nernst Planck Model Predicting Synaptic Activity throughout the Dendritic Arbor with Micron Specificity
null
Neuroinformatics, 2023, 21 (1), pp.207-220
10.1007/s12021-022-09609-z
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent technological advances have enabled the recording of neurons in intact circuits with a high spatial and temporal resolution, creating the need for modeling with the same precision. In particular, the development of ultra-fast two-photon microscopy combined with fluorescence-based genetically-encoded Ca2+-indicators allows capture of full-dendritic arbor and somatic responses associated with synaptic input and action potential output. The complexity of dendritic arbor structures and distributed patterns of activity over time results in the generation of incredibly rich 4D datasets that are challenging to analyze (Sakaki, 2020). Interpreting neural activity from fluorescence-based Ca2+ biosensors is challenging due to non-linear interactions between several factors influencing intracellular calcium ion concentration and its binding to sensors, including the ionic dynamics driven by diffusion, electrical gradients and voltage-gated conductance. To investigate those dynamics, we designed a model based on a Cable-like equation coupled to the Nernst-Planck equations for ionic fluxes in electrolytes. We employ this model to simulate signal propagation and ionic electrodiffusion across a dendritic arbor. Using these simulation results, we then designed an algorithm to detect synapses from Ca2+ imaging datasets. We finally apply this algorithm to experimental Ca2+-indicator datasets from neurons expressing jGCaMP7s (Dana et al., 2019), using full-dendritic arbor sampling in vivo in the Xenopus laevis optic tectum using fast random-access two-photon microscopy. Our model reproduces the dynamics of visual stimulus-evoked jGCaMP7s-mediated calcium signals observed experimentally, and the resulting algorithm allows prediction of the location of synapses across the dendritic arbor. Our study provides a way to predict synaptic activity and location on dendritic arbors, from fluorescence data in the full dendritic arbor of a neuron recorded in the intact and awake developing vertebrate brain.
[ { "created": "Wed, 29 Nov 2023 16:32:06 GMT", "version": "v1" } ]
2023-11-30
[ [ "Guerrier", "Claire", "", "LJAD, IRL CRM-CNRS" ], [ "Toth", "Tristan Dellazizzo", "" ], [ "Galtier", "Nicolas", "" ], [ "Haas", "Kurt", "" ] ]
Recent technological advances have enabled the recording of neurons in intact circuits with a high spatial and temporal resolution, creating the need for modeling with the same precision. In particular, the development of ultra-fast two-photon microscopy combined with fluorescence-based genetically-encoded Ca2+-indicators allows capture of full-dendritic arbor and somatic responses associated with synaptic input and action potential output. The complexity of dendritic arbor structures and distributed patterns of activity over time results in the generation of incredibly rich 4D datasets that are challenging to analyze (Sakaki, 2020). Interpreting neural activity from fluorescence-based Ca2+ biosensors is challenging due to non-linear interactions between several factors influencing intracellular calcium ion concentration and its binding to sensors, including the ionic dynamics driven by diffusion, electrical gradients and voltage-gated conductance. To investigate those dynamics, we designed a model based on a Cable-like equation coupled to the Nernst-Planck equations for ionic fluxes in electrolytes. We employ this model to simulate signal propagation and ionic electrodiffusion across a dendritic arbor. Using these simulation results, we then designed an algorithm to detect synapses from Ca2+ imaging datasets. We finally apply this algorithm to experimental Ca2+-indicator datasets from neurons expressing jGCaMP7s (Dana et al., 2019), using full-dendritic arbor sampling in vivo in the Xenopus laevis optic tectum using fast random-access two-photon microscopy. Our model reproduces the dynamics of visual stimulus-evoked jGCaMP7s-mediated calcium signals observed experimentally, and the resulting algorithm allows prediction of the location of synapses across the dendritic arbor. Our study provides a way to predict synaptic activity and location on dendritic arbors, from fluorescence data in the full dendritic arbor of a neuron recorded in the intact and awake developing vertebrate brain.
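The cable-equation backbone of such a model can be sketched with an explicit finite-difference scheme. The Nernst-Planck electrodiffusion coupling, active conductances and realistic morphology are omitted here, and all parameters are illustrative.

```python
# Explicit finite-difference sketch of a passive cable,
# tau * dV/dt = lam^2 * d2V/dx2 - V + injection, on a 200 um segment.
import numpy as np

L, N = 200.0, 201          # cable length (um) and grid points (assumed)
dx = L / (N - 1)
lam2, tau = 50.0**2, 10.0  # length constant^2 (um^2), time constant (ms)
dt = 0.2 * dx**2 * tau / lam2   # well inside the explicit stability limit

V = np.zeros(N)
stim = slice(95, 106)      # synaptic-like current injection in the middle
for step in range(int(50.0 / dt)):          # 50 ms of simulated time
    d2V = np.zeros(N)
    d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    V[1:-1] += dt * (lam2 * d2V[1:-1] - V[1:-1]) / tau  # diffusion + leak
    if step * dt < 5.0:                     # 5 ms current pulse
        V[stim] += dt * 1.0
print("peak depolarization (a.u.):", V.max().round(3))
```

In the full model, the single voltage variable is replaced by coupled ionic concentration fields whose fluxes obey the Nernst-Planck equations, which is what lets the simulated fluorescence be compared with Ca2+-indicator data.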
1902.05723
Iaroslav Ispolatov
Iaroslav Ispolatov, Evgeniia Alekseeva, and Michael Doebeli
Competition-driven evolution of organismal complexity
Open PDF with Acrobat to see movies, 22 pages, 9 figures
PLoS Comput Biol 15(10): e1007388. https://doi.org/10.1371/journal.pcbi.1007388 (2019)
10.1371/journal.pcbi.1007388
null
q-bio.PE cond-mat.stat-mech nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-uniform rates of morphological evolution and evolutionary increases in organismal complexity, captured in metaphors like "adaptive zones", "punctuated equilibrium" and "blunderbuss patterns", require more elaborate explanations than a simple gradual accumulation of mutations. Here we argue that non-uniform evolutionary increases in phenotypic complexity can be caused by a threshold-like response to growing ecological pressures resulting from evolutionary diversification at a given level of complexity. Acquisition of a new phenotypic feature allows an evolving species to escape this pressure but can typically be expected to carry significant physiological costs. Therefore, the ecological pressure should exceed a certain level to make such an acquisition evolutionarily successful. We present a detailed quantitative description of this process using a microevolutionary competition model as an example. The model exhibits sequential increases in phenotypic complexity driven by diversification at existing levels of complexity and the resulting increase in competitive pressure, which can push an evolving species over the barrier of physiological costs of new phenotypic features.
[ { "created": "Fri, 15 Feb 2019 08:41:40 GMT", "version": "v1" } ]
2020-07-01
[ [ "Ispolatov", "Iaroslav", "" ], [ "Alekseeva", "Evgeniia", "" ], [ "Doebeli", "Michael", "" ] ]
Non-uniform rates of morphological evolution and evolutionary increases in organismal complexity, captured in metaphors like "adaptive zones", "punctuated equilibrium" and "blunderbuss patterns", require more elaborate explanations than a simple gradual accumulation of mutations. Here we argue that non-uniform evolutionary increases in phenotypic complexity can be caused by a threshold-like response to growing ecological pressures resulting from evolutionary diversification at a given level of complexity. Acquisition of a new phenotypic feature allows an evolving species to escape this pressure but can typically be expected to carry significant physiological costs. Therefore, the ecological pressure should exceed a certain level to make such an acquisition evolutionarily successful. We present a detailed quantitative description of this process using a microevolutionary competition model as an example. The model exhibits sequential increases in phenotypic complexity driven by diversification at existing levels of complexity and the resulting increase in competitive pressure, which can push an evolving species over the barrier of physiological costs of new phenotypic features.
1604.04903
Thorsten Pr\"ustel
Thorsten Pr\"ustel and Martin Meier-Schellersheim
Path integral approach to theories of diffusion-influenced reactions
13 pages
null
10.1103/PhysRevE.96.022151
null
q-bio.QM cond-mat.stat-mech
http://creativecommons.org/publicdomain/zero/1.0/
The path decomposition expansion represents the propagator of the irreversible reaction as a convolution of the first-passage, last-passage and rebinding time probability densities. Using path integral techniques, we give an elementary, yet rigorous, proof of the path decomposition expansion of the Green's functions describing the non-reactive case and the irreversible reaction of an isolated pair of molecules. To this end, we exploit the connection between boundary value problems and interaction potential problems with $\delta$- and $\delta'$-function perturbation. In particular, we employ a known exact summation of a perturbation series to derive exact relations between the Green's functions of the perturbed and unperturbed problem. Along the way, we are able to derive a number of additional exact identities that relate the propagators describing the free-space, the non-reactive as well as the completely and partially reactive case.
[ { "created": "Sun, 17 Apr 2016 18:05:02 GMT", "version": "v1" } ]
2017-09-13
[ [ "Prüstel", "Thorsten", "" ], [ "Meier-Schellersheim", "Martin", "" ] ]
The path decomposition expansion represents the propagator of the irreversible reaction as a convolution of the first-passage, last-passage and rebinding time probability densities. Using path integral techniques, we give an elementary, yet rigorous, proof of the path decomposition expansion of the Green's functions describing the non-reactive case and the irreversible reaction of an isolated pair of molecules. To this end, we exploit the connection between boundary value problems and interaction potential problems with $\delta$- and $\delta'$-function perturbation. In particular, we employ a known exact summation of a perturbation series to derive exact relations between the Green's functions of the perturbed and unperturbed problem. Along the way, we are able to derive a number of additional exact identities that relate the propagators describing the free-space, the non-reactive as well as the completely and partially reactive case.
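Suppressing the spatial arguments that the full Green's functions carry, the decomposition stated in this abstract can be written schematically as nested time convolutions; the notation below is a hedged paraphrase, not the paper's exact expression.

```latex
% Schematic form of the path decomposition expansion: the irreversible
% propagator as a convolution of first-passage (F), rebinding (R) and
% last-passage (L) probability densities; spatial arguments suppressed.
\[
  p_{\mathrm{irr}}(t) \;=\; (L * R * F)(t)
  \;=\; \int_0^{t} \mathrm{d}t_2 \int_0^{t_2} \mathrm{d}t_1\,
        L(t - t_2)\, R(t_2 - t_1)\, F(t_1).
\]
```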
2009.01923
Yaron Oz
Yaron Oz, Ittai Rubinstein, Muli Safra
Heterogeneity and Superspreading Effect on Herd Immunity
16 pages, 5 figures, includes population based simulations
null
null
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model and calculate the fraction of infected population necessary to reach herd immunity, taking into account the heterogeneity in infectiousness and susceptibility, as well as the correlation between those two parameters. We show that these cause the effective reproduction number to decrease more rapidly, and consequently have a drastic effect on the estimate of the necessary percentage of the population that has to contract the disease for herd immunity to be reached. We quantify the difference between the size of the infected population when the effective reproduction number decreases below 1 vs. the ultimate fraction of population that had contracted the disease. This sheds light on an important distinction between herd immunity and the end of the disease and highlights the importance of limiting the spread of the disease even if we plan to naturally reach herd immunity. We analyze the effect of various lock-down scenarios on the resulting final fraction of infected population. We discuss implications to COVID-19 and other pandemics and compare our theoretical results to population-based simulations. We consider the dependence of the disease spread on the architecture of the infectiousness graph and analyze different graph architectures and the limitations of the graph models.
[ { "created": "Tue, 1 Sep 2020 09:43:38 GMT", "version": "v1" }, { "created": "Mon, 12 Oct 2020 18:03:04 GMT", "version": "v2" }, { "created": "Fri, 15 Jan 2021 15:06:06 GMT", "version": "v3" } ]
2021-01-19
[ [ "Oz", "Yaron", "" ], [ "Rubinstein", "Ittai", "" ], [ "Safra", "Muli", "" ] ]
We model and calculate the fraction of infected population necessary to reach herd immunity, taking into account the heterogeneity in infectiousness and susceptibility, as well as the correlation between those two parameters. We show that these cause the effective reproduction number to decrease more rapidly, and consequently have a drastic effect on the estimate of the necessary percentage of the population that has to contract the disease for herd immunity to be reached. We quantify the difference between the size of the infected population when the effective reproduction number decreases below 1 vs. the ultimate fraction of population that had contracted the disease. This sheds light on an important distinction between herd immunity and the end of the disease and highlights the importance of limiting the spread of the disease even if we plan to naturally reach herd immunity. We analyze the effect of various lock-down scenarios on the resulting final fraction of infected population. We discuss implications to COVID-19 and other pandemics and compare our theoretical results to population-based simulations. We consider the dependence of the disease spread on the architecture of the infectiousness graph and analyze different graph architectures and the limitations of the graph models.
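The faster decline of the effective reproduction number under heterogeneity can be checked numerically in a few lines. Gamma-distributed susceptibility and the exposure-depletion bookkeeping below are illustrative modeling assumptions, not the paper's exact framework.

```python
# Heterogeneous susceptibility lowers the herd-immunity threshold: under
# cumulative exposure Lambda, an individual of susceptibility s remains
# susceptible with probability exp(-s*Lambda), so the most susceptible are
# depleted first and R_eff falls faster than in the homogeneous case.
import numpy as np

rng = np.random.default_rng(0)
R0 = 3.0
for k in [np.inf, 4.0, 1.0, 0.25]:          # gamma shape; inf = homogeneous
    s = np.ones(50_000) if np.isinf(k) else rng.gamma(k, 1.0 / k, 50_000)
    for lam in np.arange(0.0, 5.0, 0.01):    # cumulative exposure Lambda
        w = np.exp(-s * lam)                 # per-individual escape prob.
        if R0 * np.mean(s * w) / np.mean(s) <= 1.0:   # R_eff reaches 1
            print(f"shape k = {k}: herd immunity at attack fraction "
                  f"{1.0 - np.mean(w):.2f}")
            break
```

The homogeneous case recovers the classic 1 - 1/R0 threshold, while broader susceptibility distributions (smaller k) reach R_eff = 1 at a markedly smaller attack fraction, and the gap between that point and the eventual final size is the distinction the abstract emphasizes.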
2401.12498
Chuanbo Liu
Chuanbo Liu, Yu Fu, Lu Lin, Elliot L. Elson and Jin Wang
Understanding Cellular Noise with Optical Perturbation and Deep Learning
33 pages, 4 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Noise plays a crucial role in the regulation of cellular and organismal function and behavior. Exploring noise's impact is key to understanding fundamental biological processes, such as gene expression, signal transduction, and the mechanisms of development and evolution. Currently, a comprehensive method to quantify dynamical behavior of cellular noise within these biochemical systems is lacking. In this study, we introduce an optically-controlled perturbation system utilizing the light-sensitive Phytochrome B (PhyB) from \textit{Arabidopsis thaliana}, which enables precise noise modulation with high spatial-temporal resolution. Our system exhibits exceptional sensitivity to light, reacting consistently to pulsed light signals, distinguishing it from other photoreceptor-based promoter systems that respond to a single light wavelength. To characterize our system, we developed a stochastic model for phytochromes that accounts for photoactivation/deactivation, thermal reversion, and the dynamics of the light-activated gene promoter system. To precisely control our system, we determined the rate constants for this model using an omniscient deep neural network that can directly map rate constant combinations to time-dependent state joint distributions. By adjusting the activation rates through light intensity and degradation rates via N-terminal mutagenesis, we illustrate that our optically-controlled perturbation can effectively modulate molecular expression levels as well as noise. Our results highlight the potential of employing an optically-controlled gene perturbation system as a noise-controlled stimulus source. This approach, when combined with the analytical capabilities of a sophisticated deep neural network, enables the accurate estimation of rate constants from observational data in a broad range of biochemical reaction networks.
[ { "created": "Tue, 23 Jan 2024 05:48:20 GMT", "version": "v1" } ]
2024-01-24
[ [ "Liu", "Chuanbo", "" ], [ "Fu", "Yu", "" ], [ "Lin", "Lu", "" ], [ "Elson", "Elliot L.", "" ], [ "Wang", "Jin", "" ] ]
Noise plays a crucial role in the regulation of cellular and organismal function and behavior. Exploring noise's impact is key to understanding fundamental biological processes, such as gene expression, signal transduction, and the mechanisms of development and evolution. Currently, a comprehensive method to quantify dynamical behavior of cellular noise within these biochemical systems is lacking. In this study, we introduce an optically-controlled perturbation system utilizing the light-sensitive Phytochrome B (PhyB) from \textit{Arabidopsis thaliana}, which enables precise noise modulation with high spatial-temporal resolution. Our system exhibits exceptional sensitivity to light, reacting consistently to pulsed light signals, distinguishing it from other photoreceptor-based promoter systems that respond to a single light wavelength. To characterize our system, we developed a stochastic model for phytochromes that accounts for photoactivation/deactivation, thermal reversion, and the dynamics of the light-activated gene promoter system. To precisely control our system, we determined the rate constants for this model using an omniscient deep neural network that can directly map rate constant combinations to time-dependent state joint distributions. By adjusting the activation rates through light intensity and degradation rates via N-terminal mutagenesis, we illustrate that our optically-controlled perturbation can effectively modulate molecular expression levels as well as noise. Our results highlight the potential of employing an optically-controlled gene perturbation system as a noise-controlled stimulus source. This approach, when combined with the analytical capabilities of a sophisticated deep neural network, enables the accurate estimation of rate constants from observational data in a broad range of biochemical reaction networks.
1510.02989
Emmanuel Quansah Mr
Aladeen Basheer, Emmanuel Quansah, Schuman Bhowmick and Rana D. Parshad
Prey cannibalism alters the dynamics of Holling-Tanner type predator-prey models
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cannibalism, which is the act of killing and at least partial consumption of conspecifics, is ubiquitous in nature. Mathematical models have considered cannibalism in the predator primarily, and show that predator cannibalism in two species ODE models provides a strong stabilizing effect. There is strong ecological evidence that cannibalism exists among prey as well, yet this phenomenon has been much less investigated. In the current manuscript, we investigate both the ODE and spatially explicit forms of a Holling-Tanner model, with ratio dependent functional response. We show that cannibalism in the predator provides a stabilizing influence as expected. However, when cannibalism in the prey is considered, we show that it cannot stabilize the unstable interior equilibrium in the ODE case, but can destabilize the stable interior equilibrium. In the spatially explicit case, we show that in certain parameter regimes, prey cannibalism can lead to pattern-forming Turing dynamics, which is an impossibility without it. Lastly, we consider a stochastic prey cannibalism rate, and find that it can alter both spatial patterns and limit cycle dynamics.
[ { "created": "Sat, 10 Oct 2015 23:11:48 GMT", "version": "v1" } ]
2015-10-13
[ [ "Basheer", "Aladeen", "" ], [ "Quansah", "Emmanuel", "" ], [ "Bhowmick", "Schuman", "" ], [ "Parshad", "Rana D.", "" ] ]
Cannibalism, which is the act of killing and at least partial consumption of conspecifics, is ubiquitous in nature. Mathematical models have considered cannibalism in the predator primarily, and show that predator cannibalism in two species ODE models provides a strong stabilizing effect. There is strong ecological evidence that cannibalism exists among prey as well, yet this phenomenon has been much less investigated. In the current manuscript, we investigate both the ODE and spatially explicit forms of a Holling-Tanner model, with ratio dependent functional response. We show that cannibalism in the predator provides a stabilizing influence as expected. However, when cannibalism in the prey is considered, we show that it cannot stabilize the unstable interior equilibrium in the ODE case, but can destabilize the stable interior equilibrium. In the spatially explicit case, we show that in certain parameter regimes, prey cannibalism can lead to pattern-forming Turing dynamics, which is an impossibility without it. Lastly, we consider a stochastic prey cannibalism rate, and find that it can alter both spatial patterns and limit cycle dynamics.
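For readers who want to experiment with the ODE case, here is a hedged integration sketch. The ratio-dependent Holling-Tanner skeleton follows the description above, but the specific prey-cannibalism term (a saturating loss with fractional energy return) and all parameter values are placeholder assumptions rather than the paper's exact formulation.

```python
# Holling-Tanner model with ratio-dependent functional response plus an
# assumed prey-cannibalism term; integrate and inspect late-time behaviour.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, c=0.5, e=0.3, d=0.2, a=0.6, s=0.25, h=1.5):
    u, v = y
    pred = u * v / (u + a * v)                 # ratio-dependent response
    cann = c * u * u / (u + d)                 # prey cannibalism pressure
    du = u * (1 - u) - pred - cann + e * cann  # loss minus partial gain
    dv = s * v * (1 - h * v / (u + 1e-9))      # Holling-Tanner predator
    return [du, dv]

sol = solve_ivp(rhs, (0.0, 400.0), [0.4, 0.2], max_step=0.5)
u, v = sol.y
print(f"late-time prey range: [{u[-200:].min():.3f}, {u[-200:].max():.3f}] "
      f"(a wide range suggests a limit cycle)")
```

Sweeping c while watching the late-time range of u is a simple way to probe the destabilizing role that prey cannibalism is reported to play.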
2406.09989
Zhichao Liang
Zhichao Liang, Guanyi Zhao, Yinuo Zhang, Weiting Sun, Jingzhe Lin, Jialin Wang, Quanying Liu
Suppressing seizure via optimal electrical stimulation to the hub of epileptic brain network
null
null
null
null
q-bio.NC cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Electrical stimulation of the seizure onset zone (SOZ) is an effective approach to seizure suppression. Recently, the network propagation mechanisms of seizure dynamics have gained widespread attention. Compared with direct stimulation of the SOZ, other brain network-level approaches that can effectively suppress epileptic seizures remain under-explored. In this study, we introduce a platform equipped with a system identification module and a control strategy module to validate the effectiveness of the hub of the epileptic brain network in suppressing seizures. The identified surrogate dynamics show high predictive performance in reconstructing neural dynamics, which enables the model predictive framework to achieve accurate neural stimulation. Electrical stimulation of the hub of the epileptic brain network suppresses seizure dynamics as effectively as direct stimulation of the SOZ. Underpinned by network control theory, our platform offers a general tool for the validation of neural stimulation.
[ { "created": "Fri, 14 Jun 2024 12:54:12 GMT", "version": "v1" } ]
2024-06-17
[ [ "Liang", "Zhichao", "" ], [ "Zhao", "Guanyi", "" ], [ "Zhang", "Yinuo", "" ], [ "Sun", "Weiting", "" ], [ "Lin", "Jingzhe", "" ], [ "Wang", "Jialin", "" ], [ "Liu", "Quanying", "" ] ]
Electrical stimulation of the seizure onset zone (SOZ) is an effective approach to seizure suppression. Recently, the network propagation mechanisms of seizure dynamics have gained widespread attention. Compared with direct stimulation of the SOZ, other brain network-level approaches that can effectively suppress epileptic seizures remain under-explored. In this study, we introduce a platform equipped with a system identification module and a control strategy module to validate the effectiveness of the hub of the epileptic brain network in suppressing seizures. The identified surrogate dynamics show high predictive performance in reconstructing neural dynamics, which enables the model predictive framework to achieve accurate neural stimulation. Electrical stimulation of the hub of the epileptic brain network suppresses seizure dynamics as effectively as direct stimulation of the SOZ. Underpinned by network control theory, our platform offers a general tool for the validation of neural stimulation.
q-bio/0404030
Paul Higgs
Bin Tang, Philippe Boisvert and Paul G. Higgs
Comparison of tRNA and rRNA Phylogenies in Proteobacteria: Implications for the Frequency of Horizontal Gene Transfer
null
null
null
null
q-bio.PE
null
The current picture of bacterial evolution is based largely on studies of 16S rRNA. However, this is just one gene. It is known that horizontal gene transfer can occur between bacterial species, although the frequency and implications of this are not fully understood. If horizontal transfer were frequent, there would be no single evolutionary tree for bacteria because each gene would follow a different tree. We carried out phylogenetic analyses of rRNA and tRNA genes from Proteobacteria (a diverse group for which many complete genome sequences are available) using RNA-specific phylogenetic methods that account for the conservation of the secondary structure. We compared trees for 16S rRNA and 23S rRNA with those derived from concatenated alignments of 29 tRNA genes that are found in all the genomes studied. The tRNA genes are scattered throughout the genomes, and would not follow the same evolutionary history if horizontal transfer were frequent. Nevertheless, the tRNA tree is consistent with the rRNA tree in most respects. Minor differences can almost all be attributed to uncertainty or unreliability of the phylogenetic method. We therefore conclude that tRNA genes give a coherent picture of the phylogeny of the organisms, and that horizontal transfer of tRNAs is too rare to obscure the signal of the organismal tree. Some tRNA genes are not present in all the genomes. We discuss possible explanations for the observed patterns of presence and absence of genes: these involve gene deletion, gene duplication, and mutations in the tRNA anticodons.
[ { "created": "Fri, 23 Apr 2004 14:08:34 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tang", "Bin", "" ], [ "Boisvert", "Philippe", "" ], [ "Higgs", "Paul G.", "" ] ]
The current picture of bacterial evolution is based largely on studies of 16S rRNA. However, this is just one gene. It is known that horizontal gene transfer can occur between bacterial species, although the frequency and implications of this are not fully understood. If horizontal transfer were frequent, there would be no single evolutionary tree for bacteria because each gene would follow a different tree. We carried out phylogenetic analyses of rRNA and tRNA genes from Proteobacteria (a diverse group for which many complete genome sequences are available) using RNA-specific phylogenetic methods that account for the conservation of the secondary structure. We compared trees for 16S rRNA and 23S rRNA with those derived from concatenated alignments of 29 tRNA genes that are found in all the genomes studied. The tRNA genes are scattered throughout the genomes, and would not follow the same evolutionary history if horizontal transfer were frequent. Nevertheless, the tRNA tree is consistent with the rRNA tree in most respects. Minor differences can almost all be attributed to uncertainty or unreliability of the phylogenetic method. We therefore conclude that tRNA genes give a coherent picture of the phylogeny of the organisms, and that horizontal transfer of tRNAs is too rare to obscure the signal of the organismal tree. Some tRNA genes are not present in all the genomes. We discuss possible explanations for the observed patterns of presence and absence of genes: these involve gene deletion, gene duplication, and mutations in the tRNA anticodons.
1912.10302
Polly Y. Yu
Bal\'azs Boros, Gheorghe Craciun, Polly Y. Yu
Weakly reversible mass-action systems with infinitely many positive steady states
null
SIAM Journal on Applied Mathematics, 80(4):1936-1946, 2020
10.1137/19M1303034
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that weakly reversible mass-action systems can have a continuum of positive steady states, coming from the zeroes of a multivariate polynomial. Moreover, the same is true of systems whose underlying reaction network is reversible and has a single connected component. In our construction, we relate operations on the reaction network to the multivariate polynomial occurring as a common factor in the system of differential equations.
[ { "created": "Sat, 21 Dec 2019 17:31:27 GMT", "version": "v1" }, { "created": "Thu, 10 Sep 2020 16:27:09 GMT", "version": "v2" } ]
2022-09-14
[ [ "Boros", "Balázs", "" ], [ "Craciun", "Gheorghe", "" ], [ "Yu", "Polly Y.", "" ] ]
We show that weakly reversible mass-action systems can have a continuum of positive steady states, coming from the zeroes of a multivariate polynomial. Moreover, the same is true of systems whose underlying reaction network is reversible and has a single connected component. In our construction, we relate operations on the reaction network to the multivariate polynomial occurring as a common factor in the system of differential equations.
1908.00572
Guo-Wei Wei
Rundong Zhao, Menglun Wang, Yiying Tong and Guo-Wei Wei
The de Rham-Hodge analysis and modeling of biomolecules
13 figures, one table
null
null
null
q-bio.BM math.AT math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed a trend in which advanced mathematical tools, such as algebraic topology, differential geometry, graph theory, and partial differential equations, have been developed for describing biological macromolecules. These tools have considerably strengthened our ability to understand the molecular mechanism of macromolecular function, dynamics and transport from their structures. However, currently, there is no unified mathematical theory to analyze, describe and characterize biological macromolecular geometry, topology, flexibility and natural mode at a variety of scales. We introduce the de Rham-Hodge theory, a landmark of 20th-century mathematics, as a unified paradigm for analyzing biological macromolecular geometry and algebraic topology, for predicting macromolecular flexibility, and for modeling macromolecular natural modes at a variety of scales. In this paradigm, macromolecular geometric characteristics and topological invariants are revealed by de Rham-Hodge spectral analysis. By using the Helmholtz-Hodge decomposition, every macromolecular vector field is split into orthogonal divergence-free, curl-free, and harmonic components with a distinct physical interpretation. We organize the eigenvalues and eigenvectors of the 0-form Laplace-de Rham operator into one of the most accurate protein B-factor predictors. By combining the 1-form Laplace-de Rham operator and the Helfrich-type curvature energy, we predict the natural modes of both X-ray protein structures and cryo-EM maps. We construct accurate and efficient three-dimensional discrete exterior calculus algorithms for the aforementioned modeling and analysis of biological macromolecules. Using extensive experiments, we validate that the proposed paradigm is one of the most versatile and powerful tools for biological macromolecular studies.
[ { "created": "Thu, 1 Aug 2019 18:37:13 GMT", "version": "v1" } ]
2019-08-05
[ [ "Zhao", "Rundong", "" ], [ "Wang", "Menglun", "" ], [ "Tong", "Yiying", "" ], [ "Wei", "Guo-Wei", "" ] ]
Recent years have witnessed a trend in which advanced mathematical tools, such as algebraic topology, differential geometry, graph theory, and partial differential equations, have been developed for describing biological macromolecules. These tools have considerably strengthened our ability to understand the molecular mechanism of macromolecular function, dynamics and transport from their structures. However, currently, there is no unified mathematical theory to analyze, describe and characterize biological macromolecular geometry, topology, flexibility and natural mode at a variety of scales. We introduce the de Rham-Hodge theory, a landmark of 20th-century mathematics, as a unified paradigm for analyzing biological macromolecular geometry and algebraic topology, for predicting macromolecular flexibility, and for modeling macromolecular natural modes at a variety of scales. In this paradigm, macromolecular geometric characteristics and topological invariants are revealed by de Rham-Hodge spectral analysis. By using the Helmholtz-Hodge decomposition, every macromolecular vector field is split into orthogonal divergence-free, curl-free, and harmonic components with a distinct physical interpretation. We organize the eigenvalues and eigenvectors of the 0-form Laplace-de Rham operator into one of the most accurate protein B-factor predictors. By combining the 1-form Laplace-de Rham operator and the Helfrich-type curvature energy, we predict the natural modes of both X-ray protein structures and cryo-EM maps. We construct accurate and efficient three-dimensional discrete exterior calculus algorithms for the aforementioned modeling and analysis of biological macromolecules. Using extensive experiments, we validate that the proposed paradigm is one of the most versatile and powerful tools for biological macromolecular studies.
1504.03832
Laura Hindersin
Laura Hindersin and Arne Traulsen
Most undirected random graphs are amplifiers of selection for Birth-death dynamics, but suppressors of selection for death-Birth dynamics
null
Hindersin, L. & Traulsen A. PLoS Comput. Biol. 2015; 11(11):e1004437
10.1371/journal.pcbi.1004437
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
[ { "created": "Wed, 15 Apr 2015 09:14:44 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2016 14:52:53 GMT", "version": "v2" } ]
2016-10-27
[ [ "Hindersin", "Laura", "" ], [ "Traulsen", "Arne", "" ] ]
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
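Birth-death updating on a graph is easy to simulate directly; the sketch below estimates a mutant's fixation probability on a small random graph and compares it with the well-mixed Moran baseline. The Erdos-Renyi-plus-cycle ensemble and all parameters are illustrative choices.

```python
# Birth-death updating on an undirected random graph: estimate the fixation
# probability of a single mutant of fitness r and compare it against the
# well-mixed Moran prediction (1 - 1/r) / (1 - r**-n).
import numpy as np

def random_graph(n, p, rng):
    """Adjacency lists of an Erdos-Renyi graph plus a cycle (connectivity)."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        adj[i].add((i + 1) % n); adj[(i + 1) % n].add(i)
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j); adj[j].add(i)
    return [list(s) for s in adj]

def fixation_prob(adj, r=1.1, runs=1000, seed=0):
    rng, n = np.random.default_rng(seed), len(adj)
    fixed = 0
    for _ in range(runs):
        mutant = np.zeros(n, dtype=bool)
        mutant[rng.integers(n)] = True           # one random initial mutant
        while 0 < mutant.sum() < n:
            fit = np.where(mutant, r, 1.0)
            i = rng.choice(n, p=fit / fit.sum()) # Birth: fitness-weighted
            j = adj[i][rng.integers(len(adj[i]))]  # death: random neighbor
            mutant[j] = mutant[i]
        fixed += mutant.all()
    return fixed / runs

rng = np.random.default_rng(1)
adj = random_graph(30, 0.15, rng)
r = 1.1
moran = (1 - 1 / r) / (1 - r ** -30)             # well-mixed baseline
print(f"graph: {fixation_prob(adj, r):.3f}  vs  Moran: {moran:.3f}")
```

Swapping the order of the two steps (pick a random individual to die, then let its neighbors compete fitness-weighted for the slot) gives the death-Birth variant, which is the only change needed to probe the suppressor behavior the abstract reports.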
2009.04027
Sophia David
Sophia U. David (1), Sophie E. Loman (1), Christopher W. Lynn (1 and 2), Ann S. Blevins (1), Danielle S. Bassett (1-6) ((1) Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, USA, (2) Department of Physics & Astronomy, College of Arts & Sciences University of Pennsylvania, Philadelphia, USA, (3) Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, USA, (4) Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA, (5) Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA, (6) Santa Fe Institute, Santa Fe, USA)
How We Learn About our Networked World
11 pages, 3 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When presented with information of any type, from music to language to mathematics, the human mind subconsciously arranges it into a network. A network puts pieces of information like musical notes, syllables or mathematical concepts into context by linking them together. These networks help our minds organize information and anticipate what is coming. Here we present two questions about network building. 1) Can humans more easily learn some types of networks than others? 2) Do humans find some links between ideas more surprising than others? The answer to both questions is "Yes," and we explain why. The findings provide much-needed insight into the ways that humans learn about the networked world around them. Moreover, the study paves the way for future efforts seeking to optimize how information is presented to accelerate human learning.
[ { "created": "Tue, 8 Sep 2020 23:17:54 GMT", "version": "v1" } ]
2020-09-10
[ [ "David", "Sophia U.", "", "1 and\n 2" ], [ "Loman", "Sophie E.", "", "1 and\n 2" ], [ "Lynn", "Christopher W.", "", "1 and\n 2" ], [ "Blevins", "Ann S.", "", "1-6" ], [ "Bassett", "Danielle S.", "", "1-6" ] ]
When presented with information of any type, from music to language to mathematics, the human mind subconsciously arranges it into a network. A network puts pieces of information like musical notes, syllables or mathematical concepts into context by linking them together. These networks help our minds organize information and anticipate what is coming. Here we present two questions about network building. 1) Can humans more easily learn some types of networks than others? 2) Do humans find some links between ideas more surprising than others? The answer to both questions is "Yes," and we explain why. The findings provide much-needed insight into the ways that humans learn about the networked world around them. Moreover, the study paves the way for future efforts seeking to optimize how information is presented to accelerate human learning.
1701.01975
Bar{\i}\c{s} Ekim
Bar{\i}\c{s} Ekim
A novel entropy-based hierarchical clustering framework for ultrafast protein structure search and alignment
16 pages, 13 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification and alignment of three-dimensional folding of proteins may yield useful information about relationships too remote to be detected by conventional methods, such as sequence comparison, and may potentially lead to prediction of patterns and motifs in mutual structural fragments. With the exponential increase of structural proteomics data, methods whose cost grows with the size of the data lose efficiency. Hence, new methods that reduce the computational expense of this problem should be developed. We present a novel framework through which we are able to find and align protein structure neighbors via hierarchical clustering and entropy-based query search, and present a web-based protein database search and alignment tool to demonstrate the applicability of our approach. The resulting method replicates the results of the current gold standard with a minimal loss in sensitivity in a significantly shorter amount of time, while improving upon the existing web workspace for protein structure comparison with a customized and dynamic web-based environment. Our tool serves as both a practical means of protein structure comparison and a demonstration of the value of heuristics in proteomics.
[ { "created": "Sun, 8 Jan 2017 15:51:07 GMT", "version": "v1" } ]
2017-01-10
[ [ "Ekim", "Barış", "" ] ]
Identification and alignment of three-dimensional folding of proteins may yield useful information about relationships too remote to be detected by conventional methods, such as sequence comparison, and may potentially lead to prediction of patterns and motifs in mutual structural fragments. With the exponential increase of structural proteomics data, methods whose cost grows with the size of the data lose efficiency. Hence, new methods that reduce the computational expense of this problem should be developed. We present a novel framework through which we are able to find and align protein structure neighbors via hierarchical clustering and entropy-based query search, and present a web-based protein database search and alignment tool to demonstrate the applicability of our approach. The resulting method replicates the results of the current gold standard with a minimal loss in sensitivity in a significantly shorter amount of time, while improving upon the existing web workspace for protein structure comparison with a customized and dynamic web-based environment. Our tool serves as both a practical means of protein structure comparison and a demonstration of the value of heuristics in proteomics.
1909.07508
Changbong Hyeon
D. Thirumalai and George H. Lorimer and Changbong Hyeon
Iterative Annealing Mechanism Explains the Functions of the GroEL and RNA Chaperones
39 pages, 11 figures
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular chaperones are ATP-consuming biological machines, which facilitate the folding of proteins and RNA molecules that are kinetically trapped in misfolded states for long times. Unassisted folding occurs by the kinetic partitioning mechanism, according to which folding to the native state (with low probability) and misfolding to one of the many metastable states (with high probability) occur rapidly on similar time scales. GroEL is an all-purpose stochastic machine that assists misfolded substrate proteins (SPs) to fold. The RNA chaperones (CYT-19) help the folding of ribozymes that readily misfold. GroEL does not interact with the folded proteins but CYT-19 disrupts both the folded and misfolded ribozymes. Despite this major difference, the Iterative Annealing Mechanism (IAM) quantitatively explains all the available experimental data for assisted folding of proteins and ribozymes. Driven by ATP binding and hydrolysis and GroES binding, GroEL undergoes a catalytic cycle during which it samples three allosteric states, referred to as T (apo), R (ATP bound), and R'' (ADP bound). In accord with the IAM predictions, analyses of the experimental data show that the efficiency of the GroEL-GroES machinery and mutants is determined by the resetting rate $k_{R''\rightarrow T}$, which is largest for the wild type GroEL. Generalized IAM accurately predicts the folding kinetics of Tetrahymena ribozyme and its variants. Chaperones maximize the product of the folding rate and the steady-state yield of the native state by driving the substrates out of equilibrium. Neither the absolute yield nor the folding rate is optimized.
[ { "created": "Mon, 16 Sep 2019 22:42:11 GMT", "version": "v1" } ]
2019-09-18
[ [ "Thirumalai", "D.", "" ], [ "Lorimer", "George H.", "" ], [ "Hyeon", "Changbong", "" ] ]
Molecular chaperones are ATP-consuming biological machines, which facilitate the folding of proteins and RNA molecules that are kinetically trapped in misfolded states for long times. Unassisted folding occurs by the kinetic partitioning mechanism, according to which folding to the native state (with low probability) and misfolding to one of the many metastable states (with high probability) occur rapidly on similar time scales. GroEL is an all-purpose stochastic machine that assists misfolded substrate proteins (SPs) to fold. The RNA chaperones (CYT-19) help the folding of ribozymes that readily misfold. GroEL does not interact with the folded proteins but CYT-19 disrupts both the folded and misfolded ribozymes. Despite this major difference, the Iterative Annealing Mechanism (IAM) quantitatively explains all the available experimental data for assisted folding of proteins and ribozymes. Driven by ATP binding and hydrolysis and GroES binding, GroEL undergoes a catalytic cycle during which it samples three allosteric states, referred to as T (apo), R (ATP bound), and R'' (ADP bound). In accord with the IAM predictions, analyses of the experimental data show that the efficiency of the GroEL-GroES machinery and mutants is determined by the resetting rate $k_{R''\rightarrow T}$, which is largest for the wild type GroEL. Generalized IAM accurately predicts the folding kinetics of Tetrahymena ribozyme and its variants. Chaperones maximize the product of the folding rate and the steady-state yield of the native state by driving the substrates out of equilibrium. Neither the absolute yield nor the folding rate is optimized.
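A schematic way to see why iterative annealing works, simplified well beyond the kinetic model in the paper and offered only as an illustration: if each chaperone cycle gives a trapped substrate an independent chance $\Phi$ to partition to the native basin (the kinetic partitioning factor), then the native yield after $n$ cycles is

```latex
\Psi(n) = 1 - (1 - \Phi)^{n},
```

so even a small per-cycle partition factor produces a large native fraction after enough iterations, and the resetting rate $k_{R''\rightarrow T}$ controls how many cycles fit into a given time.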
1102.3793
Arni S.R. Srinivasa Rao
Arni S.R. Srinivasa Rao and Thomas Kurien
Theoretical Framework and Empirical Modeling for Time Required to Vaccinate a Population in an Epidemic
14 pages, 1 Table, 5 Figures (A preliminary draft)
Handbook of Statistics (2017), Volume 37
10.1016/bs.host.2017.07.005
null
q-bio.QM physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper describes a method to understand the time required to vaccinate a total population, as well as its subpopulations, against a virus. As a demonstration, a model-based estimate of the time required to vaccinate against H1N1 in India, given its administrative difficulties, is provided. We have proved novel theorems for the time functions defined in the paper. Such results are useful in planning for future epidemics. The numbers of days required to vaccinate the entire high-risk population in three types of subpopulation (villages, tehsils and towns) are noted to be 84, 89 and 88, respectively. There exist state-wise disparities in health infrastructure and capacity to deliver vaccines, and hence national estimates need to be re-evaluated based on the performance of individual states.
[ { "created": "Fri, 18 Feb 2011 09:41:30 GMT", "version": "v1" } ]
2021-06-15
[ [ "Rao", "Arni S. R. Srinivasa", "" ], [ "Kurien", "Thomas", "" ] ]
The paper describes a method to understand the time required to vaccinate a total population, as well as its subpopulations, against a virus. As a demonstration, a model-based estimate of the time required to vaccinate against H1N1 in India, given its administrative difficulties, is provided. We have proved novel theorems for the time functions defined in the paper. Such results are useful in planning for future epidemics. The numbers of days required to vaccinate the entire high-risk population in three types of subpopulation (villages, tehsils and towns) are noted to be 84, 89 and 88, respectively. There exist state-wise disparities in health infrastructure and capacity to deliver vaccines, and hence national estimates need to be re-evaluated based on the performance of individual states.
1801.09831
Sanjana Gupta
Sanjana Gupta, Liam Hainsworth, Justin S. Hogg, Robin E. C. Lee, and James R. Faeder
Evaluation of Parallel Tempering to Accelerate Bayesian Parameter Estimation in Systems Biology
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models of biological systems often have many unknown parameters that must be determined in order for model behavior to match experimental observations. Commonly-used methods for parameter estimation that return point estimates of the best-fit parameters are insufficient when models are high dimensional and under-constrained. As a result, Bayesian methods, which treat model parameters as random variables and attempt to estimate their probability distributions given data, have become popular in systems biology. Bayesian parameter estimation often relies on Markov Chain Monte Carlo (MCMC) methods to sample model parameter distributions, but the slow convergence of MCMC sampling can be a major bottleneck. One approach to improving performance is parallel tempering (PT), a physics-based method that uses swapping between multiple Markov chains run in parallel at different temperatures to accelerate sampling. The temperature of a Markov chain determines the probability of accepting an unfavorable move, so swapping with higher-temperature chains enables the sampling chain to escape from local minima. In this work we compared the MCMC performance of PT and the commonly-used Metropolis-Hastings (MH) algorithm on six biological models of varying complexity. We found that for simpler models PT accelerated convergence and sampling, and that for more complex models, PT often converged in cases where MH became trapped in non-optimal local minima. We also developed a freely-available MATLAB package for Bayesian parameter estimation called PTempEst (http://github.com/RuleWorld/ptempest), which is closely integrated with the popular BioNetGen software for rule-based modeling of biological systems.
[ { "created": "Tue, 30 Jan 2018 02:45:59 GMT", "version": "v1" } ]
2018-01-31
[ [ "Gupta", "Sanjana", "" ], [ "Hainsworth", "Liam", "" ], [ "Hogg", "Justin S.", "" ], [ "Lee", "Robin E. C.", "" ], [ "Faeder", "James R.", "" ] ]
Models of biological systems often have many unknown parameters that must be determined in order for model behavior to match experimental observations. Commonly-used methods for parameter estimation that return point estimates of the best-fit parameters are insufficient when models are high dimensional and under-constrained. As a result, Bayesian methods, which treat model parameters as random variables and attempt to estimate their probability distributions given data, have become popular in systems biology. Bayesian parameter estimation often relies on Markov Chain Monte Carlo (MCMC) methods to sample model parameter distributions, but the slow convergence of MCMC sampling can be a major bottleneck. One approach to improving performance is parallel tempering (PT), a physics-based method that uses swapping between multiple Markov chains run in parallel at different temperatures to accelerate sampling. The temperature of a Markov chain determines the probability of accepting an unfavorable move, so swapping with higher-temperature chains enables the sampling chain to escape from local minima. In this work we compared the MCMC performance of PT and the commonly-used Metropolis-Hastings (MH) algorithm on six biological models of varying complexity. We found that for simpler models PT accelerated convergence and sampling, and that for more complex models, PT often converged in cases where MH became trapped in non-optimal local minima. We also developed a freely-available MATLAB package for Bayesian parameter estimation called PTempEst (http://github.com/RuleWorld/ptempest), which is closely integrated with the popular BioNetGen software for rule-based modeling of biological systems.
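The swap move that distinguishes PT from plain MH is compact enough to sketch. The following is a minimal, generic parallel-tempering loop for a user-supplied log-posterior; it is not the PTempEst implementation, and the function names, temperature ladder, and Gaussian random-walk proposal are illustrative choices.

```python
import numpy as np

def parallel_tempering(log_post, x0, temps=(1.0, 2.0, 4.0, 8.0),
                       n_steps=5000, step_size=0.1, seed=0):
    """Minimal PT sampler: one Metropolis chain per temperature plus swap moves."""
    rng = np.random.default_rng(seed)
    temps = np.asarray(temps, dtype=float)
    chains = [np.array(x0, dtype=float) for _ in temps]
    logps = [log_post(c) for c in chains]
    samples = []                                   # keep only the T = 1 chain
    for _ in range(n_steps):
        # within-chain Metropolis updates against the tempered target log_post / T
        for i, T in enumerate(temps):
            prop = chains[i] + step_size * np.sqrt(T) * rng.standard_normal(chains[i].shape)
            lp = log_post(prop)
            if np.log(rng.random()) < (lp - logps[i]) / T:
                chains[i], logps[i] = prop, lp
        # attempt one swap between a random pair of adjacent temperatures
        i = rng.integers(len(temps) - 1)
        delta = (logps[i] - logps[i + 1]) * (1.0 / temps[i + 1] - 1.0 / temps[i])
        if np.log(rng.random()) < delta:
            chains[i], chains[i + 1] = chains[i + 1], chains[i]
            logps[i], logps[i + 1] = logps[i + 1], logps[i]
        samples.append(chains[0].copy())
    return np.array(samples)

if __name__ == "__main__":
    # toy usage: a bimodal 1-D target that traps a single MH chain in one mode
    log_post = lambda x: np.logaddexp(-0.5 * ((x[0] - 3) / 0.5) ** 2,
                                      -0.5 * ((x[0] + 3) / 0.5) ** 2)
    draws = parallel_tempering(log_post, x0=[0.0])
    print(draws.mean(), draws.std())
```

The swap acceptance uses the standard ratio $\exp[(\beta_i-\beta_j)(L(x_j)-L(x_i))]$ with $\beta = 1/T$, so states that fare better at the colder temperature tend to migrate down the ladder.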
2012.05454
Adrian Joseph Alva
Adrian Joseph Alva and Harjinder Singh
A minimal model for synaptic integration in simple neurons
25 pages, 8 figures
Physica D 426 (2021) 132988
10.1016/j.physd.2021.132988
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synaptic integration is a prominent aspect of neuronal information processing. The detailed mechanisms that modulate synaptic inputs determine the computational properties of any given neuron. We study a simple model for the summation of excitatory inputs from synapses and illustrate its use by characterizing some functional properties of postsynaptic neurons. In this regard, we study the response of postsynaptic neurons as defined by the model to two well-known noise-driven processes: stochastic resonance and coherence resonance. The model requires a small number of parameters and is especially useful to isolate the role of integration mechanisms that rely on summation of inputs with little dendritic processing.
[ { "created": "Thu, 10 Dec 2020 05:03:07 GMT", "version": "v1" }, { "created": "Wed, 17 Feb 2021 13:54:48 GMT", "version": "v2" }, { "created": "Wed, 14 Jul 2021 06:14:20 GMT", "version": "v3" } ]
2021-07-15
[ [ "Alva", "Adrian Joseph", "" ], [ "Singh", "Harjinder", "" ] ]
Synaptic integration is a prominent aspect of neuronal information processing. The detailed mechanisms that modulate synaptic inputs determine the computational properties of any given neuron. We study a simple model for the summation of excitatory inputs from synapses and illustrate its use by characterizing some functional properties of postsynaptic neurons. In this regard, we study the response of postsynaptic neurons as defined by the model to two well-known noise-driven processes: stochastic resonance and coherence resonance. The model requires a small number of parameters and is especially useful to isolate the role of integration mechanisms that rely on summation of inputs with little dendritic processing.
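As a generic illustration of input summation with little dendritic processing, and explicitly not the model constructed in the paper, one can drive a leaky integrator with independent Poisson-like excitatory arrivals and count threshold crossings; all parameter values below are arbitrary.

```python
import numpy as np

def summing_neuron(rate_hz=20.0, n_syn=50, w=0.03, tau_ms=20.0,
                   v_th=1.0, t_ms=2000.0, dt_ms=0.1, seed=0):
    """Leaky integrator summing excitatory synaptic pulses; returns the output spike count."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    p_spike = rate_hz * dt_ms / 1000.0           # per-synapse arrival probability per step
    for _ in range(int(t_ms / dt_ms)):
        arrivals = rng.random(n_syn) < p_spike   # independent Poisson-like inputs
        v += dt_ms * (-v / tau_ms) + w * arrivals.sum()
        if v >= v_th:                            # threshold crossing -> output spike, reset
            spikes += 1
            v = 0.0
    return spikes

if __name__ == "__main__":
    for rate in (5.0, 20.0, 50.0):
        print(rate, summing_neuron(rate_hz=rate))
```

Sweeping the input rate, as in the main block, shows the output rate rising nonlinearly with the summed drive, which is the kind of input-output property such minimal models are used to characterize.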
1706.08422
Edmund Barter
Edmund Barter and Thilo Gross
Spatial effects in meta-food-webs
15 pages, 8 figures
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In ecology it is widely recognised that many landscapes comprise a network of discrete patches of habitat. The species that inhabit the patches interact with each other through a foodweb, the network of feeding interactions. The meta-foodweb model proposed by Pillai et al. combines the feeding relationships at each patch with the dispersal of species between patches, such that the whole system is represented by a network of networks. Previous work on meta-foodwebs has focussed on landscape networks that do not have an explicit spatial embedding, but in real landscapes the patches are usually distributed in space. Here we compare the dispersal of a meta-foodweb on Erdős-Rényi networks, which do not have a spatial embedding, and random geometric networks, which do have a spatial embedding. We found that local structure and large network distances in spatially embedded networks lead to meso-scale patterns of patch occupation by both specialist and omnivorous species. In particular, we found that spatial separations make the coexistence of competing species more likely. Our results highlight the effects of spatial embeddings for meta-foodweb models, and the need for new analytical approaches to them.
[ { "created": "Mon, 26 Jun 2017 14:56:01 GMT", "version": "v1" }, { "created": "Thu, 10 Aug 2017 09:47:35 GMT", "version": "v2" } ]
2017-08-11
[ [ "Barter", "Edmund", "" ], [ "Gross", "Thilo", "" ] ]
In ecology it is widely recognised that many landscapes comprise a network of discrete patches of habitat. The species that inhabit the patches interact with each other through a foodweb, the network of feeding interactions. The meta-foodweb model proposed by Pillai et al. combines the feeding relationships at each patch with the dispersal of species between patches, such that the whole system is represented by a network of networks. Previous work on meta-foodwebs has focussed on landscape networks that do not have an explicit spatial embedding, but in real landscapes the patches are usually distributed in space. Here we compare the dispersal of a meta-foodweb on Erdős-Rényi networks, which do not have a spatial embedding, and random geometric networks, which do have a spatial embedding. We found that local structure and large network distances in spatially embedded networks lead to meso-scale patterns of patch occupation by both specialist and omnivorous species. In particular, we found that spatial separations make the coexistence of competing species more likely. Our results highlight the effects of spatial embeddings for meta-foodweb models, and the need for new analytical approaches to them.
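The structural contrast drawn above can be made concrete with networkx (an assumption; the paper does not prescribe this tooling): generate an Erdős-Rényi graph and a random geometric graph with matched size and mean degree and compare their clustering and path lengths.

```python
import math
import networkx as nx

def compare_landscapes(n=200, mean_degree=8, seed=0):
    """Contrast a non-spatial ER graph with a spatially embedded random geometric graph."""
    er = nx.erdos_renyi_graph(n, mean_degree / (n - 1), seed=seed)
    radius = math.sqrt(mean_degree / (math.pi * n))      # expected degree ~ n * pi * r^2 on the unit square
    rgg = nx.random_geometric_graph(n, radius, seed=seed)
    for name, g in (("Erdos-Renyi", er), ("random geometric", rgg)):
        giant = g.subgraph(max(nx.connected_components(g), key=len))
        print(f"{name}: clustering={nx.average_clustering(g):.3f}, "
              f"mean path length={nx.average_shortest_path_length(giant):.2f}")

if __name__ == "__main__":
    compare_landscapes()
```

The random geometric graph typically shows much higher clustering and longer network distances, which is exactly the local structure invoked above to explain meso-scale occupation patterns.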
1009.4480
Adriaan (Ard) A. Louis
Thomas E. Ouldridge, Ard A. Louis, Jonathan P.K. Doye
Structural, mechanical and thermodynamic properties of a coarse-grained DNA model
25 pages, 16 figures
J. Chem. Phys, 134, 085101 (2011)
10.1063/1.3552946
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore in detail the structural, mechanical and thermodynamic properties of a coarse-grained model of DNA similar to that introduced in Thomas E. Ouldridge, Ard A. Louis, Jonathan P.K. Doye, Phys. Rev. Lett. 104 178101 (2010). Effective interactions are used to represent chain connectivity, excluded volume, base stacking and hydrogen bonding, naturally reproducing a range of DNA behaviour. We quantify the relation to experiment of the thermodynamics of single-stranded stacking, duplex hybridization and hairpin formation, as well as structural properties such as the persistence length of single strands and duplexes, and the torsional and stretching stiffness of double helices. We also explore the model's representation of more complex motifs involving dangling ends, bulged bases and internal loops, and the effect of stacking and fraying on the thermodynamics of the duplex formation transition.
[ { "created": "Wed, 22 Sep 2010 21:14:06 GMT", "version": "v1" } ]
2013-06-25
[ [ "Ouldridge", "Thomas E.", "" ], [ "Louis", "Ard A.", "" ], [ "Doye", "Jonathan P. K.", "" ] ]
We explore in detail the structural, mechanical and thermodynamic properties of a coarse-grained model of DNA similar to that introduced in Thomas E. Ouldridge, Ard A. Louis, Jonathan P.K. Doye, Phys. Rev. Lett. 104 178101 (2010). Effective interactions are used to represent chain connectivity, excluded volume, base stacking and hydrogen bonding, naturally reproducing a range of DNA behaviour. We quantify the relation to experiment of the thermodynamics of single-stranded stacking, duplex hybridization and hairpin formation, as well as structural properties such as the persistence length of single strands and duplexes, and the torsional and stretching stiffness of double helices. We also explore the model's representation of more complex motifs involving dangling ends, bulged bases and internal loops, and the effect of stacking and fraying on the thermodynamics of the duplex formation transition.
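Schematically, and without reproducing the actual functional forms or parameters of the model, the potential energy in coarse-grained DNA models of this type is assembled from one term per listed interaction:

```latex
V = \sum_{\langle ij \rangle_{\text{backbone}}} V_{\text{backbone}}
  + \sum_{i<j} \left( V_{\text{excluded}} + V_{\text{stacking}} + V_{\text{HB}} \right),
```

with chain connectivity entering through backbone terms between consecutive nucleotides, and excluded volume, base stacking and hydrogen bonding acting between (appropriately restricted) pairs of interaction sites.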
2110.09204
Andrea De Martino
A.R. Batista-Tomas, Andrea De Martino, Roberto Mulet
Path-integral solution of MacArthur's resource-competition model for large ecosystems with random species-resources couplings
This article may be downloaded for personal use only. Any other use requires prior permission of the author and AIP Publishing. This article appeared in Chaos 31, 103113 (2021) and may be found at https://aip.scitation.org/doi/full/10.1063/5.0046972
Chaos 31, 103113 (2021)
10.1063/5.0046972
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We solve MacArthur's resource-competition model with random species-resource couplings in the `thermodynamic' limit of infinitely many species and resources using dynamical path integrals à la De Dominicis. We analyze how the steady-state picture changes upon modifying several parameters, including the degree of heterogeneity of metabolic strategies (encoding the preferences of species) and of maximal resource levels (carrying capacities), and discuss its stability. Ultimately, the scenario obtained by other approaches is recovered by analyzing an effective one-species-one-resource ecosystem that is fully equivalent to the original multi-species one. The technique used here can be applied to the analysis of other model ecosystems related to the version of MacArthur's model considered here.
[ { "created": "Mon, 18 Oct 2021 11:43:05 GMT", "version": "v1" } ]
2021-10-19
[ [ "Batista-Tomas", "A. R.", "" ], [ "De Martino", "Andrea", "" ], [ "Mulet", "Roberto", "" ] ]
We solve MacArthur's resource-competition model with random species-resource couplings in the `thermodynamic' limit of infinitely many species and resources using dynamical path integrals à la De Dominicis. We analyze how the steady-state picture changes upon modifying several parameters, including the degree of heterogeneity of metabolic strategies (encoding the preferences of species) and of maximal resource levels (carrying capacities), and discuss its stability. Ultimately, the scenario obtained by other approaches is recovered by analyzing an effective one-species-one-resource ecosystem that is fully equivalent to the original multi-species one. The technique used here can be applied to the analysis of other model ecosystems related to the version of MacArthur's model considered here.
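For orientation, a common textbook form of MacArthur's resource-competition model, with species abundances $n_i$, resource levels $R_a$, metabolic strategies $\alpha_{ia}$, maintenance costs $m_i$ and carrying capacities $K_a$ (the paper's conventions and scalings may differ), reads

```latex
\frac{d n_i}{dt} = n_i \Big( \sum_a \alpha_{ia} R_a - m_i \Big), \qquad
\frac{d R_a}{dt} = R_a \big( K_a - R_a \big) - \sum_i n_i\, \alpha_{ia} R_a ,
```

and the `thermodynamic' limit referred to above takes the numbers of species and resources to infinity at a fixed ratio, with the random couplings $\alpha_{ia}$ drawn independently.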