id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.15134 | Jakub K\"ory | J. K\"ory, P. S. Stewart, N. A. Hill, X. Y. Luo, A. Pandolfi | A discrete-to-continuum model for the human cornea with application to
keratoconus | 32 pages, 8 figures | null | null | null | q-bio.QM physics.app-ph physics.bio-ph q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | We introduce a discrete mathematical model for the mechanical behaviour of a
planar slice of human corneal tissue, in equilibrium under the action of
physiological intraocular pressure (IOP). The model considers a regular
(two-dimensional) network of structural elements mimicking a discrete number of
parallel collagen lamellae connected by proteoglycan-based chemical bonds
(crosslinks). Since the thickness of each collagen lamella is small compared to
the overall corneal thickness, we upscale the discrete force balance into a
continuum system of partial differential equations and deduce the corresponding
macroscopic stress tensor and strain energy function for the micro-structured
corneal tissue. We demonstrate that, for physiological values of the IOP, the
predictions of the discrete model converge to those of the continuum model. We
use the continuum model to simulate the progression of the degenerative disease
known as keratoconus, characterized by a localized bulging of the corneal
shell. We assign a spatial distribution of damage (i.e., reduction of the
stiffness) to the mechanical properties of the structural elements and predict
the resulting macroscopic shape of the cornea, showing that a large reduction
in the element stiffness results in substantial corneal thinning and a
significant increase in the curvature of both the anterior and posterior
surfaces.
| [
{
"created": "Sat, 23 Dec 2023 01:54:08 GMT",
"version": "v1"
}
] | 2023-12-27 | [
[
"Köry",
"J.",
""
],
[
"Stewart",
"P. S.",
""
],
[
"Hill",
"N. A.",
""
],
[
"Luo",
"X. Y.",
""
],
[
"Pandolfi",
"A.",
""
]
] | We introduce a discrete mathematical model for the mechanical behaviour of a planar slice of human corneal tissue, in equilibrium under the action of physiological intraocular pressure (IOP). The model considers a regular (two-dimensional) network of structural elements mimicking a discrete number of parallel collagen lamellae connected by proteoglycan-based chemical bonds (crosslinks). Since the thickness of each collagen lamella is small compared to the overall corneal thickness, we upscale the discrete force balance into a continuum system of partial differential equations and deduce the corresponding macroscopic stress tensor and strain energy function for the micro-structured corneal tissue. We demonstrate that, for physiological values of the IOP, the predictions of the discrete model converge to those of the continuum model. We use the continuum model to simulate the progression of the degenerative disease known as keratoconus, characterized by a localized bulging of the corneal shell. We assign a spatial distribution of damage (i.e., reduction of the stiffness) to the mechanical properties of the structural elements and predict the resulting macroscopic shape of the cornea, showing that a large reduction in the element stiffness results in substantial corneal thinning and a significant increase in the curvature of both the anterior and posterior surfaces. |
1908.02532 | Youness Azimzade | Youness Azimzade, Mahdi Sasar, V\'ictor M. P\'erez Garc\'ia | Environmental Disorder Regulation of Invasion and Genetic Loss | null | null | null | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many physical and natural systems, including populations of species,
evolve in habitats with spatial stochastic variations of the individuals'
motility. We study here the effect of those fluctuations on invasion and
genetic loss. A Langevin equation for the \textit{position} and \textit{border}
of the invasion front is obtained. A striking result is that small/large
fluctuations of diffusivity suppress/intensify genetic loss. Our findings
reveal the potential role of environmental fluctuations as a regulating factor
for genetic loss and provide a simple explanation for the regional differences
in the intensity of genetic drift observed during the final stages of human
evolution and in tumor mutational landscapes.
| [
{
"created": "Wed, 7 Aug 2019 11:28:55 GMT",
"version": "v1"
}
] | 2019-08-08 | [
[
"Azimzade",
"Youness",
""
],
[
"Sasar",
"Mahdi",
""
],
[
"García",
"Víctor M. Pérez",
""
]
] | Many physical and natural systems, including populations of species, evolve in habitats with spatial stochastic variations of the individuals' motility. We study here the effect of those fluctuations on invasion and genetic loss. A Langevin equation for the \textit{position} and \textit{border} of the invasion front is obtained. A striking result is that small/large fluctuations of diffusivity suppress/intensify genetic loss. Our findings reveal the potential role of environmental fluctuations as a regulating factor for genetic loss and provide a simple explanation for the regional differences in the intensity of genetic drift observed during the final stages of human evolution and in tumor mutational landscapes. |
2212.14537 | William Marshall | William Marshall, Matteo Grasso, William GP Mayner, Alireza
Zaeemzadeh, Leonardo S Barbosa, Erick Chastain, Graham Findlay, Shuntaro
Sasai, Larissa Albantakis, Giulio Tononi | System Integrated Information | 16 pages, 4 figures | null | 10.3390/e25020334 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Integrated information theory (IIT) starts from consciousness itself and
identifies a set of properties (axioms) that are true of every conceivable
experience. The axioms are translated into a set of postulates about the
substrate of consciousness (called a complex), which are then used to formulate
a mathematical framework for assessing both the quality and quantity of
experience. The explanatory identity proposed by IIT is that an experience is
identical to the cause-effect structure unfolded from a maximally irreducible
substrate (a $\Phi$-structure). In this work we introduce a definition for the
integrated information of a system ($\varphi_s$) that is based on the
existence, intrinsicality, information, and integration postulates of IIT. We
explore how notions of determinism, degeneracy, and fault lines in the
connectivity impact system integrated information. We then demonstrate how the
proposed measure identifies complexes as systems whose $\varphi_s$ is greater
than the $\varphi_s$ of any overlapping candidate systems.
| [
{
"created": "Fri, 30 Dec 2022 03:43:57 GMT",
"version": "v1"
}
] | 2023-03-22 | [
[
"Marshall",
"William",
""
],
[
"Grasso",
"Matteo",
""
],
[
"Mayner",
"William GP",
""
],
[
"Zaeemzadeh",
"Alireza",
""
],
[
"Barbosa",
"Leonardo S",
""
],
[
"Chastain",
"Erick",
""
],
[
"Findlay",
"Graham",
""
],
[
"Sasai",
"Shuntaro",
""
],
[
"Albantakis",
"Larissa",
""
],
[
"Tononi",
"Giulio",
""
]
] | Integrated information theory (IIT) starts from consciousness itself and identifies a set of properties (axioms) that are true of every conceivable experience. The axioms are translated into a set of postulates about the substrate of consciousness (called a complex), which are then used to formulate a mathematical framework for assessing both the quality and quantity of experience. The explanatory identity proposed by IIT is that an experience is identical to the cause-effect structure unfolded from a maximally irreducible substrate (a $\Phi$-structure). In this work we introduce a definition for the integrated information of a system ($\varphi_s$) that is based on the existence, intrinsicality, information, and integration postulates of IIT. We explore how notions of determinism, degeneracy, and fault lines in the connectivity impact system integrated information. We then demonstrate how the proposed measure identifies complexes as systems whose $\varphi_s$ is greater than the $\varphi_s$ of any overlapping candidate systems. |
1904.02610 | Wenping Cui | Wenping Cui, Robert Marsland III and Pankaj Mehta | Diverse communities behave like typical random ecosystems | 24 pages | Phys. Rev. E 104, 034416 (2021) | 10.1103/PhysRevE.104.034416 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In 1972, Robert May triggered a worldwide research program studying
ecological communities using random matrix theory. Yet, it remains unclear if
and when we can treat real communities as random ecosystems. Here, we draw on
recent progress in random matrix theory and statistical physics to extend May's
approach to generalized consumer-resource models. We show that in diverse
ecosystems adding even modest amounts of noise to consumer preferences results
in a transition to "typicality" where macroscopic ecological properties of
communities are indistinguishable from those of random ecosystems, even when
resource preferences have prominent designed structures. We test these ideas
using numerical simulations on a wide variety of ecological models. Our work
offers an explanation for the success of random consumer-resource models in
reproducing experimentally observed ecological patterns in microbial
communities and highlights the difficulty of scaling up bottom-up approaches in
synthetic ecology to diverse communities.
| [
{
"created": "Mon, 1 Apr 2019 23:55:30 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jun 2019 15:55:33 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Mar 2021 02:26:25 GMT",
"version": "v3"
},
{
"created": "Mon, 27 Sep 2021 17:02:59 GMT",
"version": "v4"
}
] | 2021-09-28 | [
[
"Cui",
"Wenping",
""
],
[
"Marsland",
"Robert",
"III"
],
[
"Mehta",
"Pankaj",
""
]
] | In 1972, Robert May triggered a worldwide research program studying ecological communities using random matrix theory. Yet, it remains unclear if and when we can treat real communities as random ecosystems. Here, we draw on recent progress in random matrix theory and statistical physics to extend May's approach to generalized consumer-resource models. We show that in diverse ecosystems adding even modest amounts of noise to consumer preferences results in a transition to "typicality" where macroscopic ecological properties of communities are indistinguishable from those of random ecosystems, even when resource preferences have prominent designed structures. We test these ideas using numerical simulations on a wide variety of ecological models. Our work offers an explanation for the success of random consumer-resource models in reproducing experimentally observed ecological patterns in microbial communities and highlights the difficulty of scaling up bottom-up approaches in synthetic ecology to diverse communities. |
1803.03304 | Ryan Pyle | Ryan Pyle, Robert Rosenbaum | A model of reward-modulated motor learning with parallel cortical and
basal ganglia pathways | null | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many recent studies of the motor system are divided into two distinct
approaches: Those that investigate how motor responses are encoded in cortical
neurons' firing rate dynamics and those that study the learning rules by which
mammals and songbirds develop reliable motor responses. Computationally, the
first approach is encapsulated by reservoir computing models, which can learn
intricate motor tasks and produce internal dynamics strikingly similar to those
of motor cortical neurons, but rely on biologically unrealistic learning rules.
The more realistic learning rules developed by the second approach are often
derived for simplified, discrete tasks in contrast to the intricate dynamics
that characterize real motor responses. We bridge these two approaches to
develop a biologically realistic learning rule for reservoir computing. Our
algorithm learns simulated motor tasks on which previous reservoir computing
algorithms fail, and reproduces experimental findings including those that
relate motor learning to Parkinson's disease and its treatment.
| [
{
"created": "Thu, 8 Mar 2018 21:01:02 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Mar 2019 23:25:16 GMT",
"version": "v2"
}
] | 2019-03-05 | [
[
"Pyle",
"Ryan",
""
],
[
"Rosenbaum",
"Robert",
""
]
] | Many recent studies of the motor system are divided into two distinct approaches: Those that investigate how motor responses are encoded in cortical neurons' firing rate dynamics and those that study the learning rules by which mammals and songbirds develop reliable motor responses. Computationally, the first approach is encapsulated by reservoir computing models, which can learn intricate motor tasks and produce internal dynamics strikingly similar to those of motor cortical neurons, but rely on biologically unrealistic learning rules. The more realistic learning rules developed by the second approach are often derived for simplified, discrete tasks in contrast to the intricate dynamics that characterize real motor responses. We bridge these two approaches to develop a biologically realistic learning rule for reservoir computing. Our algorithm learns simulated motor tasks on which previous reservoir computing algorithms fail, and reproduces experimental findings including those that relate motor learning to Parkinson's disease and its treatment. |
2005.11935 | Min-Liang Wang | Anurag Lal, Ming-Hsien Hu, Pei-Yuan Lee, Min Liang Wang | A Novel Approach of using AR and Smart Surgical Glasses Supported Trauma
Care | 10 pages, 9 Figures, Conference. arXiv admin note: text overlap with
arXiv:1801.01560 by other authors | null | null | null | q-bio.QM cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BACKGROUND: Augmented reality (AR) is gaining popularity in various fields
such as computer gaming and medical education, but it still has few
applications in real surgeries. Orthopedic surgical applications are currently
limited and underdeveloped. - METHODS: The clinical validation was prepared
with the currently available AR equipment and software. A total of 1
Vertebroplasty, 2 ORIF Pelvis fracture, 1 ORIF with PFN for Proximal Femoral
Fracture, 1 CRIF for distal radius fracture and 2 ORIF for Tibia Fracture
cases were performed with fluoroscopy combined with the AR smart surgical
glasses system. - RESULTS: These seven cases were used to evaluate the
benefits of AR surgery. During the AR surgeries, surgeons wearing the smart
surgical glasses needed far fewer eye turns to focus on the monitors. This
paper shows the potential of augmented reality technology for trauma surgery.
| [
{
"created": "Mon, 25 May 2020 06:03:30 GMT",
"version": "v1"
}
] | 2020-05-27 | [
[
"Lal",
"Anurag",
""
],
[
"Hu",
"Ming-Hsien",
""
],
[
"Lee",
"Pei-Yuan",
""
],
[
"Wang",
"Min Liang",
""
]
] | BACKGROUND: Augmented reality (AR) is gaining popularity in various fields such as computer gaming and medical education, but it still has few applications in real surgeries. Orthopedic surgical applications are currently limited and underdeveloped. - METHODS: The clinical validation was prepared with the currently available AR equipment and software. A total of 1 Vertebroplasty, 2 ORIF Pelvis fracture, 1 ORIF with PFN for Proximal Femoral Fracture, 1 CRIF for distal radius fracture and 2 ORIF for Tibia Fracture cases were performed with fluoroscopy combined with the AR smart surgical glasses system. - RESULTS: These seven cases were used to evaluate the benefits of AR surgery. During the AR surgeries, surgeons wearing the smart surgical glasses needed far fewer eye turns to focus on the monitors. This paper shows the potential of augmented reality technology for trauma surgery. |
2202.05889 | Muhammad Ardiyansyah | Muhammad Ardiyansyah, Dimitra Kosta, Jordi Roca-Lacostena | Embeddability of centrosymmetric matrices capturing the double-helix
structure in natural and synthetic DNA | 34 pages, 9 tables | null | null | null | q-bio.PE math.PR | http://creativecommons.org/licenses/by/4.0/ | In this paper, we discuss the embedding problem for centrosymmetric matrices,
which are higher order generalizations of the matrices occurring in Strand
Symmetric Models. These models capture the substitution symmetries arising from
the double-helix structure of DNA. Deciding whether a transition matrix is
embeddable or not enables us to know if the observed substitution probabilities
are consistent with a homogeneous continuous time substitution model, such as
the Kimura models, the Jukes-Cantor model or the general time-reversible model.
On the other hand, the generalization to higher order matrices is motivated by
the setting of synthetic biology, which works with different sizes of genetic
alphabets.
| [
{
"created": "Fri, 11 Feb 2022 20:13:16 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Nov 2022 08:43:13 GMT",
"version": "v2"
}
] | 2022-11-09 | [
[
"Ardiyansyah",
"Muhammad",
""
],
[
"Kosta",
"Dimitra",
""
],
[
"Roca-Lacostena",
"Jordi",
""
]
] | In this paper, we discuss the embedding problem for centrosymmetric matrices, which are higher order generalizations of the matrices occurring in Strand Symmetric Models. These models capture the substitution symmetries arising from the double-helix structure of DNA. Deciding whether a transition matrix is embeddable or not enables us to know if the observed substitution probabilities are consistent with a homogeneous continuous time substitution model, such as the Kimura models, the Jukes-Cantor model or the general time-reversible model. On the other hand, the generalization to higher order matrices is motivated by the setting of synthetic biology, which works with different sizes of genetic alphabets. |
1509.09104 | Alexander Andreychenko | Alexander Andreychenko, Luca Bortolussi, Ramon Grima, Philipp Thomas,
Verena Wolf | Distribution approximations for the chemical master equation: comparison
of the method of moments and the system size expansion | 28 pages, 6 figures | null | null | null | q-bio.QM cond-mat.stat-mech math.NA q-bio.MN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stochastic nature of chemical reactions involving randomly fluctuating
population sizes has led to a growing research interest in discrete-state
stochastic models and their analysis. A widely-used approach is the description
of the temporal evolution of the system in terms of a chemical master equation
(CME). In this paper we study two approaches for approximating the underlying
probability distributions of the CME. The first approach is based on an
integration of the statistical moments and the reconstruction of the
distribution based on the maximum entropy principle. The second approach relies
on an analytical approximation of the probability distribution of the CME using
the system size expansion, considering higher-order terms than the linear noise
approximation. We consider gene expression networks with unimodal and
multimodal protein distributions to compare the accuracy of the two approaches.
We find that both methods provide accurate approximations to the distributions
of the CME while having different benefits and limitations in applications.
| [
{
"created": "Wed, 30 Sep 2015 09:53:38 GMT",
"version": "v1"
}
] | 2015-10-01 | [
[
"Andreychenko",
"Alexander",
""
],
[
"Bortolussi",
"Luca",
""
],
[
"Grima",
"Ramon",
""
],
[
"Thomas",
"Philipp",
""
],
[
"Wolf",
"Verena",
""
]
] | The stochastic nature of chemical reactions involving randomly fluctuating population sizes has led to a growing research interest in discrete-state stochastic models and their analysis. A widely-used approach is the description of the temporal evolution of the system in terms of a chemical master equation (CME). In this paper we study two approaches for approximating the underlying probability distributions of the CME. The first approach is based on an integration of the statistical moments and the reconstruction of the distribution based on the maximum entropy principle. The second approach relies on an analytical approximation of the probability distribution of the CME using the system size expansion, considering higher-order terms than the linear noise approximation. We consider gene expression networks with unimodal and multimodal protein distributions to compare the accuracy of the two approaches. We find that both methods provide accurate approximations to the distributions of the CME while having different benefits and limitations in applications. |
2005.02071 | Christoph Leitner | Christoph Leitner, Robert Jarolim, Andreas Konrad, Annika Kruse,
Markus Tilp, J\"org Schr\"ottner, Christian Baumgartner | Automatic Tracking of the Muscle Tendon Junction in Healthy and Impaired
Subjects using Deep Learning | Accepted version to be published in 2020, 42nd Annual International
Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),
Montreal, Canada | null | 10.1109/EMBC44109.2020.9176145 | null | q-bio.QM cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recording muscle tendon junction displacements during movement allows
the muscle and the tendon behaviour to be investigated separately. In
order to provide a fully-automatic tracking method, we employ a novel deep
learning approach to detect the position of the muscle tendon junction in
ultrasound images. We utilize the attention mechanism to enable the network to
focus on relevant regions and to obtain a better interpretation of the results.
Our data set consists of a large cohort of 79 healthy subjects and 28 subjects
with movement limitations performing passive full range of motion and maximum
contraction movements. Our trained network shows robust detection of the muscle
tendon junction on a diverse data set of varying quality with a mean absolute
error of 2.55$\pm$1 mm. We show that our approach can be applied for various
subjects and can be operated in real-time. The complete software package is
available for open-source use via: https://github.com/luuleitner/deepMTJ
| [
{
"created": "Tue, 5 May 2020 11:24:40 GMT",
"version": "v1"
}
] | 2020-09-09 | [
[
"Leitner",
"Christoph",
""
],
[
"Jarolim",
"Robert",
""
],
[
"Konrad",
"Andreas",
""
],
[
"Kruse",
"Annika",
""
],
[
"Tilp",
"Markus",
""
],
[
"Schröttner",
"Jörg",
""
],
[
"Baumgartner",
"Christian",
""
]
] | Recording muscle tendon junction displacements during movement allows the muscle and the tendon behaviour to be investigated separately. In order to provide a fully-automatic tracking method, we employ a novel deep learning approach to detect the position of the muscle tendon junction in ultrasound images. We utilize the attention mechanism to enable the network to focus on relevant regions and to obtain a better interpretation of the results. Our data set consists of a large cohort of 79 healthy subjects and 28 subjects with movement limitations performing passive full range of motion and maximum contraction movements. Our trained network shows robust detection of the muscle tendon junction on a diverse data set of varying quality with a mean absolute error of 2.55$\pm$1 mm. We show that our approach can be applied for various subjects and can be operated in real-time. The complete software package is available for open-source use via: https://github.com/luuleitner/deepMTJ |
2402.13658 | Cedric Sueur | Maxime Herbrich (IPHC), Eythan Cousin, Ivan Puga-Gonzalez, Barbara
Tiddi, Claudia Fichtel, Meg Crofoot, Andrew Jj Macintosh, Erica van de Waal,
C\'edric Sueur (IPHC) | Network nestedness in primates: a structural constraint or a biological
advantage of social complexity? | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates the prevalence and implications of nestedness within
primate social networks, examining its relationship with cognitive and
structural factors. We analysed data from 51 primate groups across 21 species,
employing network analysis to evaluate nestedness and its correlation with
modularity, neocortex ratio, and group size. We used Bayesian mixed effects
modelling to investigate nestedness in primate social networks, controlling for
phylogenetic dependencies and exploring various factors like neocortex ratio
and group size. Our findings reveal a significant occurrence of nestedness in
66% of the species studied, exceeding chance expectations. This nestedness was
more pronounced in groups with less steep dominance hierarchies, contrary to
traditional assumptions linking it to hierarchical social structures. A notable
inverse relationship between nestedness and modularity was observed, suggesting
a structural trade-off in network formation. This pattern persisted even after
controlling for species-specific social behaviours, indicating a general
structural feature of primate networks. Surprisingly, our analysis showed no
significant correlation between nestedness and neocortex ratio or group size,
challenging the social brain hypothesis and suggesting a greater role for
ecological factors in cognitive evolution. This study emphasises the importance
of weak links in maintaining network resilience. Overall, our research provides
new insights into primate social network structures, highlighting complex
interplays between network characteristics and challenging existing paradigms
in cognitive and evolutionary biology.
| [
{
"created": "Wed, 21 Feb 2024 09:44:14 GMT",
"version": "v1"
}
] | 2024-02-22 | [
[
"Herbrich",
"Maxime",
"",
"IPHC"
],
[
"Cousin",
"Eythan",
"",
"IPHC"
],
[
"Puga-Gonzalez",
"Ivan",
"",
"IPHC"
],
[
"Tiddi",
"Barbara",
"",
"IPHC"
],
[
"Fichtel",
"Claudia",
"",
"IPHC"
],
[
"Crofoot",
"Meg",
"",
"IPHC"
],
[
"Macintosh",
"Andrew Jj",
"",
"IPHC"
],
[
"van de Waal",
"Erica",
"",
"IPHC"
],
[
"Sueur",
"Cédric",
"",
"IPHC"
]
] | This study investigates the prevalence and implications of nestedness within primate social networks, examining its relationship with cognitive and structural factors. We analysed data from 51 primate groups across 21 species, employing network analysis to evaluate nestedness and its correlation with modularity, neocortex ratio, and group size. We used Bayesian mixed effects modelling to investigate nestedness in primate social networks, controlling for phylogenetic dependencies and exploring various factors like neocortex ratio and group size. Our findings reveal a significant occurrence of nestedness in 66% of the species studied, exceeding chance expectations. This nestedness was more pronounced in groups with less steep dominance hierarchies, contrary to traditional assumptions linking it to hierarchical social structures. A notable inverse relationship between nestedness and modularity was observed, suggesting a structural trade-off in network formation. This pattern persisted even after controlling for species-specific social behaviours, indicating a general structural feature of primate networks. Surprisingly, our analysis showed no significant correlation between nestedness and neocortex ratio or group size, challenging the social brain hypothesis and suggesting a greater role for ecological factors in cognitive evolution. This study emphasises the importance of weak links in maintaining network resilience. Overall, our research provides new insights into primate social network structures, highlighting complex interplays between network characteristics and challenging existing paradigms in cognitive and evolutionary biology. |
1309.6208 | Eric Frichot | Eric Frichot, Fran\c{c}ois Mathieu, Th\'eo Trouillon, Guillaume
Bouchard, Olivier Fran\c{c}ois | Fast Inference of Admixture Coefficients Using Sparse Non-negative
Matrix Factorization Algorithms | 31 pages, 5 figures, 3 tables, 2 supplementary tables, 4
supplementary figures | null | null | null | q-bio.PE q-bio.QM stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference of individual admixture coefficients, which is important for
population genetic and association studies, is commonly performed using
compute-intensive likelihood algorithms. With the availability of large
population genomic data sets, fast versions of likelihood algorithms have
attracted considerable attention. Reducing the computational burden of
estimation algorithms remains, however, a major challenge. Here, we present a
fast and efficient method for estimating individual admixture coefficients
based on sparse non-negative matrix factorization algorithms. We implemented
our method in the computer program sNMF, and applied it to human and plant
genomic data sets. The performance of sNMF was then compared with that of the
likelihood algorithm implemented in the computer program ADMIXTURE. Without
loss of accuracy, sNMF computed estimates of admixture coefficients within
run-times approximately 10 to 30 times faster than those of ADMIXTURE.
| [
{
"created": "Tue, 24 Sep 2013 15:19:38 GMT",
"version": "v1"
}
] | 2013-09-25 | [
[
"Frichot",
"Eric",
""
],
[
"Mathieu",
"François",
""
],
[
"Trouillon",
"Théo",
""
],
[
"Bouchard",
"Guillaume",
""
],
[
"François",
"Olivier",
""
]
] | Inference of individual admixture coefficients, which is important for population genetic and association studies, is commonly performed using compute-intensive likelihood algorithms. With the availability of large population genomic data sets, fast versions of likelihood algorithms have attracted considerable attention. Reducing the computational burden of estimation algorithms remains, however, a major challenge. Here, we present a fast and efficient method for estimating individual admixture coefficients based on sparse non-negative matrix factorization algorithms. We implemented our method in the computer program sNMF, and applied it to human and plant genomic data sets. The performance of sNMF was then compared with that of the likelihood algorithm implemented in the computer program ADMIXTURE. Without loss of accuracy, sNMF computed estimates of admixture coefficients within run-times approximately 10 to 30 times faster than those of ADMIXTURE. |
1204.2198 | Eugene Shakhnovich | Shimon Bershtein, Wanmeng Mu, and Eugene I. Shakhnovich | Soluble oligomerization provides a beneficial fitness effect on
destabilizing mutations | null | PNAS, v.109, pp.4857-62 MARCH 27, 2012 | 10.1073/pnas.1118157109 | null | q-bio.BM physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mutations create the genetic diversity on which selective pressures can act,
yet also create structural instability in proteins. How, then, is it possible
for organisms to ameliorate mutation-induced perturbations of protein stability
while maintaining biological fitness and gaining a selective advantage? Here we
used a new technique of site-specific chromosomal mutagenesis to introduce a
selected set of mostly destabilizing mutations into folA - an essential
chromosomal gene of E. coli encoding dihydrofolate reductase (DHFR) - to
determine how changes in protein stability, activity and abundance affect
fitness. In total, 27 E. coli strains carrying mutant DHFR were created. We
found no significant correlation between protein stability and its catalytic
activity, nor between catalytic activity and fitness, within the limited range
of variation of catalytic activity observed in the mutants. The stability of
these mutants is strongly correlated with their intracellular abundance, suggesting
that protein homeostatic machinery plays an active role in maintaining
intracellular concentrations of proteins. Fitness also shows a significant
correlation with intracellular abundance of soluble DHFR in cells growing at
30°C. At 42°C, on the other hand, the picture was mixed, yet remarkable: a few
strains carrying mutant DHFR proteins aggregated, rendering them nonviable, but,
intriguingly, the majority exhibited fitness higher than wild type. We found
that mutational destabilization of DHFR proteins in E. coli is counterbalanced
at 42°C by their soluble oligomerization, thereby restoring structural
stability and protecting against aggregation.
| [
{
"created": "Tue, 10 Apr 2012 15:42:13 GMT",
"version": "v1"
}
] | 2015-06-04 | [
[
"Bershtein",
"Shimon",
""
],
[
"Mu",
"Wanmeng",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] | Mutations create the genetic diversity on which selective pressures can act, yet also create structural instability in proteins. How, then, is it possible for organisms to ameliorate mutation-induced perturbations of protein stability while maintaining biological fitness and gaining a selective advantage? Here we used a new technique of site-specific chromosomal mutagenesis to introduce a selected set of mostly destabilizing mutations into folA - an essential chromosomal gene of E. coli encoding dihydrofolate reductase (DHFR) - to determine how changes in protein stability, activity and abundance affect fitness. In total, 27 E.coli strains carrying mutant DHFR were created. We found no significant correlation between protein stability and its catalytic activity nor between catalytic activity and fitness in a limited range of variation of catalytic activity observed in mutants. The stability of these mutants is strongly correlated with their intracellular abundance; suggesting that protein homeostatic machinery plays an active role in maintaining intracellular concentrations of proteins. Fitness also shows a significant correlation with intracellular abundance of soluble DHFR in cells growing at 30oC. At 42oC, on the other hand, the picture was mixed, yet remarkable: a few strains carrying mutant DHFR proteins aggregated rendering them nonviable, but, intriguingly, the majority exhibited fitness higher than wild type. We found that mutational destabilization of DHFR proteins in E. coli is counterbalanced at 42oC by their soluble oligomerization, thereby restoring structural stability and protecting against aggregation. |
2202.02143 | Delfim F. M. Torres | Abdesslem Lamrani Alaoui, Moulay Rchid Sidi Ammi, Mouhcine Tilioua,
Delfim F. M. Torres | Global Stability of a Diffusive SEIR Epidemic Model with Distributed
Delay | This is a preprint whose final form is published by Elsevier in the
book 'Mathematical Analysis of Infectious Diseases', 1st Edition - June 1,
2022. ISBN: 9780323905046 | null | 10.1016/B978-0-32-390504-6.00016-4 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the global dynamics of a reaction-diffusion SEIR infection model
with distributed delay and nonlinear incidence rate. The well-posedness of the
proposed model is proved. By means of Lyapunov functionals, we show that the
disease-free equilibrium state is globally asymptotically stable when the basic
reproduction number is less than or equal to one, and that the endemic
equilibrium is globally asymptotically stable when the basic reproduction
number is greater than one. Numerical simulations are provided to illustrate
the obtained theoretical results.
| [
{
"created": "Tue, 1 Feb 2022 18:41:47 GMT",
"version": "v1"
}
] | 2022-04-21 | [
[
"Alaoui",
"Abdesslem Lamrani",
""
],
[
"Ammi",
"Moulay Rchid Sidi",
""
],
[
"Tilioua",
"Mouhcine",
""
],
[
"Torres",
"Delfim F. M.",
""
]
] | We study the global dynamics of a reaction-diffusion SEIR infection model with distributed delay and nonlinear incidence rate. The well-posedness of the proposed model is proved. By means of Lyapunov functionals, we show that the disease free equilibrium state is globally asymptotically stable when the basic reproduction number is less or equal than one, and that the disease endemic equilibrium is globally asymptotically stable when the basic reproduction number is greater than one. Numerical simulations are provided to illustrate the obtained theoretical results. |
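The non-spatial, undelayed core of the model, and the threshold role of the basic reproduction number, can be sketched with a forward-Euler integration. This is a minimal illustration of the classical bilinear SEIR system; the paper's model adds diffusion, distributed delay and a nonlinear incidence rate:

```python
def simulate_seir(beta, sigma, gamma, s0, e0, i0, t_max, dt=0.01):
    """Forward-Euler integration of the classical SEIR model with
    bilinear incidence beta*S*I.  R0 = beta / gamma decides whether the
    infection dies out (R0 <= 1) or invades (R0 > 1)."""
    s, e, i = s0, e0, i0
    r = 1.0 - s0 - e0 - i0
    for _ in range(int(t_max / dt)):
        new_inf = beta * s * i                      # incidence term
        ds = -new_inf                               # susceptibles lost
        de = new_inf - sigma * e                    # exposed gained/progressing
        di = sigma * e - gamma * i                  # infectious gained/recovering
        dr = gamma * i                              # recovered gained
        s, e, i, r = s + dt*ds, e + dt*de, i + dt*di, r + dt*dr
    return s, e, i, r
```

Note the derivatives sum to zero, so the total population is conserved exactly at every step.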
1806.09900 | Joe Greener | Joe G Greener, Lewis Moffat, David T Jones | Design of metalloproteins and novel protein folds using variational
autoencoders | JGG and LM contributed equally to the work | Scientific Reports 8:16189 (2018) | 10.1038/s41598-018-34533-1 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The design of novel proteins has many applications but remains an attritional
process with success in isolated cases. Meanwhile, deep learning technologies
have exploded in popularity in recent years and are increasingly applicable to
biology due to the rise in available data. We attempt to link protein design
and deep learning by using variational autoencoders to generate protein
sequences conditioned on desired properties. Potential copper and calcium
binding sites are added to non-metal binding proteins without human
intervention and compared to a hidden Markov model. In another use case, a
grammar of protein structures is developed and used to produce sequences for a
novel protein topology. One candidate structure is found to be stable by
molecular dynamics simulation. The ability of our model to confine the vast
search space of protein sequences and to scale easily has the potential to
assist in a variety of protein design tasks.
| [
{
"created": "Tue, 26 Jun 2018 11:00:22 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Sep 2018 10:49:41 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Nov 2018 11:23:48 GMT",
"version": "v3"
}
] | 2018-11-07 | [
[
"Greener",
"Joe G",
""
],
[
"Moffat",
"Lewis",
""
],
[
"Jones",
"David T",
""
]
] | The design of novel proteins has many applications but remains an attritional process with success in isolated cases. Meanwhile, deep learning technologies have exploded in popularity in recent years and are increasingly applicable to biology due to the rise in available data. We attempt to link protein design and deep learning by using variational autoencoders to generate protein sequences conditioned on desired properties. Potential copper and calcium binding sites are added to non-metal binding proteins without human intervention and compared to a hidden Markov model. In another use case, a grammar of protein structures is developed and used to produce sequences for a novel protein topology. One candidate structure is found to be stable by molecular dynamics simulation. The ability of our model to confine the vast search space of protein sequences and to scale easily has the potential to assist in a variety of protein design tasks. |
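Two ingredients of any variational autoencoder, the diagonal-Gaussian KL regulariser and the reparameterisation trick, can be sketched in a few lines. This is generic VAE machinery in numpy, not the authors' conditioned protein-sequence model:

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    """KL(q || N(0, I)) for a diagonal Gaussian q = N(mu, exp(log_var));
    the regulariser in the VAE objective."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, exp(log_var)) as a deterministic function of
    (mu, log_var) plus external noise -- the reparameterisation trick
    that lets gradients flow through the sampling step."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

Conditioning on desired properties, as in the paper, amounts to feeding a property vector to both the encoder and the decoder alongside `z`.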
1303.3054 | Xu Yang | Guangwei Si, Min Tang and Xu Yang | A pathway-based mean-field model for E. coli chemotaxis: Mathematical
derivation and Keller-Segel limit | 21 pages, 3 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A pathway-based mean-field theory (PBMFT) was recently proposed for E. coli
chemotaxis in [G. Si, T. Wu, Q. Ouyang and Y. Tu, Phys. Rev. Lett., 109 (2012),
048101]. In this paper, we derived a new moment system of PBMFT by using the
moment closure technique in kinetic theory under the assumption that the
methylation level is locally concentrated. The new system is hyperbolic with
linear convection terms. Under certain assumptions, the new system can recover
the original model. In particular, the assumption on the methylation difference
made there can be understood explicitly in this new moment system. We obtain
the Keller-Segel limit by taking into account the different physical time
scales of tumbling, adaptation and the experimental observations. We also
present numerical evidence to show the quantitative agreement of the moment
system with the individual based E. coli chemotaxis simulator.
| [
{
"created": "Tue, 12 Mar 2013 23:21:39 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2013 05:04:10 GMT",
"version": "v2"
}
] | 2013-04-29 | [
[
"Si",
"Guangwei",
""
],
[
"Tang",
"Min",
""
],
[
"Yang",
"Xu",
""
]
] | A pathway-based mean-field theory (PBMFT) was recently proposed for E. coli chemotaxis in [G. Si, T. Wu, Q. Quyang and Y. Tu, Phys. Rev. Lett., 109 (2012), 048101]. In this paper, we derived a new moment system of PBMFT by using the moment closure technique in kinetic theory under the assumption that the methylation level is locally concentrated. The new system is hyperbolic with linear convection terms. Under certain assumptions, the new system can recover the original model. Especially the assumption on the methylation difference made there can be understood explicitly in this new moment system. We obtain the Keller-Segel limit by taking into account the different physical time scales of tumbling, adaptation and the experimental observations. We also present numerical evidence to show the quantitative agreement of the moment system with the individual based E. coli chemotaxis simulator. |
2403.20239 | Marc Fiammante | Marc Fiammante (1,2), Anne-Isabelle Vermersch (3), Marie Vidailhet
(1,4), Mario Chavez (5) ((1) Paris Brain Institute, Inserm U1127, CNRS
UMR7225, Sorbonne Universite UM75, Inria Paris (Team Nerv), Pitie-Salpetriere
Hospital, Paris, France, (2) Retired IBM Fellow, (3) Physiology & Paediatric
Functional Explorations Unit, Armand Trousseau Hospital, Paris, France, (4)
Institut de Neurologie, Pitie-Salpetriere Hospital, Paris, France, (5) CNRS
UMR-7225, Pitie-Salpetriere Hospital, Paris, France) | A simple EEG-based decision tool for neonatal therapeutic hypothermia in
hypoxic-ischemic encephalopathy | 20 pages, 1 table, 2 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Indication of therapeutic hypothermia needs an accurate identification of
brain injury in the early neonatal period. Here, we aim to provide a simple
hypothermia decision-making tool for the term neonates with hypoxic-ischemic
encephalopathy (HIE) based on features of conventional electroencephalogram
(EEG) taken less than 6 hours from birth. EEG recordings from one hundred
full-term babies with HIE were included in the study. Each EEG recording was
graded by pediatric neurologists for HIE severity. Amplitude of each EEG
segment was analyzed in the slow frequency bands. Temporal fluctuations of
spectral power in delta (0.5 - 4 Hz) frequency band was used to characterize
each HIE grade. For each grade of abnormality, we estimated level and duration
(number of consecutive segments above a given level) probability densities for
power of delta oscillations. These 2D representations of EEG dynamics can
distinguish the mild HIE group from those requiring hypothermia. Our discrimination
system yielded an accuracy, recall, positive predictive value (precision),
negative predictive value, false alarm ratio and F1-score of 98%, 99%, 99%,
94%, 0.06 and 99%, respectively. These results provided an accurate
discrimination of mild versus moderate or severe HIE, and only one mild case
was erroneously detected as relevant for hypothermia. Quantized probability
densities of slow spectral features (delta power) from early conventional EEG
(within 6 hours of birth) revealed significant differences in slow spectral
dynamics between infants with mild HIE grades and those relevant for
hypothermia.
| [
{
"created": "Fri, 29 Mar 2024 15:33:16 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Apr 2024 16:13:46 GMT",
"version": "v2"
}
] | 2024-04-04 | [
[
"Fiammante",
"Marc",
""
],
[
"Vermersch",
"Anne-Isabelle",
""
],
[
"Vidailhet",
"Marie",
""
],
[
"Chavez",
"Mario",
""
]
] | Indication of therapeutic hypothermia needs an accurate identification of brain injury in the early neonatal period. Here, we aim to provide a simple hypothermia decision-making tool for the term neonates with hypoxic-ischemic encephalopathy (HIE) based on features of conventional electroencephalogram (EEG) taken less than 6 hours from birth. EEG recordings from one hundred full-term babies with HIE were included in the study. Each EEG recording was graded by pediatric neurologists for HIE severity. Amplitude of each EEG segment was analyzed in the slow frequency bands. Temporal fluctuations of spectral power in delta (0.5 - 4 Hz) frequency band was used to characterize each HIE grade. For each grade of abnormality, we estimated level and duration (number of consecutive segments above a given level) probability densities for power of delta oscillations. These 2D representation of EEG dynamics can identify mild HIE group from those of requiring hypothermia. Our discrimination system yielded an accuracy, recall, positive predictive value (precision), negative predictive value, false alarm ratio and F1-score of 98%, 99%, 99%, 0.94%, 0.06 and 99%, respectively. These results provided an accurate discrimination of mild versus moderate or severe HIE, and only one mild case was erroneously detected as relevant for hypothermia. Quantized probability densities of slow spectral features (delta power) from early conventional EEG (withing 6 hours of birth) revealed significant differences in slow spectral dynamics between infants with mild HIE grades and those relevant for hypothermia. |
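The delta-band power feature at the heart of the method can be sketched with a plain periodogram. This is an illustrative computation, not the authors' exact segmentation and quantisation pipeline:

```python
import numpy as np

def band_power_fraction(signal, fs, f_lo=0.5, f_hi=4.0):
    """Fraction of total (non-DC) spectral power that falls in the
    [f_lo, f_hi] Hz band -- by default the delta band used in the
    paper.  Minimal periodogram sketch."""
    spec = np.abs(np.fft.rfft(signal)) ** 2       # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = spec[1:].sum()                        # skip the DC bin
    band = spec[(freqs >= f_lo) & (freqs <= f_hi)].sum()
    return band / total
```

In the paper this power is tracked segment by segment, and 2D level/duration probability densities of the resulting time series discriminate the HIE grades.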
2107.06738 | Antonio Mart\'inez-Sanchez | Antonio Martinez-Sanchez, Wolfgang Baumeister and Vladan Lu\v{c}i\'c | Statistical spatial analysis for cryo-electron tomography | null | Computer Methods and Programs in Biomedicine 218 (2022) 106693 | 10.1016/j.cmpb.2022.106693 | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cryo-electron tomography (cryo-ET) is uniquely suited to precisely localize
macromolecular complexes in situ, that is in a close-to-native state within
their cellular compartments, in three-dimensions at high resolution. Point
pattern analysis (PPA) allows quantitative characterization of the spatial
organization of particles. However, current implementations of PPA functions
are not suitable for applications to cryo-ET data because they do not consider
the real, typically irregular 3D shape of cellular compartments and molecular
complexes. Here, we designed and implemented first- and second-order, uni-
and bivariate PPA functions in a Python package for statistical spatial
analysis of particles located in three dimensional regions of arbitrary shape,
such as those encountered in cellular cryo-ET imaging (PyOrg).
To validate the implemented functions, we applied them to specially designed
synthetic datasets. This allowed us to find the algorithmic solutions that
provide the best accuracy and computational performance, and to evaluate the
precision of the implemented functions. Applications to experimental data
showed that despite the higher computational demand, the use of the
second-order functions is advantageous to the first-order ones, because they
allow characterization of the particle organization and statistical inference
over a range of distance scales, as well as the comparative analysis between
experimental groups comprising multiple tomograms.
Altogether, PyOrg is a versatile, precise, and efficient open-source software
for reliable quantitative characterization of macromolecular organization
within cellular compartments imaged in situ by cryo-ET, and is applicable to other 3D
imaging systems where real-size particles are located within regions possessing
complex geometry.
| [
{
"created": "Wed, 14 Jul 2021 14:31:18 GMT",
"version": "v1"
}
] | 2022-03-02 | [
[
"Martinez-Sanchez",
"Antonio",
""
],
[
"Baumeister",
"Wolfgang",
""
],
[
"Lučić",
"Vladan",
""
]
] | Cryo-electron tomography (cryo-ET) is uniquely suited to precisely localize macromolecular complexes in situ, that is in a close-to-native state within their cellular compartments, in three-dimensions at high resolution. Point pattern analysis (PPA) allows quantitative characterization of the spatial organization of particles. However, current implementations of PPA functions are not suitable for applications to cryo-ET data because they do not consider the real, typically irregular 3D shape of cellular compartments and molecular complexes. Here, we designed and implemented first and the second-order, uni- and bivariate PPA functions in a Python package for statistical spatial analysis of particles located in three dimensional regions of arbitrary shape, such as those encountered in cellular cryo-ET imaging (PyOrg). To validate the implemented functions, we applied them to specially designed synthetic datasets. This allowed us to find the algorithmic solutions that provide the best accuracy and computational performance, and to evaluate the precision of the implemented functions. Applications to experimental data showed that despite the higher computational demand, the use of the second-order functions is advantageous to the first-order ones, because they allow characterization of the particle organization and statistical inference over a range of distance scales, as well as the comparative analysis between experimental groups comprising multiple tomograms. Altogether, PyOrg is a versatile, precise, and efficient open-source software for reliable quantitative characterization of macromolecular organization within cellular compartments imaged in situ by cryo-ET, as well as to other 3D imaging systems where real-size particles are located within regions possessing complex geometry. |
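A second-order summary of the kind PyOrg implements is Ripley's K function. A naive 3D estimator, without the edge corrections that irregularly shaped cellular regions require, can be sketched as follows (the function name and the no-correction simplification are illustrative):

```python
import math

def ripley_k(points, r, volume):
    """Naive Ripley's K at radius r for a 3D point pattern observed in a
    region of the given volume, ignoring edge effects:
    K(r) = V / (n (n - 1)) * #{ordered pairs (i, j), i != j, dist <= r}.
    Under complete spatial randomness, K(r) ~ (4/3) * pi * r**3."""
    n = len(points)
    pairs = sum(1
                for i in range(n)
                for j in range(n)
                if i != j and math.dist(points[i], points[j]) <= r)
    return volume * pairs / (n * (n - 1))
```

Values above the complete-spatial-randomness curve indicate clustering at scale `r`, values below it indicate repulsion.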
1811.12499 | Kevin Keys | Alfonso Landeros, Timothy Stutz, Kevin L. Keys, Alexander Alekseyenko,
Janet S. Sinsheimer, Kenneth Lange, Mary Sehl | BioSimulator.jl: Stochastic simulation in Julia | 27 pages, 5 figures, 3 tables | Computer Methods and Programs in Biomedicine, Volume 167, December
2018, Pages 23-35 | 10.1016/j.cmpb.2018.09.009 | null | q-bio.QM math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological systems with intertwined feedback loops pose a challenge to
mathematical modeling efforts. Moreover, rare events, such as mutation and
extinction, complicate system dynamics. Stochastic simulation algorithms are
useful in generating time-evolution trajectories for these systems because they
can adequately capture the influence of random fluctuations and quantify rare
events. We present a simple and flexible package, BioSimulator.jl, for
implementing the Gillespie algorithm, $\tau$-leaping, and related stochastic
simulation algorithms. The objective of this work is to provide scientists
across domains with fast, user-friendly simulation tools. We used the
high-performance programming language Julia because of its emphasis on
scientific computing. Our software package implements a suite of stochastic
simulation algorithms based on Markov chain theory. We provide the ability to
(a) diagram Petri Nets describing interactions, (b) plot average trajectories
and attached standard deviations of each participating species over time, and
(c) generate frequency distributions of each species at a specified time.
BioSimulator.jl's interface allows users to build models programmatically
within Julia. A model is then passed to the simulate routine to generate
simulation data. The built-in tools allow one to visualize results and compute
summary statistics. Our examples highlight the broad applicability of our
software to systems of varying complexity from ecology, systems biology,
chemistry, and genetics. The user-friendly nature of BioSimulator.jl encourages
the use of stochastic simulation, minimizes tedious programming efforts, and
reduces errors during model specification.
| [
{
"created": "Thu, 29 Nov 2018 21:38:16 GMT",
"version": "v1"
}
] | 2018-12-10 | [
[
"Landeros",
"Alfonso",
""
],
[
"Stutz",
"Timothy",
""
],
[
"Keys",
"Kevin L.",
""
],
[
"Alekseyenko",
"Alexander",
""
],
[
"Sinsheimer",
"Janet S.",
""
],
[
"Lange",
"Kenneth",
""
],
[
"Sehl",
"Mary",
""
]
] | Biological systems with intertwined feedback loops pose a challenge to mathematical modeling efforts. Moreover, rare events, such as mutation and extinction, complicate system dynamics. Stochastic simulation algorithms are useful in generating time-evolution trajectories for these systems because they can adequately capture the influence of random fluctuations and quantify rare events. We present a simple and flexible package, BioSimulator.jl, for implementing the Gillespie algorithm, $\tau$-leaping, and related stochastic simulation algorithms. The objective of this work is to provide scientists across domains with fast, user-friendly simulation tools. We used the high-performance programming language Julia because of its emphasis on scientific computing. Our software package implements a suite of stochastic simulation algorithms based on Markov chain theory. We provide the ability to (a) diagram Petri Nets describing interactions, (b) plot average trajectories and attached standard deviations of each participating species over time, and (c) generate frequency distributions of each species at a specified time. BioSimulator.jl's interface allows users to build models programmatically within Julia. A model is then passed to the simulate routine to generate simulation data. The built-in tools allow one to visualize results and compute summary statistics. Our examples highlight the broad applicability of our software to systems of varying complexity from ecology, systems biology, chemistry, and genetics. The user-friendly nature of BioSimulator.jl encourages the use of stochastic simulation, minimizes tedious programming efforts, and reduces errors during model specification. |
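The direct (Gillespie) method that such packages implement can be sketched for the simplest reaction network, an immigration-death process. This is Python rather than Julia, and a hypothetical minimal helper rather than BioSimulator.jl's API:

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_max, rng):
    """Gillespie's direct method for 0 -> X at rate k_birth and
    X -> 0 at per-capita rate k_death.  Assumes k_birth > 0 so the
    total propensity never vanishes.  Stationary mean: k_birth/k_death."""
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        a1 = k_birth                  # birth propensity
        a2 = k_death * x              # death propensity
        a0 = a1 + a2
        t += rng.expovariate(a0)      # exponentially distributed waiting time
        if t > t_max:
            break
        x = x + 1 if rng.random() * a0 < a1 else x - 1
        times.append(t)
        states.append(x)
    return times, states
```

General networks replace the two propensities with one per reaction channel and pick the channel proportionally to its propensity, exactly as here.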
1609.09421 | Yuri Shestopaloff | Yuri K. Shestopaloff | Physical mechanisms influencing life origin and development.
Physical-biochemical paradigm of Life | 49 pages, 8 figures, 1 table. Mathematical derivations are presented
with more intermediate transformations | Biophysical Reviews and Letters, 2023,
https://www.worldscientific.com/doi/epdf/10.1142/S1793048023500030 | 10.1142/S1793048023500030 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present view of biological phenomena is based on a biochemical paradigm
that development of living organisms is defined by information stored in a
molecular form as some genetic code. However, new discoveries indicate that
biological phenomena cannot be confined to a biochemical realm alone, but are
also influenced by physical mechanisms. These mechanisms work at cellular,
organ and whole organism spatial levels. They impose uniquely defined
constraints on distribution of nutrients between biomass synthesis and
maintenance of existing biomass, thus influencing the composition of
biochemical reactions, their successive change and irreversibility during the
organismal life cycle. Mathematically, such a growth mechanism is represented
by a growth equation. Using this equation, we introduce growth models, show
their adequacy to experimental data, and discover two types of division
mechanisms, examining growth of unicellular organisms Amoeba, S. pombe, E.
coli, B. subtilis, Staphylococcus. Also, on the basis of the growth equation,
we find different metabolic characteristics of these organisms. For instance,
it was shown that in logarithmic coordinates the values of their metabolic
allometric exponents are located on a straight line. This fact has important
implications with regard to evolutionary process of organisms within a food
chain, considered as a single system. The close agreement of the obtained
results with experimental data from different perspectives, together with their
compliance with previously established, more specific knowledge and with
general criteria for validating scientific claims, supports the validity of the introduced general
growth mechanism and the growth equation. Taken together, the obtained results
set solid grounds for introduction of a more comprehensive physical-biochemical
paradigm of Life origin, development and evolution.
| [
{
"created": "Thu, 29 Sep 2016 16:45:49 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2017 01:47:18 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Jan 2018 20:04:13 GMT",
"version": "v3"
},
{
"created": "Fri, 1 Jun 2018 21:24:38 GMT",
"version": "v4"
},
{
"created": "Tue, 13 Jun 2023 00:54:48 GMT",
"version": "v5"
},
{
"created": "Sat, 2 Sep 2023 15:22:00 GMT",
"version": "v6"
}
] | 2023-11-14 | [
[
"Shestopaloff",
"Yuri K.",
""
]
] | The present view of biological phenomena is based on a biochemical paradigm that development of living organisms is defined by information stored in a molecular form as some genetic code. However, new discoveries indicate that biological phenomena cannot be confined to a biochemical realm alone, but are also influenced by physical mechanisms. These mechanisms work at cellular, organ and whole organism spatial levels. They impose uniquely defined constraints on distribution of nutrients between biomass synthesis and maintenance of existing biomass, thus influencing the composition of biochemical reactions, their successive change and irreversibility during the organismal life cycle. Mathematically, such a growth mechanism is represented by a growth equation. Using this equation, we introduce growth models, show their adequacy to experimental data, and discover two types of division mechanisms, examining growth of unicellular organisms Amoeba, S. pombe, E. coli, B. subtilis, Staphylococcus. Also, on the basis of the growth equation, we find different metabolic characteristics of these organisms. For instance, it was shown that in logarithmic coordinates the values of their metabolic allometric exponents are located on a straight line. This fact has important implications with regard to evolutionary process of organisms within a food chain, considered as a single system. High adequateness of obtained results to experimental data, from different perspectives, as well as excellent compliance with previously proven more particular knowledge, and with general criteria for validation of scientific truths, proves validity of the introduced general growth mechanism and the growth equation. Taken together, the obtained results set solid grounds for introduction of a more comprehensive physical-biochemical paradigm of Life origin, development and evolution. |
2109.07925 | Christopher Wood | Leonardo V. Castorina, Rokas Petrenas, Kartic Subr and Christopher W.
Wood | PDBench: Evaluating Computational Methods for Protein Sequence Design | 9 pages, 5 figures | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Proteins perform critical processes in all living systems: converting solar
energy into chemical energy, replicating DNA, serving as the basis of highly
performant materials, sensing and much more. While an incredible range of functionality
has been sampled in nature, it accounts for a tiny fraction of the possible
protein universe. If we could tap into this pool of unexplored protein
structures, we could search for novel proteins with useful properties that we
could apply to tackle the environmental and medical challenges facing humanity.
This is the purpose of protein design.
Sequence design is an important aspect of protein design, and many successful
methods to do this have been developed. Recently, deep-learning methods that
frame it as a classification problem have emerged as a powerful approach.
Beyond their reported improvement in performance, their primary advantage over
physics-based methods is that the computational burden is shifted from the user
to the developers, thereby increasing accessibility to the design method.
Despite this trend, the tools for assessment and comparison of such models
remain quite generic. The goal of this paper is to both address the timely
problem of evaluation and to shine a spotlight, within the Machine Learning
community, on specific assessment criteria that will accelerate impact.
We present a carefully curated benchmark set of proteins and propose a number
of standard tests to assess the performance of deep learning based methods. Our
robust benchmark provides biological insight into the behaviour of design
methods, which is essential for evaluating their performance and utility. We
compare five existing models with two novel models for sequence prediction.
Finally, we test the designs produced by these models with AlphaFold2, a
state-of-the-art structure-prediction algorithm, to determine if they are
likely to fold into the intended 3D shapes.
| [
{
"created": "Thu, 16 Sep 2021 12:20:03 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Sep 2021 09:23:31 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Sep 2021 13:34:33 GMT",
"version": "v3"
}
] | 2021-09-29 | [
[
"Castorina",
"Leonardo V.",
""
],
[
"Petrenas",
"Rokas",
""
],
[
"Subr",
"Kartic",
""
],
[
"Wood",
"Christopher W.",
""
]
] | Proteins perform critical processes in all living systems: converting solar energy into chemical energy, replicating DNA, as the basis of highly performant materials, sensing and much more. While an incredible range of functionality has been sampled in nature, it accounts for a tiny fraction of the possible protein universe. If we could tap into this pool of unexplored protein structures, we could search for novel proteins with useful properties that we could apply to tackle the environmental and medical challenges facing humanity. This is the purpose of protein design. Sequence design is an important aspect of protein design, and many successful methods to do this have been developed. Recently, deep-learning methods that frame it as a classification problem have emerged as a powerful approach. Beyond their reported improvement in performance, their primary advantage over physics-based methods is that the computational burden is shifted from the user to the developers, thereby increasing accessibility to the design method. Despite this trend, the tools for assessment and comparison of such models remain quite generic. The goal of this paper is to both address the timely problem of evaluation and to shine a spotlight, within the Machine Learning community, on specific assessment criteria that will accelerate impact. We present a carefully curated benchmark set of proteins and propose a number of standard tests to assess the performance of deep learning based methods. Our robust benchmark provides biological insight into the behaviour of design methods, which is essential for evaluating their performance and utility. We compare five existing models with two novel models for sequence prediction. Finally, we test the designs produced by these models with AlphaFold2, a state-of-the-art structure-prediction algorithm, to determine if they are likely to fold into the intended 3D shapes. |
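The most common scalar in such comparisons, sequence recovery against the native sequence, is simple to pin down. This is a hypothetical helper, not PDBench's own code:

```python
def sequence_recovery(designed: str, native: str) -> float:
    """Fraction of positions at which the designed sequence reproduces
    the native residue -- the standard 'sequence recovery' score used
    to compare sequence-design methods."""
    if len(designed) != len(native):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(d == n for d, n in zip(designed, native))
    return matches / len(native)
```

The paper argues this single number is not enough, which is why the benchmark adds per-class and structural assessments on top of it.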
0903.4161 | Federico Zertuche | Federico Zertuche | On the Robustness of NK-Kauffman Networks Against Changes in their
Connections and Boolean Functions | 17 pages, 1 figure, Accepted in Journal of Mathematical Physics | null | 10.1063/1.3116166 | null | q-bio.QM math-ph math.MP nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NK-Kauffman networks {\cal L}^N_K are a subset of the Boolean functions on N
Boolean variables to themselves, \Lambda_N = {\xi: \IZ_2^N \to \IZ_2^N}. To
each NK-Kauffman network it is possible to assign a unique Boolean function on
N variables through the function \Psi: {\cal L}^N_K \to \Lambda_N. The
probability {\cal P}_K that \Psi (f) = \Psi (f'), when f' is obtained through f
by a change of one of its K-Boolean functions (b_K: \IZ_2^K \to \IZ_2), and/or
connections; is calculated. The leading term of the asymptotic expansion of
{\cal P}_K, for N \gg 1, turns out to depend on: the probability to extract the
tautology and contradiction Boolean functions, and in the average value of the
distribution of probability of the Boolean functions; the other terms decay as
{\cal O} (1 / N). In order to accomplish this, a classification of the Boolean
functions in terms of what I have called their irreducible degree of
connectivity is established. The mathematical findings are discussed in the
biological context where, \Psi is used to model the genotype-phenotype map.
| [
{
"created": "Tue, 24 Mar 2009 18:56:40 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Zertuche",
"Federico",
""
]
] | NK-Kauffman networks {\cal L}^N_K are a subset of the Boolean functions on N Boolean variables to themselves, \Lambda_N = {\xi: \IZ_2^N \to \IZ_2^N}. To each NK-Kauffman network it is possible to assign a unique Boolean function on N variables through the function \Psi: {\cal L}^N_K \to \Lambda_N. The probability {\cal P}_K that \Psi (f) = \Psi (f'), when f' is obtained from f by a change of one of its K-Boolean functions (b_K: \IZ_2^K \to \IZ_2) and/or connections, is calculated. The leading term of the asymptotic expansion of {\cal P}_K, for N \gg 1, turns out to depend on the probability of extracting the tautology and contradiction Boolean functions, and on the average value of the probability distribution of the Boolean functions; the other terms decay as {\cal O} (1 / N). In order to accomplish this, a classification of the Boolean functions in terms of what I have called their irreducible degree of connectivity is established. The mathematical findings are discussed in the biological context where \Psi is used to model the genotype-phenotype map. |
2208.06360 | Kisung Moon | Kisung Moon, Sunyoung Kwon | 3D Graph Contrastive Learning for Molecular Property Prediction | need to be edited | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised learning (SSL) is a method that learns the data
representation by utilizing supervision inherent in the data. This learning
method is in the spotlight in the drug field, lacking annotated data due to
time-consuming and expensive experiments. SSL using enormous unlabeled data has
shown excellent performance for molecular property prediction, but a few issues
exist. (1) Existing SSL models are large-scale; there is a limitation to
implementing SSL where the computing resource is insufficient. (2) In most
cases, they do not utilize 3D structural information for molecular
representation learning. The activity of a drug is closely related to the
structure of the drug molecule. Nevertheless, most current models do not use 3D
information or use it partially. (3) Previous models that apply contrastive
learning to molecules use the augmentation of permuting atoms and bonds.
Therefore, molecules having different characteristics can be in the same
positive samples. We propose a novel contrastive learning framework,
small-scale 3D Graph Contrastive Learning (3DGCL) for molecular property
prediction, to solve the above problems. 3DGCL learns the molecular
representation by reflecting the molecule's structure through the pre-training
process that does not change the semantics of the drug. Using only 1,128
samples for pre-train data and 1 million model parameters, we achieved the
state-of-the-art or comparable performance in four regression benchmark
datasets. Extensive experiments demonstrate that 3D structural information
based on chemical knowledge is essential to molecular representation learning
for property prediction.
| [
{
"created": "Tue, 31 May 2022 04:45:31 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Aug 2022 13:10:50 GMT",
"version": "v2"
}
] | 2022-08-19 | [
[
"Moon",
"Kisung",
""
],
[
"Kwon",
"Sunyoung",
""
]
] | Self-supervised learning (SSL) is a method that learns data representations by utilizing supervision inherent in the data. This learning method is in the spotlight in the drug field, which lacks annotated data due to time-consuming and expensive experiments. SSL using enormous amounts of unlabeled data has shown excellent performance for molecular property prediction, but a few issues exist. (1) Existing SSL models are large-scale; this limits the use of SSL where computing resources are insufficient. (2) In most cases, they do not utilize 3D structural information for molecular representation learning. The activity of a drug is closely related to the structure of the drug molecule. Nevertheless, most current models do not use 3D information, or use it only partially. (3) Previous models that apply contrastive learning to molecules use augmentations that permute atoms and bonds; as a result, molecules with different characteristics can end up among the same positive samples. We propose a novel contrastive learning framework, small-scale 3D Graph Contrastive Learning (3DGCL), for molecular property prediction to solve the above problems. 3DGCL learns molecular representations that reflect the molecule's structure through a pre-training process that does not change the semantics of the drug. Using only 1,128 samples of pre-training data and 1 million model parameters, we achieved state-of-the-art or comparable performance on four regression benchmark datasets. Extensive experiments demonstrate that 3D structural information based on chemical knowledge is essential to molecular representation learning for property prediction. |
2008.05897 | Mouhamadou Aliou Mountaga Tall Bald\'e | Fulgence Mansal, Mouhamadou A.M.T. Bald\'e and Alpha O. Bah | Study of COVID-19 anti-pandemic strategies by using optimal control | 21 pages, 32 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we present a new epidemiological model, with contamination
from confirmed and unreported. We also compute equilibria and study their
stability without intervention strategies. Optimal control theory has proven to
be a successful tool in understanding ways to curtail the spread of infectious
diseases by devising the optimal disease intervention strategies. We
investigate the impact of distancing, case finding, and case holding controls
while at the same time, we minimize the number of infected and dead
individuals. The method consists of minimizing the cost functional related to
infectious, death, and controls through some strategies to reduce the spread of
the COVID19 epidemic.
| [
{
"created": "Sun, 9 Aug 2020 21:42:03 GMT",
"version": "v1"
}
] | 2020-08-14 | [
[
"Mansal",
"Fulgence",
""
],
[
"Baldé",
"Mouhamadou A. M. T.",
""
],
[
"Bah",
"Alpha O.",
""
]
] | In this study, we present a new epidemiological model with contamination from both confirmed and unreported cases. We also compute equilibria and study their stability in the absence of intervention strategies. Optimal control theory has proven to be a successful tool in understanding ways to curtail the spread of infectious diseases by devising optimal disease intervention strategies. We investigate the impact of distancing, case finding, and case holding controls while, at the same time, minimizing the number of infected and dead individuals. The method consists of minimizing the cost functional related to infections, deaths, and controls through strategies that reduce the spread of the COVID-19 epidemic. |
2401.06199 | Xingyi Cheng | Bo Chen, Xingyi Cheng, Pan Li, Yangli-ao Geng, Jing Gong, Shen Li,
Zhilei Bei, Xu Tan, Boyan Wang, Xin Zeng, Chiming Liu, Aohan Zeng, Yuxiao
Dong, Jie Tang, Le Song | xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering
the Language of Protein | null | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Protein language models have shown remarkable success in learning biological
information from protein sequences. However, most existing models are limited
by either autoencoding or autoregressive pre-training objectives, which makes
them struggle to handle protein understanding and generation tasks
concurrently. We propose a unified protein language model, xTrimoPGLM, to
address these two types of tasks simultaneously through an innovative
pre-training framework. Our key technical contribution is an exploration of the
compatibility and the potential for joint optimization of the two types of
objectives, which has led to a strategy for training xTrimoPGLM at an
unprecedented scale of 100 billion parameters and 1 trillion training tokens.
Our extensive experiments reveal that 1) xTrimoPGLM significantly outperforms
other advanced baselines in 18 protein understanding benchmarks across four
categories. The model also facilitates an atomic-resolution view of protein
structures, leading to an advanced 3D structural prediction model that
surpasses existing language model-based tools. 2) xTrimoPGLM not only can
generate de novo protein sequences following the principles of natural ones,
but also can perform programmable generation after supervised fine-tuning (SFT)
on curated sequences. These results highlight the substantial capability and
versatility of xTrimoPGLM in understanding and generating protein sequences,
contributing to the evolving landscape of foundation models in protein science.
| [
{
"created": "Thu, 11 Jan 2024 15:03:17 GMT",
"version": "v1"
}
] | 2024-01-15 | [
[
"Chen",
"Bo",
""
],
[
"Cheng",
"Xingyi",
""
],
[
"Li",
"Pan",
""
],
[
"Geng",
"Yangli-ao",
""
],
[
"Gong",
"Jing",
""
],
[
"Li",
"Shen",
""
],
[
"Bei",
"Zhilei",
""
],
[
"Tan",
"Xu",
""
],
[
"Wang",
"Boyan",
""
],
[
"Zeng",
"Xin",
""
],
[
"Liu",
"Chiming",
""
],
[
"Zeng",
"Aohan",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Tang",
"Jie",
""
],
[
"Song",
"Le",
""
]
] | Protein language models have shown remarkable success in learning biological information from protein sequences. However, most existing models are limited by either autoencoding or autoregressive pre-training objectives, which makes them struggle to handle protein understanding and generation tasks concurrently. We propose a unified protein language model, xTrimoPGLM, to address these two types of tasks simultaneously through an innovative pre-training framework. Our key technical contribution is an exploration of the compatibility and the potential for joint optimization of the two types of objectives, which has led to a strategy for training xTrimoPGLM at an unprecedented scale of 100 billion parameters and 1 trillion training tokens. Our extensive experiments reveal that 1) xTrimoPGLM significantly outperforms other advanced baselines in 18 protein understanding benchmarks across four categories. The model also facilitates an atomic-resolution view of protein structures, leading to an advanced 3D structural prediction model that surpasses existing language model-based tools. 2) xTrimoPGLM not only can generate de novo protein sequences following the principles of natural ones, but also can perform programmable generation after supervised fine-tuning (SFT) on curated sequences. These results highlight the substantial capability and versatility of xTrimoPGLM in understanding and generating protein sequences, contributing to the evolving landscape of foundation models in protein science. |
1501.02124 | Ildefonso De la Fuente M | Ildefonso M. De la Fuente | New insights on the Dynamic Cellular Metabolism | 1 figure | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large number of studies have shown the existence of metabolic covalent
modifications in different molecular structures, able to store biochemical
information that is not encoded by the DNA. Some of these covalent mark
patterns can be transmitted across generations (epigenetic changes). Recently,
the emergence of Hopfield-like attractor dynamics has been observed in the
self-organized enzymatic networks, which have the capacity to store functional
catalytic patterns that can be correctly recovered by the specific input
stimuli. The Hopfield-like metabolic dynamics are stable and can be maintained
as a long-term biochemical memory. In addition, specific molecular information
can be transferred from the functional dynamics of the metabolic networks to
the enzymatic activity involved in the covalent post-translational modulation
so that determined functional memory can be embedded in multiple stable
molecular marks. Both the metabolic dynamics governed by Hopfield-type
attractors (functional processes) and the enzymatic covalent modifications of
determined molecules (structural dynamic processes) seem to represent the two
stages of the dynamical memory of cellular metabolism (metabolic memory).
Epigenetic processes appear to be the structural manifestation of this cellular
metabolic memory. Here, a new framework for molecular information storage in
the cell is presented, which is characterized by two functionally and
molecularly interrelated systems: a dynamic, flexible and adaptive system
(metabolic memory) and an essentially conservative system (genetic memory). The
molecular information of both systems seems to coordinate the physiological
development of the whole cell.
| [
{
"created": "Fri, 9 Jan 2015 12:53:08 GMT",
"version": "v1"
}
] | 2015-01-12 | [
[
"De la Fuente",
"Ildefonso M.",
""
]
] | A large number of studies have shown the existence of metabolic covalent modifications in different molecular structures, able to store biochemical information that is not encoded by the DNA. Some of these covalent mark patterns can be transmitted across generations (epigenetic changes). Recently, the emergence of Hopfield-like attractor dynamics has been observed in self-organized enzymatic networks, which have the capacity to store functional catalytic patterns that can be correctly recovered by specific input stimuli. The Hopfield-like metabolic dynamics are stable and can be maintained as a long-term biochemical memory. In addition, specific molecular information can be transferred from the functional dynamics of the metabolic networks to the enzymatic activity involved in covalent post-translational modulation, so that specific functional memory can be embedded in multiple stable molecular marks. Both the metabolic dynamics governed by Hopfield-type attractors (functional processes) and the enzymatic covalent modifications of specific molecules (structural dynamic processes) seem to represent the two stages of the dynamical memory of cellular metabolism (metabolic memory). Epigenetic processes appear to be the structural manifestation of this cellular metabolic memory. Here, a new framework for molecular information storage in the cell is presented, characterized by two functionally and molecularly interrelated systems: a dynamic, flexible and adaptive system (metabolic memory) and an essentially conservative system (genetic memory). The molecular information of both systems seems to coordinate the physiological development of the whole cell. |
2105.02811 | Weikai Li | Weikai Li, Yongxiang Tang, Zhengxia Wang, Shuo Hu and Xin Gao | The Reconfiguration Pattern of Individual Brain Metabolic Connectome for
Parkinson's Disease Identification | 9 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Background: Positron Emission Tomography (PET) with 18F-fluorodeoxyglucose
(18F-FDG) reveals metabolic abnormalities in Parkinson's disease (PD) at a
systemic level. Previous metabolic connectome studies derived from groups of
patients have failed to identify the individual neurophysiological details. We
aim to establish an individual metabolic connectome method to characterize the
aberrant connectivity patterns and topological alterations of the
individual-level brain metabolic connectome and their diagnostic value in PD.
Methods: The 18F-FDG PET data of 49 PD patients and 49 healthy controls (HCs)
were recruited. Each individual's metabolic brain network was ascertained using
the proposed Jensen-Shannon Divergence Similarity Estimation (JSSE) method. The
intergroup difference of the individual's metabolic brain network and its
global and local graph metrics were analyzed to investigate the metabolic
connectome's alterations. The identification of the PD from HC individuals was
used by the multiple kernel support vector machine (MK-SVM) to combine the
information from connection and topological metrics. The validation was
conducted using the nest leave-one-out cross-validation strategy to confirm the
performance of the methods. Results: The proposed JSSE metabolic connectome
method showed the most involved metabolic motor networks were PUT-PCG, THA-PCG,
and SMA pathways in PD, which was similar to the typical group-level method,
and yielded another detailed individual pathological connectivity in ACG-PCL,
DCG-PHG and ACG pathways. These aberrant functional network measures exhibited
an ideal classification performance in the identifying of PD individuals from
HC individuals at an accuracy of up to 91.84%.
| [
{
"created": "Thu, 29 Apr 2021 06:46:52 GMT",
"version": "v1"
}
] | 2021-05-07 | [
[
"Li",
"Weikai",
""
],
[
"Tang",
"Yongxiang",
""
],
[
"Wang",
"Zhengxia",
""
],
[
"Hu",
"Shuo",
""
],
[
"Gao",
"Xin",
""
]
] | Background: Positron Emission Tomography (PET) with 18F-fluorodeoxyglucose (18F-FDG) reveals metabolic abnormalities in Parkinson's disease (PD) at a systemic level. Previous metabolic connectome studies derived from groups of patients have failed to identify individual neurophysiological details. We aim to establish an individual metabolic connectome method to characterize the aberrant connectivity patterns and topological alterations of the individual-level brain metabolic connectome and their diagnostic value in PD. Methods: 18F-FDG PET data were collected from 49 PD patients and 49 healthy controls (HCs). Each individual's metabolic brain network was ascertained using the proposed Jensen-Shannon Divergence Similarity Estimation (JSSE) method. The intergroup differences of the individual's metabolic brain network and its global and local graph metrics were analyzed to investigate the metabolic connectome's alterations. Identification of PD versus HC individuals was performed using the multiple kernel support vector machine (MK-SVM) to combine the information from connection and topological metrics. Validation was conducted using a nested leave-one-out cross-validation strategy to confirm the performance of the methods. Results: The proposed JSSE metabolic connectome method showed that the most involved metabolic motor networks were the PUT-PCG, THA-PCG, and SMA pathways in PD, similar to the typical group-level method, and additionally yielded detailed individual pathological connectivity in the ACG-PCL, DCG-PHG and ACG pathways. These aberrant functional network measures exhibited an ideal classification performance in identifying PD individuals from HC individuals, at an accuracy of up to 91.84%. |
2205.13816 | Paolo Muratore | Paolo Muratore, Sina Tafazoli, Eugenio Piasini, Alessandro Laio and
Davide Zoccolan | Prune and distill: similar reformatting of image information along rat
visual cortex and deep neural networks | 11 pages, 5 figures | Advances in Neural Information Processing Systems (2022) Vol. 35
pp. 30206-30218 | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Visual object recognition has been extensively studied in both neuroscience
and computer vision. Recently, the most popular class of artificial systems for
this task, deep convolutional neural networks (CNNs), has been shown to provide
excellent models for its functional analogue in the brain, the ventral stream
in visual cortex. This has prompted questions on what, if any, are the common
principles underlying the reformatting of visual information as it flows
through a CNN or the ventral stream. Here we consider some prominent
statistical patterns that are known to exist in the internal representations of
either CNNs or the visual cortex and look for them in the other system. We show
that intrinsic dimensionality (ID) of object representations along the rat
homologue of the ventral stream presents two distinct expansion-contraction
phases, as previously shown for CNNs. Conversely, in CNNs, we show that
training results in both distillation and active pruning (mirroring the
increase in ID) of low- to middle-level image information in single units, as
representations gain the ability to support invariant discrimination, in
agreement with previous observations in rat visual cortex. Taken together, our
findings suggest that CNNs and visual cortex share a similarly tight
relationship between dimensionality expansion/reduction of object
representations and reformatting of image information.
| [
{
"created": "Fri, 27 May 2022 08:06:40 GMT",
"version": "v1"
}
] | 2023-06-06 | [
[
"Muratore",
"Paolo",
""
],
[
"Tafazoli",
"Sina",
""
],
[
"Piasini",
"Eugenio",
""
],
[
"Laio",
"Alessandro",
""
],
[
"Zoccolan",
"Davide",
""
]
] | Visual object recognition has been extensively studied in both neuroscience and computer vision. Recently, the most popular class of artificial systems for this task, deep convolutional neural networks (CNNs), has been shown to provide excellent models for its functional analogue in the brain, the ventral stream in visual cortex. This has prompted questions on what, if any, are the common principles underlying the reformatting of visual information as it flows through a CNN or the ventral stream. Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex and look for them in the other system. We show that intrinsic dimensionality (ID) of object representations along the rat homologue of the ventral stream presents two distinct expansion-contraction phases, as previously shown for CNNs. Conversely, in CNNs, we show that training results in both distillation and active pruning (mirroring the increase in ID) of low- to middle-level image information in single units, as representations gain the ability to support invariant discrimination, in agreement with previous observations in rat visual cortex. Taken together, our findings suggest that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information. |
0807.0499 | Melanie J.I. M\"uller | Melanie J.I. M\"uller, Stefan Klumpp, Reinhard Lipowsky | Tug-of-war as a cooperative mechanism for bidirectional cargo transport
by molecular motors | 17 pages, latex, 11 figures, 4 tables, includes Supporting
Information | Proc. Natl. Acad. Sci. USA 105, 4609-4614 (2008) | 10.1073/pnas.0706825105 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intracellular transport is based on molecular motors that pull cargos along
cytoskeletal filaments. One motor species always moves in one direction, e.g.
conventional kinesin moves to the microtubule plus end, while cytoplasmic
dynein moves to the microtubule minus end. However, many cellular cargos are
observed to move bidirectionally, involving both plus-end and minus-end
directed motors. The presumably simplest mechanism for such bidirectional
transport is provided by a tug-of-war between the two motor species. This
mechanism is studied theoretically using the load-dependent transport
properties of individual motors as measured in single-molecule experiments. In
contrast to previous expectations, such a tug-of-war is found to be highly
cooperative and to exhibit seven different motility regimes depending on the
precise values of the single motor parameters. The sensitivity of the transport
process to small parameter changes can be used by the cell to regulate its
cargo traffic.
| [
{
"created": "Thu, 3 Jul 2008 07:48:23 GMT",
"version": "v1"
}
] | 2008-07-04 | [
[
"Müller",
"Melanie J. I.",
""
],
[
"Klumpp",
"Stefan",
""
],
[
"Lipowsky",
"Reinhard",
""
]
] | Intracellular transport is based on molecular motors that pull cargos along cytoskeletal filaments. One motor species always moves in one direction, e.g. conventional kinesin moves to the microtubule plus end, while cytoplasmic dynein moves to the microtubule minus end. However, many cellular cargos are observed to move bidirectionally, involving both plus-end and minus-end directed motors. The presumably simplest mechanism for such bidirectional transport is provided by a tug-of-war between the two motor species. This mechanism is studied theoretically using the load-dependent transport properties of individual motors as measured in single-molecule experiments. In contrast to previous expectations, such a tug-of-war is found to be highly cooperative and to exhibit seven different motility regimes depending on the precise values of the single motor parameters. The sensitivity of the transport process to small parameter changes can be used by the cell to regulate its cargo traffic. |
2004.13485 | Nathalie Henrich Bernardoni | Jean-Philippe Epron, Jocelyne Sarfati, Nathalie Henrich Bernardoni
(GIPSA-GAMA) | Callas or the trajectory of the meteor | in French | Revue de Laryngologie Otologie Rhinologie,
2010, 131 (1), pp.35-38 | null | null | q-bio.OT physics.class-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The lyric career of Maria Callas, though exceptional, is also noteworthy for
its brevity. The first signs of downturn appeared at the age of 36 and her
voice fell silent at only 40. Though the literature has massively commented on
this premature worsening, few analyses of its characteristics have been made
public so far. The purpose of our study was to realise a perceptual and
acoustical analysis of recorded arias by the artist at the climax to the fall.
The audible impairments were first verbally described, and then compared to
acoustical observations based on spectrographic analyses and
fundamental-frequency measurements.
| [
{
"created": "Thu, 23 Apr 2020 12:23:45 GMT",
"version": "v1"
}
] | 2020-04-29 | [
[
"Epron",
"Jean-Philippe",
"",
"GIPSA-GAMA"
],
[
"Sarfati",
"Jocelyne",
"",
"GIPSA-GAMA"
],
[
"Bernardoni",
"Nathalie Henrich",
"",
"GIPSA-GAMA"
]
] | The lyric career of Maria Callas, though exceptional, is also noteworthy for its brevity. The first signs of downturn appeared at the age of 36 and her voice fell silent at only 40. Though the literature has massively commented on this premature worsening, few analyses of its characteristics have been made public so far. The purpose of our study was to realise a perceptual and acoustical analysis of recorded arias by the artist at the climax to the fall. The audible impairments were first verbally described, and then compared to acoustical observations based on spectrographic analyses and fundamental-frequency measurements. |
1606.07221 | Dennis C. Rapaport | D. C. Rapaport | Packaging stiff polymers in small containers: A molecular dynamics study | 4 pages, 4 figures (minor changes in revised version) | Phys. Rev. E 94, 030401 (2016) | 10.1103/PhysRevE.94.030401 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The question of how stiff polymers are able to pack into small containers is
particularly relevant to the study of DNA packaging in viruses. A reduced
version of the problem based on coarse-grained representations of the main
components of the system -- the DNA polymer and the spherical viral capsid --
has been studied by molecular dynamics simulation. The results, involving
longer polymers than in earlier work, show that as polymers become more rigid
there is an increasing tendency to self-organize as spools that wrap from the
inside out, rather than the inverse direction seen previously. In the final
state, a substantial part of the polymer is packed into one or more coaxial
spools, concentrically layered with different orientations, a form of packaging
achievable without twisting the polymer.
| [
{
"created": "Thu, 23 Jun 2016 08:25:43 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Sep 2016 14:01:49 GMT",
"version": "v2"
}
] | 2016-10-12 | [
[
"Rapaport",
"D. C.",
""
]
] | The question of how stiff polymers are able to pack into small containers is particularly relevant to the study of DNA packaging in viruses. A reduced version of the problem based on coarse-grained representations of the main components of the system -- the DNA polymer and the spherical viral capsid -- has been studied by molecular dynamics simulation. The results, involving longer polymers than in earlier work, show that as polymers become more rigid there is an increasing tendency to self-organize as spools that wrap from the inside out, rather than the inverse direction seen previously. In the final state, a substantial part of the polymer is packed into one or more coaxial spools, concentrically layered with different orientations, a form of packaging achievable without twisting the polymer. |
1307.5728 | Josef Ladenbauer | Josef Ladenbauer, Moritz Augustin and Klaus Obermayer | How adaptation currents change threshold, gain and variability of
neuronal spiking | 20 pages, 8 figures; Journal of Neurophysiology (in press) | null | 10.1152/jn.00586.2013 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many types of neurons exhibit spike rate adaptation, mediated by intrinsic
slow $\mathrm{K}^+$-currents, which effectively inhibit neuronal responses. How
these adaptation currents change the relationship between in-vivo like
fluctuating synaptic input, spike rate output and the spike train statistics,
however, is not well understood. In this computational study we show that an
adaptation current which primarily depends on the subthreshold membrane voltage
changes the neuronal input-output relationship (I-O curve) subtractively,
thereby increasing the response threshold. A spike-dependent adaptation current
alters the I-O curve divisively, thus reducing the response gain. Both types of
adaptation currents naturally increase the mean inter-spike interval (ISI), but
they can affect ISI variability in opposite ways. A subthreshold current always
causes an increase of variability while a spike-triggered current decreases
high variability caused by fluctuation-dominated inputs and increases low
variability when the average input is large. The effects on I-O curves match
those caused by synaptic inhibition in networks with asynchronous irregular
activity, for which we find subtractive and divisive changes caused by external
and recurrent inhibition, respectively. Synaptic inhibition, however, always
increases the ISI variability. We analytically derive expressions for the I-O
curve and ISI variability, which demonstrate the robustness of our results.
Furthermore, we show how the biophysical parameters of slow
$\mathrm{K}^+$-conductances contribute to the two different types of adaptation
currents and find that $\mathrm{Ca}^{2+}$-activated $\mathrm{K}^+$-currents are
effectively captured by a simple spike-dependent description, while
muscarine-sensitive or $\mathrm{Na}^+$-activated $\mathrm{K}^+$-currents show a
dominant subthreshold component.
| [
{
"created": "Mon, 22 Jul 2013 14:24:43 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Nov 2013 10:39:01 GMT",
"version": "v2"
}
] | 2013-11-08 | [
[
"Ladenbauer",
"Josef",
""
],
[
"Augustin",
"Moritz",
""
],
[
"Obermayer",
"Klaus",
""
]
] | Many types of neurons exhibit spike rate adaptation, mediated by intrinsic slow $\mathrm{K}^+$-currents, which effectively inhibit neuronal responses. How these adaptation currents change the relationship between in-vivo like fluctuating synaptic input, spike rate output and the spike train statistics, however, is not well understood. In this computational study we show that an adaptation current which primarily depends on the subthreshold membrane voltage changes the neuronal input-output relationship (I-O curve) subtractively, thereby increasing the response threshold. A spike-dependent adaptation current alters the I-O curve divisively, thus reducing the response gain. Both types of adaptation currents naturally increase the mean inter-spike interval (ISI), but they can affect ISI variability in opposite ways. A subthreshold current always causes an increase of variability while a spike-triggered current decreases high variability caused by fluctuation-dominated inputs and increases low variability when the average input is large. The effects on I-O curves match those caused by synaptic inhibition in networks with asynchronous irregular activity, for which we find subtractive and divisive changes caused by external and recurrent inhibition, respectively. Synaptic inhibition, however, always increases the ISI variability. We analytically derive expressions for the I-O curve and ISI variability, which demonstrate the robustness of our results. Furthermore, we show how the biophysical parameters of slow $\mathrm{K}^+$-conductances contribute to the two different types of adaptation currents and find that $\mathrm{Ca}^{2+}$-activated $\mathrm{K}^+$-currents are effectively captured by a simple spike-dependent description, while muscarine-sensitive or $\mathrm{Na}^+$-activated $\mathrm{K}^+$-currents show a dominant subthreshold component. |
2104.03406 | Nicholas Guttenberg | Nicholas Guttenberg | Evolutionary rates of information gain and decay in fluctuating
environments | 7 pages, 4 figures, ALife 2019 | null | null | null | q-bio.PE cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by/4.0/ | In this paper, we wish to investigate the dynamics of information transfer in
evolutionary dynamics. We use information theoretic tools to track how much
information an evolving population has obtained and managed to retain about
different environments that it is exposed to. By understanding the dynamics of
information gain and loss in a static environment, we predict how that same
evolutionary system would behave when the environment is fluctuating.
Specifically, we anticipate a cross-over between the regime in which
fluctuations improve the ability of the evolutionary system to capture
environmental information and the regime in which the fluctuations inhibit it,
governed by a cross-over in the timescales of information gain and decay.
| [
{
"created": "Wed, 7 Apr 2021 21:42:37 GMT",
"version": "v1"
}
] | 2021-04-09 | [
[
"Guttenberg",
"Nicholas",
""
]
] | In this paper, we wish to investigate the dynamics of information transfer in evolutionary dynamics. We use information theoretic tools to track how much information an evolving population has obtained and managed to retain about different environments that it is exposed to. By understanding the dynamics of information gain and loss in a static environment, we predict how that same evolutionary system would behave when the environment is fluctuating. Specifically, we anticipate a cross-over between the regime in which fluctuations improve the ability of the evolutionary system to capture environmental information and the regime in which the fluctuations inhibit it, governed by a cross-over in the timescales of information gain and decay. |
2007.15727 | David Mori\~na Prof. | David Mori\~na, Amanda Fern\'andez-Fontelo, Alejandra Caba\~na,
Argimiro Arratia, Gustavo \'Avalos and Pedro Puig | Cumulated burden of Covid-19 in Spain from a Bayesian perspective | null | null | 10.1093/eurpub/ckab118 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main goal of this work is to estimate the actual number of cases of
Covid-19 in Spain in the period 01-31-2020 / 06-01-2020 by Autonomous
Communities. Based on these estimates, this work allows us to accurately
re-estimate the lethality of the disease in Spain, taking into account
unreported cases. A hierarchical Bayesian model recently proposed in the
literature has been adapted to model the actual number of Covid-19 cases in
Spain. The results of this work show that the real load of Covid-19 in Spain in
the period considered is well above the data registered by the public health
system. Specifically, the model estimates show that, cumulatively until June
1st, 2020, there were 2,425,930 cases of Covid-19 in Spain with characteristics
similar to those reported (95\% credibility interval: 2,148,261 - 2,813,864),
from which were actually registered only 518,664. Considering the results
obtained from the second wave of the Spanish seroprevalence study, which
estimates 2,350,324 cases of Covid-19 produced in Spain, in the period of time
considered, it can be seen that the estimates provided by the model are quite
good. This work clearly shows the key importance of having good quality data to
optimize decision-making in the critical context of dealing with a pandemic.
| [
{
"created": "Thu, 30 Jul 2020 20:28:15 GMT",
"version": "v1"
}
] | 2021-08-18 | [
[
"Moriña",
"David",
""
],
[
"Fernández-Fontelo",
"Amanda",
""
],
[
"Cabaña",
"Alejandra",
""
],
[
"Arratia",
"Argimiro",
""
],
[
"Ávalos",
"Gustavo",
""
],
[
"Puig",
"Pedro",
""
]
] | The main goal of this work is to estimate the actual number of cases of Covid-19 in Spain in the period 01-31-2020 / 06-01-2020 by Autonomous Communities. Based on these estimates, this work allows us to accurately re-estimate the lethality of the disease in Spain, taking into account unreported cases. A hierarchical Bayesian model recently proposed in the literature has been adapted to model the actual number of Covid-19 cases in Spain. The results of this work show that the real load of Covid-19 in Spain in the period considered is well above the data registered by the public health system. Specifically, the model estimates show that, cumulatively until June 1st, 2020, there were 2,425,930 cases of Covid-19 in Spain with characteristics similar to those reported (95\% credibility interval: 2,148,261 - 2,813,864), from which were actually registered only 518,664. Considering the results obtained from the second wave of the Spanish seroprevalence study, which estimates 2,350,324 cases of Covid-19 produced in Spain, in the period of time considered, it can be seen that the estimates provided by the model are quite good. This work clearly shows the key importance of having good quality data to optimize decision-making in the critical context of dealing with a pandemic. |
1507.06614 | Haralambos Hatzikirou | A. I. Reppas, J. C. L. Alfonso and H. Hatzikirou | In silico tumor control induced via alternating immunostimulating and
immunosuppressive phases | null | null | null | null | q-bio.TO q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent advances in the field of Oncoimmunology, the success potential
of immunomodulatory therapies against cancer remains to be elucidated. One of
the reasons is the lack of understanding on the complex interplay between tumor
growth dynamics and the associated immune system responses. Towards this goal,
we consider a mathematical model of vascularized tumor growth and the
corresponding effector cell recruitment dynamics. Bifurcation analysis allows
for the exploration of model's dynamic behavior and the determination of these
parameter regimes that result in immune-mediated tumor control. Here, we focus
on a particular tumor evasion regime that involves tumor and effector cell
concentration oscillations of slowly increasing and decreasing amplitude,
respectively. Considering a temporal multiscale analysis, we derive an
analytically tractable mapping of model solutions onto a weakly negatively
damped harmonic oscillator. Based on our analysis, we propose a theory-driven
intervention strategy involving immunostimulating and immunosuppressive phases
to induce long-term tumor control.
| [
{
"created": "Wed, 22 Jul 2015 11:30:28 GMT",
"version": "v1"
}
] | 2015-07-24 | [
[
"Reppas",
"A. I.",
""
],
[
"Alfonso",
"J. C. L.",
""
],
[
"Hatzikirou",
"H.",
""
]
] | Despite recent advances in the field of Oncoimmunology, the success potential of immunomodulatory therapies against cancer remains to be elucidated. One of the reasons is the lack of understanding on the complex interplay between tumor growth dynamics and the associated immune system responses. Towards this goal, we consider a mathematical model of vascularized tumor growth and the corresponding effector cell recruitment dynamics. Bifurcation analysis allows for the exploration of model's dynamic behavior and the determination of these parameter regimes that result in immune-mediated tumor control. Here, we focus on a particular tumor evasion regime that involves tumor and effector cell concentration oscillations of slowly increasing and decreasing amplitude, respectively. Considering a temporal multiscale analysis, we derive an analytically tractable mapping of model solutions onto a weakly negatively damped harmonic oscillator. Based on our analysis, we propose a theory-driven intervention strategy involving immunostimulating and immunosuppressive phases to induce long-term tumor control. |
2106.12297 | Victor Popescu | Nicoleta Siminea, Victor Popescu, Jose Angel Sanchez Martin, Daniela
Florea, Georgiana Gavril, Ana-Maria Gheorghe, Corina Itcus, Krishna Kanhaiya,
Octavian Pacioglu, Laura Ioana Popa, Romica Trandafir, Maria Iris Tusa,
Manuela Sidoroff, Mihaela Paun, Eugen Czeizler, Andrei Paun, Ion Petre | Network analytics for drug repurposing in COVID-19 | 21 pages, 3 tables, 9 figures, supplementary information included at
the end, 4 files as supplementary material | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | To better understand the potential of drug repurposing in COVID-19, we
analyzed control strategies over essential host factors for SARS-CoV-2
infection. We constructed comprehensive directed protein-protein interaction
networks integrating the top ranked host factors, drug target proteins, and
directed protein-protein interaction data. We analyzed the networks to identify
drug targets and combinations thereof that offer efficient control over the
host factors. We validated our findings against clinical studies data and
bioinformatics studies. Our method offers a new insight into the molecular
details of the disease and into potentially new therapy targets for it. Our
approach for drug repurposing is significant beyond COVID-19 and may be applied
also to other diseases.
| [
{
"created": "Wed, 23 Jun 2021 10:32:15 GMT",
"version": "v1"
}
] | 2021-06-24 | [
[
"Siminea",
"Nicoleta",
""
],
[
"Popescu",
"Victor",
""
],
[
"Martin",
"Jose Angel Sanchez",
""
],
[
"Florea",
"Daniela",
""
],
[
"Gavril",
"Georgiana",
""
],
[
"Gheorghe",
"Ana-Maria",
""
],
[
"Itcus",
"Corina",
""
],
[
"Kanhaiya",
"Krishna",
""
],
[
"Pacioglu",
"Octavian",
""
],
[
"Popa",
"Laura Ioana",
""
],
[
"Trandafir",
"Romica",
""
],
[
"Tusa",
"Maria Iris",
""
],
[
"Sidoroff",
"Manuela",
""
],
[
"Paun",
"Mihaela",
""
],
[
"Czeizler",
"Eugen",
""
],
[
"Paun",
"Andrei",
""
],
[
"Petre",
"Ion",
""
]
] | To better understand the potential of drug repurposing in COVID-19, we analyzed control strategies over essential host factors for SARS-CoV-2 infection. We constructed comprehensive directed protein-protein interaction networks integrating the top ranked host factors, drug target proteins, and directed protein-protein interaction data. We analyzed the networks to identify drug targets and combinations thereof that offer efficient control over the host factors. We validated our findings against clinical studies data and bioinformatics studies. Our method offers a new insight into the molecular details of the disease and into potentially new therapy targets for it. Our approach for drug repurposing is significant beyond COVID-19 and may be applied also to other diseases. |
2102.03438 | Ekkehard Ullner | Afifurrahman and Ekkehard Ullner and Antonio Politi | Collective dynamics in the presence of finite-width pulses | 12 pages, 12 figures | null | 10.1063/5.0046691 | null | q-bio.NC math.DS nlin.AO | http://creativecommons.org/licenses/by/4.0/ | The idealisation of neuronal pulses as $\delta$-spikes is a convenient
approach in neuroscience but can sometimes lead to erroneous conclusions. We
investigate the effect of a finite pulse-width on the dynamics of balanced
neuronal networks. In particular, we study two populations of identical
excitatory and inhibitory neurons in a random network of phase oscillators
coupled through exponential pulses with different widths. We consider three
coupling functions, inspired by leaky integrate-and-fire neurons with delay and
type-I phase-response curves. By exploring the role of the pulse-widths for
different coupling strengths we find a robust collective irregular dynamics,
which collapses onto a fully synchronous regime if the inhibitory pulses are
sufficiently wider than the excitatory ones. The transition to synchrony is
accompanied by hysteretic phenomena (i.e. the co-existence of collective
irregular and synchronous dynamics). Our numerical results are supported by a
detailed scaling and stability analysis of the fully synchronous solution. A
conjectured first-order phase transition emerging for $\delta$-spikes is
smoothed out for finite-width pulses.
| [
{
"created": "Fri, 5 Feb 2021 22:24:43 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Apr 2021 16:29:30 GMT",
"version": "v2"
}
] | 2024-06-19 | [
[
"Afifurrahman",
"",
""
],
[
"Ullner",
"Ekkehard",
""
],
[
"Politi",
"Antonio",
""
]
] | The idealisation of neuronal pulses as $\delta$-spikes is a convenient approach in neuroscience but can sometimes lead to erroneous conclusions. We investigate the effect of a finite pulse-width on the dynamics of balanced neuronal networks. In particular, we study two populations of identical excitatory and inhibitory neurons in a random network of phase oscillators coupled through exponential pulses with different widths. We consider three coupling functions, inspired by leaky integrate-and-fire neurons with delay and type-I phase-response curves. By exploring the role of the pulse-widths for different coupling strengths we find a robust collective irregular dynamics, which collapses onto a fully synchronous regime if the inhibitory pulses are sufficiently wider than the excitatory ones. The transition to synchrony is accompanied by hysteretic phenomena (i.e. the co-existence of collective irregular and synchronous dynamics). Our numerical results are supported by a detailed scaling and stability analysis of the fully synchronous solution. A conjectured first-order phase transition emerging for $\delta$-spikes is smoothed out for finite-width pulses. |
1110.0235 | Pablo Cordero | Pablo Cordero, Julius Lucks, Rhiju Das | The Stanford RNA Mapping Database for sharing and visualizing RNA
structure mapping experiments | 20 pages, 2 figures | null | null | null | q-bio.BM cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have established an RNA Mapping Database (RMDB) to enable a new generation
of structural, thermodynamic, and kinetic studies from quantitative
single-nucleotide-resolution RNA structure mapping (freely available at
http://rmdb.stanford.edu). Chemical and enzymatic mapping is a rapid, robust,
and widespread approach to RNA characterization. Since its recent coupling with
high-throughput sequencing techniques, accelerated software pipelines, and
large-scale mutagenesis, the volume of mapping data has greatly increased, and
there is a critical need for a database to enable sharing, visualization, and
meta-analyses of these data. Through its on-line front-end, the RMDB allows
users to explore single-nucleotide-resolution chemical accessibility data in
heat-map, bar-graph, and colored secondary structure graphics; to leverage
these data to generate secondary structure hypotheses; and to download the data
in standardized and computer-friendly files, including the RDAT and
community-consensus SNRNASM formats. At the time of writing, the database
houses 38 entries, describing 2659 RNA sequences and comprising 355,084 data
points, and is growing rapidly.
| [
{
"created": "Sun, 2 Oct 2011 20:56:47 GMT",
"version": "v1"
}
] | 2011-10-04 | [
[
"Cordero",
"Pablo",
""
],
[
"Lucks",
"Julius",
""
],
[
"Das",
"Rhiju",
""
]
] | We have established an RNA Mapping Database (RMDB) to enable a new generation of structural, thermodynamic, and kinetic studies from quantitative single-nucleotide-resolution RNA structure mapping (freely available at http://rmdb.stanford.edu). Chemical and enzymatic mapping is a rapid, robust, and widespread approach to RNA characterization. Since its recent coupling with high-throughput sequencing techniques, accelerated software pipelines, and large-scale mutagenesis, the volume of mapping data has greatly increased, and there is a critical need for a database to enable sharing, visualization, and meta-analyses of these data. Through its on-line front-end, the RMDB allows users to explore single-nucleotide-resolution chemical accessibility data in heat-map, bar-graph, and colored secondary structure graphics; to leverage these data to generate secondary structure hypotheses; and to download the data in standardized and computer-friendly files, including the RDAT and community-consensus SNRNASM formats. At the time of writing, the database houses 38 entries, describing 2659 RNA sequences and comprising 355,084 data points, and is growing rapidly. |
2405.16357 | Tingting Dan | Tingting Dan and Ziquan Wei and Won Hwa Kim and Guorong Wu | Exploring the Enigma of Neural Dynamics Through A Scattering-Transform
Mixer Landscape for Riemannian Manifold | 15 pages, 6 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The human brain is a complex inter-wired system that emerges spontaneous
functional fluctuations. In spite of tremendous success in the experimental
neuroscience field, a system-level understanding of how brain anatomy supports
various neural activities remains elusive. Capitalizing on the unprecedented
amount of neuroimaging data, we present a physics-informed deep model to
uncover the coupling mechanism between brain structure and function through the
lens of data geometry that is rooted in the widespread wiring topology of
connections between distant brain regions. Since deciphering the puzzle of
self-organized patterns in functional fluctuations is the gateway to
understanding the emergence of cognition and behavior, we devise a geometric
deep model to uncover manifold mapping functions that characterize the
intrinsic feature representations of evolving functional fluctuations on the
Riemannian manifold. In lieu of learning unconstrained mapping functions, we
introduce a set of graph-harmonic scattering transforms to impose the
brain-wide geometry on top of manifold mapping functions, which allows us to
cast the manifold-based deep learning into a reminiscent of MLP-Mixer
architecture (in computer vision) for Riemannian manifold. As a
proof-of-concept approach, we explore a neural-manifold perspective to
understand the relationship between (static) brain structure and (dynamic)
function, challenging the prevailing notion in cognitive neuroscience by
proposing that neural activities are essentially excited by brain-wide
oscillation waves living on the geometry of human connectomes, instead of being
confined to focal areas.
| [
{
"created": "Sat, 25 May 2024 21:35:50 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Dan",
"Tingting",
""
],
[
"Wei",
"Ziquan",
""
],
[
"Kim",
"Won Hwa",
""
],
[
"Wu",
"Guorong",
""
]
] | The human brain is a complex inter-wired system that emerges spontaneous functional fluctuations. In spite of tremendous success in the experimental neuroscience field, a system-level understanding of how brain anatomy supports various neural activities remains elusive. Capitalizing on the unprecedented amount of neuroimaging data, we present a physics-informed deep model to uncover the coupling mechanism between brain structure and function through the lens of data geometry that is rooted in the widespread wiring topology of connections between distant brain regions. Since deciphering the puzzle of self-organized patterns in functional fluctuations is the gateway to understanding the emergence of cognition and behavior, we devise a geometric deep model to uncover manifold mapping functions that characterize the intrinsic feature representations of evolving functional fluctuations on the Riemannian manifold. In lieu of learning unconstrained mapping functions, we introduce a set of graph-harmonic scattering transforms to impose the brain-wide geometry on top of manifold mapping functions, which allows us to cast the manifold-based deep learning into a reminiscent of MLP-Mixer architecture (in computer vision) for Riemannian manifold. As a proof-of-concept approach, we explore a neural-manifold perspective to understand the relationship between (static) brain structure and (dynamic) function, challenging the prevailing notion in cognitive neuroscience by proposing that neural activities are essentially excited by brain-wide oscillation waves living on the geometry of human connectomes, instead of being confined to focal areas. |
2008.01692 | Almaz Tesfay | Almaz Tesfay, Daniel Tesfay, James Brannan, Jinqiao Duan | A Logistic-Harvest Model with Allee Effect under Multiplicative Noise | 18 pages, 14 figures | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work is devoted to the study of a stochastic logistic growth model with
and without the Allee effect. Such a model describes the evolution of a
population under environmental stochastic fluctuations and is in the form of a
stochastic differential equation driven by multiplicative Gaussian noise. With
the help of the associated Fokker-Planck equation, we analyze the population
extinction probability and the probability of reaching a large population size
before reaching a small one. We further study the impact of the harvest rate,
noise intensity, and the Allee effect on population evolution. The analysis and
numerical experiments show that if the noise intensity and harvest rate are
small, the population grows exponentially, and upon reaching the carrying
capacity, the population size fluctuates around it. In the stochastic
logistic-harvest model without the Allee effect, when noise intensity becomes
small (or goes to zero), the stationary probability density becomes more acute
and its maximum point approaches one. However, for large noise intensity and
harvest rate, the population size fluctuates wildly and does not grow
exponentially to the carrying capacity. So as far as biological meanings are
concerned, we must catch at small values of noise intensity and harvest rate.
Finally, we discuss the biological implications of our results.
| [
{
"created": "Tue, 4 Aug 2020 16:50:20 GMT",
"version": "v1"
}
] | 2020-08-05 | [
[
"Tesfay",
"Almaz",
""
],
[
"Tesfay",
"Daniel",
""
],
[
"Brannan",
"James",
""
],
[
"Duan",
"Jinqiao",
""
]
] | This work is devoted to the study of a stochastic logistic growth model with and without the Allee effect. Such a model describes the evolution of a population under environmental stochastic fluctuations and is in the form of a stochastic differential equation driven by multiplicative Gaussian noise. With the help of the associated Fokker-Planck equation, we analyze the population extinction probability and the probability of reaching a large population size before reaching a small one. We further study the impact of the harvest rate, noise intensity, and the Allee effect on population evolution. The analysis and numerical experiments show that if the noise intensity and harvest rate are small, the population grows exponentially, and upon reaching the carrying capacity, the population size fluctuates around it. In the stochastic logistic-harvest model without the Allee effect, when noise intensity becomes small (or goes to zero), the stationary probability density becomes more acute and its maximum point approaches one. However, for large noise intensity and harvest rate, the population size fluctuates wildly and does not grow exponentially to the carrying capacity. So as far as biological meanings are concerned, we must catch at small values of noise intensity and harvest rate. Finally, we discuss the biological implications of our results. |
1409.7208 | Ruibang Luo | Dinghua Li, Chi-Man Liu, Ruibang Luo, Kunihiko Sadakane and Tak-Wah
Lam | MEGAHIT: An ultra-fast single-node solution for large and complex
metagenomics assembly via succinct de Bruijn graph | 2 pages, 2 tables, 1 figure, submitted to Oxford Bioinformatics as an
Application Note | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MEGAHIT is a NGS de novo assembler for assembling large and complex
metagenomics data in a time- and cost-efficient manner. It finished assembling
a soil metagenomics dataset with 252Gbps in 44.1 hours and 99.6 hours on a
single computing node with and without a GPU, respectively. MEGAHIT assembles
the data as a whole, i.e., it avoids pre-processing like partitioning and
normalization, which might compromise on result integrity. MEGAHIT generates 3
times larger assembly, with longer contig N50 and average contig length than
the previous assembly. 55.8% of the reads were aligned to the assembly, which
is 4 times higher than the previous. The source code of MEGAHIT is freely
available at https://github.com/voutcn/megahit under GPLv3 license.
| [
{
"created": "Thu, 25 Sep 2014 10:49:30 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Dec 2014 13:10:03 GMT",
"version": "v2"
}
] | 2014-12-24 | [
[
"Li",
"Dinghua",
""
],
[
"Liu",
"Chi-Man",
""
],
[
"Luo",
"Ruibang",
""
],
[
"Sadakane",
"Kunihiko",
""
],
[
"Lam",
"Tak-Wah",
""
]
] | MEGAHIT is a NGS de novo assembler for assembling large and complex metagenomics data in a time- and cost-efficient manner. It finished assembling a soil metagenomics dataset with 252Gbps in 44.1 hours and 99.6 hours on a single computing node with and without a GPU, respectively. MEGAHIT assembles the data as a whole, i.e., it avoids pre-processing like partitioning and normalization, which might compromise on result integrity. MEGAHIT generates 3 times larger assembly, with longer contig N50 and average contig length than the previous assembly. 55.8% of the reads were aligned to the assembly, which is 4 times higher than the previous. The source code of MEGAHIT is freely available at https://github.com/voutcn/megahit under GPLv3 license. |
1304.5952 | Eduardo Eyras | Gael P. Alamancos, Eneritz Agirre, Eduardo Eyras | Methods to study splicing from high-throughput RNA Sequencing data | 31 pages, 1 figure, 9 tables. Small corrections added | Methods Mol Biol. 2014;1126:357-97 | 10.1007/978-1-62703-980-2_26 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of novel high-throughput sequencing (HTS) methods for RNA
(RNA-Seq) has provided a very powerful mean to study splicing under multiple
conditions at unprecedented depth. However, the complexity of the information
to be analyzed has turned this into a challenging task. In the last few years,
a plethora of tools have been developed, allowing researchers to process
RNA-Seq data to study the expression of isoforms and splicing events, and their
relative changes under different conditions. We provide an overview of the
methods available to study splicing from short RNA-Seq data. We group the
methods according to the different questions they address: 1) Assignment of the
sequencing reads to their likely gene of origin. This is addressed by methods
that map reads to the genome and/or to the available gene annotations. 2)
Recovering the sequence of splicing events and isoforms. This is addressed by
transcript reconstruction and de novo assembly methods. 3) Quantification of
events and isoforms. Either after reconstructing transcripts or using an
annotation, many methods estimate the expression level or the relative usage of
isoforms and/or events. 4) Providing an isoform or event view of differential
splicing or expression. These include methods that compare relative
event/isoform abundance or isoform expression across two or more conditions. 5)
Visualizing splicing regulation. Various tools facilitate the visualization of
the RNA-Seq data in the context of alternative splicing. In this review, we do
not describe the specific mathematical models behind each method. Our aim is
rather to provide an overview that could serve as an entry point for users who
need to decide on a suitable tool for a specific analysis. We also attempt to
propose a classification of the tools according to the operations they do, to
facilitate the comparison and choice of methods.
| [
{
"created": "Mon, 22 Apr 2013 13:58:54 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Feb 2014 18:03:30 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Jul 2015 23:15:02 GMT",
"version": "v3"
}
] | 2015-08-03 | [
[
"Alamancos",
"Gael P.",
""
],
[
"Agirre",
"Eneritz",
""
],
[
"Eyras",
"Eduardo",
""
]
] | The development of novel high-throughput sequencing (HTS) methods for RNA (RNA-Seq) has provided a very powerful mean to study splicing under multiple conditions at unprecedented depth. However, the complexity of the information to be analyzed has turned this into a challenging task. In the last few years, a plethora of tools have been developed, allowing researchers to process RNA-Seq data to study the expression of isoforms and splicing events, and their relative changes under different conditions. We provide an overview of the methods available to study splicing from short RNA-Seq data. We group the methods according to the different questions they address: 1) Assignment of the sequencing reads to their likely gene of origin. This is addressed by methods that map reads to the genome and/or to the available gene annotations. 2) Recovering the sequence of splicing events and isoforms. This is addressed by transcript reconstruction and de novo assembly methods. 3) Quantification of events and isoforms. Either after reconstructing transcripts or using an annotation, many methods estimate the expression level or the relative usage of isoforms and/or events. 4) Providing an isoform or event view of differential splicing or expression. These include methods that compare relative event/isoform abundance or isoform expression across two or more conditions. 5) Visualizing splicing regulation. Various tools facilitate the visualization of the RNA-Seq data in the context of alternative splicing. In this review, we do not describe the specific mathematical models behind each method. Our aim is rather to provide an overview that could serve as an entry point for users who need to decide on a suitable tool for a specific analysis. We also attempt to propose a classification of the tools according to the operations they do, to facilitate the comparison and choice of methods. |
1508.02085 | Toan T. Nguyen | Toan T. Nguyen | Grand-canonical simulation of DNA condensation with two salts, affect of
divalent counterion size | Final revision, published online at J. Chem. Phys. arXiv admin note:
text overlap with arXiv:0912.3595 | J. Chem. Phys., 144 (2016) 065102 | 10.1063/1.4940312 | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of DNA$-$DNA interaction mediated by divalent counterions is
studied using a generalized Grand-canonical Monte-Carlo simulation for a system
of two salts. The effect of the divalent counterion size on the condensation
behavior of the DNA bundle is investigated. Experimentally, it is known that
multivalent counterions have strong effect on the DNA condensation phenomenon.
While tri- and tetra-valent counterions are shown to easily condense free DNA
molecules in solution into toroidal bundles, the situation with divalent
counterions are not as clear cut. Some divalent counterions like Mg$^{+2}$ are
not able to condense free DNA molecules in solution, while some like Mn$^{+2}$
can condense them into disorder bundles. In restricted environment such as in
two dimensional system or inside viral capsid, Mg$^{+2}$ can have strong effect
and able to condense them, but the condensation varies qualitatively with
different system, different coions. It has been suggested that divalent
counterions can induce attraction between DNA molecules but the strength of the
attraction is not strong enough to condense free DNA in solution. However, if
the configuration entropy of DNA is restricted, these attractions are enough to
cause appreciable effects. The variations among different divalent salts might
be due to the hydration effect of the divalent counterions. In this paper, we
try to understand this variation using a very simple parameter, the size of the
divalent counterions. We investigate how divalent counterions with different
sizes can leads to varying qualitative behavior of DNA condensation in
restricted environments. Additionally a Grand canonical Monte-Carlo method for
simulation of systems with two different salts is presented in detail.
| [
{
"created": "Sun, 9 Aug 2015 21:00:45 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Aug 2015 04:43:44 GMT",
"version": "v2"
},
{
"created": "Sat, 13 Feb 2016 03:41:40 GMT",
"version": "v3"
}
] | 2016-02-19 | [
[
"Nguyen",
"Toan T.",
""
]
] | The problem of DNA$-$DNA interaction mediated by divalent counterions is studied using a generalized Grand-canonical Monte-Carlo simulation for a system of two salts. The effect of the divalent counterion size on the condensation behavior of the DNA bundle is investigated. Experimentally, it is known that multivalent counterions have a strong effect on the DNA condensation phenomenon. While tri- and tetravalent counterions are shown to easily condense free DNA molecules in solution into toroidal bundles, the situation with divalent counterions is not as clear-cut. Some divalent counterions, like Mg$^{+2}$, are not able to condense free DNA molecules in solution, while some, like Mn$^{+2}$, can condense them into disordered bundles. In restricted environments, such as in two-dimensional systems or inside viral capsids, Mg$^{+2}$ can have a strong effect and is able to condense them, but the condensation varies qualitatively with different systems and different coions. It has been suggested that divalent counterions can induce attraction between DNA molecules, but the strength of the attraction is not strong enough to condense free DNA in solution. However, if the configuration entropy of DNA is restricted, these attractions are enough to cause appreciable effects. The variations among different divalent salts might be due to the hydration effect of the divalent counterions. In this paper, we try to understand this variation using a very simple parameter, the size of the divalent counterions. We investigate how divalent counterions with different sizes can lead to varying qualitative behavior of DNA condensation in restricted environments. Additionally, a Grand canonical Monte-Carlo method for simulation of systems with two different salts is presented in detail. |
1408.5007 | Krzysztof Bartoszek | Krzysztof Bartoszek and Serik Sagitov | A Consistent Estimator of the Evolutionary Rate | null | Journal of Theoretical Biology 371:69-78, 2015 | 10.1016/j.jtbi.2015.01.019 | null | q-bio.PE math.PR q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a branching particle system where particles reproduce according
to the pure birth Yule process with the birth rate $L$, conditioned on the
observed number of particles to be equal to $n$. Particles are assumed to move
independently on the real line according to the Brownian motion with the local
variance $s^2$. In this paper we treat $n$ particles as a sample of related
species. The spatial Brownian motion of a particle describes the development of
a trait value of interest (e.g. log-body-size). We propose an unbiased
estimator $R_n^2$ of the evolutionary rate $r^2=s^2/L$. The estimator $R_n^2$
is proportional to the sample variance $S_n^2$ computed from $n$ trait values.
We find an approximate formula for the standard error of $R_n^2$ based on a
neat asymptotic relation for the variance of $S_n^2$.
| [
{
"created": "Thu, 21 Aug 2014 14:08:47 GMT",
"version": "v1"
}
] | 2020-11-23 | [
[
"Bartoszek",
"Krzysztof",
""
],
[
"Sagitov",
"Serik",
""
]
] | We consider a branching particle system where particles reproduce according to the pure birth Yule process with the birth rate $L$, conditioned on the observed number of particles to be equal to $n$. Particles are assumed to move independently on the real line according to the Brownian motion with the local variance $s^2$. In this paper we treat $n$ particles as a sample of related species. The spatial Brownian motion of a particle describes the development of a trait value of interest (e.g. log-body-size). We propose an unbiased estimator $R_n^2$ of the evolutionary rate $r^2=s^2/L$. The estimator $R_n^2$ is proportional to the sample variance $S_n^2$ computed from $n$ trait values. We find an approximate formula for the standard error of $R_n^2$ based on a neat asymptotic relation for the variance of $S_n^2$. |
1810.03687 | Xiao-Jun Tian | Xiao-Jun Tian, Dong Zhou, Haiyan Fu, Rong Zhang, Xiaojie Wang, Sui
Huang, Youhua Liu, Jianhua Xing | Sequential Wnt Agonist then Antagonist Treatment Accelerates Tissue
Repair and Minimizes Fibrosis | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tissue fibrosis compromises organ function and occurs as a potential
long-term outcome in response to acute tissue injuries. Currently, a lack of
mechanistic understanding prevents effective prevention and treatment of the
progression from acute injury to fibrosis. Here, we combined quantitative
experimental studies with a mouse kidney injury model and a computational
approach to determine how the physiological consequences depend on the
severity of ischemic injury, and to identify how to manipulate Wnt signaling to
accelerate repair of ischemic tissue damage while minimizing fibrosis. The
study reveals that Wnt-mediated memory of prior injury contributes to fibrosis
progression, and ischemic preconditioning reduces the risk of death but
increases the risk of fibrosis. Furthermore, we validated the prediction that
sequential combination therapy of initial treatment with a Wnt agonist followed
by treatment with a Wnt antagonist can reduce both the risk of death and
fibrosis in response to acute injuries.
| [
{
"created": "Mon, 8 Oct 2018 20:19:43 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Mar 2019 15:06:07 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Jul 2019 16:23:48 GMT",
"version": "v3"
}
] | 2019-07-05 | [
[
"Tian",
"Xiao-Jun",
""
],
[
"Zhou",
"Dong",
""
],
[
"Fu",
"Haiyan",
""
],
[
"Zhang",
"Rong",
""
],
[
"Wang",
"Xiaojie",
""
],
[
"Huang",
"Sui",
""
],
[
"Liu",
"Youhua",
""
],
[
"Xing",
"Jianhua",
""
]
] | Tissue fibrosis compromises organ function and occurs as a potential long-term outcome in response to acute tissue injuries. Currently, a lack of mechanistic understanding prevents effective prevention and treatment of the progression from acute injury to fibrosis. Here, we combined quantitative experimental studies with a mouse kidney injury model and a computational approach to determine how the physiological consequences depend on the severity of ischemic injury, and to identify how to manipulate Wnt signaling to accelerate repair of ischemic tissue damage while minimizing fibrosis. The study reveals that Wnt-mediated memory of prior injury contributes to fibrosis progression, and ischemic preconditioning reduces the risk of death but increases the risk of fibrosis. Furthermore, we validated the prediction that sequential combination therapy of initial treatment with a Wnt agonist followed by treatment with a Wnt antagonist can reduce both the risk of death and fibrosis in response to acute injuries. |
1504.00120 | Andrew Teschendorff | Andrew E. Teschendorff and Christopher R. S. Banerji and Simone
Severini and Reimer Kuehn and Peter Sollich | Increased signaling entropy in cancer requires the scale-free property
of protein interaction networks | 20 pages, 5 figures. In Press in Sci Rep 2015 | Scientific Reports (2015) 5, 9646 | 10.1038/srep09646 | null | q-bio.MN q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | One of the key characteristics of cancer cells is an increased phenotypic
plasticity, driven by underlying genetic and epigenetic perturbations. However,
at a systems level it is unclear how these perturbations give rise to the
observed increased plasticity. Elucidating such systems-level principles is key
for an improved understanding of cancer. Recently, it has been shown that
signaling entropy, an overall measure of signaling pathway promiscuity, and
computable from integrating a sample's gene expression profile with a protein
interaction network, correlates with phenotypic plasticity and is increased in
cancer compared to normal tissue. Here we develop a computational framework for
studying the effects of network perturbations on signaling entropy. We
demonstrate that the increased signaling entropy of cancer is driven by two
factors: (i) the scale-free (or near scale-free) topology of the interaction
network, and (ii) a subtle positive correlation between differential gene
expression and node connectivity. Indeed, we show that if protein interaction
networks were random graphs described by Poisson degree distributions,
cancer would generally not exhibit an increased signaling entropy. In summary,
this work exposes a deep connection between cancer, signaling entropy and
interaction network topology.
| [
{
"created": "Wed, 1 Apr 2015 06:50:41 GMT",
"version": "v1"
}
] | 2015-04-30 | [
[
"Teschendorff",
"Andrew E.",
""
],
[
"Banerji",
"Christopher R. S.",
""
],
[
"Severini",
"Simone",
""
],
[
"Kuehn",
"Reimer",
""
],
[
"Sollich",
"Peter",
""
]
] | One of the key characteristics of cancer cells is an increased phenotypic plasticity, driven by underlying genetic and epigenetic perturbations. However, at a systems level it is unclear how these perturbations give rise to the observed increased plasticity. Elucidating such systems-level principles is key for an improved understanding of cancer. Recently, it has been shown that signaling entropy, an overall measure of signaling pathway promiscuity, and computable from integrating a sample's gene expression profile with a protein interaction network, correlates with phenotypic plasticity and is increased in cancer compared to normal tissue. Here we develop a computational framework for studying the effects of network perturbations on signaling entropy. We demonstrate that the increased signaling entropy of cancer is driven by two factors: (i) the scale-free (or near scale-free) topology of the interaction network, and (ii) a subtle positive correlation between differential gene expression and node connectivity. Indeed, we show that if protein interaction networks were random graphs described by Poisson degree distributions, cancer would generally not exhibit an increased signaling entropy. In summary, this work exposes a deep connection between cancer, signaling entropy and interaction network topology. |
2011.08081 | Morteza Nattagh Najafi | M. Rahimi-Majd, M. A. Seifi, L. de Arcangelis, M. N. Najafi | On the role of anaxonic local neurons in the crossover to continuously
varying exponents for avalanche activity | null | Phys. Rev. E 103, 042402 (2021) | 10.1103/PhysRevE.103.042402 | null | q-bio.NC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Local anaxonic neurons with graded potential release are important
ingredients of nervous systems, present in the olfactory bulb system of
mammals, in the human visual system, as well as in arthropods and nematodes.
We develop a neuronal network model including both axonic and anaxonic neurons
and monitor the activity tuned by the following parameters: The decay length of
the graded potential in local neurons, the fraction of local neurons, the
largest eigenvalue of the adjacency matrix and the range of connections of the
local neurons. Tuning the fraction of local neurons, we derive the phase
diagram including two transition lines: A critical line separating subcritical
and supercritical regions, characterized by power law distributions of
avalanche sizes and durations, and a bifurcation line. We find that the overall
behavior of the system is controlled by a parameter tuning the relevance of
local neuron transmission with respect to the axonal one. The statistical
properties of spontaneous activity are affected by local neurons at large
fractions and in the condition that the graded potential transmission dominates
the axonal one. In this case the scaling properties of spontaneous activity
exhibit continuously varying exponents, rather than the mean field branching
model universality class.
| [
{
"created": "Mon, 16 Nov 2020 16:27:29 GMT",
"version": "v1"
}
] | 2021-04-14 | [
[
"Rahimi-Majd",
"M.",
""
],
[
"Seifi",
"M. A.",
""
],
[
"de Arcangelis",
"L.",
""
],
[
"Najafi",
"M. N.",
""
]
] | Local anaxonic neurons with graded potential release are important ingredients of nervous systems, present in the olfactory bulb system of mammals, in the human visual system, as well as in arthropods and nematodes. We develop a neuronal network model including both axonic and anaxonic neurons and monitor the activity tuned by the following parameters: The decay length of the graded potential in local neurons, the fraction of local neurons, the largest eigenvalue of the adjacency matrix and the range of connections of the local neurons. Tuning the fraction of local neurons, we derive the phase diagram including two transition lines: A critical line separating subcritical and supercritical regions, characterized by power law distributions of avalanche sizes and durations, and a bifurcation line. We find that the overall behavior of the system is controlled by a parameter tuning the relevance of local neuron transmission with respect to the axonal one. The statistical properties of spontaneous activity are affected by local neurons at large fractions and in the condition that the graded potential transmission dominates the axonal one. In this case the scaling properties of spontaneous activity exhibit continuously varying exponents, rather than the mean field branching model universality class. |
2211.08673 | Thomas Harris | Thomas Harris, Nicholas Geard, Cameron Zachreson | Correlation of viral loads in disease transmission chains could bias
early estimates of the reproduction number | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Early estimates of the transmission properties of a newly emerged pathogen
are critical to an effective public health response, and are often based on
limited outbreak data. Here, we use simulations to investigate a potential
source of bias in such estimates, arising from correlations between the viral
load of cases in transmission chains. We show that this mechanism can affect
estimates of fundamental transmission properties characterising the spread of a
virus. Our computational model simulates a disease transmission mechanism in
which the viral load of the infector at the time of transmission influences the
infectiousness of the infectee. These correlations in transmission pairs
produce a population-level decoherence process during which the distributions
of initial viral loads in each subsequent generation converge to a steady
state. We find that outbreaks arising from index cases with low initial viral
loads give rise to early estimates of transmission properties that are subject
to large biases. These findings demonstrate the potential for bias arising from
transmission mechanics to affect estimates of the transmission properties of
newly emerged viruses.
| [
{
"created": "Wed, 16 Nov 2022 04:59:56 GMT",
"version": "v1"
}
] | 2022-11-17 | [
[
"Harris",
"Thomas",
""
],
[
"Geard",
"Nicholas",
""
],
[
"Zachreson",
"Cameron",
""
]
] | Early estimates of the transmission properties of a newly emerged pathogen are critical to an effective public health response, and are often based on limited outbreak data. Here, we use simulations to investigate a potential source of bias in such estimates, arising from correlations between the viral load of cases in transmission chains. We show that this mechanism can affect estimates of fundamental transmission properties characterising the spread of a virus. Our computational model simulates a disease transmission mechanism in which the viral load of the infector at the time of transmission influences the infectiousness of the infectee. These correlations in transmission pairs produce a population-level decoherence process during which the distributions of initial viral loads in each subsequent generation converge to a steady state. We find that outbreaks arising from index cases with low initial viral loads give rise to early estimates of transmission properties that are subject to large biases. These findings demonstrate the potential for bias arising from transmission mechanics to affect estimates of the transmission properties of newly emerged viruses. |
2210.09574 | Shuqiang Huang | Shuqiang Huang, Cuiyu Tan, Jinzhen Zheng, Zhugu Huang, Zhihong Li,
Ziyin Lv, Wanru Chen | Integrative Pan-Cancer Analysis of RNMT: a Potential Prognostic and
Immunological Biomarker | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: RNA guanine-7 methyltransferase (RNMT) is one of the main
regulators of N7-methylguanosine, and the deregulation of RNMT correlates with
tumor development and immune metabolism. However, the specific function of RNMT
in pan-cancer remains unclear.
Methods: RNMT expression in different cancers was analyzed using multiple
databases, including Cancer Cell Line Encyclopedia (CCLE), Genotype-Tissue
Expression Project (GTEx), and The Cancer Genome Atlas (TCGA). Cox regression
analysis and Kaplan-Meier analysis were used to estimate the correlation of
RNMT expression to prognosis. The data was also used to research the
relationship between RNMT expression and common immunoregulators, tumor
mutation burden (TMB), microsatellite instability (MSI), mismatch repair (MMR),
and DNA methyltransferase (DNMT). Additionally, the cBioPortal website was used
to evaluate the characteristics of RNMT alteration. The TISIDB database was used
to obtain the expression of different subtypes. The Tumor Immune Estimation
Resource (TIMER) database was used to analyze the association between RNMT and
tumor immune infiltration. Gene set enrichment analysis (GSEA) was used to
identify the relevant pathways.
Results: RNMT was ubiquitously highly expressed across cancers and survival
analysis revealed that its expression was highly associated with the clinical
prognosis of various cancer types. Remarkably, RNMT participates in immune
regulation and plays a crucial part in the tumor microenvironment. A positive
association was found between RNMT expression and the expression of six immune
cell types in colon adenocarcinoma, kidney renal clear cell carcinoma, and
liver hepatocellular carcinoma. Moreover, RNMT expression was highly associated
with immunoregulators in most cancer types, and correlated with TMB, MSI, MMR,
and DNMT. Finally, GSEA indicated that RNMT may correlate with tumor immunity.
| [
{
"created": "Tue, 18 Oct 2022 04:07:32 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Mar 2024 11:04:21 GMT",
"version": "v2"
}
] | 2024-03-22 | [
[
"Huang",
"Shuqiang",
""
],
[
"Tan",
"Cuiyu",
""
],
[
"Zheng",
"Jinzhen",
""
],
[
"Huang",
"Zhugu",
""
],
[
"Li",
"Zhihong",
""
],
[
"Lv",
"Ziyin",
""
],
[
"Chen",
"Wanru",
""
]
] | Background: RNA guanine-7 methyltransferase (RNMT) is one of the main regulators of N7-methylguanosine, and the deregulation of RNMT correlates with tumor development and immune metabolism. However, the specific function of RNMT in pan-cancer remains unclear. Methods: RNMT expression in different cancers was analyzed using multiple databases, including Cancer Cell Line Encyclopedia (CCLE), Genotype-Tissue Expression Project (GTEx), and The Cancer Genome Atlas (TCGA). Cox regression analysis and Kaplan-Meier analysis were used to estimate the correlation of RNMT expression with prognosis. The data were also used to examine the relationship between RNMT expression and common immunoregulators, tumor mutation burden (TMB), microsatellite instability (MSI), mismatch repair (MMR), and DNA methyltransferase (DNMT). Additionally, the cBioPortal website was used to evaluate the characteristics of RNMT alteration. The TISIDB database was used to obtain the expression of different subtypes. The Tumor Immune Estimation Resource (TIMER) database was used to analyze the association between RNMT and tumor immune infiltration. Gene set enrichment analysis (GSEA) was used to identify the relevant pathways. Results: RNMT was ubiquitously highly expressed across cancers and survival analysis revealed that its expression was highly associated with the clinical prognosis of various cancer types. Remarkably, RNMT participates in immune regulation and plays a crucial part in the tumor microenvironment. A positive association was found between RNMT expression and the expression of six immune cell types in colon adenocarcinoma, kidney renal clear cell carcinoma, and liver hepatocellular carcinoma. Moreover, RNMT expression was highly associated with immunoregulators in most cancer types, and correlated with TMB, MSI, MMR, and DNMT. Finally, GSEA indicated that RNMT may correlate with tumor immunity. |
2104.05989 | Swapna Sasi | Mahak Kothari, Swapna Sasi, Jun Chen, Elham Zareian, Basabdatta Sen
Bhattacharya | Bayesian Optimisation for a Biologically Inspired Population Neural
Network | 7 pages, 7 figures | null | null | null | q-bio.QM cs.NE q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We have used Bayesian Optimisation (BO) to find hyper-parameters in an
existing biologically plausible population neural network. The 8-dimensional
optimal hyper-parameter combination should be such that the network dynamics
simulate the resting state alpha rhythm (8 - 13 Hz rhythms in brain signals).
Each combination of these eight hyper-parameters constitutes a 'datapoint' in
the parameter space. The best combination of these parameters leads to the
neural network's output power spectral peak being constrained within the alpha
band. Further, constraints were introduced to the BO algorithm based on
qualitative observation of the network output time series, so that high
amplitude pseudo-periodic oscillations are removed. Upon successful
implementation for alpha band, we further optimised the network to oscillate
within the theta (4 - 8 Hz) and beta (13 - 30 Hz) bands. The changing rhythms
in the model can now be studied using the identified optimal hyper-parameters
for the respective frequency bands. We have previously tuned parameters in the
existing neural network by the trial-and-error approach; however, due to time
and computational constraints, we could not vary more than three parameters at
once. The approach detailed here allows an automatic hyper-parameter search,
producing reliable parameter sets for the network.
| [
{
"created": "Tue, 13 Apr 2021 07:48:42 GMT",
"version": "v1"
}
] | 2021-04-14 | [
[
"Kothari",
"Mahak",
""
],
[
"Sasi",
"Swapna",
""
],
[
"Chen",
"Jun",
""
],
[
"Zareian",
"Elham",
""
],
[
"Bhattacharya",
"Basabdatta Sen",
""
]
] | We have used Bayesian Optimisation (BO) to find hyper-parameters in an existing biologically plausible population neural network. The 8-dimensional optimal hyper-parameter combination should be such that the network dynamics simulate the resting state alpha rhythm (8 - 13 Hz rhythms in brain signals). Each combination of these eight hyper-parameters constitutes a 'datapoint' in the parameter space. The best combination of these parameters leads to the neural network's output power spectral peak being constrained within the alpha band. Further, constraints were introduced to the BO algorithm based on qualitative observation of the network output time series, so that high amplitude pseudo-periodic oscillations are removed. Upon successful implementation for the alpha band, we further optimised the network to oscillate within the theta (4 - 8 Hz) and beta (13 - 30 Hz) bands. The changing rhythms in the model can now be studied using the identified optimal hyper-parameters for the respective frequency bands. We have previously tuned parameters in the existing neural network by the trial-and-error approach; however, due to time and computational constraints, we could not vary more than three parameters at once. The approach detailed here allows an automatic hyper-parameter search, producing reliable parameter sets for the network. |
q-bio/0509011 | Uwe Grimm | Michael Baake (Bielefeld), Uwe Grimm (Milton Keynes) and Harald
Jockusch (Bielefeld) | Freely forming groups: Trying to be rare | 8 pages with 1 figure; final version | The ANZIAM Journal 48 (2006) 1-10 | null | null | q-bio.PE math.DS | null | A simple weakly frequency dependent model for the dynamics of a population
with a finite number of types is proposed, based upon an advantage of being
rare. In the infinite population limit, this model gives rise to a non-smooth
dynamical system that reaches its globally stable equilibrium in finite time.
This dynamical system is sufficiently simple to permit an explicit solution,
built piecewise from solutions of the logistic equation in continuous time. It
displays an interesting tree-like structure of coalescing components.
| [
{
"created": "Fri, 9 Sep 2005 16:02:06 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Sep 2005 08:49:18 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Jun 2006 12:31:02 GMT",
"version": "v3"
},
{
"created": "Fri, 29 Sep 2006 14:41:57 GMT",
"version": "v4"
}
] | 2007-05-23 | [
[
"Baake",
"Michael",
"",
"Bielefeld"
],
[
"Grimm",
"Uwe",
"",
"Milton Keynes"
],
[
"Jockusch",
"Harald",
"",
"Bielefeld"
]
] | A simple weakly frequency dependent model for the dynamics of a population with a finite number of types is proposed, based upon an advantage of being rare. In the infinite population limit, this model gives rise to a non-smooth dynamical system that reaches its globally stable equilibrium in finite time. This dynamical system is sufficiently simple to permit an explicit solution, built piecewise from solutions of the logistic equation in continuous time. It displays an interesting tree-like structure of coalescing components. |
0704.3619 | Marcus Kaiser | Luciano da F Costa, Marcus Kaiser, Claus C Hilgetag | Predicting the connectivity of primate cortical networks from
topological and spatial node properties | null | BMC Systems Biology 2007, 1:16 | 10.1186/1752-0509-1-16 | null | q-bio.NC physics.soc-ph | null | The organization of the connectivity between mammalian cortical areas has
become a major subject of study, because of its important role in scaffolding
the macroscopic aspects of animal behavior and intelligence. In this study we
present a computational reconstruction approach to the problem of network
organization, by considering the topological and spatial features of each area
in the primate cerebral cortex as subsidy for the reconstruction of the global
cortical network connectivity. Starting with all areas being disconnected,
pairs of areas with similar sets of features are linked together, in an attempt
to recover the original network structure. Inferring primate cortical
connectivity from the properties of the nodes, remarkably good reconstructions
of the global network organization could be obtained, with the topological
features allowing slightly superior accuracy to the spatial ones. Analogous
reconstruction attempts for the C. elegans neuronal network resulted in
substantially poorer recovery, indicating that cortical area interconnections
are more strongly related to the considered topological and spatial
properties than neuronal projections in the nematode. The close relationship
between area-based features and global connectivity may hint at developmental
rules and constraints for cortical networks. Particularly, differences between
the predictions from topological and spatial properties, together with the
poorer recovery resulting from spatial properties, indicate that the
organization of cortical networks is not entirely determined by spatial
constraints.
| [
{
"created": "Thu, 26 Apr 2007 20:13:58 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Costa",
"Luciano da F",
""
],
[
"Kaiser",
"Marcus",
""
],
[
"Hilgetag",
"Claus C",
""
]
] | The organization of the connectivity between mammalian cortical areas has become a major subject of study, because of its important role in scaffolding the macroscopic aspects of animal behavior and intelligence. In this study we present a computational reconstruction approach to the problem of network organization, by considering the topological and spatial features of each area in the primate cerebral cortex as subsidy for the reconstruction of the global cortical network connectivity. Starting with all areas being disconnected, pairs of areas with similar sets of features are linked together, in an attempt to recover the original network structure. Inferring primate cortical connectivity from the properties of the nodes, remarkably good reconstructions of the global network organization could be obtained, with the topological features allowing slightly superior accuracy to the spatial ones. Analogous reconstruction attempts for the C. elegans neuronal network resulted in substantially poorer recovery, indicating that cortical area interconnections are more strongly related to the considered topological and spatial properties than neuronal projections in the nematode. The close relationship between area-based features and global connectivity may hint at developmental rules and constraints for cortical networks. Particularly, differences between the predictions from topological and spatial properties, together with the poorer recovery resulting from spatial properties, indicate that the organization of cortical networks is not entirely determined by spatial constraints. |
1910.04100 | Thomas Sturm | Dima Grigoriev, Alexandru Iosif, Hamid Rahkooy, Thomas Sturm, Andreas
Weber | Efficiently and Effectively Recognizing Toricity of Steady State
Varieties | We made the presentation clearer and fixed many small flaws and
typos. A database with our computations is now available as ancillary file | Math. Comput. Sci., 15(2):199-232, Jun 2021 | 10.1007/s11786-020-00479-9 | null | q-bio.MN cs.SC math.AG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of testing whether the points in a complex or real
variety with non-zero coordinates form a multiplicative group or, more
generally, a coset of a multiplicative group. For the coset case, we study the
notion of shifted toric varieties which generalizes the notion of toric
varieties. This requires a geometric view on the varieties rather than an
algebraic view on the ideals. We present algorithms and computations on 129
models from the BioModels repository testing for group and coset structures
over both the complex numbers and the real numbers. Our methods over the
complex numbers are based on Gr\"obner basis techniques and binomiality tests.
Over the real numbers we use first-order characterizations and employ real
quantifier elimination. In combination with suitable prime decompositions and
restrictions to subspaces it turns out that almost all models show coset
structure. Beyond our practical computations, we give upper bounds on the
asymptotic worst-case complexity of the corresponding problems by proposing
single exponential algorithms that test complex or real varieties for toricity
or shifted toricity. In the positive case, these algorithms produce generating
binomials. In addition, we propose an asymptotically fast algorithm for testing
membership in a binomial variety over the algebraic closure of the rational
numbers.
| [
{
"created": "Wed, 9 Oct 2019 16:25:58 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Apr 2020 07:40:47 GMT",
"version": "v2"
}
] | 2021-07-06 | [
[
"Grigoriev",
"Dima",
""
],
[
"Iosif",
"Alexandru",
""
],
[
"Rahkooy",
"Hamid",
""
],
[
"Sturm",
"Thomas",
""
],
[
"Weber",
"Andreas",
""
]
] | We consider the problem of testing whether the points in a complex or real variety with non-zero coordinates form a multiplicative group or, more generally, a coset of a multiplicative group. For the coset case, we study the notion of shifted toric varieties which generalizes the notion of toric varieties. This requires a geometric view on the varieties rather than an algebraic view on the ideals. We present algorithms and computations on 129 models from the BioModels repository testing for group and coset structures over both the complex numbers and the real numbers. Our methods over the complex numbers are based on Gr\"obner basis techniques and binomiality tests. Over the real numbers we use first-order characterizations and employ real quantifier elimination. In combination with suitable prime decompositions and restrictions to subspaces it turns out that almost all models show coset structure. Beyond our practical computations, we give upper bounds on the asymptotic worst-case complexity of the corresponding problems by proposing single exponential algorithms that test complex or real varieties for toricity or shifted toricity. In the positive case, these algorithms produce generating binomials. In addition, we propose an asymptotically fast algorithm for testing membership in a binomial variety over the algebraic closure of the rational numbers. |
0905.0991 | Tom Michoel | Tom Michoel, Riet De Smet, Anagha Joshi, Yves Van de Peer, Kathleen
Marchal | Comparative analysis of module-based versus direct methods for
reverse-engineering transcriptional regulatory networks | 13 pages, 1 table, 6 figures + 6 pages supplementary information (1
table, 5 figures) | BMC Systems Biology 2009, 3:49 | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have compared a recently developed module-based algorithm LeMoNe for
reverse-engineering transcriptional regulatory networks to a mutual information
based direct algorithm CLR, using benchmark expression data and databases of
known transcriptional regulatory interactions for Escherichia coli and
Saccharomyces cerevisiae. A global comparison using recall versus precision
curves hides the topologically distinct nature of the inferred networks and is
not informative about the specific subtasks for which each method is most
suited. Analysis of the degree distributions and a regulator specific
comparison show that CLR is 'regulator-centric', making true predictions for a
higher number of regulators, while LeMoNe is 'target-centric', recovering a
higher number of known targets for fewer regulators, with limited overlap in
the predicted interactions between both methods. Detailed biological examples
in E. coli and S. cerevisiae are used to illustrate these differences and to
prove that each method is able to infer parts of the network where the other
fails. Biological validation of the inferred networks cautions against
over-interpreting recall and precision values computed using incomplete
reference networks.
| [
{
"created": "Thu, 7 May 2009 10:39:44 GMT",
"version": "v1"
}
] | 2009-05-08 | [
[
"Michoel",
"Tom",
""
],
[
"De Smet",
"Riet",
""
],
[
"Joshi",
"Anagha",
""
],
[
"Van de Peer",
"Yves",
""
],
[
"Marchal",
"Kathleen",
""
]
] | We have compared a recently developed module-based algorithm LeMoNe for reverse-engineering transcriptional regulatory networks to a mutual information based direct algorithm CLR, using benchmark expression data and databases of known transcriptional regulatory interactions for Escherichia coli and Saccharomyces cerevisiae. A global comparison using recall versus precision curves hides the topologically distinct nature of the inferred networks and is not informative about the specific subtasks for which each method is most suited. Analysis of the degree distributions and a regulator specific comparison show that CLR is 'regulator-centric', making true predictions for a higher number of regulators, while LeMoNe is 'target-centric', recovering a higher number of known targets for fewer regulators, with limited overlap in the predicted interactions between both methods. Detailed biological examples in E. coli and S. cerevisiae are used to illustrate these differences and to prove that each method is able to infer parts of the network where the other fails. Biological validation of the inferred networks cautions against over-interpreting recall and precision values computed using incomplete reference networks. |
2305.03925 | Wei Xie | Hua Zheng, Wei Xie, Paul Whitford, Ailun Wang, Chunsheng Fang, Wandi
Xu | Structure-Function Dynamics Hybrid Modeling: RNA Degradation | 12 pages, 5 figures | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | RNA structure and functional dynamics play fundamental roles in controlling
biological systems. Molecular dynamics simulation, which can characterize
interactions at an atomistic level, can advance the understanding on new drug
discovery, manufacturing, and delivery mechanisms. However, it is
computationally unattainable to support the development of a digital twin for
enzymatic reaction network mechanism learning, and end-to-end bioprocess design
and control. Thus, we create a hybrid ("mechanistic + machine learning") model
characterizing the interdependence of RNA structure and functional dynamics
from atomistic to macroscopic levels. To assess the proposed modeling strategy,
in this paper, we consider RNA degradation which is a critical process in
cellular biology that affects gene expression. The empirical study on RNA
lifetime prediction demonstrates the promising performance of the proposed
multi-scale bioprocess hybrid modeling strategy.
| [
{
"created": "Sat, 6 May 2023 04:40:48 GMT",
"version": "v1"
},
{
"created": "Wed, 10 May 2023 01:47:01 GMT",
"version": "v2"
},
{
"created": "Sun, 18 Jun 2023 00:25:36 GMT",
"version": "v3"
}
] | 2023-06-21 | [
[
"Zheng",
"Hua",
""
],
[
"Xie",
"Wei",
""
],
[
"Whitford",
"Paul",
""
],
[
"Wang",
"Ailun",
""
],
[
"Fang",
"Chunsheng",
""
],
[
"Xu",
"Wandi",
""
]
] | RNA structure and functional dynamics play fundamental roles in controlling biological systems. Molecular dynamics simulation, which can characterize interactions at an atomistic level, can advance the understanding on new drug discovery, manufacturing, and delivery mechanisms. However, it is computationally unattainable to support the development of a digital twin for enzymatic reaction network mechanism learning, and end-to-end bioprocess design and control. Thus, we create a hybrid ("mechanistic + machine learning") model characterizing the interdependence of RNA structure and functional dynamics from atomistic to macroscopic levels. To assess the proposed modeling strategy, in this paper, we consider RNA degradation which is a critical process in cellular biology that affects gene expression. The empirical study on RNA lifetime prediction demonstrates the promising performance of the proposed multi-scale bioprocess hybrid modeling strategy. |
0811.3716 | Jonathan Doye | Gabriel Villar, Alex W. Wilber, Alex J. Williamson, Parvinder Thiara,
Jonathan P.K. Doye, Ard A. Louis, Mara N. Jochum, Anna C.F. Lewis and
Emmanuel D. Levy | The self-assembly and evolution of homomeric protein complexes | 4 pages, 4 figures | Phys. Rev. Lett. 102, 118106 (2009) | 10.1103/PhysRevLett.102.118106 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a simple "patchy particle" model to study the thermodynamics and
dynamics of self-assembly of homomeric protein complexes. Our calculations
allow us to rationalize recent results for dihedral complexes. Namely, why
evolution of such complexes naturally takes the system into a region of
interaction space where (i) the evolutionarily newer interactions are weaker,
(ii) subcomplexes involving the stronger interactions are observed to be
thermodynamically stable on destabilization of the protein-protein interactions
and (iii) the self-assembly dynamics are hierarchical with these same
subcomplexes acting as kinetic intermediates.
| [
{
"created": "Sat, 22 Nov 2008 23:05:05 GMT",
"version": "v1"
}
] | 2009-10-07 | [
[
"Villar",
"Gabriel",
""
],
[
"Wilber",
"Alex W.",
""
],
[
"Williamson",
"Alex J.",
""
],
[
"Thiara",
"Parvinder",
""
],
[
"Doye",
"Jonathan P. K.",
""
],
[
"Louis",
"Ard A.",
""
],
[
"Jochum",
"Mara N.",
""
],
[
"Lewis",
"Anna C. F.",
""
],
[
"Levy",
"Emmanuel D.",
""
]
] | We introduce a simple "patchy particle" model to study the thermodynamics and dynamics of self-assembly of homomeric protein complexes. Our calculations allow us to rationalize recent results for dihedral complexes. Namely, why evolution of such complexes naturally takes the system into a region of interaction space where (i) the evolutionarily newer interactions are weaker, (ii) subcomplexes involving the stronger interactions are observed to be thermodynamically stable on destabilization of the protein-protein interactions and (iii) the self-assembly dynamics are hierarchical with these same subcomplexes acting as kinetic intermediates. |
1301.0004 | Ignacio Gallo | Ignacio Gallo | Population genetics of gene function | 30 pages, 6 figures | null | 10.1007/s11538-013-9841-6 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows that differentiating the lifetimes of two phenotypes
independently from their fertility can lead to a qualitative change in the
equilibrium of a population: since survival and reproduction are distinct
functional aspects of an organism, this observation contributes to extend the
population-genetical characterisation of biological function. To support this
statement a mathematical relation is derived to link the lifetime ratio
T_1/T_2, which parametrizes the different survival ability of two phenotypes,
with population variables that quantify the amount of neutral variation
underlying a population's phenotypic distribution.
| [
{
"created": "Sun, 30 Dec 2012 08:05:19 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Jan 2013 17:29:26 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Jan 2013 15:13:07 GMT",
"version": "v3"
},
{
"created": "Thu, 24 Jan 2013 06:24:43 GMT",
"version": "v4"
},
{
"created": "Sat, 16 Feb 2013 17:56:43 GMT",
"version": "v5"
},
{
"created": "Tue, 21 May 2013 09:20:38 GMT",
"version": "v6"
}
] | 2013-05-22 | [
[
"Gallo",
"Ignacio",
""
]
] | This paper shows that differentiating the lifetimes of two phenotypes independently from their fertility can lead to a qualitative change in the equilibrium of a population: since survival and reproduction are distinct functional aspects of an organism, this observation contributes to extend the population-genetical characterisation of biological function. To support this statement a mathematical relation is derived to link the lifetime ratio T_1/T_2, which parametrizes the different survival ability of two phenotypes, with population variables that quantify the amount of neutral variation underlying a population's phenotypic distribution. |
2404.04086 | Patricia Lamirande | Patricia Lamirande, Eamonn A. Gaffney, Michael Gertz, Philip K. Maini,
Jessica R. Crawshaw, Antonello Caruso | A first passage model of intravitreal drug delivery and residence time,
in relation to ocular geometry, individual variability, and injection
location | null | null | null | null | q-bio.QM math.AP | http://creativecommons.org/licenses/by/4.0/ | Purpose: Standard of care for various retinal diseases involves recurrent
intravitreal injections. This motivates mathematical modelling efforts to
identify influential factors for drug residence time, aiming to minimise
administration frequency. We sought to describe the vitreal diffusion of
therapeutics in nonclinical species used during drug development assessments.
In human eyes, we investigated the impact of variability in vitreous cavity
size and eccentricity, and in injection location, on drug elimination.
Methods: Using a first passage time approach, we modelled the
transport-controlled distribution of two standard therapeutic protein formats
(Fab and IgG) and elimination through anterior and posterior pathways. Detailed
anatomical 3D geometries of mouse, rat, rabbit, cynomolgus monkey, and human
eyes were constructed using ocular images and biometry datasets. A scaling
relationship was derived for comparison with experimental ocular half-lives.
Results: Model simulations revealed a dependence of residence time on ocular
size and injection location. Delivery to the posterior vitreous resulted in
increased vitreal half-life and retinal permeation. Interindividual variability
in human eyes had a significant influence on residence time (half-life range of
5-7 days), showing a strong correlation to axial length and vitreal volume.
Anterior exit was the predominant route of drug elimination. Contribution of
the posterior pathway displayed a small (3%) difference between protein
formats, but varied between species (10-30%).
Conclusions: The modelling results suggest that experimental variability in
ocular half-life is partially attributed to anatomical differences and
injection site location. Simulations further suggest a potential role of the
posterior pathway permeability in determining species differences in ocular
pharmacokinetics.
| [
{
"created": "Fri, 5 Apr 2024 13:21:48 GMT",
"version": "v1"
}
] | 2024-04-08 | [
[
"Lamirande",
"Patricia",
""
],
[
"Gaffney",
"Eamonn A.",
""
],
[
"Gertz",
"Michael",
""
],
[
"Maini",
"Philip K.",
""
],
[
"Crawshaw",
"Jessica R.",
""
],
[
"Caruso",
"Antonello",
""
]
] | Purpose: Standard of care for various retinal diseases involves recurrent intravitreal injections. This motivates mathematical modelling efforts to identify influential factors for drug residence time, aiming to minimise administration frequency. We sought to describe the vitreal diffusion of therapeutics in nonclinical species used during drug development assessments. In human eyes, we investigated the impact of variability in vitreous cavity size and eccentricity, and in injection location, on drug elimination. Methods: Using a first passage time approach, we modelled the transport-controlled distribution of two standard therapeutic protein formats (Fab and IgG) and elimination through anterior and posterior pathways. Detailed anatomical 3D geometries of mouse, rat, rabbit, cynomolgus monkey, and human eyes were constructed using ocular images and biometry datasets. A scaling relationship was derived for comparison with experimental ocular half-lives. Results: Model simulations revealed a dependence of residence time on ocular size and injection location. Delivery to the posterior vitreous resulted in increased vitreal half-life and retinal permeation. Interindividual variability in human eyes had a significant influence on residence time (half-life range of 5-7 days), showing a strong correlation to axial length and vitreal volume. Anterior exit was the predominant route of drug elimination. Contribution of the posterior pathway displayed a small (3%) difference between protein formats, but varied between species (10-30%). Conclusions: The modelling results suggest that experimental variability in ocular half-life is partially attributed to anatomical differences and injection site location. Simulations further suggest a potential role of the posterior pathway permeability in determining species differences in ocular pharmacokinetics. |
1901.10005 | Thomas Gaudelet | Thomas Gaudelet, Noel Malod-Dognin, Jon Sanchez-Valle, Vera Pancaldi,
Alfonso Valencia and Natasa Przulj | Unveiling new disease, pathway, and gene associations via multi-scale
neural networks | 16 pages | PLOS ONE, 15(4), p.e0231059 (2020) | 10.1371/journal.pone.0231059 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diseases involve complex processes and modifications to the cellular
machinery. The gene expression profile of the affected cells contains
characteristic patterns linked to a disease. Hence, biological knowledge
pertaining to a disease can be derived from a patient cell's profile, improving
our diagnosis ability, as well as our grasp of disease risks. This knowledge
can be used for drug re-purposing, or by physicians to evaluate a patient's
condition and co-morbidity risk. Here, we look at differential gene expression
obtained from microarray technology for patients diagnosed with various
diseases. Based on this data and cellular multi-scale organization, we aim to
uncover disease--disease links, as well as disease-gene and disease--pathways
associations. We propose neural networks with structures inspired by the
multi-scale organization of a cell. We show that these models are able to
correctly predict the diagnosis for the majority of the patients. Through the
analysis of the trained models, we predict and validate disease-disease,
disease-pathway, and disease-gene associations with comparisons to known
interactions and literature search, proposing putative explanations for the
novel predictions that come from our study.
| [
{
"created": "Mon, 28 Jan 2019 21:17:57 GMT",
"version": "v1"
},
{
"created": "Sat, 11 May 2019 11:36:44 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Apr 2020 07:53:13 GMT",
"version": "v3"
}
] | 2020-04-13 | [
[
"Gaudelet",
"Thomas",
""
],
[
"Malod-Dognin",
"Noel",
""
],
[
"Sanchez-Valle",
"Jon",
""
],
[
"Pancaldi",
"Vera",
""
],
[
"Valencia",
"Alfonso",
""
],
[
"Przulj",
"Natasa",
""
]
] | Diseases involve complex processes and modifications to the cellular machinery. The gene expression profile of the affected cells contains characteristic patterns linked to a disease. Hence, biological knowledge pertaining to a disease can be derived from a patient cell's profile, improving our diagnosis ability, as well as our grasp of disease risks. This knowledge can be used for drug re-purposing, or by physicians to evaluate a patient's condition and co-morbidity risk. Here, we look at differential gene expression obtained from microarray technology for patients diagnosed with various diseases. Based on this data and cellular multi-scale organization, we aim to uncover disease--disease links, as well as disease-gene and disease--pathways associations. We propose neural networks with structures inspired by the multi-scale organization of a cell. We show that these models are able to correctly predict the diagnosis for the majority of the patients. Through the analysis of the trained models, we predict and validate disease-disease, disease-pathway, and disease-gene associations with comparisons to known interactions and literature search, proposing putative explanations for the novel predictions that come from our study. |
2301.02286 | Yury Garcia | Yury E. Garcia, Shu-Wei Chou-Chen, Luis A. Barboza, Maria L.
Daza-Torres, J. Cricelio Montesinos-Lopez, Paola Vasquez, Juan G. Calvo,
Miriam Nuno, and Fabio Sanchez | Common patterns between dengue cases, climate, and local environmental
variables in Costa Rica: A Wavelet Approach | 21 pages, 15 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Throughout history, prevention and control of dengue transmission have
challenged public health authorities worldwide. In the last decades, the
interaction of multiple factors, such as environmental and climate variability,
has influenced increments in incidence and geographical spread of the virus. In
Costa Rica, a country characterized by multiple microclimates separated by
short distances, dengue has been endemic since its introduction in 1993.
Understanding the role of climatic and environmental factors in the seasonal
and inter-annual variability of disease spread is essential to develop
effective surveillance and control efforts. In this study, we conducted a
wavelet time series analysis of weekly climate, local environmental variables,
and dengue cases (2001-2019) from 32 cantons in Costa Rica to identify
significant periods (e.g., annual, biannual) in which climate and environmental
variables co-varied with dengue cases. Wavelet coherence analysis was used to
characterize seasonality, multi-year outbreaks, and relative delays between the
time series. Results show that dengue outbreaks occurring every 3 years in
cantons located in the country's Central, North, and South Pacific regions were
highly coherent with the Oceanic Ni\~no 3.4 and the Tropical North Caribbean
Index (TNA). Dengue cases were in phase with El Ni\~no 3.4 and TNA, with El
Ni\~no 3.4 ahead of dengue cases by roughly nine months and TNA ahead by less
than three months. Annual dengue outbreaks were coherent with local
environmental variables (NDWI, EVI, Evapotranspiration, and Precipitation) in
most cantons except those located in the Central, South Pacific, and South
Caribbean regions of the country. The local environmental variables were in
phase with dengue cases and were ahead by around three months.
| [
{
"created": "Tue, 3 Jan 2023 22:08:46 GMT",
"version": "v1"
}
] | 2023-01-09 | [
[
"Garcia",
"Yury E.",
""
],
[
"Chou-Chen",
"Shu-Wei",
""
],
[
"Barboza",
"Luis A.",
""
],
[
"Daza-Torres",
"Maria L.",
""
],
[
"Montesinos-Lopez",
"J. Cricelio",
""
],
[
"Vasquez",
"Paola",
""
],
[
"Calvo",
"Juan G.",
""
],
[
"Nuno",
"Miriam",
""
],
[
"Sanchez",
"Fabio",
""
]
] | Throughout history, prevention and control of dengue transmission have challenged public health authorities worldwide. In the last decades, the interaction of multiple factors, such as environmental and climate variability, has influenced increments in incidence and geographical spread of the virus. In Costa Rica, a country characterized by multiple microclimates separated by short distances, dengue has been endemic since its introduction in 1993. Understanding the role of climatic and environmental factors in the seasonal and inter-annual variability of disease spread is essential to develop effective surveillance and control efforts. In this study, we conducted a wavelet time series analysis of weekly climate, local environmental variables, and dengue cases (2001-2019) from 32 cantons in Costa Rica to identify significant periods (e.g., annual, biannual) in which climate and environmental variables co-varied with dengue cases. Wavelet coherence analysis was used to characterize seasonality, multi-year outbreaks, and relative delays between the time series. Results show that dengue outbreaks occurring every 3 years in cantons located in the country's Central, North, and South Pacific regions were highly coherent with the Oceanic Ni\~no 3.4 and the Tropical North Caribbean Index (TNA). Dengue cases were in phase with El Ni\~no 3.4 and TNA, with El Ni\~no 3.4 ahead of dengue cases by roughly nine months and TNA ahead by less than three months. Annual dengue outbreaks were coherent with local environmental variables (NDWI, EVI, Evapotranspiration, and Precipitation) in most cantons except those located in the Central, South Pacific, and South Caribbean regions of the country. The local environmental variables were in phase with dengue cases and were ahead by around three months. |
1605.08740 | Elizabeth Lee | Elizabeth C. Lee, Jason M. Asher, Sandra Goldlust, John D. Kraemer,
Andrew B. Lawson, and Shweta Bansal | Mind the scales: Harnessing spatial big data for infectious disease
surveillance and inference | 12 pages, 1 figure | null | null | null | q-bio.PE physics.soc-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial big data have the "velocity," "volume," and "variety" of big data
sources and additional geographic information about the record. Digital data
sources, such as medical claims, mobile phone call data records, and geo-tagged
tweets, have entered infectious disease epidemiology as novel sources of data
to complement traditional infectious disease surveillance. In this work, we
provide examples of how spatial big data have been used thus far in
epidemiological analyses and describe opportunities for these sources to
improve public health coordination and disease mitigation strategies. In
addition, we consider the technical, practical, and ethical challenges with the
use of spatial big data in infectious disease surveillance and inference.
Finally, we discuss the implications of the rising use of spatial big data in
epidemiology to health risk communications, across-scale public health
coordination, and public health policy recommendation.
| [
{
"created": "Fri, 27 May 2016 18:17:20 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jun 2016 02:39:04 GMT",
"version": "v2"
},
{
"created": "Fri, 26 Aug 2016 20:31:56 GMT",
"version": "v3"
}
] | 2016-08-30 | [
[
"Lee",
"Elizabeth C.",
""
],
[
"Asher",
"Jason M.",
""
],
[
"Goldlust",
"Sandra",
""
],
[
"Kraemer",
"John D.",
""
],
[
"Lawson",
"Andrew B.",
""
],
[
"Bansal",
"Shweta",
""
]
] | Spatial big data have the "velocity," "volume," and "variety" of big data sources and additional geographic information about the record. Digital data sources, such as medical claims, mobile phone call data records, and geo-tagged tweets, have entered infectious disease epidemiology as novel sources of data to complement traditional infectious disease surveillance. In this work, we provide examples of how spatial big data have been used thus far in epidemiological analyses and describe opportunities for these sources to improve public health coordination and disease mitigation strategies. In addition, we consider the technical, practical, and ethical challenges with the use of spatial big data in infectious disease surveillance and inference. Finally, we discuss the implications of the rising use of spatial big data in epidemiology to health risk communications, across-scale public health coordination, and public health policy recommendation. |
0808.2231 | Brian Gin | Brian C. Gin, Juan P. Garrahan and Phillip L. Geissler | The limited role of non-native contacts in folding pathways of a lattice
protein | 11 pages, 4 figures | J Mol Biol. 2009 Oct 9;392(5):1303-14. | 10.1016/j.jmb.2009.06.058 | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of protein energetics which neglect interactions between amino acids
that are not adjacent in the native state, such as the Go model, encode or
underlie many influential ideas on protein folding. Implicit in this
simplification is a crucial assumption that has never been critically evaluated
in a broad context: Detailed mechanisms of protein folding are not biased by
non-native contacts, typically imagined as a consequence of sequence design
and/or topology. Here we present, using computer simulations of a well-studied
lattice heteropolymer model, the first systematic test of this oft-assumed
correspondence over the statistically significant range of hundreds of
thousands of amino acid sequences, and a concomitantly diverse set of folding
pathways. Enabled by a novel means of fingerprinting folding trajectories, our
study reveals a profound insensitivity of the order in which native contacts
accumulate to the omission of non-native interactions. Contrary to conventional
thinking, this robustness does not arise from topological restrictions and does
not depend on folding rate. We find instead that the crucial factor in
discriminating among topological pathways is the heterogeneity of native
contact energies. Our results challenge conventional thinking on the
relationship between sequence design and free energy landscapes for protein
folding, and help justify the widespread use of Go-like models to scrutinize
detailed folding mechanisms of real proteins.
| [
{
"created": "Sat, 16 Aug 2008 02:31:02 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jan 2009 22:42:18 GMT",
"version": "v2"
}
] | 2009-10-08 | [
[
"Gin",
"Brian C.",
""
],
[
"Garrahan",
"Juan P.",
""
],
[
"Geissler",
"Phillip L.",
""
]
] | Models of protein energetics which neglect interactions between amino acids that are not adjacent in the native state, such as the Go model, encode or underlie many influential ideas on protein folding. Implicit in this simplification is a crucial assumption that has never been critically evaluated in a broad context: Detailed mechanisms of protein folding are not biased by non-native contacts, typically imagined as a consequence of sequence design and/or topology. Here we present, using computer simulations of a well-studied lattice heteropolymer model, the first systematic test of this oft-assumed correspondence over the statistically significant range of hundreds of thousands of amino acid sequences, and a concomitantly diverse set of folding pathways. Enabled by a novel means of fingerprinting folding trajectories, our study reveals a profound insensitivity of the order in which native contacts accumulate to the omission of non-native interactions. Contrary to conventional thinking, this robustness does not arise from topological restrictions and does not depend on folding rate. We find instead that the crucial factor in discriminating among topological pathways is the heterogeneity of native contact energies. Our results challenge conventional thinking on the relationship between sequence design and free energy landscapes for protein folding, and help justify the widespread use of Go-like models to scrutinize detailed folding mechanisms of real proteins. |
2107.03220 | Yanqiao Zhu | Yanqiao Zhu, Hejie Cui, Lifang He, Lichao Sun, Carl Yang | Joint Embedding of Structural and Functional Brain Networks with Graph
Neural Networks for Mental Illness Diagnosis | Formal version accepted to IEEE EMBC 2022; previously presented at
ICML 2021 Workshop on Computational Approaches to Mental Health (no
proceedings) | null | null | null | q-bio.NC cs.LG physics.med-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal brain networks characterize complex connectivities among different
brain regions from both structural and functional aspects and provide a new
means for mental disease analysis. Recently, Graph Neural Networks (GNNs) have
become a de facto model for analyzing graph-structured data. However, how to
employ GNNs to extract effective representations from brain networks in
multiple modalities remains rarely explored. Moreover, as brain networks
provide no initial node features, how to design informative node attributes and
leverage edge weights for GNNs to learn is left unsolved. To this end, we
develop a novel multiview GNN for multimodal brain networks. In particular, we
regard each modality as a view for brain networks and employ contrastive
learning for multimodal fusion. Then, we propose a GNN model which takes
advantage of the message passing scheme by propagating messages based on degree
statistics and brain region connectivities. Extensive experiments on two
real-world disease datasets (HIV and Bipolar) demonstrate the effectiveness of
our proposed method over state-of-the-art baselines.
| [
{
"created": "Wed, 7 Jul 2021 13:49:57 GMT",
"version": "v1"
},
{
"created": "Tue, 24 May 2022 17:04:23 GMT",
"version": "v2"
}
] | 2022-05-25 | [
[
"Zhu",
"Yanqiao",
""
],
[
"Cui",
"Hejie",
""
],
[
"He",
"Lifang",
""
],
[
"Sun",
"Lichao",
""
],
[
"Yang",
"Carl",
""
]
] | Multimodal brain networks characterize complex connectivities among different brain regions from both structural and functional aspects and provide a new means for mental disease analysis. Recently, Graph Neural Networks (GNNs) have become a de facto model for analyzing graph-structured data. However, how to employ GNNs to extract effective representations from brain networks in multiple modalities remains rarely explored. Moreover, as brain networks provide no initial node features, how to design informative node attributes and leverage edge weights for GNNs to learn is left unsolved. To this end, we develop a novel multiview GNN for multimodal brain networks. In particular, we regard each modality as a view for brain networks and employ contrastive learning for multimodal fusion. Then, we propose a GNN model which takes advantage of the message passing scheme by propagating messages based on degree statistics and brain region connectivities. Extensive experiments on two real-world disease datasets (HIV and Bipolar) demonstrate the effectiveness of our proposed method over state-of-the-art baselines. |
2208.14102 | Medhavi Vishwakarma | Sindhu M, Medhavi Vishwakarma | Role of heterogeneity in dictating tumorigenesis in epithelial tissues | null | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Biological systems across various length and time scales are noisy, including
tissues. Why are biological tissues inherently chaotic? Does heterogeneity play
a role in determining the physiology and pathology of tissues? How do physical
and biochemical heterogeneity crosstalk to dictate tissue function? In this
review, we begin with a brief primer on heterogeneity in biological tissues.
Then, we take examples from recent literature indicating functional relevance
of biochemical and physical heterogeneity and discuss the impact of
heterogeneity on tissue function and pathology. We take specific examples from
studies on epithelial tissues to discuss the potential role of inherent tissue
heterogeneity in tumorigenesis.
| [
{
"created": "Tue, 30 Aug 2022 09:35:30 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Sep 2022 15:28:45 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Sep 2022 12:13:54 GMT",
"version": "v3"
}
] | 2022-09-30 | [
[
"M",
"Sindhu",
""
],
[
"Vishwakarma",
"Medhavi",
""
]
] | Biological systems across various length and time scales are noisy, including tissues. Why are biological tissues inherently chaotic? Does heterogeneity play a role in determining the physiology and pathology of tissues? How do physical and biochemical heterogeneity crosstalk to dictate tissue function? In this review, we begin with a brief primer on heterogeneity in biological tissues. Then, we take examples from recent literature indicating functional relevance of biochemical and physical heterogeneity and discuss the impact of heterogeneity on tissue function and pathology. We take specific examples from studies on epithelial tissues to discuss the potential role of inherent tissue heterogeneity in tumorigenesis. |
1908.05120 | Alexandre de Brevern | Tarun Narwani (BIGR), Catherine Etchebest (BIGR), Pierrick Craveur
(BIGR), Sylvain L\'eonard (DSIMB, BIGR), Joseph Rebehmed (LAU, BIGR),
Narayanaswamy Srinivasan, Aur\'elie Bornot (DSIMB), Jean-Christophe Gelly
(BIGR), Alexandre de Brevern (BIGR) | In silico prediction of protein flexibility with local structure
approach | null | Biochimie, Elsevier, 2019, 165, pp.150-155 | 10.1016/j.biochi.2019.07.025 | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flexibility is an intrinsic essential feature of protein structures, directly
linked to their functions. To this day, most of the prediction methods use the
crystallographic data (namely B-factors) as the only indicator of protein's
inner flexibility and predict them as rigid or flexible. PredyFlexy stands
differently from other approaches as it relies on the definition of protein
flexibility (i) not only taken from crystallographic data, but also (ii) from
Root Mean Square Fluctuations (RMSFs) observed in Molecular Dynamics
simulations. It also uses a specific representation of protein structures,
named Long Structural Prototypes (LSPs). From Position-Specific Scoring Matrix,
the 120 LSPs are predicted with a good accuracy and directly used to predict
(i) the protein flexibility in three categories (flexible, intermediate and
rigid), (ii) the normalized B-factors, (iii) the normalized RMSFs, and (iv) a
confidence index. Prediction accuracy among these three classes is equivalent
to the best two class prediction methods, while the normalized B-factors and
normalized RMSFs have a good correlation with experimental and in silico
values. Thus, PredyFlexy is a unique approach, which is of major utility for
the scientific community. It supports parallelization features and can be run on
a local cluster using multiple cores. The entire project is available under an
open-source license at
http://www.dsimb.inserm.fr/~debrevern/TOOLS/predyflexy_1.3/index.php.
| [
{
"created": "Wed, 14 Aug 2019 13:40:51 GMT",
"version": "v1"
}
] | 2019-08-15 | [
[
"Narwani",
"Tarun",
"",
"BIGR"
],
[
"Etchebest",
"Catherine",
"",
"BIGR"
],
[
"Craveur",
"Pierrick",
"",
"BIGR"
],
[
"Léonard",
"Sylvain",
"",
"DSIMB, BIGR"
],
[
"Rebehmed",
"Joseph",
"",
"LAU, BIGR"
],
[
"Srinivasan",
"Narayanaswamy",
"",
"DSIMB"
],
[
"Bornot",
"Aurélie",
"",
"DSIMB"
],
[
"Gelly",
"Jean-Christophe",
"",
"BIGR"
],
[
"de Brevern",
"Alexandre",
"",
"BIGR"
]
] | Flexibility is an intrinsic essential feature of protein structures, directly linked to their functions. To this day, most of the prediction methods use the crystallographic data (namely B-factors) as the only indicator of protein's inner flexibility and predict them as rigid or flexible. PredyFlexy stands differently from other approaches as it relies on the definition of protein flexibility (i) not only taken from crystallographic data, but also (ii) from Root Mean Square Fluctuations (RMSFs) observed in Molecular Dynamics simulations. It also uses a specific representation of protein structures, named Long Structural Prototypes (LSPs). From Position-Specific Scoring Matrix, the 120 LSPs are predicted with a good accuracy and directly used to predict (i) the protein flexibility in three categories (flexible, intermediate and rigid), (ii) the normalized B-factors, (iii) the normalized RMSFs, and (iv) a confidence index. Prediction accuracy among these three classes is equivalent to the best two class prediction methods, while the normalized B-factors and normalized RMSFs have a good correlation with experimental and in silico values. Thus, PredyFlexy is a unique approach, which is of major utility for the scientific community. It supports parallelization features and can be run on a local cluster using multiple cores. The entire project is available under an open-source license at http://www.dsimb.inserm.fr/~debrevern/TOOLS/predyflexy_1.3/index.php. |
2312.05956 | Kohitij Kar | Kohitij Kar, and James J DiCarlo | The Quest for an Integrated Set of Neural Mechanisms Underlying Object
Recognition in Primates | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Visual object recognition -- the behavioral ability to rapidly and accurately
categorize many visually encountered objects -- is core to primate cognition.
This behavioral capability is algorithmically impressive because of the myriad
identity-preserving viewpoints and scenes that dramatically change the visual
image produced by the same object. Until recently, the brain mechanisms that
support that capability were deeply mysterious. However, over the last decade,
this scientific mystery has been illuminated by the discovery and development
of brain-inspired, image-computable, artificial neural network (ANN) systems
that rival primates in this behavioral feat. Apart from fundamentally changing
the landscape of artificial intelligence (AI), modified versions of these ANN
systems are the current leading scientific hypotheses of an integrated set of
mechanisms in the primate ventral visual stream that support object
recognition. What separates brain-mapped versions of these systems from prior
conceptual models is that they are Sensory-computable, Mechanistic,
Anatomically Referenced, and Testable (SMART). Here, we review and provide
perspective on the brain mechanisms that the currently leading SMART models
address. We review the empirical brain and behavioral alignment successes and
failures of those current models. Given ongoing advances in neurobehavioral
measurements and AI, we discuss the next frontiers for even more accurate
mechanistic understanding. And we outline the likely applications of that
SMART-model-based understanding.
| [
{
"created": "Sun, 10 Dec 2023 17:58:08 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Kar",
"Kohitij",
""
],
[
"DiCarlo",
"James J",
""
]
] | Visual object recognition -- the behavioral ability to rapidly and accurately categorize many visually encountered objects -- is core to primate cognition. This behavioral capability is algorithmically impressive because of the myriad identity-preserving viewpoints and scenes that dramatically change the visual image produced by the same object. Until recently, the brain mechanisms that support that capability were deeply mysterious. However, over the last decade, this scientific mystery has been illuminated by the discovery and development of brain-inspired, image-computable, artificial neural network (ANN) systems that rival primates in this behavioral feat. Apart from fundamentally changing the landscape of artificial intelligence (AI), modified versions of these ANN systems are the current leading scientific hypotheses of an integrated set of mechanisms in the primate ventral visual stream that support object recognition. What separates brain-mapped versions of these systems from prior conceptual models is that they are Sensory-computable, Mechanistic, Anatomically Referenced, and Testable (SMART). Here, we review and provide perspective on the brain mechanisms that the currently leading SMART models address. We review the empirical brain and behavioral alignment successes and failures of those current models. Given ongoing advances in neurobehavioral measurements and AI, we discuss the next frontiers for even more accurate mechanistic understanding. And we outline the likely applications of that SMART-model-based understanding. |
1310.4598 | Alexey Mazur K | Alexey K. Mazur and Mounir Maaloum | DNA flexibility on short length scales probed by atomic force microscopy | 5 pages, 5 figures; to appear in PRL | Phys. Rev. Lett. (2014) 112,068104 | 10.1103/PhysRevLett.112.068104 | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unusually high bending flexibility has been recently reported for DNA on
short length scales. We use atomic force microscopy (AFM) in solution to obtain
a direct estimate of DNA bending statistics for scales down to one helical
turn. It appears that DNA behaves as a Gaussian chain and is well described by
the worm-like chain model at length scales beyond 3 helical turns (10.5nm).
Below this threshold, the AFM data exhibit growing noise because of
experimental limitations. This noise may hide small deviations from the
Gaussian behavior, but they can hardly be significant.
| [
{
"created": "Thu, 17 Oct 2013 07:30:29 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jan 2014 17:44:42 GMT",
"version": "v2"
}
] | 2014-07-22 | [
[
"Mazur",
"Alexey K.",
""
],
[
"Maaloum",
"Mounir",
""
]
] | Unusually high bending flexibility has been recently reported for DNA on short length scales. We use atomic force microscopy (AFM) in solution to obtain a direct estimate of DNA bending statistics for scales down to one helical turn. It appears that DNA behaves as a Gaussian chain and is well described by the worm-like chain model at length scales beyond 3 helical turns (10.5nm). Below this threshold, the AFM data exhibit growing noise because of experimental limitations. This noise may hide small deviations from the Gaussian behavior, but they can hardly be significant. |
1403.6328 | Simone Pigolotti | Giuseppe Bianco, Patrizio Mariani, Andre W. Visser, Maria Grazia
Mazzocchi, and Simone Pigolotti | Analysis of self-overlap reveals trade-offs in plankton swimming
trajectories | 9 pages, 5 figures, submitted | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Movement is a fundamental behaviour of organisms that brings about beneficial
encounters with resources and mates, but at the same time exposes the organism
to dangerous encounters with predators. The movement patterns adopted by
organisms should reflect a balance between these contrasting processes. This
trade-off can be hypothesized as being evident in the behaviour of plankton,
which inhabit a dilute 3D environment with few refuges or orienting landmarks.
We present an analysis of the swimming path geometries based on a volumetric
Monte Carlo sampling approach, which is particularly adept at revealing such
trade-offs by measuring the self-overlap of the trajectories. Application of
this method to experimentally measured trajectories reveals that swimming
patterns in copepods are shaped to efficiently explore volumes at small scales,
while achieving a large overlap at larger scales. Regularities in the observed
trajectories make the transition between these two regimes always sharper than
in randomized trajectories or as predicted by random walk theory. Thus real
trajectories present a stronger separation between exploration for food and
exposure to predators. The specific scale and features of this transition
depend on species, gender, and local environmental conditions, pointing at
adaptation to state and stage dependent evolutionary trade-offs.
| [
{
"created": "Tue, 25 Mar 2014 12:49:38 GMT",
"version": "v1"
}
] | 2014-03-26 | [
[
"Bianco",
"Giuseppe",
""
],
[
"Mariani",
"Patrizio",
""
],
[
"Visser",
"Andre W.",
""
],
[
"Mazzocchi",
"Maria Grazia",
""
],
[
"Pigolotti",
"Simone",
""
]
] | Movement is a fundamental behaviour of organisms that brings about beneficial encounters with resources and mates, but at the same time exposes the organism to dangerous encounters with predators. The movement patterns adopted by organisms should reflect a balance between these contrasting processes. This trade-off can be hypothesized as being evident in the behaviour of plankton, which inhabit a dilute 3D environment with few refuges or orienting landmarks. We present an analysis of the swimming path geometries based on a volumetric Monte Carlo sampling approach, which is particularly adept at revealing such trade-offs by measuring the self-overlap of the trajectories. Application of this method to experimentally measured trajectories reveals that swimming patterns in copepods are shaped to efficiently explore volumes at small scales, while achieving a large overlap at larger scales. Regularities in the observed trajectories make the transition between these two regimes always sharper than in randomized trajectories or as predicted by random walk theory. Thus real trajectories present a stronger separation between exploration for food and exposure to predators. The specific scale and features of this transition depend on species, gender, and local environmental conditions, pointing at adaptation to state and stage dependent evolutionary trade-offs. |
0807.0247 | Catherine Beauchemin | Amy L. Bauer, Catherine A.A. Beauchemin, and Alan S. Perelson | Agent-Based Modeling of Host-Pathogen Systems: The Successes and
Challenges | LaTeX, 12 pages, 1 EPS figure, uses document class REVTeX 4, and
packages hyperref, xspace, graphics, amsmath, verbatim, and SIunits | Information Sciences, Volume 179, Issue 10, 29 April 2009, Pages
1379-1389 | 10.1016/j.ins.2008.11.012 | null | q-bio.CB q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agent-based models have been employed to describe numerous processes in
immunology. Simulations based on these types of models have been used to
enhance our understanding of immunology and disease pathology. We review
various agent-based models relevant to host-pathogen systems and discuss their
contributions to our understanding of biological processes. We then point out
some limitations and challenges of agent-based models and encourage efforts
towards reproducibility and model validation.
| [
{
"created": "Tue, 1 Jul 2008 22:01:21 GMT",
"version": "v1"
}
] | 2017-12-05 | [
[
"Bauer",
"Amy L.",
""
],
[
"Beauchemin",
"Catherine A. A.",
""
],
[
"Perelson",
"Alan S.",
""
]
] | Agent-based models have been employed to describe numerous processes in immunology. Simulations based on these types of models have been used to enhance our understanding of immunology and disease pathology. We review various agent-based models relevant to host-pathogen systems and discuss their contributions to our understanding of biological processes. We then point out some limitations and challenges of agent-based models and encourage efforts towards reproducibility and model validation. |
1612.09268 | Sidarta Ribeiro | Natalia Bezerra Mota, Sylvia Pinheiro, Mariano Sigman, Diego Fernandez
Slezak, Guillermo Cecchi, Mauro Copelli, Sidarta Ribeiro | The ontogeny of discourse structure mimics the development of literature | Natalia Bezerra Mota and Sylvia Pinheiro: Equal contribution Sidarta
Ribeiro and Mauro Copelli: Corresponding authors | null | null | null | q-bio.NC cs.CL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discourse varies with age, education, psychiatric state and historical epoch,
but the ontogenetic and cultural dynamics of discourse structure remain to be
quantitatively characterized. To this end we investigated word graphs obtained
from verbal reports of 200 subjects ages 2-58, and 676 literary texts spanning
~5,000 years. In healthy subjects, lexical diversity, graph size, and
long-range recurrence departed from initial near-random levels through a
monotonic asymptotic increase across ages, while short-range recurrence showed
a corresponding decrease. These changes were explained by education and suggest
a hierarchical development of discourse structure: short-range recurrence and
lexical diversity stabilize after elementary school, but graph size and
long-range recurrence only stabilize after high school. This gradual maturation
was blurred in psychotic subjects, who maintained in adulthood a near-random
structure. In literature, monotonic asymptotic changes over time were
remarkable: While lexical diversity, long-range recurrence and graph size
increased away from near-randomness, short-range recurrence declined, from
above to below random levels. Bronze Age texts are structurally similar to
childish or psychotic discourses, but subsequent texts converge abruptly to the
healthy adult pattern around the onset of the Axial Age (800-200 BC), a period
of pivotal cultural change. Thus, individually as well as historically,
discourse maturation increases the range of word recurrence away from
randomness.
| [
{
"created": "Tue, 27 Dec 2016 21:58:42 GMT",
"version": "v1"
}
] | 2016-12-30 | [
[
"Mota",
"Natalia Bezerra",
""
],
[
"Pinheiro",
"Sylvia",
""
],
[
"Sigman",
"Mariano",
""
],
[
"Slezak",
"Diego Fernandez",
""
],
[
"Cecchi",
"Guillermo",
""
],
[
"Copelli",
"Mauro",
""
],
[
"Ribeiro",
"Sidarta",
""
]
] | Discourse varies with age, education, psychiatric state and historical epoch, but the ontogenetic and cultural dynamics of discourse structure remain to be quantitatively characterized. To this end we investigated word graphs obtained from verbal reports of 200 subjects ages 2-58, and 676 literary texts spanning ~5,000 years. In healthy subjects, lexical diversity, graph size, and long-range recurrence departed from initial near-random levels through a monotonic asymptotic increase across ages, while short-range recurrence showed a corresponding decrease. These changes were explained by education and suggest a hierarchical development of discourse structure: short-range recurrence and lexical diversity stabilize after elementary school, but graph size and long-range recurrence only stabilize after high school. This gradual maturation was blurred in psychotic subjects, who maintained in adulthood a near-random structure. In literature, monotonic asymptotic changes over time were remarkable: While lexical diversity, long-range recurrence and graph size increased away from near-randomness, short-range recurrence declined, from above to below random levels. Bronze Age texts are structurally similar to childish or psychotic discourses, but subsequent texts converge abruptly to the healthy adult pattern around the onset of the Axial Age (800-200 BC), a period of pivotal cultural change. Thus, individually as well as historically, discourse maturation increases the range of word recurrence away from randomness. |
2203.04695 | Shengchao Liu | Shengchao Liu, Meng Qu, Zuobai Zhang, Huiyu Cai, Jian Tang | Structured Multi-task Learning for Molecular Property Prediction | null | null | null | null | q-bio.BM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-task learning for molecular property prediction is becoming
increasingly important in drug discovery. However, in contrast to other
domains, the performance of multi-task learning in drug discovery is still not
satisfying as the number of labeled data for each task is too limited, which
calls for additional data to complement the data scarcity. In this paper, we
study multi-task learning for molecular property prediction in a novel setting,
where a relation graph between tasks is available. We first construct a dataset
(ChEMBL-STRING) including around 400 tasks as well as a task relation graph.
Then to better utilize such relation graph, we propose a method called SGNN-EBM
to systematically investigate the structured task modeling from two
perspectives. (1) In the \emph{latent} space, we model the task representations
by applying a state graph neural network (SGNN) on the relation graph. (2) In
the \emph{output} space, we employ structured prediction with the energy-based
model (EBM), which can be efficiently trained through noise-contrastive
estimation (NCE) approach. Empirical results justify the effectiveness of
SGNN-EBM. Code is available on https://github.com/chao1224/SGNN-EBM.
| [
{
"created": "Tue, 22 Feb 2022 20:31:23 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Oct 2022 03:21:41 GMT",
"version": "v2"
}
] | 2022-10-07 | [
[
"Liu",
"Shengchao",
""
],
[
"Qu",
"Meng",
""
],
[
"Zhang",
"Zuobai",
""
],
[
"Cai",
"Huiyu",
""
],
[
"Tang",
"Jian",
""
]
] | Multi-task learning for molecular property prediction is becoming increasingly important in drug discovery. However, in contrast to other domains, the performance of multi-task learning in drug discovery is still not satisfying as the number of labeled data for each task is too limited, which calls for additional data to complement the data scarcity. In this paper, we study multi-task learning for molecular property prediction in a novel setting, where a relation graph between tasks is available. We first construct a dataset (ChEMBL-STRING) including around 400 tasks as well as a task relation graph. Then to better utilize such relation graph, we propose a method called SGNN-EBM to systematically investigate the structured task modeling from two perspectives. (1) In the \emph{latent} space, we model the task representations by applying a state graph neural network (SGNN) on the relation graph. (2) In the \emph{output} space, we employ structured prediction with the energy-based model (EBM), which can be efficiently trained through noise-contrastive estimation (NCE) approach. Empirical results justify the effectiveness of SGNN-EBM. Code is available on https://github.com/chao1224/SGNN-EBM. |
0705.3473 | Alex Barnett | A. H. Barnett and P. R. Moorcroft | Analytic steady-state space use patterns and rapid computations in
mechanistic home range analysis | 14 pages, 7 figures, submit to J. Math. Biol | null | null | null | q-bio.QM | null | Mechanistic home range models are important tools in modeling animal dynamics
in spatially-complex environments. We introduce a class of stochastic models
for animal movement in a habitat of varying preference. Such models interpolate
between spatially-implicit resource selection analysis (RSA) and
advection-diffusion models, possessing these two models as limiting cases. We
find a closed-form solution for the steady-state (equilibrium) probability
distribution u* using a factorization of the redistribution operator into
symmetric and diagonal parts. How space use is controlled by the preference
function w then depends on the characteristic width of the redistribution
kernel: when w changes rapidly compared to this width, u* ~ w, whereas on
global scales large compared to this width, u* ~ w^2. We analyse the behavior
at discontinuities in w which occur at habitat type boundaries. We simulate the
dynamics of space use given two-dimensional prey-availability data and explore
the effect of the redistribution kernel width. Our factorization allows such
numerical simulations to be done extremely fast; we expect this to aid the
computationally-intensive task of model parameter fitting and inverse modeling.
| [
{
"created": "Wed, 23 May 2007 21:53:04 GMT",
"version": "v1"
}
] | 2007-05-25 | [
[
"Barnett",
"A. H.",
""
],
[
"Moorcroft",
"P. R.",
""
]
] | Mechanistic home range models are important tools in modeling animal dynamics in spatially-complex environments. We introduce a class of stochastic models for animal movement in a habitat of varying preference. Such models interpolate between spatially-implicit resource selection analysis (RSA) and advection-diffusion models, possessing these two models as limiting cases. We find a closed-form solution for the steady-state (equilibrium) probability distribution u* using a factorization of the redistribution operator into symmetric and diagonal parts. How space use is controlled by the preference function w then depends on the characteristic width of the redistribution kernel: when w changes rapidly compared to this width, u* ~ w, whereas on global scales large compared to this width, u* ~ w^2. We analyse the behavior at discontinuities in w which occur at habitat type boundaries. We simulate the dynamics of space use given two-dimensional prey-availability data and explore the effect of the redistribution kernel width. Our factorization allows such numerical simulations to be done extremely fast; we expect this to aid the computationally-intensive task of model parameter fitting and inverse modeling. |
1909.12653 | Mauricio Barahona | Maxwell Hodges, Mauricio Barahona and Sophia N. Yaliraki | Allostery and cooperativity in multimeric proteins: bond-to-bond
propensities in ATCase | 17 pages, 7 figures | Scientific Reports, volume 8, Article number: 11079 (2018) | 10.1038/s41598-018-27992-z | null | q-bio.QM physics.bio-ph physics.chem-ph q-bio.BM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aspartate carbamoyltransferase (ATCase) is a large dodecameric enzyme with
six active sites that exhibits allostery: its catalytic rate is modulated by
the binding of various substrates at distal points from the active sites. A
recently developed method, bond-to-bond propensity analysis, has proven capable
of predicting allosteric sites in a wide range of proteins using an
energy-weighted atomistic graph obtained from the protein structure and given
knowledge only of the location of the active site. Bond-to-bond propensity
establishes if energy fluctuations at given bonds have significant effects on
any other bond in the protein, by considering their propagation through the
protein graph. In this work, we use bond-to-bond propensity analysis to study
different aspects of ATCase activity using three different protein structures
and sources of fluctuations. First, we predict key residues and bonds involved
in the transition between inactive (T) and active (R) states of ATCase by
analysing allosteric substrate binding as a source of energy perturbations in
the protein graph. Our computational results also indicate that the effect of
multiple allosteric binding is non linear: a switching effect is observed after
a particular number and arrangement of substrates is bound suggesting a form of
long range communication between the distantly arranged allosteric sites.
Second, cooperativity is explored by considering a bisubstrate analogue as the
source of energy fluctuations at the active site, also leading to the
identification of highly significant residues to the T-R transition that
enhance cooperativity across active sites. Finally, the inactive (T) structure
is shown to exhibit a strong, non linear communication between the allosteric
sites and the interface between catalytic subunits, rather than the active
site.
| [
{
"created": "Fri, 27 Sep 2019 12:37:49 GMT",
"version": "v1"
}
] | 2019-09-30 | [
[
"Hodges",
"Maxwell",
""
],
[
"Barahona",
"Mauricio",
""
],
[
"Yaliraki",
"Sophia N.",
""
]
] | Aspartate carbamoyltransferase (ATCase) is a large dodecameric enzyme with six active sites that exhibits allostery: its catalytic rate is modulated by the binding of various substrates at distal points from the active sites. A recently developed method, bond-to-bond propensity analysis, has proven capable of predicting allosteric sites in a wide range of proteins using an energy-weighted atomistic graph obtained from the protein structure and given knowledge only of the location of the active site. Bond-to-bond propensity establishes if energy fluctuations at given bonds have significant effects on any other bond in the protein, by considering their propagation through the protein graph. In this work, we use bond-to-bond propensity analysis to study different aspects of ATCase activity using three different protein structures and sources of fluctuations. First, we predict key residues and bonds involved in the transition between inactive (T) and active (R) states of ATCase by analysing allosteric substrate binding as a source of energy perturbations in the protein graph. Our computational results also indicate that the effect of multiple allosteric binding is non linear: a switching effect is observed after a particular number and arrangement of substrates is bound suggesting a form of long range communication between the distantly arranged allosteric sites. Second, cooperativity is explored by considering a bisubstrate analogue as the source of energy fluctuations at the active site, also leading to the identification of highly significant residues to the T-R transition that enhance cooperativity across active sites. Finally, the inactive (T) structure is shown to exhibit a strong, non linear communication between the allosteric sites and the interface between catalytic subunits, rather than the active site. |
1707.00027 | Nathan Baker | Elizabeth Jurrus, Dave Engel, Keith Star, Kyle Monson, Juan Brandi,
Lisa E. Felberg, David H. Brookes, Leighton Wilson, Jiahui Chen, Karina
Liles, Minju Chun, Peter Li, David W. Gohara, Todd Dolinsky, Robert Konecny,
David R. Koes, Jens Erik Nielsen, Teresa Head-Gordon, Weihua Geng, Robert
Krasny, Guo Wei Wei, Michael J. Holst, J. Andrew McCammon, Nathan A. Baker | Improvements to the APBS biomolecular solvation software suite | null | null | 10.1002/pro.3280 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve
the equations of continuum electrostatics for large biomolecular assemblages
and has had an impact on the study of a broad range of chemical, biological,
and biomedical applications. APBS addresses three key technology challenges for
understanding solvation and electrostatics in biomedical applications: accurate
and efficient models for biomolecular solvation and electrostatics, robust and
scalable software for applying those theories to biomolecular systems, and
mechanisms for sharing and analyzing biomolecular electrostatics data in the
scientific community. To address new research applications and advancing
computational capabilities, we have continually updated APBS and its suite of
accompanying software since its release in 2001. In this manuscript, we discuss
the models and capabilities that have recently been implemented within the APBS
software package including: a Poisson-Boltzmann analytical and a
semi-analytical solver, an optimized boundary element solver, a geometry-based
geometric flow solvation model, a graph theory based algorithm for determining
p$K_a$ values, and an improved web-based visualization tool for viewing
electrostatics.
| [
{
"created": "Fri, 30 Jun 2017 19:09:01 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Aug 2017 21:24:37 GMT",
"version": "v2"
}
] | 2017-12-29 | [
[
"Jurrus",
"Elizabeth",
""
],
[
"Engel",
"Dave",
""
],
[
"Star",
"Keith",
""
],
[
"Monson",
"Kyle",
""
],
[
"Brandi",
"Juan",
""
],
[
"Felberg",
"Lisa E.",
""
],
[
"Brookes",
"David H.",
""
],
[
"Wilson",
"Leighton",
""
],
[
"Chen",
"Jiahui",
""
],
[
"Liles",
"Karina",
""
],
[
"Chun",
"Minju",
""
],
[
"Li",
"Peter",
""
],
[
"Gohara",
"David W.",
""
],
[
"Dolinsky",
"Todd",
""
],
[
"Konecny",
"Robert",
""
],
[
"Koes",
"David R.",
""
],
[
"Nielsen",
"Jens Erik",
""
],
[
"Head-Gordon",
"Teresa",
""
],
[
"Geng",
"Weihua",
""
],
[
"Krasny",
"Robert",
""
],
[
"Wei",
"Guo Wei",
""
],
[
"Holst",
"Michael J.",
""
],
[
"McCammon",
"J. Andrew",
""
],
[
"Baker",
"Nathan A.",
""
]
] | The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had an impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this manuscript, we discuss the models and capabilities that have recently been implemented within the APBS software package including: a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory based algorithm for determining p$K_a$ values, and an improved web-based visualization tool for viewing electrostatics. |
2004.00834 | Claus Vogl | Claus Vogl, Sandra Peer | Inference of population genetic parameters with a biallelic mutation
drift model using the coalescent, diffusion with orthogonal polynomials, and
the Moran model | 26 pages, 2 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In population genetics, extant samples are usually used for inference of past
population genetic forces. With the Kingman coalescent and the backward
diffusion equation, inference of the marginal likelihood proceeds from an
extant sample backward in time. Conditional on an extant sample, the Moran
model can also be used backward in time with identical results, up to a scaling
of time. In particular, all three approaches -- the coalescent, the backward
diffusion, and the Moran model -- lead to the identical marginal likelihood of
the sample. If probabilities of ancestral states are also inferred, either of
discrete ancestral allele particle configurations, as in the coalescent, or of
ancestral population allele proportions, as in the backward diffusion, the
backward algorithm needs to be combined with the corresponding forward
algorithm to the forward-backward algorithm. Generally, orthogonal polynomials,
solving the diffusion equation, are numerically simpler than the other
approaches: they implicitly sum over many intermediate ancestral particle
configurations; furthermore, while the Moran model requires iterative matrix
multiplication with a transition matrix of a dimension of the population size
squared, expansion of the polynomials is only necessary up to the sample size.
For discrete samples, forward-in-time moving pure birth processes similar to
the Polya- or Hoppe-urn models complement the backward-looking coalescent.
Because the sample size is a random variable forward in time, pure-birth
processes are unsuited to model population demography given extant samples.
With orthogonal polynomials, however, not only ancestral allele proportions but
also probabilities of ancestral particle configurations can be calculated
easily. Assuming only mutation and drift, the use of orthogonal polynomials is
numerically advantageous over alternative strategies.
| [
{
"created": "Thu, 2 Apr 2020 06:26:53 GMT",
"version": "v1"
}
] | 2020-04-03 | [
[
"Vogl",
"Claus",
""
],
[
"Peer",
"Sandra",
""
]
] | In population genetics, extant samples are usually used for inference of past population genetic forces. With the Kingman coalescent and the backward diffusion equation, inference of the marginal likelihood proceeds from an extant sample backward in time. Conditional on an extant sample, the Moran model can also be used backward in time with identical results, up to a scaling of time. In particular, all three approaches -- the coalescent, the backward diffusion, and the Moran model -- lead to the identical marginal likelihood of the sample. If probabilities of ancestral states are also inferred, either of discrete ancestral allele particle configurations, as in the coalescent, or of ancestral population allele proportions, as in the backward diffusion, the backward algorithm needs to be combined with the corresponding forward algorithm to the forward-backward algorithm. Generally, orthogonal polynomials, solving the diffusion equation, are numerically simpler than the other approaches: they implicitly sum over many intermediate ancestral particle configurations; furthermore, while the Moran model requires iterative matrix multiplication with a transition matrix of a dimension of the population size squared, expansion of the polynomials is only necessary up to the sample size. For discrete samples, forward-in-time moving pure birth processes similar to the Polya- or Hoppe-urn models complement the backward-looking coalescent. Because the sample size is a random variable forward in time, pure-birth processes are unsuited to model population demography given extant samples. With orthogonal polynomials, however, not only ancestral allele proportions but also probabilities of ancestral particle configurations can be calculated easily. Assuming only mutation and drift, the use of orthogonal polynomials is numerically advantageous over alternative strategies.
1608.04700 | Antoine Zambelli | Antoine Zambelli | A Data-Driven Approach to Estimating the Number of Clusters in
Hierarchical Clustering | 6 pages, 7 figures, 12 tables | null | null | null | q-bio.QM cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose two new methods for estimating the number of clusters in a
hierarchical clustering framework in the hopes of creating a fully automated
process with no human intervention. The methods are completely data-driven and
require no input from the researcher, and as such are fully automated. They are
quite easy to implement and not computationally intensive in the least. We
analyze performance on several simulated data sets and the Biobase Gene
Expression Set, comparing our methods to the established Gap statistic and
Elbow methods and outperforming both in multi-cluster scenarios.
| [
{
"created": "Tue, 16 Aug 2016 18:35:09 GMT",
"version": "v1"
}
] | 2016-08-17 | [
[
"Zambelli",
"Antoine",
""
]
] | We propose two new methods for estimating the number of clusters in a hierarchical clustering framework in the hopes of creating a fully automated process with no human intervention. The methods are completely data-driven and require no input from the researcher, and as such are fully automated. They are quite easy to implement and not computationally intensive in the least. We analyze performance on several simulated data sets and the Biobase Gene Expression Set, comparing our methods to the established Gap statistic and Elbow methods and outperforming both in multi-cluster scenarios. |
2112.14500 | Mar\'ia Vallet-Regi | Elena Alvarez, Manuel Estevez, Carla Jimenez-Jimenez, Montserrat
Colilla, Isabel Izquierdo-Barba, Blanca Gonzalez, Maria Vallet-Regi | A versatile multicomponent mesoporous silica nanosystem with dual
antimicrobial and osteogenic effects | 27 pages, 8 figures | Acta Biomaterialia 136 (2021) 570 to 581 | 10.1016/j.actbio.2021.09.027 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this manuscript, we propose a simple and versatile methodology to design
nanosystems based on biocompatible and multicomponent mesoporous silica
nanoparticles (MSNs) for infection management. This strategy relies on the
combination of antibiotic molecules and antimicrobial metal ions into the same
nanosystem, affording a significant improvement of the antibiofilm effect
compared to that of nanosystems carrying only one of these agents. The
multicomponent nanosystem is based on MSNs externally functionalized with a
polyamine dendrimer (MSN-G3) that favors internalization inside the bacteria
and allows the complexation of multiactive metal ions (MSN-G3-Mn+).
Importantly, the selection of both the antibiotic and the cation may be done
depending on clinical needs. Herein, levofloxacin and Zn2+ ion, chosen owing to
its dual antimicrobial and osteogenic capability, have been incorporated. This
dual biological role of Zn2+ could have an adjuvant effect through destroying
the biofilm in combination with the antibiotic, as well as aiding the repair and
regeneration of lost bone tissue associated with osteolysis during the infection
process. The versatility of the nanosystem has been demonstrated by
incorporating Ag+ ions in a reference nanosystem. In vitro antimicrobial assays
in planktonic and biofilm states show a high antimicrobial efficacy due to the
combined action
of levofloxacin and Zn2+, achieving an antimicrobial efficacy above 99%
compared to the MSNs containing only one of the microbicide agents. In vitro
cell cultures with MC3T3-E1 preosteoblasts reveal the osteogenic capability of
the nanosystem, showing a positive effect on osteoblastic differentiation while
preserving the cell viability.
| [
{
"created": "Wed, 29 Dec 2021 11:12:33 GMT",
"version": "v1"
}
] | 2021-12-30 | [
[
"Alvarez",
"Elena",
""
],
[
"Estevez",
"Manuel",
""
],
[
"Jimenez-Jimenez",
"Carla",
""
],
[
"Colilla",
"Montserrat",
""
],
[
"Izquierdo-Barba",
"Isabel",
""
],
[
"Gonzalez",
"Blanca",
""
],
[
"Vallet-Regi",
"Maria",
""
]
] | In this manuscript, we propose a simple and versatile methodology to design nanosystems based on biocompatible and multicomponent mesoporous silica nanoparticles (MSNs) for infection management. This strategy relies on the combination of antibiotic molecules and antimicrobial metal ions into the same nanosystem, affording a significant improvement of the antibiofilm effect compared to that of nanosystems carrying only one of these agents. The multicomponent nanosystem is based on MSNs externally functionalized with a polyamine dendrimer (MSN-G3) that favors internalization inside the bacteria and allows the complexation of multiactive metal ions (MSN-G3-Mn+). Importantly, the selection of both the antibiotic and the cation may be done depending on clinical needs. Herein, levofloxacin and Zn2+ ion, chosen owing to its dual antimicrobial and osteogenic capability, have been incorporated. This dual biological role of Zn2+ could have an adjuvant effect through destroying the biofilm in combination with the antibiotic, as well as aiding the repair and regeneration of lost bone tissue associated with osteolysis during the infection process. The versatility of the nanosystem has been demonstrated by incorporating Ag+ ions in a reference nanosystem. In vitro antimicrobial assays in planktonic and biofilm states show a high antimicrobial efficacy due to the combined action of levofloxacin and Zn2+, achieving an antimicrobial efficacy above 99% compared to the MSNs containing only one of the microbicide agents. In vitro cell cultures with MC3T3-E1 preosteoblasts reveal the osteogenic capability of the nanosystem, showing a positive effect on osteoblastic differentiation while preserving the cell viability.
2303.13996 | Steven Salzberg | Paulo Amaral, Silvia Carbonell-Sala, Francisco M. De La Vega, Tiago
Faial, Adam Frankish, Thomas Gingeras, Roderic Guigo, Jennifer L Harrow,
Artemis G. Hatzigeorgiou, Rory Johnson, Terence D. Murphy, Mihaela Pertea,
Kim D. Pruitt, Shashikant Pujar, Hazuki Takahashi, Igor Ulitsky, Ales
Varabyou, Christine A. Wells, Mark Yandell, Piero Carninci, and Steven L.
Salzberg | The status of the human gene catalogue | 14 pages | null | null | null | q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Scientists have been trying to identify all of the genes in the human genome
since the initial draft of the genome was published in 2001. Over the
intervening years, much progress has been made in identifying protein-coding
genes, and the estimated number has shrunk to fewer than 20,000, although the
number of distinct protein-coding isoforms has expanded dramatically. The
invention of high-throughput RNA sequencing and other technological
breakthroughs have led to an explosion in the number of reported non-coding RNA
genes, although most of them do not yet have any known function. A combination
of recent advances offers a path forward to identifying these functions and
towards eventually completing the human gene catalogue. However, much work
remains to be done before we have a universal annotation standard that includes
all medically significant genes, maintains their relationships with different
reference genomes, and describes clinically relevant genetic variants.
| [
{
"created": "Fri, 24 Mar 2023 13:49:25 GMT",
"version": "v1"
}
] | 2023-03-27 | [
[
"Amaral",
"Paulo",
""
],
[
"Carbonell-Sala",
"Silvia",
""
],
[
"De La Vega",
"Francisco M.",
""
],
[
"Faial",
"Tiago",
""
],
[
"Frankish",
"Adam",
""
],
[
"Gingeras",
"Thomas",
""
],
[
"Guigo",
"Roderic",
""
],
[
"Harrow",
"Jennifer L",
""
],
[
"Hatzigeorgiou",
"Artemis G.",
""
],
[
"Johnson",
"Rory",
""
],
[
"Murphy",
"Terence D.",
""
],
[
"Pertea",
"Mihaela",
""
],
[
"Pruitt",
"Kim D.",
""
],
[
"Pujar",
"Shashikant",
""
],
[
"Takahashi",
"Hazuki",
""
],
[
"Ulitsky",
"Igor",
""
],
[
"Varabyou",
"Ales",
""
],
[
"Wells",
"Christine A.",
""
],
[
"Yandell",
"Mark",
""
],
[
"Carninci",
"Piero",
""
],
[
"Salzberg",
"Steven L.",
""
]
] | Scientists have been trying to identify all of the genes in the human genome since the initial draft of the genome was published in 2001. Over the intervening years, much progress has been made in identifying protein-coding genes, and the estimated number has shrunk to fewer than 20,000, although the number of distinct protein-coding isoforms has expanded dramatically. The invention of high-throughput RNA sequencing and other technological breakthroughs have led to an explosion in the number of reported non-coding RNA genes, although most of them do not yet have any known function. A combination of recent advances offers a path forward to identifying these functions and towards eventually completing the human gene catalogue. However, much work remains to be done before we have a universal annotation standard that includes all medically significant genes, maintains their relationships with different reference genomes, and describes clinically relevant genetic variants. |
2102.11629 | Simon Martina-Perez | Simon Martina-Perez, Matthew J. Simpson, Ruth E. Baker | Bayesian uncertainty quantification for data-driven equation learning | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Equation learning aims to infer differential equation models from data. While
a number of studies have shown that differential equation models can be
successfully identified when the data are sufficiently detailed and corrupted
with relatively small amounts of noise, the relationship between observation
noise and uncertainty in the learned differential equation models remains
unexplored. We demonstrate that for noisy data sets there exists great
variation in both the structure of the learned differential equation models as
well as the parameter values. We explore how to combine data sets to quantify
uncertainty in the learned models, and at the same time draw mechanistic
conclusions about the target differential equations. We generate noisy data
using a stochastic agent-based model and combine equation learning methods with
approximate Bayesian computation (ABC) to show that the correct differential
equation model can be successfully learned from data, while a quantification of
uncertainty is given by a posterior distribution in parameter space.
| [
{
"created": "Tue, 23 Feb 2021 11:08:30 GMT",
"version": "v1"
},
{
"created": "Mon, 17 May 2021 16:06:44 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Jun 2021 17:19:44 GMT",
"version": "v3"
},
{
"created": "Wed, 29 Sep 2021 16:46:41 GMT",
"version": "v4"
}
] | 2021-09-30 | [
[
"Martina-Perez",
"Simon",
""
],
[
"Simpson",
"Matthew J.",
""
],
[
"Baker",
"Ruth E.",
""
]
] | Equation learning aims to infer differential equation models from data. While a number of studies have shown that differential equation models can be successfully identified when the data are sufficiently detailed and corrupted with relatively small amounts of noise, the relationship between observation noise and uncertainty in the learned differential equation models remains unexplored. We demonstrate that for noisy data sets there exists great variation in both the structure of the learned differential equation models as well as the parameter values. We explore how to combine data sets to quantify uncertainty in the learned models, and at the same time draw mechanistic conclusions about the target differential equations. We generate noisy data using a stochastic agent-based model and combine equation learning methods with approximate Bayesian computation (ABC) to show that the correct differential equation model can be successfully learned from data, while a quantification of uncertainty is given by a posterior distribution in parameter space. |
1906.11365 | Eshan Mitra | Eshan D. Mitra, William S. Hlavacek | Parameter Estimation and Uncertainty Quantification for Systems Biology
Models | 23 pages, 1 figure, 1 table | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical models can provide quantitative insight into immunoreceptor
signaling, but require parameterization and uncertainty quantification before
making reliable predictions. We review currently available methods and software
tools to address these problems. We consider gradient-based and gradient-free
methods for point estimation of parameter values, and methods of profile
likelihood, bootstrapping, and Bayesian inference for uncertainty
quantification. We consider recent and potential future applications of these
methods to systems-level modeling of immune-related phenomena.
| [
{
"created": "Wed, 26 Jun 2019 22:22:21 GMT",
"version": "v1"
}
] | 2019-06-28 | [
[
"Mitra",
"Eshan D.",
""
],
[
"Hlavacek",
"William S.",
""
]
] | Mathematical models can provide quantitative insight into immunoreceptor signaling, but require parameterization and uncertainty quantification before making reliable predictions. We review currently available methods and software tools to address these problems. We consider gradient-based and gradient-free methods for point estimation of parameter values, and methods of profile likelihood, bootstrapping, and Bayesian inference for uncertainty quantification. We consider recent and potential future applications of these methods to systems-level modeling of immune-related phenomena. |
2211.02829 | Arka Sanyal Mr | Adrita Chanda, Adrija Aich, Arka Sanyal, Anantika Chandra, Saumyadeep
Goswami | Current Landscape of Mesenchymal Stem Cell Therapy in COVID Induced
Acute Respiratory Distress Syndrome | 14 Pages, 6 Figures | Acta Scientific MICROBIOLOGY (ISSN: 2581-3226), Volume 5 Issue 8
August 2022 | 10.31080/ASMI.2022.05.1125 | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | The severe acute respiratory syndrome coronavirus 2 outbreak in China's Hubei
area in late 2019 has now created a global pandemic that has spread to over 150
countries. In most people, COVID 19 is a respiratory infection that produces
fever, cough, and shortness of breath. Patients with severe COVID 19 may
develop ARDS. MSCs can come from a number of places, such as bone marrow,
umbilical cord, and adipose tissue. Because of their easy accessibility and low
immunogenicity, MSCs were often used in animal and clinical research. In recent
studies, MSCs have been shown to decrease inflammation, enhance lung
permeability, improve microbial and alveolar fluid clearance, and accelerate
lung epithelial and endothelial repair. Furthermore, MSC-based therapy has
shown promising outcomes in preclinical studies and phase 1 clinical trials in
sepsis and ARDS. In this paper, we posit the therapeutic strategies using MSC
and dissect how and why MSC therapy is a potential treatment option for COVID
19 induced ARDS. We cite numerous promising clinical trials, elucidate the
potential advantages of MSC therapy for COVID 19 ARDS patients, examine the
detriments of this therapeutic strategy and suggest possibilities of subsequent
research.
| [
{
"created": "Sat, 5 Nov 2022 06:54:42 GMT",
"version": "v1"
}
] | 2022-11-08 | [
[
"Chanda",
"Adrita",
""
],
[
"Aich",
"Adrija",
""
],
[
"Sanyal",
"Arka",
""
],
[
"Chandra",
"Anantika",
""
],
[
"Goswami",
"Saumyadeep",
""
]
] | The severe acute respiratory syndrome coronavirus 2 outbreak in China's Hubei area in late 2019 has now created a global pandemic that has spread to over 150 countries. In most people, COVID 19 is a respiratory infection that produces fever, cough, and shortness of breath. Patients with severe COVID 19 may develop ARDS. MSCs can come from a number of places, such as bone marrow, umbilical cord, and adipose tissue. Because of their easy accessibility and low immunogenicity, MSCs were often used in animal and clinical research. In recent studies, MSCs have been shown to decrease inflammation, enhance lung permeability, improve microbial and alveolar fluid clearance, and accelerate lung epithelial and endothelial repair. Furthermore, MSC-based therapy has shown promising outcomes in preclinical studies and phase 1 clinical trials in sepsis and ARDS. In this paper, we posit the therapeutic strategies using MSC and dissect how and why MSC therapy is a potential treatment option for COVID 19 induced ARDS. We cite numerous promising clinical trials, elucidate the potential advantages of MSC therapy for COVID 19 ARDS patients, examine the detriments of this therapeutic strategy and suggest possibilities of subsequent research.
q-bio/0611049 | David A. Kessler | David A. Kessler, Nadav M. Shnerb | Extinction Rates for Fluctuation-Induced Metastabilities : A Real-Space
WKB Approach | null | null | 10.1007/s10955-007-9312-2 | null | q-bio.PE | null | The extinction of a single species due to demographic stochasticity is
analyzed. The discrete nature of the individual agents and the Poissonian noise
related to the birth-death processes result in local extinction of a metastable
population, as the system hits the absorbing state. The Fokker-Planck
formulation of that problem fails to capture the statistics of large deviations
from the metastable state, while approximations appropriate close to the
absorbing state become, in general, invalid as the population becomes large. To
connect these two regimes, a master equation based on a real space WKB method
is presented, and is shown to yield an excellent approximation for the decay
rate and the extreme events statistics all the way down to the absorbing state.
The details of the underlying microscopic process, smeared out in a mean field
treatment, are shown to be crucial for an exact determination of the extinction
exponent. This general scheme is shown to reproduce the known results in the
field, to yield new corollaries and to fit quite precisely the numerical
solutions. Moreover it allows for systematic improvement via a series expansion
where the small parameter is the inverse of the number of individuals in the
metastable state.
| [
{
"created": "Thu, 16 Nov 2006 21:10:16 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Kessler",
"David A.",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | The extinction of a single species due to demographic stochasticity is analyzed. The discrete nature of the individual agents and the Poissonian noise related to the birth-death processes result in local extinction of a metastable population, as the system hits the absorbing state. The Fokker-Planck formulation of that problem fails to capture the statistics of large deviations from the metastable state, while approximations appropriate close to the absorbing state become, in general, invalid as the population becomes large. To connect these two regimes, a master equation based on a real space WKB method is presented, and is shown to yield an excellent approximation for the decay rate and the extreme events statistics all the way down to the absorbing state. The details of the underlying microscopic process, smeared out in a mean field treatment, are shown to be crucial for an exact determination of the extinction exponent. This general scheme is shown to reproduce the known results in the field, to yield new corollaries and to fit quite precisely the numerical solutions. Moreover it allows for systematic improvement via a series expansion where the small parameter is the inverse of the number of individuals in the metastable state. |
2106.12405 | Kento Nakamura | Kento Nakamura and Tetsuya J. Kobayashi | Optimal sensing and control of run-and-tumble chemotaxis | 8 pages, 4 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Run-and-tumble chemotaxis is one of the representative search strategies of
an odor source via sensing its spatial gradient. The optimal ways of sensing
and control in the run-and-tumble chemotaxis have been analyzed theoretically
to elucidate the efficiency of strategies implemented in organisms. However,
because of theoretical difficulties, most attempts have been limited to
either linear or deterministic analysis even though real biological chemotactic
systems involve considerable stochasticity and nonlinearity in their sensory
processes and controlled responses. In this paper, by combining the theories of
optimal filtering and Kullback-Leibler control of partially observed Markov
decision process (POMDP), we derive the optimal and fully nonlinear strategy
for controlling run-and-tumble motion depending on noisy sensing of ligand
gradient. The derived optimal strategy consists of the optimal filtering
dynamics to estimate the run-direction from noisy sensory input and the control
function to regulate the motor output. We further show that this optimal
strategy can be associated naturally with a standard biochemical model and
experimental data of Escherichia coli chemotaxis. These results
demonstrate that our theoretical framework can work as a basis for analyzing
the efficiency and optimality of run-and-tumble chemotaxis.
| [
{
"created": "Wed, 23 Jun 2021 13:48:31 GMT",
"version": "v1"
}
] | 2021-06-24 | [
[
"Nakamura",
"Kento",
""
],
[
"Kobayashi",
"Tetsuya J.",
""
]
] | Run-and-tumble chemotaxis is one of the representative search strategies of an odor source via sensing its spatial gradient. The optimal ways of sensing and control in the run-and-tumble chemotaxis have been analyzed theoretically to elucidate the efficiency of strategies implemented in organisms. However, because of theoretical difficulties, most attempts have been limited to either linear or deterministic analysis even though real biological chemotactic systems involve considerable stochasticity and nonlinearity in their sensory processes and controlled responses. In this paper, by combining the theories of optimal filtering and Kullback-Leibler control of partially observed Markov decision process (POMDP), we derive the optimal and fully nonlinear strategy for controlling run-and-tumble motion depending on noisy sensing of ligand gradient. The derived optimal strategy consists of the optimal filtering dynamics to estimate the run-direction from noisy sensory input and the control function to regulate the motor output. We further show that this optimal strategy can be associated naturally with a standard biochemical model and experimental data of Escherichia coli chemotaxis. These results demonstrate that our theoretical framework can work as a basis for analyzing the efficiency and optimality of run-and-tumble chemotaxis.
1301.3528 | Momiao Xiong | Momiao Xiong and Long Ma | An Efficient Sufficient Dimension Reduction Method for Identifying
Genetic Variants of Clinical Significance | null | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fast and cheaper next generation sequencing technologies will generate
unprecedentedly massive and high-dimensional genomic and epigenomic variation
data. In the near future, a routine part of the medical record will include the
sequenced genome. A fundamental question is how to efficiently extract genomic
and epigenomic variants of clinical utility which will provide information for
optimal wellness and interference strategies. The traditional paradigm for
identifying variants of clinical validity is to test association of the
variants. However, significantly associated genetic variants may or may not be
useful for diagnosis and prognosis of diseases. An alternative to association
studies for finding genetic variants of predictive utility is to systematically
search for variants that contain sufficient information for phenotype prediction.
To achieve this, we introduce concepts of sufficient dimension reduction and
coordinate hypothesis which project the original high dimensional data to very
low dimensional space while preserving all information on response phenotypes.
We then formulate the clinically significant genetic variant discovery problem
as a sparse SDR problem and develop algorithms that can select significant
genetic variants from up to ten million predictors, with the aid of dividing the
SDR for the whole genome into a number of subSDR problems defined for genomic
regions. The sparse SDR is in turn formulated as a sparse optimal scoring
problem, but with a penalty which can remove row vectors from the basis matrix.
To speed up computation, we develop the modified alternating direction method
of multipliers to solve the sparse optimal scoring problem, which can easily be
implemented in parallel. To illustrate its application, the proposed method is
applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
| [
{
"created": "Tue, 15 Jan 2013 23:19:14 GMT",
"version": "v1"
}
] | 2013-01-17 | [
[
"Xiong",
"Momiao",
""
],
[
"Ma",
"Long",
""
]
] | Fast and cheaper next generation sequencing technologies will generate unprecedentedly massive and high-dimensional genomic and epigenomic variation data. In the near future, a routine part of the medical record will include the sequenced genome. A fundamental question is how to efficiently extract genomic and epigenomic variants of clinical utility which will provide information for optimal wellness and interference strategies. The traditional paradigm for identifying variants of clinical validity is to test association of the variants. However, significantly associated genetic variants may or may not be useful for diagnosis and prognosis of diseases. An alternative to association studies for finding genetic variants of predictive utility is to systematically search for variants that contain sufficient information for phenotype prediction. To achieve this, we introduce concepts of sufficient dimension reduction and coordinate hypothesis which project the original high dimensional data to very low dimensional space while preserving all information on response phenotypes. We then formulate the clinically significant genetic variant discovery problem as a sparse SDR problem and develop algorithms that can select significant genetic variants from up to ten million predictors, with the aid of dividing the SDR for the whole genome into a number of subSDR problems defined for genomic regions. The sparse SDR is in turn formulated as a sparse optimal scoring problem, but with a penalty which can remove row vectors from the basis matrix. To speed up computation, we develop the modified alternating direction method of multipliers to solve the sparse optimal scoring problem, which can easily be implemented in parallel. To illustrate its application, the proposed method is applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
1104.2562 | Mareike Fischer | Mareike Fischer | Mathematical aspects of phylogenetic groves | 17 pages, 5 figures | null | null | null | q-bio.PE cs.DS math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inference of new information on the relatedness of species by
phylogenetic trees based on DNA data is one of the main challenges of modern
biology. But despite all technological advances, DNA sequencing is still a
time-consuming and costly process. Therefore, decision criteria would be
desirable to decide a priori which data might contribute new information to the
supertree which is not explicitly displayed by any input tree. A new concept,
so-called groves, to identify taxon sets with the potential to construct such
informative supertrees was suggested by An\'e et al. in 2009. But the important
conjecture that maximal groves can easily be identified in a database remained
unproved and was published on the Isaac Newton Institute's list of open
phylogenetic problems. In this paper, we show that the conjecture does not
generally hold, but also introduce a new concept, namely 2-overlap groves,
which overcomes this problem.
| [
{
"created": "Wed, 13 Apr 2011 17:58:47 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Fischer",
"Mareike",
""
]
] | The inference of new information on the relatedness of species by phylogenetic trees based on DNA data is one of the main challenges of modern biology. But despite all technological advances, DNA sequencing is still a time-consuming and costly process. Therefore, decision criteria would be desirable to decide a priori which data might contribute new information to the supertree which is not explicitly displayed by any input tree. A new concept, so-called groves, to identify taxon sets with the potential to construct such informative supertrees was suggested by An\'e et al. in 2009. But the important conjecture that maximal groves can easily be identified in a database remained unproved and was published on the Isaac Newton Institute's list of open phylogenetic problems. In this paper, we show that the conjecture does not generally hold, but also introduce a new concept, namely 2-overlap groves, which overcomes this problem. |
0905.2329 | Marco Morelli | Marco J. Morelli, Pieter Rein ten Wolde and Rosalind J. Allen | DNA looping provides stability and robustness to the bacteriophage
lambda switch | In press on PNAS. Single file contains supplementary info | null | 10.1073/pnas.0810399106 | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The bistable gene regulatory switch controlling the transition from lysogeny
to lysis in bacteriophage lambda presents a unique challenge to quantitative
modeling. Despite extensive characterization of this regulatory network, the
origin of the extreme stability of the lysogenic state remains unclear. We have
constructed a stochastic model for this switch. Using Forward Flux Sampling
simulations, we show that this model predicts an extremely low rate of
spontaneous prophage induction in a recA mutant, in agreement with experimental
observations. In our model, the DNA loop formed by octamerization of CI bound
to the O_L and O_R operator regions is crucial for stability, allowing the
lysogenic state to remain stable even when a large fraction of the total CI is
depleted by nonspecific binding to genomic DNA. DNA looping also ensures that
the switch is robust to mutations in the order of the O_R binding sites. Our
results suggest that DNA looping can provide a mechanism to maintain a stable
lysogenic state in the face of a range of challenges including noisy gene
expression, nonspecific DNA binding and operator site mutations.
| [
{
"created": "Thu, 14 May 2009 13:37:23 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Morelli",
"Marco J.",
""
],
[
"Wolde",
"Pieter Rein ten",
""
],
[
"Allen",
"Rosalind J.",
""
]
] | The bistable gene regulatory switch controlling the transition from lysogeny to lysis in bacteriophage lambda presents a unique challenge to quantitative modeling. Despite extensive characterization of this regulatory network, the origin of the extreme stability of the lysogenic state remains unclear. We have constructed a stochastic model for this switch. Using Forward Flux Sampling simulations, we show that this model predicts an extremely low rate of spontaneous prophage induction in a recA mutant, in agreement with experimental observations. In our model, the DNA loop formed by octamerization of CI bound to the O_L and O_R operator regions is crucial for stability, allowing the lysogenic state to remain stable even when a large fraction of the total CI is depleted by nonspecific binding to genomic DNA. DNA looping also ensures that the switch is robust to mutations in the order of the O_R binding sites. Our results suggest that DNA looping can provide a mechanism to maintain a stable lysogenic state in the face of a range of challenges including noisy gene expression, nonspecific DNA binding and operator site mutations. |
1609.07292 | David A. Kessler | David A. Kessler and Herbert Levine | Nonlinear self-adapting wave patterns | null | null | 10.1088/1367-2630/18/12/122001 | null | q-bio.SC nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new type of traveling wave pattern, one that can adapt to the
size of the physical system in which it is embedded. Such a system arises when the
initial state has an instability that extends down to zero wavevector,
connecting at that point to two symmetry modes of the underlying dynamical
system. The Min system of proteins in E. coli is such a system, with the
symmetry emerging from the global conservation of two proteins, MinD and MinE.
For this and related systems, traveling waves can adiabatically deform as the
system is increased in size without the increase in node number that would be
expected for an oscillatory version of a Turing instability containing an
allowed wavenumber band with a finite minimum.
| [
{
"created": "Fri, 23 Sep 2016 09:46:51 GMT",
"version": "v1"
}
] | 2017-01-04 | [
[
"Kessler",
"David A.",
""
],
[
"Levine",
"Herbert",
""
]
] | We propose a new type of traveling wave pattern, one that can adapt to the size of the physical system in which it is embedded. Such a system arises when the initial state has an instability that extends down to zero wavevector, connecting at that point to two symmetry modes of the underlying dynamical system. The Min system of proteins in E. coli is such a system, with the symmetry emerging from the global conservation of two proteins, MinD and MinE. For this and related systems, traveling waves can adiabatically deform as the system is increased in size without the increase in node number that would be expected for an oscillatory version of a Turing instability containing an allowed wavenumber band with a finite minimum. |
1203.6231 | Maroussia Favre | Maroussia Favre and Didier Sornette | Strong gender differences in reproductive success variance, and the
times to the most recent common ancestors | null | Journal of Theoretical Biology 310, 43-54 (2012) | 10.1016/j.jtbi.2012.06.026 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Time To the Most Recent Common Ancestor (TMRCA) based on human
mitochondrial DNA (mtDNA) is estimated to be twice that based on the
non-recombining part of the Y chromosome (NRY). These TMRCAs have special
demographic implications because mtDNA is transmitted only from mother to
child, and NRY from father to son. Therefore, mtDNA reflects female history,
and NRY, male history. To investigate what caused the two-to-one female-male
TMRCA ratio in humans, we develop a forward-looking agent-based model (ABM)
with overlapping generations and individual life cycles. We implement two main
mating systems: polygynandry and polygyny with different degrees in between. In
each mating system, the male population can be either homogeneous or
heterogeneous. In the latter case, some males are `alphas' and others are
`betas', which reflects the extent to which they are favored by female mates. A
heterogeneous male population implies a competition among males with the
purpose of signaling as alphas. The introduction of a heterogeneous male
population is found to reduce by a factor 2 the probability of finding equal
female and male TMRCAs and shifts the distribution of the TMRCA ratio to higher
values. We find that high male-male competition is necessary to reproduce a
TMRCA ratio of 2: less than half the males can be alphas and betas can have at
most half the fitness of alphas. In addition, in the modes that maximize the
probability of having a TMRCA ratio between 1.5 and 2.5, the present generation
has 1.4 times as many female as male ancestors. We also tested the effect of
sex-biased migration and sex-specific death rates and found that these are
unlikely to explain alone the sex-biased TMRCA ratio observed in humans. Our
results support the view that we are descended from males who were successful
in a highly competitive context, while females were facing a much smaller
female-female competition.
| [
{
"created": "Wed, 28 Mar 2012 11:26:13 GMT",
"version": "v1"
}
] | 2013-12-19 | [
[
"Favre",
"Maroussia",
""
],
[
"Sornette",
"Didier",
""
]
] | The Time To the Most Recent Common Ancestor (TMRCA) based on human mitochondrial DNA (mtDNA) is estimated to be twice that based on the non-recombining part of the Y chromosome (NRY). These TMRCAs have special demographic implications because mtDNA is transmitted only from mother to child, and NRY from father to son. Therefore, mtDNA reflects female history, and NRY, male history. To investigate what caused the two-to-one female-male TMRCA ratio in humans, we develop a forward-looking agent-based model (ABM) with overlapping generations and individual life cycles. We implement two main mating systems: polygynandry and polygyny with different degrees in between. In each mating system, the male population can be either homogeneous or heterogeneous. In the latter case, some males are `alphas' and others are `betas', which reflects the extent to which they are favored by female mates. A heterogeneous male population implies a competition among males with the purpose of signaling as alphas. The introduction of a heterogeneous male population is found to reduce by a factor 2 the probability of finding equal female and male TMRCAs and shifts the distribution of the TMRCA ratio to higher values. We find that high male-male competition is necessary to reproduce a TMRCA ratio of 2: less than half the males can be alphas and betas can have at most half the fitness of alphas. In addition, in the modes that maximize the probability of having a TMRCA ratio between 1.5 and 2.5, the present generation has 1.4 times as many female as male ancestors. We also tested the effect of sex-biased migration and sex-specific death rates and found that these are unlikely to explain alone the sex-biased TMRCA ratio observed in humans. Our results support the view that we are descended from males who were successful in a highly competitive context, while females were facing a much smaller female-female competition. |
0712.3900 | Damien Eveillard | J\'er\'emie Bourdon (LINA), Damien Eveillard (LINA), Samuel Gabillard
(LINA), Theo Merle (LINA, ENS Cachan) | Integrating heterogeneous knowledges for understanding biological
behaviors: a probabilistic approach | 10 pages | null | null | null | q-bio.QM | null | Despite recent improvements in molecular techniques, biological knowledge remains
incomplete. Reasoning about living systems therefore requires integrating
heterogeneous and partial information. Although current investigations
successfully focus on the qualitative behaviors of macromolecular networks,
other approaches provide partial quantitative information, such as protein
concentration variations over time. We consider that both kinds of information,
qualitative and quantitative, have to be combined into a modeling method to
provide a better understanding of the biological system. We propose here such a
method, using a probabilistic-like approach. After describing it in detail, we
illustrate its advantages by modeling the carbon starvation response in
Escherichia coli. For this purpose, we build an original qualitative model
based on available observations. After the formal verification of its
qualitative properties, the probabilistic model yields quantitative results
that match biological expectations, confirming the interest of our
probabilistic approach.
| [
{
"created": "Sun, 23 Dec 2007 07:22:47 GMT",
"version": "v1"
}
] | 2009-09-29 | [
[
"Bourdon",
"Jérémie",
"",
"LINA"
],
[
"Eveillard",
"Damien",
"",
"LINA"
],
[
"Gabillard",
"Samuel",
"",
"LINA"
],
[
"Merle",
"Theo",
"",
"LINA, ENS Cachan"
]
] | Despite recent improvements in molecular techniques, biological knowledge remains incomplete. Reasoning about living systems therefore requires integrating heterogeneous and partial information. Although current investigations successfully focus on the qualitative behaviors of macromolecular networks, other approaches provide partial quantitative information, such as protein concentration variations over time. We consider that both kinds of information, qualitative and quantitative, have to be combined into a modeling method to provide a better understanding of the biological system. We propose here such a method, using a probabilistic-like approach. After describing it in detail, we illustrate its advantages by modeling the carbon starvation response in Escherichia coli. For this purpose, we build an original qualitative model based on available observations. After the formal verification of its qualitative properties, the probabilistic model yields quantitative results that match biological expectations, confirming the interest of our probabilistic approach. |
1707.02614 | Daniel Hoffmann | Jean-No\"el Grad, Alba Gigante, Christoph Wilms, Jan Nikolaj Dybowski,
Ludwig Ohl, Christian Ottmann, Carsten Schmuck, and Daniel Hoffmann | Locating large flexible ligands on proteins | null | null | 10.1021/acs.jcim.7b00413 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many biologically important ligands of proteins are large, flexible, and
often charged molecules that bind to extended regions on the protein surface.
It is infeasible or expensive to locate such ligands on proteins with standard
methods such as docking or molecular dynamics (MD) simulation. The alternative
approach proposed here is the scanning of a spatial and angular grid around the
protein with smaller fragments of the large ligand. Energy values for complete
grids can be computed efficiently with a well-known Fast Fourier Transform
accelerated algorithm and a physically meaningful interaction model. We show
that the approach can readily incorporate flexibility of protein and ligand.
The energy grids (EGs) resulting from the ligand fragment scans can be
transformed into probability distributions, and then directly compared to
probability distributions estimated from MD simulations and experimental
structural data. We test the approach on a diverse set of complexes between
proteins and large, flexible ligands, including a complex of Sonic Hedgehog
protein and heparin, three heparan sulfate substrates or non-substrates of an
epimerase, a multi-branched supramolecular ligand that stabilizes a
protein-peptide complex, and a flexible zwitterionic ligand that binds to a
surface basin of a Kringle domain. In all cases the EG approach gives results
that are in good agreement with experimental data or MD simulations.
| [
{
"created": "Sun, 9 Jul 2017 18:25:52 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jul 2017 16:24:03 GMT",
"version": "v2"
}
] | 2018-04-17 | [
[
"Grad",
"Jean-Noël",
""
],
[
"Gigante",
"Alba",
""
],
[
"Wilms",
"Christoph",
""
],
[
"Dybowski",
"Jan Nikolaj",
""
],
[
"Ohl",
"Ludwig",
""
],
[
"Ottmann",
"Christian",
""
],
[
"Schmuck",
"Carsten",
""
],
[
"Hoffmann",
"Daniel",
""
]
] | Many biologically important ligands of proteins are large, flexible, and often charged molecules that bind to extended regions on the protein surface. It is infeasible or expensive to locate such ligands on proteins with standard methods such as docking or molecular dynamics (MD) simulation. The alternative approach proposed here is the scanning of a spatial and angular grid around the protein with smaller fragments of the large ligand. Energy values for complete grids can be computed efficiently with a well-known Fast Fourier Transform accelerated algorithm and a physically meaningful interaction model. We show that the approach can readily incorporate flexibility of protein and ligand. The energy grids (EGs) resulting from the ligand fragment scans can be transformed into probability distributions, and then directly compared to probability distributions estimated from MD simulations and experimental structural data. We test the approach on a diverse set of complexes between proteins and large, flexible ligands, including a complex of Sonic Hedgehog protein and heparin, three heparan sulfate substrates or non-substrates of an epimerase, a multi-branched supramolecular ligand that stabilizes a protein-peptide complex, and a flexible zwitterionic ligand that binds to a surface basin of a Kringle domain. In all cases the EG approach gives results that are in good agreement with experimental data or MD simulations. |
1901.06286 | Alexander L\"uck | Charalampos Kyriakopoulos, Pascal Giehr, Alexander L\"uck, J\"orn
Walter, Verena Wolf | A Hybrid HMM Approach for the Dynamics of DNA Methylation | 15 pages, 5 figures, 2 tables | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The understanding of mechanisms that control epigenetic changes is an
important research area in modern functional biology. Epigenetic modifications
such as DNA methylation are in general very stable over many cell divisions.
DNA methylation can however be subject to specific and fast changes over a
short time scale even in non-dividing (i.e. not-replicating) cells. Such
dynamic DNA methylation changes are caused by a combination of active
demethylation and de novo methylation processes which have not been
investigated in integrated models. Here we present a hybrid (hidden) Markov
model to describe the cycle of methylation and demethylation over (short) time
scales. Our hybrid model describes several molecular events, some happening at
deterministic time points (i.e. mechanisms that occur only during cell
division) and others occurring at random time points. We test our model
on mouse embryonic stem cells using time-resolved data. We predict methylation
changes and estimate the efficiencies of the different modification steps
related to DNA methylation and demethylation.
| [
{
"created": "Fri, 18 Jan 2019 14:55:08 GMT",
"version": "v1"
}
] | 2019-01-21 | [
[
"Kyriakopoulos",
"Charalampos",
""
],
[
"Giehr",
"Pascal",
""
],
[
"Lück",
"Alexander",
""
],
[
"Walter",
"Jörn",
""
],
[
"Wolf",
"Verena",
""
]
] | The understanding of mechanisms that control epigenetic changes is an important research area in modern functional biology. Epigenetic modifications such as DNA methylation are in general very stable over many cell divisions. DNA methylation can however be subject to specific and fast changes over a short time scale even in non-dividing (i.e. not-replicating) cells. Such dynamic DNA methylation changes are caused by a combination of active demethylation and de novo methylation processes which have not been investigated in integrated models. Here we present a hybrid (hidden) Markov model to describe the cycle of methylation and demethylation over (short) time scales. Our hybrid model describes several molecular events, some happening at deterministic time points (i.e. mechanisms that occur only during cell division) and others occurring at random time points. We test our model on mouse embryonic stem cells using time-resolved data. We predict methylation changes and estimate the efficiencies of the different modification steps related to DNA methylation and demethylation. |
1901.04053 | Tim Peterson | Sandeep Kumar, Timothy R. Peterson | Moonshots for aging | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As the global population ages, there is increased interest in living longer
and improving one's quality of life in later years. However, studying aging -
the decline in body function - is expensive and time-consuming. And despite
research success to make model organisms live longer, there still aren't really
any feasible solutions for delaying aging in humans. With space travel,
scientists couldn't know what it would take to get to the moon. They had to
extrapolate from theory and shorter-range tests. Perhaps with aging, we need a
similar moonshot philosophy. And though "shot" might imply medicine, perhaps we
need to think beyond biological interventions. Like the moon, we seem a long
way away from provable therapies to increase human healthspan (the healthy
period of one's life) or lifespan (how long one lives). This review therefore
focuses on radical proposals. We hope it might stimulate discussion on what we
might consider doing significantly differently than ongoing aging research.
| [
{
"created": "Sun, 13 Jan 2019 20:10:39 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Apr 2019 20:47:12 GMT",
"version": "v2"
}
] | 2019-04-23 | [
[
"Kumar",
"Sandeep",
""
],
[
"Peterson",
"Timothy R.",
""
]
] | As the global population ages, there is increased interest in living longer and improving one's quality of life in later years. However, studying aging - the decline in body function - is expensive and time-consuming. And despite research success to make model organisms live longer, there still aren't really any feasible solutions for delaying aging in humans. With space travel, scientists couldn't know what it would take to get to the moon. They had to extrapolate from theory and shorter-range tests. Perhaps with aging, we need a similar moonshot philosophy. And though "shot" might imply medicine, perhaps we need to think beyond biological interventions. Like the moon, we seem a long way away from provable therapies to increase human healthspan (the healthy period of one's life) or lifespan (how long one lives). This review therefore focuses on radical proposals. We hope it might stimulate discussion on what we might consider doing significantly differently than ongoing aging research. |
2401.01632 | Jose A Capitan | Jose A. Capitan and David Alonso | Out-of-equilibrium inference of stochastic model parameters through
population data from generic consumer-resource dynamics | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Consumer-resource dynamics is central in determining biomass transport across
ecosystems. The assumptions of mass action, chemostatic conditions and
stationarity in stochastic feeding dynamics lead to Holling type II functional
responses, whose use is widespread in macroscopic models of population
dynamics. However, to be useful for parameter inference, stochastic population
models need to be identifiable, meaning that model parameters can be
uniquely inferred from a large number of model observations. In this
contribution we study parameter identifiability in a multi-resource
consumer-resource model, for which we can obtain the steady-state and
out-of-equilibrium probability distributions of predator's abundances by
analytically solving the master equation. Based on these analytical results, we
can conduct in silico experiments by tracking the abundance of consumers that
are either searching for or handling prey, data then used for maximum
likelihood parameter estimation. We show that, when model observations are
recorded out of equilibrium, feeding parameters are truly identifiable, whereas
if sampling is done at stationarity, only ratios of rates can be inferred from
data. We discuss the implications of our results when inferring parameters of
general dynamical models.
| [
{
"created": "Wed, 3 Jan 2024 09:07:10 GMT",
"version": "v1"
}
] | 2024-01-04 | [
[
"Capitan",
"Jose A.",
""
],
[
"Alonso",
"David",
""
]
] | Consumer-resource dynamics is central in determining biomass transport across ecosystems. The assumptions of mass action, chemostatic conditions and stationarity in stochastic feeding dynamics lead to Holling type II functional responses, whose use is widespread in macroscopic models of population dynamics. However, to be useful for parameter inference, stochastic population models need to be identifiable, meaning that model parameters can be uniquely inferred from a large number of model observations. In this contribution we study parameter identifiability in a multi-resource consumer-resource model, for which we can obtain the steady-state and out-of-equilibrium probability distributions of predator's abundances by analytically solving the master equation. Based on these analytical results, we can conduct in silico experiments by tracking the abundance of consumers that are either searching for or handling prey, data then used for maximum likelihood parameter estimation. We show that, when model observations are recorded out of equilibrium, feeding parameters are truly identifiable, whereas if sampling is done at stationarity, only ratios of rates can be inferred from data. We discuss the implications of our results when inferring parameters of general dynamical models. |
2402.00090 | Vishnu Menon | Akash K Rao, Vishnu K Menon, Arnav Bhavsar, Shubhajit Roy Chowdhury,
Ramsingh Negi, Varun Dutt | Classification of attention performance post-longitudinal tDCS via
functional connectivity and machine learning methods | 6 pages, to be presented in the IEEE 9th International Conference for
Convergence in Technology (I2CT),Pune, April 2024. arXiv admin note:
substantial text overlap with arXiv:2401.17700 | null | null | null | q-bio.NC cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention is the brain's mechanism for selectively processing specific
stimuli while filtering out irrelevant information. Characterizing changes in
attention following long-term interventions (such as transcranial direct
current stimulation (tDCS)) has seldom been emphasized in the literature. To
classify attention performance post-tDCS, this study uses functional
connectivity and machine learning algorithms. Fifty individuals were split into
experimental and control conditions. On Day 1, EEG data was obtained as
subjects executed an attention task. From Day 2 through Day 8, the experimental
group was administered 1mA tDCS, while the control group received sham tDCS. On
Day 10, subjects repeated the task mentioned on Day 1. Functional connectivity
metrics were used to classify attention performance using various machine
learning methods. Results revealed that combining the Adaboost model and
recursive feature elimination yielded a classification accuracy of 91.84%. We
discuss the implications of our results in developing neurofeedback frameworks
to assess attention.
| [
{
"created": "Wed, 31 Jan 2024 10:38:52 GMT",
"version": "v1"
}
] | 2024-02-02 | [
[
"Rao",
"Akash K",
""
],
[
"Menon",
"Vishnu K",
""
],
[
"Bhavsar",
"Arnav",
""
],
[
"Chowdhury",
"Shubhajit Roy",
""
],
[
"Negi",
"Ramsingh",
""
],
[
"Dutt",
"Varun",
""
]
] | Attention is the brain's mechanism for selectively processing specific stimuli while filtering out irrelevant information. Characterizing changes in attention following long-term interventions (such as transcranial direct current stimulation (tDCS)) has seldom been emphasized in the literature. To classify attention performance post-tDCS, this study uses functional connectivity and machine learning algorithms. Fifty individuals were split into experimental and control conditions. On Day 1, EEG data was obtained as subjects executed an attention task. From Day 2 through Day 8, the experimental group was administered 1mA tDCS, while the control group received sham tDCS. On Day 10, subjects repeated the task mentioned on Day 1. Functional connectivity metrics were used to classify attention performance using various machine learning methods. Results revealed that combining the Adaboost model and recursive feature elimination yielded a classification accuracy of 91.84%. We discuss the implications of our results in developing neurofeedback frameworks to assess attention. |
1106.6210 | Kazuhiko Minami | Kazuhiko Minami | Equivalence between two-dimensional cell-sorting and one-dimensional
generalized random walk -- spin representations of generating operators | 32 pages | null | null | null | q-bio.CB cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two-dimensional cell-sorting problem is found to be mathematically
equivalent to the one-dimensional random walk problem with pair creations and
annihilations, i.e. the adhesion probabilities in the cell-sorting model relate
analytically to the expectation values in the random walk problem. This is an
example demonstrating that two completely different biological systems are
governed by a common mathematical structure. This result is obtained through
the equivalences of these systems with lattice spin models. It is also shown
that arbitrary generating operators can be expressed in terms of the spin operators, and
hence all biological stochastic problems can in principle be analyzed utilizing
the techniques and knowledge previously obtained in the study of lattice spin
systems.
| [
{
"created": "Thu, 30 Jun 2011 12:51:23 GMT",
"version": "v1"
}
] | 2011-07-01 | [
[
"Minami",
"Kazuhiko",
""
]
] | The two-dimensional cell-sorting problem is found to be mathematically equivalent to the one-dimensional random walk problem with pair creations and annihilations, i.e. the adhesion probabilities in the cell-sorting model relate analytically to the expectation values in the random walk problem. This is an example demonstrating that two completely different biological systems are governed by a common mathematical structure. This result is obtained through the equivalences of these systems with lattice spin models. It is also shown that arbitrary generating operators can be expressed in terms of the spin operators, and hence all biological stochastic problems can in principle be analyzed utilizing the techniques and knowledge previously obtained in the study of lattice spin systems. |
1207.0108 | Eckhard Schlemm | Eckhard Schlemm | Asymptotic fitness distribution in the Bak-Sneppen model of biological
evolution with four species | 10 pages, one figure; to appear in Journal of Statistical Physics | 2012, Journal of Statistical Physics, 148, pp 191-203 | 10.1007/s10955-012-0538-2 | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We suggest a new method to compute the asymptotic fitness distribution in the
Bak-Sneppen model of biological evolution. As applications we derive the full
asymptotic distribution in the four-species model, and give an explicit linear
recurrence relation for a set of coefficients determining the asymptotic
distribution in the five-species model.
| [
{
"created": "Sat, 30 Jun 2012 15:18:29 GMT",
"version": "v1"
}
] | 2015-05-19 | [
[
"Schlemm",
"Eckhard",
""
]
] | We suggest a new method to compute the asymptotic fitness distribution in the Bak-Sneppen model of biological evolution. As applications we derive the full asymptotic distribution in the four-species model, and give an explicit linear recurrence relation for a set of coefficients determining the asymptotic distribution in the five-species model. |
1211.5807 | Pleuni Pennings | Pleuni S. Pennings | HIV drug resistance: problems and perspectives | Updated version, minor changes in text. Review paper. Submitted to:
Infectious Disease Reports
http://www.pagepress.org/journals/index.php/idr/index | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Access to combination antiretroviral treatment (ART) has improved greatly
over recent years. At the end of 2011, more than eight million HIV infected
people were receiving antiretroviral therapy in low-income and middle-income
countries. ART generally works well in keeping the virus suppressed and the
patient healthy. However, treatment only works as long as the virus is not
resistant against the drugs used. In the last decades, HIV treatments have
become better and better at slowing down the evolution of drug resistance, so
that some patients are treated for many years without having any resistance
problems. However, for some patients, especially in low-income countries, drug
resistance is still a serious threat to their health. This essay will review
what is known about transmitted and acquired drug resistance, multi-class drug
resistance, resistance to newer drugs, resistance due to treatment for the
prevention of mother-to-child transmission, the role of minority variants
(low-frequency drug-resistance mutations), and resistance due to pre-exposure
prophylaxis.
| [
{
"created": "Sun, 25 Nov 2012 21:00:17 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jan 2013 21:23:27 GMT",
"version": "v2"
}
] | 2013-01-25 | [
[
"Pennings",
"Pleuni S.",
""
]
] | Access to combination antiretroviral treatment (ART) has improved greatly over recent years. At the end of 2011, more than eight million HIV infected people were receiving antiretroviral therapy in low-income and middle-income countries. ART generally works well in keeping the virus suppressed and the patient healthy. However, treatment only works as long as the virus is not resistant against the drugs used. In the last decades, HIV treatments have become better and better at slowing down the evolution of drug resistance, so that some patients are treated for many years without having any resistance problems. However, for some patients, especially in low-income countries, drug resistance is still a serious threat to their health. This essay will review what is known about transmitted and acquired drug resistance, multi-class drug resistance, resistance to newer drugs, resistance due to treatment for the prevention of mother-to-child transmission, the role of minority variants (low-frequency drug-resistance mutations), and resistance due to pre-exposure prophylaxis. |
2403.12684 | Matthijs Meijers | Matthijs Meijers, Denis Ruchnewitz, Jan Eberhardt, Malancha Karmakar,
Marta {\L}uksza, and Michael L\"assig | Concepts and methods for predicting viral evolution | 30 pages, 6 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The seasonal human influenza virus undergoes rapid evolution, leading to
significant changes in circulating viral strains from year to year. These
changes are typically driven by adaptive mutations, particularly in the
antigenic epitopes, the regions of the viral surface protein haemagglutinin
targeted by human antibodies. Here we describe a consistent set of methods for
data-driven predictive analysis of viral evolution. Our pipeline integrates
four types of data: (1) sequence data of viral isolates collected on a
worldwide scale, (2) epidemiological data on incidences, (3) antigenic
characterization of circulating viruses, and (4) intrinsic viral phenotypes.
From the combined analysis of these data, we obtain estimates of relative
fitness for circulating strains and predictions of clade frequencies for
periods of up to one year. Furthermore, we obtain comparative estimates of
protection against future viral populations for candidate vaccine strains,
providing a basis for pre-emptive vaccine strain selection. Continuously
updated predictions obtained from the prediction pipeline for influenza and
SARS-CoV-2 are available on the website https://previr.app.
| [
{
"created": "Tue, 19 Mar 2024 12:39:37 GMT",
"version": "v1"
},
{
"created": "Thu, 2 May 2024 10:15:27 GMT",
"version": "v2"
}
] | 2024-05-03 | [
[
"Meijers",
"Matthijs",
""
],
[
"Ruchnewitz",
"Denis",
""
],
[
"Eberhardt",
"Jan",
""
],
[
"Karmakar",
"Malancha",
""
],
[
"Łuksza",
"Marta",
""
],
[
"Lässig",
"Michael",
""
]
] | The seasonal human influenza virus undergoes rapid evolution, leading to significant changes in circulating viral strains from year to year. These changes are typically driven by adaptive mutations, particularly in the antigenic epitopes, the regions of the viral surface protein haemagglutinin targeted by human antibodies. Here we describe a consistent set of methods for data-driven predictive analysis of viral evolution. Our pipeline integrates four types of data: (1) sequence data of viral isolates collected on a worldwide scale, (2) epidemiological data on incidences, (3) antigenic characterization of circulating viruses, and (4) intrinsic viral phenotypes. From the combined analysis of these data, we obtain estimates of relative fitness for circulating strains and predictions of clade frequencies for periods of up to one year. Furthermore, we obtain comparative estimates of protection against future viral populations for candidate vaccine strains, providing a basis for pre-emptive vaccine strain selection. Continuously updated predictions obtained from the prediction pipeline for influenza and SARS-CoV-2 are available on the website https://previr.app. |
2401.08712 | Fariba Jafari Horestani | M. Mehdi Owrang O, Fariba Jafari Horestani, Ginger Schwarz | Survival Analysis of Young Triple-Negative Breast Cancer Patients | 31 Pages, 11 Figures, 7 Tables, Peer-reviewed article | null | null | null | q-bio.QM cs.LG stat.AP | http://creativecommons.org/publicdomain/zero/1.0/ | Breast cancer prognosis is crucial for effective treatment, with the disease
more common in women over 40 years old but rare under 40 years old, where less
than 5 percent of cases occur in the U.S. Studies indicate a worse prognosis in
younger women, which varies by ethnicity. Breast cancers are classified based
on receptors like estrogen, progesterone, and HER2. Triple-negative breast
cancer (TNBC), lacking these receptors, accounts for about 15 percent of cases
and is more prevalent in younger patients, often resulting in poorer outcomes.
Nevertheless, the impact of age on TNBC prognosis remains unclear. Factors like
age, race, tumor grade, size, and lymph node status are studied for their role
in TNBC's clinical outcomes, but current research is inconclusive about
age-related differences. This study uses SEER data set to examine the influence
of younger age on survivability in TNBC patients, aiming to determine if age is
a significant prognostic factor. Our experimental results on SEER dataset
confirm the existing research reports that TNBC patients have worse prognosis
compared to non-TNBC based on age. Our main goal was to investigate whether
younger age has any significance on the survivability of TNBC patients.
Experimental results do not show that younger age has any significance on the
prognosis and survival rate of the TNBC patients
| [
{
"created": "Mon, 15 Jan 2024 17:51:14 GMT",
"version": "v1"
}
] | 2024-01-18 | [
[
"O",
"M. Mehdi Owrang",
""
],
[
"Horestani",
"Fariba Jafari",
""
],
[
"Schwarz",
"Ginger",
""
]
] | Breast cancer prognosis is crucial for effective treatment, with the disease more common in women over 40 years old but rare under 40 years old, where less than 5 percent of cases occur in the U.S. Studies indicate a worse prognosis in younger women, which varies by ethnicity. Breast cancers are classified based on receptors like estrogen, progesterone, and HER2. Triple-negative breast cancer (TNBC), lacking these receptors, accounts for about 15 percent of cases and is more prevalent in younger patients, often resulting in poorer outcomes. Nevertheless, the impact of age on TNBC prognosis remains unclear. Factors like age, race, tumor grade, size, and lymph node status are studied for their role in TNBC's clinical outcomes, but current research is inconclusive about age-related differences. This study uses SEER data set to examine the influence of younger age on survivability in TNBC patients, aiming to determine if age is a significant prognostic factor. Our experimental results on SEER dataset confirm the existing research reports that TNBC patients have worse prognosis compared to non-TNBC based on age. Our main goal was to investigate whether younger age has any significance on the survivability of TNBC patients. Experimental results do not show that younger age has any significance on the prognosis and survival rate of the TNBC patients |
q-bio/0403026 | Abhijnan Rej | Abhijnan Rej | A Dynamical Similarity Approach to the Foundations of Complexity and
Coordination in Multiscale Systems | latex2e, 35 pages, University Scholar Thesis (University of
Connecticut) | null | null | null | q-bio.NC q-bio.QM | null | I review a number of cognate issues that, taken together, pertain to the
creation of a non-reductionistic theory of multiscale coordination and present
one candidate theory based on the principle of dynamical similarity.
| [
{
"created": "Thu, 18 Mar 2004 18:05:06 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Rej",
"Abhijnan",
""
]
] | I review a number of cognate issues that, taken together, pertain to the creation of a non-reductionistic theory of multiscale coordination and present one candidate theory based on the principle of dynamical similarity. |
2109.10474 | Yuxiang Wu | Yuxiang Wu, Shang Wu, Xin Wang, Chengtian Lang, Quanshi Zhang, Quan
Wen, Tianqi Xu | Rapid detection and recognition of whole brain activity in a freely
behaving Caenorhabditis elegans | null | PLOS Computational Biology 18(10): e1010594, 2022 | 10.1371/journal.pcbi.1010594 | null | q-bio.QM cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Advanced volumetric imaging methods and genetically encoded activity
indicators have permitted a comprehensive characterization of whole brain
activity at single neuron resolution in \textit{Caenorhabditis elegans}. The
constant motion and deformation of the nematode nervous system, however, impose
a great challenge for consistent identification of densely packed neurons in a
behaving animal. Here, we propose a cascade solution for long-term and rapid
recognition of head ganglion neurons in a freely moving \textit{C. elegans}.
First, potential neuronal regions from a stack of fluorescence images are
detected by a deep learning algorithm. Second, 2-dimensional neuronal regions
are fused into 3-dimensional neuron entities. Third, by exploiting the neuronal
density distribution surrounding a neuron and relative positional information
between neurons, a multi-class artificial neural network transforms engineered
neuronal feature vectors into digital neuronal identities. With a small number
of training samples, our bottom-up approach is able to process each volume -
$1024 \times 1024 \times 18$ in voxels - in less than 1 second and achieves an
accuracy of $91\%$ in neuronal detection and above $80\%$ in neuronal tracking
over a long video recording. Our work represents a step towards rapid and fully
automated algorithms for decoding whole brain activity underlying naturalistic
behaviors.
| [
{
"created": "Wed, 22 Sep 2021 01:33:54 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Sep 2021 07:16:01 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Jan 2022 15:57:51 GMT",
"version": "v3"
},
{
"created": "Thu, 15 Sep 2022 09:13:11 GMT",
"version": "v4"
}
] | 2022-10-12 | [
[
"Wu",
"Yuxiang",
""
],
[
"Wu",
"Shang",
""
],
[
"Wang",
"Xin",
""
],
[
"Lang",
"Chengtian",
""
],
[
"Zhang",
"Quanshi",
""
],
[
"Wen",
"Quan",
""
],
[
"Xu",
"Tianqi",
""
]
] | Advanced volumetric imaging methods and genetically encoded activity indicators have permitted a comprehensive characterization of whole brain activity at single neuron resolution in \textit{Caenorhabditis elegans}. The constant motion and deformation of the nematode nervous system, however, impose a great challenge for consistent identification of densely packed neurons in a behaving animal. Here, we propose a cascade solution for long-term and rapid recognition of head ganglion neurons in a freely moving \textit{C. elegans}. First, potential neuronal regions from a stack of fluorescence images are detected by a deep learning algorithm. Second, 2-dimensional neuronal regions are fused into 3-dimensional neuron entities. Third, by exploiting the neuronal density distribution surrounding a neuron and relative positional information between neurons, a multi-class artificial neural network transforms engineered neuronal feature vectors into digital neuronal identities. With a small number of training samples, our bottom-up approach is able to process each volume - $1024 \times 1024 \times 18$ in voxels - in less than 1 second and achieves an accuracy of $91\%$ in neuronal detection and above $80\%$ in neuronal tracking over a long video recording. Our work represents a step towards rapid and fully automated algorithms for decoding whole brain activity underlying naturalistic behaviors. |