id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2404.19235 | Weiyi Wang | Weiyi Wang, Shailendra Sawleshwarkar and Mahendra Piraveenan | Computational Approaches of Modelling Human Papillomavirus Transmission
and Prevention Strategies: A Systematic Review | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human papillomavirus (HPV) infection is the most common sexually transmitted
infection in the world. Persistent oncogenic HPV infection has
been a leading threat to global health and can lead to serious complications
such as cervical cancer. Prevention interventions including vaccination and
screening have been proved effective in reducing the risk of HPV-related
diseases. In recent decades, computational epidemiology has served as a
useful tool for studying HPV transmission dynamics and evaluating
prevention strategies. In this paper, we conduct a comprehensive literature
review of state-of-the-art computational epidemic models for HPV disease
dynamics, transmission dynamics, and prevention efforts. We summarise
current research trends, identify gaps in the present literature, and outline
future research directions with the potential to accelerate the containment
and/or elimination of HPV infection.
| [
{
"created": "Tue, 30 Apr 2024 03:32:06 GMT",
"version": "v1"
}
] | 2024-05-01 | [
[
"Wang",
"Weiyi",
""
],
[
"Sawleshwarkar",
"Shailendra",
""
],
[
"Piraveenan",
"Mahendra",
""
]
] | Human papillomavirus (HPV) infection is the most common sexually transmitted infection in the world. Persistent oncogenic HPV infection has been a leading threat to global health and can lead to serious complications such as cervical cancer. Prevention interventions including vaccination and screening have been proved effective in reducing the risk of HPV-related diseases. In recent decades, computational epidemiology has served as a useful tool for studying HPV transmission dynamics and evaluating prevention strategies. In this paper, we conduct a comprehensive literature review of state-of-the-art computational epidemic models for HPV disease dynamics, transmission dynamics, and prevention efforts. We summarise current research trends, identify gaps in the present literature, and outline future research directions with the potential to accelerate the containment and/or elimination of HPV infection. |
2307.08758 | Ivan Maksymov | Ivan S. Maksymov and Ganna Pogrebna | Linking Physics and Psychology of Bistable Perception Using an Eye Blink
Inspired Quantum Harmonic Oscillator Model | null | null | null | null | q-bio.NC cs.CV physics.bio-ph quant-ph | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a novel quantum-mechanical model that describes
psychological phenomena using the analogy of a harmonic oscillator represented
by an electron trapped in a potential well. Study~1 demonstrates the
application of the proposed model to bistable perception of ambiguous figures
(i.e., optical illusions), exemplified by the Necker cube. While prior research
has theoretically linked quantum mechanics to psychological phenomena, in
Study~2 we demonstrate a viable physiological connection between physics and
bistable perception. To that end, the model draws parallels between quantum
tunneling of an electron through a potential energy barrier and an eye blink,
an action known to trigger perceptual reversals. Finally, we discuss the
ability of the model to capture diverse optical illusions and other
psychological phenomena, including cognitive dissonance.
| [
{
"created": "Fri, 23 Jun 2023 06:10:00 GMT",
"version": "v1"
}
] | 2023-07-19 | [
[
"Maksymov",
"Ivan S.",
""
],
[
"Pogrebna",
"Ganna",
""
]
] | This paper introduces a novel quantum-mechanical model that describes psychological phenomena using the analogy of a harmonic oscillator represented by an electron trapped in a potential well. Study~1 demonstrates the application of the proposed model to bistable perception of ambiguous figures (i.e., optical illusions), exemplified by the Necker cube. While prior research has theoretically linked quantum mechanics to psychological phenomena, in Study~2 we demonstrate a viable physiological connection between physics and bistable perception. To that end, the model draws parallels between quantum tunneling of an electron through a potential energy barrier and an eye blink, an action known to trigger perceptual reversals. Finally, we discuss the ability of the model to capture diverse optical illusions and other psychological phenomena, including cognitive dissonance. |
2402.04274 | Elham E Khoda | Xiaohan Liu, ChiJui Chen, YanLun Huang, LingChi Yang, Elham E Khoda,
Yihui Chen, Scott Hauck, Shih-Chieh Hsu, Bo-Cheng Lai | FPGA Deployment of LFADS for Real-time Neuroscience Experiments | 6 pages, 8 figures | Fast Machine Learning for Science, ICCAD 2023 | null | null | q-bio.NC cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Large-scale recordings of neural activity are providing new opportunities to
study neural population dynamics. A powerful method for analyzing such
high-dimensional measurements is to deploy an algorithm to learn the
low-dimensional latent dynamics. LFADS (Latent Factor Analysis via Dynamical
Systems) is a deep learning method for inferring latent dynamics from
high-dimensional neural spiking data recorded simultaneously in single trials.
This method has shown remarkable performance in modeling complex brain
signals, with an average inference latency in milliseconds. As our capacity for
simultaneously recording many neurons increases exponentially, it is
becoming crucial to deploy low-latency inference for such
computing algorithms. To improve the real-time processing ability of LFADS, we
introduce an efficient implementation of LFADS models on Field
Programmable Gate Arrays (FPGAs). Our implementation shows an inference latency
of 41.97 $\mu$s for processing the data in a single trial on a Xilinx U55C.
| [
{
"created": "Fri, 2 Feb 2024 07:52:20 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Liu",
"Xiaohan",
""
],
[
"Chen",
"ChiJui",
""
],
[
"Huang",
"YanLun",
""
],
[
"Yang",
"LingChi",
""
],
[
"Khoda",
"Elham E",
""
],
[
"Chen",
"Yihui",
""
],
[
"Hauck",
"Scott",
""
],
[
"Hsu",
"Shih-Chieh",
""
],
[
"Lai",
"Bo-Cheng",
""
]
] | Large-scale recordings of neural activity are providing new opportunities to study neural population dynamics. A powerful method for analyzing such high-dimensional measurements is to deploy an algorithm to learn the low-dimensional latent dynamics. LFADS (Latent Factor Analysis via Dynamical Systems) is a deep learning method for inferring latent dynamics from high-dimensional neural spiking data recorded simultaneously in single trials. This method has shown remarkable performance in modeling complex brain signals, with an average inference latency in milliseconds. As our capacity for simultaneously recording many neurons increases exponentially, it is becoming crucial to deploy low-latency inference for such computing algorithms. To improve the real-time processing ability of LFADS, we introduce an efficient implementation of LFADS models on Field Programmable Gate Arrays (FPGAs). Our implementation shows an inference latency of 41.97 $\mu$s for processing the data in a single trial on a Xilinx U55C. |
2403.07920 | Zewen Chi | Le Zhuo, Zewen Chi, Minghao Xu, Heyan Huang, Heqi Zheng, Conghui He,
Xian-Ling Mao, Wentao Zhang | ProtLLM: An Interleaved Protein-Language LLM with Protein-as-Word
Pre-Training | https://protllm.github.io/project/ | null | null | null | q-bio.BM cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose ProtLLM, a versatile cross-modal large language model (LLM) for
both protein-centric and protein-language tasks. ProtLLM features a unique
dynamic protein mounting mechanism, enabling it to handle complex inputs where
the natural language text is interspersed with an arbitrary number of proteins.
Besides, we propose the protein-as-word language modeling approach to train
ProtLLM. By developing a specialized protein vocabulary, we equip the model
with the capability to predict not just natural language but also proteins from
a vast pool of candidates. Additionally, we construct a large-scale interleaved
protein-text dataset, named InterPT, for pre-training. This dataset
comprehensively encompasses both (1) structured data sources like protein
annotations and (2) unstructured data sources like biological research papers,
thereby endowing ProtLLM with crucial knowledge for understanding proteins. We
evaluate ProtLLM on classic supervised protein-centric tasks and explore its
novel protein-language applications. Experimental results demonstrate that
ProtLLM not only achieves superior performance against protein-specialized
baselines on protein-centric tasks but also induces zero-shot and in-context
learning capabilities on protein-language tasks.
| [
{
"created": "Wed, 28 Feb 2024 01:29:55 GMT",
"version": "v1"
}
] | 2024-03-14 | [
[
"Zhuo",
"Le",
""
],
[
"Chi",
"Zewen",
""
],
[
"Xu",
"Minghao",
""
],
[
"Huang",
"Heyan",
""
],
[
"Zheng",
"Heqi",
""
],
[
"He",
"Conghui",
""
],
[
"Mao",
"Xian-Ling",
""
],
[
"Zhang",
"Wentao",
""
]
] | We propose ProtLLM, a versatile cross-modal large language model (LLM) for both protein-centric and protein-language tasks. ProtLLM features a unique dynamic protein mounting mechanism, enabling it to handle complex inputs where the natural language text is interspersed with an arbitrary number of proteins. Besides, we propose the protein-as-word language modeling approach to train ProtLLM. By developing a specialized protein vocabulary, we equip the model with the capability to predict not just natural language but also proteins from a vast pool of candidates. Additionally, we construct a large-scale interleaved protein-text dataset, named InterPT, for pre-training. This dataset comprehensively encompasses both (1) structured data sources like protein annotations and (2) unstructured data sources like biological research papers, thereby endowing ProtLLM with crucial knowledge for understanding proteins. We evaluate ProtLLM on classic supervised protein-centric tasks and explore its novel protein-language applications. Experimental results demonstrate that ProtLLM not only achieves superior performance against protein-specialized baselines on protein-centric tasks but also induces zero-shot and in-context learning capabilities on protein-language tasks. |
1304.7670 | Timothy Sackton | Timothy B. Sackton, Russell B. Corbett-Detig, Javaregowda Nagaraju, R.
Lakshmi Vaishna, Kallare P. Arunkumar, and Daniel L. Hartl | Positive selection drives faster-Z evolution in silkmoths | 19 pages, 3 figures; revised results, discussion, and methods from
previous version | null | 10.1111/evo.12449 | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genes linked to X or Z chromosomes, which are hemizygous in the heterogametic
sex, are predicted to evolve at different rates than those on autosomes. This
faster-X effect can arise either as a consequence of hemizygosity, which leads
to more efficient selection for recessive beneficial mutations in the
heterogametic sex, or as a consequence of reduced effective population size of
the hemizygous chromosome, which leads to increased fixation of weakly
deleterious mutations due to random genetic drift. Empirical results to date
have suggested that, while the overall pattern across taxa is complicated, in
general systems with male-heterogamy show a faster-X effect primarily
attributable to more efficient selection while the only female-heterogamy taxon
studied to date (birds) shows a faster-Z effect primarily attributable to
increased drift. In order to test the generality of the faster-Z pattern seen
in birds, we sequenced the genome of the Lepidopteran insect Bombyx huttoni, a
close outgroup of the domesticated silkmoth Bombyx mori. We show that silkmoths
experience faster-Z evolution, but unlike in birds, the faster-Z effect appears
to be attributable to more efficient positive selection in females. These
results suggest that female-heterogamy alone is unlikely to be sufficient to
explain the reduced efficacy of selection on the bird Z chromosome. Instead, it
is likely that a combination of patterns of dosage compensation and overall
effective population size, among other factors, influence patterns of faster-Z
evolution.
| [
{
"created": "Mon, 29 Apr 2013 14:26:02 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Aug 2013 05:56:43 GMT",
"version": "v2"
}
] | 2014-06-10 | [
[
"Sackton",
"Timothy B.",
""
],
[
"Corbett-Detig",
"Russell B.",
""
],
[
"Nagaraju",
"Javaregowda",
""
],
[
"Vaishna",
"R. Lakshmi",
""
],
[
"Arunkumar",
"Kallare P.",
""
],
[
"Hartl",
"Daniel L.",
""
]
] | Genes linked to X or Z chromosomes, which are hemizygous in the heterogametic sex, are predicted to evolve at different rates than those on autosomes. This faster-X effect can arise either as a consequence of hemizygosity, which leads to more efficient selection for recessive beneficial mutations in the heterogametic sex, or as a consequence of reduced effective population size of the hemizygous chromosome, which leads to increased fixation of weakly deleterious mutations due to random genetic drift. Empirical results to date have suggested that, while the overall pattern across taxa is complicated, in general systems with male-heterogamy show a faster-X effect primarily attributable to more efficient selection while the only female-heterogamy taxon studied to date (birds) shows a faster-Z effect primarily attributable to increased drift. In order to test the generality of the faster-Z pattern seen in birds, we sequenced the genome of the Lepidopteran insect Bombyx huttoni, a close outgroup of the domesticated silkmoth Bombyx mori. We show that silkmoths experience faster-Z evolution, but unlike in birds, the faster-Z effect appears to be attributable to more efficient positive selection in females. These results suggest that female-heterogamy alone is unlikely to be sufficient to explain the reduced efficacy of selection on the bird Z chromosome. Instead, it is likely that a combination of patterns of dosage compensation and overall effective population size, among other factors, influence patterns of faster-Z evolution. |
2303.15712 | Frederik Van Daele | Frederik Van Daele, Olivier Honnay, Steven Janssens, Hanne De Kort | Habitat fragmentation affects climate adaptation in a forest herb | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Climate change and the resulting increased drought frequencies pose
considerable threats to forest herb populations, especially when compounded by
additional environmental challenges. Specifically, habitat fragmentation may
disrupt climate adaptation and cause shifts in mating systems. To examine this,
we conducted a garden experiment with Primula elatior offspring from 24
populations across a climate and landscape fragmentation gradient. We evaluated
vegetative, regulatory, and reproductive traits under different soil moisture
regimes, assessing local adaptation and phenotypic plasticity. We also
conducted a field study in 60 populations along the same gradient to examine
potential breakdown of reciprocal herkogamy. Our results showed an evolutionary
shift from drought avoidance in southern populations to drought tolerance in
northern populations for large, connected populations. However, fragmentation
disrupted climate clines and adaptive responses to drought in key traits
related to growth, biomass allocation and water regulation. Our findings also
indicate the beginning of an evolutionary breakdown in reciprocal herkogamy.
These disruptions resulted in significantly reduced flowering investment,
especially in southern fragmented populations. These findings provide new
evidence of how habitat fragmentation disrupts climate adaptation and drought
tolerance in Primula elatior, emphasizing the need to account for habitat
fragmentation in conservation strategies to preserve resilient forest herb
populations amidst global changes.
| [
{
"created": "Tue, 28 Mar 2023 03:47:39 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Apr 2023 11:21:02 GMT",
"version": "v2"
},
{
"created": "Wed, 17 May 2023 16:05:29 GMT",
"version": "v3"
}
] | 2023-05-18 | [
[
"Van Daele",
"Frederik",
""
],
[
"Honnay",
"Olivier",
""
],
[
"Janssens",
"Steven",
""
],
[
"De Kort",
"Hanne",
""
]
] | Climate change and the resulting increased drought frequencies pose considerable threats to forest herb populations, especially when compounded by additional environmental challenges. Specifically, habitat fragmentation may disrupt climate adaptation and cause shifts in mating systems. To examine this, we conducted a garden experiment with Primula elatior offspring from 24 populations across a climate and landscape fragmentation gradient. We evaluated vegetative, regulatory, and reproductive traits under different soil moisture regimes, assessing local adaptation and phenotypic plasticity. We also conducted a field study in 60 populations along the same gradient to examine potential breakdown of reciprocal herkogamy. Our results showed an evolutionary shift from drought avoidance in southern populations to drought tolerance in northern populations for large, connected populations. However, fragmentation disrupted climate clines and adaptive responses to drought in key traits related to growth, biomass allocation and water regulation. Our findings also indicate the beginning of an evolutionary breakdown in reciprocal herkogamy. These disruptions resulted in significantly reduced flowering investment, especially in southern fragmented populations. These findings provide new evidence of how habitat fragmentation disrupts climate adaptation and drought tolerance in Primula elatior, emphasizing the need to account for habitat fragmentation in conservation strategies to preserve resilient forest herb populations amidst global changes. |
1306.2808 | Wolfgang Keil | Manuel Schottdorf, Stephen J. Eglen, Fred Wolf and Wolfgang Keil | Can retinal ganglion cell dipoles seed iso-orientation domains in the
visual cortex? | 9 figures + 1 Supplementary figure and 1 Supplementary table | PLoS ONE 9(1): e86139 (2014) | 10.1371/journal.pone.0086139 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been argued that the emergence of roughly periodic orientation
preference maps (OPMs) in the primary visual cortex (V1) of carnivores and
primates can be explained by a so-called statistical connectivity model. This
model assumes that input to V1 neurons is dominated by feed-forward projections
originating from a small set of retinal ganglion cells (RGCs). The typical
spacing between adjacent cortical orientation columns preferring the same
orientation then arises via Moir\'{e}-Interference between hexagonal ON/OFF RGC
mosaics. While this Moir\'{e}-Interference critically depends on long-range
hexagonal order within the RGC mosaics, a recent statistical analysis of RGC
receptive field positions found no evidence for such long-range positional
order. Hexagonal order may be only one of several ways to obtain spatially
repetitive OPMs in the statistical connectivity model. Here, we investigate a
more general requirement on the spatial structure of RGC mosaics that can seed
the emergence of spatially repetitive cortical OPMs, namely that angular
correlations between so-called RGC dipoles exhibit a spatial structure similar
to that of OPM autocorrelation functions. Both in cat beta cell mosaics as well
as primate parasol receptive field mosaics we find that RGC dipole angles are
spatially uncorrelated. To help assess the level of these correlations, we
introduce a novel point process that generates mosaics with realistic nearest
neighbor statistics and a tunable degree of spatial correlations of dipole
angles. Using this process, we show that given the size of available data sets,
the presence of even weak angular correlations in the data is very unlikely. We
conclude that the layout of ON/OFF ganglion cell mosaics lacks the spatial
structure necessary to seed iso-orientation domains in the primary visual
cortex.
| [
{
"created": "Wed, 12 Jun 2013 12:57:13 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Nov 2013 14:58:37 GMT",
"version": "v2"
}
] | 2023-10-18 | [
[
"Schottdorf",
"Manuel",
""
],
[
"Eglen",
"Stephen J.",
""
],
[
"Wolf",
"Fred",
""
],
[
"Keil",
"Wolfgang",
""
]
] | It has been argued that the emergence of roughly periodic orientation preference maps (OPMs) in the primary visual cortex (V1) of carnivores and primates can be explained by a so-called statistical connectivity model. This model assumes that input to V1 neurons is dominated by feed-forward projections originating from a small set of retinal ganglion cells (RGCs). The typical spacing between adjacent cortical orientation columns preferring the same orientation then arises via Moir\'{e}-Interference between hexagonal ON/OFF RGC mosaics. While this Moir\'{e}-Interference critically depends on long-range hexagonal order within the RGC mosaics, a recent statistical analysis of RGC receptive field positions found no evidence for such long-range positional order. Hexagonal order may be only one of several ways to obtain spatially repetitive OPMs in the statistical connectivity model. Here, we investigate a more general requirement on the spatial structure of RGC mosaics that can seed the emergence of spatially repetitive cortical OPMs, namely that angular correlations between so-called RGC dipoles exhibit a spatial structure similar to that of OPM autocorrelation functions. Both in cat beta cell mosaics as well as primate parasol receptive field mosaics we find that RGC dipole angles are spatially uncorrelated. To help assess the level of these correlations, we introduce a novel point process that generates mosaics with realistic nearest neighbor statistics and a tunable degree of spatial correlations of dipole angles. Using this process, we show that given the size of available data sets, the presence of even weak angular correlations in the data is very unlikely. We conclude that the layout of ON/OFF ganglion cell mosaics lacks the spatial structure necessary to seed iso-orientation domains in the primary visual cortex. |
2407.19454 | Fran\c{c}ois Bienvenu | Fran\c{c}ois Bienvenu, Jean-Jil Duchamps, Michael Fuchs and Tsan-Cheng
Yu | The $B_2$ index of galled trees | null | null | null | null | q-bio.PE math.CO math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been an effort to extend the classical notion of
phylogenetic balance, originally defined in the context of trees, to networks.
One of the most natural ways to do this is with the so-called $B_2$ index. In
this paper, we study the $B_2$ index for a prominent class of phylogenetic
networks: galled trees. We show that the $B_2$ index of a uniform leaf-labeled
galled tree converges in distribution as the network becomes large. We
characterize the corresponding limiting distribution, and show that its
expected value is 2.707911858984... This is the first time that a balance index
has been studied to this level of detail for a random phylogenetic network.
One specificity of this work is that we use two different and independent
approaches, each with its advantages: analytic combinatorics, and local limits.
The analytic combinatorics approach is more direct, as it relies on standard
tools; but it involves slightly more complex calculations. Because it has not
previously been used to study such questions, the local limit approach requires
developing an extensive framework beforehand; however, this framework is
interesting in itself and can be used to tackle other similar problems.
| [
{
"created": "Sun, 28 Jul 2024 10:11:36 GMT",
"version": "v1"
}
] | 2024-07-30 | [
[
"Bienvenu",
"François",
""
],
[
"Duchamps",
"Jean-Jil",
""
],
[
"Fuchs",
"Michael",
""
],
[
"Yu",
"Tsan-Cheng",
""
]
] | In recent years, there has been an effort to extend the classical notion of phylogenetic balance, originally defined in the context of trees, to networks. One of the most natural ways to do this is with the so-called $B_2$ index. In this paper, we study the $B_2$ index for a prominent class of phylogenetic networks: galled trees. We show that the $B_2$ index of a uniform leaf-labeled galled tree converges in distribution as the network becomes large. We characterize the corresponding limiting distribution, and show that its expected value is 2.707911858984... This is the first time that a balance index has been studied to this level of detail for a random phylogenetic network. One specificity of this work is that we use two different and independent approaches, each with its advantages: analytic combinatorics, and local limits. The analytic combinatorics approach is more direct, as it relies on standard tools; but it involves slightly more complex calculations. Because it has not previously been used to study such questions, the local limit approach requires developing an extensive framework beforehand; however, this framework is interesting in itself and can be used to tackle other similar problems. |
1211.6615 | Werner Ehm | Werner Ehm and Jiri Wackermann | Modeling geometric-optical illusions: A variational approach | Minor corrections, final version | Journal of Mathematical Psychology 56 (2012), 404-416 | null | null | q-bio.NC math.CA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual distortions of perceived lengths, angles, or forms, are generally
known as "geometric-optical illusions" (GOI). In the present paper we focus on
a class of GOIs where the distortion of a straight line segment (the "target"
stimulus) is induced by an array of non-intersecting curvilinear elements
("context" stimulus). Assuming local target-context interactions in a vector
field representation of the context, we propose to model the perceptual
distortion of the target as the solution to a minimization problem in the
calculus of variations. We discuss properties of the solutions and reproduction
of the respective form of the perceptual distortion for several types of
contexts. Moreover, we draw a connection between the interactionist model of
GOIs and Riemannian geometry: the context stimulus is understood as perturbing
the geometry of the visual field from which the illusory distortion naturally
arises. The approach is illustrated by data from a psychophysical experiment
with nine subjects and six different contexts.
| [
{
"created": "Wed, 28 Nov 2012 14:39:16 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2013 20:06:00 GMT",
"version": "v2"
}
] | 2013-04-24 | [
[
"Ehm",
"Werner",
""
],
[
"Wackermann",
"Jiri",
""
]
] | Visual distortions of perceived lengths, angles, or forms, are generally known as "geometric-optical illusions" (GOI). In the present paper we focus on a class of GOIs where the distortion of a straight line segment (the "target" stimulus) is induced by an array of non-intersecting curvilinear elements ("context" stimulus). Assuming local target-context interactions in a vector field representation of the context, we propose to model the perceptual distortion of the target as the solution to a minimization problem in the calculus of variations. We discuss properties of the solutions and reproduction of the respective form of the perceptual distortion for several types of contexts. Moreover, we draw a connection between the interactionist model of GOIs and Riemannian geometry: the context stimulus is understood as perturbing the geometry of the visual field from which the illusory distortion naturally arises. The approach is illustrated by data from a psychophysical experiment with nine subjects and six different contexts. |
1406.1734 | Christian T\"onsing | Christian T\"onsing, Jens Timmer and Clemens Kreutz | Cause and Cure of Sloppiness in Ordinary Differential Equation Models | 17 pages, 15 figures, submitted to Phys. Rev. E on 12 April 2014 | Phys. Rev. E 90 (2014), 023303 | 10.1103/PhysRevE.90.023303 | null | q-bio.MN physics.data-an q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-based mathematical modeling of biochemical reaction networks, e.g. by
nonlinear ordinary differential equation (ODE) models, has been successfully
applied. In this context, parameter estimation and uncertainty analysis is a
major task in order to assess the quality of the description of the system by
the model. Recently, a broadened eigenvalue spectrum of the Hessian matrix of
the objective function covering orders of magnitudes was observed and has been
termed as sloppiness. In this work, we investigate the origin of sloppiness
from structures in the sensitivity matrix arising from the properties of the
model topology and the experimental design. Furthermore, we present strategies
using optimal experimental design methods in order to circumvent the sloppiness
issue and present non-sloppy designs for a benchmark model.
| [
{
"created": "Fri, 6 Jun 2014 16:49:18 GMT",
"version": "v1"
}
] | 2014-09-02 | [
[
"Tönsing",
"Christian",
""
],
[
"Timmer",
"Jens",
""
],
[
"Kreutz",
"Clemens",
""
]
] | Data-based mathematical modeling of biochemical reaction networks, e.g. by nonlinear ordinary differential equation (ODE) models, has been successfully applied. In this context, parameter estimation and uncertainty analysis is a major task in order to assess the quality of the description of the system by the model. Recently, a broadened eigenvalue spectrum of the Hessian matrix of the objective function covering orders of magnitude was observed and has been termed sloppiness. In this work, we investigate the origin of sloppiness from structures in the sensitivity matrix arising from the properties of the model topology and the experimental design. Furthermore, we present strategies using optimal experimental design methods in order to circumvent the sloppiness issue and present non-sloppy designs for a benchmark model. |
1904.10375 | Nadya Morozova | N. Bessonov, O.Butuzova, A.Minarsky, R. Penner, C. Soule, A.
Tosenberger and N. Morozova | Morphogenesis Software based on Epigenetic Code Concept | null | Computational and Structural Biotechnology Journal 17 (2019)
p.1203 | 10.1016/j.csbj.2019.08.007 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The process of morphogenesis is an evolution of the shape of an organism
together with the differentiation of its parts. This process encompasses
numerous biological processes ranging from embryogenesis to regeneration
following crises such as amputation or transplantation. A fundamental
theoretical question is where exactly do these instructions for
(re-)construction reside and how are they implemented? We have recently
proposed a set of concepts, aiming to respond to these questions and to provide
an appropriate mathematical formalization of the geometry of morphogenesis [1].
First, we consider the possibility that evolution of shape is determined by
epigenetic information, responsible for realization of different types of cell
events. Second, we suggest a set of rules for converting this epigenetic
information into instructive signals for cell events for each cell, as well as
for transforming it after each cell event. Next we give notions of cell state,
determined by its epigenetic array, and cell event, which is a change of cell
state, and formalize development as a graph (tree) of cell states connected by
5 types of cell events, corresponding to the processes of cell division, cell
growth, cell death, cell movement and cell differentiation. Here we present a
Morphogenesis Software capable of simulating the evolution of a 3D embryo
starting from zygote, following a set of rules based on our theoretical
assumptions, and thus to provide a proof-of-concept for the hypothesis of
epigenetic code regulation. The software creates a developing embryo and a
corresponding graph of cell events according to the zygotic epigenetic spectrum
and chosen parameters of the developmental rules. Variation of rules
influencing the resulting shape of an embryo may help elucidating the principal
laws underlying pattern formation.
| [
{
"created": "Tue, 23 Apr 2019 15:18:30 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2019 17:58:04 GMT",
"version": "v2"
}
] | 2019-09-23 | [
[
"Bessonov",
"N.",
""
],
[
"Butuzova",
"O.",
""
],
[
"Minarsky",
"A.",
""
],
[
"Penner",
"R.",
""
],
[
"Soule",
"C.",
""
],
[
"Tosenberger",
"A.",
""
],
[
"Morozova",
"N.",
""
]
] | The process of morphogenesis is an evolution of the shape of an organism together with the differentiation of its parts. This process encompasses numerous biological processes ranging from embryogenesis to regeneration following crisis such as amputation or transplantation. A fundamental theoretical question is where exactly do these instructions for (re-)construction reside and how are they implemented? We have recently proposed a set of concepts, aiming to respond to these questions and to provide an appropriate mathematical formalization of the geometry of morphogenesis [1]. First, we consider the possibility that evolution of shape is determined by epigenetic information, responsible for realization of different types of cell events. Second, we suggest a set of rules for converting this epigenetic information into instructive signals for cell events for each cell, as well as for transforming it after each cell event. Next we give notions of cell state, determined by its epigenetic array, and cell event, which is a change of cell state, and formalize development as a graph (tree) of cell states connected by 5 types of cell events, corresponding to the processes of cell division, cell growth, cell death, cell movement and cell differentiation. Here we present a Morphogenesis Software capable of simulating the evolution of a 3D embryo starting from zygote, following a set of rules based on our theoretical assumptions, and thus to provide a proof-of-concept for the hypothesis of epigenetic code regulation. The software creates a developing embryo and a corresponding graph of cell events according to the zygotic epigenetic spectrum and chosen parameters of the developmental rules. Variation of rules influencing the resulting shape of an embryo may help elucidating the principal laws underlying pattern formation. |
1812.06315 | Erik Fagerholm | Erik D. Fagerholm, Rosalyn J. Moran, In\^es R. Violante, Robert Leech,
Karl J. Friston | Breaking the bonds of weak coupling: the dynamic causal modelling of
oscillator amplitudes | 17 pages, 5 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of coupled oscillators are useful in describing a wide variety of
phenomena in physics, biology and economics. These models typically rest on the
premise that the oscillators are weakly coupled, meaning that amplitudes can be
assumed to be constant and dynamics can therefore be described purely in terms
of phase differences. Whilst mathematically convenient, the restrictive nature
of the weak coupling assumption can limit the explanatory power of these
phase-coupled oscillator models. We therefore propose an extension to the
weakly-coupled oscillator model that incorporates both amplitude and phase as
dependent variables. We use the bilinear neuronal state equations of dynamic
causal modelling as a foundation in deriving coupled differential equations
that describe the activity of both weakly and strongly coupled oscillators. We
show that weakly-coupled oscillator models are inadequate in describing the
processes underlying the temporally variable signals observed in a variety of
systems. We demonstrate that phase-coupled models perform well on simulations
of weakly coupled systems but fail when connectivity is no longer weak. On the
other hand, using Bayesian model selection, we show that our phase-amplitude
coupling model can describe non-weakly coupled systems more effectively despite
the added complexity associated with using amplitude as an extra dependent
variable. We demonstrate the advantage of our phase-amplitude model in the
context of model-generated data, as well as of a simulation of inter-connected
pendula, neural local field potential recordings in rodents under anaesthesia
and international economic gross domestic product data.
| [
{
"created": "Sat, 15 Dec 2018 16:26:07 GMT",
"version": "v1"
}
] | 2018-12-18 | [
[
"Fagerholm",
"Erik D.",
""
],
[
"Moran",
"Rosalyn J.",
""
],
[
"Violante",
"Inês R.",
""
],
[
"Leech",
"Robert",
""
],
[
"Friston",
"Karl J.",
""
]
] | Models of coupled oscillators are useful in describing a wide variety of phenomena in physics, biology and economics. These models typically rest on the premise that the oscillators are weakly coupled, meaning that amplitudes can be assumed to be constant and dynamics can therefore be described purely in terms of phase differences. Whilst mathematically convenient, the restrictive nature of the weak coupling assumption can limit the explanatory power of these phase-coupled oscillator models. We therefore propose an extension to the weakly-coupled oscillator model that incorporates both amplitude and phase as dependent variables. We use the bilinear neuronal state equations of dynamic causal modelling as a foundation in deriving coupled differential equations that describe the activity of both weakly and strongly coupled oscillators. We show that weakly-coupled oscillator models are inadequate in describing the processes underlying the temporally variable signals observed in a variety of systems. We demonstrate that phase-coupled models perform well on simulations of weakly coupled systems but fail when connectivity is no longer weak. On the other hand, using Bayesian model selection, we show that our phase-amplitude coupling model can describe non-weakly coupled systems more effectively despite the added complexity associated with using amplitude as an extra dependent variable. We demonstrate the advantage of our phase-amplitude model in the context of model-generated data, as well as of a simulation of inter-connected pendula, neural local field potential recordings in rodents under anaesthesia and international economic gross domestic product data. |
2405.05712 | Masato Suzuki | Makiko Aok, Mai Nishimura, Masato Suzuki, Eiriko Terasawa, Hisayo
Okayama | Characterization of the Autonomic Nervous System Activity in Females
Classified According to Mood Scores During the Follicular Phase | 5 pages, 2 figures, 1 table, 2024 IEEE International Conference on
Robotics and Automation (ICRA 2024) | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many sexually mature females suffer from premenstrual syndrome (PMS), but
effective coping methods for PMS are limited due to the complexity of symptoms
and unclear pathogenesis. Awareness has shown promise in alleviating PMS
symptoms but faces challenges in long-term recording and consistency. Our
research goal is to establish a convenient and simple method to make individual
females aware of their own psychological and autonomic conditions. In previous
research, we demonstrated that participants could be classified into non-PMS
and PMS groups based on mood scores obtained during the follicular phase.
However, the properties of neurophysiological activity in the participants
classified by mood scores have not been elucidated. This study aimed to
classify participants based on their scores on a mood questionnaire during the
follicular phase and to evaluate their autonomic nervous system (ANS) activity
using a simple device that measures pulse waves from the earlobe. Participants
were grouped into Cluster I (high positive mood) and Cluster II (low mood).
Cluster II participants showed reduced parasympathetic nervous system activity
from the follicular to the menstrual phase, indicating potential PMS symptoms.
The study demonstrates the feasibility of using mood scores to classify
individuals into PMS and non-PMS groups and monitor ANS changes across
menstrual phases. Despite limitations such as sample size and device
variability, the findings highlight a promising avenue for convenient PMS
self-monitoring.
| [
{
"created": "Wed, 8 May 2024 16:31:06 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Aok",
"Makiko",
""
],
[
"Nishimura",
"Mai",
""
],
[
"Suzuki",
"Masato",
""
],
[
"Terasawa",
"Eiriko",
""
],
[
"Okayama",
"Hisayo",
""
]
] | Many sexually mature females suffer from premenstrual syndrome (PMS), but effective coping methods for PMS are limited due to the complexity of symptoms and unclear pathogenesis. Awareness has shown promise in alleviating PMS symptoms but faces challenges in long-term recording and consistency. Our research goal is to establish a convenient and simple method to make individual females aware of their own psychological and autonomic conditions. In previous research, we demonstrated that participants could be classified into non-PMS and PMS groups based on mood scores obtained during the follicular phase. However, the properties of neurophysiological activity in the participants classified by mood scores have not been elucidated. This study aimed to classify participants based on their scores on a mood questionnaire during the follicular phase and to evaluate their autonomic nervous system (ANS) activity using a simple device that measures pulse waves from the earlobe. Participants were grouped into Cluster I (high positive mood) and Cluster II (low mood). Cluster II participants showed reduced parasympathetic nervous system activity from the follicular to the menstrual phase, indicating potential PMS symptoms. The study demonstrates the feasibility of using mood scores to classify individuals into PMS and non-PMS groups and monitor ANS changes across menstrual phases. Despite limitations such as sample size and device variability, the findings highlight a promising avenue for convenient PMS self-monitoring. |
1505.04195 | Zachary Kilpatrick PhD | Alan Veliz-Cuba, Zachary P. Kilpatrick, and Kresimir Josic | Stochastic models of evidence accumulation in changing environments | 26 pages, 7 figures | null | null | null | q-bio.NC math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Organisms and ecological groups accumulate evidence to make decisions.
Classic experiments and theoretical studies have explored this process when the
correct choice is fixed during each trial. However, we live in a constantly
changing world. What effect does such impermanence have on classical results
about decision making? To address this question we use sequential analysis to
derive a tractable model of evidence accumulation when the correct option
changes in time. Our analysis shows that ideal observers discount prior
evidence at a rate determined by the volatility of the environment, and the
dynamics of evidence accumulation is governed by the information gained over an
average environmental epoch. A plausible neural implementation of an optimal
observer in a changing environment shows that, in contrast to previous models,
neural populations representing alternate choices are coupled through
excitation. Our work builds a bridge between statistical decision making in
volatile environments and stochastic nonlinear dynamics.
| [
{
"created": "Fri, 15 May 2015 20:07:28 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Jun 2015 17:38:03 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Sep 2015 18:41:42 GMT",
"version": "v3"
},
{
"created": "Wed, 30 Sep 2015 14:40:45 GMT",
"version": "v4"
}
] | 2015-10-01 | [
[
"Veliz-Cuba",
"Alan",
""
],
[
"Kilpatrick",
"Zachary P.",
""
],
[
"Josic",
"Kresimir",
""
]
] | Organisms and ecological groups accumulate evidence to make decisions. Classic experiments and theoretical studies have explored this process when the correct choice is fixed during each trial. However, we live in a constantly changing world. What effect does such impermanence have on classical results about decision making? To address this question we use sequential analysis to derive a tractable model of evidence accumulation when the correct option changes in time. Our analysis shows that ideal observers discount prior evidence at a rate determined by the volatility of the environment, and the dynamics of evidence accumulation is governed by the information gained over an average environmental epoch. A plausible neural implementation of an optimal observer in a changing environment shows that, in contrast to previous models, neural populations representing alternate choices are coupled through excitation. Our work builds a bridge between statistical decision making in volatile environments and stochastic nonlinear dynamics. |
0712.0170 | Johannes Berg | Johannes Berg | Non-equilibrium dynamics of gene expression and the Jarzynski equality | null | null | 10.1103/PhysRevLett.100.188101 | null | q-bio.MN cond-mat.stat-mech | null | In order to express specific genes at the right time, the transcription of
genes is regulated by the presence and absence of transcription factor
molecules. With transcription factor concentrations undergoing constant
changes, gene transcription takes place out of equilibrium. In this paper we
discuss a simple mapping between dynamic models of gene expression and
stochastic systems driven out of equilibrium. Using this mapping, results of
nonequilibrium statistical mechanics such as the Jarzynski equality and the
fluctuation theorem are demonstrated for gene expression dynamics. Applications
of this approach include the determination of regulatory interactions between
genes from experimental gene expression data.
| [
{
"created": "Mon, 3 Dec 2007 18:18:08 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Berg",
"Johannes",
""
]
] | In order to express specific genes at the right time, the transcription of genes is regulated by the presence and absence of transcription factor molecules. With transcription factor concentrations undergoing constant changes, gene transcription takes place out of equilibrium. In this paper we discuss a simple mapping between dynamic models of gene expression and stochastic systems driven out of equilibrium. Using this mapping, results of nonequilibrium statistical mechanics such as the Jarzynski equality and the fluctuation theorem are demonstrated for gene expression dynamics. Applications of this approach include the determination of regulatory interactions between genes from experimental gene expression data. |
2009.04581 | Andr\'es David B\'aez-S\'anchez | Andr\'es David B\'aez-S\'anchez and Nara Bobko | Effects of anti-infection behavior on the equilibrium states of an
infectious disease | 23 pages, 2 Figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a mathematical model to analyze the effects of anti-infection
behavior on the equilibrium states of an infectious disease. The anti-infection
behavior is incorporated into a classical epidemiological SIR model, by
considering the behavior adoption rate across the population as an additional
variable. We consider also the effects on the adoption rate produced by the
disease evolution, using a dynamic payoff function and an additional
differential equation. The equilibrium states of the proposed model have
remarkable characteristics: possible coexistence of two locally stable endemic
equilibria, the coexistence of locally stable endemic and disease-free
equilibria, and even the possibility of a stable continuum of endemic
equilibrium points. We show how some of the results obtained may be used to
support strategic planning leading to effective control of the disease in the
long-term.
| [
{
"created": "Wed, 9 Sep 2020 21:33:30 GMT",
"version": "v1"
}
] | 2020-09-11 | [
[
"Báez-Sánchez",
"Andrés David",
""
],
[
"Bobko",
"Nara",
""
]
] | We propose a mathematical model to analyze the effects of anti-infection behavior on the equilibrium states of an infectious disease. The anti-infection behavior is incorporated into a classical epidemiological SIR model, by considering the behavior adoption rate across the population as an additional variable. We consider also the effects on the adoption rate produced by the disease evolution, using a dynamic payoff function and an additional differential equation. The equilibrium states of the proposed model have remarkable characteristics: possible coexistence of two locally stable endemic equilibria, the coexistence of locally stable endemic and disease-free equilibria, and even the possibility of a stable continuum of endemic equilibrium points. We show how some of the results obtained may be used to support strategic planning leading to effective control of the disease in the long-term. |
2406.15537 | Matteo Ciferri | Matteo Ferrante, Matteo Ciferri, Nicola Toschi | R&B -- Rhythm and Brain: Cross-subject Decoding of Music from Human
Brain Activity | The first two authors contributed equally to this work | null | null | null | q-bio.NC cs.AI cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Music is a universal phenomenon that profoundly influences human experiences
across cultures. This study investigates whether music can be decoded from
human brain activity measured with functional MRI (fMRI) during its perception.
Leveraging recent advancements in extensive datasets and pre-trained
computational models, we construct mappings between neural data and latent
representations of musical stimuli. Our approach integrates functional and
anatomical alignment techniques to facilitate cross-subject decoding,
addressing the challenges posed by the low temporal resolution and
signal-to-noise ratio (SNR) in fMRI data. Starting from the GTZan fMRI dataset,
where five participants listened to 540 musical stimuli from 10 different
genres while their brain activity was recorded, we used the CLAP (Contrastive
Language-Audio Pretraining) model to extract latent representations of the
musical stimuli and developed voxel-wise encoding models to identify brain
regions responsive to these stimuli. By applying a threshold to the association
between predicted and actual brain activity, we identified specific regions of
interest (ROIs) which can be interpreted as key players in music processing.
Our decoding pipeline, primarily retrieval-based, employs a linear map to
project brain activity to the corresponding CLAP features. This enables us to
predict and retrieve the musical stimuli most similar to those that originated
the fMRI data. Our results demonstrate state-of-the-art identification
accuracy, with our methods significantly outperforming existing approaches. Our
findings suggest that neural-based music retrieval systems could enable
personalized recommendations and therapeutic applications. Future work could
use higher temporal resolution neuroimaging and generative models to improve
decoding accuracy and explore the neural underpinnings of music perception and
emotion.
| [
{
"created": "Fri, 21 Jun 2024 17:11:45 GMT",
"version": "v1"
}
] | 2024-06-25 | [
[
"Ferrante",
"Matteo",
""
],
[
"Ciferri",
"Matteo",
""
],
[
"Toschi",
"Nicola",
""
]
] | Music is a universal phenomenon that profoundly influences human experiences across cultures. This study investigates whether music can be decoded from human brain activity measured with functional MRI (fMRI) during its perception. Leveraging recent advancements in extensive datasets and pre-trained computational models, we construct mappings between neural data and latent representations of musical stimuli. Our approach integrates functional and anatomical alignment techniques to facilitate cross-subject decoding, addressing the challenges posed by the low temporal resolution and signal-to-noise ratio (SNR) in fMRI data. Starting from the GTZan fMRI dataset, where five participants listened to 540 musical stimuli from 10 different genres while their brain activity was recorded, we used the CLAP (Contrastive Language-Audio Pretraining) model to extract latent representations of the musical stimuli and developed voxel-wise encoding models to identify brain regions responsive to these stimuli. By applying a threshold to the association between predicted and actual brain activity, we identified specific regions of interest (ROIs) which can be interpreted as key players in music processing. Our decoding pipeline, primarily retrieval-based, employs a linear map to project brain activity to the corresponding CLAP features. This enables us to predict and retrieve the musical stimuli most similar to those that originated the fMRI data. Our results demonstrate state-of-the-art identification accuracy, with our methods significantly outperforming existing approaches. Our findings suggest that neural-based music retrieval systems could enable personalized recommendations and therapeutic applications. Future work could use higher temporal resolution neuroimaging and generative models to improve decoding accuracy and explore the neural underpinnings of music perception and emotion. |
q-bio/0312046 | Anders Irb\"ack | Giorgio Favrin, Anders Irb\"ack, Bj\"orn Samuelsson, Stefan Wallin | Two-state folding over a weak free-energy barrier | 22 pages, 5 figures | Biophys. J. 85 (2003) 1457-1465 | 10.1016/S0006-3495(03)74578-0 | LU TP 03-07 | q-bio.BM cond-mat.soft | null | We present a Monte Carlo study of a model protein with 54 amino acids that
folds directly to its native three-helix-bundle state without forming any
well-defined intermediate state. The free-energy barrier separating the native
and unfolded states of this protein is found to be weak, even at the folding
temperature. Nevertheless, we find that melting curves to a good approximation
can be described in terms of a simple two-state system, and that the relaxation
behavior is close to single exponential. The motion along individual reaction
coordinates is roughly diffusive on timescales beyond the reconfiguration time
for an individual helix. A simple estimate based on diffusion in a square-well
potential predicts the relaxation time within a factor of two.
| [
{
"created": "Tue, 30 Dec 2003 20:23:07 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Favrin",
"Giorgio",
""
],
[
"Irbäck",
"Anders",
""
],
[
"Samuelsson",
"Björn",
""
],
[
"Wallin",
"Stefan",
""
]
] | We present a Monte Carlo study of a model protein with 54 amino acids that folds directly to its native three-helix-bundle state without forming any well-defined intermediate state. The free-energy barrier separating the native and unfolded states of this protein is found to be weak, even at the folding temperature. Nevertheless, we find that melting curves to a good approximation can be described in terms of a simple two-state system, and that the relaxation behavior is close to single exponential. The motion along individual reaction coordinates is roughly diffusive on timescales beyond the reconfiguration time for an individual helix. A simple estimate based on diffusion in a square-well potential predicts the relaxation time within a factor of two. |
1809.02623 | Greg Gloor Dr | Andrew D. Fernandes, Michael T.H.Q. Vu, Lisa-Monique Edward, Jean M.
Macklaim, and Gregory B. Gloor | A reproducible effect size is more useful than an irreproducible
hypothesis test to analyze high throughput sequencing datasets | Draft paper explaining the properties and utility of the ALDEx2
effect size metric | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-sa/4.0/ | Motivation: P values derived from the null hypothesis significance testing
framework are strongly affected by sample size, and are known to be
irreproducible in underpowered studies, yet no suitable replacement has been
proposed. Results: Here we present implementations of non-parametric
standardized median effect size estimates, dNEF, for high-throughput sequencing
datasets. Case studies are shown for transcriptome and tag-sequencing datasets.
The dNEF measure is shown to be more reproducible and robust than P values and
requires sample sizes as small as 3 to reproducibly identify differentially
abundant features. Availability: Source code and binaries freely available at:
https://bioconductor.org/packages/ALDEx2.html , omicplotR, and
https://github.com/ggloor/CoDaSeq .
| [
{
"created": "Fri, 7 Sep 2018 18:07:28 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2019 16:03:17 GMT",
"version": "v2"
}
] | 2019-05-14 | [
[
"Fernandes",
"Andrew D.",
""
],
[
"Vu",
"Michael T. H. Q.",
""
],
[
"Edward",
"Lisa-Monique",
""
],
[
"Macklaim",
"Jean M.",
""
],
[
"Gloor",
"Gregory B.",
""
]
] | Motivation: P values derived from the null hypothesis significance testing framework are strongly affected by sample size, and are known to be irreproducible in underpowered studies, yet no suitable replacement has been proposed. Results: Here we present implementations of non-parametric standardized median effect size estimates, dNEF, for high-throughput sequencing datasets. Case studies are shown for transcriptome and tag-sequencing datasets. The dNEF measure is shown to be more reproducible and robust than P values and requires sample sizes as small as 3 to reproducibly identify differentially abundant features. Availability: Source code and binaries freely available at: https://bioconductor.org/packages/ALDEx2.html , omicplotR, and https://github.com/ggloor/CoDaSeq . |
1405.6673 | Simona Cocco | Simona Cocco (LPS), John F. Marko, Remi Monasson (LPTENS) | Stochastic Ratchet Mechanisms for Replacement of Proteins Bound to DNA | \`a paraitre en PHys. Rev. Lett. june 2014 | null | 10.1103/PhysRevLett.112.238101 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experiments indicate that unbinding rates of proteins from DNA can depend on
the concentration of proteins in nearby solution. Here we present a theory of
multi-step replacement of DNA-bound proteins by solution-phase proteins. For
four different kinetic scenarios we calculate the dependence of protein
unbinding and replacement rates on solution protein concentration. We find (1)
strong effects of progressive 'rezipping' of the solution-phase protein onto
DNA sites liberated by 'unzipping' of the originally bound protein; (2) that a
model in which solution-phase proteins bind non-specifically to DNA can
describe experiments on exchanges between the non-specific DNA-binding
proteins Fis-Fis and Fis-HU; (3) that a binding-specific model describes
experiments on the exchange of CueR proteins on specific binding sites.
| [
{
"created": "Mon, 26 May 2014 18:40:57 GMT",
"version": "v1"
}
] | 2015-06-19 | [
[
"Cocco",
"Simona",
"",
"LPS"
],
[
"Marko",
"John F.",
"",
""
],
[
"Monasson",
"Remi",
"",
"LPTENS"
]
] | Experiments indicate that unbinding rates of proteins from DNA can depend on the concentration of proteins in nearby solution. Here we present a theory of multi-step replacement of DNA-bound proteins by solution-phase proteins. For four different kinetic scenarios we calculate the dependence of protein unbinding and replacement rates on solution protein concentration. We find (1) strong effects of progressive 'rezipping' of the solution-phase protein onto DNA sites liberated by 'unzipping' of the originally bound protein; (2) that a model in which solution-phase proteins bind non-specifically to DNA can describe experiments on exchanges between the non-specific DNA-binding proteins Fis-Fis and Fis-HU; (3) that a binding-specific model describes experiments on the exchange of CueR proteins on specific binding sites. |
2109.05796 | Rosa Hernansaiz-Ballesteros | Rosa Hernansaiz-Ballesteros, Christian H. Holland, Aurelien Dugourd,
Julio Saez-Rodriguez | FUNKI: Interactive functional footprint-based analysis of omics data | 4 main pages, 2 supplementary pages, 1 figure | null | null | null | q-bio.GN cs.CE | http://creativecommons.org/licenses/by/4.0/ | Motivation: Omics data, such as transcriptomics or phosphoproteomics, are
broadly used to get a snap-shot of the molecular status of cells. In
particular, changes in omics can be used to estimate the activity of pathways,
transcription factors and kinases based on known regulated targets, that we
call footprints. Then the molecular paths driving these activities can be
estimated using causal reasoning on large signaling networks. Results: We have
developed FUNKI, a FUNctional toolKIt for footprint analysis. It provides a
user-friendly interface for an easy and fast analysis of several omics data,
either from bulk or single-cell experiments. FUNKI also features different
options to visualise the results and run post-analyses, and is mirrored as a
scripted version in R. Availability: FUNKI is a free and open-source
application built on R and Shiny, available in GitHub at
https://github.com/saezlab/ShinyFUNKI under GNU v3.0 license and accessible
also in https://saezlab.shinyapps.io/funki/ Contact: pub.saez@uni-heidelberg.de
Supplementary information: We provide data examples within the app, as well as
extensive information about the different variables to select, the results, and
the different plots in the help page.
| [
{
"created": "Mon, 13 Sep 2021 09:19:21 GMT",
"version": "v1"
}
] | 2021-09-14 | [
[
"Hernansaiz-Ballesteros",
"Rosa",
""
],
[
"Holland",
"Christian H.",
""
],
[
"Dugourd",
"Aurelien",
""
],
[
"Saez-Rodriguez",
"Julio",
""
]
] | Motivation: Omics data, such as transcriptomics or phosphoproteomics, are broadly used to get a snap-shot of the molecular status of cells. In particular, changes in omics can be used to estimate the activity of pathways, transcription factors and kinases based on known regulated targets, that we call footprints. Then the molecular paths driving these activities can be estimated using causal reasoning on large signaling networks. Results: We have developed FUNKI, a FUNctional toolKIt for footprint analysis. It provides a user-friendly interface for an easy and fast analysis of several omics data, either from bulk or single-cell experiments. FUNKI also features different options to visualise the results and run post-analyses, and is mirrored as a scripted version in R. Availability: FUNKI is a free and open-source application built on R and Shiny, available in GitHub at https://github.com/saezlab/ShinyFUNKI under GNU v3.0 license and accessible also in https://saezlab.shinyapps.io/funki/ Contact: pub.saez@uni-heidelberg.de Supplementary information: We provide data examples within the app, as well as extensive information about the different variables to select, the results, and the different plots in the help page. |
1410.1417 | J. C. Phillips | J. C. Phillips | Filovirus Glycoprotein Sequence, Structure and Virulence | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leading Ebola subtypes exhibit a wide mortality range, here explained at the
molecular level by using fractal hydropathic scaling of amino acid sequences
based on protein self-organized criticality. Specific hydrophobic features in
the hydrophilic mucin-like domain suffice to account for the wide mortality
range. Significance statement: Ebola virus is spreading rapidly in Africa. The
connection between protein amino acid sequence and mortality is identified
here.
| [
{
"created": "Fri, 26 Sep 2014 21:21:06 GMT",
"version": "v1"
}
] | 2014-10-07 | [
[
"Phillips",
"J. C.",
""
]
] | Leading Ebola subtypes exhibit a wide mortality range, here explained at the molecular level by using fractal hydropathic scaling of amino acid sequences based on protein self-organized criticality. Specific hydrophobic features in the hydrophilic mucin-like domain suffice to account for the wide mortality range. Significance statement: Ebola virus is spreading rapidly in Africa. The connection between protein amino acid sequence and mortality is identified here. |
2205.02645 | Arshed Nabeel | Arshed Nabeel, Ashwin Karichannavar, Shuaib Palathingal, Jitesh
Jhawar, David B. Br\"uckner, Danny Raj M., Vishwesha Guttal | Discovering stochastic dynamical equations from biological time series
data | Updates: v3: Significantly reorganized the paper and added a section
analysis of a cell migration dataset. v4: Update arXiv title to match the
updated title of the manuscript. v5: Added sections detailing the limitations
of the approach | null | null | null | q-bio.QM cs.LG math.DS | http://creativecommons.org/licenses/by-sa/4.0/ | Stochastic differential equations (SDEs) are an important framework to model
dynamics with randomness, as is common in most biological systems. The inverse
problem of integrating these models with empirical data remains a major
challenge. Here, we present an equation discovery methodology that takes time
series data as an input, analyses fine scale fluctuations and outputs an
interpretable SDE that can correctly capture long-time dynamics of data. We
achieve this by combining traditional approaches from stochastic calculus
literature with state-of-the-art equation discovery techniques. We validate our
approach on synthetic datasets, and demonstrate the generality and
applicability of the method on two real-world datasets of vastly different
spatiotemporal scales: (i) collective movement of fish school where
stochasticity plays a crucial role, and (ii) confined migration of a single
cell, primarily following a relaxed oscillation. We make the method available
as an easy-to-use, open-source Python package, PyDaddy (Python Library for Data
Driven Dynamics).
| [
{
"created": "Thu, 5 May 2022 13:44:24 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Nov 2022 18:43:41 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Nov 2023 09:48:28 GMT",
"version": "v3"
},
{
"created": "Wed, 22 Nov 2023 03:43:55 GMT",
"version": "v4"
},
{
"created": "Sat, 17 Feb 2024 06:53:08 GMT",
"version": "v5"
}
] | 2024-02-20 | [
[
"Nabeel",
"Arshed",
""
],
[
"Karichannavar",
"Ashwin",
""
],
[
"Palathingal",
"Shuaib",
""
],
[
"Jhawar",
"Jitesh",
""
],
[
"Brückner",
"David B.",
""
],
[
"M.",
"Danny Raj",
""
],
[
"Guttal",
"Vishwesha",
""
]
] | Stochastic differential equations (SDEs) are an important framework to model dynamics with randomness, as is common in most biological systems. The inverse problem of integrating these models with empirical data remains a major challenge. Here, we present an equation discovery methodology that takes time series data as an input, analyses fine scale fluctuations and outputs an interpretable SDE that can correctly capture long-time dynamics of data. We achieve this by combining traditional approaches from stochastic calculus literature with state-of-the-art equation discovery techniques. We validate our approach on synthetic datasets, and demonstrate the generality and applicability of the method on two real-world datasets of vastly different spatiotemporal scales: (i) collective movement of fish school where stochasticity plays a crucial role, and (ii) confined migration of a single cell, primarily following a relaxed oscillation. We make the method available as an easy-to-use, open-source Python package, PyDaddy (Python Library for Data Driven Dynamics). |
1104.0021 | Stuart Borrett | Stuart R. Borrett, Andria K. Salas | Evidence for Resource Homogenization in 50 Trophic Ecosystem Networks | 10 pages, 3 figures, 1 table | Borrett, S.R., A.K. Salas. 2010. Evidence for resource
homogenization in 50 trophic ecosystem networks. Ecological Modelling 221:
1710-1716 | 10.1016/j.ecolmodel.2010.04.004 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Connectivity patterns of ecological elements are often the core concern of
ecologists working at multiple levels of organization (e.g., populations,
ecosystems, and landscapes) because these patterns often reflect the forces
shaping the system's development as well as constraining their operation. One
reason these patterns of direct connections are critical is that they establish
the pathways through which elements influence each other indirectly. Here, we
tested a hypothesized consequence of connectivity in ecosystems: the
homogenization of resource distributions in flow networks. Specifically, we
tested the generality of the systems ecology hypothesis of resource
homogenization in 50 empirically derived trophic ecosystem models representing
35 distinct ecosystems. We applied Ecological Network Analysis to calculate
resource homogenization for these models. We further evaluated the robustness
of our results in two ways. First, we verified the close correspondence between
the input- and output-oriented homogenization values to ensure that our results
were not biased by our decision to focus on the output orientation. Second, we
conducted a Monte Carlo based uncertainty analysis to determine the robustness
of our results to +/-5% error introduced into the original flow matrices for
each model. Our results show that resource homogenization occurs universally in
the 50 ecosystem models tested. We confirm that our results are not biased by
using the output-oriented homogenization values because there is a significant
linear regression between the input and output oriented homogenization (r^2 =
0.38, p < 0.001). Finally, we found that our results are robust to +/-5% error
in the flow matrices. In conclusion, we found strong support for the resource
homogenization hypothesis in 50 empirically derived ecosystem models.
| [
{
"created": "Thu, 31 Mar 2011 20:16:04 GMT",
"version": "v1"
}
] | 2011-04-04 | [
[
"Borrett",
"Stuart R.",
""
],
[
"Salas",
"Andria K.",
""
]
] | Connectivity patterns of ecological elements are often the core concern of ecologists working at multiple levels of organization (e.g., populations, ecosystems, and landscapes) because these patterns often reflect the forces shaping the system's development as well as constraining their operation. One reason these patterns of direct connections are critical is that they establish the pathways through which elements influence each other indirectly. Here, we tested a hypothesized consequence of connectivity in ecosystems: the homogenization of resource distributions in flow networks. Specifically, we tested the generality of the systems ecology hypothesis of resource homogenization in 50 empirically derived trophic ecosystem models representing 35 distinct ecosystems. We applied Ecological Network Analysis to calculate resource homogenization for these models. We further evaluated the robustness of our results in two ways. First, we verified the close correspondence between the input- and output-oriented homogenization values to ensure that our results were not biased by our decision to focus on the output orientation. Second, we conducted a Monte Carlo based uncertainty analysis to determine the robustness of our results to +/-5% error introduced into the original flow matrices for each model. Our results show that resource homogenization occurs universally in the 50 ecosystem models tested. We confirm that our results are not biased by using the output-oriented homogenization values because there is a significant linear regression between the input and output oriented homogenization (r^2 = 0.38, p < 0.001). Finally, we found that our results are robust to +/-5% error in the flow matrices. In conclusion, we found strong support for the resource homogenization hypothesis in 50 empirically derived ecosystem models. |
0902.1292 | Michael B\"orsch | N. Zarrabi, S. Ernst, M. G. Dueser, A. Golovina-Leiker, W. Becker, R.
Erdmann, S. D. Dunn, M. Borsch | Simultaneous monitoring of the two coupled motors of a single FoF1-ATP
synthase by three-color FRET using duty cycle-optimized triple-ALEX | 14 pages, 8 figures | null | 10.1117/12.809610 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | FoF1-ATP synthase is the enzyme that provides the 'chemical energy currency'
adenosine triphosphate, ATP, for living cells. The formation of ATP is
accomplished by a stepwise internal rotation of subunits within the enzyme.
Briefly, proton translocation through the membrane-bound Fo part of ATP
synthase drives a 10-step rotary motion of the ring of c subunits with respect
to the non-rotating subunits a and b. This rotation is transmitted to the gamma
and epsilon subunits of the F1 sector resulting in 120 degree steps. In order
to unravel this symmetry mismatch we monitor subunit rotation by a
single-molecule fluorescence resonance energy transfer (FRET) approach using
three fluorophores specifically attached to the enzyme: one attached to the F1
motor, another one to the Fo motor, and the third one to a non-rotating
subunit. To reduce photophysical artifacts due to spectral fluctuations of the
single fluorophores, a duty cycle-optimized alternating three-laser scheme
(DCO-ALEX) has been developed. Simultaneous observation of the stepsizes for
both motors allows the detection of reversible elastic deformations between the
rotor parts of Fo and F1.
| [
{
"created": "Sun, 8 Feb 2009 23:17:18 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Zarrabi",
"N.",
""
],
[
"Ernst",
"S.",
""
],
[
"Dueser",
"M. G.",
""
],
[
"Golovina-Leiker",
"A.",
""
],
[
"Becker",
"W.",
""
],
[
"Erdmann",
"R.",
""
],
[
"Dunn",
"S. D.",
""
],
[
"Borsch",
"M.",
""
]
] | FoF1-ATP synthase is the enzyme that provides the 'chemical energy currency' adenosine triphosphate, ATP, for living cells. The formation of ATP is accomplished by a stepwise internal rotation of subunits within the enzyme. Briefly, proton translocation through the membrane-bound Fo part of ATP synthase drives a 10-step rotary motion of the ring of c subunits with respect to the non-rotating subunits a and b. This rotation is transmitted to the gamma and epsilon subunits of the F1 sector resulting in 120 degree steps. In order to unravel this symmetry mismatch we monitor subunit rotation by a single-molecule fluorescence resonance energy transfer (FRET) approach using three fluorophores specifically attached to the enzyme: one attached to the F1 motor, another one to the Fo motor, and the third one to a non-rotating subunit. To reduce photophysical artifacts due to spectral fluctuations of the single fluorophores, a duty cycle-optimized alternating three-laser scheme (DCO-ALEX) has been developed. Simultaneous observation of the stepsizes for both motors allows the detection of reversible elastic deformations between the rotor parts of Fo and F1. |
2404.12029 | Chaitanya A. Athale | Dhruv Khatri, Shivani A. Yadav and Chaitanya A. Athale | KnotResolver: Tracking self-intersecting filaments in microscopy using
directed graphs | Manuscript in submission | null | null | null | q-bio.QM physics.bio-ph q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Quantification of microscopy time-series of in vitro reconstituted motor
driven microtubule (MT) transport in 'gliding assays' is typically performed
using computational object tracking tools. However, these are limited to
non-intersecting and rod-like filaments. Here, we describe a novel
computational image-analysis pipeline, KnotResolver, to track image time-series
of highly curved self-intersecting looped filaments (knots) by resolving
cross-overs. The code integrates filament segmentation and cross-over or 'knot'
identification based on directed graph representation, where nodes represent
cross-overs and edges represent the path connecting them. The graphs are mapped
back to contours and the distance to a reference minimized. We demonstrate the
utility of the tool by segmentation and tracking MTs from experiments with
dynein-driven wave like filament looping. The accuracy of contour detection is
sub-pixel accuracy, and Dice scores indicate a robustness to noise, better than
currently used tools. Thus KnotResolver overcomes multiple limitations of
widely used tools in microscopy of cytoskeletal filament-like structures.
| [
{
"created": "Thu, 18 Apr 2024 09:28:43 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Khatri",
"Dhruv",
""
],
[
"Yadav",
"Shivani A.",
""
],
[
"Athale",
"Chaitanya A.",
""
]
] | Quantification of microscopy time-series of in vitro reconstituted motor driven microtubule (MT) transport in 'gliding assays' is typically performed using computational object tracking tools. However, these are limited to non-intersecting and rod-like filaments. Here, we describe a novel computational image-analysis pipeline, KnotResolver, to track image time-series of highly curved self-intersecting looped filaments (knots) by resolving cross-overs. The code integrates filament segmentation and cross-over or 'knot' identification based on directed graph representation, where nodes represent cross-overs and edges represent the path connecting them. The graphs are mapped back to contours and the distance to a reference minimized. We demonstrate the utility of the tool by segmentation and tracking MTs from experiments with dynein-driven wave like filament looping. The accuracy of contour detection is sub-pixel accuracy, and Dice scores indicate a robustness to noise, better than currently used tools. Thus KnotResolver overcomes multiple limitations of widely used tools in microscopy of cytoskeletal filament-like structures. |
1102.5658 | Antti Niemi | Shuangwei Hu, Martin Lundgren and Antti J. Niemi | The Discrete Frenet Frame, Inflection Point Solitons And Curve
Visualization with Applications to Folded Proteins | 14 pages 12 figures | null | 10.1103/PhysRevE.83.061908 | null | q-bio.BM cond-mat.soft hep-th physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a transfer matrix formalism to visualize the framing of discrete
piecewise linear curves in three dimensional space. Our approach is based on
the concept of an intrinsically discrete curve, which enables us to more
effectively describe curves that in the limit where the length of line segments
vanishes approach fractal structures in lieu of continuous curves. We verify
that in the case of differentiable curves the continuum limit of our discrete
equation does reproduce the generalized Frenet equation. As an application we
consider folded proteins, their Hausdorff dimension is known to be fractal. We
explain how to employ the orientation of $C_\beta$ carbons of amino acids along
a protein backbone to introduce a preferred framing along the backbone. By
analyzing the experimentally resolved fold geometries in the Protein Data Bank
we observe that this $C_\beta$ framing relates intimately to the discrete
Frenet framing. We also explain how inflection points can be located in the
loops, and clarify their distinctive r\^ole in determining the loop structure
of folded proteins.
| [
{
"created": "Mon, 28 Feb 2011 13:40:12 GMT",
"version": "v1"
}
] | 2015-05-27 | [
[
"Hu",
"Shuangwei",
""
],
[
"Lundgren",
"Martin",
""
],
[
"Niemi",
"Antti J.",
""
]
] | We develop a transfer matrix formalism to visualize the framing of discrete piecewise linear curves in three dimensional space. Our approach is based on the concept of an intrinsically discrete curve, which enables us to more effectively describe curves that in the limit where the length of line segments vanishes approach fractal structures in lieu of continuous curves. We verify that in the case of differentiable curves the continuum limit of our discrete equation does reproduce the generalized Frenet equation. As an application we consider folded proteins, their Hausdorff dimension is known to be fractal. We explain how to employ the orientation of $C_\beta$ carbons of amino acids along a protein backbone to introduce a preferred framing along the backbone. By analyzing the experimentally resolved fold geometries in the Protein Data Bank we observe that this $C_\beta$ framing relates intimately to the discrete Frenet framing. We also explain how inflection points can be located in the loops, and clarify their distinctive r\^ole in determining the loop structure of folded proteins. |
2107.07131 | Tsvi Tlusty | Somya Mani and Tsvi Tlusty | A topological look into the evolution of developmental programs | null | null | null | null | q-bio.MN physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rapid advance of experimental techniques provides an unprecedented in-depth
view into complex developmental processes. Still, little is known on how the
complexity of multicellular organisms evolved by elaborating developmental
programs and inventing new cell types. A hurdle to understanding developmental
evolution is the difficulty of even describing the intertwined network of
spatiotemporal processes underlying the development of complex multicellular
organisms. Nonetheless, an overview of developmental trajectories can be
obtained from cell type lineage maps. Here, we propose that these lineage maps
can also reveal how developmental programs evolve: the modes of evolving new
cell types in an organism should be visible in its developmental trajectories,
and therefore in the geometry of its cell type lineage map. This idea is
demonstrated using a parsimonious generative model of developmental programs,
which allows us to reliably survey the universe of all possible programs and
examine their topological features. We find that, contrary to belief, tree-like
lineage maps are rare and lineage maps of complex multicellular organisms are
likely to be directed acyclic graphs where multiple developmental routes can
converge on the same cell type. While cell type evolution prescribes what
developmental programs come into existence, natural selection prunes those
programs which produce low-functioning organisms. Our model indicates that
additionally, lineage map topologies are correlated with such a functional
property: the ability of organisms to regenerate.
| [
{
"created": "Thu, 15 Jul 2021 05:33:19 GMT",
"version": "v1"
}
] | 2021-07-16 | [
[
"Mani",
"Somya",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | Rapid advance of experimental techniques provides an unprecedented in-depth view into complex developmental processes. Still, little is known on how the complexity of multicellular organisms evolved by elaborating developmental programs and inventing new cell types. A hurdle to understanding developmental evolution is the difficulty of even describing the intertwined network of spatiotemporal processes underlying the development of complex multicellular organisms. Nonetheless, an overview of developmental trajectories can be obtained from cell type lineage maps. Here, we propose that these lineage maps can also reveal how developmental programs evolve: the modes of evolving new cell types in an organism should be visible in its developmental trajectories, and therefore in the geometry of its cell type lineage map. This idea is demonstrated using a parsimonious generative model of developmental programs, which allows us to reliably survey the universe of all possible programs and examine their topological features. We find that, contrary to belief, tree-like lineage maps are rare and lineage maps of complex multicellular organisms are likely to be directed acyclic graphs where multiple developmental routes can converge on the same cell type. While cell type evolution prescribes what developmental programs come into existence, natural selection prunes those programs which produce low-functioning organisms. Our model indicates that additionally, lineage map topologies are correlated with such a functional property: the ability of organisms to regenerate. |
2011.01750 | Susanna Gordleeva | Susanna Yu. Gordleeva, Yulia A. Tsybina, Mikhail I. Krivonosov,
Mikhail V. Ivanchenko, Alexey A. Zaikin, Victor B. Kazantsev, Alexander N.
Gorban | Formation of working memory in a spiking neuron network accompanied by
astrocytes | null | Frontiers in Cellular Neuroscience, 15, 2021, Article 631485 | 10.3389/fncel.2021.631485 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | We propose a biologically plausible computational model of working memory
(WM) implemented by the spiking neuron network (SNN) interacting with a network
of astrocytes. SNN is modelled by the synaptically coupled Izhikevich neurons
with a non-specific architecture connection topology. Astrocytes generating
calcium signals are connected by local gap junction diffusive couplings and
interact with neurons by chemicals diffused in the extracellular space. Calcium
elevations occur in response to the increase of concentration of a
neurotransmitter released by spiking neurons when a group of them fire
coherently. In turn, gliotransmitters are released by activated astrocytes
modulating the strengths of synaptic connections in the corresponding neuronal
group. Input information is encoded as two-dimensional patterns of short
applied current pulses stimulating neurons. The output is taken from
frequencies of transient discharges of corresponding neurons. We show how a set
of information patterns with quite significant overlapping areas can be
uploaded into the neuron-astrocyte network and stored for several seconds.
Information retrieval is organised by the application of a cue pattern
representing the one from the memory set distorted by noise. We found that
successful retrieval with level of the correlation between recalled pattern and
ideal pattern more than 90% is possible for multi-item WM task. Having analysed
the dynamical mechanism of WM formation, we discovered that astrocytes
operating at a time scale of a dozen of seconds can successfully store traces
of neuronal activations corresponding to information patterns. In the retrieval
stage, the astrocytic network selectively modulates synaptic connections in SNN
leading to the successful recall. Information and dynamical characteristics of
the proposed WM model agrees with classical concepts and other WM models.
| [
{
"created": "Tue, 3 Nov 2020 14:56:35 GMT",
"version": "v1"
}
] | 2022-05-17 | [
[
"Gordleeva",
"Susanna Yu.",
""
],
[
"Tsybina",
"Yulia A.",
""
],
[
"Krivonosov",
"Mikhail I.",
""
],
[
"Ivanchenko",
"Mikhail V.",
""
],
[
"Zaikin",
"Alexey A.",
""
],
[
"Kazantsev",
"Victor B.",
""
],
[
"Gorban",
"Alexander N.",
""
]
] | We propose a biologically plausible computational model of working memory (WM) implemented by the spiking neuron network (SNN) interacting with a network of astrocytes. SNN is modelled by the synaptically coupled Izhikevich neurons with a non-specific architecture connection topology. Astrocytes generating calcium signals are connected by local gap junction diffusive couplings and interact with neurons by chemicals diffused in the extracellular space. Calcium elevations occur in response to the increase of concentration of a neurotransmitter released by spiking neurons when a group of them fire coherently. In turn, gliotransmitters are released by activated astrocytes modulating the strengths of synaptic connections in the corresponding neuronal group. Input information is encoded as two-dimensional patterns of short applied current pulses stimulating neurons. The output is taken from frequencies of transient discharges of corresponding neurons. We show how a set of information patterns with quite significant overlapping areas can be uploaded into the neuron-astrocyte network and stored for several seconds. Information retrieval is organised by the application of a cue pattern representing the one from the memory set distorted by noise. We found that successful retrieval with level of the correlation between recalled pattern and ideal pattern more than 90% is possible for multi-item WM task. Having analysed the dynamical mechanism of WM formation, we discovered that astrocytes operating at a time scale of a dozen of seconds can successfully store traces of neuronal activations corresponding to information patterns. In the retrieval stage, the astrocytic network selectively modulates synaptic connections in SNN leading to the successful recall. Information and dynamical characteristics of the proposed WM model agrees with classical concepts and other WM models. |
2008.02547 | Matthijs Meijers | Matthijs Meijers, Kanika Vanshylla, Henning Gruell, Florian Klein, and
Michael Laessig | Predicting in vivo escape dynamics of HIV-1 from a broadly neutralizing
antibody | 16 pages, 6 figures | null | 10.1073/pnas.2104651118 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Broadly neutralizing antibodies are promising candidates for treatment and
prevention of HIV-1 infections. Such antibodies can temporarily suppress viral
load in infected individuals; however, the virus often rebounds by escape
mutants that have evolved resistance. In this paper, we map an in vivo fitness
landscape of HIV-1 interacting with broadly neutralizing antibodies, using data
from a recent clinical trial. We identify two fitness factors, antibody dosage
and viral load, that determine viral reproduction rates reproducibly across
different hosts. The model successfully predicts the escape dynamics of HIV-1
in the course of an antibody treatment, including a characteristic frequency
turnover between sensitive and resistant strains. This turnover is governed by
a dosage-dependent fitness ranking, resulting from an evolutionary tradeoff
between antibody resistance and its collateral cost in drug-free growth. Our
analysis suggests resistance-cost tradeoff curves as a measure of antibody
performance in the presence of resistance evolution.
| [
{
"created": "Thu, 6 Aug 2020 09:56:32 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Meijers",
"Matthijs",
""
],
[
"Vanshylla",
"Kanika",
""
],
[
"Gruell",
"Henning",
""
],
[
"Klein",
"Florian",
""
],
[
"Laessig",
"Michael",
""
]
] | Broadly neutralizing antibodies are promising candidates for treatment and prevention of HIV-1 infections. Such antibodies can temporarily suppress viral load in infected individuals; however, the virus often rebounds by escape mutants that have evolved resistance. In this paper, we map an in vivo fitness landscape of HIV-1 interacting with broadly neutralizing antibodies, using data from a recent clinical trial. We identify two fitness factors, antibody dosage and viral load, that determine viral reproduction rates reproducibly across different hosts. The model successfully predicts the escape dynamics of HIV-1 in the course of an antibody treatment, including a characteristic frequency turnover between sensitive and resistant strains. This turnover is governed by a dosage-dependent fitness ranking, resulting from an evolutionary tradeoff between antibody resistance and its collateral cost in drug-free growth. Our analysis suggests resistance-cost tradeoff curves as a measure of antibody performance in the presence of resistance evolution. |
2307.10585 | Yonatan Ashenafi | Yonatan Ashenafi, Peter R. Kramer | Statistical Mobility of Multicellular Colonies of Flagellated Swimming
Cells | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | We study the stochastic hydrodynamics of colonies of flagellated swimming
cells, typified by multicellular choanoflagellates, which can form both rosette
and chainlike shapes. The objective is to link cell-scale dynamics to
colony-scale dynamics for various colonial morphologies. Via autoregressive
stochastic models for the cycle-averaged flagellar force dynamics and
statistical models for demographic cell-to-cell variability in flagellar
properties and placement, we derive effective transport properties of the
colonies, including cell-to-cell variability. We provide the most quantitative
detail on disclike geometries to model rosettes, but also present formulas for
the dynamics of general planar colony morphologies, which includes planar
chain-like configurations.
| [
{
"created": "Thu, 20 Jul 2023 04:58:10 GMT",
"version": "v1"
}
] | 2023-07-21 | [
[
"Ashenafi",
"Yonatan",
""
],
[
"Kramer",
"Peter R.",
""
]
] | We study the stochastic hydrodynamics of colonies of flagellated swimming cells, typified by multicellular choanoflagellates, which can form both rosette and chainlike shapes. The objective is to link cell-scale dynamics to colony-scale dynamics for various colonial morphologies. Via autoregressive stochastic models for the cycle-averaged flagellar force dynamics and statistical models for demographic cell-to-cell variability in flagellar properties and placement, we derive effective transport properties of the colonies, including cell-to-cell variability. We provide the most quantitative detail on disclike geometries to model rosettes, but also present formulas for the dynamics of general planar colony morphologies, which includes planar chain-like configurations. |
1707.07771 | Andrea Giometto Dr | Silvia Zaoli, Andrea Giometto, Amos Maritan, Andrea Rinaldo | Covariations in ecological scaling laws fostered by community dynamics | S.Z. and A.G. contributed equally to this work | Proceedings of the National Academy of Sciences, 114(40),
10672-10677 | 10.1073/pnas.1708376114 | null | q-bio.PE nlin.AO physics.bio-ph physics.data-an q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scaling laws in ecology, intended both as functional relationships among
ecologically-relevant quantities and the probability distributions that
characterize their occurrence, have long attracted the interest of empiricists
and theoreticians. Empirical evidence exists of power laws associated with the
number of species inhabiting an ecosystem, their abundances and traits.
Although their functional form appears to be ubiquitous, empirical scaling
exponents vary with ecosystem type and resource supply rate. The idea that
ecological scaling laws are linked had been entertained before, but the full
extent of macroecological pattern covariations, the role of the constraints
imposed by finite resource supply and a comprehensive empirical verification
are still unexplored. Here, we propose a theoretical scaling framework that
predicts the linkages of several macroecological patterns related to species'
abundances and body sizes. We show that such framework is consistent with the
stationary state statistics of a broad class of resource-limited community
dynamics models, regardless of parametrization and model assumptions. We verify
predicted theoretical covariations by contrasting empirical data and provide
testable hypotheses for yet unexplored patterns. We thus place the observed
variability of ecological scaling exponents into a coherent statistical
framework where patterns in ecology embed constrained fluctuations.
| [
{
"created": "Mon, 24 Jul 2017 23:26:58 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2017 21:02:33 GMT",
"version": "v2"
}
] | 2017-10-19 | [
[
"Zaoli",
"Silvia",
""
],
[
"Giometto",
"Andrea",
""
],
[
"Maritan",
"Amos",
""
],
[
"Rinaldo",
"Andrea",
""
]
] | Scaling laws in ecology, intended both as functional relationships among ecologically-relevant quantities and the probability distributions that characterize their occurrence, have long attracted the interest of empiricists and theoreticians. Empirical evidence exists of power laws associated with the number of species inhabiting an ecosystem, their abundances and traits. Although their functional form appears to be ubiquitous, empirical scaling exponents vary with ecosystem type and resource supply rate. The idea that ecological scaling laws are linked had been entertained before, but the full extent of macroecological pattern covariations, the role of the constraints imposed by finite resource supply and a comprehensive empirical verification are still unexplored. Here, we propose a theoretical scaling framework that predicts the linkages of several macroecological patterns related to species' abundances and body sizes. We show that such framework is consistent with the stationary state statistics of a broad class of resource-limited community dynamics models, regardless of parametrization and model assumptions. We verify predicted theoretical covariations by contrasting empirical data and provide testable hypotheses for yet unexplored patterns. We thus place the observed variability of ecological scaling exponents into a coherent statistical framework where patterns in ecology embed constrained fluctuations. |
2107.03971 | Julien Lagarde | Julien Lagarde | The classical mean negative asynchrony in sensorimotor synchronization
is not universal in humans. A cross-cultural study | null | null | null | null | q-bio.NC nlin.AO | http://creativecommons.org/licenses/by/4.0/ | The present study examines to what extent cultural background determines
sensorimotor synchronization in humans.
| [
{
"created": "Thu, 8 Jul 2021 16:56:05 GMT",
"version": "v1"
}
] | 2021-07-12 | [
[
"Lagarde",
"Julien",
""
]
] | The present study examines to what extent cultural background determines sensorimotor synchronization in humans. |
1212.3214 | Yupeng Cun | Yupeng Cun, Holger Fr\"ohlich | Integrating Prior Knowledge Into Prognostic Biomarker Discovery based on
Network Structure | null | null | null | null | q-bio.GN stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Predictive, stable and interpretable gene signatures are
generally seen as an important step towards a better personalized medicine.
During the last decade various methods have been proposed for that purpose.
However, one important obstacle for making gene signatures a standard tool in
clinics is the typical low reproducibility of these signatures combined with
the difficulty of achieving a clear biological interpretation. To address this,
in recent years there has been growing interest in approaches that try to
integrate information from molecular interaction networks. Results: We propose
a novel algorithm, called FrSVM, which integrates protein-protein interaction
network information into gene selection for prognostic biomarker discovery. Our
method is a simple filter-based approach, which focuses on central genes with
large differences in their expression. Compared to several other competing
methods our algorithm reveals a significantly better prediction performance and
higher signature stability. Moreover, the obtained gene lists are highly
enriched with known disease genes and drug targets. We extended our approach
further by integrating information on candidate disease genes and targets of
disease-associated Transcription Factors (TFs).
| [
{
"created": "Thu, 13 Dec 2012 16:44:55 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2013 13:47:19 GMT",
"version": "v2"
}
] | 2013-05-28 | [
[
"Cun",
"Yupeng",
""
],
[
"Fröhlich",
"Holger",
""
]
] | Background: Predictive, stable and interpretable gene signatures are generally seen as an important step towards a better personalized medicine. During the last decade various methods have been proposed for that purpose. However, one important obstacle for making gene signatures a standard tool in clinics is the typical low reproducibility of these signatures combined with the difficulty of achieving a clear biological interpretation. To address this, in recent years there has been growing interest in approaches that try to integrate information from molecular interaction networks. Results: We propose a novel algorithm, called FrSVM, which integrates protein-protein interaction network information into gene selection for prognostic biomarker discovery. Our method is a simple filter-based approach, which focuses on central genes with large differences in their expression. Compared to several other competing methods our algorithm reveals a significantly better prediction performance and higher signature stability. Moreover, the obtained gene lists are highly enriched with known disease genes and drug targets. We extended our approach further by integrating information on candidate disease genes and targets of disease-associated Transcription Factors (TFs). |
1111.4597 | Iaroslav Ispolatov | Iaroslav Ispolatov, Martin Ackermann, and Michael Doebeli | Division of labour and the evolution of multicellularity | 28 pages, 2 figures | Proc.R.Soc.B(2012)279,1768 | 10.1098/rspb.2011.1999 | null | q-bio.PE cond-mat.other | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the emergence and evolution of multicellularity and cellular
differentiation is a core problem in biology. We develop a quantitative model
that shows that a multicellular form emerges from genetically identical
unicellular ancestors when the compartmentalization of poorly compatible
physiological processes into component cells of an aggregate produces a fitness
advantage. This division of labour between the cells in the aggregate occurs
spontaneously at the regulatory level due to mechanisms present in unicellular
ancestors and does not require any genetic pre-disposition for a particular
role in the aggregate or any orchestrated cooperative behaviour of aggregate
cells. Mathematically, aggregation implies an increase in the dimensionality of
phenotype space that generates a fitness landscape with new fitness maxima, and
in which the unicellular states of optimized metabolism become fitness saddle
points. Evolution of multicellularity is modeled as evolution of a hereditary
parameter, the propensity of cells to stick together, which determines the
fraction of time a cell spends in the aggregate form. Stickiness can increase
evolutionarily due to the fitness advantage generated by the division of labour
between cells in an aggregate.
| [
{
"created": "Sun, 20 Nov 2011 00:16:06 GMT",
"version": "v1"
}
] | 2017-02-07 | [
[
"Ispolatov",
"Iaroslav",
""
],
[
"Ackermann",
"Martin",
""
],
[
"Doebeli",
"Michael",
""
]
] | Understanding the emergence and evolution of multicellularity and cellular differentiation is a core problem in biology. We develop a quantitative model that shows that a multicellular form emerges from genetically identical unicellular ancestors when the compartmentalization of poorly compatible physiological processes into component cells of an aggregate produces a fitness advantage. This division of labour between the cells in the aggregate occurs spontaneously at the regulatory level due to mechanisms present in unicellular ancestors and does not require any genetic pre-disposition for a particular role in the aggregate or any orchestrated cooperative behaviour of aggregate cells. Mathematically, aggregation implies an increase in the dimensionality of phenotype space that generates a fitness landscape with new fitness maxima, and in which the unicellular states of optimized metabolism become fitness saddle points. Evolution of multicellularity is modeled as evolution of a hereditary parameter, the propensity of cells to stick together, which determines the fraction of time a cell spends in the aggregate form. Stickiness can increase evolutionarily due to the fitness advantage generated by the division of labour between cells in an aggregate. |
2205.08610 | Ali H Husseen Al-Nuaimi Mr | Ali H. Al-Nuaimi, Emmanuel Jammeh, Lingfen Sun, and Emmanuel Ifeachor | Complexity Measures for Quantifying Changes in Electroencephalogram in
Alzheimer's Disease | null | null | 10.1155/2018/8915079 | null | q-bio.NC eess.SP | http://creativecommons.org/licenses/by/4.0/ | Alzheimer's disease (AD) is a progressive disorder that affects cognitive
brain functions and starts many years before its clinical manifestations. A
biomarker that provides a quantitative measure of changes in the brain due to
AD in the early stages would be useful for early diagnosis of AD, but this
would involve dealing with large numbers of people because up to 50% of
dementia sufferers do not receive a formal diagnosis. Thus, there is a need for
accurate, low-cost, and easy-to-use biomarkers that could be used to detect AD
in its early stages. Potentially, electroencephalogram (EEG) based biomarkers
can play a vital role in early diagnosis of AD as they can fulfill these needs.
This is a cross-sectional study that aims to demonstrate the usefulness of EEG
complexity measures in early AD diagnosis. We have focused on the three
complexity methods which have shown the greatest promise in the detection of
AD: Tsallis entropy (TsEn), Higuchi Fractal Dimension (HFD), and Lempel-Ziv
complexity (LZC). Unlike previous approaches, in this study, the
complexity measures are derived from EEG frequency bands (instead of the entire
EEG) as EEG activities have significant association with AD and this has led to
enhanced performance. The results show that AD patients have significantly
lower TsEn, HFD, and LZC values for specific EEG frequency bands and for
specific EEG channels and that this information can be used to detect AD with a
sensitivity and specificity of more than 90%.
| [
{
"created": "Tue, 17 May 2022 19:57:57 GMT",
"version": "v1"
}
] | 2022-05-19 | [
[
"Al-Nuaimi",
"Ali H.",
""
],
[
"Jammeh",
"Emmanuel",
""
],
[
"Sun",
"Lingfen",
""
],
[
"Ifeachor",
"Emmanuel",
""
]
] | Alzheimer's disease (AD) is a progressive disorder that affects cognitive brain functions and starts many years before its clinical manifestations. A biomarker that provides a quantitative measure of changes in the brain due to AD in the early stages would be useful for early diagnosis of AD, but this would involve dealing with large numbers of people because up to 50% of dementia sufferers do not receive a formal diagnosis. Thus, there is a need for accurate, low-cost, and easy-to-use biomarkers that could be used to detect AD in its early stages. Potentially, electroencephalogram (EEG) based biomarkers can play a vital role in early diagnosis of AD as they can fulfill these needs. This is a cross-sectional study that aims to demonstrate the usefulness of EEG complexity measures in early AD diagnosis. We have focused on the three complexity methods which have shown the greatest promise in the detection of AD: Tsallis entropy (TsEn), Higuchi Fractal Dimension (HFD), and Lempel-Ziv complexity (LZC). Unlike previous approaches, in this study, the complexity measures are derived from EEG frequency bands (instead of the entire EEG) as EEG activities have significant association with AD and this has led to enhanced performance. The results show that AD patients have significantly lower TsEn, HFD, and LZC values for specific EEG frequency bands and for specific EEG channels and that this information can be used to detect AD with a sensitivity and specificity of more than 90%. |
1705.03329 | Anne Robertson | Fangzhou Cheng, Anne M. Robertson, Lori Birder, F. Aura Kullmann, Jack
Hornsby, Paul Watton, Simon C. Watkins | Layer dependent role of collagen recruitment during loading of the rat
bladder wall | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we reevaluated long-standing conjectures as to the source of
the exceptionally large compliance of the bladder wall. Whereas these
conjectures were based on indirect measures of loading mechanisms, in this work
we take advantage of advances in bioimaging to directly assess collagen fibers
and wall architecture during loading. A custom biaxial mechanical testing
system compatible with multiphoton microscopy (MPM) was used to directly
measure the layer dependent collagen fiber recruitment in bladder tissue from 9
male Fischer rats (4 adult and 5 aged). As for other soft tissues, the bladder
loading curve was exponential in shape and could be divided into toe,
transition and high stress regimes. The relationship between collagen
recruitment and loading curves was evaluated in the context of the inner
bladder wall (lamina propria) and outer detrusor smooth muscle layer. The large
extensibility of the bladder was found to be possible due to folds in the wall
(rugae) that provide a mechanism for low resistance flattening without any
discernible recruitment of collagen fibers throughout the toe regime. For
elastic bladders, as the loading extended into the transition regime, a gradual
coordinated recruitment of collagen fibers between the lamina propria and
detrusor smooth muscle layers was found. A second important finding is that
wall extensibility could be lost by premature recruitment of collagen in the
outer wall that cut short the toe region. This work provides, for the first
time, a mechanistic understanding of the role of collagen recruitment in
determining bladder capacitance.
| [
{
"created": "Tue, 9 May 2017 13:44:06 GMT",
"version": "v1"
}
] | 2017-05-10 | [
[
"Cheng",
"Fangzhou",
""
],
[
"Robertson",
"Anne M.",
""
],
[
"Birder",
"Lori",
""
],
[
"Kullmann",
"F. Aura",
""
],
[
"Hornsby",
"Jack",
""
],
[
"Watton",
"Paul",
""
],
[
"Watkins",
"Simon C.",
""
]
] | In this work, we reevaluated long-standing conjectures as to the source of the exceptionally large compliance of the bladder wall. Whereas these conjectures were based on indirect measures of loading mechanisms, in this work we take advantage of advances in bioimaging to directly assess collagen fibers and wall architecture during loading. A custom biaxial mechanical testing system compatible with multiphoton microscopy (MPM) was used to directly measure the layer dependent collagen fiber recruitment in bladder tissue from 9 male Fischer rats (4 adult and 5 aged). As for other soft tissues, the bladder loading curve was exponential in shape and could be divided into toe, transition and high stress regimes. The relationship between collagen recruitment and loading curves was evaluated in the context of the inner bladder wall (lamina propria) and outer detrusor smooth muscle layer. The large extensibility of the bladder was found to be possible due to folds in the wall (rugae) that provide a mechanism for low resistance flattening without any discernible recruitment of collagen fibers throughout the toe regime. For elastic bladders, as the loading extended into the transition regime, a gradual coordinated recruitment of collagen fibers between the lamina propria and detrusor smooth muscle layers was found. A second important finding is that wall extensibility could be lost by premature recruitment of collagen in the outer wall that cut short the toe region. This work provides, for the first time, a mechanistic understanding of the role of collagen recruitment in determining bladder capacitance. |
2110.01731 | David Nguyen | David H Nguyen | Heritable Nongenetic Information in the Form of the DNA-Autonomous
Tissue Spatial Code that Governs Organismal Development, Tissue Regeneration,
and Tumor Architecture | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Numerous studies of the tumor microenvironment, interspecies xenografting,
and limb regeneration suggest the existence of a tissue spatial code [TSC] that
controls tissue structure in a quasi "epigenetic" fashion. Epigenetic is an
inadequate label, because this information does not act directly upon DNA
molecules, as methylation does. A broader term is needed to capture the diversity
of three-dimensional spatial codes in biology. One such term is Heritable
Nongenetic Information [HNI], which encompasses the TSC. The term heritable is
appropriate because this information is passed on to offspring; otherwise it
would have disappeared during evolution. Another reason for the heritability of
HNI is that the spatial information observed in tissues is not reducible to the
laws of physics, meaning structures like epithelial tubes or neural circuits do
not spontaneously form in an aqueous solution. Pre-existing physiological
information in the microenvironment is necessary.
| [
{
"created": "Mon, 4 Oct 2021 22:23:51 GMT",
"version": "v1"
}
] | 2021-10-06 | [
[
"Nguyen",
"David H",
""
]
] | Numerous studies of the tumor microenvironment, interspecies xenografting, and limb regeneration suggest the existence of a tissue spatial code [TSC] that controls tissue structure in a quasi "epigenetic" fashion. Epigenetic is an inadequate label, because this information does not act directly upon DNA molecules, as methylation does. A broader term is needed to capture the diversity of three-dimensional spatial codes in biology. One such term is Heritable Nongenetic Information [HNI], which encompasses the TSC. The term heritable is appropriate because this information is passed on to offspring; otherwise it would have disappeared during evolution. Another reason for the heritability of HNI is that the spatial information observed in tissues is not reducible to the laws of physics, meaning structures like epithelial tubes or neural circuits do not spontaneously form in an aqueous solution. Pre-existing physiological information in the microenvironment is necessary. |
2110.10040 | Olivier Faugeras | Olivier D. Faugeras, Anna Song and Romain Veltz | Spatial and color hallucinations in a mathematical model of primary
visual cortex | 30 pages, 12 figures | Comptes Rendus. Math\'ematique, Tome 360 (2022), pp. 59-87 | 10.5802/crmath.289 | null | q-bio.NC cs.NA math.DS math.NA nlin.PS | http://creativecommons.org/licenses/by/4.0/ | We study a simplified model of the representation of colors in the primate
primary cortical visual area V1. The model is described by an initial value
problem related to a Hammerstein equation. The solutions to this problem
represent the variation of the activity of populations of neurons in V1 as a
function of space and color. The two space variables describe the spatial
extent of the cortex while the two color variables describe the hue and the
saturation represented at every location in the cortex. We prove the
well-posedness of the initial value problem. We focus on its stationary
(i.e. time-independent) and spatially periodic solutions. We show that the model
equation is equivariant with respect to the direct product G of the group of
the Euclidean transformations of the planar lattice determined by the spatial
periodicity and the group of color transformations, isomorphic to O(2), and
study the equivariant bifurcations of its stationary solutions when some
parameters in the model vary. Their variations may be caused by the consumption
of drugs and the bifurcated solutions may represent visual hallucinations in
space and color. Some of the bifurcated solutions can be determined by applying
the Equivariant Branching Lemma (EBL) by determining the axial subgroups of G.
These define bifurcated solutions which are invariant under the action of the
corresponding axial subgroup. We compute analytically these solutions and
illustrate them as color images. Using advanced methods of numerical
bifurcation analysis we then explore the persistence and stability of these
solutions when varying some parameters in the model. We conjecture that we can
rely on the EBL to predict the existence of patterns that survive in large
parameter domains but not to predict their stability. On our way we discover
the existence of spatially localized stable patterns through the phenomenon of
"snaking".
| [
{
"created": "Tue, 19 Oct 2021 15:08:51 GMT",
"version": "v1"
}
] | 2022-09-16 | [
[
"Faugeras",
"Olivier D.",
""
],
[
"Song",
"Anna",
""
],
[
"Veltz",
"Romain",
""
]
] | We study a simplified model of the representation of colors in the primate primary cortical visual area V1. The model is described by an initial value problem related to a Hammerstein equation. The solutions to this problem represent the variation of the activity of populations of neurons in V1 as a function of space and color. The two space variables describe the spatial extent of the cortex while the two color variables describe the hue and the saturation represented at every location in the cortex. We prove the well-posedness of the initial value problem. We focus on its stationary, i.e. independent of time, and periodic in space solutions. We show that the model equation is equivariant with respect to the direct product G of the group of the Euclidean transformations of the planar lattice determined by the spatial periodicity and the group of color transformations, isomorphic to O(2), and study the equivariant bifurcations of its stationary solutions when some parameters in the model vary. Their variations may be caused by the consumption of drugs and the bifurcated solutions may represent visual hallucinations in space and color. Some of the bifurcated solutions can be determined by applying the Equivariant Branching Lemma (EBL) by determining the axial subgroups of G . These define bifurcated solutions which are invariant under the action of the corresponding axial subgroup. We compute analytically these solutions and illustrate them as color images. Using advanced methods of numerical bifurcation analysis we then explore the persistence and stability of these solutions when varying some parameters in the model. We conjecture that we can rely on the EBL to predict the existence of patterns that survive in large parameter domains but not to predict their stability. On our way we discover the existence of spatially localized stable patterns through the phenomenon of "snaking". |
1804.02867 | Mona Arabzadeh | Mona Arabzadeh, Mehdi Sedighi, Morteza Saheb Zamani, Sayed-Amir
Marashi | A system architecture for parallel analysis of flux-balanced metabolic
pathways | 33 pages, 13 figures, 3 tables | Computational Biology and Chemistry, Available online 24 June 2020 | 10.1016/j.compbiolchem.2020.107309 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a system architecture is proposed that approximately models
the functionality of metabolic networks. The AND/OR graph model is used to
represent the metabolic network and each processing element in the system
emulates the functionality of a metabolite. The system is implemented on a
graphics processing unit (GPU) as the hardware platform using the CUDA environment.
The proposed architecture takes advantage of the inherent parallelism in the
network structure in terms of both pathway and metabolite traversal. The
function of each element is defined such that it can find flux-balanced
pathways. Pathways in both small and large metabolic networks are applied to
the proposed architecture and the results are discussed.
| [
{
"created": "Mon, 9 Apr 2018 08:42:46 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Nov 2018 07:59:04 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Jul 2020 07:33:15 GMT",
"version": "v3"
}
] | 2020-07-03 | [
[
"Arabzadeh",
"Mona",
""
],
[
"Sedighi",
"Mehdi",
""
],
[
"Zamani",
"Morteza Saheb",
""
],
[
"Marashi",
"Sayed-Amir",
""
]
] | In this paper, a system architecture is proposed that approximately models the functionality of metabolic networks. The AND/OR graph model is used to represent the metabolic network and each processing element in the system emulates the functionality of a metabolite. The system is implemented on a graphics processing unit (GPU) as the hardware platform using the CUDA environment. The proposed architecture takes advantage of the inherent parallelism in the network structure in terms of both pathway and metabolite traversal. The function of each element is defined such that it can find flux-balanced pathways. Pathways in both small and large metabolic networks are applied to the proposed architecture and the results are discussed. |
2306.04658 | Yuchi Qiu | Yuchi Qiu, Guo-Wei Wei | Mathematics-assisted directed evolution and protein engineering | null | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Directed evolution is a molecular biology technique that is transforming
protein engineering by creating proteins with desirable properties and
functions. However, it is experimentally impossible to perform the deep
mutational scanning of the entire protein library due to the enormous
mutational space, which scales as $20^N$, where $N$ is the number of amino
acids. This has led to the rapid growth of AI-assisted directed evolution
(AIDE) or AI-assisted protein engineering (AIPE) as an emerging research field.
Aided with advanced natural language processing (NLP) techniques, including
long short-term memory, autoencoder, and transformer, sequence-based embeddings
have been dominant approaches in AIDE and AIPE. Persistent Laplacians, an
emerging technique in topological data analysis (TDA), have made
structure-based embeddings a superb option in AIDE and AIPE. We argue that a
class of persistent topological Laplacians (PTLs), including persistent
Laplacians, persistent path Laplacians, persistent sheaf Laplacians, persistent
hypergraph Laplacians, persistent hyperdigraph Laplacians, and evolutionary de
Rham-Hodge theory, can effectively overcome the limitations of the current TDA
and offer a new generation of more powerful TDA approaches. In the general
framework of topological deep learning, mathematics-assisted directed evolution
(MADE) has a great potential for future protein engineering.
| [
{
"created": "Tue, 6 Jun 2023 19:27:11 GMT",
"version": "v1"
}
] | 2023-06-09 | [
[
"Qiu",
"Yuchi",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Directed evolution is a molecular biology technique that is transforming protein engineering by creating proteins with desirable properties and functions. However, it is experimentally impossible to perform the deep mutational scanning of the entire protein library due to the enormous mutational space, which scales as $20^N$, where $N$ is the number of amino acids. This has led to the rapid growth of AI-assisted directed evolution (AIDE) or AI-assisted protein engineering (AIPE) as an emerging research field. Aided with advanced natural language processing (NLP) techniques, including long short-term memory, autoencoder, and transformer, sequence-based embeddings have been dominant approaches in AIDE and AIPE. Persistent Laplacians, an emerging technique in topological data analysis (TDA), have made structure-based embeddings a superb option in AIDE and AIPE. We argue that a class of persistent topological Laplacians (PTLs), including persistent Laplacians, persistent path Laplacians, persistent sheaf Laplacians, persistent hypergraph Laplacians, persistent hyperdigraph Laplacians, and evolutionary de Rham-Hodge theory, can effectively overcome the limitations of the current TDA and offer a new generation of more powerful TDA approaches. In the general framework of topological deep learning, mathematics-assisted directed evolution (MADE) has a great potential for future protein engineering. |
1503.01880 | Chuan-Chao Wang | Hong-Bing Yao, Chuan-Chao Wang, Jiang Wang, Xiaolan Tao, Shao-Qing
Wen, Qiajun Du, Qiongying Deng, Bingying Xu, Ying Huang, Hong-Dan Wang,
Shujin Li, Bin Cong, Liying Ma, Li Jin, Johannes Krause, Hui Li | Genetic structure of Sino-Tibetan populations revealed by forensic STR
loci | 11 pages, 2 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The origin and diversification of Sino-Tibetan populations have been the
subject of long-standing debate. However, the limited genetic information of Tibetan
populations keeps this topic far from clear. In the present study, we genotyped
15 forensic autosomal STRs from 803 unrelated Tibetan individuals from Gansu
Province (635 from Gannan and 168 from Tianzhu). We combined these data with
published datasets to infer detailed population affinities and admixture of
Sino-Tibetan populations. Our results revealed that the genetic structure of
Sino-Tibetan populations was strongly correlated with linguistic affiliations.
Although the among-population variances are relatively small, the genetic
components for Tibetan, Lolo-Burmese, and Han Chinese were quite distinctive,
especially for the Deng, Nu, and Derung of Lolo-Burmese. Southern indigenous
populations, such as Tai-Kadai and Hmong-Mien populations, might have made
substantial genetic contributions to Han Chinese and Altaic populations, but not
to Tibetans. Likewise, Han Chinese but not Tibetan shared very similar genetic
makeups with Altaic populations, which did not support the North Asian origin
of Tibetan populations. The dataset generated here is also valuable for
forensic identification.
| [
{
"created": "Fri, 6 Mar 2015 09:01:41 GMT",
"version": "v1"
}
] | 2015-03-09 | [
[
"Yao",
"Hong-Bing",
""
],
[
"Wang",
"Chuan-Chao",
""
],
[
"Wang",
"Jiang",
""
],
[
"Tao",
"Xiaolan",
""
],
[
"Wen",
"Shao-Qing",
""
],
[
"Du",
"Qiajun",
""
],
[
"Deng",
"Qiongying",
""
],
[
"Xu",
"Bingying",
""
],
[
"Huang",
"Ying",
""
],
[
"Wang",
"Hong-Dan",
""
],
[
"Li",
"Shujin",
""
],
[
"Cong",
"Bin",
""
],
[
"Ma",
"Liying",
""
],
[
"Jin",
"Li",
""
],
[
"Krause",
"Johannes",
""
],
[
"Li",
"Hui",
""
]
] | The origin and diversification of Sino-Tibetan populations have been the subject of long-standing debate. However, the limited genetic information of Tibetan populations keeps this topic far from clear. In the present study, we genotyped 15 forensic autosomal STRs from 803 unrelated Tibetan individuals from Gansu Province (635 from Gannan and 168 from Tianzhu). We combined these data with published datasets to infer detailed population affinities and admixture of Sino-Tibetan populations. Our results revealed that the genetic structure of Sino-Tibetan populations was strongly correlated with linguistic affiliations. Although the among-population variances are relatively small, the genetic components for Tibetan, Lolo-Burmese, and Han Chinese were quite distinctive, especially for the Deng, Nu, and Derung of Lolo-Burmese. Southern indigenous populations, such as Tai-Kadai and Hmong-Mien populations, might have made substantial genetic contributions to Han Chinese and Altaic populations, but not to Tibetans. Likewise, Han Chinese but not Tibetan shared very similar genetic makeups with Altaic populations, which did not support the North Asian origin of Tibetan populations. The dataset generated here is also valuable for forensic identification. |
2306.14926 | Christopher Thron | Francis G. T. Kamba, Leonard C. Eze, Jean Claude Kamgang, Christopher
P. Thron | Analysis of Control Measures for Vector-borne Diseases Using a
Multistage Vector Model with Multi-Host Sub-populations | 42 pages, 3 figures. arXiv admin note: substantial text overlap with
arXiv:1808.07574 | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We propose and analyze an epidemiological model for vector-borne diseases
that integrates a multi-stage vector population and several host
sub-populations which may be characterized by a variety of compartmental model
types: subpopulations all include Susceptible and Infected compartments, but
may or may not include Exposed and/or Recovered compartments. The model was
originally designed to evaluate the effectiveness of various prophylactic
measures in malaria-endemic areas, but can be applied as well to other
vector-borne diseases. This model is expressed as a system of several
differential equations, where the number of equations depends on the particular
assumptions of the model. We compute the basic reproduction number $\mathcal
R_0$, and show that if $\mathcal R_0\leqslant 1$, the disease free equilibrium
(DFE) is globally asymptotically stable (GAS) on the nonnegative orthant. If
$\mathcal R_0>1$, the system admits a unique endemic equilibrium (EE) that is
GAS. We analyze the sensitivity of $\mathcal R_0$ and the EE to different system
parameters, and based on this analysis we discuss the relative effectiveness of
different control measures.
| [
{
"created": "Sat, 24 Jun 2023 11:26:03 GMT",
"version": "v1"
}
] | 2023-06-28 | [
[
"Kamba",
"Francis G. T.",
""
],
[
"Eze",
"Leonard C.",
""
],
[
"Kamgang",
"Jean Claude",
""
],
[
"Thron",
"Christopher P.",
""
]
] | We propose and analyze an epidemiological model for vector-borne diseases that integrates a multi-stage vector population and several host sub-populations which may be characterized by a variety of compartmental model types: subpopulations all include Susceptible and Infected compartments, but may or may not include Exposed and/or Recovered compartments. The model was originally designed to evaluate the effectiveness of various prophylactic measures in malaria-endemic areas, but can be applied as well to other vector-borne diseases. This model is expressed as a system of several differential equations, where the number of equations depends on the particular assumptions of the model. We compute the basic reproduction number $\mathcal R_0$, and show that if $\mathcal R_0\leqslant 1$, the disease free equilibrium (DFE) is globally asymptotically stable (GAS) on the nonnegative orthant. If $\mathcal R_0>1$, the system admits a unique endemic equilibrium (EE) that is GAS. We analyze the sensitivity of $\mathcal R_0$ and the EE to different system parameters, and based on this analysis we discuss the relative effectiveness of different control measures. |
2401.15047 | Akil Narayan | Caleb C. Berggren, David Jiang, Y.F. Jack Wang, Jake A. Bergquist,
Lindsay C. Rupp, Zexin Liu, Rob S. MacLeod, Akil Narayan, Lucas H. Timmins | Influence of Material Parameter Variability on the Predicted Coronary
Artery Biomechanical Environment via Uncertainty Quantification | To appear: Biomechanics and Modeling in Mechanobiology | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Central to the clinical adoption of patient-specific modeling strategies is
demonstrating that simulation results are reliable and safe. Simulation
frameworks must be robust to uncertainty in model input(s), and levels of
confidence should accompany results. In this study we applied a coupled
uncertainty quantification-finite element (FE) framework to understand the
impact of uncertainty in vascular material properties on variability in
predicted stresses. Univariate probability distributions were fit to material
parameters derived from layer-specific mechanical behavior testing of human
coronary tissue. Parameters were assumed to be probabilistically independent,
allowing for efficient parameter ensemble sampling. In an idealized coronary
artery geometry, a forward FE model for each parameter ensemble was created to
predict tissue stresses under physiologic loading. An emulator was constructed
within the UncertainSCI software using polynomial chaos techniques, and
statistics and sensitivities were directly computed. Results demonstrated that
material parameter uncertainty propagates to variability in predicted stresses
across the vessel wall, with the largest dispersions in stress within the
adventitial layer. Variability in stress was most sensitive to uncertainties in
the anisotropic component of the strain energy function. Unary and binary
interactions within the adventitial layer were the main contributors to stress
variance, and the leading factor in stress variability was uncertainty in the
stress-like material parameter summarizing contribution of the embedded fibers
to the overall artery stiffness. Results from a patient-specific coronary model
confirmed many of these findings. Collectively, this highlights the impact of
material property variation on predicted artery stresses and presents a
pipeline to explore and characterize uncertainty in computational biomechanics.
| [
{
"created": "Fri, 26 Jan 2024 18:19:15 GMT",
"version": "v1"
}
] | 2024-01-29 | [
[
"Berggren",
"Caleb C.",
""
],
[
"Jiang",
"David",
""
],
[
"Wang",
"Y. F. Jack",
""
],
[
"Bergquist",
"Jake A.",
""
],
[
"Rupp",
"Lindsay C.",
""
],
[
"Liu",
"Zexin",
""
],
[
"MacLeod",
"Rob S.",
""
],
[
"Narayan",
"Akil",
""
],
[
"Timmins",
"Lucas H.",
""
]
] | Central to the clinical adoption of patient-specific modeling strategies is demonstrating that simulation results are reliable and safe. Simulation frameworks must be robust to uncertainty in model input(s), and levels of confidence should accompany results. In this study we applied a coupled uncertainty quantification-finite element (FE) framework to understand the impact of uncertainty in vascular material properties on variability in predicted stresses. Univariate probability distributions were fit to material parameters derived from layer-specific mechanical behavior testing of human coronary tissue. Parameters were assumed to be probabilistically independent, allowing for efficient parameter ensemble sampling. In an idealized coronary artery geometry, a forward FE model for each parameter ensemble was created to predict tissue stresses under physiologic loading. An emulator was constructed within the UncertainSCI software using polynomial chaos techniques, and statistics and sensitivities were directly computed. Results demonstrated that material parameter uncertainty propagates to variability in predicted stresses across the vessel wall, with the largest dispersions in stress within the adventitial layer. Variability in stress was most sensitive to uncertainties in the anisotropic component of the strain energy function. Unary and binary interactions within the adventitial layer were the main contributors to stress variance, and the leading factor in stress variability was uncertainty in the stress-like material parameter summarizing contribution of the embedded fibers to the overall artery stiffness. Results from a patient-specific coronary model confirmed many of these findings. Collectively, this highlights the impact of material property variation on predicted artery stresses and presents a pipeline to explore and characterize uncertainty in computational biomechanics. |
1406.2405 | Andrew Vlasic | Andrew Vlasic | Stochastic Replicator Dynamics Subject to Markovian Switching | 14 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Population dynamics are often subject to random independent changes in the
environment. For the two strategy stochastic replicator dynamic, we assume that
stochastic changes in the environment replace the payoffs and variance. This is
modeled by a continuous time Markov chain in a finite atom space. We establish
conditions for this dynamic to have an analogous characterization of the
long-run behavior to that of the deterministic dynamic. To create intuition, we
first consider the case when the Markov chain has two states. A very natural
extension to the general finite state space of the Markov chain will be given.
| [
{
"created": "Tue, 10 Jun 2014 02:33:33 GMT",
"version": "v1"
}
] | 2014-06-11 | [
[
"Vlasic",
"Andrew",
""
]
] | Population dynamics are often subject to random independent changes in the environment. For the two strategy stochastic replicator dynamic, we assume that stochastic changes in the environment replace the payoffs and variance. This is modeled by a continuous time Markov chain in a finite atom space. We establish conditions for this dynamic to have an analogous characterization of the long-run behavior to that of the deterministic dynamic. To create intuition, we first consider the case when the Markov chain has two states. A very natural extension to the general finite state space of the Markov chain will be given. |
2405.09809 | Joshua Pickard | Joshua Pickard, Cooper Stansbury, Amit Surana, Lindsey Muir, Anthony
Bloch, and Indika Rajapakse | Biomarker Selection for Adaptive Systems | null | null | null | null | q-bio.MN math.OC | http://creativecommons.org/licenses/by/4.0/ | Biomarkers enable objective monitoring of a given cell or state in a
biological system and are widely used in research, biomanufacturing, and
clinical practice. However, identifying appropriate biomarkers that are both
robustly measurable and capture a state accurately remains challenging. We
present a framework for biomarker identification based upon observability
guided sensor selection. Our methods, Dynamic Sensor Selection (DSS) and
Structure-Guided Sensor Selection (SGSS), utilize temporal models and
experimental data, offering a template for applying observability theory to
data from biological systems. Unlike conventional methods that assume
well-known, fixed dynamics, DSS adaptively selects biomarkers or sensors that
maximize observability while accounting for the time-varying nature of
biological systems. Additionally, SGSS incorporates structural information and
diverse data to identify sensors which are resilient against inaccuracies in
our model of the underlying system. We validate our approaches by performing
estimation on high dimensional systems derived from temporal gene expression
data from partial observations. Our algorithms reliably identify known
biomarkers and uncover new ones within our datasets. Additionally, integrating
chromosome conformation and gene expression data addresses noise and
uncertainty, enhancing the reliability of our biomarker selection approach for
the genome.
| [
{
"created": "Thu, 16 May 2024 04:42:21 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2024 15:19:21 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Aug 2024 18:49:38 GMT",
"version": "v3"
}
] | 2024-08-14 | [
[
"Pickard",
"Joshua",
""
],
[
"Stansbury",
"Cooper",
""
],
[
"Surana",
"Amit",
""
],
[
"Muir",
"Lindsey",
""
],
[
"Bloch",
"Anthony",
""
],
[
"Rajapakse",
"Indika",
""
]
] | Biomarkers enable objective monitoring of a given cell or state in a biological system and are widely used in research, biomanufacturing, and clinical practice. However, identifying appropriate biomarkers that are both robustly measurable and capture a state accurately remains challenging. We present a framework for biomarker identification based upon observability guided sensor selection. Our methods, Dynamic Sensor Selection (DSS) and Structure-Guided Sensor Selection (SGSS), utilize temporal models and experimental data, offering a template for applying observability theory to data from biological systems. Unlike conventional methods that assume well-known, fixed dynamics, DSS adaptively select biomarkers or sensors that maximize observability while accounting for the time-varying nature of biological systems. Additionally, SGSS incorporates structural information and diverse data to identify sensors which are resilient against inaccuracies in our model of the underlying system. We validate our approaches by performing estimation on high dimensional systems derived from temporal gene expression data from partial observations. Our algorithms reliably identify known biomarkers and uncover new ones within our datasets. Additionally, integrating chromosome conformation and gene expression data addresses noise and uncertainty, enhancing the reliability of our biomarker selection approach for the genome. |
2208.00684 | Thomas Williams | Thomas Williams, James McCaw, James Osborne | Choice of spatial discretisation influences the progression of viral
infection within multicellular tissues | 28 pages, 11 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | There has been an increasing recognition of the utility of models of the
spatial dynamics of viral spread within tissues. Multicellular models, where
cells are represented as discrete regions of space coupled to a virus density
surface, are a popular approach to capture these dynamics. Conventionally, such
models are simulated by discretising the viral surface; depending on the
rate of viral diffusion and other considerations, a finer or coarser
discretisation may be used. The impact that this choice may have on the
behaviour of the system has not been studied. Here we demonstrate that, if
rates of viral diffusion are small, then the choice of spatial discretisation
of the viral surface can have quantitative and even qualitative influence on
model outputs. We investigate in detail the mechanisms driving these phenomena
and discuss the constraints on the design and implementation of multicellular
viral dynamics models for different parameter configurations.
| [
{
"created": "Mon, 1 Aug 2022 08:36:27 GMT",
"version": "v1"
}
] | 2022-08-02 | [
[
"Williams",
"Thomas",
""
],
[
"McCaw",
"James",
""
],
[
"Osborne",
"James",
""
]
] | There has been an increasing recognition of the utility of models of the spatial dynamics of viral spread within tissues. Multicellular models, where cells are represented as discrete regions of space coupled to a virus density surface, are a popular approach to capture these dynamics. Conventionally, such models are simulated by discretising the viral surface and depending on the rate of viral diffusion and other considerations, a finer or coarser discretisation may be used. The impact that this choice may have on the behaviour of the system has not been studied. Here we demonstrate that, if rates of viral diffusion are small, then the choice of spatial discretisation of the viral surface can have quantitative and even qualitative influence on model outputs. We investigate in detail the mechanisms driving these phenomena and discuss the constraints on the design and implementation of multicellular viral dynamics models for different parameter configurations. |
1607.01656 | Jean-Francois Berret | G. Ramniceanu, B.-T. Doan, C. Vezignol, A. Graillot, C. Loubat, N.
Mignet and J.-F. Berret | Delayed hepatic uptake of multi-phosphonic acid poly(ethylene glycol)
coated iron oxide measured by real-time Magnetic Resonance Imaging | 19 pages 8 figures, RSC Advances, 2016 | RSC Advances 6, 63788 - 63800 (2016) | 10.1039/C6RA09896G | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report on the synthesis, characterization, stability and pharmacokinetics
of novel iron based contrast agents for magnetic resonance imaging (MRI).
Statistical copolymers combining multiple phosphonic acid groups and
poly(ethylene glycol) (PEG) were synthesized and used as coating agents for 10
nm iron oxide nanocrystals. In vitro, protein corona and stability assays show
that phosphonic acid PEG copolymers outperform all other coating types
examined, including low molecular weight anionic ligands and polymers. In vivo,
the particle pharmacokinetics is investigated by monitoring the MRI signal
intensity from mouse liver, spleen and arteries as a function of the time,
between one minute and seven days after injection. Iron oxide particles coated
with multi-phosphonic acid PEG polymers are shown to have a blood circulation
lifetime of 250 minutes, i.e. 10 to 50 times greater than that of recently
published PEGylated probes and benchmarks. The clearance from the liver takes
on average 2 to 3 days and is independent of the core size, coating and
particle stability. By comparing identical core particles with different
coatings, we are able to determine the optimum conditions for stealth MRI
probes.
| [
{
"created": "Tue, 5 Jul 2016 15:40:32 GMT",
"version": "v1"
}
] | 2021-09-21 | [
[
"Ramniceanu",
"G.",
""
],
[
"Doan",
"B. -T.",
""
],
[
"Vezignol",
"C.",
""
],
[
"Graillot",
"A.",
""
],
[
"Loubat",
"C.",
""
],
[
"Mignet",
"N.",
""
],
[
"Berret",
"J. -F.",
""
]
] | We report on the synthesis, characterization, stability and pharmacokinetics of novel iron based contrast agents for magnetic resonance imaging (MRI). Statistical copolymers combining multiple phosphonic acid groups and poly(ethylene glycol) (PEG) were synthesized and used as coating agents for 10 nm iron oxide nanocrystals. In vitro, protein corona and stability assays show that phosphonic acid PEG copolymers outperform all other coating types examined, including low molecular weight anionic ligands and polymers. In vivo, the particle pharmacokinetics is investigated by monitoring the MRI signal intensity from mouse liver, spleen and arteries as a function of the time, between one minute and seven days after injection. Iron oxide particles coated with multi-phosphonic acid PEG polymers are shown to have a blood circulation lifetime of 250 minutes, i.e. 10 to 50 times greater than that of recently published PEGylated probes and benchmarks. The clearance from the liver takes in average 2 to 3 days and is independent of the core size, coating and particle stability. By comparing identical core particles with different coatings, we are able to determine the optimum conditions for stealth MRI probes. |
2104.06686 | Marco Baity-Jesi | E. Merz, T. Kozakiewicz, M. Reyes, C. Ebi, P. Isles, M. Baity-Jesi, P.
Roberts, J. S. Jaffe, S. Dennis, T. Hardeman, N. Stevens, T. Lorimer, F.
Pomati | Underwater dual-magnification imaging for automated lake plankton
monitoring | 49 pages | Water Research, 203, 117524 (2021) | 10.1016/j.watres.2021.117524 | null | q-bio.PE physics.ins-det | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present an approach for automated in-situ monitoring of phytoplankton and
zooplankton communities based on a dual magnification dark-field imaging
microscope/camera. We describe the Dual Scripps Plankton Camera (DSPC) system
and associated image processing, and assess its capabilities in detecting and
characterizing plankton species of different size and taxonomic categories, and
in measuring their abundances in both laboratory and field applications. In the
laboratory, body size and abundance estimates by the DSPC significantly and
robustly scale with the same measurements derived by traditional microscopy. In
the field, a DSPC installed permanently at 3 m depth in Lake Greifensee
(Switzerland), delivered images of plankton individuals, colonies, and
heterospecific aggregates without disrupting natural arrangements of
interacting organisms, their microenvironment or their behavior at hourly
timescales. The DSPC was able to track the dynamics of taxa in the size range
between ~10 $\mu$m and ~1 cm, covering virtually all the components of the
planktonic food web (including parasites and potentially toxic cyanobacteria).
Comparing data from the field-deployed DSPC to traditional sampling and
microscopy revealed a general overall agreement in estimates of plankton
diversity and abundances, despite imaging limitations in detecting small
phytoplankton species and rare and large zooplankton taxa (e.g. carnivorous
zooplankton). The most significant disagreements between traditional methods
and the DSPC resided in the measurements of community properties of
zooplankton, organisms that are heterogeneously distributed spatially and
temporally, and whose demography appeared to be better captured by automated
imaging. Time series collected by the DSPC depicted ecological succession
patterns, algal bloom dynamics and circadian fluctuations with a temporal
frequency and morphological [continues...]
| [
{
"created": "Wed, 14 Apr 2021 08:16:08 GMT",
"version": "v1"
}
] | 2021-09-17 | [
[
"Merz",
"E.",
""
],
[
"Kozakiewicz",
"T.",
""
],
[
"Reyes",
"M.",
""
],
[
"Ebi",
"C.",
""
],
[
"Isles",
"P.",
""
],
[
"Baity-Jesi",
"M.",
""
],
[
"Roberts",
"P.",
""
],
[
"Jaffe",
"J. S.",
""
],
[
"Dennis",
"S.",
""
],
[
"Hardeman",
"T.",
""
],
[
"Stevens",
"N.",
""
],
[
"Lorimer",
"T.",
""
],
[
"Pomati",
"F.",
""
]
] | We present an approach for automated in-situ monitoring of phytoplankton and zooplankton communities based on a dual magnification dark-field imaging microscope/camera. We describe the Dual Scripps Plankton Camera (DSPC) system and associated image processing, and assess its capabilities in detecting and characterizing plankton species of different size and taxonomic categories, and in measuring their abundances in both laboratory and field applications. In the laboratory, body size and abundance estimates by the DSPC significantly and robustly scale with the same measurements derived by traditional microscopy. In the field, a DSPC installed permanently at 3 m depth in Lake Greifensee (Switzerland), delivered images of plankton individuals, colonies, and heterospecific aggregates without disrupting natural arrangements of interacting organisms, their microenvironment or their behavior at hourly timescales. The DSPC was able to track the dynamics of taxa in the size range between ~10 $\mu$m to ~ 1 cm, covering virtually all the components of the planktonic food web (including parasites and potentially toxic cyanobacteria). Comparing data from the field-deployed DSPC to traditional sampling and microscopy revealed a general overall agreement in estimates of plankton diversity and abundances, despite imaging limitations in detecting small phytoplankton species and rare and large zooplankton taxa (e.g. carnivorous zooplankton). The most significant disagreements between traditional methods and the DSPC resided in the measurements of community properties of zooplankton, organisms that are heterogeneously distributed spatially and temporally, and whose demography appeared to be better captured by automated imaging. Time series collected by the DSPC depicted ecological succession patterns, algal bloom dynamics and circadian fluctuations with a temporal frequency and morphological [continues...] |
1111.4106 | Jose Vilar | Jose M. G. Vilar and Leonor Saiz | Trafficking Coordinate Description of Intracellular Transport Control of
Signaling Networks | 17 pages, 5 figures | Biophys. J. 101, 2315-2323 (2011) | 10.1016/j.bpj.2011.09.035 | null | q-bio.SC physics.bio-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cellular networks rely on the regulated transport of their components to
transduce extracellular information into precise intracellular signals. The
dynamics of these networks is typically described in terms of compartmentalized
chemical reactions. There are many important situations, however, in which the
properties of the compartments change continuously in a way that cannot
naturally be described by chemical reactions. Here, we develop an approach
based on transport along a trafficking coordinate to precisely describe these
processes and we apply it explicitly to the TGF-{\beta} signal transduction
network, which plays a fundamental role in many diseases and cellular
processes. The results of this newly introduced approach accurately capture for
the first time the distinct TGF-{\beta} signaling dynamics of cells with and
without cancerous backgrounds and provide an avenue to predict the effects of
chemical perturbations in a way that closely recapitulates the observed
cellular behavior.
| [
{
"created": "Thu, 17 Nov 2011 14:17:18 GMT",
"version": "v1"
}
] | 2011-11-18 | [
[
"Vilar",
"Jose M. G.",
""
],
[
"Saiz",
"Leonor",
""
]
] | Many cellular networks rely on the regulated transport of their components to transduce extracellular information into precise intracellular signals. The dynamics of these networks is typically described in terms of compartmentalized chemical reactions. There are many important situations, however, in which the properties of the compartments change continuously in a way that cannot naturally be described by chemical reactions. Here, we develop an approach based on transport along a trafficking coordinate to precisely describe these processes and we apply it explicitly to the TGF-{\beta} signal transduction network, which plays a fundamental role in many diseases and cellular processes. The results of this newly introduced approach accurately capture for the first time the distinct TGF-{\beta} signaling dynamics of cells with and without cancerous backgrounds and provide an avenue to predict the effects of chemical perturbations in a way that closely recapitulates the observed cellular behavior. |
2007.07789 | Zeina Khan PhD | Fazle Hussain, Zeina S. Khan, Frank Van Bussel | US faces endemic Covid-19 infections and deaths; ways to stop the
pandemic | 8 pages, 3 figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new epidemic model for Covid-19 has been constructed and simulated for
eight US states. The coefficients for this model, based on seven coupled
differential equations, are carefully evaluated against recorded data on cases
and deaths. These projections reveal that Covid-19 will become endemic,
spreading for more than two years. If stay-at-home orders are relaxed, most
states may experience a secondary peak in 2021. The number of Covid-19 deaths
could have been significantly lower in most states that opened up, if lockdowns
had been maintained. Additionally, our model predicts that decreasing contact
rate by 10%, or increasing testing by approximately 15%, or doubling lockdown
compliance (from the current $\sim$ 15%) will eradicate infections in Texas
within a year. Applied to the entire US, the predictions based on the current
situation indicate about 11 million total infections (including undetected), 8
million cumulative confirmed cases, and 630,000 cumulative deaths by November
1, 2020.
| [
{
"created": "Wed, 15 Jul 2020 16:14:25 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jul 2020 19:09:14 GMT",
"version": "v2"
}
] | 2020-07-23 | [
[
"Hussain",
"Fazle",
""
],
[
"Khan",
"Zeina S.",
""
],
[
"Van Bussel",
"Frank",
""
]
] | A new epidemic model for Covid-19 has been constructed and simulated for eight US states. The coefficients for this model, based on seven coupled differential equations, are carefully evaluated against recorded data on cases and deaths. These projections reveal that Covid-19 will become endemic, spreading for more than two years. If stay-at-home orders are relaxed, most states may experience a secondary peak in 2021. The number of Covid-19 deaths could have been significantly lower in most states that opened up, if lockdowns had been maintained. Additionally, our model predicts that decreasing contact rate by 10%, or increasing testing by approximately 15%, or doubling lockdown compliance (from the current $\sim$ 15%) will eradicate infections in Texas within a year. Applied to the entire US, the predictions based on the current situation indicate about 11 million total infections (including undetected), 8 million cumulative confirmed cases, and 630,000 cumulative deaths by November 1, 2020. |
1102.4749 | Jean-Pierre Nadal | Laurent Bonnasse-Gahot and Jean-Pierre Nadal | Perception of categories: from coding efficiency to reaction times | null | Brain Research, Volume 1434 (2012) pp. 47-61 | 10.1016/j.brainres.2011.08.014 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reaction-times in perceptual tasks are the subject of many experimental and
theoretical studies. With the neural decision making process as main focus,
most of these works concern discrete (typically binary) choice tasks, implying
the identification of the stimulus as an exemplar of a category. Here we
address issues specific to the perception of categories (e.g. vowels, familiar
faces, ...), making a clear distinction between identifying a category (an
element of a discrete set) and estimating a continuous parameter (such as a
direction). We exhibit a link between optimal Bayesian decoding and coding
efficiency, the latter being measured by the mutual information between the
discrete category set and the neural activity. We characterize the properties
of the best estimator of the likelihood of the category, when this estimator
takes its inputs from a large population of stimulus-specific coding cells.
Adopting the diffusion-to-bound approach to model the decisional process, this
allows us to relate analytically the bias and variance of the diffusion process
underlying decision making to macroscopic quantities that are behaviorally
measurable. A major consequence is the existence of a quantitative link between
reaction times and discrimination accuracy. The resulting analytical expression
of mean reaction times during an identification task accounts for empirical
facts, both qualitatively (e.g. more time is needed to identify a category from
a stimulus at the boundary compared to a stimulus lying within a category), and
quantitatively (working on published experimental data on phoneme
identification tasks).
| [
{
"created": "Wed, 23 Feb 2011 14:32:28 GMT",
"version": "v1"
}
] | 2012-03-01 | [
[
"Bonnasse-Gahot",
"Laurent",
""
],
[
"Nadal",
"Jean-Pierre",
""
]
] | Reaction-times in perceptual tasks are the subject of many experimental and theoretical studies. With the neural decision making process as main focus, most of these works concern discrete (typically binary) choice tasks, implying the identification of the stimulus as an exemplar of a category. Here we address issues specific to the perception of categories (e.g. vowels, familiar faces, ...), making a clear distinction between identifying a category (an element of a discrete set) and estimating a continuous parameter (such as a direction). We exhibit a link between optimal Bayesian decoding and coding efficiency, the latter being measured by the mutual information between the discrete category set and the neural activity. We characterize the properties of the best estimator of the likelihood of the category, when this estimator takes its inputs from a large population of stimulus-specific coding cells. Adopting the diffusion-to-bound approach to model the decisional process, this allows to relate analytically the bias and variance of the diffusion process underlying decision making to macroscopic quantities that are behaviorally measurable. A major consequence is the existence of a quantitative link between reaction times and discrimination accuracy. The resulting analytical expression of mean reaction times during an identification task accounts for empirical facts, both qualitatively (e.g. more time is needed to identify a category from a stimulus at the boundary compared to a stimulus lying within a category), and quantitatively (working on published experimental data on phoneme identification tasks). |
0809.0029 | Emmanuel Tannenbaum | Pavel Gorodetsky and Emmanuel Tannenbaum | A Dual Role for Sex? | 7 pages, 4 figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two classic theories for the existence of sexual replication are that sex
purges deleterious mutations from a population, and that sex allows a
population to adapt more rapidly to changing environments. These two theories
have often been presented as opposing explanations for the existence of sex.
Here, we develop and analyze evolutionary models based on the asexual and
sexual replication pathways in Saccharomyces cerevisiae (Baker's yeast), and
show that sexual replication can both purge deleterious mutations in a static
environment and lead to faster adaptation in a dynamic environment.
This implies that sex can serve a dual role, which is in sharp contrast to
previous theories.
| [
{
"created": "Fri, 29 Aug 2008 23:58:49 GMT",
"version": "v1"
}
] | 2008-09-02 | [
[
"Gorodetsky",
"Pavel",
""
],
[
"Tannenbaum",
"Emmanuel",
""
]
] | The two classic theories for the existence of sexual replication are that sex purges deleterious mutations from a population, and that sex allows a population to adapt more rapidly to changing environments. These two theories have often been presented as opposing explanations for the existence of sex. Here, we develop and analyze evolutionary models based on the asexual and sexual replication pathways in Saccharomyces cerevisiae (Baker's yeast), and show that sexual replication can both purge deleterious mutations in a static environment, as well as lead to faster adaptation in a dynamic environment. This implies that sex can serve a dual role, which is in sharp contrast to previous theories. |
q-bio/0509022 | Can Ozan Tan Mr. | Can O. Tan and Uygar Ozesmi | A Cognitive Model of an Epistemic Community: Mapping the Dynamics of
Shallow Lake Ecosystems | 24 pages, 5 Figures | Hydrobiologia, 563:125-142 | 10.1007/s10750-005-1397-5 | null | q-bio.NC q-bio.OT | null | We used fuzzy cognitive mapping (FCM) to develop a generic shallow lake
ecosystem model by augmenting the individual cognitive maps drawn by 8
scientists working in the area of shallow lake ecology. We calculated graph
theoretical indices of the individual cognitive maps and the collective
cognitive map produced by augmentation. The graph theoretical indices revealed
internal cycles showing non-linear dynamics in the shallow lake ecosystem. The
ecological processes were organized democratically without a top-down
hierarchical structure. The steady state condition of the generic model was a
characteristic turbid shallow lake ecosystem since there were no dynamic
environmental changes that could cause shifts between a turbid and a clearwater
state, and the generic model indicated that only a dynamic disturbance regime
could maintain the clearwater state. The model developed herein captured the
empirical behavior of shallow lakes, and contained the basic model of the
Alternative Stable States Theory. In addition, our model expanded the basic
model by quantifying the relative effects of connections and by extending it.
In our expanded model we ran 4 simulations: harvesting submerged plants,
nutrient reduction, fish removal without nutrient reduction, and
biomanipulation. Only biomanipulation, which included fish removal and nutrient
reduction, had the potential to shift the turbid state into clearwater state.
The structure and relationships in the generic model as well as the outcomes of
the management simulations were supported by actual field studies in shallow
lake ecosystems. Thus, fuzzy cognitive mapping methodology enabled us to
understand the complex structure of shallow lake ecosystems as a whole and
obtain a valid generic model based on tacit knowledge of experts in the field.
| [
{
"created": "Sun, 18 Sep 2005 16:50:32 GMT",
"version": "v1"
}
] | 2011-07-29 | [
[
"Tan",
"Can O.",
""
],
[
"Ozesmi",
"Uygar",
""
]
] | We used fuzzy cognitive mapping (FCM) to develop a generic shallow lake ecosystem model by augmenting the individual cognitive maps drawn by 8 scientists working in the area of shallow lake ecology. We calculated graph theoretical indices of the individual cognitive maps and the collective cognitive map produced by augmentation. The graph theoretical indices revealed internal cycles showing non-linear dynamics in the shallow lake ecosystem. The ecological processes were organized democratically without a top-down hierarchical structure. The steady state condition of the generic model was a characteristic turbid shallow lake ecosystem since there were no dynamic environmental changes that could cause shifts between a turbid and a clearwater state, and the generic model indicated that only a dynamic disturbance regime could maintain the clearwater state. The model developed herein captured the empirical behavior of shallow lakes, and contained the basic model of the Alternative Stable States Theory. In addition, our model expanded the basic model by quantifying the relative effects of connections and by extending it. In our expanded model we ran 4 simulations: harvesting submerged plants, nutrient reduction, fish removal without nutrient reduction, and biomanipulation. Only biomanipulation, which included fish removal and nutrient reduction, had the potential to shift the turbid state into clearwater state. The structure and relationships in the generic model as well as the outcomes of the management simulations were supported by actual field studies in shallow lake ecosystems. Thus, fuzzy cognitive mapping methodology enabled us to understand the complex structure of shallow lake ecosystems as a whole and obtain a valid generic model based on tacit knowledge of experts in the field. |
0803.3061 | Thierry Mora | Marc Mezard and Thierry Mora | Constraint satisfaction problems and neural networks: a statistical
physics perspective | Prepared for the proceedings of the 2007 Tauc Conference on
Complexity in Neural Network Dynamics | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new field of research is rapidly expanding at the crossroad between
statistical physics, information theory and combinatorial optimization. In
particular, the use of cutting edge statistical physics concepts and methods
allow one to solve very large constraint satisfaction problems like random
satisfiability, coloring, or error correction. Several aspects of these
developments should be relevant for the understanding of functional complexity
in neural networks. On the one hand the message passing procedures which are
used in these new algorithms are based on local exchange of information, and
succeed in solving some of the hardest computational problems. On the other
hand some crucial inference problems in neurobiology, like those generated in
multi-electrode recordings, naturally translate into hard constraint
satisfaction problems. This paper gives a non-technical introduction to this
field, emphasizing the main ideas at work in message passing strategies and
their possible relevance to neural networks modeling. It also introduces a new
message passing algorithm for inferring interactions between variables from
correlation data, which could be useful in the analysis of multi-electrode
recording data.
| [
{
"created": "Thu, 20 Mar 2008 19:22:18 GMT",
"version": "v1"
}
] | 2008-03-28 | [
[
"Mezard",
"Marc",
""
],
[
"Mora",
"Thierry",
""
]
] | A new field of research is rapidly expanding at the crossroad between statistical physics, information theory and combinatorial optimization. In particular, the use of cutting edge statistical physics concepts and methods allow one to solve very large constraint satisfaction problems like random satisfiability, coloring, or error correction. Several aspects of these developments should be relevant for the understanding of functional complexity in neural networks. On the one hand the message passing procedures which are used in these new algorithms are based on local exchange of information, and succeed in solving some of the hardest computational problems. On the other hand some crucial inference problems in neurobiology, like those generated in multi-electrode recordings, naturally translate into hard constraint satisfaction problems. This paper gives a non-technical introduction to this field, emphasizing the main ideas at work in message passing strategies and their possible relevance to neural networks modeling. It also introduces a new message passing algorithm for inferring interactions between variables from correlation data, which could be useful in the analysis of multi-electrode recording data. |
2202.08825 | Parsifal Islas-Morales Mr | Parsifal Fidelio Islas-Morales and Luis Felipe Jimenez-Garcia | On the ideas of the origin of eukaryotes: a critical review | 31 pages, review | null | null | null | q-bio.PE q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | The origin and early evolution of eukaryotes are one of the major transitions
in the evolution of life on earth. One of its most interesting aspects is the
emergence of cellular organelles, their dynamics, their functions, and their
divergence. Cell compartmentalization and architecture in prokaryotes is a less
understood complex property. In eukaryotes it is related to cell size, specific
genomic architecture, evolution of cell cycles, biogenesis of membranes and
endosymbiotic processes. Explaining cell evolution through form and function
demands an interdisciplinary approach focused on microbial diversity,
phylogenetic and functional cell biology. Two centuries of views on eukaryotic
origin have completed the disciplinary tools necessary to answer these
questions. We have moved from Haeckel's SCALA NATURAE to the un-rooted tree of
life. However, the major relations among cell domains are still elusive and
keep the nature of the eukaryotic ancestor enigmatic. Here we present a review on
state-of-the-art views of eukaryogenesis; the background and perspectives of
different disciplines involved in this topic.
| [
{
"created": "Thu, 17 Feb 2022 18:46:53 GMT",
"version": "v1"
}
] | 2022-02-18 | [
[
"Islas-Morales",
"Parsifal Fidelio",
""
],
[
"Jimenez-Garcia",
"Luis Felipe",
""
]
] | The origin and early evolution of eukaryotes are one of the major transitions in the evolution of life on earth. One of its most interesting aspects is the emergence of cellular organelles, their dynamics, their functions, and their divergence. Cell compartmentalization and architecture in prokaryotes is a less understood complex property. In eukaryotes it is related to cell size, specific genomic architecture, evolution of cell cycles, biogenesis of membranes and endosymbiotic processes. Explaining cell evolution through form and function demands an interdisciplinary approach focused on microbial diversity, phylogenetic and functional cell biology. Two centuries of views on eukaryotic origin have completed the disciplinary tools necessary to answer these questions. We have moved from Haeckel's SCALA NATURAE to the un-rooted tree of life. However, the major relations among cell domains are still elusive and keep the nature of the eukaryotic ancestor enigmatic. Here we present a review on state-of-the-art views of eukaryogenesis; the background and perspectives of different disciplines involved in this topic. |
1208.3766 | Daniele Marinazzo | G. Wu, W.Liao, S. Stramaglia, J. Ding, H. Chen, D. Marinazzo | A blind deconvolution approach to recover effective connectivity brain
networks from resting state fMRI data | null | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A great improvement to the insight on brain function that we can get from
fMRI data can come from effective connectivity analysis, in which the flow of
information between even remote brain regions is inferred by the parameters of
a predictive dynamical model. As opposed to biologically inspired models, some
techniques as Granger causality (GC) are purely data-driven and rely on
statistical prediction and temporal precedence. While powerful and widely
applicable, this approach could suffer from two main limitations when applied
to BOLD fMRI data: confounding effect of hemodynamic response function (HRF)
and conditioning to a large number of variables in presence of short time
series. For task-related fMRI, neural population dynamics can be captured by
modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI
on the other hand, the absence of explicit inputs makes this task more
difficult, unless relying on some specific prior physiological hypothesis. In
order to overcome these issues and to allow a more general approach, here we
present a simple and novel blind-deconvolution technique for BOLD-fMRI signal.
Coming to the second limitation, a fully multivariate conditioning with short
and noisy data leads to computational problems due to overfitting. Furthermore,
conceptual issues arise in presence of redundancy. We thus apply partial
conditioning to a limited subset of variables in the framework of information
theory, as recently proposed. Mixing these two improvements we compare the
differences between BOLD and deconvolved BOLD level effective networks and draw
some conclusions.
| [
{
"created": "Sat, 18 Aug 2012 17:02:26 GMT",
"version": "v1"
}
] | 2012-08-21 | [
[
"Wu",
"G.",
""
],
[
"Liao",
"W.",
""
],
[
"Stramaglia",
"S.",
""
],
[
"Ding",
"J.",
""
],
[
"Chen",
"H.",
""
],
[
"Marinazzo",
"D.",
""
]
] | A great improvement to the insight on brain function that we can get from fMRI data can come from effective connectivity analysis, in which the flow of information between even remote brain regions is inferred by the parameters of a predictive dynamical model. As opposed to biologically inspired models, some techniques as Granger causality (GC) are purely data-driven and rely on statistical prediction and temporal precedence. While powerful and widely applicable, this approach could suffer from two main limitations when applied to BOLD fMRI data: confounding effect of hemodynamic response function (HRF) and conditioning to a large number of variables in presence of short time series. For task-related fMRI, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI on the other hand, the absence of explicit inputs makes this task more difficult, unless relying on some specific prior physiological hypothesis. In order to overcome these issues and to allow a more general approach, here we present a simple and novel blind-deconvolution technique for BOLD-fMRI signal. Coming to the second limitation, a fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting. Furthermore, conceptual issues arise in presence of redundancy. We thus apply partial conditioning to a limited subset of variables in the framework of information theory, as recently proposed. Mixing these two improvements we compare the differences between BOLD and deconvolved BOLD level effective networks and draw some conclusions. |
2103.10433 | Catherine Matias | Marc Ohlmann (LECA ), Catherine Matias (LPSM (UMR\_8001)), Giovanni
Poggiato (LECA, STATIFY), St\'ephane Dray, Wilfried Thuiller (LECA, LECA),
Vincent Miele (LBBE) | Quantifying the overall effect of biotic interactions on species
distributions along environmental gradients | null | null | 10.1016/j.ecolmodel.2023.110424 | null | q-bio.PE stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Separating environmental effects from those of interspecific interactions on
species distributions has always been a central objective of community ecology.
Despite years of effort in analysing patterns of species co-occurrences and the
developments of sophisticated tools, we are still unable to address this major
objective. A key reason is that the wealth of ecological knowledge is not
sufficiently harnessed in current statistical models, notably the knowledge on
interspecific interactions. Here, we develop ELGRIN, a statistical model that
simultaneously combines knowledge on interspecific interactions (i.e., the
metanetwork), environmental data and species occurrences to tease apart their
relative effects on species distributions. Instead of focusing on single
effects of pairwise species interactions, which have little sense in complex
communities, ELGRIN contrasts the overall effect of species interactions to
that of the environment. Using various simulated and empirical data, we
demonstrate the suitability of ELGRIN to address the objectives for various
types of interspecific interactions like mutualism, competition and trophic
interactions. We then apply the model on vertebrate trophic networks in the
European Alps to map the effect of biotic interactions on species
distributions. Data on ecological networks are increasing every day, and we
believe the time is ripe to mobilize these data to better understand
biodiversity patterns. ELGRIN provides this opportunity to unravel how
interspecific interactions actually influence species distributions.
| [
{
"created": "Thu, 18 Mar 2021 14:41:26 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Oct 2022 07:30:45 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Jan 2023 13:25:23 GMT",
"version": "v3"
},
{
"created": "Tue, 2 May 2023 12:43:35 GMT",
"version": "v4"
}
] | 2023-08-30 | [
[
"Ohlmann",
"Marc",
"",
"LECA"
],
[
"Matias",
"Catherine",
"",
"LPSM"
],
[
"Poggiato",
"Giovanni",
"",
"LECA, STATIFY"
],
[
"Dray",
"Stéphane",
"",
"LECA, LECA"
],
[
"Thuiller",
"Wilfried",
"",
"LECA, LECA"
],
[
"Miele",
"Vincent",
"",
"LBBE"
]
] | Separating environmental effects from those of interspecific interactions on species distributions has always been a central objective of community ecology. Despite years of effort in analysing patterns of species co-occurrences and the developments of sophisticated tools, we are still unable to address this major objective. A key reason is that the wealth of ecological knowledge is not sufficiently harnessed in current statistical models, notably the knowledge on interspecific interactions. Here, we develop ELGRIN, a statistical model that simultaneously combines knowledge on interspecific interactions (i.e., the metanetwork), environmental data and species occurrences to tease apart their relative effects on species distributions. Instead of focusing on single effects of pairwise species interactions, which have little sense in complex communities, ELGRIN contrasts the overall effect of species interactions to that of the environment. Using various simulated and empirical data, we demonstrate the suitability of ELGRIN to address the objectives for various types of interspecific interactions like mutualism, competition and trophic interactions. We then apply the model on vertebrate trophic networks in the European Alps to map the effect of biotic interactions on species distributions. Data on ecological networks are increasing every day, and we believe the time is ripe to mobilize these data to better understand biodiversity patterns. ELGRIN provides this opportunity to unravel how interspecific interactions actually influence species distributions. |
q-bio/0611045 | Gergely J Sz\"oll\H{o}si | Balint Szabo, Gergely J. Szollosi, Balazs Gonci, Zsofi Juranyi, David
Selmeczi and Tamas Vicsek | Phase transition in the collective migration of tissue cells: experiment
and model | Submitted to Physical Review E. Supplementary material available at
http://angel.elte.hu/~bszabo/collectivecells/ | Phys. Rev. E 74, 061908 (2006). | 10.1103/PhysRevE.74.061908 | null | q-bio.CB cond-mat.stat-mech physics.bio-ph q-bio.QM q-bio.TO | null | We have recorded the swarming-like collective migration of a large number of
keratocytes (tissue cells obtained from the scales of goldfish) using long-term
videomicroscopy. By increasing the overall density of the migrating cells, we
have been able to demonstrate experimentally a kinetic phase transition from a
disordered into an ordered state. Near the critical density a complex picture
emerges with interacting clusters of cells moving in groups. Motivated by these
experiments we have constructed a flocking model that exhibits a continuous
transition to the ordered phase, while assuming only short-range interactions
and no explicit information about the knowledge of the directions of motion of
neighbors. Placing cells in microfabricated arenas we found spectacular
whirling behavior which we could also reproduce in simulations.
| [
{
"created": "Wed, 15 Nov 2006 11:51:22 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Szabo",
"Balint",
""
],
[
"Szollosi",
"Gergely J.",
""
],
[
"Gonci",
"Balazs",
""
],
[
"Juranyi",
"Zsofi",
""
],
[
"Selmeczi",
"David",
""
],
[
"Vicsek",
"Tamas",
""
]
] | We have recorded the swarming-like collective migration of a large number of keratocytes (tissue cells obtained from the scales of goldfish) using long-term videomicroscopy. By increasing the overall density of the migrating cells, we have been able to demonstrate experimentally a kinetic phase transition from a disordered into an ordered state. Near the critical density a complex picture emerges with interacting clusters of cells moving in groups. Motivated by these experiments we have constructed a flocking model that exhibits a continuous transition to the ordered phase, while assuming only short-range interactions and no explicit information about the knowledge of the directions of motion of neighbors. Placing cells in microfabricated arenas we found spectacular whirling behavior which we could also reproduce in simulations. |
2005.06336 | Isabel San Martin | M. Isabel San-Martin, Adrian Escapaa, Raul M. Alonso, Moises Canle,
Antonio Moran | Degradation of 2-mercaptobenzothizaole in microbial electrolysis cells:
intermediates, toxicity, and microbial communities | null | published on Science of The Total Environment, 2020 | 10.1016/j.scitotenv.2020.139155 | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The compound 2-mercaptobenzothizaole (MBT) has been frequently detected in
wastewater and surface water and is a potential threat to both aquatic
organisms and human health (its mutagenic potential has been demonstrated).
This study investigated the degradation routes of MBT in the anode of a
microbial electrolysis cell (MEC) and the involved microbial communities. The
results indicated that graphene-modified anodes promoted the presence of more
enriched, developed, and specific communities compared to bare anodes.
Moreover, consecutive additions of the OH substituent to the benzene ring of
MBT were only detected in the reactor equipped with the graphene-treated
electrode. Both phenomena, together with the application of an external
voltage, may be related to the larger reduction of biotoxicity observed in the
MEC equipped with graphene-modified anodes (46.2 eqtox/m3 to 27.9 eqtox/m3).
| [
{
"created": "Wed, 13 May 2020 14:25:18 GMT",
"version": "v1"
}
] | 2020-05-14 | [
[
"San-Martin",
"M. Isabel",
""
],
[
"Escapaa",
"Adrian",
""
],
[
"Alonso",
"Raul M.",
""
],
[
"Canle",
"Moises",
""
],
[
"Moran",
"Antonio",
""
]
] | The compound 2-mercaptobenzothizaole (MBT) has been frequently detected in wastewater and surface water and is a potential threat to both aquatic organisms and human health (its mutagenic potential has been demonstrated). This study investigated the degradation routes of MBT in the anode of a microbial electrolysis cell (MEC) and the involved microbial communities. The results indicated that graphene-modified anodes promoted the presence of more enriched, developed, and specific communities compared to bare anodes. Moreover, consecutive additions of the OH substituent to the benzene ring of MBT were only detected in the reactor equipped with the graphene-treated electrode. Both phenomena, together with the application of an external voltage, may be related to the larger reduction of biotoxicity observed in the MEC equipped with graphene-modified anodes (46.2 eqtox/m3 to 27.9 eqtox/m3). |
1501.05605 | Vahid Salari | Vahid Salari, Maryam Sajadi, Hassan Bassereh, Vahid Rezania, Mojtaba
Alaei, Jack Tuszynski | On the Classical Vibrational Coherence of Carbonyl Groups in the
Selectivity Filter Backbone of KcsA Ion Channel | 8 pages, 8 figures | J Integrative Neuroscience, 14 (2), 1-12, 2015 | null | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been suggested that quantum coherence in the selectivity filter of ion
channel may play a key role in fast conduction and selectivity of ions.
However, it has not been clearly elucidated yet why classical coherence is not
sufficient for this purpose. In this paper, we investigate the classical
vibrational coherence between carbonyl groups oscillations in the selectivity
filter of KcsA ion channels based on the data obtained from molecular dynamics
simulations. Our results show that classical coherence plays no effective role
in fast ionic conduction.
| [
{
"created": "Sat, 3 Jan 2015 18:28:55 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Apr 2015 12:15:22 GMT",
"version": "v2"
}
] | 2015-04-28 | [
[
"Salari",
"Vahid",
""
],
[
"Sajadi",
"Maryam",
""
],
[
"Bassereh",
"Hassan",
""
],
[
"Rezania",
"Vahid",
""
],
[
"Alaei",
"Mojtaba",
""
],
[
"Tuszynski",
"Jack",
""
]
] | It has been suggested that quantum coherence in the selectivity filter of ion channel may play a key role in fast conduction and selectivity of ions. However, it has not been clearly elucidated yet why classical coherence is not sufficient for this purpose. In this paper, we investigate the classical vibrational coherence between carbonyl groups oscillations in the selectivity filter of KcsA ion channels based on the data obtained from molecular dynamics simulations. Our results show that classical coherence plays no effective role in fast ionic conduction. |
0907.2824 | Andrea De Martino | A. De Martino, D. Granata, E. Marinari, C. Martelli, V. Van
Kerrebroeck | Optimal flux states, reaction replaceability and response to knockouts
in the human red blood cell | 7 pages | null | null | null | q-bio.MN cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterizing the capabilities, criticalities and response to perturbations
of genome-scale metabolic networks is a basic problem with important
applications. A key question concerns the identification of the potentially
most harmful knockouts. The integration of combinatorial methods with sampling
techniques to explore the space of viable flux states may provide crucial
insights on this issue. We assess the replaceability of every metabolic
conversion in the human red blood cell by enumerating the alternative paths
from substrate to product, obtaining a complete map of the potential damage of
single enzymopathies. Sampling the space of optimal flux states in the healthy
and in the mutated cell reveals both correlations and complementarity between
topologic and dynamical aspects.
| [
{
"created": "Thu, 16 Jul 2009 12:56:05 GMT",
"version": "v1"
}
] | 2009-07-17 | [
[
"De Martino",
"A.",
""
],
[
"Granata",
"D.",
""
],
[
"Marinari",
"E.",
""
],
[
"Martelli",
"C.",
""
],
[
"Van Kerrebroeck",
"V.",
""
]
] | Characterizing the capabilities, criticalities and response to perturbations of genome-scale metabolic networks is a basic problem with important applications. A key question concerns the identification of the potentially most harmful knockouts. The integration of combinatorial methods with sampling techniques to explore the space of viable flux states may provide crucial insights on this issue. We assess the replaceability of every metabolic conversion in the human red blood cell by enumerating the alternative paths from substrate to product, obtaining a complete map of the potential damage of single enzymopathies. Sampling the space of optimal flux states in the healthy and in the mutated cell reveals both correlations and complementarity between topologic and dynamical aspects. |
q-bio/0507003 | Christian Emig | Christian Emig (DIMAR), Patrick Geistdoerfer (LOV) | The Mediterranean deep-sea fauna: historical evolution, bathymetric
variations and geographical changes | null | Carnets de G\'{e}ologie / Notebooks on Geology Article 2004/01
(CG2004_A01_CCE-PG) (2004) 10 p., 4 fig., 3 tabl | null | null | q-bio.PE | null | The deep-water fauna of the Mediterranean is characterized by an absence of
distinctive characteristics and by a relative impoverishment. Both are a result
of events after the Messinian salinity crisis (Late Miocene). The three main
classes of phenomena involved in producing or recording these effects are
analysed and discussed: - Historical: Sequential faunal changes during the
Pliocene and thereafter in particular those during the Quaternary glaciations
and still in progress. - Bathymetric: Changes in the vertical aspects of the
Bathyal and Abyssal zones that took place under peculiar conditions, i.e.
homothermy, a relative oligotrophy, the barrier of the Gibraltar sill, and
water mass movement. The deeper the habitat of a species in the Mediterranean,
the more extensive is its distribution elsewhere. - Geographical: There are
strong affinities and relationships between Mediterranean and Atlantic faunas.
Endemic species remain a biogeographical problem. Species always become smaller
in size eastward where they occupy a progressively deeper habitat. Thus, the
existing deep Mediterranean Sea appears to be younger than any other deep-sea
constituent of the World Ocean.
| [
{
"created": "Sat, 2 Jul 2005 09:32:32 GMT",
"version": "v1"
}
] | 2019-05-01 | [
[
"Emig",
"Christian",
"",
"DIMAR"
],
[
"Geistdoerfer",
"Patrick",
"",
"LOV"
]
] | The deep-water fauna of the Mediterranean is characterized by an absence of distinctive characteristics and by a relative impoverishment. Both are a result of events after the Messinian salinity crisis (Late Miocene). The three main classes of phenomena involved in producing or recording these effects are analysed and discussed: - Historical: Sequential faunal changes during the Pliocene and thereafter in particular those during the Quaternary glaciations and still in progress. - Bathymetric: Changes in the vertical aspects of the Bathyal and Abyssal zones that took place under peculiar conditions, i.e. homothermy, a relative oligotrophy, the barrier of the Gibraltar sill, and water mass movement. The deeper the habitat of a species in the Mediterranean, the more extensive is its distribution elsewhere. - Geographical: There are strong affinities and relationships between Mediterranean and Atlantic faunas. Endemic species remain a biogeographical problem. Species always become smaller in size eastward where they occupy a progressively deeper habitat. Thus, the existing deep Mediterranean Sea appears to be younger than any other deep-sea constituent of the World Ocean. |
0704.0634 | Mark Bathe | Mark Bathe | A Finite Element framework for computation of protein normal modes and
mechanical response | null | null | null | null | q-bio.BM q-bio.QM | null | A coarse-grained computational procedure based on the Finite Element Method
is proposed to calculate the normal modes and mechanical response of proteins
and their supramolecular assemblies. Motivated by the elastic network model,
proteins are modeled as homogeneous isotropic elastic solids with volume
defined by their solvent-excluded surface. The discretized Finite Element
representation is obtained using a surface simplification algorithm that
facilitates the generation of models of arbitrary prescribed spatial
resolution. The procedure is applied to compute the normal modes of a mutant of
T4 phage lysozyme and of filamentous actin, as well as the critical Euler
buckling load of the latter when subject to axial compression. Results compare
favorably with all-atom normal mode analysis, the Rotation Translation Blocks
procedure, and experiment. The proposed methodology establishes a computational
framework for the calculation of protein mechanical response that facilitates
the incorporation of specific atomic-level interactions into the model,
including aqueous-electrolyte-mediated electrostatic effects. The procedure is
equally applicable to proteins with known atomic coordinates as it is to
electron density maps of proteins, protein complexes, and supramolecular
assemblies of unknown atomic structure.
| [
{
"created": "Wed, 4 Apr 2007 19:02:54 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bathe",
"Mark",
""
]
] | A coarse-grained computational procedure based on the Finite Element Method is proposed to calculate the normal modes and mechanical response of proteins and their supramolecular assemblies. Motivated by the elastic network model, proteins are modeled as homogeneous isotropic elastic solids with volume defined by their solvent-excluded surface. The discretized Finite Element representation is obtained using a surface simplification algorithm that facilitates the generation of models of arbitrary prescribed spatial resolution. The procedure is applied to compute the normal modes of a mutant of T4 phage lysozyme and of filamentous actin, as well as the critical Euler buckling load of the latter when subject to axial compression. Results compare favorably with all-atom normal mode analysis, the Rotation Translation Blocks procedure, and experiment. The proposed methodology establishes a computational framework for the calculation of protein mechanical response that facilitates the incorporation of specific atomic-level interactions into the model, including aqueous-electrolyte-mediated electrostatic effects. The procedure is equally applicable to proteins with known atomic coordinates as it is to electron density maps of proteins, protein complexes, and supramolecular assemblies of unknown atomic structure. |
2407.05076 | Yihang Zhou | Yihang Zhou | Metagenomic analysis reveals shared and distinguishing features in horse
and donkey gut microbiome and maternal resemblance of the microbiota in
hybrid equids | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Mammalian gut microbiomes are essential for host functions like digestion,
immunity, and nutrient utilization. This study examines the gut microbiome of
horses, donkeys, and their hybrids, mules and hinnies, to explore the role of
microbiomes in hybrid vigor. We performed whole-genome sequencing on rectal
microbiota from 18 equids, generating detailed microbiome assemblies. Our
analysis revealed significant differences between horse and donkey microbiomes,
with hybrids showing a pronounced maternal resemblance. Notably, Firmicutes
were more abundant in the horse-maternal group, while Fibrobacteres were richer
in the donkey-maternal group, indicating distinct digestive processes.
Functional annotations indicated metabolic differences, such as protein
synthesis in horses and energy metabolism in donkeys. Machine learning
predictions of probiotic species highlighted potential health benefits for each
maternal group. This study provides a high-resolution view of the equid gut
microbiome, revealing significant taxonomic and metabolic differences
influenced by maternal lineage, and offers insights into microbial
contributions to hybrid vigor.
| [
{
"created": "Sat, 6 Jul 2024 13:37:11 GMT",
"version": "v1"
}
] | 2024-07-09 | [
[
"Zhou",
"Yihang",
""
]
] | Mammalian gut microbiomes are essential for host functions like digestion, immunity, and nutrient utilization. This study examines the gut microbiome of horses, donkeys, and their hybrids, mules and hinnies, to explore the role of microbiomes in hybrid vigor. We performed whole-genome sequencing on rectal microbiota from 18 equids, generating detailed microbiome assemblies. Our analysis revealed significant differences between horse and donkey microbiomes, with hybrids showing a pronounced maternal resemblance. Notably, Firmicutes were more abundant in the horse-maternal group, while Fibrobacteres were richer in the donkey-maternal group, indicating distinct digestive processes. Functional annotations indicated metabolic differences, such as protein synthesis in horses and energy metabolism in donkeys. Machine learning predictions of probiotic species highlighted potential health benefits for each maternal group. This study provides a high-resolution view of the equid gut microbiome, revealing significant taxonomic and metabolic differences influenced by maternal lineage, and offers insights into microbial contributions to hybrid vigor. |
2310.05639 | Miroslav Semotiuk Vasilevich | M. V. Semotiuk, A. V. Palagin | Technocratic model of the human auditory system | 30 pages, 19 figures | null | null | null | q-bio.NC cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we investigate the phenomenon of transverse resonance and
transverse standing waves that occur within the cochlea of living organisms. It
is demonstrated that the predisposing factor for their occurrence is the
cochlear shape, which resembles a conical acoustic tube coiled into a spiral
and exhibits non-uniformities on its internal surface. This cochlear structure
facilitates the analysis of constituent sound signals akin to a spectrum
analyzer, with a corresponding interpretation of the physical processes
occurring in the auditory system. Additionally, we conclude that the cochlear
duct's scala media, composed of a system of membranes and the organ of Corti,
functions primarily as an information collection and amplification system along
the cochlear spiral. Collectively, these findings enable the development of a
novel, highly realistic wave model of the auditory system in living organisms
based on a technocratic approach within the scientific context.
| [
{
"created": "Mon, 9 Oct 2023 11:51:22 GMT",
"version": "v1"
}
] | 2023-10-10 | [
[
"Semotiuk",
"M. V.",
""
],
[
"Palagin",
"A. V.",
""
]
] | In this work, we investigate the phenomenon of transverse resonance and transverse standing waves that occur within the cochlea of living organisms. It is demonstrated that the predisposing factor for their occurrence is the cochlear shape, which resembles a conical acoustic tube coiled into a spiral and exhibits non-uniformities on its internal surface. This cochlear structure facilitates the analysis of constituent sound signals akin to a spectrum analyzer, with a corresponding interpretation of the physical processes occurring in the auditory system. Additionally, we conclude that the cochlear duct's scala media, composed of a system of membranes and the organ of Corti, functions primarily as an information collection and amplification system along the cochlear spiral. Collectively, these findings enable the development of a novel, highly realistic wave model of the auditory system in living organisms based on a technocratic approach within the scientific context. |
2306.03117 | Jiarui Lu | Jiarui Lu, Bozitao Zhong, Zuobai Zhang, Jian Tang | Str2Str: A Score-based Framework for Zero-shot Protein Conformation
Sampling | Published as a conference paper at ICLR 2024, see
https://openreview.net/forum?id=C4BikKsgmK | null | null | null | q-bio.QM cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamic nature of proteins is crucial for determining their biological
functions and properties, for which Monte Carlo (MC) and molecular dynamics
(MD) simulations stand as predominant tools to study such phenomena. By
utilizing empirically derived force fields, MC or MD simulations explore the
conformational space through numerically evolving the system via Markov chain
or Newtonian mechanics. However, the high-energy barrier of the force fields
can hamper the exploration of both methods by the rare event, resulting in
inadequately sampled ensemble without exhaustive running. Existing
learning-based approaches perform direct sampling yet heavily rely on
target-specific simulation data for training, which suffers from high data
acquisition cost and poor generalizability. Inspired by simulated annealing, we
propose Str2Str, a novel structure-to-structure translation framework capable
of zero-shot conformation sampling with roto-translation equivariant property.
Our method leverages an amortized denoising score matching objective trained on
general crystal structures and has no reliance on simulation data during both
training and inference. Experimental results across several benchmarking
protein systems demonstrate that Str2Str outperforms previous state-of-the-art
generative structure prediction models and can be orders of magnitude faster
compared to long MD simulations. Our open-source implementation is available at
https://github.com/lujiarui/Str2Str
| [
{
"created": "Mon, 5 Jun 2023 15:19:06 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Feb 2024 16:59:42 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Mar 2024 19:54:30 GMT",
"version": "v3"
}
] | 2024-03-13 | [
[
"Lu",
"Jiarui",
""
],
[
"Zhong",
"Bozitao",
""
],
[
"Zhang",
"Zuobai",
""
],
[
"Tang",
"Jian",
""
]
] | The dynamic nature of proteins is crucial for determining their biological functions and properties, for which Monte Carlo (MC) and molecular dynamics (MD) simulations stand as predominant tools to study such phenomena. By utilizing empirically derived force fields, MC or MD simulations explore the conformational space through numerically evolving the system via Markov chain or Newtonian mechanics. However, the high-energy barrier of the force fields can hamper the exploration of both methods by the rare event, resulting in inadequately sampled ensemble without exhaustive running. Existing learning-based approaches perform direct sampling yet heavily rely on target-specific simulation data for training, which suffers from high data acquisition cost and poor generalizability. Inspired by simulated annealing, we propose Str2Str, a novel structure-to-structure translation framework capable of zero-shot conformation sampling with roto-translation equivariant property. Our method leverages an amortized denoising score matching objective trained on general crystal structures and has no reliance on simulation data during both training and inference. Experimental results across several benchmarking protein systems demonstrate that Str2Str outperforms previous state-of-the-art generative structure prediction models and can be orders of magnitude faster compared to long MD simulations. Our open-source implementation is available at https://github.com/lujiarui/Str2Str |
1002.4485 | Ulrich S. Schwarz | Jakob Schluttig, Christian B. Korn, and Ulrich S. Schwarz (University
of Heidelberg, Institute for Theoretical Physics and Bioquant) | Role of anisotropy for protein-protein encounter | 4 pages, Revtex with 3 figures, to appear as a Rapid Communication in
Physical Review E | null | 10.1103/PhysRevE.81.030902 | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein-protein interactions comprise both transport and reaction steps.
During the transport step, anisotropy of proteins and their complexes is
important both for hydrodynamic diffusion and accessibility of the binding
site. Using a Brownian dynamics approach and extensive computer simulations, we
quantify the effect of anisotropy on the encounter rate of ellipsoidal
particles covered with spherical encounter patches. We show that the encounter
rate $k$ depends on the aspect ratios $\xi$ mainly through steric effects,
while anisotropic diffusion has only a little effect. Calculating analytically
the crossover times from anisotropic to isotropic diffusion in three
dimensions, we find that they are much smaller than typical protein encounter
times, in agreement with our numerical results.
| [
{
"created": "Wed, 24 Feb 2010 08:14:19 GMT",
"version": "v1"
}
] | 2015-05-18 | [
[
"Schluttig",
"Jakob",
"",
"University\n of Heidelberg, Institute for Theoretical Physics and Bioquant"
],
[
"Korn",
"Christian B.",
"",
"University\n of Heidelberg, Institute for Theoretical Physics and Bioquant"
],
[
"Schwarz",
"Ulrich S.",
"",
"University\n of Heidelberg, Institute for Theoretical Physics and Bioquant"
]
] | Protein-protein interactions comprise both transport and reaction steps. During the transport step, anisotropy of proteins and their complexes is important both for hydrodynamic diffusion and accessibility of the binding site. Using a Brownian dynamics approach and extensive computer simulations, we quantify the effect of anisotropy on the encounter rate of ellipsoidal particles covered with spherical encounter patches. We show that the encounter rate $k$ depends on the aspect ratios $\xi$ mainly through steric effects, while anisotropic diffusion has only a little effect. Calculating analytically the crossover times from anisotropic to isotropic diffusion in three dimensions, we find that they are much smaller than typical protein encounter times, in agreement with our numerical results. |
0803.2092 | Yoann Pigne | Fr\'ed\'eric Guinand (LITIS), Yoann Pign\'e (LITIS) | An Ant-Based Model for Multiple Sequence Alignment | null | Dans Large-Scale Scientific Computing - Large-Scale Scientific
Computing, 6th International Conference, LSSC 2007, Sozopol : Bulgarie (2007) | null | null | q-bio.QM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple sequence alignment is a key process in today's biology, and finding
a relevant alignment of several sequences is much more challenging than just
optimizing some improbable evaluation functions. Our approach for addressing
multiple sequence alignment focuses on the building of structures in a new
graph model: the factor graph model. This model relies on block-based
formulation of the original problem, formulation that seems to be one of the
most suitable ways for capturing evolutionary aspects of alignment. The
structures are implicitly built by a colony of ants laying down pheromones in
the factor graphs, according to relations between blocks belonging to the
different sequences.
| [
{
"created": "Fri, 14 Mar 2008 06:58:56 GMT",
"version": "v1"
}
] | 2008-12-18 | [
[
"Guinand",
"Frédéric",
"",
"LITIS"
],
[
"Pigné",
"Yoann",
"",
"LITIS"
]
] | Multiple sequence alignment is a key process in today's biology, and finding a relevant alignment of several sequences is much more challenging than just optimizing some improbable evaluation functions. Our approach for addressing multiple sequence alignment focuses on the building of structures in a new graph model: the factor graph model. This model relies on block-based formulation of the original problem, formulation that seems to be one of the most suitable ways for capturing evolutionary aspects of alignment. The structures are implicitly built by a colony of ants laying down pheromones in the factor graphs, according to relations between blocks belonging to the different sequences. |
1408.2114 | Peter Clote Peter Clote | Ivan Dotu, Juan Antonio Garcia-Martin, Betty L. Slinger, Vinodh
Mechery, Michelle M. Meyer, Peter Clote | Complete RNA inverse folding: computational design of functional
hammerhead ribozymes | 17 pages, 2 tables, 7 figures, final version to appear in Nucleic
Acids Research | Nucl. Acids Res. (2014) 42 (18): 11752-11762 | 10.1093/nar/gku740 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nanotechnology and synthetic biology currently constitute one of the most
innovative, interdisciplinary fields of research, poised to radically transform
society in the 21st century. This paper concerns the synthetic design of
ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can
determine all RNA sequences whose minimum free energy secondary structure is a
user-specified target structure. Using RNAiFold, we design ten cis-cleaving
hammerhead ribozymes, all of which are shown to be functional by a cleavage
assay. We additionally use RNAiFold to design a functional cis-cleaving
hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on
this small set of hammerheads suggests that cleavage rate of computationally
designed ribozymes may be correlated with positional entropy, ensemble defect,
structural flexibility/rigidity and related measures. Artificial ribozymes have
been designed in the past either manually or by SELEX (Systematic Evolution of
Ligands by Exponential Enrichment); however, this appears to be the first
purely computational design and experimental validation of novel functional
ribozymes. RNAiFold is available at
http://bioinformatics.bc.edu/clotelab/RNAiFold/.
| [
{
"created": "Sat, 9 Aug 2014 14:42:00 GMT",
"version": "v1"
}
] | 2015-05-29 | [
[
"Dotu",
"Ivan",
""
],
[
"Garcia-Martin",
"Juan Antonio",
""
],
[
"Slinger",
"Betty L.",
""
],
[
"Mechery",
"Vinodh",
""
],
[
"Meyer",
"Michelle M.",
""
],
[
"Clote",
"Peter",
""
]
] | Nanotechnology and synthetic biology currently constitute one of the most innovative, interdisciplinary fields of research, poised to radically transform society in the 21st century. This paper concerns the synthetic design of ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can determine all RNA sequences whose minimum free energy secondary structure is a user-specified target structure. Using RNAiFold, we design ten cis-cleaving hammerhead ribozymes, all of which are shown to be functional by a cleavage assay. We additionally use RNAiFold to design a functional cis-cleaving hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on this small set of hammerheads suggests that cleavage rate of computationally designed ribozymes may be correlated with positional entropy, ensemble defect, structural flexibility/rigidity and related measures. Artificial ribozymes have been designed in the past either manually or by SELEX (Systematic Evolution of Ligands by Exponential Enrichment); however, this appears to be the first purely computational design and experimental validation of novel functional ribozymes. RNAiFold is available at http://bioinformatics.bc.edu/clotelab/RNAiFold/. |
1605.07488 | Wei Zhang | Ke Tang and Wei Zhang | Transcriptional Similarity in Couples Reveals the Impact of Shared
Environment and Lifestyle on Gene Regulation through Modified Cytosines | null | null | 10.7717/peerj.2123. | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene expression is a complex and quantitative trait that is influenced by
both genetic and non-genetic regulators including environmental factors.
Evaluating the contribution of environment to gene expression regulation and
identifying which genes are more likely to be influenced by environmental
factors are important for understanding human complex traits. We hypothesize
that by living together as couples, there can be commonly co-regulated genes
that may reflect the shared living environment (e.g., diet, indoor air
pollutants, behavioral lifestyle). The lymphoblastoid cell lines (LCLs) derived
from unrelated couples of African ancestry (YRI, Yoruba people from Ibadan,
Nigeria) from the International HapMap Project provided a unique model for us
to characterize gene expression pattern in couples by comparing gene expression
levels between husbands and wives. Strikingly, 778 genes were found to show
much smaller variances in couples than random pairs of individuals at a false
discovery rate (FDR) of 5%. Since genetic variation between unrelated family
members in a general population is expected to be the same assuming a
random-mating society, non-genetic factors (e.g., epigenetic systems) are more
likely to be the mediators for the observed transcriptional similarity in
couples. We thus evaluated the contribution of modified cytosines to those
genes showing transcriptional similarity in couples as well as the
relationships of these CpG sites with other gene regulatory elements, such as
transcription factor binding sites (TFBS). Our findings suggested that
transcriptional similarity in couples likely reflected shared common
environment partially mediated through cytosine modifications.
| [
{
"created": "Tue, 24 May 2016 14:53:03 GMT",
"version": "v1"
}
] | 2019-06-27 | [
[
"Tang",
"Ke",
""
],
[
"Zhang",
"Wei",
""
]
] | Gene expression is a complex and quantitative trait that is influenced by both genetic and non-genetic regulators including environmental factors. Evaluating the contribution of environment to gene expression regulation and identifying which genes are more likely to be influenced by environmental factors are important for understanding human complex traits. We hypothesize that by living together as couples, there can be commonly co-regulated genes that may reflect the shared living environment (e.g., diet, indoor air pollutants, behavioral lifestyle). The lymphoblastoid cell lines (LCLs) derived from unrelated couples of African ancestry (YRI, Yoruba people from Ibadan, Nigeria) from the International HapMap Project provided a unique model for us to characterize gene expression pattern in couples by comparing gene expression levels between husbands and wives. Strikingly, 778 genes were found to show much smaller variances in couples than random pairs of individuals at a false discovery rate (FDR) of 5%. Since genetic variation between unrelated family members in a general population is expected to be the same assuming a random-mating society, non-genetic factors (e.g., epigenetic systems) are more likely to be the mediators for the observed transcriptional similarity in couples. We thus evaluated the contribution of modified cytosines to those genes showing transcriptional similarity in couples as well as the relationships of these CpG sites with other gene regulatory elements, such as transcription factor binding sites (TFBS). Our findings suggested that transcriptional similarity in couples likely reflected shared common environment partially mediated through cytosine modifications. |
2103.13120 | Mar\'ia Vallet-Regi | L. Casarrubios, N. Gomez-Cerezo, S. Sanchez-Salcedo, M.J. Feito, M.C.
Serrano, M. Saiz-Pardo, L. Ortega, D. de Pablo, I. Diaz-Guemes, B.
Fernandez-Tome, S. Enciso, F.M. Sanchez-Margallo, M.T. Portoles, D. Arcos, M.
Vallet-Regi | Silicon substituted hydroxyapatite/VEGF scaffolds stimulate bone
regeneration in osteoporotic sheep | 23 pages | Acta Biomaterialia 101, 544-553 (2019) | 10.1016/j.actbio.2019.10.033 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Silicon-substituted hydroxyapatite (SiHA) macroporous scaffolds have been
prepared by robocasting. In order to optimize their bone regeneration
properties, we have manufactured these scaffolds presenting different
microstructures: nanocrystalline and crystalline. Moreover, their surfaces have
been decorated with vascular endothelial growth factor (VEGF) to evaluate the
potential coupling between vascularization and bone regeneration. In vitro cell
culture tests evidence that nanocrystalline SiHA hinders pre-osteoblast
proliferation, whereas the presence of VEGF enhances the biological functions
of both endothelial cells and pre-osteoblasts. The bone regeneration capability
has been evaluated using an osteoporotic sheep model. In vivo observations
strongly correlate with in vitro cell culture tests. Those scaffolds made of
nanocrystalline SiHA were colonized by fibrous tissue, promoted inflammatory
response and fostered osteoclast recruitment. These observations discard
nanocrystalline SiHA as a suitable material for bone regeneration purposes. On
the contrary, those scaffolds made of crystalline SiHA and decorated with VEGF
exhibited bone regeneration properties, with high ossification degree, thicker
trabeculae and higher presence of osteoblasts and blood vessels. Considering
these results, macroporous scaffolds made of SiHA and decorated with VEGF are
suitable bone grafts for regeneration purposes, even in adverse pathological
scenarios such as osteoporosis.
| [
{
"created": "Wed, 24 Mar 2021 11:51:09 GMT",
"version": "v1"
}
] | 2021-03-25 | [
[
"Casarrubios",
"L.",
""
],
[
"Gomez-Cerezo",
"N.",
""
],
[
"Sanchez-Salcedo",
"S.",
""
],
[
"Feito",
"M. J.",
""
],
[
"Serrano",
"M. C.",
""
],
[
"Saiz-Pardo",
"M.",
""
],
[
"Ortega",
"L.",
""
],
[
"de Pablo",
"D.",
""
],
[
"Diaz-Guemes",
"I.",
""
],
[
"Fernandez-Tome",
"B.",
""
],
[
"Enciso",
"S.",
""
],
[
"Sanchez-Margallo",
"F. M.",
""
],
[
"Portoles",
"M. T.",
""
],
[
"Arcos",
"D.",
""
],
[
"Vallet-Regi",
"M.",
""
]
] | Silicon-substituted hydroxyapatite (SiHA) macroporous scaffolds have been prepared by robocasting. In order to optimize their bone regeneration properties, we have manufactured these scaffolds presenting different microstructures: nanocrystalline and crystalline. Moreover, their surfaces have been decorated with vascular endothelial growth factor (VEGF) to evaluate the potential coupling between vascularization and bone regeneration. In vitro cell culture tests evidence that nanocrystalline SiHA hinders pre-osteoblast proliferation, whereas the presence of VEGF enhances the biological functions of both endothelial cells and pre-osteoblasts. The bone regeneration capability has been evaluated using an osteoporotic sheep model. In vivo observations strongly correlate with in vitro cell culture tests. Those scaffolds made of nanocrystalline SiHA were colonized by fibrous tissue, promoted inflammatory response and fostered osteoclast recruitment. These observations discard nanocrystalline SiHA as a suitable material for bone regeneration purposes. On the contrary, those scaffolds made of crystalline SiHA and decorated with VEGF exhibited bone regeneration properties, with high ossification degree, thicker trabeculae and higher presence of osteoblasts and blood vessels. Considering these results, macroporous scaffolds made of SiHA and decorated with VEGF are suitable bone grafts for regeneration purposes, even in adverse pathological scenarios such as osteoporosis. |
2007.13469 | Niharika Pandala | Niharika Pandala (1), Casey A. Cole (1), Devaun McFarland (1), Anita
Nag (2), Homayoun Valafar (1) ((1) University of South Carolina Columbia, (2)
University of South Carolina Upstate) | A Preliminary Investigation in the Molecular Basis of Host Shutoff
Mechanism in SARS-CoV | Consists of 9 pages, 8 figures and 7 tables. 11th ACM Conference on
Bioinformatics, Computational Biology, and Health Informatics 2020 | null | null | null | q-bio.BM cs.CE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent events leading to the worldwide pandemic of COVID-19 have demonstrated
the effective use of genomic sequencing technologies to establish the genetic
sequence of this virus. In contrast, the COVID-19 pandemic has demonstrated the
absence of computational approaches to understand the molecular basis of this
infection rapidly. Here we present an integrated approach to the study of the
nsp1 protein in SARS-CoV-1, which plays an essential role in maintaining the
expression of viral proteins and further disabling the host protein expression,
also known as the host shutoff mechanism. We present three independent methods
of evaluating two potential binding sites speculated to participate in host
shutoff by nsp1. We have combined results from computed models of nsp1, with
deep mining of all existing protein structures (using PDBMine), and binding
site recognition (using msTALI) to examine the two sites consisting of residues
55-59 and 73-80. Based on our preliminary results, we conclude that the
residues 73-80 appear as the regions that facilitate the critical initial steps
in the function of nsp1. Given the 90% sequence identity between nsp1 from
SARS-CoV-1 and SARS-CoV-2, we conjecture the same critical initiation step in
the function of COVID-19 nsp1.
| [
{
"created": "Thu, 23 Jul 2020 21:56:07 GMT",
"version": "v1"
}
] | 2020-07-28 | [
[
"Pandala",
"Niharika",
""
],
[
"Cole",
"Casey A.",
""
],
[
"McFarland",
"Devaun",
""
],
[
"Nag",
"Anita",
""
],
[
"Valafar",
"Homayoun",
""
]
] | Recent events leading to the worldwide pandemic of COVID-19 have demonstrated the effective use of genomic sequencing technologies to establish the genetic sequence of this virus. In contrast, the COVID-19 pandemic has demonstrated the absence of computational approaches to understand the molecular basis of this infection rapidly. Here we present an integrated approach to the study of the nsp1 protein in SARS-CoV-1, which plays an essential role in maintaining the expression of viral proteins and further disabling the host protein expression, also known as the host shutoff mechanism. We present three independent methods of evaluating two potential binding sites speculated to participate in host shutoff by nsp1. We have combined results from computed models of nsp1, with deep mining of all existing protein structures (using PDBMine), and binding site recognition (using msTALI) to examine the two sites consisting of residues 55-59 and 73-80. Based on our preliminary results, we conclude that the residues 73-80 appear as the regions that facilitate the critical initial steps in the function of nsp1. Given the 90% sequence identity between nsp1 from SARS-CoV-1 and SARS-CoV-2, we conjecture the same critical initiation step in the function of COVID-19 nsp1. |
1906.11679 | Anna Melnykova | Vincent Calvez (ICJ), Susely Figueroa Iglesias (IMT), H\'el\`ene
Hivert (ICJ), Sylvie M\'el\'eard (CMAP), Anna Melnykova (LJK, AGM), Samuel
Nordmann (CAMS) | Horizontal gene transfer: numerical comparison between stochastic and
deterministic approaches | null | null | 10.1051/proc/202067009 | null | q-bio.PE math.AP math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Horizontal gene Transfer (HT) denotes the transmission of genetic material
between two living organisms, while the vertical transmission refers to a DNA
transfer from parents to their offspring. Consistent experimental evidence
reports that this phenomenon plays an essential role in the evolution of certain
bacteria. In particular, HT is believed to be the main instrument for
developing antibiotic resistance. In this work, we consider several models
which describe this phenomenon: a stochastic jump process (individual-based)
and the deterministic nonlinear integro-differential equation obtained as a
limit for large populations. We also consider a Hamilton-Jacobi equation,
obtained as a limit of the deterministic model under the assumption of small
mutations. The goal of this paper is to compare these models with the help of
numerical simulations. More specifically, our goal is to understand to which
extent the Hamilton-Jacobi model reproduces the qualitative behavior of the
stochastic model and the phenomenon of evolutionary rescue in particular.
| [
{
"created": "Thu, 27 Jun 2019 14:23:28 GMT",
"version": "v1"
}
] | 2021-02-12 | [
[
"Calvez",
"Vincent",
"",
"ICJ"
],
[
"Iglesias",
"Susely Figueroa",
"",
"IMT"
],
[
"Hivert",
"Hélène",
"",
"ICJ"
],
[
"Méléard",
"Sylvie",
"",
"CMAP"
],
[
"Melnykova",
"Anna",
"",
"LJK, AGM"
],
[
"Nordmann",
"Samuel",
"",
"CAMS"
]
] | Horizontal gene Transfer (HT) denotes the transmission of genetic material between two living organisms, while the vertical transmission refers to a DNA transfer from parents to their offspring. Consistent experimental evidence reports that this phenomenon plays an essential role in the evolution of certain bacteria. In particular, HT is believed to be the main instrument for developing antibiotic resistance. In this work, we consider several models which describe this phenomenon: a stochastic jump process (individual-based) and the deterministic nonlinear integro-differential equation obtained as a limit for large populations. We also consider a Hamilton-Jacobi equation, obtained as a limit of the deterministic model under the assumption of small mutations. The goal of this paper is to compare these models with the help of numerical simulations. More specifically, our goal is to understand to which extent the Hamilton-Jacobi model reproduces the qualitative behavior of the stochastic model and the phenomenon of evolutionary rescue in particular. |
1612.01396 | Antti Niemi | Jiaojiao Liu, Jin Dai, Jianfeng He, Antti J. Niemi, Nevena Ilieva | Towards multistage modelling of protein dynamics with monomeric Myc
oncoprotein as an example | 17 figures | Phys. Rev. E 95, 032406 (2017) | 10.1103/PhysRevE.95.032406 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to combine a mean field approach with all atom molecular dynamics
into a multistage algorithm that can model protein folding and dynamics over
very long time periods yet with atomic level precision. As an example we
investigate an isolated monomeric Myc oncoprotein that has been implicated in
carcinomas including those in colon, breast and lungs. Under physiological
conditions a monomeric Myc is presumed to be an example of intrinsically
disordered proteins, that pose a serious challenge to existing modelling
techniques. We argue that a room temperature monomeric Myc is in a dynamical
state, it oscillates between different conformations that we identify. For this
we adopt the C-alpha backbone of Myc in a crystallographic heteromer as an
initial Ansatz for the monomeric structure. We construct a multisoliton of the
pertinent Landau free energy, to describe the C-alpha profile with ultra high
precision. We use Glauber dynamics to resolve how the multisoliton responds to
repeated increases and decreases in ambient temperature. We confirm that the
initial structure is unstable in isolation. We reveal a highly degenerate
ground state landscape, an attractive set towards which Glauber dynamics
converges in the limit of vanishing ambient temperature. We analyse the thermal
stability of this Glauber attractor using room temperature molecular dynamics.
We identify and scrutinise a particularly stable subset in which the two
helical segments of the original multisoliton align in parallel, next to each
other. During the MD time evolution of a representative structure from this
subset, we observe intermittent quasiparticle oscillations along the C-terminal
alpha-helix, some of which resemble a translating Davydov's Amide-I soliton. We
propose that the presence of oscillatory motion is in line with the expected
intrinsically disordered character of Myc.
| [
{
"created": "Mon, 5 Dec 2016 15:30:55 GMT",
"version": "v1"
}
] | 2017-03-15 | [
[
"Liu",
"Jiaojiao",
""
],
[
"Dai",
"Jin",
""
],
[
"He",
"Jianfeng",
""
],
[
"Niemi",
"Antti J.",
""
],
[
"Ilieva",
"Nevena",
""
]
] | We propose to combine a mean field approach with all atom molecular dynamics into a multistage algorithm that can model protein folding and dynamics over very long time periods yet with atomic level precision. As an example we investigate an isolated monomeric Myc oncoprotein that has been implicated in carcinomas including those in colon, breast and lungs. Under physiological conditions a monomeric Myc is presumed to be an example of intrinsically disordered proteins, that pose a serious challenge to existing modelling techniques. We argue that a room temperature monomeric Myc is in a dynamical state, it oscillates between different conformations that we identify. For this we adopt the C-alpha backbone of Myc in a crystallographic heteromer as an initial Ansatz for the monomeric structure. We construct a multisoliton of the pertinent Landau free energy, to describe the C-alpha profile with ultra high precision. We use Glauber dynamics to resolve how the multisoliton responds to repeated increases and decreases in ambient temperature. We confirm that the initial structure is unstable in isolation. We reveal a highly degenerate ground state landscape, an attractive set towards which Glauber dynamics converges in the limit of vanishing ambient temperature. We analyse the thermal stability of this Glauber attractor using room temperature molecular dynamics. We identify and scrutinise a particularly stable subset in which the two helical segments of the original multisoliton align in parallel, next to each other. During the MD time evolution of a representative structure from this subset, we observe intermittent quasiparticle oscillations along the C-terminal alpha-helix, some of which resemble a translating Davydov's Amide-I soliton. We propose that the presence of oscillatory motion is in line with the expected intrinsically disordered character of Myc. |
1006.1730 | Atheer Matroud | A. A. Matroud, M. D. Hendy and C. P. Tuffley | NTRFINDER: A Software Tool to Find Nested Tandem Repeats | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the software tool NTRFinder to find the complex repetitive
structure in DNA we call a nested tandem repeat (NTR). An NTR is a recurrence
of two or more distinct tandem motifs interspersed with each other. We propose
that nested tandem repeats can be used as phylogenetic and population markers.
We have tested our algorithm on both real and simulated data, and present some
real nested tandem repeats of interest. We discuss how the NTR found in the
ribosomal DNA of taro (Colocasia esculenta) may assist in determining the
cultivation prehistory of this ancient staple food crop. NTRFinder can be
downloaded from http://www.maths.otago.ac.nz/~aamatroud/.
| [
{
"created": "Wed, 9 Jun 2010 07:49:37 GMT",
"version": "v1"
},
{
"created": "Tue, 3 May 2011 02:40:13 GMT",
"version": "v2"
}
] | 2011-05-04 | [
[
"Matroud",
"A. A.",
""
],
[
"Hendy",
"M. D.",
""
],
[
"Tuffley",
"C. P.",
""
]
] | We introduce the software tool NTRFinder to find the complex repetitive structure in DNA we call a nested tandem repeat (NTR). An NTR is a recurrence of two or more distinct tandem motifs interspersed with each other. We propose that nested tandem repeats can be used as phylogenetic and population markers. We have tested our algorithm on both real and simulated data, and present some real nested tandem repeats of interest. We discuss how the NTR found in the ribosomal DNA of taro (Colocasia esculenta) may assist in determining the cultivation prehistory of this ancient staple food crop. NTRFinder can be downloaded from http://www.maths.otago.ac.nz/~aamatroud/. |
1103.4090 | Luis Rocha | An\'alia Louren\c{c}o, Michael Conover, Andrew Wong, Azadeh
Nematzadeh, Fengxia Pan, Hagit Shatkay, Luis M. Rocha | A Linear Classifier Based on Entity Recognition Tools and a Statistical
Approach to Method Extraction in the Protein-Protein Interaction Literature | BMC Bioinformatics. In Press | null | null | null | q-bio.QM cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We participated in the Article Classification and the Interaction Method
subtasks (ACT and IMT, respectively) of the Protein-Protein Interaction task of
the BioCreative III Challenge. For the ACT, we pursued an extensive testing of
available Named Entity Recognition and dictionary tools, and used the most
promising ones to extend our Variable Trigonometric Threshold linear
classifier. For the IMT, we experimented with a primarily statistical approach,
as opposed to employing a deeper natural language processing strategy. Finally,
we also studied the benefits of integrating the method extraction approach that
we have used for the IMT into the ACT pipeline. For the ACT, our linear article
classifier leads to a ranking and classification performance significantly
higher than all the reported submissions. For the IMT, our results are
comparable to those of other systems, which took very different approaches. For
the ACT, we show that the use of named entity recognition tools leads to a
substantial improvement in the ranking and classification of articles relevant
to protein-protein interaction. Thus, we show that our substantially expanded
linear classifier is a very competitive classifier in this domain. Moreover,
this classifier produces interpretable surfaces that can be understood as
"rules" for human understanding of the classification. In terms of the IMT
task, in contrast to other participants, our approach focused on identifying
sentences that are likely to bear evidence for the application of a PPI
detection method, rather than on classifying a document as relevant to a
method. As BioCreative III did not perform an evaluation of the evidence
provided by the system, we have conducted a separate assessment; the evaluators
agree that our tool is indeed effective in detecting relevant evidence for PPI
detection methods.
| [
{
"created": "Mon, 21 Mar 2011 17:33:32 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2011 17:46:37 GMT",
"version": "v2"
}
] | 2011-04-25 | [
[
"Lourenço",
"Anália",
""
],
[
"Conover",
"Michael",
""
],
[
"Wong",
"Andrew",
""
],
[
"Nematzadeh",
"Azadeh",
""
],
[
"Pan",
"Fengxia",
""
],
[
"Shatkay",
"Hagit",
""
],
[
"Rocha",
"Luis M.",
""
]
] | We participated in the Article Classification and the Interaction Method subtasks (ACT and IMT, respectively) of the Protein-Protein Interaction task of the BioCreative III Challenge. For the ACT, we pursued an extensive testing of available Named Entity Recognition and dictionary tools, and used the most promising ones to extend our Variable Trigonometric Threshold linear classifier. For the IMT, we experimented with a primarily statistical approach, as opposed to employing a deeper natural language processing strategy. Finally, we also studied the benefits of integrating the method extraction approach that we have used for the IMT into the ACT pipeline. For the ACT, our linear article classifier leads to a ranking and classification performance significantly higher than all the reported submissions. For the IMT, our results are comparable to those of other systems, which took very different approaches. For the ACT, we show that the use of named entity recognition tools leads to a substantial improvement in the ranking and classification of articles relevant to protein-protein interaction. Thus, we show that our substantially expanded linear classifier is a very competitive classifier in this domain. Moreover, this classifier produces interpretable surfaces that can be understood as "rules" for human understanding of the classification. In terms of the IMT task, in contrast to other participants, our approach focused on identifying sentences that are likely to bear evidence for the application of a PPI detection method, rather than on classifying a document as relevant to a method. As BioCreative III did not perform an evaluation of the evidence provided by the system, we have conducted a separate assessment; the evaluators agree that our tool is indeed effective in detecting relevant evidence for PPI detection methods. |
1606.03482 | Mathieu Delangle | Mathieu Delangle, \'Emilie Poirson, Jean Fran\c{c}ois Petiot | Study of human accessibility: physical tests versus numerical simulation | null | Design conference , May 2014, Cavtat, Croatia | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consideration of physical dimensions of the user population is essential to
design adapted environment. This variability in body dimensions (called
"anthropometry") is involved in design tools commonly used today to assess
user's accommodation (physical mock-ups, population models, database, boundary
manikins, hybrid methods or digital human modeling). Databases are created from
campaigns of measurement. Besides the fact that such measures are costly in
time and money, they give more "static" measures of man. They do not take into
account possible stretching limbs that could allow increased accessibility.
This paper presents a methodology for human body modeling, in a dynamic way,
not static. The methodology allows to highlight influences of design and human
behaviour on reach skills, directly induced by the interaction with real
prototypes and not just considered by human physical dimensions. The first part
of the method is to create a database of measurements (arms length, hip
breadth, etc.). From these data, a model of accessibility is proposed. The
accessibility field is determined purely numerically. In parallel, an
experiment is set up to measure the extension of the accessibility field, with
the same people. A comparison of the results is then performed and a new model
of the human body is proposed.
| [
{
"created": "Tue, 22 Dec 2015 13:34:34 GMT",
"version": "v1"
}
] | 2016-06-14 | [
[
"Delangle",
"Mathieu",
""
],
[
"Poirson",
"Émilie",
""
],
[
"Petiot",
"Jean François",
""
]
] | Consideration of physical dimensions of the user population is essential to design adapted environment. This variability in body dimensions (called "anthropometry") is involved in design tools commonly used today to assess user's accommodation (physical mock-ups, population models, database, boundary manikins, hybrid methods or digital human modeling). Databases are created from campaigns of measurement. Besides the fact that such measures are costly in time and money, they give more "static" measures of man. They do not take into account possible stretching limbs that could allow increased accessibility. This paper presents a methodology for human body modeling, in a dynamic way, not static. The methodology allows to highlight influences of design and human behaviour on reach skills, directly induced by the interaction with real prototypes and not just considered by human physical dimensions. The first part of the method is to create a database of measurements (arms length, hip breadth, etc.). From these data, a model of accessibility is proposed. The accessibility field is determined purely numerically. In parallel, an experiment is set up to measure the extension of the accessibility field, with the same people. A comparison of the results is then performed and a new model of the human body is proposed. |
1505.01289 | Jicun Wang-Michelitsch | Jicun Wang-Michelitsch and Thomas M. Michelitsch | Misrepair mechanism in the development of atherosclerotic plaques | 8 pages, 3 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Atherosclerosis is a disease characterized by the development of
atherosclerotic plaques (APs) in arterial endothelium. The APs in part of an
arterial wall are inhomogeneous on size and on distribution. In order to
understand this in-homogeneity, the pathology of APs is analyzed by Misrepair
mechanism, a mechanism proposed in our Misrepair-accumulation aging theory. I.
In general, development of an AP is a result of repair of injured endothelium.
Because of infusion and deposition of lipids beneath endothelial cells, the
repair has to be achieved by altered remodeling of local endothelium. Such a
repair is a Misrepair. During repair, smooth muscular cells (SMCs) are
clustered and collagen fibers are produced to the lesion of endothelium for
reconstructing an anchoring structure for endothelial cells and for forming a
barrier to isolate the lipids. II. Altered remodeling (Misrepair) makes the
local part of endothelium have increased damage-sensitivity and reduced
repair-efficiency. Thus, this part of endothelium will have increased risk for
injuries, lipid-infusion, and Misrepairs. Focalized accumulation of Misrepairs
and focalized deposition of lipids result in development of a plaque. III. By a
vicious circle between lipid-infusion and endothelium-Misrepair, growing of an
AP is self-accelerating. Namely, once an AP develops, it grows in an increasing
rate with time and it does not stop growing. Within part of an arterial wall,
older APs grow faster than younger ones. The oldest and the biggest AP is the
most threatening one in narrowing local vessel lumen. Therefore, the
self-accelerated growing of an AP is a fatal factor in atherosclerosis. In
conclusion, development of APs is focalized and self-accelerating, because it
is a result of accumulation of Misrepairs of endothelium.
| [
{
"created": "Wed, 6 May 2015 08:59:49 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2017 09:08:23 GMT",
"version": "v2"
}
] | 2017-09-07 | [
[
"Wang-Michelitsch",
"Jicun",
""
],
[
"Michelitsch",
"Thomas M.",
""
]
] | Atherosclerosis is a disease characterized by the development of atherosclerotic plaques (APs) in arterial endothelium. The APs in part of an arterial wall are inhomogeneous on size and on distribution. In order to understand this in-homogeneity, the pathology of APs is analyzed by Misrepair mechanism, a mechanism proposed in our Misrepair-accumulation aging theory. I. In general, development of an AP is a result of repair of injured endothelium. Because of infusion and deposition of lipids beneath endothelial cells, the repair has to be achieved by altered remodeling of local endothelium. Such a repair is a Misrepair. During repair, smooth muscular cells (SMCs) are clustered and collagen fibers are produced to the lesion of endothelium for reconstructing an anchoring structure for endothelial cells and for forming a barrier to isolate the lipids. II. Altered remodeling (Misrepair) makes the local part of endothelium have increased damage-sensitivity and reduced repair-efficiency. Thus, this part of endothelium will have increased risk for injuries, lipid-infusion, and Misrepairs. Focalized accumulation of Misrepairs and focalized deposition of lipids result in development of a plaque. III. By a vicious circle between lipid-infusion and endothelium-Misrepair, growing of an AP is self-accelerating. Namely, once an AP develops, it grows in an increasing rate with time and it does not stop growing. Within part of an arterial wall, older APs grow faster than younger ones. The oldest and the biggest AP is the most threatening one in narrowing local vessel lumen. Therefore, the self-accelerated growing of an AP is a fatal factor in atherosclerosis. In conclusion, development of APs is focalized and self-accelerating, because it is a result of accumulation of Misrepairs of endothelium. |
1103.0097 | Kavita Jain | Kavita Jain and Sarada Seetharaman | Nonlinear deterministic equations in biological evolution | Invited review for J. Nonlin. Math. Phys | J. Nonlin. Math. Phys. 18, 321 (2011) | 10.1142/S1402925111001556 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review models of biological evolution in which the population frequency
changes deterministically with time. If the population is self-replicating,
although the equations for simple prototypes can be linearised, nonlinear
equations arise in many complex situations. For sexual populations, even in the
simplest setting, the equations are necessarily nonlinear due to the mixing of
the parental genetic material. The solutions of such nonlinear equations
display interesting features such as multiple equilibria and phase transitions.
We mainly discuss those models for which an analytical understanding of such
nonlinear equations is available.
| [
{
"created": "Tue, 1 Mar 2011 07:58:44 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Apr 2011 11:16:25 GMT",
"version": "v2"
}
] | 2015-05-27 | [
[
"Jain",
"Kavita",
""
],
[
"Seetharaman",
"Sarada",
""
]
] | We review models of biological evolution in which the population frequency changes deterministically with time. If the population is self-replicating, although the equations for simple prototypes can be linearised, nonlinear equations arise in many complex situations. For sexual populations, even in the simplest setting, the equations are necessarily nonlinear due to the mixing of the parental genetic material. The solutions of such nonlinear equations display interesting features such as multiple equilibria and phase transitions. We mainly discuss those models for which an analytical understanding of such nonlinear equations is available. |
1609.09051 | Michelle Kendall | Michelle Kendall and Diepreye Ayabina and Caroline Colijn | Estimating transmission from genetic and epidemiological data: a metric
to compare transmission trees | 17 pages, 8 figures | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstructing who infected whom is a central challenge in analysing
epidemiological data. Recently, advances in sequencing technology have led to
increasing interest in Bayesian approaches to inferring who infected whom using
genetic data from pathogens. The logic behind such approaches is that isolates
that are nearly genetically identical are more likely to have been recently
transmitted than those that are very different. A number of methods have been
developed to perform this inference. However, testing their convergence,
examining posterior sets of transmission trees and comparing methods'
performance are challenged by the fact that the object of inference - the
transmission tree - is a complicated discrete structure. We introduce a metric
on transmission trees to quantify distances between them. The metric can
accommodate trees with unsampled individuals, and highlights differences in the
source case and in the number of infections per infector. We illustrate its
performance on simple simulated scenarios and on posterior transmission trees
from a TB outbreak. We find that the metric reveals where the posterior is
sensitive to the priors, and where collections of trees are composed of
distinct clusters. We use the metric to define median trees summarising these
clusters. Quantitative tools to compare transmission trees to each other will
be required for assessing MCMC convergence, exploring posterior trees and
benchmarking diverse methods as this field continues to mature.
| [
{
"created": "Wed, 28 Sep 2016 19:45:05 GMT",
"version": "v1"
}
] | 2016-09-30 | [
[
"Kendall",
"Michelle",
""
],
[
"Ayabina",
"Diepreye",
""
],
[
"Colijn",
"Caroline",
""
]
] | Reconstructing who infected whom is a central challenge in analysing epidemiological data. Recently, advances in sequencing technology have led to increasing interest in Bayesian approaches to inferring who infected whom using genetic data from pathogens. The logic behind such approaches is that isolates that are nearly genetically identical are more likely to have been recently transmitted than those that are very different. A number of methods have been developed to perform this inference. However, testing their convergence, examining posterior sets of transmission trees and comparing methods' performance are challenged by the fact that the object of inference - the transmission tree - is a complicated discrete structure. We introduce a metric on transmission trees to quantify distances between them. The metric can accommodate trees with unsampled individuals, and highlights differences in the source case and in the number of infections per infector. We illustrate its performance on simple simulated scenarios and on posterior transmission trees from a TB outbreak. We find that the metric reveals where the posterior is sensitive to the priors, and where collections of trees are composed of distinct clusters. We use the metric to define median trees summarising these clusters. Quantitative tools to compare transmission trees to each other will be required for assessing MCMC convergence, exploring posterior trees and benchmarking diverse methods as this field continues to mature. |
1604.06299 | Christoph Adami | Thomas LaBar and Christoph Adami | Different evolutionary paths to complexity for small and large
populations of digital organisms | 22 pages, 5 figures, 7 Supporting Figures and 1 Supporting Table | PLoS Computational Biology 12 (2016) e1005066 | 10.1371/journal.pcbi.1005066 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major aim of evolutionary biology is to explain the respective roles of
adaptive versus non-adaptive changes in the evolution of complexity. While
selection is certainly responsible for the spread and maintenance of complex
phenotypes, this does not automatically imply that strong selection enhances
the chance for the emergence of novel traits, that is, the origination of
complexity. Population size is one parameter that alters the relative
importance of adaptive and non-adaptive processes: as population size
decreases, selection weakens and genetic drift grows in importance. Because of
this relationship, many theories invoke a role for population size in the
evolution of complexity. Such theories are difficult to test empirically
because of the time required for the evolution of complexity in biological
populations. Here, we used digital experimental evolution to test whether large
or small asexual populations tend to evolve greater complexity. We find that
both small and large---but not intermediate-sized---populations are favored to
evolve larger genomes, which provides the opportunity for subsequent increases
in phenotypic complexity. However, small and large populations followed
different evolutionary paths towards these novel traits. Small populations
evolved larger genomes by fixing slightly deleterious insertions, while large
populations fixed rare beneficial insertions that increased genome size. These
results demonstrate that genetic drift can lead to the evolution of complexity
in small populations and that purifying selection is not powerful enough to
prevent the evolution of complexity in large populations.
| [
{
"created": "Thu, 21 Apr 2016 13:36:33 GMT",
"version": "v1"
}
] | 2017-01-17 | [
[
"LaBar",
"Thomas",
""
],
[
"Adami",
"Christoph",
""
]
] | A major aim of evolutionary biology is to explain the respective roles of adaptive versus non-adaptive changes in the evolution of complexity. While selection is certainly responsible for the spread and maintenance of complex phenotypes, this does not automatically imply that strong selection enhances the chance for the emergence of novel traits, that is, the origination of complexity. Population size is one parameter that alters the relative importance of adaptive and non-adaptive processes: as population size decreases, selection weakens and genetic drift grows in importance. Because of this relationship, many theories invoke a role for population size in the evolution of complexity. Such theories are difficult to test empirically because of the time required for the evolution of complexity in biological populations. Here, we used digital experimental evolution to test whether large or small asexual populations tend to evolve greater complexity. We find that both small and large---but not intermediate-sized---populations are favored to evolve larger genomes, which provides the opportunity for subsequent increases in phenotypic complexity. However, small and large populations followed different evolutionary paths towards these novel traits. Small populations evolved larger genomes by fixing slightly deleterious insertions, while large populations fixed rare beneficial insertions that increased genome size. These results demonstrate that genetic drift can lead to the evolution of complexity in small populations and that purifying selection is not powerful enough to prevent the evolution of complexity in large populations. |
1602.04776 | Vince Grolmusz | Bal\'azs Szalkai and Csaba Kerepesi and B\'alint Varga and Vince
Grolmusz | Parameterizable Consensus Connectomes from the Human Connectome Project:
The Budapest Reference Connectome Server v3.0 | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Connections of the living human brain, on a macroscopic scale, can be mapped
by a diffusion MR imaging based workflow. Since the same anatomic regions can
be corresponded between distinct brains, one can compare the presence or the
absence of the edges, connecting the very same two anatomic regions, among
multiple cortices. Previously, we have constructed the consensus braingraphs on
1015 vertices first in five, then in 96 subjects in the Budapest Reference
Connectome Server v1.0 and v2.0, respectively. Here we report the construction
of the version 3.0 of the server, generating the common edges of the
connectomes of variously parameterizable subsets of the 1015-vertex connectomes
of 477 subjects of the Human Connectome Project's 500-subject release. The
consensus connectomes are downloadable in csv and GraphML formats, and they are
also visualized on the server's page. The consensus connectomes of the server
can be considered as the "average, healthy" human connectome since all of their
connections are present in at least $k$ subjects, where the default value of
$k=209$, but it can also be modified freely at the web server. The webserver is
available at \url{http://connectome.pitgroup.org}.
| [
{
"created": "Mon, 15 Feb 2016 19:38:43 GMT",
"version": "v1"
}
] | 2016-02-16 | [
[
"Szalkai",
"Balázs",
""
],
[
"Kerepesi",
"Csaba",
""
],
[
"Varga",
"Bálint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | Connections of the living human brain, on a macroscopic scale, can be mapped by a diffusion MR imaging based workflow. Since the same anatomic regions can be corresponded between distinct brains, one can compare the presence or the absence of the edges, connecting the very same two anatomic regions, among multiple cortices. Previously, we have constructed the consensus braingraphs on 1015 vertices first in five, then in 96 subjects in the Budapest Reference Connectome Server v1.0 and v2.0, respectively. Here we report the construction of the version 3.0 of the server, generating the common edges of the connectomes of variously parameterizable subsets of the 1015-vertex connectomes of 477 subjects of the Human Connectome Project's 500-subject release. The consensus connectomes are downloadable in csv and GraphML formats, and they are also visualized on the server's page. The consensus connectomes of the server can be considered as the "average, healthy" human connectome since all of their connections are present in at least $k$ subjects, where the default value of $k=209$, but it can also be modified freely at the web server. The webserver is available at \url{http://connectome.pitgroup.org}. |
2008.10758 | Manoj P. Samanta | Anne Gvozdjak and Manoj P. Samanta | Genes Preferring Non-AUG Start Codons in Bacteria | null | null | null | null | q-bio.GN q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Here we investigate translational regulation in bacteria by analyzing the
distribution of start codons in fully assembled genomes. We report 36 genes
(infC, rpoC, rnpA, etc.) showing a preference for non-AUG start codons in
evolutionarily diverse phyla ("non-AUG genes"). Most of the non-AUG genes are
functionally associated with translation, transcription or replication. In E.
coli, the percentage of essential genes among these 36 is significantly higher
than among all genes. Furthermore, the functional distribution of these genes
suggests that non-AUG start codons may be used to reduce gene expression during
starvation conditions, possibly through translational autoregulation or
IF3-mediated regulation.
| [
{
"created": "Tue, 25 Aug 2020 00:28:12 GMT",
"version": "v1"
}
] | 2020-08-26 | [
[
"Gvozdjak",
"Anne",
""
],
[
"Samanta",
"Manoj P.",
""
]
] | Here we investigate translational regulation in bacteria by analyzing the distribution of start codons in fully assembled genomes. We report 36 genes (infC, rpoC, rnpA, etc.) showing a preference for non-AUG start codons in evolutionarily diverse phyla ("non-AUG genes"). Most of the non-AUG genes are functionally associated with translation, transcription or replication. In E. coli, the percentage of essential genes among these 36 is significantly higher than among all genes. Furthermore, the functional distribution of these genes suggests that non-AUG start codons may be used to reduce gene expression during starvation conditions, possibly through translational autoregulation or IF3-mediated regulation. |
1608.05494 | Joshua M. Deutsch | J. M. Deutsch | Associative memory by collective regulation of non-coding RNA | 7 pages, 2 figures | null | null | null | q-bio.MN q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The majority of mammalian genomic transcripts do not directly code for
proteins and it is currently believed that most of these are not under
evolutionary constraint. However given the abundance of non-coding RNA (ncRNA) and
its strong affinity for inter-RNA binding, these molecules have the potential
to regulate proteins in a highly distributed way, similar to artificial neural
networks. We explore this analogy by devising a simple architecture for a
biochemical network that can function as an associative memory. We show that
the steady state solution for this chemical network has the same structure as
an associative memory neural network model. By allowing the choice of
equilibrium constants between different ncRNA species, the concentration of
unbound ncRNA can be made to follow any pattern and many patterns can be stored
simultaneously. The model is studied numerically and within certain parameter
regimes it functions as predicted. Even if the starting concentration pattern
is quite different, it is shown to converge to the original pattern most of the
time. The network is also robust to mutations in equilibrium constants. This
calls into question the criteria for deciding if a sequence is under
evolutionary constraint.
| [
{
"created": "Fri, 19 Aug 2016 05:07:27 GMT",
"version": "v1"
}
] | 2016-08-22 | [
[
"Deutsch",
"J. M.",
""
]
] | The majority of mammalian genomic transcripts do not directly code for proteins and it is currently believed that most of these are not under evolutionary constraint. However given the abundance of non-coding RNA (ncRNA) and its strong affinity for inter-RNA binding, these molecules have the potential to regulate proteins in a highly distributed way, similar to artificial neural networks. We explore this analogy by devising a simple architecture for a biochemical network that can function as an associative memory. We show that the steady state solution for this chemical network has the same structure as an associative memory neural network model. By allowing the choice of equilibrium constants between different ncRNA species, the concentration of unbound ncRNA can be made to follow any pattern and many patterns can be stored simultaneously. The model is studied numerically and within certain parameter regimes it functions as predicted. Even if the starting concentration pattern is quite different, it is shown to converge to the original pattern most of the time. The network is also robust to mutations in equilibrium constants. This calls into question the criteria for deciding if a sequence is under evolutionary constraint. |
1401.2238 | Gennadi Glinsky | Gennadi V. Glinsky | Integrative genomics analysis identifies pericentromeric regions of
human chromosomes affecting patterns of inter-chromosomal interactions | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genome-wide analysis of distributions of densities of long-range interactions
of human chromosomes with each other, nucleoli, nuclear lamina, and binding
sites of chromatin state regulatory proteins, CTCF and STAT1, identifies
non-random highly correlated patterns of density distributions along the
chromosome length for all these features. Marked co-enrichments and clustering
of all these interactions are detected at discrete genomic regions on selected
chromosomes, which are located within pericentromeric heterochromatin and
designated Centromeric Regions of Interphase Chromatin Homing (CENTRICH).
CENTRICH manifest 199-716-fold higher density of inter-chromosomal binding
sites compared to genome-wide or chromosomal averages (p =
2.10E-101-1.08E-292). Sequence alignment analysis shows that CENTRICH represent
unique DNA sequences of 3.9 to 22.4 Kb in size which are: 1) associated with
nucleolus; 2) exhibit highly diverse set of DNA-bound chromatin state
regulators, including marked enrichment of CTCF and STAT1 binding sites; 3)
bind multiple intergenic disease-associated genomic loci (IDAGL) with
documented long-range enhancer activities and established links to increased
risk of developing epithelial malignancies and other common human disorders.
Using distances of SNP loci homing sites within genomic coordinates of CENTRICH
as a proxy of likelihood of disease-linked SNP loci binding to CENTRICH, we
demonstrate statistically significant correlations between the probability of
SNP loci binding to CENTRICH and GWAS-defined odds ratios of increased risk of
a disease for cancer, coronary artery disease, and type 2 diabetes. Our
analysis suggests that centromeric sequences and pericentromeric
heterochromatin may play an important role in human cells beyond the critical
functions in chromosome segregation.
| [
{
"created": "Fri, 10 Jan 2014 06:39:08 GMT",
"version": "v1"
}
] | 2014-01-13 | [
[
"Glinsky",
"Gennadi V.",
""
]
] | Genome-wide analysis of distributions of densities of long-range interactions of human chromosomes with each other, nucleoli, nuclear lamina, and binding sites of chromatin state regulatory proteins, CTCF and STAT1, identifies non-random highly correlated patterns of density distributions along the chromosome length for all these features. Marked co-enrichments and clustering of all these interactions are detected at discrete genomic regions on selected chromosomes, which are located within pericentromeric heterochromatin and designated Centromeric Regions of Interphase Chromatin Homing (CENTRICH). CENTRICH manifest 199-716-fold higher density of inter-chromosomal binding sites compared to genome-wide or chromosomal averages (p = 2.10E-101-1.08E-292). Sequence alignment analysis shows that CENTRICH represent unique DNA sequences of 3.9 to 22.4 Kb in size which are: 1) associated with nucleolus; 2) exhibit highly diverse set of DNA-bound chromatin state regulators, including marked enrichment of CTCF and STAT1 binding sites; 3) bind multiple intergenic disease-associated genomic loci (IDAGL) with documented long-range enhancer activities and established links to increased risk of developing epithelial malignancies and other common human disorders. Using distances of SNP loci homing sites within genomic coordinates of CENTRICH as a proxy of likelihood of disease-linked SNP loci binding to CENTRICH, we demonstrate statistically significant correlations between the probability of SNP loci binding to CENTRICH and GWAS-defined odds ratios of increased risk of a disease for cancer, coronary artery disease, and type 2 diabetes. Our analysis suggests that centromeric sequences and pericentromeric heterochromatin may play an important role in human cells beyond the critical functions in chromosome segregation. |
2009.09150 | Christopher Rose | Christopher Rose and Andrew J. Medford and C. Franklin Goldsmith and
Tejs Vegge and Joshua Weitz and Andrew A. Peterson | Population Susceptibility Variation and Its Effect on Contagion Dynamics | 12 pages, 2 figures | null | null | null | q-bio.PE cs.SI cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Susceptibility governs the dynamics of contagion. The classical SIR model is
one of the simplest compartmental models of contagion spread, assuming a single
shared susceptibility level. However, variation in susceptibility over a
population can fundamentally affect the dynamics of contagion and thus the
ultimate outcome of a pandemic. We develop mathematical machinery which
explicitly considers susceptibility variation, illuminates how the
susceptibility distribution is sculpted by contagion, and thence how such
variation affects the SIR differential equations that govern contagion. Our
methods allow us to derive closed form expressions for herd immunity thresholds
as a function of initial susceptibility distributions and suggest an
intuitively satisfying approach to inoculation when only a fraction of the
population is accessible to such intervention. Of particular interest, if we
assume static susceptibility of individuals in the susceptible pool, ignoring
susceptibility diversity {\em always} results in overestimation of the herd
immunity threshold and that difference can be dramatic. Therefore, we should
develop robust measures of susceptibility variation as part of public health
strategies for handling pandemics.
| [
{
"created": "Sat, 19 Sep 2020 03:15:34 GMT",
"version": "v1"
}
] | 2020-09-22 | [
[
"Rose",
"Christopher",
""
],
[
"Medford",
"Andrew J.",
""
],
[
"Goldsmith",
"C. Franklin",
""
],
[
"Vegge",
"Tejs",
""
],
[
"Weitz",
"Joshua",
""
],
[
"Peterson",
"Andrew A.",
""
]
] | Susceptibility governs the dynamics of contagion. The classical SIR model is one of the simplest compartmental models of contagion spread, assuming a single shared susceptibility level. However, variation in susceptibility over a population can fundamentally affect the dynamics of contagion and thus the ultimate outcome of a pandemic. We develop mathematical machinery which explicitly considers susceptibility variation, illuminates how the susceptibility distribution is sculpted by contagion, and thence how such variation affects the SIR differential equations that govern contagion. Our methods allow us to derive closed form expressions for herd immunity thresholds as a function of initial susceptibility distributions and suggest an intuitively satisfying approach to inoculation when only a fraction of the population is accessible to such intervention. Of particular interest, if we assume static susceptibility of individuals in the susceptible pool, ignoring susceptibility diversity {\em always} results in overestimation of the herd immunity threshold and that difference can be dramatic. Therefore, we should develop robust measures of susceptibility variation as part of public health strategies for handling pandemics. |
1601.02160 | Thierry Mora | Rhys M. Adams, Thierry Mora, Aleksandra M. Walczak, Justin B. Kinney | Measuring the sequence-affinity landscape of antibodies with massively
parallel titration curves | null | eLife 2016;5:e23156 (2016) | 10.7554/eLife.23156 | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the central role that antibodies play in the adaptive immune system
and in biotechnology, much remains unknown about the quantitative relationship
between an antibody's amino acid sequence and its antigen binding affinity.
Here we describe a new experimental approach, called Tite-Seq, that is capable
of measuring binding titration curves and corresponding affinities for
thousands of variant antibodies in parallel. The measurement of titration
curves eliminates the confounding effects of antibody expression and stability
that arise in standard deep mutational scanning assays. We demonstrate Tite-Seq
on the CDR1H and CDR3H regions of a well-studied scFv antibody. Our data shed
light on the structural basis for antigen binding affinity and suggest a role
for secondary CDR loops in establishing antibody stability. Tite-Seq fills a
large gap in the ability to measure critical aspects of the adaptive immune
system, and can be readily used for studying sequence-affinity landscapes in
other protein systems.
| [
{
"created": "Sat, 9 Jan 2016 22:19:17 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Nov 2016 14:52:39 GMT",
"version": "v2"
}
] | 2018-04-16 | [
[
"Adams",
"Rhys M.",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
],
[
"Kinney",
"Justin B.",
""
]
] | Despite the central role that antibodies play in the adaptive immune system and in biotechnology, much remains unknown about the quantitative relationship between an antibody's amino acid sequence and its antigen binding affinity. Here we describe a new experimental approach, called Tite-Seq, that is capable of measuring binding titration curves and corresponding affinities for thousands of variant antibodies in parallel. The measurement of titration curves eliminates the confounding effects of antibody expression and stability that arise in standard deep mutational scanning assays. We demonstrate Tite-Seq on the CDR1H and CDR3H regions of a well-studied scFv antibody. Our data shed light on the structural basis for antigen binding affinity and suggests a role for secondary CDR loops in establishing antibody stability. Tite-Seq fills a large gap in the ability to measure critical aspects of the adaptive immune system, and can be readily used for studying sequence-affinity landscapes in other protein systems. |
1710.02623 | Yuri A. Dabaghian | Andrey Babichev, Dmitriy Morozov and Yuri Dabaghian | Robust spatial memory maps encoded in networks with transient
connections | 24 pages, 10 figures, 4 supplementary figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The spiking activity of principal cells in mammalian hippocampus encodes an
internalized neuronal representation of the ambient space---a cognitive map.
Once learned, such a map enables the animal to navigate a given environment for
a long period. However, the neuronal substrate that produces this map remains
transient: the synaptic connections in the hippocampus and in the downstream
neuronal networks never cease to form and to deteriorate at a rapid rate. How
can the brain maintain a robust, reliable representation of space using a
network that constantly changes its architecture? Here, we demonstrate, using
novel Algebraic Topology techniques, that the cognitive map's stability is a
generic, emergent phenomenon. The model allows evaluating the effect produced
by specific physiological parameters, e.g., the distribution of connections'
decay times, on the properties of the cognitive map as a whole. It also points
out that spatial memory deterioration caused by weakening or excessive loss of
the synaptic connections may be compensated by simulating the neuronal
activity. Lastly, the model explicates functional importance of the
complementary learning systems for processing spatial information at different
levels of spatiotemporal granularity, by establishing three complementary
timescales at which spatial information unfolds. Thus, the model provides a
principal insight into how the brain can develop a reliable representation of
the world, learn and retain memories despite complex plasticity of the
underlying networks and allows studying how instabilities and memory
deterioration mechanisms may affect learning process.
| [
{
"created": "Sat, 7 Oct 2017 02:50:37 GMT",
"version": "v1"
}
] | 2017-10-10 | [
[
"Babichev",
"Andrey",
""
],
[
"Morozov",
"Dmitriy",
""
],
[
"Dabaghian",
"Yuri",
""
]
] | The spiking activity of principal cells in mammalian hippocampus encodes an internalized neuronal representation of the ambient space---a cognitive map. Once learned, such a map enables the animal to navigate a given environment for a long period. However, the neuronal substrate that produces this map remains transient: the synaptic connections in the hippocampus and in the downstream neuronal networks never cease to form and to deteriorate at a rapid rate. How can the brain maintain a robust, reliable representation of space using a network that constantly changes its architecture? Here, we demonstrate, using novel Algebraic Topology techniques, that cognitive map's stability is a generic, emergent phenomenon. The model allows evaluating the effect produced by specific physiological parameters, e.g., the distribution of connections' decay times, on the properties of the cognitive map as a whole. It also points out that spatial memory deterioration caused by weakening or excessive loss of the synaptic connections may be compensated by simulating the neuronal activity. Lastly, the model explicates functional importance of the complementary learning systems for processing spatial information at different levels of spatiotemporal granularity, by establishing three complementary timescales at which spatial information unfolds. Thus, the model provides a principal insight into how can the brain develop a reliable representation of the world, learn and retain memories despite complex plasticity of the underlying networks and allows studying how instabilities and memory deterioration mechanisms may affect learning process. |
1605.01381 | Sergey Agapov | S.N. Agapov, V.A. Bulanov, A.V. Zakharov, M.S. Sergeeva | Review of analytical instruments for EEG analysis | Review and rework | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Since it was first used in 1926, EEG has been one of the most useful
instruments of neuroscience. In order to start using EEG data we need not only
EEG apparatus, but also some analytical tools and skills to understand what our
data mean. This article describes several classical analytical tools and also
a new one, which appeared only several years ago. We hope it will be useful for
those researchers who have only started working in the field of cognitive EEG.
| [
{
"created": "Fri, 4 Mar 2016 12:02:40 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jun 2022 05:34:19 GMT",
"version": "v2"
}
] | 2022-06-22 | [
[
"Agapov",
"S. N.",
""
],
[
"Bulanov",
"V. A.",
""
],
[
"Zakharov",
"A. V.",
""
],
[
"Sergeeva",
"M. S.",
""
]
] | Since it was first used in 1926, EEG has been one of the most useful instruments of neuroscience. In order to start using EEG data we need not only EEG apparatus, but also some analytical tools and skills to understand what our data mean. This article describes several classical analytical tools and also a new one, which appeared only several years ago. We hope it will be useful for those researchers who have only started working in the field of cognitive EEG. |
0905.3728 | Matthew Parker | Matthew Parker and Alex Kamenev | Extinction in Lotka-Volterra model | 11 pages, 17 figures | null | 10.1103/PhysRevE.80.021129 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Competitive birth-death processes often exhibit an oscillatory behavior. We
investigate a particular case where the oscillation cycles are marginally
stable on the mean-field level. An iconic example of such a system is the
Lotka-Volterra model of predator-prey competition. Fluctuation effects due to
discreteness of the populations destroy the mean-field stability and eventually
drive the system toward extinction of one or both species. We show that the
corresponding extinction time scales as a certain power-law of the population
sizes. This behavior should be contrasted with the extinction of models stable
in the mean-field approximation. In the latter case the extinction time scales
exponentially with size.
| [
{
"created": "Fri, 22 May 2009 17:21:06 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Parker",
"Matthew",
""
],
[
"Kamenev",
"Alex",
""
]
] | Competitive birth-death processes often exhibit an oscillatory behavior. We investigate a particular case where the oscillation cycles are marginally stable on the mean-field level. An iconic example of such a system is the Lotka-Volterra model of predator-prey competition. Fluctuation effects due to discreteness of the populations destroy the mean-field stability and eventually drive the system toward extinction of one or both species. We show that the corresponding extinction time scales as a certain power-law of the population sizes. This behavior should be contrasted with the extinction of models stable in the mean-field approximation. In the latter case the extinction time scales exponentially with size. |
2109.01039 | Paolo Muratore | Cristiano Capone, Paolo Muratore and Pier Stanislao Paolucci | Error-based or target-based? A unifying framework for learning in
recurrent spiking networks | Main text: 14 pages, 5 figures Suppl. Mat.: 12 pages, 3 figures | null | 10.1371/journal.pcbi.1010221 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning in biological or artificial networks means changing the laws
governing the network dynamics in order to better behave in a specific
situation. In the field of supervised learning, two complementary approaches
stand out: error-based and target-based learning. However, there exists no
consensus on which is better suited for which task, and what is the most
biologically plausible. Here we propose a comprehensive theoretical framework
that includes these two frameworks as special cases. This novel theoretical
formulation offers major insights into the differences between the two
approaches. In particular, we show how target-based naturally emerges from
error-based when the number of constraints on the target dynamics, and as a
consequence on the internal network dynamics, is comparable to the degrees of
freedom of the network. Moreover, given the experimental evidence on the
relevance that spikes have in biological networks, we investigate the role of
coding with specific patterns of spikes by introducing a parameter that defines
the tolerance to precise spike timing during learning. Our approach naturally
lends itself to Imitation Learning (and Behavioral Cloning in particular) and
we apply it to solve relevant closed-loop tasks such as the button-and-food
task, and the 2D Bipedal Walker. We show that a high dimensionality feedback
structure is extremely important when it is necessary to solve a task that
requires retaining memory for a long time (button-and-food). On the other hand,
we find that coding with specific patterns of spikes enables optimal
performances in a motor task (the 2D Bipedal Walker). Finally, we show that our
theoretical formulation suggests protocols to deduce the structure of learning
feedback in biological networks.
| [
{
"created": "Thu, 2 Sep 2021 15:57:23 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Sep 2021 17:17:42 GMT",
"version": "v2"
}
] | 2022-10-12 | [
[
"Capone",
"Cristiano",
""
],
[
"Muratore",
"Paolo",
""
],
[
"Paolucci",
"Pier Stanislao",
""
]
] | Learning in biological or artificial networks means changing the laws governing the network dynamics in order to better behave in a specific situation. In the field of supervised learning, two complementary approaches stand out: error-based and target-based learning. However, there exists no consensus on which is better suited for which task, and what is the most biologically plausible. Here we propose a comprehensive theoretical framework that includes these two frameworks as special cases. This novel theoretical formulation offers major insights into the differences between the two approaches. In particular, we show how target-based naturally emerges from error-based when the number of constraints on the target dynamics, and as a consequence on the internal network dynamics, is comparable to the degrees of freedom of the network. Moreover, given the experimental evidences on the relevance that spikes have in biological networks, we investigate the role of coding with specific patterns of spikes by introducing a parameter that defines the tolerance to precise spike timing during learning. Our approach naturally lends itself to Imitation Learning (and Behavioral Cloning in particular) and we apply it to solve relevant closed-loop tasks such as the button-and-food task, and the 2D Bipedal Walker. We show that a high dimensionality feedback structure is extremely important when it is necessary to solve a task that requires retaining memory for a long time (button-and-food). On the other hand, we find that coding with specific patterns of spikes enables optimal performances in a motor task (the 2D Bipedal Walker). Finally, we show that our theoretical formulation suggests protocols to deduce the structure of learning feedback in biological networks. |
1601.06047 | Dan Graur Dan Graur | Dan Graur | Rubbish DNA: The functionless fraction of the human genome | 87 pages, 1 Figure, 1 Table | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Because genomes are products of natural processes rather than intelligent
design, all genomes contain functional and nonfunctional parts. The fraction of
the genome that has no biological function is called rubbish DNA. Rubbish DNA
consists of junk DNA, i.e., the fraction of the genome on which selection does
not operate, and garbage DNA, i.e., sequences that lower the fitness of the
organism, but exist in the genome because purifying selection is neither
omnipotent nor instantaneous. In this chapter, I (1) review the concepts of
genomic function and functionlessness from an evolutionary perspective, (2)
present a precise nomenclature of genomic function, (3) discuss the evidence
for the existence of vast quantities of junk DNA within the human genome, (4)
discuss the mutational mechanisms responsible for generating junk DNA, (5)
spell out the necessary evolutionary conditions for maintaining junk DNA, (6)
outline various methodologies for estimating the functional fraction within the
genome, and (7) present a recent estimate for the functional fraction of our
genome.
| [
{
"created": "Fri, 22 Jan 2016 15:41:37 GMT",
"version": "v1"
}
] | 2016-01-25 | [
[
"Graur",
"Dan",
""
]
] | Because genomes are products of natural processes rather than intelligent design, all genomes contain functional and nonfunctional parts. The fraction of the genome that has no biological function is called rubbish DNA. Rubbish DNA consists of junk DNA, i.e., the fraction of the genome on which selection does not operate, and garbage DNA, i.e., sequences that lower the fitness of the organism, but exist in the genome because purifying selection is neither omnipotent nor instantaneous. In this chapter, I (1) review the concepts of genomic function and functionlessness from an evolutionary perspective, (2) present a precise nomenclature of genomic function, (3) discuss the evidence for the existence of vast quantities of junk DNA within the human genome, (4) discuss the mutational mechanisms responsible for generating junk DNA, (5) spell out the necessary evolutionary conditions for maintaining junk DNA, (6) outline various methodologies for estimating the functional fraction within the genome, and (7) present a recent estimate for the functional fraction of our genome. |
2405.05301 | Gary An | Gary An and Chase Cockrell | A design specification for Critical Illness Digital Twins to cure
sepsis: responding to the National Academies of Sciences, Engineering and
Medicine Report: Foundational Research Gaps and Future Directions for Digital
Twins | 31 pages, 13 Figures, 1 Table | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | On December 15, 2023, The National Academies of Sciences, Engineering and
Medicine (NASEM) released a report entitled: Foundational Research Gaps and
Future Directions for Digital Twins. The ostensible purpose of this report was
to bring some structure to the burgeoning field of digital twins by providing a
working definition and a series of research challenges that need to be
addressed to allow this technology to fulfill its full potential. In the work
presented herein we focus on five specific findings from the NASEM Report: 1)
definition of a Digital Twin, 2) using fit-for-purpose guidance, 3) developing
novel approaches to Verification, Validation and Uncertainty Quantification
(VVUQ) of Digital Twins, 4) incorporating control as an explicit purpose for a
Digital Twin and 5) using a Digital Twin to guide data collection and sensor
development, and describe how these findings are addressed through the design
specifications for a Critical Illness Digital Twin (CIDT) aimed at curing
sepsis.
| [
{
"created": "Wed, 8 May 2024 17:17:58 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jun 2024 22:26:50 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"An",
"Gary",
""
],
[
"Cockrell",
"Chase",
""
]
] | On December 15, 2023, The National Academies of Sciences, Engineering and Medicine (NASEM) released a report entitled: Foundational Research Gaps and Future Directions for Digital Twins. The ostensible purpose of this report was to bring some structure to the burgeoning field of digital twins by providing a working definition and a series of research challenges that need to be addressed to allow this technology to fulfill its full potential. In the work presented herein we focus on five specific findings from the NASEM Report: 1) definition of a Digital Twin, 2) using fit-for-purpose guidance, 3) developing novel approaches to Verification, Validation and Uncertainty Quantification (VVUQ) of Digital Twins, 4) incorporating control as an explicit purpose for a Digital Twin and 5) using a Digital Twin to guide data collection and sensor development, and describe how these findings are addressed through the design specifications for a Critical Illness Digital Twin (CIDT) aimed at curing sepsis. |
2209.08792 | Robin Broersen | R. Broersen and G. J. Stuart | In vivo whole-cell recording from morphologically identified mouse
superior colliculus neurons | 29 pages including 4 figures | STAR Protocols 4 (2023) 101963 | 10.1016/j.xpro.2022.101963 | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In vivo whole-cell recording when combined with morphological
characterization after biocytin labeling is a powerful technique to study
subthreshold synaptic processing in cell-type-identified neuronal populations.
Here, we provide a step-by-step procedure for performing whole-cell recordings
in the superior colliculus of urethane-anesthetized mice, a major visual
processing region in the rodent brain. Two types of visual stimulation methods
are described. While we focus on superior colliculus neurons, this protocol is
applicable to other brain areas.
| [
{
"created": "Mon, 19 Sep 2022 06:52:52 GMT",
"version": "v1"
}
] | 2022-12-26 | [
[
"Broersen",
"R.",
""
],
[
"Stuart",
"G. J.",
""
]
] | In vivo whole-cell recording when combined with morphological characterization after biocytin labeling is a powerful technique to study subthreshold synaptic processing in cell-type-identified neuronal populations. Here, we provide a step-by-step procedure for performing whole-cell recordings in the superior colliculus of urethane-anesthetized mice, a major visual processing region in the rodent brain. Two types of visual stimulation methods are described. While we focus on superior colliculus neurons, this protocol is applicable to other brain areas. |
2405.08827 | Richard Povinelli | Richard J Povinelli, Mathew Dupont | A Dynamical Systems Approach to Predicting Patient Outcome after Cardiac
Arrest | Computing in Cardiology, 2023 | null | 10.22489/CinC.2023.442 | null | q-bio.QM math.DS | http://creativecommons.org/licenses/by-sa/4.0/ | Aim: Approximately six million people suffer cardiac arrests worldwide per
year with very low survival rates (<1%). Thus, the aim of this study is to
estimate the probability of a poor outcome after cardiac arrest. Accurate
outcome predictions avoid removing care too soon for patients with potentially
good outcomes or continuing care for patients with likely poor outcomes.
Method: The method is based on dynamical systems embedding theorems that show
that a reconstructed phase space (RPS) topologically equivalent to an
underlying system can be constructed from measured signals. Here the underlying
system is the human brain after a cardiac arrest, and the signals are the EEG
channels. We model the RPS with a Gaussian mixture model (GMM) and ensemble the
output of the RPS-GMM with clinical data via XGBoost. Results: As team Blue and
Gold in the Predicting Neurological Recovery from Coma After Cardiac Arrest:
The George B. Moody PhysioNet Challenge 2023, our RPS-GMM-XGBoost method
obtained a test set competition score of 0.426 and rank of 24/36.
| [
{
"created": "Mon, 13 May 2024 15:03:34 GMT",
"version": "v1"
}
] | 2024-05-16 | [
[
"Povinelli",
"Richard J",
""
],
[
"Dupont",
"Mathew",
""
]
] | Aim: Approximately six million people suffer cardiac arrests worldwide per year with very low survival rates (<1%). Thus, the aim of this study is to estimate the probability of a poor outcome after cardiac arrest. Accurate outcome predictions avoid removing care too soon for patients with potentially good outcomes or continuing care for patients with likely poor outcomes. Method: The method is based on dynamical systems embedding theorems that show that a reconstructed phase space (RPS) topologically equivalent to an underlying system can be constructed from measured signals. Here the underlying system is the human brain after a cardiac arrest, and the signals are the EEG channels. We model the RPS with a Gaussian mixture model (GMM) and ensemble the output of the RPS-GMM with clinical data via XGBoost. Results: As team Blue and Gold in the Predicting Neurological Recovery from Coma After Cardiac Arrest: The George B. Moody PhysioNet Challenge 2023, our RPS-GMM-XGBoost method obtained a test set competition score of 0.426 and rank of 24/36. |
2106.06638 | Ines Thiele | Almut Heinken, Stefan\'ia Magn\'usd\'ottir, Ronan M.T. Fleming, and
Ines Thiele | DEMETER: Efficient simultaneous curation of genome-scale reconstructions
guided by experimental data and refined gene annotations | 6 pages, 1 Figure | null | null | null | q-bio.GN q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Motivation: Manual curation of genome-scale reconstructions is laborious, yet
existing automated curation tools typically do not take species-specific
experimental data and manually refined genome annotations into account.
Results: We developed DEMETER, a COBRA Toolbox extension that enables the
efficient simultaneous refinement of thousands of draft genome-scale
reconstructions while ensuring adherence to the quality standards in the field,
agreement with available experimental data, and refinement of pathways based on
manually refined genome annotations. Availability: DEMETER and tutorials are
available at https://github.com/opencobra/cobratoolbox.
| [
{
"created": "Fri, 11 Jun 2021 23:28:21 GMT",
"version": "v1"
}
] | 2021-06-15 | [
[
"Heinken",
"Almut",
""
],
[
"Magnúsdóttir",
"Stefanía",
""
],
[
"Fleming",
"Ronan M. T.",
""
],
[
"Thiele",
"Ines",
""
]
] | Motivation: Manual curation of genome-scale reconstructions is laborious, yet existing automated curation tools typically do not take species-specific experimental data and manually refined genome annotations into account. Results: We developed DEMETER, a COBRA Toolbox extension that enables the efficient simultaneous refinement of thousands of draft genome-scale reconstructions while ensuring adherence to the quality standards in the field, agreement with available experimental data, and refinement of pathways based on manually refined genome annotations. Availability: DEMETER and tutorials are available at https://github.com/opencobra/cobratoolbox. |
1501.05973 | Ashish Kapoor | Ashish Kapoor, E. Paxon Frady, Stefanie Jegelka, William B. Kristan
and Eric Horvitz | Inferring and Learning from Neuronal Correspondences | null | null | null | null | q-bio.NC cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and study methods for inferring and learning from
correspondences among neurons. The approach enables alignment of data from
distinct multiunit studies of nervous systems. We show that the methods for
inferring correspondences combine data effectively from cross-animal studies to
make joint inferences about behavioral decision making that are not possible
with the data from a single animal. We focus on data collection, machine
learning, and prediction in the representative and long-studied invertebrate
nervous system of the European medicinal leech. Acknowledging the computational
intractability of the general problem of identifying correspondences among
neurons, we introduce efficient computational procedures for matching neurons
across animals. The methods include techniques that adjust for missing cells or
additional cells in the different data sets that may reflect biological or
experimental variation. The methods highlight the value of harnessing inference
and learning in new kinds of computational microscopes for multiunit
neurobiological studies.
| [
{
"created": "Fri, 23 Jan 2015 22:29:13 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jan 2015 08:23:24 GMT",
"version": "v2"
}
] | 2015-01-28 | [
[
"Kapoor",
"Ashish",
""
],
[
"Frady",
"E. Paxon",
""
],
[
"Jegelka",
"Stefanie",
""
],
[
"Kristan",
"William B.",
""
],
[
"Horvitz",
"Eric",
""
]
] | We introduce and study methods for inferring and learning from correspondences among neurons. The approach enables alignment of data from distinct multiunit studies of nervous systems. We show that the methods for inferring correspondences combine data effectively from cross-animal studies to make joint inferences about behavioral decision making that are not possible with the data from a single animal. We focus on data collection, machine learning, and prediction in the representative and long-studied invertebrate nervous system of the European medicinal leech. Acknowledging the computational intractability of the general problem of identifying correspondences among neurons, we introduce efficient computational procedures for matching neurons across animals. The methods include techniques that adjust for missing cells or additional cells in the different data sets that may reflect biological or experimental variation. The methods highlight the value of harnessing inference and learning in new kinds of computational microscopes for multiunit neurobiological studies. |
1906.06908 | Christian Bongiorno | Christian Bongiorno, Salvatore Miccich\`e, Rosario N. Mantegna | Nested partitions from hierarchical clustering statistical validation | null | null | 10.1016/j.physa.2022.126933 | null | q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a greedy algorithm that is fast and scalable in the detection of a
nested partition extracted from a dendrogram obtained from hierarchical
clustering of a multivariate series. Our algorithm provides a $p$-value for
each clade observed in the hierarchical tree. The $p$-value is obtained by
computing a number of bootstrap replicas of the dissimilarity matrix and by
performing a statistical test on each difference between the dissimilarity
associated with a given clade and the dissimilarity of the clade of its parent
node. We prove the efficacy of our algorithm with a set of benchmarks generated
by using a hierarchical factor model. We compare the results obtained by our
algorithm with those of Pvclust. Pvclust is a widely used algorithm developed
with a global approach originally motivated by phylogenetic studies. In our
numerical experiments we focus on the role of multiple hypothesis test
correction and on the robustness of the algorithms to inaccuracy and errors of
datasets. We also apply our algorithm to a reference empirical dataset. We
verify that our algorithm is much faster than the Pvclust algorithm and has
better scalability both in the number of elements and in the number of records
of the investigated multivariate set. Our algorithm provides a hierarchically
nested partition in much shorter time than currently widely used algorithms,
allowing one to perform a statistically validated cluster analysis detection in
very large systems.
| [
{
"created": "Mon, 17 Jun 2019 09:08:30 GMT",
"version": "v1"
}
] | 2022-01-21 | [
[
"Bongiorno",
"Christian",
""
],
[
"Miccichè",
"Salvatore",
""
],
[
"Mantegna",
"Rosario N.",
""
]
] | We develop a greedy algorithm that is fast and scalable in the detection of a nested partition extracted from a dendrogram obtained from hierarchical clustering of a multivariate series. Our algorithm provides a $p$-value for each clade observed in the hierarchical tree. The $p$-value is obtained by computing a number of bootstrap replicas of the dissimilarity matrix and by performing a statistical test on each difference between the dissimilarity associated with a given clade and the dissimilarity of the clade of its parent node. We prove the efficacy of our algorithm with a set of benchmarks generated by using a hierarchical factor model. We compare the results obtained by our algorithm with those of Pvclust. Pvclust is a widely used algorithm developed with a global approach originally motivated by phylogenetic studies. In our numerical experiments we focus on the role of multiple hypothesis test correction and on the robustness of the algorithms to inaccuracy and errors of datasets. We also apply our algorithm to a reference empirical dataset. We verify that our algorithm is much faster than the Pvclust algorithm and has better scalability both in the number of elements and in the number of records of the investigated multivariate set. Our algorithm provides a hierarchically nested partition in much shorter time than currently widely used algorithms, allowing one to perform a statistically validated cluster analysis detection in very large systems. |
2103.08579 | Mar\'ia Vallet-Regi | Anna Aguilar-Colomer, Montserrat Colilla, Isabel Izquierdo-Barba,
Carla Jimenez-Jimenez, Ignacio Mahillo, Jaime Esteban and Maria Vallet-Regi | Impact of the antibiotic-cargo from MSNs on Gram-positive and
Gram-negative bacterial biofilms | 44 pages, 14 figures | Microporous and Mesoporous Materials 311 (2021), 110681 | 10.1016/j.micromeso.2020.110681 | null | q-bio.QM physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Mesoporous silica nanoparticles (MSNs) are promising drug nanocarriers for
infection treatment. Many investigations have focused on evaluating the
capacity of MSNs to encapsulate antibiotics and release them in a controlled
fashion. However, little attention has been paid to determining the antibiotic
doses released from these nanosystems that are effective against biofilm during
the entire release time. Herein, we report a systematic and quantitative study
of the direct effect of the antibiotic-cargo released from MSNs on
Gram-positive and Gram-negative bacterial biofilms. Levofloxacin (LVX),
gentamicin (GM) and rifampin (RIF) were separately loaded into pure-silica and
amino-modified MSNs. This accounts for the versatility of these nanosystems
since they were able to load and release different antibiotic molecules of
diverse chemical nature. Biological activity curves of the released antibiotic
were determined for both bacterial strains, which allowed us to calculate the
active doses that are effective against bacterial biofilms. Furthermore, in
vitro biocompatibility assays on osteoblast-like cells were carried out at
different periods of time. Although a slight decrease in cell viability was
observed at the very initial stage, due to the initial burst antibiotic
release, the biocompatibility of these nanosystems is evidenced since a
recovery of cell viability was achieved after 72 h of assay. Biological
activity curves for GM released from MSNs exhibited sustained patterns and
antibiotic doses in the 2-6 {\mu}g/mL range up to 100 h, which were not enough
to eradicate biofilm. In the case of LVX and RIF first-order kinetics featuring
an initial burst effect followed by a sustained release above the MIC up to 96
h were observed. Such doses reduced by 99.9% bacterial biofilm and remained
active up to 72 h with no emergence of bacterial resistance.
| [
{
"created": "Mon, 15 Mar 2021 17:47:24 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Aguilar-Colomer",
"Anna",
""
],
[
"Colilla",
"Montserrat",
""
],
[
"Izquierdo-Barba",
"Isabel",
""
],
[
"Jimenez-Jimenez",
"Carla",
""
],
[
"Mahillo",
"Ignacio",
""
],
[
"Esteban",
"Jaime",
""
],
[
"Vallet-Regi",
"Maria",
""
]
] | Mesoporous silica nanoparticles (MSNs) are promising drug nanocarriers for infection treatment. Many investigations have focused on evaluating the capacity of MSNs to encapsulate antibiotics and release them in a controlled fashion. However, little attention has been paid to determining the antibiotic doses released from these nanosystems that are effective against biofilm during the entire release time. Herein, we report a systematic and quantitative study of the direct effect of the antibiotic-cargo released from MSNs on Gram-positive and Gram-negative bacterial biofilms. Levofloxacin (LVX), gentamicin (GM) and rifampin (RIF) were separately loaded into pure-silica and amino-modified MSNs. This accounts for the versatility of these nanosystems since they were able to load and release different antibiotic molecules of diverse chemical nature. Biological activity curves of the released antibiotic were determined for both bacterial strains, which allowed us to calculate the active doses that are effective against bacterial biofilms. Furthermore, in vitro biocompatibility assays on osteoblast-like cells were carried out at different periods of time. Although a slight decrease in cell viability was observed at the very initial stage, due to the initial burst antibiotic release, the biocompatibility of these nanosystems is evidenced since a recovery of cell viability was achieved after 72 h of assay. Biological activity curves for GM released from MSNs exhibited sustained patterns and antibiotic doses in the 2-6 {\mu}g/mL range up to 100 h, which were not enough to eradicate biofilm. In the case of LVX and RIF, first-order kinetics featuring an initial burst effect followed by a sustained release above the MIC up to 96 h were observed. Such doses reduced bacterial biofilm by 99.9% and remained active up to 72 h with no emergence of bacterial resistance. |