id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1105.4335 | Jordi Garcia-Ojalvo | Jordi Garcia-Ojalvo | Physical approaches to the dynamics of genetic circuits: A tutorial | 36 pages, 8 figures, 153 references, to be published in Contemporary
Physics | Contemporary Physics Vol. 52, No. 5, 439-464 (2011) | 10.1080/00107514.2011.588432 | null | q-bio.MN nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cellular behavior is governed by gene regulatory processes that are
intrinsically dynamic and nonlinear, and are subject to non-negligible amounts
of random fluctuations. Such conditions are ubiquitous in physical systems,
where they have been studied for decades using the tools of statistical and
nonlinear physics. The goal of this review is to show how approaches
traditionally used in physics can help in reaching a systems-level
understanding of living cells. To that end, we present an overview of the
dynamical phenomena exhibited by genetic circuits and their functional
significance. We also describe the theoretical and experimental approaches that
are being used to unravel the relationship between circuit structure and
function in dynamical cellular processes under the influence of noise, both at
the single-cell level and in cellular populations, where intercellular coupling
plays an important role.
| [
{
"created": "Sun, 22 May 2011 13:13:18 GMT",
"version": "v1"
}
] | 2011-09-21 | [
[
"Garcia-Ojalvo",
"Jordi",
""
]
] | Cellular behavior is governed by gene regulatory processes that are intrinsically dynamic and nonlinear, and are subject to non-negligible amounts of random fluctuations. Such conditions are ubiquitous in physical systems, where they have been studied for decades using the tools of statistical and nonlinear physics. The goal of this review is to show how approaches traditionally used in physics can help in reaching a systems-level understanding of living cells. To that end, we present an overview of the dynamical phenomena exhibited by genetic circuits and their functional significance. We also describe the theoretical and experimental approaches that are being used to unravel the relationship between circuit structure and function in dynamical cellular processes under the influence of noise, both at the single-cell level and in cellular populations, where intercellular coupling plays an important role. |
1302.2041 | Sim-Hui Tee | Sim-Hui Tee | Pattern Analysis of Tandem Repeats in Nlrp1 | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pattern analysis of tandem repeats in genes is an indispensable computational
approach to the understanding of gene expression and the pathogenesis of
diseases. This research applied a computational motif model and database
techniques to study the distribution of tandem repeats in the Nlrp1 gene, which
is critical for detecting invading pathogens in immunologic mechanisms. The
frequency of tandem repeats in the Nlrp1 gene was studied for mono-, di-, tri-,
and tetranucleotides. Mutations of the Nlrp1 gene were analyzed to identify the
insertion, deletion, and substitution of nucleotides. The results of this
research provide a basis for future work in computational drug design and
biomedical engineering in tackling diseases associated with the immune system.
| [
{
"created": "Fri, 8 Feb 2013 14:17:50 GMT",
"version": "v1"
}
] | 2013-02-11 | [
[
"Tee",
"Sim-Hui",
""
]
] | Pattern analysis of tandem repeats in genes is an indispensable computational approach to the understanding of gene expression and the pathogenesis of diseases. This research applied a computational motif model and database techniques to study the distribution of tandem repeats in the Nlrp1 gene, which is critical for detecting invading pathogens in immunologic mechanisms. The frequency of tandem repeats in the Nlrp1 gene was studied for mono-, di-, tri-, and tetranucleotides. Mutations of the Nlrp1 gene were analyzed to identify the insertion, deletion, and substitution of nucleotides. The results of this research provide a basis for future work in computational drug design and biomedical engineering in tackling diseases associated with the immune system. |
q-bio/0612015 | Jerome Vanclay | Jerome K Vanclay | Effects of Selection Logging on Rainforest Productivity | 20 pages, 9 tables, 3 figures | Australian Forestry 53:200-214 (1990) | null | null | q-bio.QM | null | An analysis of data from 212 permanent sample plots provided no evidence of
any decline in rainforest productivity after three cycles of selection logging
in the tropical rainforests of north Queensland. Relative productivity was
determined as the difference between observed diameter increments and
increments predicted from a diameter increment function which incorporated tree
size, stand density and site quality. Analyses of variance and regression
analyses revealed no significant decline in productivity after repeated
harvesting. There is evidence to support the assertion that if any permanent
productivity decline exists, it does not exceed six per cent per harvest.
| [
{
"created": "Fri, 8 Dec 2006 10:47:18 GMT",
"version": "v1"
}
] | 2007-12-12 | [
[
"Vanclay",
"Jerome K",
""
]
] | An analysis of data from 212 permanent sample plots provided no evidence of any decline in rainforest productivity after three cycles of selection logging in the tropical rainforests of north Queensland. Relative productivity was determined as the difference between observed diameter increments and increments predicted from a diameter increment function which incorporated tree size, stand density and site quality. Analyses of variance and regression analyses revealed no significant decline in productivity after repeated harvesting. There is evidence to support the assertion that if any permanent productivity decline exists, it does not exceed six per cent per harvest. |
2406.01718 | Alexandru Hening | Alexandru Hening, Dang H. Nguyen, Tran Ta and Sergiu C. Ungureanu | Long-term behavior of stochastic SIQRS epidemic models | 22 pages, 6 figures | null | null | null | q-bio.PE math.PR | http://creativecommons.org/licenses/by/4.0/ | In this paper we analyze and classify the dynamics of SIQRS epidemiological
models with susceptible, infected, quarantined, and recovered classes, where
the recovered individuals can become reinfected. We are able to treat general
incidence functional responses. Our models are more realistic than what has
been studied in the literature since they include two important types of random
fluctuations. The first type is due to small fluctuations of the various model
parameters and leads to white noise terms. The second type of noise is due to
significant environmental regime shifts that can happen at random. The
environment switches randomly between a finite number of environmental states,
each with a possibly different disease dynamic. We prove that the long-term
fate of the disease is fully determined by a real-valued threshold $\lambda$.
When $\lambda < 0$ the disease goes extinct asymptotically at an exponential
rate. On the other hand, if $\lambda > 0$ the disease will persist
indefinitely. We end our analysis by looking at some important examples where
$\lambda$ can be computed explicitly, and by showcasing some simulation results
that shed light on real-world situations.
| [
{
"created": "Mon, 3 Jun 2024 18:26:11 GMT",
"version": "v1"
}
] | 2024-06-05 | [
[
"Hening",
"Alexandru",
""
],
[
"Nguyen",
"Dang H.",
""
],
[
"Ta",
"Tran",
""
],
[
"Ungureanu",
"Sergiu C.",
""
]
] | In this paper we analyze and classify the dynamics of SIQRS epidemiological models with susceptible, infected, quarantined, and recovered classes, where the recovered individuals can become reinfected. We are able to treat general incidence functional responses. Our models are more realistic than what has been studied in the literature since they include two important types of random fluctuations. The first type is due to small fluctuations of the various model parameters and leads to white noise terms. The second type of noise is due to significant environmental regime shifts that can happen at random. The environment switches randomly between a finite number of environmental states, each with a possibly different disease dynamic. We prove that the long-term fate of the disease is fully determined by a real-valued threshold $\lambda$. When $\lambda < 0$ the disease goes extinct asymptotically at an exponential rate. On the other hand, if $\lambda > 0$ the disease will persist indefinitely. We end our analysis by looking at some important examples where $\lambda$ can be computed explicitly, and by showcasing some simulation results that shed light on real-world situations. |
1106.3293 | Fernando Antonio | F. J. Antonio, R. S. Mendes, and S. M. Thomaz | Identifying and modeling patterns of tetrapod vertebrate mortality rates
in the Gulf of Mexico oil spill | 4 pages, 1 figure | Aquatic Toxicology Volume 105, Issues 1-2, September 2011, Pages
177-179 | 10.1016/j.aquatox.2011.05.022 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accidental oil spill in the Gulf of Mexico in 2010 has caused perceptible
damage to marine and freshwater ecosystems. The large quantity of oil leaking
at a constant rate and the long duration of the event caused an exponentially
increasing mortality of vertebrates. Using data provided by NOAA and USFWS, we
assessed the effects of this event on birds, sea turtles, and mammals.
Mortality rates (measured as the number of carcasses recorded per day) were
exponential for all three groups. Birds were the most affected group, as
indicated by the steepest increase of mortality rates over time. For sea
turtles and mammals, an exponential increase in mortality was observed after an
initial delay. These exponential behaviors are consistent with a unified
scenario for the mortality rate for tetrapod vertebrates. However, at least for
mammals, pre-spill data seem to indicate that the growth in the mortality rate
is not entirely a consequence of the spill.
| [
{
"created": "Thu, 16 Jun 2011 18:02:00 GMT",
"version": "v1"
}
] | 2011-07-19 | [
[
"Antonio",
"F. J.",
""
],
[
"Mendes",
"R. S.",
""
],
[
"Thomaz",
"S. M.",
""
]
] | The accidental oil spill in the Gulf of Mexico in 2010 has caused perceptible damage to marine and freshwater ecosystems. The large quantity of oil leaking at a constant rate and the long duration of the event caused an exponentially increasing mortality of vertebrates. Using data provided by NOAA and USFWS, we assessed the effects of this event on birds, sea turtles, and mammals. Mortality rates (measured as the number of carcasses recorded per day) were exponential for all three groups. Birds were the most affected group, as indicated by the steepest increase of mortality rates over time. For sea turtles and mammals, an exponential increase in mortality was observed after an initial delay. These exponential behaviors are consistent with a unified scenario for the mortality rate for tetrapod vertebrates. However, at least for mammals, pre-spill data seem to indicate that the growth in the mortality rate is not entirely a consequence of the spill. |
2102.03687 | Hue Sun Chan | Jonas Wess\'en, Tanmoy Pal, Suman Das, Yi-Hsuan Lin, and Hue Sun Chan | A Simple Explicit-Solvent Model of Polyampholyte Phase Behaviors and its
Ramifications for Dielectric Effects in Biomolecular Condensates | 54 pages, 14 figures, 1 table, and 132 references. Accepted for
publication in the Journal of Physical Chemistry B ("Liquid-Liquid Phase
Separation" Special Issue) | J. Phys. Chem. B 125, 4337-4358 (2021) | 10.1021/acs.jpcb.1c00954 | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Biomolecular condensates such as membraneless organelles, underpinned by
liquid-liquid phase separation (LLPS), are important for physiological
function, with electrostatics -- among other interaction types -- being a
prominent force in their assembly. Charge interactions of intrinsically
disordered proteins (IDPs) and other biomolecules are sensitive to the aqueous
dielectric environment. Because the relative permittivity of protein is
significantly lower than that of water, the interior of an IDP condensate is a
relatively low-dielectric regime, which, aside from its possible functional
effects on client molecules, should facilitate stronger electrostatic
interactions among the scaffold IDPs. To gain insight into this LLPS-induced
dielectric heterogeneity, addressing in particular whether a low-dielectric
condensed phase entails more favorable LLPS than that posited by assuming IDP
electrostatic interactions are uniformly modulated by the higher dielectric
constant of the pure solvent, we consider a simplified multiple-chain model of
polyampholytes immersed in explicit solvents that are either polarizable or
possess a permanent dipole. Notably, simulated phase behaviors of these systems
exhibit only minor to moderate differences from those obtained using
implicit-solvent models with a uniform relative permittivity equal to that of the
pure solvent. Buttressed by theoretical treatments developed here using random
phase approximation and polymer field-theoretic simulations, these observations
indicate a partial compensation of effects between favorable solvent-mediated
interactions among the polyampholytes in the condensed phase and favorable
polyampholyte-solvent interactions in the dilute phase, often netting only a
minor enhancement of overall LLPS propensity from the very dielectric
heterogeneity that arises from the LLPS itself. Further ramifications of this
principle are discussed.
| [
{
"created": "Sat, 6 Feb 2021 23:52:36 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Apr 2021 06:08:30 GMT",
"version": "v2"
}
] | 2021-05-26 | [
[
"Wessén",
"Jonas",
""
],
[
"Pal",
"Tanmoy",
""
],
[
"Das",
"Suman",
""
],
[
"Lin",
"Yi-Hsuan",
""
],
[
"Chan",
"Hue Sun",
""
]
] | Biomolecular condensates such as membraneless organelles, underpinned by liquid-liquid phase separation (LLPS), are important for physiological function, with electrostatics -- among other interaction types -- being a prominent force in their assembly. Charge interactions of intrinsically disordered proteins (IDPs) and other biomolecules are sensitive to the aqueous dielectric environment. Because the relative permittivity of protein is significantly lower than that of water, the interior of an IDP condensate is a relatively low-dielectric regime, which, aside from its possible functional effects on client molecules, should facilitate stronger electrostatic interactions among the scaffold IDPs. To gain insight into this LLPS-induced dielectric heterogeneity, addressing in particular whether a low-dielectric condensed phase entails more favorable LLPS than that posited by assuming IDP electrostatic interactions are uniformly modulated by the higher dielectric constant of the pure solvent, we consider a simplified multiple-chain model of polyampholytes immersed in explicit solvents that are either polarizable or possess a permanent dipole. Notably, simulated phase behaviors of these systems exhibit only minor to moderate differences from those obtained using implicit-solvent models with a uniform relative permittivity equal to that of the pure solvent. Buttressed by theoretical treatments developed here using random phase approximation and polymer field-theoretic simulations, these observations indicate a partial compensation of effects between favorable solvent-mediated interactions among the polyampholytes in the condensed phase and favorable polyampholyte-solvent interactions in the dilute phase, often netting only a minor enhancement of overall LLPS propensity from the very dielectric heterogeneity that arises from the LLPS itself. Further ramifications of this principle are discussed. |
2202.11510 | D\'ebora Princepe | D\'ebora Princepe, Simone Czarnobai, Thiago M. Pradella, Rodrigo A.
Caetano, Flavia M. D. Marquitti, Marcus A. M. de Aguiar, Sabrina B. L. Araujo | Diversity patterns and speciation processes in a two-island system with
continuous migration | null | Evolution, 2022 | 10.1111/evo.14603 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Geographic isolation is a central mechanism of speciation, but perfect
isolation of populations is rare. Although speciation can be hindered if gene
flow is large, intermediate levels of migration can enhance speciation by
introducing genetic novelty in the semi-isolated populations or founding small
communities of migrants. Here we consider a two-island neutral model of
speciation with continuous migration and study diversity patterns as a function
of the migration probability, population size, and number of genes involved in
reproductive isolation (dubbed as genome size). For small genomes, low levels
of migration induce speciation on the islands that otherwise would not occur.
Diversity, however, drops sharply to a single species inhabiting both islands
as the migration probability increases. For large genomes, sympatric speciation
occurs even when the islands are strictly isolated. Then species richness per
island increases with the probability of migration, but the total number of
species decreases as they become cosmopolitan. For each genome size, there is
an optimal migration intensity for each population size that maximizes the
number of species. We discuss the observed modes of speciation induced by
migration and how they increase species richness in the insular system while
promoting asymmetry between the islands and hindering endemism.
| [
{
"created": "Wed, 23 Feb 2022 13:46:06 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jun 2022 14:51:39 GMT",
"version": "v2"
}
] | 2022-10-14 | [
[
"Princepe",
"Débora",
""
],
[
"Czarnobai",
"Simone",
""
],
[
"Pradella",
"Thiago M.",
""
],
[
"Caetano",
"Rodrigo A.",
""
],
[
"Marquitti",
"Flavia M. D.",
""
],
[
"de Aguiar",
"Marcus A. M.",
""
],
[
"Araujo",
... | Geographic isolation is a central mechanism of speciation, but perfect isolation of populations is rare. Although speciation can be hindered if gene flow is large, intermediate levels of migration can enhance speciation by introducing genetic novelty in the semi-isolated populations or founding small communities of migrants. Here we consider a two-island neutral model of speciation with continuous migration and study diversity patterns as a function of the migration probability, population size, and number of genes involved in reproductive isolation (dubbed as genome size). For small genomes, low levels of migration induce speciation on the islands that otherwise would not occur. Diversity, however, drops sharply to a single species inhabiting both islands as the migration probability increases. For large genomes, sympatric speciation occurs even when the islands are strictly isolated. Then species richness per island increases with the probability of migration, but the total number of species decreases as they become cosmopolitan. For each genome size, there is an optimal migration intensity for each population size that maximizes the number of species. We discuss the observed modes of speciation induced by migration and how they increase species richness in the insular system while promoting asymmetry between the islands and hindering endemism. |
2107.02430 | Indrajit Ghosh | Indrajit Ghosh, Sk Shahid Nadim, Soumyendu Raha, Debnath Pal | Metapopulation dynamics of a respiratory disease with infection during
travel | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We formulate a compartmental model for the propagation of a respiratory
disease in a patchy environment. The patches are connected through the mobility
of individuals, and we assume that disease transmission and recovery are
possible during travel. Moreover, the migration terms are assumed to depend on
the distance between patches and the perceived severity of the disease. The
positivity and boundedness of the model solutions are discussed. We
analytically show the existence and global asymptotic stability of the
disease-free equilibrium. We study three different network topologies
numerically and find that underlying network structure is crucial for disease
transmission. Further numerical simulations reveal that infection during travel
has the potential to change the stability of disease-free equilibrium from
stable to unstable. The coupling strength and transmission coefficients are
also crucial in disease propagation. Different exit screening scenarios
indicate that the patch with the highest prevalence may have adverse effects
but other patches will benefit from exit screening. Furthermore, while
studying the multi-strain dynamics, it is observed that two co-circulating
strains will not persist simultaneously in the community but only one of the
strains may persist in the long run. Transmission coefficients corresponding to
the second strain are crucial and show threshold-like behavior with
respect to the equilibrium density of the second strain.
| [
{
"created": "Tue, 6 Jul 2021 07:10:01 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Oct 2021 14:13:50 GMT",
"version": "v2"
}
] | 2021-10-11 | [
[
"Ghosh",
"Indrajit",
""
],
[
"Nadim",
"Sk Shahid",
""
],
[
"Raha",
"Soumyendu",
""
],
[
"Pal",
"Debnath",
""
]
] | We formulate a compartmental model for the propagation of a respiratory disease in a patchy environment. The patches are connected through the mobility of individuals, and we assume that disease transmission and recovery are possible during travel. Moreover, the migration terms are assumed to depend on the distance between patches and the perceived severity of the disease. The positivity and boundedness of the model solutions are discussed. We analytically show the existence and global asymptotic stability of the disease-free equilibrium. We study three different network topologies numerically and find that underlying network structure is crucial for disease transmission. Further numerical simulations reveal that infection during travel has the potential to change the stability of disease-free equilibrium from stable to unstable. The coupling strength and transmission coefficients are also crucial in disease propagation. Different exit screening scenarios indicate that the patch with the highest prevalence may have adverse effects but other patches will benefit from exit screening. Furthermore, while studying the multi-strain dynamics, it is observed that two co-circulating strains will not persist simultaneously in the community but only one of the strains may persist in the long run. Transmission coefficients corresponding to the second strain are crucial and show threshold-like behavior with respect to the equilibrium density of the second strain. |
1905.04165 | Roland Wittler | Roland Wittler | Alignment- and reference-free phylogenomics with colored de-Bruijn
graphs | null | null | null | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new whole-genome based approach to infer large-scale phylogenies
that is alignment- and reference-free. In contrast to other methods, it does
not rely on pairwise comparisons to determine distances to infer edges in a
tree. Instead, a colored de-Bruijn graph is constructed, and information on
common subsequences is extracted to infer phylogenetic splits. Application to
different datasets confirms robustness of the approach. A comparison to other
state-of-the-art whole-genome based methods indicates comparable or higher
accuracy and efficiency.
| [
{
"created": "Fri, 10 May 2019 13:33:06 GMT",
"version": "v1"
},
{
"created": "Wed, 15 May 2019 08:00:21 GMT",
"version": "v2"
}
] | 2019-05-16 | [
[
"Wittler",
"Roland",
""
]
] | We present a new whole-genome based approach to infer large-scale phylogenies that is alignment- and reference-free. In contrast to other methods, it does not rely on pairwise comparisons to determine distances to infer edges in a tree. Instead, a colored de-Bruijn graph is constructed, and information on common subsequences is extracted to infer phylogenetic splits. Application to different datasets confirms robustness of the approach. A comparison to other state-of-the-art whole-genome based methods indicates comparable or higher accuracy and efficiency. |
1702.07368 | Benjamin Lansdell | Benjamin Lansdell, Ivana Milovanovic, Cooper Mellema, Eberhard E Fetz,
Adrienne L Fairhall, Chet T Moritz | Reconfiguring motor circuits for a joint manual and BCI task | 17 pages, 5 figures, 2 supplementary figures. Minor revisions | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing brain-computer interfaces (BCIs) that can be used in conjunction
with ongoing motor behavior requires an understanding of how neural activity
co-opted for brain control interacts with existing neural circuits. For
example, BCIs may be used to regain lost motor function after stroke. This
requires that neural activity controlling unaffected limbs is dissociated from
activity controlling the BCI. In this study we investigated how primary motor
cortex accomplishes simultaneous BCI control and motor control in a task that
explicitly required both activities to be driven from the same brain region
(i.e. a dual-control task). Single-unit activity was recorded from
intracortical, multi-electrode arrays while a non-human primate performed this
dual-control task. Compared to activity observed during naturalistic motor
control, we found that both units used to drive the BCI directly (control
units) and units that did not directly control the BCI (non-control units)
significantly changed their tuning to wrist torque. Using a measure of
effective connectivity, we observed that control units decrease their
connectivity. Through an analysis of variance we found that the intrinsic
variability of the control units has a significant effect on task proficiency.
When this variance is accounted for, motor cortical activity is flexible enough
to perform novel BCI tasks that require active decoupling of natural
associations to wrist motion. This study provides insight into the neural
activity that enables a dual-control brain-computer interface.
| [
{
"created": "Thu, 23 Feb 2017 19:13:00 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Jan 2019 15:09:07 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Sep 2019 15:34:54 GMT",
"version": "v3"
}
] | 2019-09-13 | [
[
"Lansdell",
"Benjamin",
""
],
[
"Milovanovic",
"Ivana",
""
],
[
"Mellema",
"Cooper",
""
],
[
"Fetz",
"Eberhard E",
""
],
[
"Fairhall",
"Adrienne L",
""
],
[
"Moritz",
"Chet T",
""
]
] | Designing brain-computer interfaces (BCIs) that can be used in conjunction with ongoing motor behavior requires an understanding of how neural activity co-opted for brain control interacts with existing neural circuits. For example, BCIs may be used to regain lost motor function after stroke. This requires that neural activity controlling unaffected limbs is dissociated from activity controlling the BCI. In this study we investigated how primary motor cortex accomplishes simultaneous BCI control and motor control in a task that explicitly required both activities to be driven from the same brain region (i.e. a dual-control task). Single-unit activity was recorded from intracortical, multi-electrode arrays while a non-human primate performed this dual-control task. Compared to activity observed during naturalistic motor control, we found that both units used to drive the BCI directly (control units) and units that did not directly control the BCI (non-control units) significantly changed their tuning to wrist torque. Using a measure of effective connectivity, we observed that control units decrease their connectivity. Through an analysis of variance we found that the intrinsic variability of the control units has a significant effect on task proficiency. When this variance is accounted for, motor cortical activity is flexible enough to perform novel BCI tasks that require active decoupling of natural associations to wrist motion. This study provides insight into the neural activity that enables a dual-control brain-computer interface. |
1909.08158 | Takahiro Homma | Takahiro Homma | Generation mechanism of cell assembly to store information about hand
recognition | null | Heliyon Volume 6, Issue 11, November 2020, e05347 | 10.1016/j.heliyon.2020.e05347 | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A specific memory is stored in a cell assembly that is activated during fear
learning in mice; however, research regarding cell assemblies associated with
procedural and habit learning processes is lacking. In modeling studies,
simulations of the learning process for hand regard, which is a type of
procedural learning, resulted in the formation of cell assemblies. However, the
mechanisms through which the cell assemblies form and the information stored in
these cell assemblies remain unknown. In this paper, the relationship between
hand movements and weight changes during the simulated learning process for
hand regard was used to elucidate the mechanism through which inhibitory
weights are generated, which plays an important role in the formation of cell
assemblies. During the early training phase, trial and error attempts to bring
the hand into the field of view caused the generation of inhibitory weights,
and the cell assemblies self-organized from these inhibitory weights. The
information stored in the cell assemblies was estimated by examining the
contributions of the cell assemblies' outputs to hand movements. During
sustained hand regard, the outputs from these cell assemblies moved the hand
into the field of view, using hand-related inputs almost exclusively.
Therefore, infants are likely able to select the inputs associated with their
hand (that is, distinguish between their hand and others), based on the
information stored in the cell assembly, and move their hands into the field of
view during sustained hand regard.
| [
{
"created": "Wed, 18 Sep 2019 01:19:55 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Apr 2020 08:07:12 GMT",
"version": "v2"
}
] | 2020-11-09 | [
[
"Homma",
"Takahiro",
""
]
] | A specific memory is stored in a cell assembly that is activated during fear learning in mice; however, research regarding cell assemblies associated with procedural and habit learning processes is lacking. In modeling studies, simulations of the learning process for hand regard, which is a type of procedural learning, resulted in the formation of cell assemblies. However, the mechanisms through which the cell assemblies form and the information stored in these cell assemblies remain unknown. In this paper, the relationship between hand movements and weight changes during the simulated learning process for hand regard was used to elucidate the mechanism through which inhibitory weights are generated, which plays an important role in the formation of cell assemblies. During the early training phase, trial and error attempts to bring the hand into the field of view caused the generation of inhibitory weights, and the cell assemblies self-organized from these inhibitory weights. The information stored in the cell assemblies was estimated by examining the contributions of the cell assemblies' outputs to hand movements. During sustained hand regard, the outputs from these cell assemblies moved the hand into the field of view, using hand-related inputs almost exclusively. Therefore, infants are likely able to select the inputs associated with their hand (that is, distinguish between their hand and others), based on the information stored in the cell assembly, and move their hands into the field of view during sustained hand regard. |
1811.12537 | Xin Li | Xin Li, and D. Thirumalai | Share, but unequally: A plausible mechanism for emergence and
maintenance of intratumor heterogeneity | 27 pages, 7 figures | null | null | null | q-bio.PE cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intratumor heterogeneity (ITH), referring to coexistence of different cell
subpopulations in a single tumor, has been a major puzzle in cancer research
for almost half a century. The lack of understanding of the underlying
mechanism of ITH hinders progress in developing effective therapies for
cancers. Based on the findings in a recent quantitative experiment on
pancreatic cancer, we developed a general evolutionary model for one type of
cancer, accounting for interactions between different cell populations through
paracrine or juxtacrine factors. We show that the emergence of a stable
heterogeneous state in a tumor requires an unequal allocation of paracrine
growth factors ("public goods") between cells that produce them and those that
merely consume them. Our model provides a quantitative explanation of recent
{\it in vitro} experimental studies in pancreatic cancer in which insulin
growth factor (IGF-II) plays the role of public goods. The calculated phase
diagrams as a function of exogenous resources and fraction of growth factor
producing cells show ITH persists only in a narrow range of concentration of
exogenous IGF-II. Remarkably, maintenance of ITH requires cooperation among
tumor cell subpopulations in harsh conditions, specified by lack of exogenous
IGF-II, whereas surplus exogenous IGF-II elicits competition. Our theory also
quantitatively accounts for measured {\it in vivo} tumor growth in glioblastoma
multiforme (GBM). The predictions for GBM tumor growth as a function of the
fraction of tumor cells are amenable to experimental tests. The mechanism for
ITH also provides hints for devising efficacious therapies.
| [
{
"created": "Thu, 29 Nov 2018 23:39:49 GMT",
"version": "v1"
}
] | 2019-02-22 | [
[
"Li",
"Xin",
""
],
[
"Thirumalai",
"D.",
""
]
] | Intratumor heterogeneity (ITH), referring to coexistence of different cell subpopulations in a single tumor, has been a major puzzle in cancer research for almost half a century. The lack of understanding of the underlying mechanism of ITH hinders progress in developing effective therapies for cancers. Based on the findings in a recent quantitative experiment on pancreatic cancer, we developed a general evolutionary model for one type of cancer, accounting for interactions between different cell populations through paracrine or juxtacrine factors. We show that the emergence of a stable heterogeneous state in a tumor requires an unequal allocation of paracrine growth factors ("public goods") between cells that produce them and those that merely consume them. Our model provides a quantitative explanation of recent {\it in vitro} experimental studies in pancreatic cancer in which insulin growth factor (IGF-II) plays the role of public goods. The calculated phase diagrams as a function of exogenous resources and fraction of growth factor producing cells show ITH persists only in a narrow range of concentration of exogenous IGF-II. Remarkably, maintenance of ITH requires cooperation among tumor cell subpopulations in harsh conditions, specified by lack of exogenous IGF-II, whereas surplus exogenous IGF-II elicits competition. Our theory also quantitatively accounts for measured {\it in vivo} tumor growth in glioblastoma multiforme (GBM). The predictions for GBM tumor growth as a function of the fraction of tumor cells are amenable to experimental tests. The mechanism for ITH also provides hints for devising efficacious therapies. |
2106.10631 | Leonardo Novelli | Leonardo Novelli, Adeel Razi | A mathematical perspective on edge-centric brain functional connectivity | null | Nat Commun 13, 2693 (2022) | 10.1038/s41467-022-29775-7 | null | q-bio.NC physics.data-an | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Edge time series are increasingly used in brain functional imaging to study
the node functional connectivity (nFC) dynamics at the finest temporal
resolution while avoiding sliding windows. Here, we lay the mathematical
foundations for the edge-centric analysis of neuroimaging time series,
explaining why a few high-amplitude cofluctuations drive the nFC across
datasets. Our exposition also constitutes a critique of the existing
edge-centric studies, showing that their main findings can be derived from the
nFC under a static null hypothesis that disregards temporal correlations.
Testing the analytic predictions on functional MRI data from the Human
Connectome Project confirms that the nFC can explain most variation in the edge
FC matrix, the edge communities, the large cofluctuations, and the
corresponding spatial patterns. We encourage the use of dynamic measures in
future research, which exploit the temporal structure of the edge time series
and cannot be replicated by static null models.
| [
{
"created": "Sun, 20 Jun 2021 05:53:12 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jul 2022 11:51:57 GMT",
"version": "v2"
}
] | 2022-07-15 | [
[
"Novelli",
"Leonardo",
""
],
[
"Razi",
"Adeel",
""
]
] | Edge time series are increasingly used in brain functional imaging to study the node functional connectivity (nFC) dynamics at the finest temporal resolution while avoiding sliding windows. Here, we lay the mathematical foundations for the edge-centric analysis of neuroimaging time series, explaining why a few high-amplitude cofluctuations drive the nFC across datasets. Our exposition also constitutes a critique of the existing edge-centric studies, showing that their main findings can be derived from the nFC under a static null hypothesis that disregards temporal correlations. Testing the analytic predictions on functional MRI data from the Human Connectome Project confirms that the nFC can explain most variation in the edge FC matrix, the edge communities, the large cofluctuations, and the corresponding spatial patterns. We encourage the use of dynamic measures in future research, which exploit the temporal structure of the edge time series and cannot be replicated by static null models. |
q-bio/0701018 | Eduardo D. Sontag | David Angeli and Eduardo D. Sontag | Oscillations in I/O monotone systems under negative feedback | Related work can be retrieved from second author's website | null | null | null | q-bio.QM q-bio.MN | null | Oscillatory behavior is a key property of many biological systems. The
Small-Gain Theorem (SGT) for input/output monotone systems provides a
sufficient condition for global asymptotic stability of an equilibrium and
hence its violation is a necessary condition for the existence of periodic
solutions. One advantage of the use of the monotone SGT technique is its
robustness with respect to all perturbations that preserve monotonicity and
stability properties of a very low-dimensional (in many interesting examples,
just one-dimensional) model reduction. This robustness makes the technique
useful in the analysis of molecular biological models in which there is large
uncertainty regarding the values of kinetic and other parameters. However,
verifying the conditions needed in order to apply the SGT is not always easy.
This paper provides an approach to the verification of the needed properties,
and illustrates the approach through an application to a classical model of
circadian oscillations, as a nontrivial ``case study,'' and also provides a
theorem in the converse direction of predicting oscillations when the SGT
conditions fail.
| [
{
"created": "Sun, 14 Jan 2007 14:43:29 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Angeli",
"David",
""
],
[
"Sontag",
"Eduardo D.",
""
]
] | Oscillatory behavior is a key property of many biological systems. The Small-Gain Theorem (SGT) for input/output monotone systems provides a sufficient condition for global asymptotic stability of an equilibrium and hence its violation is a necessary condition for the existence of periodic solutions. One advantage of the use of the monotone SGT technique is its robustness with respect to all perturbations that preserve monotonicity and stability properties of a very low-dimensional (in many interesting examples, just one-dimensional) model reduction. This robustness makes the technique useful in the analysis of molecular biological models in which there is large uncertainty regarding the values of kinetic and other parameters. However, verifying the conditions needed in order to apply the SGT is not always easy. This paper provides an approach to the verification of the needed properties, and illustrates the approach through an application to a classical model of circadian oscillations, as a nontrivial ``case study,'' and also provides a theorem in the converse direction of predicting oscillations when the SGT conditions fail. |
0906.5535 | Thomas Butler | Thomas Butler and Nigel Goldenfeld | Robust ecological pattern formation induced by demographic noise | Revised version. Supporting simulation at:
http://guava.physics.uiuc.edu/~tom/Netlogo/ | null | 10.1103/PhysRevE.80.030902 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate that demographic noise can induce persistent spatial pattern
formation and temporal oscillations in the Levin-Segel predator-prey model for
plankton-herbivore population dynamics. Although the model exhibits a Turing
instability in mean field theory, demographic noise greatly enlarges the region
of parameter space where pattern formation occurs. To distinguish between
patterns generated by fluctuations and those present at the mean field level in
real ecosystems, we calculate the power spectrum in the noise-driven case and
predict the presence of fat tails not present in the mean field case. These
results may account for the prevalence of large-scale ecological patterns,
beyond that expected from traditional non-stochastic approaches.
| [
{
"created": "Tue, 30 Jun 2009 14:12:21 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jun 2009 20:54:34 GMT",
"version": "v2"
},
{
"created": "Wed, 29 Jul 2009 00:31:02 GMT",
"version": "v3"
}
] | 2015-05-13 | [
[
"Butler",
"Thomas",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] | We demonstrate that demographic noise can induce persistent spatial pattern formation and temporal oscillations in the Levin-Segel predator-prey model for plankton-herbivore population dynamics. Although the model exhibits a Turing instability in mean field theory, demographic noise greatly enlarges the region of parameter space where pattern formation occurs. To distinguish between patterns generated by fluctuations and those present at the mean field level in real ecosystems, we calculate the power spectrum in the noise-driven case and predict the presence of fat tails not present in the mean field case. These results may account for the prevalence of large-scale ecological patterns, beyond that expected from traditional non-stochastic approaches. |
2012.09325 | Jacob Moran | Jacob Moran, Devon Finlay, and Mikhail Tikhonov | Improve it or lose it: evolvability costs of competition for expression | 6 pages, 4 figures + Supplementary Material | Phys. Rev. E 103, 062402 (2021) | 10.1103/PhysRevE.103.062402 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Expression level is known to be a strong determinant of a protein's rate of
evolution. But the converse can also be true: evolutionary dynamics can affect
expression levels of proteins. Having implications in both directions fosters
the possibility of a feedback loop, where higher expressed systems are more
likely to improve and be expressed even higher, while those that are expressed
less are eventually lost to drift. Using a minimal model to study this in the
context of a changing environment, we demonstrate that one unexpected
consequence of such a feedback loop is that a slow switch to a new environment
can allow genotypes to reach higher fitness sooner than a direct exposure to
it.
| [
{
"created": "Wed, 16 Dec 2020 23:52:21 GMT",
"version": "v1"
}
] | 2021-06-09 | [
[
"Moran",
"Jacob",
""
],
[
"Finlay",
"Devon",
""
],
[
"Tikhonov",
"Mikhail",
""
]
] | Expression level is known to be a strong determinant of a protein's rate of evolution. But the converse can also be true: evolutionary dynamics can affect expression levels of proteins. Having implications in both directions fosters the possibility of a feedback loop, where higher expressed systems are more likely to improve and be expressed even higher, while those that are expressed less are eventually lost to drift. Using a minimal model to study this in the context of a changing environment, we demonstrate that one unexpected consequence of such a feedback loop is that a slow switch to a new environment can allow genotypes to reach higher fitness sooner than a direct exposure to it. |
1411.3801 | Tom Chou | Tom Chou, Yu Wang | Fixation times in differentiation and evolution in the presence of
bottlenecks, deserts, and oases | 16 pages, 9 figures | null | null | null | q-bio.PE cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cellular differentiation and evolution are stochastic processes that can
involve multiple types (or states) of particles moving on a complex,
high-dimensional state-space or "fitness" landscape. Cells of each specific
type can thus be quantified by their population at a corresponding node within
a network of states. Their dynamics across the state-space network involve
genotypic or phenotypic transitions that can occur upon cell division, such as
during symmetric or asymmetric cell differentiation, or upon spontaneous
mutation. Waiting times between transitions can be nonexponentially distributed
and reflect, e.g., the cell cycle. Here, we use a multi-type branching process
to study first passage time statistics for a single cell to appear in a
specific state. We present results for a sequential evolutionary process in
which $L$ successive transitions propel a population from a "wild-type" state
to a given "terminally differentiated," "resistant," or "cancerous" state.
Analytic and numeric results are also found for first passage times across an
evolutionary chain containing a node with increased death or proliferation
rate, representing a desert/bottleneck or an oasis. Processes involving cell
proliferation are shown to be "nonlinear" (even though mean-field equations for
the expected particle numbers are linear) resulting in first passage time
statistics that depend on the position of the bottleneck or oasis. Our results
highlight the sensitivity of stochastic measures to cell division fate and
quantify the limitations of using certain approximations and assumptions (such
as fixed-population and mean-field assumptions) in evaluating fixation times.
| [
{
"created": "Fri, 14 Nov 2014 05:43:33 GMT",
"version": "v1"
}
] | 2014-11-17 | [
[
"Chou",
"Tom",
""
],
[
"Wang",
"Yu",
""
]
] | Cellular differentiation and evolution are stochastic processes that can involve multiple types (or states) of particles moving on a complex, high-dimensional state-space or "fitness" landscape. Cells of each specific type can thus be quantified by their population at a corresponding node within a network of states. Their dynamics across the state-space network involve genotypic or phenotypic transitions that can occur upon cell division, such as during symmetric or asymmetric cell differentiation, or upon spontaneous mutation. Waiting times between transitions can be nonexponentially distributed and reflect e.g., the cell cycle. Here, we use a multi-type branching processes to study first passage time statistics for a single cell to appear in a specific state. We present results for a sequential evolutionary process in which $L$ successive transitions propel a population from a "wild-type" state to a given "terminally differentiated," "resistant," or "cancerous" state. Analytic and numeric results are also found for first passage times across an evolutionary chain containing a node with increased death or proliferation rate, representing a desert/bottleneck or an oasis. Processes involving cell proliferation are shown to be "nonlinear" (even though mean-field equations for the expected particle numbers are linear) resulting in first passage time statistics that depend on the position of the bottleneck or oasis. Our results highlight the sensitivity of stochastic measures to cell division fate and quantify the limitations of using certain approximations and assumptions (such as fixed-population and mean-field assumptions) in evaluating fixation times. |
2004.03575 | Jos\'e Carcione M | Jose' M. Carcione, Juan E. Santos, Claudio Bagaini, Jing Ba | A simulation of a COVID-19 epidemic based on a deterministic SEIR model | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An epidemic disease caused by a new coronavirus has spread in Northern Italy
with a strong contagion rate. We implement an SEIR model to compute the
infected population and number of casualties of this epidemic. The example may
ideally regard the situation in the Italian Region of Lombardy, where the
epidemic started on February 25. We calibrate the model with the number of dead
individuals to date (May 5, 2020) and constrain the parameters on the basis of
values reported in the literature. The peak occurs at day 37 (March 31)
approximately, when there is a rapid decrease, with a reproduction ratio R0 = 3
initially, 1.36 at day 22 and 0.8 after day 35, indicating different degrees of
lockdown. The predicted death toll is approximately 15600 casualties, with 2.7
million infected individuals at the end of the epidemic. The incubation period
providing a better fit of the dead individuals is 4.25 days and the infection
period is 4 days, with a fatality rate of 0.00144/day [values based on the
reported (official) number of casualties]. The infection fatality rate (IFR) is
0.57 %, and 2.36 % if twice the reported number of casualties is assumed.
However, these rates depend on the initially exposed individuals. If
approximately nine times more individuals are exposed, there are three times
more infected people at the end of the epidemic and IFR = 0.47 %. If we relax
these constraints and use a wider range of lower and upper bounds for the
incubation and infection periods, we observe that a higher incubation period
(13 versus 4.25 days) gives the same IFR (0.6 versus 0.57 %), but nine times
more exposed individuals in the first case. Therefore, a precise determination
of the fatality rate is subject to the knowledge of the characteristics of the
epidemic.
| [
{
"created": "Tue, 7 Apr 2020 17:54:33 GMT",
"version": "v1"
},
{
"created": "Sun, 10 May 2020 09:18:19 GMT",
"version": "v10"
},
{
"created": "Wed, 8 Apr 2020 09:03:36 GMT",
"version": "v2"
},
{
"created": "Sat, 11 Apr 2020 08:03:24 GMT",
"version": "v3"
},
{
"cr... | 2020-05-12 | [
[
"Carcione",
"Jose' M.",
""
],
[
"Santos",
"Juan E.",
""
],
[
"Bagaini",
"Claudio",
""
],
[
"Ba",
"Jing",
""
]
] | An epidemic disease caused by a new coronavirus has spread in Northern Italy with a strong contagion rate. We implement an SEIR model to compute the infected population and number of casualties of this epidemic. The example may ideally regard the situation in the Italian Region of Lombardy, where the epidemic started on February 25. We calibrate the model with the number of dead individuals to date (May 5, 2020) and constraint the parameters on the basis of values reported in the literature. The peak occurs at day 37 (March 31) approximately, when there is a rapid decrease, with a reproduction ratio R0 = 3 initially, 1.36 at day 22 and 0.8 after day 35, indicating different degrees of lockdown. The predicted death toll is approximately 15600 casualties, with 2.7 million infected individuals at the end of the epidemic. The incubation period providing a better fit of the dead individuals is 4.25 days and the infection period is 4 days, with a fatality rate of 0.00144/day [values based on the reported (official) number of casualties]. The infection fatality rate (IFR) is 0.57 %, and 2.36 % if twice the reported number of casualties is assumed. However, these rates depend on the initially exposed individuals. If approximately nine times more individuals are exposed, there are three times more infected people at the end of the epidemic and IFR = 0.47 %. If we relax these constraints and use a wider range of lower and upper bounds for the incubation and infection periods, we observe that a higher incubation period (13 versus 4.25 days) gives the same IFR (0.6 versus 0.57 %), but nine times more exposed individuals in the first case. Therefore, a precise determination of the fatality rate is subject to the knowledge of the characteristics of the epidemic. |
2306.08486 | Giorgio Gonnella | Serena Lam and Giorgio Gonnella | Collection of prokaryotic genome contents expectation rules from
scientific literature | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Shaped by natural selection and other evolutionary forces, an organism's
evolutionary history is reflected through its genome sequence, content of
functional elements and organization. Consequently, organisms connected through
phylogeny, metabolic or morphological traits, geographical proximity, or
habitat features are likely to exhibit similarities in their genomes. These
similarities give rise to expectations about the content of genomes within
these organism groups.
Such expectations are often informally expressed in scientific literature,
focusing on the analysis of individual genomes or comparisons among related
groups of organisms. Our objective is to develop a system for formalized
expectations as rules, facilitating automated verification, and evaluation of
newly sequenced genomes.
In this study, we present a database comprising rules manually extracted from
scientific literature. Furthermore, we explore the feasibility of automating
the extraction and analysis process using large language models, such as GPT3.5
and GPT4.
We have developed a web application, EGCWebApp, which enables users to
visualize and edit the rules. Additionally, we provided a Python library and
command-line tools collection, egctools, to further extend the functionality
for processing and managing these rules.
| [
{
"created": "Wed, 14 Jun 2023 13:03:48 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Lam",
"Serena",
""
],
[
"Gonnella",
"Giorgio",
""
]
] | Shaped by natural selection and other evolutionary forces, an organism's evolutionary history is reflected through its genome sequence, content of functional elements and organization. Consequently, organisms connected through phylogeny, metabolic or morphological traits, geographical proximity, or habitat features are likely to exhibit similarities in their genomes. These similarities give rise to expectations about the content of genomes within these organism groups. Such expectations are often informally expressed in scientific literature, focusing on the analysis of individual genomes or comparisons among related groups of organisms. Our objective is to develop a system for formalized expectations as rules, facilitating automated verification, and evaluation of newly sequenced genomes. In this study, we present a database comprising rules manually extracted from scientific literature. Furthermore, we explore the feasibility of automatizing the extraction and analysis process using large language models, such as GPT3.5 and GPT4. We have developed a web application, EGCWebApp, which enables users to visualize and edit the rules. Additionally, we provided a Python library and command-line tools collection, egctools, to further extend the functionality for processing and managing these rules. |
2111.09138 | Rachel Cavill | Nordine Aouni, Luc Linders, David Robinson, Len Vandelaer, Jessica
Wiezorek, Geetesh Gupta, Rachel Cavill | Interpreting multi-variate models with setPCA | null | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Principal Component Analysis (PCA) and other multi-variate models are often
used in the analysis of "omics" data. These models contain much information
which is currently neither easily accessible nor interpretable. Here we present
an algorithmic method which has been developed to integrate this information
with existing databases of background knowledge, stored in the form of known
sets (for instance genesets or pathways). To make this accessible we have
produced a Graphical User Interface (GUI) in Matlab which allows the overlay of
known set information onto the loadings plot and thus improves the
interpretability of the multi-variate model. For each known set the optimal
convex hull, covering a subset of elements from the known set, is found through
a search algorithm and displayed. In this paper we discuss two main topics; the
details of the search algorithm for the optimal convex hull for this problem
and the GUI interface which is freely available for download for academic use.
| [
{
"created": "Wed, 17 Nov 2021 14:22:19 GMT",
"version": "v1"
}
] | 2021-11-18 | [
[
"Aouni",
"Nordine",
""
],
[
"Linders",
"Luc",
""
],
[
"Robinson",
"David",
""
],
[
"Vandelaer",
"Len",
""
],
[
"Wiezorek",
"Jessica",
""
],
[
"Gupta",
"Geetesh",
""
],
[
"Cavill",
"Rachel",
""
]
] | Principal Component Analysis (PCA) and other multi-variate models are often used in the analysis of "omics" data. These models contain much information which is currently neither easily accessible nor interpretable. Here we present an algorithmic method which has been developed to integrate this information with existing databases of background knowledge, stored in the form of known sets (for instance genesets or pathways). To make this accessible we have produced a Graphical User Interface (GUI) in Matlab which allows the overlay of known set information onto the loadings plot and thus improves the interpretability of the multi-variate model. For each known set the optimal convex hull, covering a subset of elements from the known set, is found through a search algorithm and displayed. In this paper we discuss two main topics; the details of the search algorithm for the optimal convex hull for this problem and the GUI interface which is freely available for download for academic use. |
1711.09133 | Michael Deem | Qiuhai Yue, Randi Martin, Simon Fischer-Baum, Aurora I. Ramos-Nu\~nez,
Fengdan Ye, and Michael W. Deem | Brain Modularity Mediates the Relation between Task Complexity and
Performance | 47 pages; 4 figures | J. Cog. Neurosci. 29 (2017) 1532-1546 | 10.1162/jocn_a_01142 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work in cognitive neuroscience has focused on analyzing the brain as a
network, rather than as a collection of independent regions. Prior studies
taking this approach have found that individual differences in the degree of
modularity of the brain network relate to performance on cognitive tasks.
However, inconsistent results concerning the direction of this relationship
have been obtained, with some tasks showing better performance as modularity
increases and other tasks showing worse performance. A recent theoretical model
(Chen & Deem, 2015) suggests that these inconsistencies may be explained on the
grounds that high-modularity networks favor performance on simple tasks whereas
low-modularity networks favor performance on more complex tasks. The current
study tests these predictions by relating modularity from resting-state fMRI to
performance on a set of simple and complex behavioral tasks. Complex and simple
tasks were defined on the basis of whether they did or did not draw on
executive attention. Consistent with predictions, we found a negative
correlation between individuals' modularity and their performance on a
composite measure combining scores from the complex tasks but a positive
correlation with performance on a composite measure combining scores from the
simple tasks. These results and theory presented here provide a framework for
linking measures of whole brain organization from network neuroscience to
cognitive processing.
| [
{
"created": "Fri, 24 Nov 2017 20:50:59 GMT",
"version": "v1"
}
] | 2017-11-28 | [
[
"Yue",
"Qiuhai",
""
],
[
"Martin",
"Randi",
""
],
[
"Fischer-Baum",
"Simon",
""
],
[
"Ramos-Nuñez",
"Aurora I.",
""
],
[
"Ye",
"Fengdan",
""
],
[
"Deem",
"Michael W.",
""
]
] | Recent work in cognitive neuroscience has focused on analyzing the brain as a network, rather than as a collection of independent regions. Prior studies taking this approach have found that individual differences in the degree of modularity of the brain network relate to performance on cognitive tasks. However, inconsistent results concerning the direction of this relationship have been obtained, with some tasks showing better performance as modularity increases and other tasks showing worse performance. A recent theoretical model (Chen & Deem, 2015) suggests that these inconsistencies may be explained on the grounds that high-modularity networks favor performance on simple tasks whereas low-modularity networks favor performance on more complex tasks. The current study tests these predictions by relating modularity from resting-state fMRI to performance on a set of simple and complex behavioral tasks. Complex and simple tasks were defined on the basis of whether they did or did not draw on executive attention. Consistent with predictions, we found a negative correlation between individuals' modularity and their performance on a composite measure combining scores from the complex tasks but a positive correlation with performance on a composite measure combining scores from the simple tasks. These results and theory presented here provide a framework for linking measures of whole brain organization from network neuroscience to cognitive processing. |
2201.09960 | Hirokuni Miyamoto | Hirokuni Miyamoto, Futo Asano, Koutarou Ishizawa, Wataru Suda, Hisashi
Miyamoto, Naoko Tsuji, Makiko Matsuura, Arisa Tsuboi, Chitose Ishii, Teruno
Nakaguma, Chie Shindo, Tamotsu Kato, Atsushi Kurotani, Hideaki Shima,
Shigeharu Moriya, Masahira Hattori, Hiroaki Kodama, Hiroshi Ohno, Jun Kikuchi | Symbiotic bacterial network structure involved in carbon and nitrogen
metabolism of wood-utilizing insect larvae | null | null | 10.1016/J.SCITOTENV.2022.155520 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective biological utilization of wood biomass is necessary worldwide.
Since several insect larvae can use wood biomass as a nutrient source, studies
on their digestive mechanisms are expected to reveal novel principles for wood
biomass processing. Here, the relationships of inhabitant bacteria involved in
carbon and nitrogen metabolism in the intestine of beetle larvae, an insect
model, are investigated. Bacterial analysis of larval feces showed enrichment
of members of which could include candidates for plant growth promotion,
nitrogen cycle modulation, and/or environmental protection. The abundances of
these bacteria were not necessarily positively correlated with the abundance in
the habitat, suggesting that they might be selectively enriched in the
intestines of larvae. Further association analysis predicted that carbon and
nitrogen metabolism in the intestine was affected by the presence of the other
common bacteria, the populations of which were not remarkably altered in the
habitat and feces. Based on hypotheses targeting these selected bacterial
groups, structural estimation modeling analyses statistically suggested that
their metabolism of carbon and nitrogen and their stable isotopes, {\delta}13C
and {\delta}15N, may be associated with fecal enriched bacteria and other
common bacteria. In addition, other causal inference analyses, such as causal
mediation analysis, linear non-Gaussian acyclic model (LiNGAM), and
BayesLiNGAM, did not necessarily affirm the existence of prominent bacteria
involved in metabolism, implying that metabolism depends on bacterial groups
rather than on any single prominent bacterium. Thus, these observations
highlight a multifaceted view of symbiotic bacterial groups utilizing carbon
and nitrogen from wood biomass in insect larvae as a cultivator of potentially
environmentally beneficial bacteria.
| [
{
"created": "Mon, 24 Jan 2022 21:20:58 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Oct 2022 09:29:00 GMT",
"version": "v2"
}
] | 2022-10-18 | [
[
"Miyamoto",
"Hirokuni",
""
],
[
"Asano",
"Futo",
""
],
[
"Ishizawa",
"Koutarou",
""
],
[
"Suda",
"Wataru",
""
],
[
"Miyamoto",
"Hisashi",
""
],
[
"Tsuji",
"Naoko",
""
],
[
"Matsuura",
"Makiko",
""
],
[
... | Effective biological utilization of wood biomass is necessary worldwide. Since several insect larvae can use wood biomass as a nutrient source, studies of their digestive mechanisms are expected to suggest novel rules for wood biomass processing. Here, the relationships of inhabitant bacteria involved in carbon and nitrogen metabolism in the intestine of beetle larvae, an insect model, are investigated. Bacterial analysis of larval feces showed enrichment of members that could include candidates for plant growth promotion, nitrogen cycle modulation, and/or environmental protection. The abundances of these bacteria were not necessarily positively correlated with the abundance in the habitat, suggesting that they might be selectively enriched in the intestines of larvae. Further association analysis predicted that carbon and nitrogen metabolism in the intestine was affected by the presence of the other common bacteria, the populations of which were not remarkably altered in the habitat and feces. Based on hypotheses targeting these selected bacterial groups, structural estimation modeling analyses statistically suggested that their metabolism of carbon and nitrogen and their stable isotopes, {\delta}13C and {\delta}15N, may be associated with fecal enriched bacteria and other common bacteria. In addition, other causal inference analyses, such as causal mediation analysis, linear non-Gaussian acyclic model (LiNGAM), and BayesLiNGAM, did not necessarily affirm the existence of prominent bacteria involved in metabolism, implying that metabolism depends on bacterial groups rather than on any single prominent bacterium. Thus, these observations highlight a multifaceted view of symbiotic bacterial groups utilizing carbon and nitrogen from wood biomass in insect larvae as a cultivator of potentially environmentally beneficial bacteria. |
2402.03072 | Carlos A. Velazquez-Vargas | Carlos A. Velazquez-Vargas, Isaac Ray Christian, Jordan A. Taylor and
Sreejan Kumar | Learning to Abstract Visuomotor Mappings using Meta-Reinforcement
Learning | null | null | null | null | q-bio.NC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We investigated the human capacity to acquire multiple visuomotor mappings
for de novo skills. Using a grid navigation paradigm, we tested whether
contextual cues, implemented as different "grid worlds", allow participants to
learn two distinct key-mappings more efficiently. Our results indicate that
when contextual information is provided, task performance is significantly
better. The same held true for meta-reinforcement learning agents that differed
in whether or not they received contextual information when performing the task.
We evaluated their accuracy in predicting human performance in the task and
analyzed their internal representations. The results indicate that contextual
cues allow the formation of separate representations in space and time when
using different visuomotor mappings, whereas the absence of them favors sharing
one representation. While both strategies can allow learning of multiple
visuomotor mappings, we showed that contextual cues provide a computational
advantage in terms of how many mappings can be learned.
| [
{
"created": "Mon, 5 Feb 2024 15:02:35 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Velazquez-Vargas",
"Carlos A.",
""
],
[
"Christian",
"Isaac Ray",
""
],
[
"Taylor",
"Jordan A.",
""
],
[
"Kumar",
"Sreejan",
""
]
] | We investigated the human capacity to acquire multiple visuomotor mappings for de novo skills. Using a grid navigation paradigm, we tested whether contextual cues, implemented as different "grid worlds", allow participants to learn two distinct key-mappings more efficiently. Our results indicate that when contextual information is provided, task performance is significantly better. The same held true for meta-reinforcement learning agents that differed in whether or not they received contextual information when performing the task. We evaluated their accuracy in predicting human performance in the task and analyzed their internal representations. The results indicate that contextual cues allow the formation of separate representations in space and time when using different visuomotor mappings, whereas the absence of them favors sharing one representation. While both strategies can allow learning of multiple visuomotor mappings, we showed that contextual cues provide a computational advantage in terms of how many mappings can be learned. |
2203.15743 | Prabhakar Varuni | P. Varuni, Shakti N. Menon, and Gautam I. Menon | Phototactic cyanobacteria as an active matter system | 7 pages, 4 figures | null | 10.1007/s12648-022-02371-7 | null | q-bio.CB cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flocks of birds, schools of fish, mixtures of motors and cytoskeletal
filaments, swimming bacteria and driven granular media are systems of
interacting motile units that exhibit collective behaviour. These can all be
described as active matter systems, since each individual unit takes energy
from an internal energy depot and transduces it into work performed on the
environment. We review a model for cyanobacterial phototaxis, emphasising the
differences from other models for collective behaviour in active matter
systems. The interactions between individual cells during phototaxis are
dominated by mechanical forces mediated by their physical attachments through
type IV pili (T4P) and through the production of "slime", a complex mixture of
non-diffusible polysaccharides deposited by cells that acts to decrease
friction locally. The slime, in particular, adds a component to the interaction
that is local in space but non-local in time, perhaps most comparable to the
pheromones laid down in ant trails. Our results suggest that the time-delayed
component of the interactions between bacteria qualifies their description as a
novel active system, which we refer to as "damp" active matter.
| [
{
"created": "Tue, 29 Mar 2022 16:51:11 GMT",
"version": "v1"
}
] | 2022-06-08 | [
[
"Varuni",
"P.",
""
],
[
"Menon",
"Shakti N.",
""
],
[
"Menon",
"Gautam I.",
""
]
] | Flocks of birds, schools of fish, mixtures of motors and cytoskeletal filaments, swimming bacteria and driven granular media are systems of interacting motile units that exhibit collective behaviour. These can all be described as active matter systems, since each individual unit takes energy from an internal energy depot and transduces it into work performed on the environment. We review a model for cyanobacterial phototaxis, emphasising the differences from other models for collective behaviour in active matter systems. The interactions between individual cells during phototaxis are dominated by mechanical forces mediated by their physical attachments through type IV pili (T4P) and through the production of "slime", a complex mixture of non-diffusible polysaccharides deposited by cells that acts to decrease friction locally. The slime, in particular, adds a component to the interaction that is local in space but non-local in time, perhaps most comparable to the pheromones laid down in ant trails. Our results suggest that the time-delayed component of the interactions between bacteria qualifies their description as a novel active system, which we refer to as "damp" active matter. |
2202.12955 | Sergei Gepshtein | Sergei Gepshtein, Ambarish Pawar, Sunwoo Kwon, Sergey Savel'ev, Thomas
D. Albright | Spatially distributed computation in cortical circuits | 45 pages | Science Advances 8 (16), eabl5865 (2022) | 10.1126/sciadv.abl5865 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The traditional view of neural computation in the cerebral cortex holds that
sensory neurons are specialized, i.e., selective for certain dimensions of
sensory stimuli. This view was challenged by evidence of contextual
interactions between stimulus dimensions in which a neuron's response to one
dimension strongly depends on other dimensions. Here we use methods of
mathematical modeling, psychophysics, and electrophysiology to address
shortcomings of the traditional view. Using a model of a generic cortical
circuit, we begin with the simple demonstration that cortical responses are
always distributed among neurons, forming characteristic waveforms, which we
call neural waves. When stimulated by patterned stimuli, circuit responses
arise by interference of neural waves. Resulting patterns of interference
depend on interaction between stimulus dimensions. Comparison of these modeled
responses with responses of biological vision makes it clear that the framework
of neural wave interference provides a useful alternative to the standard
concept of neural computation.
| [
{
"created": "Fri, 25 Feb 2022 20:10:03 GMT",
"version": "v1"
}
] | 2022-04-26 | [
[
"Gepshtein",
"Sergei",
""
],
[
"Pawar",
"Ambarish",
""
],
[
"Kwon",
"Sunwoo",
""
],
[
"Savel'ev",
"Sergey",
""
],
[
"Albright",
"Thomas D.",
""
]
] | The traditional view of neural computation in the cerebral cortex holds that sensory neurons are specialized, i.e., selective for certain dimensions of sensory stimuli. This view was challenged by evidence of contextual interactions between stimulus dimensions in which a neuron's response to one dimension strongly depends on other dimensions. Here we use methods of mathematical modeling, psychophysics, and electrophysiology to address shortcomings of the traditional view. Using a model of a generic cortical circuit, we begin with the simple demonstration that cortical responses are always distributed among neurons, forming characteristic waveforms, which we call neural waves. When stimulated by patterned stimuli, circuit responses arise by interference of neural waves. Resulting patterns of interference depend on interaction between stimulus dimensions. Comparison of these modeled responses with responses of biological vision makes it clear that the framework of neural wave interference provides a useful alternative to the standard concept of neural computation. |
q-bio/0702032 | Antoine Danchin | Antoine Danchin (REG) | Bacteria are not Lamarckian | Work performed to show that the interpretation of Cairns experiments
on adaptive mutations was wrong: bacteria are not Lamarckian; the setup
provided shows that when submitted to some sort of starvation, individuals
within colonies can find unexpected ways out | null | null | null | q-bio.GN | null | Instructive influence of environment on heredity has been a debated topic for
centuries. Darwin's identification of natural selection coupled to chance
variation as the driving force for evolution, against a formal interpretation
proposed by Lamarck, convinced most scientists that environment does not
specifically instruct evolution in an oriented direction. This is true for
multicellular organisms. In contrast, bacteria were long thought of as prone to
receive oriented influences from their environment, although much was in favour
of the Darwinian route (1). In this context Cairns et al. raised a passionate
debate by suggesting that bacteria generate mutations oriented by the
environmental conditions (2). Several independent pieces of work subsequently
demonstrated that mutations overcoming specific defects arose as a consequence
of cultivation on specific media (3-7). Two diametrically opposed
interpretations were proposed to explain these observations: either induction
of mutations instructed by the environment (e.g. by a process involving a
putative reverse transcription) or selection of variants among a large set of
mutant bacteria generated when stress conditions are present. The experiments
presented below indicate that the Darwinian paradigm is the most plausible.
| [
{
"created": "Wed, 14 Feb 2007 13:20:31 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Danchin",
"Antoine",
"",
"REG"
]
] | Instructive influence of environment on heredity has been a debated topic for centuries. Darwin's identification of natural selection coupled to chance variation as the driving force for evolution, against a formal interpretation proposed by Lamarck, convinced most scientists that environment does not specifically instruct evolution in an oriented direction. This is true for multicellular organisms. In contrast, bacteria were long thought of as prone to receive oriented influences from their environment, although much was in favour of the Darwinian route (1). In this context Cairns et al. raised a passionate debate by suggesting that bacteria generate mutations oriented by the environmental conditions (2). Several independent pieces of work subsequently demonstrated that mutations overcoming specific defects arose as a consequence of cultivation on specific media (3-7). Two diametrically opposed interpretations were proposed to explain these observations: either induction of mutations instructed by the environment (e.g. by a process involving a putative reverse transcription) or selection of variants among a large set of mutant bacteria generated when stress conditions are present. The experiments presented below indicate that the Darwinian paradigm is the most plausible. |
2304.09238 | Jeremiah Doody Dr | J. Sean Doody, Gordon Burghardt and Vladimir Dinets | The Evolution of Sociality and the Polyvagal Theory | 15 pages, 1 figure | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The polyvagal theory (PT), offered by Porges (2021), proposes that the
autonomic nervous system (ANS) was repurposed in mammals, via a second vagal
nerve, to suppress defensive strategies and support the expression of
sociality. Three critical assumptions of this theory are that (1) the
transition of the ANS was associated with the evolution of social mammals from
asocial reptiles; (2) the transition enabled mammals, unlike their reptilian
ancestors, to derive a biological benefit from social interactions; and (3) the
transition forces a less parsimonious explanation (convergence) for the
evolution of social behavior in birds and mammals, since birds evolved from a
reptilian lineage. Two recently published reviews, however, provided compelling
evidence that the social asocial dichotomy is overly simplistic, neglects the
diversity of vertebrate social systems, impedes our understanding of the
evolution of social behavior, and perpetuates the erroneous belief that one
group, non-avian reptiles, is incapable of complex social behavior. In the
worst case, if PT depends upon a transition from asocial reptiles to social
mammals, then the ability of PT to explain the evolution of the mammalian ANS
is highly questionable. A great number of social behaviors occur in both
reptiles and mammals. In the best case, PT has misused the terms social and
asocial. Even here, however, the theory would still need to identify a
particular suite of behaviors found in mammals and not reptiles that could be
associated with, or explain, the transition of the ANS, and then replace the
asocial and social labels with more specific descriptors.
| [
{
"created": "Tue, 18 Apr 2023 18:55:01 GMT",
"version": "v1"
}
] | 2023-04-20 | [
[
"Doody",
"J. Sean",
""
],
[
"Burghardt",
"Gordon",
""
],
[
"Dinets",
"Vladimir",
""
]
] | The polyvagal theory (PT), offered by Porges (2021), proposes that the autonomic nervous system (ANS) was repurposed in mammals, via a second vagal nerve, to suppress defensive strategies and support the expression of sociality. Three critical assumptions of this theory are that (1) the transition of the ANS was associated with the evolution of social mammals from asocial reptiles; (2) the transition enabled mammals, unlike their reptilian ancestors, to derive a biological benefit from social interactions; and (3) the transition forces a less parsimonious explanation (convergence) for the evolution of social behavior in birds and mammals, since birds evolved from a reptilian lineage. Two recently published reviews, however, provided compelling evidence that the social asocial dichotomy is overly simplistic, neglects the diversity of vertebrate social systems, impedes our understanding of the evolution of social behavior, and perpetuates the erroneous belief that one group, non-avian reptiles, is incapable of complex social behavior. In the worst case, if PT depends upon a transition from asocial reptiles to social mammals, then the ability of PT to explain the evolution of the mammalian ANS is highly questionable. A great number of social behaviors occur in both reptiles and mammals. In the best case, PT has misused the terms social and asocial. Even here, however, the theory would still need to identify a particular suite of behaviors found in mammals and not reptiles that could be associated with, or explain, the transition of the ANS, and then replace the asocial and social labels with more specific descriptors. |
q-bio/0611017 | Mauro Copelli | Mauro Copelli | Physics of Psychophysics: it is critical to sense | 7 pages, 4 figures. Contribution to the 9th Granada Seminar in
Computational and Statistical Physics. Computational and Mathematical
Modelling of Cooperative Behavior in Neural Systems, (2006). University of
Granada, Spain. AIP Proceedings | AIP Conference Proceedings -- February 8, 2007 -- Volume 887, pp.
13-20, "Cooperative Behavior in Neural Systems: Ninth Granada Lectures",
edited by J. Marro, P. L. Garrido and J. J. Torres | 10.1063/1.2709581 | null | q-bio.NC cond-mat.dis-nn nlin.AO physics.bio-ph | null | It has been known for about a century that psychophysical response curves
(perception of a given physical stimulus vs. stimulus intensity) have a large
dynamic range: many decades of stimulus intensity can be appropriately
discriminated before saturation. This is in stark contrast with the response
curves of sensory neurons, whose dynamic range is small, usually covering only
about one decade. We claim that this paradox can be solved by means of a
collective phenomenon. By coupling excitable elements with small dynamic range,
the {\em collective} response function shows a much larger dynamic range, due
to the amplification mediated by excitable waves. Moreover, the dynamic range
is optimal at the phase transition where self-sustained activity becomes
stable, providing a clear example of a biologically relevant quantity being
optimized at criticality. We present a pedagogical account of these ideas,
which are illustrated with a simple mean field model.
| [
{
"created": "Mon, 6 Nov 2006 14:49:48 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Copelli",
"Mauro",
""
]
] | It has been known for about a century that psychophysical response curves (perception of a given physical stimulus vs. stimulus intensity) have a large dynamic range: many decades of stimulus intensity can be appropriately discriminated before saturation. This is in stark contrast with the response curves of sensory neurons, whose dynamic range is small, usually covering only about one decade. We claim that this paradox can be solved by means of a collective phenomenon. By coupling excitable elements with small dynamic range, the {\em collective} response function shows a much larger dynamic range, due to the amplification mediated by excitable waves. Moreover, the dynamic range is optimal at the phase transition where self-sustained activity becomes stable, providing a clear example of a biologically relevant quantity being optimized at criticality. We present a pedagogical account of these ideas, which are illustrated with a simple mean field model. |
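As an illustrative companion to the abstract above: a minimal two-state mean-field caricature of coupled excitable units, written in the spirit of the Kinouchi-Copelli model, shows how the dynamic range (in dB, between the stimuli producing 10% and 90% of the maximal response) is largest at the critical coupling. The specific update rule, the names `sigma` (branching-like coupling) and `h` (stimulus rate), and the grid choices are my own assumptions for this sketch, not the paper's exact equations.

```python
import math

def stationary_activity(h, sigma, steps=4000):
    # Mean-field map for the fraction p of active units: a quiescent unit
    # (fraction 1 - p) activates with prob. 1 - (1 - h)(1 - sigma * p),
    # i.e. via the external stimulus h or via coupling to active neighbours.
    p = 0.01
    for _ in range(steps):
        drive = 1.0 - (1.0 - h) * (1.0 - min(sigma * p, 1.0))
        p = (1.0 - p) * drive
    return p

def dynamic_range_db(sigma):
    # Response curve F(h) on a log-spaced stimulus grid, 1e-6 .. 1.
    hs = [10.0 ** (-6.0 + 0.1 * i) for i in range(61)]
    fs = [stationary_activity(h, sigma) for h in hs]
    fmax = fs[-1]

    def h_at(level):
        # Log-linear interpolation of the stimulus giving F = level * fmax.
        target = level * fmax
        for i in range(1, len(fs)):
            if fs[i] >= target:
                t = (target - fs[i - 1]) / (fs[i] - fs[i - 1])
                return 10.0 ** (math.log10(hs[i - 1])
                                + t * (math.log10(hs[i]) - math.log10(hs[i - 1])))
        return hs[-1]

    return 10.0 * math.log10(h_at(0.9) / h_at(0.1))
```

At the critical coupling sigma = 1 the weak-stimulus response grows like sqrt(h), stretching the usable stimulus range over many decades, whereas a subcritical sigma gives a linear, quickly saturating response and a markedly smaller dynamic range.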
1906.02757 | Francesco Cremonesi | Francesco Cremonesi and Felix Sch\"urmann | Telling neuronal apples from oranges: analytical performance modeling of
neural tissue simulations | 44 pages, 9 figures | null | null | null | q-bio.NC physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational modeling and simulation have become essential tools in the
quest to better understand the brain's makeup and to decipher the causal
interrelations of its components. The breadth of biochemical and biophysical
processes and structures in the brain has led to the development of a large
variety of model abstractions and specialized tools, often requiring
high-performance computing resources for their timely execution. What has been
missing so far was an in-depth analysis of the complexity of the computational
kernels, hindering a systematic approach to identifying bottlenecks of
algorithms and hardware, and their combinations. If whole brain models are to
be achieved on emerging computer generations, models and simulation engines
will have to be carefully co-designed for the intrinsic hardware tradeoffs. For
the first time, we present a systematic exploration based on analytic
performance modeling. We base our analysis on three in silico models, chosen as
representative examples of the most widely employed modeling abstractions. We
identify that the synaptic formalism, i.e. current or conductance based
representations, and not the level of morphological detail, is the most
significant factor in determining the properties of memory bandwidth saturation
and shared-memory scaling of in silico models. Even though general purpose
computing has, until now, largely been able to deliver high performance, we
find that for all types of abstractions, network latency and memory bandwidth
will become severe bottlenecks as the number of neurons to be simulated grows.
By adapting and extending a performance modeling approach, we deliver a first
characterization of the performance landscape of brain tissue simulations,
allowing us to pinpoint current bottlenecks in state-of-the-art in silico
models, and make projections for future hardware and software requirements.
| [
{
"created": "Thu, 6 Jun 2019 18:00:53 GMT",
"version": "v1"
}
] | 2019-06-10 | [
[
"Cremonesi",
"Francesco",
""
],
[
"Schürmann",
"Felix",
""
]
] | Computational modeling and simulation have become essential tools in the quest to better understand the brain's makeup and to decipher the causal interrelations of its components. The breadth of biochemical and biophysical processes and structures in the brain has led to the development of a large variety of model abstractions and specialized tools, often requiring high-performance computing resources for their timely execution. What has been missing so far was an in-depth analysis of the complexity of the computational kernels, hindering a systematic approach to identifying bottlenecks of algorithms and hardware, and their combinations. If whole brain models are to be achieved on emerging computer generations, models and simulation engines will have to be carefully co-designed for the intrinsic hardware tradeoffs. For the first time, we present a systematic exploration based on analytic performance modeling. We base our analysis on three in silico models, chosen as representative examples of the most widely employed modeling abstractions. We identify that the synaptic formalism, i.e. current or conductance based representations, and not the level of morphological detail, is the most significant factor in determining the properties of memory bandwidth saturation and shared-memory scaling of in silico models. Even though general purpose computing has, until now, largely been able to deliver high performance, we find that for all types of abstractions, network latency and memory bandwidth will become severe bottlenecks as the number of neurons to be simulated grows. By adapting and extending a performance modeling approach, we deliver a first characterization of the performance landscape of brain tissue simulations, allowing us to pinpoint current bottlenecks in state-of-the-art in silico models, and make projections for future hardware and software requirements. |
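To give a flavor of analytic performance modeling as described in the abstract above, here is a roofline-style toy model: a kernel's predicted time is the larger of its compute-bound and memory-bound estimates. The function names, the peak-throughput and bandwidth figures, and the per-synapse flop/byte counts used below are illustrative assumptions, not values from the paper.

```python
def kernel_time(flops, bytes_moved, peak_flops=1e12, mem_bw=1e11):
    # Predicted execution time (s) as the max of the compute bound
    # (flops / peak throughput) and the memory bound (bytes / bandwidth).
    return max(flops / peak_flops, bytes_moved / mem_bw)

def bottleneck(flops, bytes_moved, peak_flops=1e12, mem_bw=1e11):
    # Which bound dominates the predicted time.
    if bytes_moved / mem_bw > flops / peak_flops:
        return "memory"
    return "compute"
```

For example, a hypothetical current-based synapse update doing ~10 flops while moving ~40 bytes per synapse is memory-bound on these assumed hardware figures, which is the kind of saturation effect the abstract attributes to the synaptic formalism.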
2203.02011 | Americo Cunha Jr | Paulo Roberto de Lima Gianfelice, Ricardo Sovek Oyarzabal, Americo
Cunha Jr, Jose Mario Vicensi Grzybowski, Fernando da Concei\c{c}\~ao Batista,
Elbert E. N. Macau | The starting dates of COVID-19 multiple waves | null | Chaos 32, 031101 (2022) | 10.1063/5.0079904 | null | q-bio.PE math.DS stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spread globally very
quickly, causing great concern at the international level due to the severity
of the associated respiratory disease, the so-called COVID-19. Considering Rio
de Janeiro city (Brazil) as an example, the first diagnosis of this disease
occurred in March 2020, but the exact moment when the local spread of the virus
started is uncertain as the Brazilian epidemiological surveillance system was
not widely prepared to detect suspected cases of COVID-19 at that time.
Improvements in this surveillance system occurred over the pandemic, but due to
the complex nature of the disease transmission process, specifying the exact
moment of emergence of new community contagion outbreaks is a complicated task.
This work aims to propose a general methodology to determine possible start
dates for the multiple community outbreaks of COVID-19, using for this purpose
a parametric statistical approach that combines surveillance data, nonlinear
regression, and information criteria to obtain a statistical model capable of
describing the multiple waves of contagion observed. The dynamics of COVID-19
in the city of Rio de Janeiro is taken as a case study, and the results suggest
that the original strain of the virus was already circulating in Rio de Janeiro
city as early as late February 2020, probably being massively disseminated in
the population during the carnival festivities.
| [
{
"created": "Thu, 3 Mar 2022 20:49:02 GMT",
"version": "v1"
}
] | 2022-03-07 | [
[
"Gianfelice",
"Paulo Roberto de Lima",
""
],
[
"Oyarzabal",
"Ricardo Sovek",
""
],
[
"Cunha",
"Americo",
"Jr"
],
[
"Grzybowski",
"Jose Mario Vicensi",
""
],
[
"Batista",
"Fernando da Conceição",
""
],
[
"Macau",
"Elbert E. N."... | The severe acute respiratory syndrome of coronavirus 2 spread globally very quickly, causing great concern at the international level due to the severity of the associated respiratory disease, the so-called COVID-19. Considering Rio de Janeiro city (Brazil) as an example, the first diagnosis of this disease occurred in March 2020, but the exact moment when the local spread of the virus started is uncertain as the Brazilian epidemiological surveillance system was not widely prepared to detect suspected cases of COVID-19 at that time. Improvements in this surveillance system occurred over the pandemic, but due to the complex nature of the disease transmission process, specifying the exact moment of emergence of new community contagion outbreaks is a complicated task. This work aims to propose a general methodology to determine possible start dates for the multiple community outbreaks of COVID-19, using for this purpose a parametric statistical approach that combines surveillance data, nonlinear regression, and information criteria to obtain a statistical model capable of describing the multiple waves of contagion observed. The dynamics of COVID-19 in the city of Rio de Janeiro is taken as a case study, and the results suggest that the original strain of the virus was already circulating in Rio de Janeiro city as early as late February 2020, probably being massively disseminated in the population during the carnival festivities. |
1802.01055 | S. H. Andy Yun | Peng Shao, Amira M. Eltony, Theo G. Seiler, Behrouz Tavakol, Roberto
Pineda, Tobias Koller, Theo Seiler, Seok-Hyun Yun | Spatially-resolved Brillouin spectroscopy reveals biomechanical changes
in early ectatic corneal disease and post-crosslinking in vivo | 39 pages, 8 main figures, supplementary information | null | null | null | q-bio.QM physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mounting evidence connects the biomechanical properties of tissues to the
development of eye diseases such as keratoconus, a common disease in which the
cornea thins and bulges into a conical shape. However, measuring biomechanical
changes in vivo with sufficient sensitivity for disease detection has proved
challenging. Here, we present a first large-scale study (~200 subjects,
including normal and keratoconus patients) using Brillouin light-scattering
microscopy to measure longitudinal modulus in corneal tissues with high
sensitivity and spatial resolution. Our results in vivo provide evidence of
biomechanical inhomogeneity at the onset of keratoconus and suggest that
biomechanical asymmetry between the left and right eyes may presage disease
development. We additionally measure the stiffening effect of corneal
crosslinking treatment in vivo for the first time. Our results demonstrate the
promise of Brillouin microscopy for diagnosis and treatment of keratoconus, and
potentially other diseases.
| [
{
"created": "Sun, 4 Feb 2018 01:26:51 GMT",
"version": "v1"
}
] | 2018-02-06 | [
[
"Shao",
"Peng",
""
],
[
"Eltony",
"Amira M.",
""
],
[
"Seiler",
"Theo G.",
""
],
[
"Tavakol",
"Behrouz",
""
],
[
"Pineda",
"Roberto",
""
],
[
"Koller",
"Tobias",
""
],
[
"Seiler",
"Theo",
""
],
[
"Y... | Mounting evidence connects the biomechanical properties of tissues to the development of eye diseases such as keratoconus, a common disease in which the cornea thins and bulges into a conical shape. However, measuring biomechanical changes in vivo with sufficient sensitivity for disease detection has proved challenging. Here, we present a first large-scale study (~200 subjects, including normal and keratoconus patients) using Brillouin light-scattering microscopy to measure longitudinal modulus in corneal tissues with high sensitivity and spatial resolution. Our results in vivo provide evidence of biomechanical inhomogeneity at the onset of keratoconus and suggest that biomechanical asymmetry between the left and right eyes may presage disease development. We additionally measure the stiffening effect of corneal crosslinking treatment in vivo for the first time. Our results demonstrate the promise of Brillouin microscopy for diagnosis and treatment of keratoconus, and potentially other diseases. |
2303.04919 | Vaibhava Srivastava | Shangming Chen, Fengde Chen, Vaibhava Srivastava and Rana D. Parshad | Dynamical Analysis of a Lotka-Volterra Competition Model with both Allee
and Fear Effect | null | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | Population ecology theory is replete with density dependent processes.
However, trait-mediated or behavioral indirect interactions can either
reinforce or oppose density-dependent effects. This paper presents the first
two-species competitive ODE and PDE systems in which an Allee effect, which is
a density-dependent process, and the fear effect, which is non-consumptive and
behavioral, are both present. The stability of the equilibria is discussed analytically
using the qualitative theory of ordinary differential equations. It is found
that the Allee effect and the fear effect change the extinction dynamics of the
system and the number of positive equilibrium points, but they do not affect
the stability of the positive equilibria. We also observe some special dynamics
that induce bifurcations in the system by varying the Allee or fear parameter.
Interestingly, we find that the Allee effect, working in conjunction with the
fear effect, can bring about several qualitative changes to the dynamical
behavior of the system with only the fear effect in place, in regimes of small
fear. That is, for small amounts of the fear parameter, it can change a
competitive exclusion type situation to a strong competition type situation. It
can also change a weak competition type situation to a bi-stability type
situation. However, for large fear regimes the Allee effect reinforces the
dynamics driven by the fear effect. The analysis of the corresponding spatially
explicit model is also presented. To this end the comparison principle for
parabolic PDE is used. The conclusions of this paper have strong implications
for conservation biology, biological control as well as the preservation of
biodiversity.
| [
{
"created": "Wed, 8 Mar 2023 22:27:45 GMT",
"version": "v1"
}
] | 2023-03-10 | [
[
"Chen",
"Shangming",
""
],
[
"Chen",
"Fengde",
""
],
[
"Srivastava",
"Vaibhava",
""
],
[
"Parshad",
"Rana D.",
""
]
] | Population ecology theory is replete with density-dependent processes. However, trait-mediated or behavioral indirect interactions can either reinforce or oppose density-dependent effects. This paper presents the first two-species competitive ODE and PDE systems in which an Allee effect, which is a density-dependent process, and the fear effect, which is non-consumptive and behavioral, are both present. The stability of the equilibria is discussed analytically using the qualitative theory of ordinary differential equations. It is found that the Allee effect and the fear effect change the extinction dynamics of the system and the number of positive equilibrium points, but they do not affect the stability of the positive equilibria. We also observe some special dynamics that induce bifurcations in the system by varying the Allee or fear parameter. Interestingly, we find that the Allee effect, working in conjunction with the fear effect, can bring about several qualitative changes to the dynamical behavior of the system with only the fear effect in place, in regimes of small fear. That is, for small amounts of the fear parameter, it can change a competitive exclusion type situation to a strong competition type situation. It can also change a weak competition type situation to a bi-stability type situation. However, for large fear regimes the Allee effect reinforces the dynamics driven by the fear effect. The analysis of the corresponding spatially explicit model is also presented. To this end the comparison principle for parabolic PDE is used. The conclusions of this paper have strong implications for conservation biology, biological control, as well as the preservation of biodiversity. |
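A minimal numerical sketch of the kind of model the abstract above describes. The functional forms and parameters here are illustrative assumptions, not the authors' exact system: species u carries a multiplicative Allee factor u/(u + A) and a fear factor 1/(1 + k*v) on its growth rate, on top of standard Lotka-Volterra competition.

```python
def competition_step(u, v, dt, r1=1.0, r2=1.0, a12=0.5, a21=0.5, A=0.1, k=0.2):
    """One forward-Euler step of an assumed Allee + fear competition model.

    The Allee factor u/(u + A) and the fear factor 1/(1 + k*v) only rescale
    the growth rate of species u here, so the coexistence equilibrium of the
    underlying weak-competition system (u = v = 2/3 for a12 = a21 = 0.5)
    is left unchanged by these illustrative choices.
    """
    du = r1 * u * (u / (u + A)) * (1.0 / (1.0 + k * v)) * (1.0 - u - a12 * v)
    dv = r2 * v * (1.0 - v - a21 * u)
    return u + dt * du, v + dt * dv

def simulate(u0, v0, steps=30000, dt=0.01):
    u, v = u0, v0
    for _ in range(steps):
        u, v = competition_step(u, v, dt)
    return u, v

# Weak competition (a12, a21 < 1): trajectories approach coexistence.
u, v = simulate(0.5, 0.5)
```

Varying A and k in such a sketch is one way to probe numerically the regime changes (competitive exclusion vs. strong competition vs. bistability) that the paper analyzes rigorously.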
1207.1236 | Marta Casanellas | Ania Kedzierska and Marta Casanellas | Empar: EM-based algorithm for parameter estimation of Markov models on
trees | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of branch length estimation in phylogenetic inference is to estimate
the divergence time between a set of sequences based on compositional
differences between them. A number of software packages are currently available
that facilitate branch length estimation for homogeneous and stationary
evolutionary models. Homogeneity of the evolutionary process imposes fixed
rates of evolution throughout the tree. In complex data problems this
assumption is likely to put the results of the analyses in question.
In this work we propose an algorithm for parameter and branch length
inference in discrete-time Markov processes on trees. This broad class of
nonhomogeneous models comprises the general Markov model and all its submodels,
including both stationary and nonstationary models.
Here, we adapted the well-known Expectation-Maximization algorithm and
present a detailed performance study of this approach for a selection of
nonhomogeneous evolutionary models. We conducted an extensive performance
assessment on multiple sequence alignments simulated under a variety of
settings. We demonstrated high accuracy of the tool in parameter estimation and
branch length recovery, proving the method to be a valuable tool for
phylogenetic inference in real-life problems. Empar is an open-source C++
implementation of the methods introduced in this paper and is the first tool
designed to handle nonhomogeneous data.
| [
{
"created": "Thu, 5 Jul 2012 12:12:14 GMT",
"version": "v1"
}
] | 2012-07-06 | [
[
"Kedzierska",
"Ania",
""
],
[
"Casanellas",
"Marta",
""
]
] | The goal of branch length estimation in phylogenetic inference is to estimate the divergence time between a set of sequences based on compositional differences between them. A number of software packages are currently available that facilitate branch length estimation for homogeneous and stationary evolutionary models. Homogeneity of the evolutionary process imposes fixed rates of evolution throughout the tree. In complex data problems this assumption is likely to put the results of the analyses in question. In this work we propose an algorithm for parameter and branch length inference in discrete-time Markov processes on trees. This broad class of nonhomogeneous models comprises the general Markov model and all its submodels, including both stationary and nonstationary models. Here, we adapted the well-known Expectation-Maximization algorithm and present a detailed performance study of this approach for a selection of nonhomogeneous evolutionary models. We conducted an extensive performance assessment on multiple sequence alignments simulated under a variety of settings. We demonstrated high accuracy of the tool in parameter estimation and branch length recovery, proving the method to be a valuable tool for phylogenetic inference in real-life problems. Empar is an open-source C++ implementation of the methods introduced in this paper and is the first tool designed to handle nonhomogeneous data. |
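Empar's contribution is the EM iteration needed when internal (ancestral) states are hidden. As a toy illustration of the underlying idea only, the fully observed case is much simpler: under the general Markov model on a single edge with both endpoints observed, the maximum-likelihood transition matrix is just the row-normalized count matrix. Everything below (states, the matrix `true_P`, sample size) is an assumption for illustration.

```python
import random

def estimate_transition(pairs, states=(0, 1)):
    """ML estimate of a row-stochastic transition matrix from observed
    (parent_state, child_state) pairs: row-normalized transition counts."""
    counts = {a: {b: 0 for b in states} for a in states}
    for a, b in pairs:
        counts[a][b] += 1
    return {a: {b: counts[a][b] / max(1, sum(counts[a].values()))
                for b in states} for a in states}

# Simulate a two-state general Markov edge and recover its matrix.
random.seed(0)
true_P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
data = []
for _ in range(5000):
    a = random.choice((0, 1))
    b = 0 if random.random() < true_P[a][0] else 1
    data.append((a, b))
P_hat = estimate_transition(data)
```

When internal nodes are unobserved, the E-step of the EM algorithm replaces these hard counts with expected counts under the current parameter estimate; the M-step is then the same row normalization.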
2107.09696 | Katharina Huber | Katharina T. Huber, Vincent Moulton, Andreas Spillner | Phylogenetic consensus networks: Computing a consensus of 1-nested
phylogenetic networks | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An important and well-studied problem in phylogenetics is to compute a
\emph{consensus tree} so as to summarize the common features within a
collection of rooted phylogenetic trees, all of whose leaf-sets are bijectively
labeled by the same set~$X$ of species. More recently, however, it has become
of interest to find a consensus for a collection of more general, rooted
directed acyclic graphs all of whose sink-sets are bijectively labeled by~$X$,
so-called rooted \emph{phylogenetic networks}. These networks are used to
analyse the evolution of species that cross with one another, such as plants
and viruses. In this paper, we introduce an algorithm for computing a consensus
for a collection of so-called 1-\emph{nested} phylogenetic networks. Our
approach builds on a previous result by Rosell\'o et al. that describes an
encoding for any 1-nested phylogenetic network in terms of a collection of
ordered pairs of subsets of $X$. More specifically, we characterize those
collections of ordered pairs that arise as the encoding of some 1-nested
phylogenetic network, and then use this characterization to compute a
\emph{consensus network} for a collection of~$t$ 1-nested networks in
$O(t|X|^2+|X|^3)$ time. Applying our algorithm to a collection of phylogenetic
trees yields the well-known majority rule consensus tree. Our approach leads to
several new directions for future work, and we expect that it should provide a
useful new tool to help understand complex evolutionary scenarios.
| [
{
"created": "Tue, 20 Jul 2021 18:02:21 GMT",
"version": "v1"
}
] | 2021-07-22 | [
[
"Huber",
"Katharina T.",
""
],
[
"Moulton",
"Vincent",
""
],
[
"Spillner",
"Andreas",
""
]
] | An important and well-studied problem in phylogenetics is to compute a \emph{consensus tree} so as to summarize the common features within a collection of rooted phylogenetic trees, all of whose leaf-sets are bijectively labeled by the same set~$X$ of species. More recently, however, it has become of interest to find a consensus for a collection of more general, rooted directed acyclic graphs all of whose sink-sets are bijectively labeled by~$X$, so-called rooted \emph{phylogenetic networks}. These networks are used to analyse the evolution of species that cross with one another, such as plants and viruses. In this paper, we introduce an algorithm for computing a consensus for a collection of so-called 1-\emph{nested} phylogenetic networks. Our approach builds on a previous result by Rosell\'o et al. that describes an encoding for any 1-nested phylogenetic network in terms of a collection of ordered pairs of subsets of $X$. More specifically, we characterize those collections of ordered pairs that arise as the encoding of some 1-nested phylogenetic network, and then use this characterization to compute a \emph{consensus network} for a collection of~$t$ 1-nested networks in $O(t|X|^2+|X|^3)$ time. Applying our algorithm to a collection of phylogenetic trees yields the well-known majority rule consensus tree. Our approach leads to several new directions for future work, and we expect that it should provide a useful new tool to help understand complex evolutionary scenarios. |
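The special case the paper recovers on trees, the majority-rule consensus, has a short classical formulation when each rooted tree is encoded as its set of clusters (each cluster being the set of leaf labels below an internal node). This sketch is that standard construction, not the paper's 1-nested algorithm; the example trees are hypothetical.

```python
from collections import Counter

def majority_rule(trees):
    """Majority-rule consensus on cluster encodings: keep exactly the
    clusters that appear in strictly more than half of the input trees."""
    counts = Counter(c for t in trees for c in t)
    n = len(trees)
    return {c for c, k in counts.items() if k > n / 2}

# Three toy trees on leaf set {a, b, c}, as sets of clusters.
t1 = {frozenset("ab"), frozenset("abc")}
t2 = {frozenset("ab"), frozenset("abc")}
t3 = {frozenset("bc"), frozenset("abc")}
consensus = majority_rule([t1, t2, t3])
```

Majority clusters are pairwise compatible, so the retained set always corresponds to a tree; the paper's contribution is the analogous characterization and consensus construction for 1-nested networks.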
1509.03621 | Luca Mazzucato | Luca Mazzucato, Alfredo Fontanini, Giancarlo La Camera | Stimuli reduce the dimensionality of cortical activity | 30 pages, 8 figures; v2 in press, 9 figures, major improvements,
including comparison to shuffled datasets, analytical derivation of
estimation bias; v3, fixed typo in Fig. 8A | Front Syst Neurosci. 2016 Feb 17;10:11 | 10.3389/fnsys.2016.00011 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The activity of ensembles of simultaneously recorded neurons can be
represented as a set of points in the space of firing rates. Even though the
dimension of this space is equal to the ensemble size, neural activity can be
effectively localized on smaller subspaces. The dimensionality of the neural
space is an important determinant of the computational tasks supported by the
neural activity. Here, we investigate the dimensionality of neural ensembles
from the sensory cortex of alert rats during period of ongoing (inter-trial)
and stimulus-evoked activity. We find that dimensionality grows linearly with
ensemble size, and grows significantly faster during ongoing activity compared
to evoked activity. We explain these results using a spiking network model
based on a clustered architecture. The model captures the difference in growth
rate between ongoing and evoked activity and predicts a characteristic scaling
with ensemble size that could be tested in high-density multi-electrode
recordings. Moreover, the model predicts the existence of an upper bound on
dimensionality. This upper bound is inversely proportional to the amount of
pair-wise correlations and, compared to a homogeneous network without clusters,
it is larger by a factor equal to the number of clusters. The empirical
estimation of such bounds depends on the number and duration of trials.
Together, these results provide a framework to analyze neural dimensionality in
alert animals, its behavior under stimulus presentation, and its theoretical
dependence on ensemble size, number of clusters, and pair-wise correlations in
spiking network models.
| [
{
"created": "Fri, 11 Sep 2015 19:36:39 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Feb 2016 19:15:49 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Mar 2016 14:25:55 GMT",
"version": "v3"
}
] | 2016-03-17 | [
[
"Mazzucato",
"Luca",
""
],
[
"Fontanini",
"Alfredo",
""
],
[
"La Camera",
"Giancarlo",
""
]
] | The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, the model predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and pair-wise correlations in spiking network models. |
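A common covariance-based dimensionality measure for such analyses is the participation ratio PR = (tr C)^2 / tr(C^2) of the firing-rate covariance matrix C (the paper uses a related covariance-based estimator; its exact definition and bias corrections may differ). Using tr(C^2) avoids an explicit eigendecomposition:

```python
def participation_ratio(C):
    """(tr C)^2 / tr(C^2) for a symmetric covariance matrix C, given as a
    list of lists. Equals the ensemble size for uncorrelated equal-variance
    units and approaches 1 as activity collapses onto one dimension."""
    n = len(C)
    tr = sum(C[i][i] for i in range(n))
    tr_sq = sum(C[i][j] * C[j][i] for i in range(n) for j in range(n))
    return tr * tr / tr_sq

C_uncorr = [[1.0, 0.0], [0.0, 1.0]]   # two independent units -> PR = 2
C_corr = [[1.0, 1.0], [1.0, 1.0]]     # perfectly correlated units -> PR = 1
pr_uncorr = participation_ratio(C_uncorr)
pr_corr = participation_ratio(C_corr)
```

The inverse relation between pair-wise correlations and the dimensionality bound discussed in the abstract is visible already in this two-unit example.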
1309.4692 | Liane Gabora | Liane Gabora | An Analysis of the 'Blind Variation and Selective Retention' Theory of
Creativity | null | Creativity Research Journal, 23(2), 155-165 (2011) | 10.1080/10400419.2011.571187 | null | q-bio.NC q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Picasso's Guernica sketches continue to provide a fruitful testing ground for
examining and assessing the Blind Variation Selective Retention (BVSR) theory
of creativity. Nonmonotonicity--e.g. as indicated by a lack of similarity of
successive sketches--is not evidence of a selectionist process; Darwin's theory
explains adaptive change, not nonmonotonicity. Although the notion of blindness
originally implied randomness, it now encompasses phenomena that bias idea
generation, e.g. the influence of remote associations on sketch ideas. However,
if a selectionist framework is to be applicable, such biases must be
negligible, otherwise evolutionary change is attributed to those biases, not to
selection. The notion of 'variants' should not be applied to creativity;
without a mechanism of inheritance, there is no basis upon which to delineate,
for example, which sketch ideas are or are not variants of a given sketch idea.
The notion of selective retention is also problematic. Selection provides an
explanation when acquired change is not transmitted; it cannot apply to
Picasso's painting (or other creative acts) because his ideas acquired
modifications as he thought them through that were incorporated into paintings
and viewed by others. The generation of one sketch affects the criteria by
which the next is judged, so sequentially generated sketches cannot be treated
as members of a generation, and selected amongst. Although BVSR is
inappropriate as a theoretical framework for creativity, exploring to what
extent selectionism explains the generation of not just biological form but
masterpieces such as Picasso's Guernica is useful for gaining insight into
creativity.
| [
{
"created": "Wed, 18 Sep 2013 16:16:35 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jun 2019 02:25:18 GMT",
"version": "v2"
}
] | 2019-07-02 | [
[
"Gabora",
"Liane",
""
]
] | Picasso's Guernica sketches continue to provide a fruitful testing ground for examining and assessing the Blind Variation Selective Retention (BVSR) theory of creativity. Nonmonotonicity--e.g. as indicated by a lack of similarity of successive sketches--is not evidence of a selectionist process; Darwin's theory explains adaptive change, not nonmonotonicity. Although the notion of blindness originally implied randomness, it now encompasses phenomena that bias idea generation, e.g. the influence of remote associations on sketch ideas. However, if a selectionist framework is to be applicable, such biases must be negligible, otherwise evolutionary change is attributed to those biases, not to selection. The notion of 'variants' should not be applied to creativity; without a mechanism of inheritance, there is no basis upon which to delineate, for example, which sketch ideas are or are not variants of a given sketch idea. The notion of selective retention is also problematic. Selection provides an explanation when acquired change is not transmitted; it cannot apply to Picasso's painting (or other creative acts) because his ideas acquired modifications as he thought them through that were incorporated into paintings and viewed by others. The generation of one sketch affects the criteria by which the next is judged, so sequentially generated sketches cannot be treated as members of a generation, and selected amongst. Although BVSR is inappropriate as a theoretical framework for creativity, exploring to what extent selectionism explains the generation of not just biological form but masterpieces such as Picasso's Guernica is useful for gaining insight into creativity. |
1303.6700 | Tatiana Tatarinova | Tatiana Tatarinova, Michael Neely, Jay Bartroff, Michael van Guilder,
Walter Yamada, David Bayard, Roger Jelliffe, Robert Leary, Alyona Chubatiuk
and Alan Schumitzky | Two General Methods for Population Pharmacokinetic Modeling:
Non-Parametric Adaptive Grid and Non-Parametric Bayesian | null | Tatarinova et al, Journal of Pharmacokinetics and
Pharmacodynamics, 2013, vol. 40 no 1 | 10.1007/s10928-013-9302-8 | null | q-bio.QM q-bio.GN stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Population pharmacokinetic (PK) modeling methods can be statistically
classified as either parametric or nonparametric (NP). Each classification can
be divided into maximum likelihood (ML) or Bayesian (B) approaches. In this
paper we discuss the nonparametric case using both maximum likelihood and
Bayesian approaches. We present two nonparametric methods for estimating the
unknown joint population distribution of model parameter values in a
pharmacokinetic/pharmacodynamic (PK/PD) dataset. The first method is the NP
Adaptive Grid (NPAG). The second is the NP Bayesian (NPB) algorithm with a
stick-breaking process to construct a Dirichlet prior. Our objective is to
compare the performance of these two methods using a simulated PK/PD dataset.
Our results showed excellent performance of NPAG and NPB in a realistically
simulated PK study. This simulation allowed us to have benchmarks in the form
of the true population parameters to compare with the estimates produced by the
two methods, while incorporating challenges like unbalanced sample times and
sample numbers as well as the ability to include the covariate of patient
weight. We conclude that both NPML and NPB can be used in realistic PK/PD
population analysis problems. The advantages of one versus the other are
discussed in the paper. NPAG and NPB are implemented in R and freely available
for download within the Pmetrics package from www.lapk.org.
| [
{
"created": "Tue, 26 Mar 2013 23:04:41 GMT",
"version": "v1"
}
] | 2013-03-29 | [
[
"Tatarinova",
"Tatiana",
""
],
[
"Neely",
"Michael",
""
],
[
"Bartroff",
"Jay",
""
],
[
"van Guilder",
"Michael",
""
],
[
"Yamada",
"Walter",
""
],
[
"Bayard",
"David",
""
],
[
"Jelliffe",
"Roger",
""
],
... | Population pharmacokinetic (PK) modeling methods can be statistically classified as either parametric or nonparametric (NP). Each classification can be divided into maximum likelihood (ML) or Bayesian (B) approaches. In this paper we discuss the nonparametric case using both maximum likelihood and Bayesian approaches. We present two nonparametric methods for estimating the unknown joint population distribution of model parameter values in a pharmacokinetic/pharmacodynamic (PK/PD) dataset. The first method is the NP Adaptive Grid (NPAG). The second is the NP Bayesian (NPB) algorithm with a stick-breaking process to construct a Dirichlet prior. Our objective is to compare the performance of these two methods using a simulated PK/PD dataset. Our results showed excellent performance of NPAG and NPB in a realistically simulated PK study. This simulation allowed us to have benchmarks in the form of the true population parameters to compare with the estimates produced by the two methods, while incorporating challenges like unbalanced sample times and sample numbers as well as the ability to include the covariate of patient weight. We conclude that both NPML and NPB can be used in realistic PK/PD population analysis problems. The advantages of one versus the other are discussed in the paper. NPAG and NPB are implemented in R and freely available for download within the Pmetrics package from www.lapk.org. |
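The stick-breaking construction mentioned in the abstract is the standard way to build a Dirichlet process prior: break a unit-length stick into successive Beta(1, alpha)-distributed fractions. A minimal sketch, with an illustrative concentration `alpha` and truncation level rather than the paper's settings:

```python
import random

def stick_breaking(alpha, n_sticks, rng):
    """Truncated stick-breaking weights for a Dirichlet process prior.

    Each fraction v ~ Beta(1, alpha), drawn here by inverse CDF
    (v = 1 - U**(1/alpha) for U uniform on [0, 1)).
    """
    weights, remaining = [], 1.0
    for _ in range(n_sticks):
        v = 1.0 - rng.random() ** (1.0 / alpha)
        weights.append(remaining * v)   # piece broken off the stick
        remaining *= 1.0 - v            # stick left for later pieces
    return weights

rng = random.Random(0)
w = stick_breaking(alpha=2.0, n_sticks=50, rng=rng)
```

In an NPB-style model each weight is paired with a support point drawn from a base distribution over PK parameter values; the truncation level controls how much of the stick remains unassigned.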
1904.00445 | Nicholas Heller | Nicholas Heller, Niranjan Sathianathen, Arveen Kalapara, Edward
Walczak, Keenan Moore, Heather Kaluzniak, Joel Rosenberg, Paul Blake, Zachary
Rengel, Makinna Oestreich, Joshua Dean, Michael Tradewell, Aneri Shah, Resha
Tejpaul, Zachary Edgerton, Matthew Peterson, Shaneabbas Raza, Subodh Regmi,
Nikolaos Papanikolopoulos, and Christopher Weight | The KiTS19 Challenge Data: 300 Kidney Tumor Cases with Clinical Context,
CT Semantic Segmentations, and Surgical Outcomes | 13 pages, 2 figures | null | null | null | q-bio.QM cs.LG stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | The morphometry of a kidney tumor revealed by contrast-enhanced Computed
Tomography (CT) imaging is an important factor in clinical decision making
surrounding the lesion's diagnosis and treatment. Quantitative study of the
relationship between kidney tumor morphology and clinical outcomes is difficult
due to data scarcity and the laborious nature of manually quantifying imaging
predictors. Automatic semantic segmentation of kidneys and kidney tumors is a
promising tool towards automatically quantifying a wide array of morphometric
features, but no sizeable annotated dataset is currently available to train
models for this task. We present the KiTS19 challenge dataset: A collection of
multi-phase CT imaging, segmentation masks, and comprehensive clinical outcomes
for 300 patients who underwent nephrectomy for kidney tumors at our center
between 2010 and 2018. 210 (70%) of these patients were selected at random as
the training set for the 2019 MICCAI KiTS Kidney Tumor Segmentation Challenge
and have been released publicly. With the presence of clinical context and
surgical outcomes, this data can serve not only for benchmarking semantic
segmentation models, but also for developing and studying biomarkers which make
use of the imaging and semantic segmentation masks.
| [
{
"created": "Sun, 31 Mar 2019 16:56:10 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Mar 2020 14:06:45 GMT",
"version": "v2"
}
] | 2020-03-17 | [
[
"Heller",
"Nicholas",
""
],
[
"Sathianathen",
"Niranjan",
""
],
[
"Kalapara",
"Arveen",
""
],
[
"Walczak",
"Edward",
""
],
[
"Moore",
"Keenan",
""
],
[
"Kaluzniak",
"Heather",
""
],
[
"Rosenberg",
"Joel",
"... | The morphometry of a kidney tumor revealed by contrast-enhanced Computed Tomography (CT) imaging is an important factor in clinical decision making surrounding the lesion's diagnosis and treatment. Quantitative study of the relationship between kidney tumor morphology and clinical outcomes is difficult due to data scarcity and the laborious nature of manually quantifying imaging predictors. Automatic semantic segmentation of kidneys and kidney tumors is a promising tool towards automatically quantifying a wide array of morphometric features, but no sizeable annotated dataset is currently available to train models for this task. We present the KiTS19 challenge dataset: A collection of multi-phase CT imaging, segmentation masks, and comprehensive clinical outcomes for 300 patients who underwent nephrectomy for kidney tumors at our center between 2010 and 2018. 210 (70%) of these patients were selected at random as the training set for the 2019 MICCAI KiTS Kidney Tumor Segmentation Challenge and have been released publicly. With the presence of clinical context and surgical outcomes, this data can serve not only for benchmarking semantic segmentation models, but also for developing and studying biomarkers which make use of the imaging and semantic segmentation masks. |
2008.04940 | Corey Weistuch | Corey Weistuch, Lilianne R. Mujica-Parodi, and Ken Dill | The refractory period matters: unifying mechanisms of macroscopic brain
waves | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | The relationship between complex brain oscillations and the dynamics of
individual neurons is poorly understood. Here we utilize Maximum Caliber, a
dynamical inference principle, to build a minimal, yet general model of the
collective (mean-field) dynamics of large populations of neurons. In agreement
with previous experimental observations, we describe a simple, testable
mechanism, involving only a single type of neuron, by which many of these
complex oscillatory patterns may emerge. Our model predicts that the refractory
period of neurons, which has been previously neglected, is essential for these
behaviors.
| [
{
"created": "Tue, 11 Aug 2020 18:10:43 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Sep 2020 16:31:43 GMT",
"version": "v2"
}
] | 2020-09-04 | [
[
"Weistuch",
"Corey",
""
],
[
"Mujica-Parodi",
"Lilianne R.",
""
],
[
"Dill",
"Ken",
""
]
] | The relationship between complex brain oscillations and the dynamics of individual neurons is poorly understood. Here we utilize Maximum Caliber, a dynamical inference principle, to build a minimal, yet general model of the collective (mean-field) dynamics of large populations of neurons. In agreement with previous experimental observations, we describe a simple, testable mechanism, involving only a single type of neuron, by which many of these complex oscillatory patterns may emerge. Our model predicts that the refractory period of neurons, which has been previously neglected, is essential for these behaviors. |
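The role of the refractory period can be illustrated with a generic three-state mean-field model: a fraction of the population is quiescent (Q), active (A), or refractory (R). The rate functions and parameters below are illustrative assumptions, not the paper's Maximum Caliber model.

```python
import math

def step(q, a, r, dt=0.01, gain=8.0, bias=-2.0, decay=2.0, recover=1.5):
    """One Euler step of an assumed Q -> A -> R -> Q mean-field cycle.

    Quiescent units activate at a sigmoidal rate driven by the active
    fraction; active units become refractory; refractory units recover.
    The flows cancel in pairs, so q + a + r is conserved exactly.
    """
    f = 1.0 / (1.0 + math.exp(-(gain * a + bias)))  # activation rate of Q
    dq = recover * r - f * q
    da = f * q - decay * a
    dr = decay * a - recover * r
    return q + dt * dq, a + dt * da, r + dt * dr

q, a, r = 0.9, 0.1, 0.0
for _ in range(1000):
    q, a, r = step(q, a, r)
```

Without the R pool (instant recovery) the same equations reduce to a two-state system whose scope for population-level oscillation is much more limited, which is the qualitative point the abstract makes.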
0912.0157 | Mauro Mobilia | Mauro Mobilia and Michael Assaf | Fixation in Evolutionary Games under Non-Vanishing Selection | 4 figures, to appear in EPL (Europhysics Letters) | EPL Vol. 91, 10002 (2010) | 10.1209/0295-5075/91/10002 | null | q-bio.PE cond-mat.stat-mech nlin.AO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most striking effects of fluctuations in evolutionary game theory
is the possibility for mutants to fixate (take over) an entire population.
Here, we generalize a recent WKB-based theory to study fixation in evolutionary
games under non-vanishing selection, and investigate the relation between
selection intensity w and demographic (random) fluctuations. This allows the
accurate treatment of large fluctuations and yields the probability and mean
times of fixation beyond the weak selection limit. The power of the theory is
demonstrated on prototypical models of cooperation dilemmas with multiple
absorbing states. Our predictions compare excellently with numerical
simulations and, for finite w, significantly improve over those of the
Fokker-Planck approximation.
| [
{
"created": "Tue, 1 Dec 2009 13:55:05 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Feb 2010 18:47:15 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Jun 2010 19:16:29 GMT",
"version": "v3"
}
] | 2010-08-27 | [
[
"Mobilia",
"Mauro",
""
],
[
"Assaf",
"Michael",
""
]
] | One of the most striking effects of fluctuations in evolutionary game theory is the possibility for mutants to fixate (take over) an entire population. Here, we generalize a recent WKB-based theory to study fixation in evolutionary games under non-vanishing selection, and investigate the relation between selection intensity w and demographic (random) fluctuations. This allows the accurate treatment of large fluctuations and yields the probability and mean times of fixation beyond the weak selection limit. The power of the theory is demonstrated on prototypical models of cooperation dilemmas with multiple absorbing states. Our predictions compare excellently with numerical simulations and, for finite w, significantly improve over those of the Fokker-Planck approximation. |
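Fixation probabilities of the kind this theory predicts are easy to estimate by direct simulation. A sketch of the simplest benchmark only: a neutral Moran-type birth-death process of size N, where a single mutant fixates with the known probability 1/N. (The paper's WKB results concern frequency-dependent, non-neutral games; those payoff structures are not modeled here.)

```python
import random

def moran_fixation(N, trials, rng):
    """Monte Carlo fixation probability of one neutral mutant among N.

    Per step, the mutant count gains one with probability i/N (mutant birth)
    and loses one with probability i/N (mutant death), an unbiased lazy walk
    whose absorption at N occurs with probability i/N -- here 1/N.
    """
    fixed = 0
    for _ in range(trials):
        i = 1  # start from a single mutant
        while 0 < i < N:
            i += (1 if rng.random() < i / N else 0) \
                 - (1 if rng.random() < i / N else 0)
        fixed += i == N
    return fixed / trials

rng = random.Random(1)
p = moran_fixation(N=10, trials=10000, rng=rng)  # neutral theory: 1/N = 0.1
```

Adding frequency-dependent birth rates to such a simulation is how the numerical comparisons mentioned in the abstract are typically produced.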
2104.01829 | Nelson Duran | Nelson Duran, Joao C.C. Alonso, Wagner J. Favaro | Deprenyl, an old drug with new anticancer potential: Mini review | 10 pages 1 figure | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The anticancer potential of monoamine oxidase (MAO) was observed in
pre-clinical assays conducted with cell cultures and animals. L-Deprenyl (DEP)
causes apoptosis in melanoma, leukemia and mammary cells. High-dose DEP has
shown toxicity in mammary and pituitary cancers, as well as in monoblastic
leukemia, in rats. DEP has an immune-stimulant effect capable of
increasing natural killer cell activity, IL-2 generation, as well as of
inhibiting tumor growth. DEP administration in old female rats has increased
IL-2 generation and inverted the age-related depletion of IFN-{\gamma}
generation in the spleen. Co-adjuvant DEP administration helped
prevent/mitigate symptoms associated with peripheral neuropathy in cancer
treatment. It also enhanced the cytotoxic effects of antineoplastic drugs -
such as doxorubicin, cisplatin, among others - in cancer cells while they
protected healthy cells from being damaged. DEP showed efficacy against
dysfunctions such as debilitating hormone imbalance triggered by pituitary
gland tumor; this gland produces the stimulatory hormone of adrenocorticotropic
hormone which was related to the exacerbation of this disease. Thus, DEP
emerges as an excellent potential drug against several cancer types and it also
presents low toxicity in Parkinson's disease patients subjected to long-term
treatment with it.
| [
{
"created": "Mon, 5 Apr 2021 09:50:51 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Duran",
"Nelson",
""
],
[
"Alonso",
"Joao C. C.",
""
],
[
"Favaro",
"Wagner J.",
""
]
] | The anticancer potential of monoamine oxidase (MAO) was observed in pre-clinical assays conducted with cell cultures and animals. L-Deprenyl (DEP) causes apoptosis in melanoma, leukemia and mammary cells. High-dose DEP has shown toxicity in mammary and pituitary cancers, as well as in monoblastic leukemia, in rats. DEP has an immune-stimulant effect capable of increasing natural killer cell activity, IL-2 generation, as well as of inhibiting tumor growth. DEP administration in old female rats has increased IL-2 generation and inverted the age-related depletion of IFN-{\gamma} generation in the spleen. Co-adjuvant DEP administration helped prevent/mitigate symptoms associated with peripheral neuropathy in cancer treatment. It also enhanced the cytotoxic effects of antineoplastic drugs - such as doxorubicin, cisplatin, among others - in cancer cells while they protected healthy cells from being damaged. DEP showed efficacy against dysfunctions such as debilitating hormone imbalance triggered by pituitary gland tumor; this gland produces the stimulatory hormone of adrenocorticotropic hormone which was related to the exacerbation of this disease. Thus, DEP emerges as an excellent potential drug against several cancer types and it also presents low toxicity in Parkinson's disease patients subjected to long-term treatment with it. |
1003.0104 | Konstantin Klemm | Gunnar Boldhaus, Nils Bertschinger, Johannes Rauh, Eckehard Olbrich,
and Konstantin Klemm | Knockouts, Robustness and Cell Cycles | 11 pages, 3 figures, 3 tables | null | 10.1103/PhysRevE.82.021916 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The response to a knockout of a node is a characteristic feature of a
networked dynamical system. Knockout resilience in the dynamics of the
remaining nodes is a sign of robustness. Here we study the effect of knockouts
for binary state sequences and their implementations in terms of Boolean
threshold networks. Besides random sequences with biologically plausible
constraints, we analyze the cell cycle sequence of the species Saccharomyces
cerevisiae and the Boolean networks implementing it. Comparing with an
appropriate null model we do not find evidence that the yeast wildtype network
is optimized for high knockout resilience. Our notion of knockout resilience
weakly correlates with the size of the basin of attraction, which has also been
considered a measure of robustness.
| [
{
"created": "Sat, 27 Feb 2010 15:23:50 GMT",
"version": "v1"
}
] | 2013-05-29 | [
[
"Boldhaus",
"Gunnar",
""
],
[
"Bertschinger",
"Nils",
""
],
[
"Rauh",
"Johannes",
""
],
[
"Olbrich",
"Eckehard",
""
],
[
"Klemm",
"Konstantin",
""
]
] | The response to a knockout of a node is a characteristic feature of a networked dynamical system. Knockout resilience in the dynamics of the remaining nodes is a sign of robustness. Here we study the effect of knockouts for binary state sequences and their implementations in terms of Boolean threshold networks. Beside random sequences with biologically plausible constraints, we analyze the cell cycle sequence of the species Saccharomyces cerevisiae and the Boolean networks implementing it. Comparing with an appropriate null model we do not find evidence that the yeast wildtype network is optimized for high knockout resilience. Our notion of knockout resilience weakly correlates with the size of the basin of attraction, which has also been considered a measure of robustness. |
0910.2783 | Simon Childs | S. J. Childs | The Finite Element Implementation of a K.P.P. Equation for the
Simulation of Tsetse Control Measures in the Vicinity of a Game Reserve | 31 pages, 14 figures, 4 tables | Mathematical Biosciences, 227: 29--43, 2010 | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An equation, strongly reminiscent of Fisher's equation, is used to model the
response of tsetse populations to proposed control measures in the vicinity of
a game reserve. The model assumes movement is by diffusion and that growth is
logistic. This logistic growth is dependent on an historical population, in
contrast to Fisher's equation which bases it on the present population. The
model therefore takes into account the fact that new additions to the adult fly
population are, in actual fact, the descendants of a population which existed
one puparial duration ago and, furthermore, that this puparial duration is
temperature dependent. Artificially imposed mortality is modelled as a
proportion at a constant rate. Fisher's equation is also solved as a formality.
The temporary imposition of a 2 % $\mathrm{day}^{-1}$ mortality everywhere
outside the reserve for a period of 2 years will have no lasting effect on the
influence of the reserve on either the Glossina austeni or the G. brevipalpis
populations, although it certainly will eradicate tsetse from poor habitat,
outside the reserve. A 5 $\mathrm{km}$-wide barrier with a minimum mortality of
4 % $\mathrm{day}^{-1}$, throughout, will succeed in isolating a worst-case, G.
austeni population and its associated trypanosomiasis from the surrounding
areas. A more optimistic estimate of its mobility suggests a mortality of 2 %
$\mathrm{day}^{-1}$ will suffice. For a given target-related mortality, more
mobile species are found to be more vulnerable to eradication than more
sedentary species, while the opposite is true for containment.
| [
{
"created": "Thu, 15 Oct 2009 17:32:27 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Oct 2009 14:28:37 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Aug 2014 09:12:06 GMT",
"version": "v3"
},
{
"created": "Mon, 25 May 2015 12:32:34 GMT",
"version": "v4"
}
] | 2015-05-26 | [
[
"Childs",
"S. J.",
""
]
] | An equation, strongly reminiscent of Fisher's equation, is used to model the response of tsetse populations to proposed control measures in the vicinity of a game reserve. The model assumes movement is by diffusion and that growth is logistic. This logistic growth is dependent on an historical population, in contrast to Fisher's equation which bases it on the present population. The model therefore takes into account the fact that new additions to the adult fly population are, in actual fact, the descendants of a population which existed one puparial duration ago and, furthermore, that this puparial duration is temperature dependent. Artificially imposed mortality is modelled as a proportion at a constant rate. Fisher's equation is also solved as a formality. The temporary imposition of a 2 % $\mathrm{day}^{-1}$ mortality everywhere outside the reserve for a period of 2 years will have no lasting effect on the influence of the reserve on either the Glossina austeni or the G. brevipalpis populations, although it certainly will eradicate tsetse from poor habitat, outside the reserve. A 5 $\mathrm{km}$-wide barrier with a minimum mortality of 4 % $\mathrm{day}^{-1}$, throughout, will succeed in isolating a worst-case, G. austeni population and its associated trypanosomiasis from the surrounding areas. A more optimistic estimate of its mobility suggests a mortality of 2 % $\mathrm{day}^{-1}$ will suffice. For a given target-related mortality, more mobile species are found to be more vulnerable to eradication than more sedentary species, while the opposite is true for containment.
2103.13919 | Bertrand Roehner | Eduardo M. Garcia-Roger, Peter Richmond, Bertrand M. Roehner | Is there an infant mortality in bacteria? | 16 p., 5 figures | null | null | null | q-bio.CB physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | This manuscript proposes a significant step in our long-run investigation of
infant mortality across species. Since 2016 (Berrut et al. 2016) a succession
of studies (Bois et al. 2019) has traced infant mortality from organisms of
high complexity (e.g. mammals) down to unicellular organisms. Infant mortality
may be considered as a filtering process through which organisms with
potentially lethal congenital defects are eliminated. Such defects may have
many causes but here we focus particularly on mishaps resulting from
non-optimal conditions in the production of proteins, enzymes and other crucial
macromolecules. The statistical signature of infant mortality consists in a
falling age-specific death rate. The question we address here is whether infant
mortality episodes take place in bacteria in the minutes preceding or
following cell division. It will be shown that while experiments carried out in
the 20th century tried but failed to detect such an effect (mostly because of
limited sample size), more recent observations provided consistent evidence of
a sizeable mortality, with a rate of the order of 0.7 per 1,000 per hour, in
the exponential growth phase of E. coli. A further crucial test will be to
measure the age-specific, post-division death rate. An experiment is outlined
for that purpose. It is based on the selection of stained cells through flow
cytometry and the derivation of their ages at death from their sizes. If an
infant mortality effect can be identified in E. coli it can be conjectured that
a similar effect also exists in other unicellular organisms, both prokaryote
and eukaryote.
| [
{
"created": "Thu, 18 Mar 2021 22:39:43 GMT",
"version": "v1"
}
] | 2021-03-26 | [
[
"Garcia-Roger",
"Eduardo M.",
""
],
[
"Richmond",
"Peter",
""
],
[
"Roehner",
"Bertrand M.",
""
]
] | This manuscript proposes a significant step in our long-run investigation of infant mortality across species. Since 2016 (Berrut et al. 2016) a succession of studies (Bois et al. 2019) has traced infant mortality from organisms of high complexity (e.g. mammals) down to unicellular organisms. Infant mortality may be considered as a filtering process through which organisms with potentially lethal congenital defects are eliminated. Such defects may have many causes but here we focus particularly on mishaps resulting from non-optimal conditions in the production of proteins, enzymes and other crucial macromolecules. The statistical signature of infant mortality consists in a falling age-specific death rate. The question we address here is whether infant mortality episodes take place in bacteria in the minutes preceding or following cell division. It will be shown that while experiments carried out in the 20th century tried but failed to detect such an effect (mostly because of limited sample size), more recent observations provided consistent evidence of a sizeable mortality, with a rate of the order of 0.7 per 1,000 per hour, in the exponential growth phase of E. coli. A further crucial test will be to measure the age-specific, post-division death rate. An experiment is outlined for that purpose. It is based on the selection of stained cells through flow cytometry and the derivation of their ages at death from their sizes. If an infant mortality effect can be identified in E. coli it can be conjectured that a similar effect also exists in other unicellular organisms, both prokaryote and eukaryote.
1202.4578 | Anastasia Lavrova | Anastasia I. Lavrova, Michael A. Zaks, Lutz Schimansky-Geier | Modeling rhythmic patterns in the hippocampus | 10 pages, 9 figures | Phys. Rev. E (2012) 85, 041922 | 10.1103/PhysRevE.85.041922 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate different dynamical regimes of a neuronal network in the CA3
area of the hippocampus. The proposed neuronal circuit includes two fast- and
two slowly-spiking cells which are interconnected by means of dynamical
synapses. On the individual level, each neuron is modeled by FitzHugh-Nagumo
equations. Three basic rhythmic patterns are observed: gamma-rhythm in which
the fast neurons are uniformly spiking, theta-rhythm in which the individual
spikes are separated by quiet epochs, and theta/gamma rhythm with repeated
patches of spikes. We analyze the influence of asymmetry of synaptic strengths
on the synchronization in the network and demonstrate that strong asymmetry
reduces the variety of available dynamical states. The model network exhibits
multistability; this results in the occurrence of hysteresis in the dependence on the
conductances of individual connections. We show that switching between
different rhythmic patterns in the network depends on the degree of
synchronization between the slow cells.
| [
{
"created": "Tue, 21 Feb 2012 09:52:48 GMT",
"version": "v1"
}
] | 2015-11-20 | [
[
"Lavrova",
"Anastasia I.",
""
],
[
"Zaks",
"Michael A.",
""
],
[
"Schimansky-Geier",
"Lutz",
""
]
] | We investigate different dynamical regimes of neuronal network in the CA3 area of the hippocampus. The proposed neuronal circuit includes two fast- and two slowly-spiking cells which are interconnected by means of dynamical synapses. On the individual level, each neuron is modeled by FitzHugh-Nagumo equations. Three basic rhythmic patterns are observed: gamma-rhythm in which the fast neurons are uniformly spiking, theta-rhythm in which the individual spikes are separated by quiet epochs, and theta/gamma rhythm with repeated patches of spikes. We analyze the influence of asymmetry of synaptic strengths on the synchronization in the network and demonstrate that strong asymmetry reduces the variety of available dynamical states. The model network exhibits multistability; this results in occurrence of hysteresis in dependence on the conductances of individual connections. We show that switching between different rhythmic patterns in the network depends on the degree of synchronization between the slow cells. |
1908.07428 | Gautam Kumar | Benjamin Plaster and Gautam Kumar | Data-Driven Predictive Modeling of Neuronal Dynamics using Long
Short-Term Memory | 35 pages, 26 figures | null | null | null | q-bio.NC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling brain dynamics to better understand and control complex behaviors
underlying various cognitive brain functions has been of interest to engineers,
mathematicians, and physicists for the last several decades. With a motivation
of developing computationally efficient models of brain dynamics to use in
designing control-theoretic neurostimulation strategies, we have developed a
novel data-driven approach in a long short-term memory (LSTM) neural network
architecture to predict the temporal dynamics of complex systems over an
extended long time-horizon in future. In contrast to recent LSTM-based
dynamical modeling approaches that make use of multi-layer perceptrons or
linear combination layers as output layers, our architecture uses a single
fully connected output layer and reversed-order sequence-to-sequence mapping to
improve short time-horizon prediction accuracy and to make multi-timestep
predictions of dynamical behaviors. We demonstrate the efficacy of our approach
in reconstructing the regular spiking to bursting dynamics exhibited by an
experimentally-validated 9-dimensional Hodgkin-Huxley model of hippocampal CA1
pyramidal neurons. Through simulations, we show that our LSTM neural network
can predict the multi-time scale temporal dynamics underlying various spiking
patterns with reasonable accuracy. Moreover, our results show that the
predictions improve with increasing predictive time-horizon in the
multi-timestep deep LSTM neural network.
| [
{
"created": "Sun, 11 Aug 2019 17:36:46 GMT",
"version": "v1"
}
] | 2019-08-21 | [
[
"Plaster",
"Benjamin",
""
],
[
"Kumar",
"Gautam",
""
]
] | Modeling brain dynamics to better understand and control complex behaviors underlying various cognitive brain functions has been of interest to engineers, mathematicians, and physicists for the last several decades. With a motivation of developing computationally efficient models of brain dynamics to use in designing control-theoretic neurostimulation strategies, we have developed a novel data-driven approach in a long short-term memory (LSTM) neural network architecture to predict the temporal dynamics of complex systems over an extended long time-horizon in future. In contrast to recent LSTM-based dynamical modeling approaches that make use of multi-layer perceptrons or linear combination layers as output layers, our architecture uses a single fully connected output layer and reversed-order sequence-to-sequence mapping to improve short time-horizon prediction accuracy and to make multi-timestep predictions of dynamical behaviors. We demonstrate the efficacy of our approach in reconstructing the regular spiking to bursting dynamics exhibited by an experimentally-validated 9-dimensional Hodgkin-Huxley model of hippocampal CA1 pyramidal neurons. Through simulations, we show that our LSTM neural network can predict the multi-time scale temporal dynamics underlying various spiking patterns with reasonable accuracy. Moreover, our results show that the predictions improve with increasing predictive time-horizon in the multi-timestep deep LSTM neural network.
2208.01868 | Alexander Browning | Alexander P Browning and Matthew J Simpson | Geometric analysis enables biological insight from complex
non-identifiable models using simple surrogates | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | An enduring challenge in computational biology is to balance data quality and
quantity with model complexity. Tools such as identifiability analysis and
information criteria have been developed to harmonise this juxtaposition, yet
cannot always resolve the mismatch between available data and the granularity
required in mathematical models to answer important biological questions.
Often, it is only simple phenomenological models, such as the logistic and
Gompertz growth models, that are identifiable from standard experimental
measurements. To draw insights from the complex, non-identifiable models that
incorporate key biological mechanisms of interest, we study the geometry of a
map in parameter space from the complex model to a simple, identifiable,
surrogate model. By studying how non-identifiable parameters in the complex
model quantitatively relate to identifiable parameters in the surrogate, we
introduce and exploit a layer of interpretation between the set of
non-identifiable parameters and the goodness-of-fit metric or likelihood
studied in typical identifiability analysis. We demonstrate our approach by
analysing a hierarchy of mathematical models for multicellular tumour spheroid
growth. Typical data from tumour spheroid experiments are limited and noisy,
and corresponding mathematical models are very often made arbitrarily complex.
Our geometric approach is able to predict non-identifiabilities, subset
non-identifiable parameter spaces into identifiable parameter combinations that
relate to individual data features, and overall provide additional biological
insight from complex non-identifiable models.
| [
{
"created": "Wed, 3 Aug 2022 06:37:47 GMT",
"version": "v1"
}
] | 2022-08-04 | [
[
"Browning",
"Alexander P",
""
],
[
"Simpson",
"Matthew J",
""
]
] | An enduring challenge in computational biology is to balance data quality and quantity with model complexity. Tools such as identifiability analysis and information criterion have been developed to harmonise this juxtaposition, yet cannot always resolve the mismatch between available data and the granularity required in mathematical models to answer important biological questions. Often, it is only simple phenomenological models, such as the logistic and Gompertz growth models, that are identifiable from standard experimental measurements. To draw insights from the complex, non-identifiable models that incorporate key biological mechanisms of interest, we study the geometry of a map in parameter space from the complex model to a simple, identifiable, surrogate model. By studying how non-identifiable parameters in the complex model quantitatively relate to identifiable parameters in surrogate, we introduce and exploit a layer of interpretation between the set of non-identifiable parameters and the goodness-of-fit metric or likelihood studied in typical identifiability analysis. We demonstrate our approach by analysing a hierarchy of mathematical models for multicellular tumour spheroid growth. Typical data from tumour spheroid experiments are limited and noisy, and corresponding mathematical models are very often made arbitrarily complex. Our geometric approach is able to predict non-identifiabilities, subset non-identifiable parameter spaces into identifiable parameter combinations that relate to individual data features, and overall provide additional biological insight from complex non-identifiable models. |
1002.2455 | Z. Nevin Gerek | C Atilgan, Z N Gerek, S B Ozkan, A R Atilgan | Manipulation of conformational change in proteins by single residue
perturbations | null | null | 10.1016/j.bpj.2010.05.020 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using the perturbation-response scanning (PRS) technique, we study a set of
23 proteins that display a variety of conformational motions upon ligand
binding (e.g. shear, hinge, allosteric). In most cases, PRS determines residues
that may be manipulated to achieve the resulting conformational change. PRS
reveals that for some proteins, binding induced conformational change may be
achieved through the perturbation of residues scattered throughout the protein,
whereas in others, perturbation of specific residues confined to a highly
specific region are necessary. Correlations between the experimental and
calculated atomic displacements are always better or equivalent to those
obtained from a modal analysis of elastic network models. Furthermore, best
correlations obtained by the latter approach do not always appear in the most
collective modes. We show that success of the modal analysis depends on the
lack of redundant paths that exist in the protein. PRS thus demonstrates that
several relevant modes may simultaneously be induced by perturbing a single
select residue on the protein. We also illustrate the biological relevance of
applying PRS on the GroEL and ADK structures in detail, where we show that the
residues whose perturbation leads to the precise conformational changes usually
correspond to those experimentally determined to be functionally important.
| [
{
"created": "Fri, 12 Feb 2010 00:52:21 GMT",
"version": "v1"
}
] | 2015-05-18 | [
[
"Atilgan",
"C",
""
],
[
"Gerek",
"Z N",
""
],
[
"Ozkan",
"S B",
""
],
[
"Atilgan",
"A R",
""
]
] | Using the perturbation-response scanning (PRS) technique, we study a set of 23 proteins that display a variety of conformational motions upon ligand binding (e.g. shear, hinge, allosteric). In most cases, PRS determines residues that may be manipulated to achieve the resulting conformational change. PRS reveals that for some proteins, binding induced conformational change may be achieved through the perturbation of residues scattered throughout the protein, whereas in others, perturbation of specific residues confined to a highly specific region are necessary. Correlations between the experimental and calculated atomic displacements are always better or equivalent to those obtained from a modal analysis of elastic network models. Furthermore, best correlations obtained by the latter approach do not always appear in the most collective modes. We show that success of the modal analysis depends on the lack of redundant paths that exist in the protein. PRS thus demonstrates that several relevant modes may simultaneously be induced by perturbing a single select residue on the protein. We also illustrate the biological relevance of applying PRS on the GroEL and ADK structures in detail, where we show that the residues whose perturbation lead to the precise conformational changes usually correspond to those experimentally determined to be functionally important. |
1810.10409 | Manuel P\'ajaro Di\'eguez | Manuel P\'ajaro, Irene Otero-Muras, Carlos V\'azquez and Antonio A.
Alonso | Transient hysteresis and inherent stochasticity in gene regulatory
networks | 35 pages, 13 figures | Nature Communications (2019), volume 10, Article number: 4581 | 10.1038/s41467-019-12344-w | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell fate determination, the process through which cells commit to
differentiated states, is commonly mediated by gene regulatory motifs with
mutually exclusive expression states. The classical deterministic picture for
cell fate determination includes bistability and hysteresis, which enables the
persistence of the acquired cellular state after withdrawal of the stimulus,
ensuring a robust cellular response. However, the stochasticity inherent to
gene expression dynamics is not compatible with hysteresis, since the
stationary solution of the governing Chemical Master Equation does not depend
on the initial conditions. In this work, we provide a quantitative description
of a transient hysteresis phenomenon that reconciles experimental evidence of
hysteretic behaviour in gene regulatory networks with their inherent
stochasticity. Under sufficiently slow dynamics, the dependency of the
non-stationary solutions on the initial state of the cells can lead to what we
denote here as transient hysteresis. To quantify this phenomenon, we provide an
estimate of the convergence rate to the equilibrium. We also introduce the
equation of a natural landscape capturing the evolution of the system that,
unlike traditional cell fate potential landscapes, is compatible with the
notion of coexistence at the microscopic level.
| [
{
"created": "Wed, 24 Oct 2018 14:09:08 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Dec 2018 17:17:07 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Apr 2019 11:04:19 GMT",
"version": "v3"
},
{
"created": "Tue, 8 Oct 2019 12:58:50 GMT",
"version": "v4"
}
] | 2019-10-09 | [
[
"Pájaro",
"Manuel",
""
],
[
"Otero-Muras",
"Irene",
""
],
[
"Vázquez",
"Carlos",
""
],
[
"Alonso",
"Antonio A.",
""
]
] | Cell fate determination, the process through which cells commit to differentiated states is commonly mediated by gene regulatory motifs with mutually exclusive expression states. The classical deterministic picture for cell fate determination includes bistability and hysteresis, which enables the persistence of the acquired cellular state after withdrawal of the stimulus, ensuring a robust cellular response. However, the stochasticity inherent to gene expression dynamics is not compatible with hysteresis, since the stationary solution of the governing Chemical Master Equation does not depend on the initial conditions. In this work, we provide a quantitative description of a transient hysteresis phenomenon that reconciles experimental evidence of hysteretic behaviour in gene regulatory networks with their inherent stochasticity. Under sufficiently slow dynamics, the dependency of the non-stationary solutions on the initial state of the cells can lead to what we denote here as transient hysteresis. To quantify this phenomenon, we provide an estimate of the convergence rate to the equilibrium. We also introduce the equation of a natural landscape capturing the evolution of the system that, unlike traditional cell fate potential landscapes, is compatible with the notion of coexistence at the microscopic level. |
2403.13853 | Olivier Fridolin MAMINIAINA | Of Maminiaina (FOFIFA-DRZVP, IMVAVET), M. Koko, J. J. Rajaonarison, R.
Razafindrakoto (IMVAVET), J. Ravaomanana (FOFIFA-DRZVP), A. D. Shannon | Valeur des tests PACE et CTB_ELISA dans le diagnostic de la peste
porcine classique (PPC) et le contr{\^o}le de qualit{\'e} du vaccin
correspondant {\`a} Madagascar | in French language | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since 1994, we have used ELISA (Enzyme Linked Immunosorbent Assay) in the
diagnosis of CSF. This is an ELISA for capturing antigens (PACE) possibly
contained in the samples. The advantage of this test comes from the fact that
it is completely independent of cell cultures. In addition, it is fast: the
result can be obtained in less than 36 hours. A study of its standardization
carried out in Australia gave a sensitivity (Se) of 99%, a specificity (Sp)
close to 100% and a negative predictive value (NPV) of 99.7%. Due to its high
specificity, the test gives a negative result for all true negatives; in other
words, the negatives of the test correspond to the true negatives. A variant
of the capture ELISA, the CTB-ELISA or complex trapping blocking ELISA, allows
the quantity of antibodies directed against the non-structural protein p80 (or
NS3) contained in animal sera to be measured. Evaluation of the level of
anti-NS3 antibodies constitutes an excellent assessment of the level of
neutralizing antibodies because the correlation coefficient between these two
types of antibodies, the first obtained by CTB-ELISA and the second by serum
neutralization (VNT), is very high (r = 0.98). The two tests, one capable of
detecting pestiviral antigens and the other of measuring antibodies specific
to each of the groups, constitute an excellent tool for the quality control
of the anti-CSF vaccine.
| [
{
"created": "Tue, 19 Mar 2024 08:46:53 GMT",
"version": "v1"
}
] | 2024-03-22 | [
[
"Maminiaina",
"Of",
"",
"FOFIFA-DRZVP, IMVAVET"
],
[
"Koko",
"M.",
"",
"IMVAVET"
],
[
"Rajaonarison",
"J. J.",
"",
"IMVAVET"
],
[
"Razafindrakoto",
"R.",
"",
"IMVAVET"
],
[
"Ravaomanana",
"J.",
"",
"FOFIFA-DRZVP"
],
... | Since 1994, we have used ELISA (Enzyme Linked Immunosorbent Assay) in the diagnosis of CSF. This is an ELISA for capturing antigens (PACE) possibly contained in the samples. The advantage of this test comes from the fact that it is completely independent of cell cultures. In addition, it is fast: the result can be obtained in less than 36 hours. A study of its standardization carried out in Australia gave a sensitivity (Se) of 99%, a specificity (Sp) close to 100% and a negative predictive value (NPV) of 99.7%. Due to its high specificity, the test gives a negative result for all true negatives; in other words, the negatives of the test correspond to the true negatives. A variant of the capture ELISA, the CTB-ELISA or complex trapping blocking ELISA, allows the quantity of antibodies directed against the non-structural protein p80 (or NS3) contained in animal sera to be measured. Evaluation of the level of anti-NS3 antibodies constitutes an excellent assessment of the level of neutralizing antibodies because the correlation coefficient between these two types of antibodies, the first obtained by CTB-ELISA and the second by serum neutralization (VNT), is very high (r = 0.98). The two tests, one capable of detecting pestiviral antigens and the other of measuring antibodies specific to each of the groups, constitute an excellent tool for the quality control of the anti-CSF vaccine.
1901.07016 | Katerina Kaouri Dr | Katerina Kaouri, Philip K. Maini, Paris Skourides, Neophytos
Christodoulou, S. Jonathan Chapman | A simple mechanochemical model for calcium signalling in embryonic
epithelial cells | 37 pages (this is the revised version after being accepted with minor
revisions at the Journal of Mathematical Biology) | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Calcium (Ca2+) signalling is one of the most important mechanisms of
information propagation in the body. In embryogenesis the interplay between
Ca2+ signalling and mechanical forces is critical to the healthy development of
an embryo but poorly understood. Several types of embryonic cells exhibit
calcium-induced contractions and many experiments indicate that Ca2+ signals
and contractions are coupled via a two-way mechanochemical coupling. We present
a new analysis of experimental data that supports the existence of this
coupling during Apical Constriction in Neural Tube Closure. We then propose a
mechanochemical model, building on early models that couple Ca2+ dynamics to
cell mechanics and replace the bistable Ca2+ release with modern,
experimentally validated Ca2+ dynamics. We assume that the cell is a linear
viscoelastic material and model the Ca2+-induced contraction stress with a Hill
function saturating at high Ca2+ levels. We also express, for the first time,
the "stretch-activation" Ca2+ flux in the early mechanochemical models as a
bottom-up contribution from stretch-sensitive Ca2+ channels on the cell
membrane. We reduce the model to three ordinary differential equations and
analyse its bifurcation structure semi-analytically as the $IP_3$
concentration, and the "strength" of stretch activation, $\lambda$ vary. The
Ca2+ system ($\lambda=0$, no mechanics) exhibits relaxation oscillations for a
certain range of $IP_3$ values. As $\lambda$ is increased the range of $IP_3$
values decreases, the oscillation amplitude decreases and the frequency
increases. Oscillations vanish for a sufficiently high value of $\lambda$.
These results agree with experiments in embryonic cells that also link the loss
of Ca2+ oscillations to embryo abnormalities. The work addresses a very
important and understudied question on the coupling of chemical and mechanical
signalling in embryogenesis.
| [
{
"created": "Mon, 21 Jan 2019 18:08:11 GMT",
"version": "v1"
}
] | 2019-01-23 | [
[
"Kaouri",
"Katerina",
""
],
[
"Maini",
"Philip K.",
""
],
[
"Skourides",
"Paris",
""
],
[
"Christodoulou",
"Neophytos",
""
],
[
"Chapman",
"S. Jonathan",
""
]
] | Calcium (Ca2+) signalling is one of the most important mechanisms of information propagation in the body. In embryogenesis the interplay between Ca2+ signalling and mechanical forces is critical to the healthy development of an embryo but poorly understood. Several types of embryonic cells exhibit calcium-induced contractions and many experiments indicate that Ca2+ signals and contractions are coupled via a two-way mechanochemical coupling. We present a new analysis of experimental data that supports the existence of this coupling during Apical Constriction in Neural Tube Closure. We then propose a mechanochemical model, building on early models that couple Ca2+ dynamics to cell mechanics and replace the bistable Ca2+ release with modern, experimentally validated Ca2+ dynamics. We assume that the cell is a linear viscoelastic material and model the Ca2+-induced contraction stress with a Hill function saturating at high Ca2+ levels. We also express, for the first time, the "stretch-activation" Ca2+ flux in the early mechanochemical models as a bottom-up contribution from stretch-sensitive Ca2+ channels on the cell membrane. We reduce the model to three ordinary differential equations and analyse its bifurcation structure semi-analytically as the $IP_3$ concentration, and the "strength" of stretch activation, $\lambda$ vary. The Ca2+ system ($\lambda=0$, no mechanics) exhibits relaxation oscillations for a certain range of $IP_3$ values. As $\lambda$ is increased the range of $IP_3$ values decreases, the oscillation amplitude decreases and the frequency increases. Oscillations vanish for a sufficiently high value of $\lambda$. These results agree with experiments in embryonic cells that also link the loss of Ca2+ oscillations to embryo abnormalities. The work addresses a very important and understudied question on the coupling of chemical and mechanical signalling in embryogenesis. |
2301.07016 | Vladimir Aksyuk | V.A. Aksyuk | Consciousness is learning: predictive processing systems that learn by
binding may perceive themselves as conscious | This version adds 5 figures (new) and only modifies the text to
reference the figures | null | null | null | q-bio.NC cs.AI cs.LG cs.NE cs.RO | http://creativecommons.org/licenses/by/4.0/ | Machine learning algorithms have achieved superhuman performance in specific
complex domains. Yet learning online from few examples and efficiently
generalizing across domains remains elusive. In humans such learning proceeds
via declarative memory formation and is closely associated with consciousness.
Predictive processing has been advanced as a principled Bayesian inference
framework for understanding the cortex as implementing deep generative
perceptual models for both sensory data and action control. However, predictive
processing offers little direct insight into fast compositional learning or the
mystery of consciousness. Here we propose that through implementing online
learning by hierarchical binding of unpredicted inferences, a predictive
processing system may flexibly generalize in novel situations by forming
working memories for perceptions and actions from single examples, which can
become short- and long-term declarative memories retrievable by associative
recall. We argue that the contents of such working memories are unified yet
differentiated, can be maintained by selective attention and are consistent
with observations of masking, postdictive perceptual integration, and other
paradigm cases of consciousness research. We describe how the brain could have
evolved to use perceptual value prediction for reinforcement learning of
complex action policies simultaneously implementing multiple survival and
reproduction strategies. 'Conscious experience' is how such a learning system
perceptually represents its own functioning, suggesting an answer to the meta
problem of consciousness. Our proposal naturally unifies feature binding,
recurrent processing, and predictive processing with global workspace, and, to
a lesser extent, the higher order theories of consciousness.
| [
{
"created": "Tue, 17 Jan 2023 17:06:48 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Apr 2023 22:23:05 GMT",
"version": "v2"
}
] | 2023-04-19 | [
[
"Aksyuk",
"V. A.",
""
]
] | Machine learning algorithms have achieved superhuman performance in specific complex domains. Yet learning online from few examples and efficiently generalizing across domains remains elusive. In humans such learning proceeds via declarative memory formation and is closely associated with consciousness. Predictive processing has been advanced as a principled Bayesian inference framework for understanding the cortex as implementing deep generative perceptual models for both sensory data and action control. However, predictive processing offers little direct insight into fast compositional learning or the mystery of consciousness. Here we propose that through implementing online learning by hierarchical binding of unpredicted inferences, a predictive processing system may flexibly generalize in novel situations by forming working memories for perceptions and actions from single examples, which can become short- and long-term declarative memories retrievable by associative recall. We argue that the contents of such working memories are unified yet differentiated, can be maintained by selective attention and are consistent with observations of masking, postdictive perceptual integration, and other paradigm cases of consciousness research. We describe how the brain could have evolved to use perceptual value prediction for reinforcement learning of complex action policies simultaneously implementing multiple survival and reproduction strategies. 'Conscious experience' is how such a learning system perceptually represents its own functioning, suggesting an answer to the meta problem of consciousness. Our proposal naturally unifies feature binding, recurrent processing, and predictive processing with global workspace, and, to a lesser extent, the higher order theories of consciousness. |
1411.7364 | Gerardo Chowell | Gerardo Chowell, C\'ecile Viboud, James M. Hyman, Lone Simonsen | The Western Africa Ebola virus disease epidemic exhibits both global
exponential and local polynomial growth rates | Published version in PLOS Currents Outbreaks. Jan 21st. 2015
http://currents.plos.org/outbreaks/article/the-western-africa-ebola-virus-disease-epidemic-exhibits-both-global-exponential-and-local-polynomial-growth-rates/ | PLOS Currents Outbreaks. 2015 Jan 21. Edition 1 | null | null | q-bio.PE | http://creativecommons.org/licenses/by/3.0/ | Background: While many infectious disease epidemics are initially
characterized by an exponential growth in time, we show that district-level
Ebola virus disease (EVD) outbreaks in West Africa follow slower
polynomial-based growth kinetics over several generations of the disease.
Methods: We analyzed epidemic growth patterns at three different spatial scales
(regional, national, and subnational) of the Ebola virus disease epidemic in
Guinea, Sierra Leone and Liberia by compiling publicly available weekly time
series of reported EVD case numbers from the patient database available from
the World Health Organization website for the period 05-Jan to 17-Dec 2014.
Results: We found significant differences in the growth patterns of EVD cases
at the scale of the country, district, and other subnational administrative
divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone,
and Liberia show periods of approximate exponential growth. In contrast, local
epidemics are asynchronous and exhibit slow growth patterns during 3 or more
EVD generations, which can be better approximated by a polynomial than an
exponential. Conclusions: The slower than expected growth pattern of local EVD
outbreaks could result from a variety of factors, including behavior changes,
success of control interventions, or intrinsic features of the disease such as
a high level of clustering. Quantifying the contribution of each of these
factors could help refine estimates of final epidemic size and the relative
impact of different mitigation efforts in current and future EVD outbreaks.
| [
{
"created": "Wed, 26 Nov 2014 20:42:03 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Nov 2014 17:57:34 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Jan 2015 15:37:59 GMT",
"version": "v3"
}
] | 2015-01-28 | [
[
"Chowell",
"Gerardo",
""
],
[
"Viboud",
"Cécile",
""
],
[
"Hyman",
"James M.",
""
],
[
"Simonsen",
"Lone",
""
]
] | Background: While many infectious disease epidemics are initially characterized by an exponential growth in time, we show that district-level Ebola virus disease (EVD) outbreaks in West Africa follow slower polynomial-based growth kinetics over several generations of the disease. Methods: We analyzed epidemic growth patterns at three different spatial scales (regional, national, and subnational) of the Ebola virus disease epidemic in Guinea, Sierra Leone and Liberia by compiling publicly available weekly time series of reported EVD case numbers from the patient database available from the World Health Organization website for the period 05-Jan to 17-Dec 2014. Results: We found significant differences in the growth patterns of EVD cases at the scale of the country, district, and other subnational administrative divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone, and Liberia show periods of approximate exponential growth. In contrast, local epidemics are asynchronous and exhibit slow growth patterns during 3 or more EVD generations, which can be better approximated by a polynomial than an exponential. Conclusions: The slower than expected growth pattern of local EVD outbreaks could result from a variety of factors, including behavior changes, success of control interventions, or intrinsic features of the disease such as a high level of clustering. Quantifying the contribution of each of these factors could help refine estimates of final epidemic size and the relative impact of different mitigation efforts in current and future EVD outbreaks. |
1506.03290 | Jayajit Das | Jayajit Das | Limiting energy dissipation induces glassy kinetics in single cell high
precision responses | Revised version. In press in Biophysical Journal | null | 10.1016/j.bpj.2016.01.022 | null | q-bio.CB cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single cells often generate precise responses by involving dissipative
out-of-thermodynamic equilibrium processes in signaling networks. The available
free energy to fuel these processes could become limited depending on the
metabolic state of an individual cell. How does limiting dissipation affect the
kinetics of high precision responses in single cells? I address this question
in the context of a kinetic proofreading scheme used in a simple model of early
time T cell signaling. I show using exact analytical calculations and numerical
simulations that limiting dissipation qualitatively changes the kinetics in
single cells marked by emergence of slow kinetics, large cell-to-cell
variations of copy numbers, temporally correlated stochastic events (dynamic
facilitation), and ergodicity breaking. Thus, constraints in energy
dissipation, in addition to negatively affecting ligand discrimination in T
cells, can create a fundamental difficulty in interpreting single cell kinetics
from cell population level results.
| [
{
"created": "Wed, 10 Jun 2015 13:27:09 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jan 2016 00:05:59 GMT",
"version": "v2"
}
] | 2016-04-20 | [
[
"Das",
"Jayajit",
""
]
] | Single cells often generate precise responses by involving dissipative out-of-thermodynamic equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early time T cell signaling. I show using exact analytical calculations and numerical simulations that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and, ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in interpreting single cell kinetics from cell population level results. |
q-bio/0402039 | Sagar Khare | Jainab Kahtun, Sagar D. Khare, Nikolay V. Dokholyan | Can contact potentials reliably predict stability of proteins? | 28 pages, 7 figs, 2 tables | J. Mol. Biol. 336: 1223-1238 (2004) | null | null | q-bio.BM | null | The simplest approximation of interaction potential between amino-acids in
proteins is the contact potential, which defines the effective free energy of a
protein conformation by a set of amino acid contacts formed in this
conformation. Finding a contact potential capable of predicting free energies
of protein states across a variety of protein families will aid protein folding
and engineering in silico on a computationally tractable time-scale. We test
the ability of contact potentials to accurately and transferably (across
various protein families) predict stability changes of proteins upon mutations.
We develop a new methodology to determine the contact potentials in proteins
from experimental measurements of changes in protein thermodynamic stabilities
(ddG) upon mutations. We apply our methodology to derive sets of contact
interaction parameters for a hierarchy of interaction models including
solvation and multi-body contact parameters. We test how well our models
reproduce experimental measurements by statistical tests. We evaluate the
maximum accuracy of predictions obtained by using contact potentials and the
correlation between parameters derived from different data-sets of experimental
ddG values. We argue that it is impossible to reach experimental accuracy and
derive fully transferable contact parameters using the contact models of
potentials. However, contact parameters can yield reliable predictions of ddG
for datasets of mutations confined to specific amino-acid positions in the
sequence of a single protein.
| [
{
"created": "Thu, 19 Feb 2004 21:58:20 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kahtun",
"Jainab",
""
],
[
"Khare",
"Sagar D.",
""
],
[
"Dokholyan",
"Nikolay V.",
""
]
] | The simplest approximation of interaction potential between amino-acids in proteins is the contact potential, which defines the effective free energy of a protein conformation by a set of amino acid contacts formed in this conformation. Finding a contact potential capable of predicting free energies of protein states across a variety of protein families will aid protein folding and engineering in silico on a computationally tractable time-scale. We test the ability of contact potentials to accurately and transferably (across various protein families) predict stability changes of proteins upon mutations. We develop a new methodology to determine the contact potentials in proteins from experimental measurements of changes in protein thermodynamic stabilities (ddG) upon mutations. We apply our methodology to derive sets of contact interaction parameters for a hierarchy of interaction models including solvation and multi-body contact parameters. We test how well our models reproduce experimental measurements by statistical tests. We evaluate the maximum accuracy of predictions obtained by using contact potentials and the correlation between parameters derived from different data-sets of experimental ddG values. We argue that it is impossible to reach experimental accuracy and derive fully transferable contact parameters using the contact models of potentials. However, contact parameters can yield reliable predictions of ddG for datasets of mutations confined to specific amino-acid positions in the sequence of a single protein. |
1410.0557 | Rodrigo Echeveste | Rodrigo Echeveste and Claudius Gros | Two-trace model for spike-timing-dependent synaptic plasticity | Neural Computation (in press) | Neural Computation 2015, 27(3), 672-698 | 10.1162/NECO_a_00707 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an effective model for timing-dependent synaptic plasticity (STDP)
in terms of two interacting traces, corresponding to the fraction of activated
NMDA receptors and the Ca2+ concentration in the dendritic spine of the
postsynaptic neuron. This model intends to bridge the worlds of existing
simplistic phenomenological rules and highly detailed models, constituting thus
a practical tool for the study of the interplay between neural activity and
synaptic plasticity in extended spiking neural networks. For isolated pairs of
pre- and postsynaptic spikes the standard pairwise STDP rule is reproduced,
with appropriate parameters determining the respective weights and time scales
for the causal and the anti-causal contributions. The model contains otherwise
only three free parameters which can be adjusted to reproduce triplet
nonlinearities in both hippocampal culture and cortical slices. We also
investigate the transition from time-dependent to rate-dependent plasticity
occurring for both correlated and uncorrelated spike patterns.
| [
{
"created": "Thu, 2 Oct 2014 14:11:20 GMT",
"version": "v1"
}
] | 2015-02-26 | [
[
"Echeveste",
"Rodrigo",
""
],
[
"Gros",
"Claudius",
""
]
] | We present an effective model for timing-dependent synaptic plasticity (STDP) in terms of two interacting traces, corresponding to the fraction of activated NMDA receptors and the Ca2+ concentration in the dendritic spine of the postsynaptic neuron. This model intends to bridge the worlds of existing simplistic phenomenological rules and highly detailed models, constituting thus a practical tool for the study of the interplay between neural activity and synaptic plasticity in extended spiking neural networks. For isolated pairs of pre- and postsynaptic spikes the standard pairwise STDP rule is reproduced, with appropriate parameters determining the respective weights and time scales for the causal and the anti-causal contributions. The model contains otherwise only three free parameters which can be adjusted to reproduce triplet nonlinearities in both hippocampal culture and cortical slices. We also investigate the transition from time-dependent to rate-dependent plasticity occurring for both correlated and uncorrelated spike patterns. |
1812.09203 | Xi Chen | Xi Chen, Jin Xie, Qingcong Yuan | Pan-Cancer Epigenetic Biomarker Selection from Blood Samples Using SAS | 9 pages, MWSUG 2018 | MWSUG 2018 conference proceedings | null | HS-45 | q-bio.GN stat.ME | http://creativecommons.org/publicdomain/zero/1.0/ | A key focus in current cancer research is the discovery of cancer biomarkers
that allow earlier detection with high accuracy and lower costs for both
patients and hospitals. Blood samples have long been used as a health status
indicator, but DNA methylation signatures in blood have not been fully
appreciated in cancer research. Historically, analysis of cancer has been
conducted directly with the patient's tumor or related tissues. Such analyses
allow physicians to diagnose a patient's health and cancer status; however,
physicians must observe certain symptoms that prompt them to use biopsies or
imaging to verify the diagnosis. This is a post-hoc approach. Our study will
focus on epigenetic information for cancer detection, specifically information
about DNA methylation in human peripheral blood samples in cancer discordant
monozygotic twin-pairs. This information might be able to help us detect cancer
much earlier, before the first symptom appears. Several other types of
epigenetic data can also be used, but here we demonstrate the potential of
blood DNA methylation data as a biomarker for pan-cancer using SAS 9.3 and SAS
EM. We report that 55 methylation CpG sites measurable in blood samples can be
used as biomarkers for early cancer detection and classification.
| [
{
"created": "Fri, 21 Dec 2018 15:42:00 GMT",
"version": "v1"
}
] | 2018-12-24 | [
[
"Chen",
"Xi",
""
],
[
"Xie",
"Jin",
""
],
[
"Yuan",
"Qingcong",
""
]
] | A key focus in current cancer research is the discovery of cancer biomarkers that allow earlier detection with high accuracy and lower costs for both patients and hospitals. Blood samples have long been used as a health status indicator, but DNA methylation signatures in blood have not been fully appreciated in cancer research. Historically, analysis of cancer has been conducted directly with the patient's tumor or related tissues. Such analyses allow physicians to diagnose a patient's health and cancer status; however, physicians must observe certain symptoms that prompt them to use biopsies or imaging to verify the diagnosis. This is a post-hoc approach. Our study will focus on epigenetic information for cancer detection, specifically information about DNA methylation in human peripheral blood samples in cancer discordant monozygotic twin-pairs. This information might be able to help us detect cancer much earlier, before the first symptom appears. Several other types of epigenetic data can also be used, but here we demonstrate the potential of blood DNA methylation data as a biomarker for pan-cancer using SAS 9.3 and SAS EM. We report that 55 methylation CpG sites measurable in blood samples can be used as biomarkers for early cancer detection and classification. |
2210.02273 | Nathaniel Braman | Nathaniel Braman, Prateek Prasanna, Kaustav Bera, Mehdi Alilou,
Mohammadhadi Khorrami, Patrick Leo, Maryam Etesami, Manasa Vulchi, Paulette
Turk, Amit Gupta, Prantesh Jain, Pingfu Fu, Nathan Pennell, Vamsidhar
Velcheti, Jame Abraham, Donna Plecha and Anant Madabhushi | Novel Radiomic Measurements of Tumor-Associated Vasculature Morphology
on Clinical Imaging as a Biomarker of Treatment Response in Multiple Cancers | This manuscript has been accepted for publication in Clinical Cancer
Research, which is published by the American Association for Cancer Research | null | 10.1158/1078-0432.CCR-21-4148 | null | q-bio.QM cs.CV q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Purpose: Tumor-associated vasculature differs from healthy blood vessels by
its chaotic architecture and twistedness, which promotes treatment resistance.
Measurable differences in these attributes may help stratify patients by likely
benefit of systemic therapy (e.g. chemotherapy). In this work, we present a new
category of radiomic biomarkers called quantitative tumor-associated
vasculature (QuanTAV) features, and demonstrate their ability to predict
response and survival across multiple cancers, imaging modalities, and
treatment regimens.
Experimental Design: We segmented tumor vessels and computed mathematical
measurements of twistedness and organization on routine pre-treatment radiology
(CT or contrast-enhanced MRI) from 558 patients, who received one of four
first-line chemotherapy-based therapeutic intervention strategies for breast
(n=371) or non-small cell lung cancer (NSCLC, n=187).
Results: Across 4 chemotherapy-based treatment strategies, classifiers of
QuanTAV measurements significantly (p<.05) predicted response in held out
testing cohorts alone (AUC=0.63-0.71) and increased AUC by 0.06-0.12 when added
to models of significant clinical variables alone. QuanTAV risk scores were
prognostic of recurrence-free survival in treatment cohorts receiving chemotherapy for
breast cancer (p=0.002, HR=1.25, 95% CI 1.08-1.44, C-index=.66) and
chemoradiation for NSCLC (p=0.039, HR=1.28, 95% CI 1.01-1.62, C-index=0.66).
Categorical QuanTAV risk groups were independently prognostic among all
treatment groups, including NSCLC patients receiving chemotherapy (p=0.034,
HR=2.29, 95% CI 1.07-4.94, C-index=0.62).
Conclusions: Across these domains, we observed an association of vascular
morphology on radiology with treatment outcome. Our findings suggest the
potential of tumor-associated vasculature shape and structure as a prognostic
and predictive biomarker for multiple cancers and treatments.
| [
{
"created": "Wed, 5 Oct 2022 13:58:27 GMT",
"version": "v1"
}
] | 2022-10-06 | [
[
"Braman",
"Nathaniel",
""
],
[
"Prasanna",
"Prateek",
""
],
[
"Bera",
"Kaustav",
""
],
[
"Alilou",
"Mehdi",
""
],
[
"Khorrami",
"Mohammadhadi",
""
],
[
"Leo",
"Patrick",
""
],
[
"Etesami",
"Maryam",
""
],... | Purpose: Tumor-associated vasculature differs from healthy blood vessels by its chaotic architecture and twistedness, which promotes treatment resistance. Measurable differences in these attributes may help stratify patients by likely benefit of systemic therapy (e.g. chemotherapy). In this work, we present a new category of radiomic biomarkers called quantitative tumor-associated vasculature (QuanTAV) features, and demonstrate their ability to predict response and survival across multiple cancers, imaging modalities, and treatment regimens. Experimental Design: We segmented tumor vessels and computed mathematical measurements of twistedness and organization on routine pre-treatment radiology (CT or contrast-enhanced MRI) from 558 patients, who received one of four first-line chemotherapy-based therapeutic intervention strategies for breast (n=371) or non-small cell lung cancer (NSCLC, n=187). Results: Across 4 chemotherapy-based treatment strategies, classifiers of QuanTAV measurements significantly (p<.05) predicted response in held out testing cohorts alone (AUC=0.63-0.71) and increased AUC by 0.06-0.12 when added to models of significant clinical variables alone. QuanTAV risk scores were prognostic of recurrence free survival in treatment cohorts chemotherapy for breast cancer (p=0.002, HR=1.25, 95% CI 1.08-1.44, C-index=.66) and chemoradiation for NSCLC (p=0.039, HR=1.28, 95% CI 1.01-1.62, C-index=0.66). Categorical QuanTAV risk groups were independently prognostic among all treatment groups, including NSCLC patients receiving chemotherapy (p=0.034, HR=2.29, 95% CI 1.07-4.94, C-index=0.62). Conclusions: Across these domains, we observed an association of vascular morphology on radiology with treatment outcome. Our findings suggest the potential of tumor-associated vasculature shape and structure as a prognostic and predictive biomarker for multiple cancers and treatments. |
2005.06552 | Gary Mamon | Gary A. Mamon (Institut d'Astrophysique de Paris (UMR 7095: CNRS &
Sorbonne Universit\'e)) | Regional analysis of COVID-19 in France from fit of hospital data with
different evolutionary models | 21 pages. Comments welcome. This version 4 has a different title in
the PDF to match that of arXiv, and a retouch of the last sentence of the
last section and of the Acknowledgements | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The SIR evolutionary model predicts too sharp a decrease of the fractions of
people infected with COVID-19 in France after the start of the national
lockdown, compared to what is observed. I fit the daily hospital data: arrivals
in regular and critical care units, releases and deaths, using extended SEIR
models. These involve ratios of evolutionary timescales to branching fractions,
assumed uniform throughout a country, and the basic reproduction number, $R_0$,
before and during the national lockdown, for each region of France. The
joint-region Bayesian analysis allows precise evaluations of the time/fraction
ratios and pre-hospitalized fractions. The hospital data are well fit by the
models, except the arrivals in critical care, which decrease faster than
predicted, indicating better treatment over time. Averaged over France, the
analysis yields $R_0$= 3.4$\pm$0.1 before the lockdown and 0.65$\pm$0.04 (90%
c.l.) during the lockdown, with small regional variations. On 11 May 2020, the
Infection Fatality Rate in France was 4 $\pm$1% (90% c.l.), while the Feverish
vastly outnumber the Asymptomatic, contrary to the early phases. Without the
lockdown nor social distancing, over 2 million deaths from COVID-19 would have
occurred throughout France, while a lockdown that would have been enforced 10
days earlier would have led to less than 1000 deaths. The fraction of immunized
people reached a plateau below 1% throughout France (3% in Paris) by late April
2020 (95% c.l.), suggesting a lack of herd immunity. The widespread
availability of face masks on 11 May, when the lockdown was partially lifted,
should keep $R_0$ below unity if at least 46% of the population wear them
outside their home. Otherwise, without enhanced other social distancing, a
second wave is inevitable and would cause the number of deaths to triple between
early May and October (if $R_0$=1.2) or even late June (if $R_0$=2).
| [
{
"created": "Wed, 13 May 2020 19:42:14 GMT",
"version": "v1"
},
{
"created": "Fri, 15 May 2020 17:20:15 GMT",
"version": "v2"
},
{
"created": "Mon, 25 May 2020 17:54:00 GMT",
"version": "v3"
},
{
"created": "Tue, 16 Jun 2020 11:05:03 GMT",
"version": "v4"
}
] | 2020-06-17 | [
[
"Mamon",
"Gary A.",
"",
"Institut d'Astrophysique de Paris"
]
] | The SIR evolutionary model predicts too sharp a decrease of the fractions of people infected with COVID-19 in France after the start of the national lockdown, compared to what is observed. I fit the daily hospital data: arrivals in regular and critical care units, releases and deaths, using extended SEIR models. These involve ratios of evolutionary timescales to branching fractions, assumed uniform throughout a country, and the basic reproduction number, $R_0$, before and during the national lockdown, for each region of France. The joint-region Bayesian analysis allows precise evaluations of the time/fraction ratios and pre-hospitalized fractions. The hospital data are well fit by the models, except the arrivals in critical care, which decrease faster than predicted, indicating better treatment over time. Averaged over France, the analysis yields $R_0$= 3.4$\pm$0.1 before the lockdown and 0.65$\pm$0.04 (90% c.l.) during the lockdown, with small regional variations. On 11 May 2020, the Infection Fatality Rate in France was 4 $\pm$1% (90% c.l.), while the Feverish vastly outnumber the Asymptomatic, contrary to the early phases. Without the lockdown nor social distancing, over 2 million deaths from COVID-19 would have occurred throughout France, while a lockdown that would have been enforced 10 days earlier would have led to less than 1000 deaths. The fraction of immunized people reached a plateau below 1% throughout France (3% in Paris) by late April 2020 (95% c.l.), suggesting a lack of herd immunity. The widespread availability of face masks on 11 May, when the lockdown was partially lifted, should keep $R_0$ below unity if at least 46% of the population wear them outside their home. Otherwise, without enhanced other social distancing, a second wave is inevitable and cause the number of deaths to triple between early May and October (if $R_0$=1.2) or even late June (if $R_0$=2). |
2012.00675 | Tananun Songdechakraiwut | Tananun Songdechakraiwut and Moo K. Chung | Topological Learning for Brain Networks | 31 pages, 14 figures, 4 tables, code at https://github.com/topolearn | Ann. Appl. Stat. 17(1): 403-433 (March 2023) | 10.1214/22-AOAS1633 | null | q-bio.NC cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel topological learning framework that integrates
networks of different sizes and topology through persistent homology. Such a
challenging task is made possible through the introduction of a computationally
efficient topological loss. The use of the proposed loss bypasses the intrinsic
computational bottleneck associated with matching networks. We validate the
method in extensive statistical simulations to assess its effectiveness when
discriminating networks with different topology. The method is further
demonstrated in a twin brain imaging study where we determine if brain networks
are genetically heritable. The challenge here is due to the difficulty of
overlaying the topologically different functional brain networks obtained from
resting-state functional MRI onto the template structural brain network
obtained through diffusion MRI.
| [
{
"created": "Wed, 25 Nov 2020 18:46:36 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Dec 2020 05:51:52 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Nov 2021 21:33:38 GMT",
"version": "v3"
},
{
"created": "Fri, 27 May 2022 19:00:08 GMT",
"version": "v4"
},
{
"c... | 2023-01-30 | [
[
"Songdechakraiwut",
"Tananun",
""
],
[
"Chung",
"Moo K.",
""
]
] | This paper proposes a novel topological learning framework that integrates networks of different sizes and topology through persistent homology. Such challenging task is made possible through the introduction of a computationally efficient topological loss. The use of the proposed loss bypasses the intrinsic computational bottleneck associated with matching networks. We validate the method in extensive statistical simulations to assess its effectiveness when discriminating networks with different topology. The method is further demonstrated in a twin brain imaging study where we determine if brain networks are genetically heritable. The challenge here is due to the difficulty of overlaying the topologically different functional brain networks obtained from resting-state functional MRI onto the template structural brain network obtained through diffusion MRI. |
1903.10042 | Jonathan Karr | Paul F. Lang, Yassmine Chebaro, Xiaoyue Zheng, John A. P. Sekar, Bilal
Shaikh, Darren A. Natale and Jonathan R. Karr | BpForms and BcForms: Tools for concretely describing non-canonical
polymers and complexes to facilitate comprehensive biochemical networks | 21 pages, 4 figures, 2 boxes | null | 10.1186/s13059-020-02025-z | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Although non-canonical residues, caps, crosslinks, and nicks play an
important role in the function of many DNA, RNA, proteins, and complexes, we do
not fully understand how networks of non-canonical macromolecules generate
behavior. One barrier is our limited formats, such as IUPAC, for abstractly
describing macromolecules. To overcome this barrier, we developed BpForms and
BcForms, a toolkit of ontologies, grammars, and software for abstracting the
primary structure of polymers and complexes as combinations of residues, caps,
crosslinks, and nicks. The toolkit can help quality control, exchange, and
integrate information about the primary structure of macromolecules into
fine-grained global networks of intracellular biochemistry.
| [
{
"created": "Sun, 24 Mar 2019 18:59:53 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Sep 2019 14:23:05 GMT",
"version": "v2"
}
] | 2021-06-04 | [
[
"Lang",
"Paul F.",
""
],
[
"Chebaro",
"Yassmine",
""
],
[
"Zheng",
"Xiaoyue",
""
],
[
"Sekar",
"John A. P.",
""
],
[
"Shaikh",
"Bilal",
""
],
[
"Natale",
"Darren A.",
""
],
[
"Karr",
"Jonathan R.",
""
]
] | Although non-canonical residues, caps, crosslinks, and nicks play an important role in the function of many DNA, RNA, proteins, and complexes, we do not fully understand how networks of non-canonical macromolecules generate behavior. One barrier is our limited formats, such as IUPAC, for abstractly describing macromolecules. To overcome this barrier, we developed BpForms and BcForms, a toolkit of ontologies, grammars, and software for abstracting the primary structure of polymers and complexes as combinations of residues, caps, crosslinks, and nicks. The toolkit can help quality control, exchange, and integrate information about the primary structure of macromolecules into fine-grained global networks of intracellular biochemistry. |
2206.03950 | Youzhi Qu | Youzhi Qu, Xinyao Jian, Wenxin Che, Penghui Du, Kai Fu, Quanying Liu | Transfer learning to decode brain states reflecting the relationship
between cognitive tasks | null | null | null | null | q-bio.NC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transfer learning improves the performance of the target task by leveraging
the data of a specific source task: the closer the relationship between the
source and the target tasks, the greater the performance improvement by
transfer learning. In neuroscience, the relationship between cognitive tasks is
usually represented by similarity of activated brain regions or neural
representation. However, no study has linked transfer learning and neuroscience
to reveal the relationship between cognitive tasks. In this study, we propose a
transfer learning framework to reflect the relationship between cognitive
tasks, and compare the task relations reflected by transfer learning and by the
overlaps of brain regions (e.g., neurosynth). Our results of transfer learning
create cognitive taskonomy to reflect the relationship between cognitive tasks
which is well in line with the task relations derived from neurosynth. Transfer
learning performs better in task decoding with fMRI data if the source and
target cognitive tasks activate similar brain regions. Our study uncovers the
relationship of multiple cognitive tasks and provides guidance for source task
selection in transfer learning for neural decoding based on small-sample data.
| [
{
"created": "Tue, 7 Jun 2022 09:39:47 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Jun 2022 13:25:10 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Aug 2022 06:50:03 GMT",
"version": "v3"
}
] | 2022-08-31 | [
[
"Qu",
"Youzhi",
""
],
[
"Jian",
"Xinyao",
""
],
[
"Che",
"Wenxin",
""
],
[
"Du",
"Penghui",
""
],
[
"Fu",
"Kai",
""
],
[
"Liu",
"Quanying",
""
]
] | Transfer learning improves the performance of the target task by leveraging the data of a specific source task: the closer the relationship between the source and the target tasks, the greater the performance improvement by transfer learning. In neuroscience, the relationship between cognitive tasks is usually represented by similarity of activated brain regions or neural representation. However, no study has linked transfer learning and neuroscience to reveal the relationship between cognitive tasks. In this study, we propose a transfer learning framework to reflect the relationship between cognitive tasks, and compare the task relations reflected by transfer learning and by the overlaps of brain regions (e.g., neurosynth). Our results of transfer learning create cognitive taskonomy to reflect the relationship between cognitive tasks which is well in line with the task relations derived from neurosynth. Transfer learning performs better in task decoding with fMRI data if the source and target cognitive tasks activate similar brain regions. Our study uncovers the relationship of multiple cognitive tasks and provides guidance for source task selection in transfer learning for neural decoding based on small-sample data. |
1605.09020 | Katy Rubin | Katy J. Rubin and Peter Sollich | Michaelis-Menten dynamics in protein subnetworks | null | null | 10.1063/1.4947478 | null | q-bio.MN physics.chem-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand the behaviour of complex systems it is often necessary to use
models that describe the dynamics of subnetworks. It has previously been
established using projection methods that such subnetwork dynamics generically
involves memory of the past, and that the memory functions can be calculated
explicitly for biochemical reaction networks made up of unary and binary
reactions. However, many established network models involve also
Michaelis-Menten kinetics, to describe e.g. enzymatic reactions. We show that
the projection approach to subnetwork dynamics can be extended to such
networks, thus significantly broadening its range of applicability. To derive
the extension we construct a larger network that represents enzymes and enzyme
complexes explicitly, obtain the projected equations, and finally take the
limit of fast enzyme reactions that gives back Michaelis-Menten kinetics. The
crucial point is that this limit can be taken in closed form. The outcome is a
simple procedure that allows one to obtain a description of subnetwork
dynamics, including memory functions, starting directly from any given network
of unary, binary and Michaelis-Menten reactions. Numerical tests show that this
closed form enzyme elimination gives a much more accurate description of the
subnetwork dynamics than the simpler method that represents enzymes explicitly,
and is also more efficient computationally.
| [
{
"created": "Sun, 29 May 2016 16:06:41 GMT",
"version": "v1"
}
] | 2016-06-08 | [
[
"Rubin",
"Katy J.",
""
],
[
"Sollich",
"Peter",
""
]
] | To understand the behaviour of complex systems it is often necessary to use models that describe the dynamics of subnetworks. It has previously been established using projection methods that such subnetwork dynamics generically involves memory of the past, and that the memory functions can be calculated explicitly for biochemical reaction networks made up of unary and binary reactions. However, many established network models involve also Michaelis-Menten kinetics, to describe e.g. enzymatic reactions. We show that the projection approach to subnetwork dynamics can be extended to such networks, thus significantly broadening its range of applicability. To derive the extension we construct a larger network that represents enzymes and enzyme complexes explicitly, obtain the projected equations, and finally take the limit of fast enzyme reactions that gives back Michaelis-Menten kinetics. The crucial point is that this limit can be taken in closed form. The outcome is a simple procedure that allows one to obtain a description of subnetwork dynamics, including memory functions, starting directly from any given network of unary, binary and Michaelis-Menten reactions. Numerical tests show that this closed form enzyme elimination gives a much more accurate description of the subnetwork dynamics than the simpler method that represents enzymes explicitly, and is also more efficient computationally. |
2312.02791 | Andrew Ligeralde | Andrew Ligeralde, Yilun Kuang, Thomas Edward Yerxa, Miah N. Pitcher,
Marla Feller, SueYeon Chung | Unsupervised learning on spontaneous retinal activity leads to efficient
neural representation geometry | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Prior to the onset of vision, neurons in the developing mammalian retina
spontaneously fire in correlated activity patterns known as retinal waves.
Experimental evidence suggests that retinal waves strongly influence the
emergence of sensory representations before visual experience. We aim to model
this early stage of functional development by using movies of neurally active
developing retinas as pre-training data for neural networks. Specifically, we
pre-train a ResNet-18 with an unsupervised contrastive learning objective
(SimCLR) on both simulated and experimentally-obtained movies of retinal waves,
then evaluate its performance on image classification tasks. We find that
pre-training on retinal waves significantly improves performance on tasks that
test object invariance to spatial translation, while slightly improving
performance on more complex tasks like image classification. Notably, these
performance boosts are realized on held-out natural images even though the
pre-training procedure does not include any natural image data. We then propose
a geometrical explanation for the increase in network performance, namely that
the spatiotemporal characteristics of retinal waves facilitate the formation of
separable feature representations. In particular, we demonstrate that networks
pre-trained on retinal waves are more effective at separating image manifolds
than randomly initialized networks, especially for manifolds defined by sets of
spatial translations. These findings indicate that the broad spatiotemporal
properties of retinal waves prepare networks for higher order feature
extraction.
| [
{
"created": "Tue, 5 Dec 2023 14:22:46 GMT",
"version": "v1"
}
] | 2023-12-06 | [
[
"Ligeralde",
"Andrew",
""
],
[
"Kuang",
"Yilun",
""
],
[
"Yerxa",
"Thomas Edward",
""
],
[
"Pitcher",
"Miah N.",
""
],
[
"Feller",
"Marla",
""
],
[
"Chung",
"SueYeon",
""
]
] | Prior to the onset of vision, neurons in the developing mammalian retina spontaneously fire in correlated activity patterns known as retinal waves. Experimental evidence suggests that retinal waves strongly influence the emergence of sensory representations before visual experience. We aim to model this early stage of functional development by using movies of neurally active developing retinas as pre-training data for neural networks. Specifically, we pre-train a ResNet-18 with an unsupervised contrastive learning objective (SimCLR) on both simulated and experimentally-obtained movies of retinal waves, then evaluate its performance on image classification tasks. We find that pre-training on retinal waves significantly improves performance on tasks that test object invariance to spatial translation, while slightly improving performance on more complex tasks like image classification. Notably, these performance boosts are realized on held-out natural images even though the pre-training procedure does not include any natural image data. We then propose a geometrical explanation for the increase in network performance, namely that the spatiotemporal characteristics of retinal waves facilitate the formation of separable feature representations. In particular, we demonstrate that networks pre-trained on retinal waves are more effective at separating image manifolds than randomly initialized networks, especially for manifolds defined by sets of spatial translations. These findings indicate that the broad spatiotemporal properties of retinal waves prepare networks for higher order feature extraction. |
1611.05082 | Hugo Jacquin | Hugo Jacquin, Amy Gilson, Eugene Shakhnovich, Simona Cocco, R\'emi
Monasson | Benchmarking inverse statistical approaches for protein structure and
design with exactly solvable models | Supplementary Information available at
http://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1004889 | PLoS Comput. Biol. 12(5): e1004889 (2016) | 10.1371/journal.pcbi.1004889 | null | q-bio.BM cond-mat.stat-mech physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inverse statistical approaches to determine protein structure and function
from Multiple Sequence Alignments (MSA) are emerging as powerful tools in
computational biology. However the underlying assumptions of the relationship
between the inferred effective Potts Hamiltonian and real protein structure and
energetics remain untested so far. Here we use lattice protein model (LP) to
benchmark those inverse statistical approaches. We build MSA of highly stable
sequences in target LP structures, and infer the effective pairwise Potts
Hamiltonians from those MSA. We find that inferred Potts Hamiltonians reproduce
many important aspects of 'true' LP structures and energetics. Careful analysis
reveals that effective pairwise couplings in inferred Potts Hamiltonians depend
not only on the energetics of the native structure but also on competing folds;
in particular, the coupling values reflect both positive design (stabilization
of native conformation) and negative design (destabilization of competing
folds). In addition to providing detailed structural information, the inferred
Potts models used as protein Hamiltonian for design of new sequences are able
to generate with high probability completely new sequences with the desired
folds, which is not possible using independent-site models. Those are
remarkable results as the effective LP Hamiltonians used to generate MSA are
not simple pairwise models due to the competition between the folds. Our
findings elucidate the reasons for the success of inverse approaches to the
modelling of proteins from sequence data, and their limitations.
| [
{
"created": "Tue, 15 Nov 2016 22:32:11 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Jacquin",
"Hugo",
""
],
[
"Gilson",
"Amy",
""
],
[
"Shakhnovich",
"Eugene",
""
],
[
"Cocco",
"Simona",
""
],
[
"Monasson",
"Rémi",
""
]
] | Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use lattice protein model (LP) to benchmark those inverse statistical approaches. We build MSA of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSA. We find that inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonian for design of new sequences are able to generate with high probability completely new sequences with the desired folds, which is not possible using independent-site models. Those are remarkable results as the effective LP Hamiltonians used to generate MSA are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations. |
1508.05782 | Jaan Aru | Madis Vasser, Markus K\"angsepp, Jaan Aru | Change Blindness in 3D Virtual Reality | null | null | null | null | q-bio.NC cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present change blindness study subjects explored stereoscopic three
dimensional (3D) environments through a virtual reality (VR) headset. A novel
method that tracked the subjects' head movements was used for inducing changes
in the scene whenever the changing object was out of the field of view. The
effect of change location (foreground or background in 3D depth) on change
blindness was investigated. Two experiments were conducted, one in the lab (n =
50) and the other online (n = 25). Up to 25% of the changes were undetected and
the mean overall search time was 27 seconds in the lab study. Results indicated
significantly lower change detection success and more change cycles if the
changes occurred in the background, with no differences in overall search
times. The results confirm findings from previous studies and extend them to 3D
environments. The study also demonstrates the feasibility of online VR
experiments.
| [
{
"created": "Mon, 24 Aug 2015 12:33:10 GMT",
"version": "v1"
}
] | 2015-08-25 | [
[
"Vasser",
"Madis",
""
],
[
"Kängsepp",
"Markus",
""
],
[
"Aru",
"Jaan",
""
]
] | In the present change blindness study subjects explored stereoscopic three dimensional (3D) environments through a virtual reality (VR) headset. A novel method that tracked the subjects' head movements was used for inducing changes in the scene whenever the changing object was out of the field of view. The effect of change location (foreground or background in 3D depth) on change blindness was investigated. Two experiments were conducted, one in the lab (n = 50) and the other online (n = 25). Up to 25% of the changes were undetected and the mean overall search time was 27 seconds in the lab study. Results indicated significantly lower change detection success and more change cycles if the changes occurred in the background, with no differences in overall search times. The results confirm findings from previous studies and extend them to 3D environments. The study also demonstrates the feasibility of online VR experiments. |
1005.3349 | Nicholas Chia | Nicholas Chia and Nigel Goldenfeld | The dynamics of gene duplication and transposons in microbial genomes
following a sudden environmental change | null | null | 10.1103/PhysRevE.83.021906 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variety of genome transformations can occur as a microbial population
adapts to a large environmental change. In particular, genomic surveys indicate
that, following the transition to an obligate, host-dependent symbiont, the
density of transposons first rises, then subsequently declines over
evolutionary time. Here, we show that these observations can be accounted for
by a class of generic stochastic models for the evolution of genomes in the
presence of continuous selection and gene duplication. The models use a fitness
function that allows for partial contributions from multiple gene copies, is an
increasing but bounded function of copy number, and is optimal for one fully
adapted gene copy. We use Monte Carlo simulation to show that the dynamics
result in an initial rise in gene copy number followed by a subsequent fall due
to adaptation to the new environmental parameters. These results are robust for
reasonable gene duplication and mutation parameters when adapting to a novel
target sequence. Our model provides a generic explanation for the dynamics of
microbial transposon density following a large environmental change such as
host restriction.
| [
{
"created": "Wed, 19 May 2010 01:37:07 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Nov 2010 04:43:29 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jan 2011 06:12:11 GMT",
"version": "v3"
}
] | 2015-03-17 | [
[
"Chia",
"Nicholas",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] | A variety of genome transformations can occur as a microbial population adapts to a large environmental change. In particular, genomic surveys indicate that, following the transition to an obligate, host-dependent symbiont, the density of transposons first rises, then subsequently declines over evolutionary time. Here, we show that these observations can be accounted for by a class of generic stochastic models for the evolution of genomes in the presence of continuous selection and gene duplication. The models use a fitness function that allows for partial contributions from multiple gene copies, is an increasing but bounded function of copy number, and is optimal for one fully adapted gene copy. We use Monte Carlo simulation to show that the dynamics result in an initial rise in gene copy number followed by a subsequent fall due to adaptation to the new environmental parameters. These results are robust for reasonable gene duplication and mutation parameters when adapting to a novel target sequence. Our model provides a generic explanation for the dynamics of microbial transposon density following a large environmental change such as host restriction.
2408.00711 | Amarpal Sahota | Amarpal Sahota, Amber Roguski, Matthew W Jones, Zahraa S. Abdallah and
Raul Santos-Rodriguez | Investigating Brain Connectivity and Regional Statistics from EEG for
early stage Parkinson's Classification | null | null | null | null | q-bio.NC cs.AI eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We evaluate the effectiveness of combining brain connectivity metrics with
signal statistics for early stage Parkinson's Disease (PD) classification using
electroencephalogram data (EEG). The data is from 5 arousal states - wakeful
and four sleep stages (N1, N2, N3 and REM). Our pipeline uses an Ada Boost
model for classification on a challenging early stage PD classification task
with only 30 participants (11 PD, 19 Healthy Control). Evaluating 9 brain
connectivity metrics we find the best connectivity metric to be different for
each arousal state with Phase Lag Index achieving the highest individual
classification accuracy of 86\% on N1 data. Further to this our pipeline using
regional signal statistics achieves an accuracy of 78\%, using brain
connectivity only achieves an accuracy of 86\% whereas combining the two
achieves a best accuracy of 91\%. This best performance is achieved on N1 data
using Phase Lag Index (PLI) combined with statistics derived from the frequency
characteristics of the EEG signal. This model also achieves a recall of 80\%
and precision of 96\%. Furthermore we find that on data from each arousal
state, combining PLI with regional signal statistics improves classification
accuracy versus using signal statistics or brain connectivity alone. Thus we
conclude that combining brain connectivity statistics with regional EEG
statistics is optimal for classifier performance on early stage Parkinson's.
Additionally, we find outperformance of N1 EEG for classification of
Parkinson's and expect this could be due to disrupted N1 sleep in PD. This
should be explored in future work.
| [
{
"created": "Thu, 1 Aug 2024 16:58:21 GMT",
"version": "v1"
}
] | 2024-08-02 | [
[
"Sahota",
"Amarpal",
""
],
[
"Roguski",
"Amber",
""
],
[
"Jones",
"Matthew W",
""
],
[
"Abdallah",
"Zahraa S.",
""
],
[
"Santos-Rodriguez",
"Raul",
""
]
] | We evaluate the effectiveness of combining brain connectivity metrics with signal statistics for early stage Parkinson's Disease (PD) classification using electroencephalogram data (EEG). The data is from 5 arousal states - wakeful and four sleep stages (N1, N2, N3 and REM). Our pipeline uses an Ada Boost model for classification on a challenging early stage PD classification task with only 30 participants (11 PD, 19 Healthy Control). Evaluating 9 brain connectivity metrics we find the best connectivity metric to be different for each arousal state with Phase Lag Index achieving the highest individual classification accuracy of 86\% on N1 data. Further to this our pipeline using regional signal statistics achieves an accuracy of 78\%, using brain connectivity only achieves an accuracy of 86\% whereas combining the two achieves a best accuracy of 91\%. This best performance is achieved on N1 data using Phase Lag Index (PLI) combined with statistics derived from the frequency characteristics of the EEG signal. This model also achieves a recall of 80\% and precision of 96\%. Furthermore we find that on data from each arousal state, combining PLI with regional signal statistics improves classification accuracy versus using signal statistics or brain connectivity alone. Thus we conclude that combining brain connectivity statistics with regional EEG statistics is optimal for classifier performance on early stage Parkinson's. Additionally, we find outperformance of N1 EEG for classification of Parkinson's and expect this could be due to disrupted N1 sleep in PD. This should be explored in future work.
1701.06122 | Tal Einav | Tal Einav, Rob Phillips | Monod-Wyman-Changeux Analysis of Ligand-Gated Ion Channel Mutants | null | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework for computing the gating properties of ligand-gated
ion channel mutants using the Monod-Wyman-Changeux (MWC) model of allostery. We
derive simple analytic formulas for key functional properties such as the
leakiness, dynamic range, half-maximal effective concentration, and effective
Hill coefficient, and explore the full spectrum of phenotypes that are
accessible through mutations. Specifically, we consider mutations in the
channel pore of nicotinic acetylcholine receptor (nAChR) and the ligand binding
domain of a cyclic nucleotide-gated (CNG) ion channel, demonstrating how each
mutation can be characterized as only affecting a subset of the biophysical
parameters. In addition, we show how the unifying perspective offered by the
MWC model allows us, perhaps surprisingly, to collapse the plethora of
dose-response data from different classes of ion channels into a universal
family of curves.
| [
{
"created": "Sun, 22 Jan 2017 05:10:51 GMT",
"version": "v1"
}
] | 2017-01-24 | [
[
"Einav",
"Tal",
""
],
[
"Phillips",
"Rob",
""
]
] | We present a framework for computing the gating properties of ligand-gated ion channel mutants using the Monod-Wyman-Changeux (MWC) model of allostery. We derive simple analytic formulas for key functional properties such as the leakiness, dynamic range, half-maximal effective concentration, and effective Hill coefficient, and explore the full spectrum of phenotypes that are accessible through mutations. Specifically, we consider mutations in the channel pore of nicotinic acetylcholine receptor (nAChR) and the ligand binding domain of a cyclic nucleotide-gated (CNG) ion channel, demonstrating how each mutation can be characterized as only affecting a subset of the biophysical parameters. In addition, we show how the unifying perspective offered by the MWC model allows us, perhaps surprisingly, to collapse the plethora of dose-response data from different classes of ion channels into a universal family of curves. |
1911.11304 | Caitlin Loeffler | Caitlin Loeffler, Keylie M. Gibson, Lana Martin, Liz Chang, Jeremy
Rotman, Ian V. Toma, Christopher E. Mason, Eleazar Eskin, Joseph P. Zackular,
Keith A. Crandall, David Koslicki, Serghei Mangul | Metagenomics for clinical diagnostics: technologies and informatics | 75 pages, 7 figures, 2 tables, 4 supplementary table, review paper | null | null | null | q-bio.QM q-bio.GN | http://creativecommons.org/publicdomain/zero/1.0/ | The human-associated microbiome is closely tied to human health and is of
substantial clinical interest. Metagenomics-based tools are emerging for
clinical diagnostics, tracking the spread of diseases, and surveillance of
potential pathogens. In some cases, these tools are overcoming limitations of
traditional clinical approaches. Metagenomics has limitations barring the tools
from clinical validation. Once these hurdles are overcome, clinical
metagenomics will inform doctors of the best, targeted treatment for their
patients and provide early detection of disease. Here we present an overview of
metagenomics methods with a discussion of computational challenges and
limitations.
| [
{
"created": "Tue, 26 Nov 2019 01:36:30 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Aug 2020 02:44:42 GMT",
"version": "v2"
}
] | 2020-08-11 | [
[
"Loeffler",
"Caitlin",
""
],
[
"Gibson",
"Keylie M.",
""
],
[
"Martin",
"Lana",
""
],
[
"Chang",
"Liz",
""
],
[
"Rotman",
"Jeremy",
""
],
[
"Toma",
"Ian V.",
""
],
[
"Mason",
"Christopher E.",
""
],
[
... | The human-associated microbiome is closely tied to human health and is of substantial clinical interest. Metagenomics-based tools are emerging for clinical diagnostics, tracking the spread of diseases, and surveillance of potential pathogens. In some cases, these tools are overcoming limitations of traditional clinical approaches. Metagenomics has limitations barring the tools from clinical validation. Once these hurdles are overcome, clinical metagenomics will inform doctors of the best, targeted treatment for their patients and provide early detection of disease. Here we present an overview of metagenomics methods with a discussion of computational challenges and limitations. |
1905.06301 | Chaitanya A. Athale | Yash Joshi, Yash Kiran Jawale and Chaitanya Anil Athale | Tunability of the Dual Feedback Genetic Oscillator Modeled by the
Asymmetry in Transcription and Translation | This work was begun as a part of an iGEM project | Phys. Rev. E 101, 012417 (2020) | 10.1103/PhysRevE.101.012417 | null | q-bio.MN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oscillatory gene circuits are ubiquitous to biology and are involved in
fundamental processes of cell cycle, circadian rhythms and developmental
systems. The synthesis of small, non-natural oscillatory genetic circuits have
been increasingly used to test fundamental principles of genetic network
dynamics. A recently developed fast, tunable genetic oscillator by Stricker et
al.[23] has demonstrated robustness and tunability of oscillatory behavior by
combining positive and negative feedback loops. This oscillator combining lacI
(negative) and araC (positive) feedback loops, was however modeled using
multiple layers of differential equations to capture the molecular complexity
of regulation, in order to explain the experimentally measured oscillations. We
have developed a reduced model based on delay differential equations (DDEs) of
this dual feedback loop oscillator, that reproduces the tunability of
oscillator period and amplitude based on the concentration of the two inducers
isopropyl b-D-1-thiogalactopyranoside (IPTG) and arabinose. Previous work had
predicted a need for an asymmetry in copy numbers of activator (araC) and
repressor (lacI) genes encoded on plasmids. We use our reduced model to
redesign the network by comparing the effect of asymmetry in gene expression at
the level of (a) DNA copy numbers and the rates of (b) mRNA translation and (c)
degradation. We find the minimal period of the oscillator is sensitive to DNA
copy number asymmetry, but translation rate asymmetry has an identical effect
as plasmid copy numbers, while modulating the asymmetry in mRNA degradation can
improve the tunability of period of the oscillator, together with increased
robustness to replication 'noise' and influence of the host cell cycle. Thus,
our model predicts experimentally testable principles to redesign a potentially
more robust oscillatory genetic network.
| [
{
"created": "Wed, 15 May 2019 17:19:50 GMT",
"version": "v1"
}
] | 2020-02-05 | [
[
"Joshi",
"Yash",
""
],
[
"Jawale",
"Yash Kiran",
""
],
[
"Athale",
"Chaitanya Anil",
""
]
] | Oscillatory gene circuits are ubiquitous to biology and are involved in fundamental processes of cell cycle, circadian rhythms and developmental systems. The synthesis of small, non-natural oscillatory genetic circuits has been increasingly used to test fundamental principles of genetic network dynamics. A recently developed fast, tunable genetic oscillator by Stricker et al.[23] has demonstrated robustness and tunability of oscillatory behavior by combining positive and negative feedback loops. This oscillator, combining lacI (negative) and araC (positive) feedback loops, was however modeled using multiple layers of differential equations to capture the molecular complexity of regulation, in order to explain the experimentally measured oscillations. We have developed a reduced model based on delay differential equations (DDEs) of this dual feedback loop oscillator, which reproduces the tunability of oscillator period and amplitude based on the concentration of the two inducers isopropyl β-D-1-thiogalactopyranoside (IPTG) and arabinose. Previous work had predicted a need for an asymmetry in copy numbers of activator (araC) and repressor (lacI) genes encoded on plasmids. We use our reduced model to redesign the network by comparing the effect of asymmetry in gene expression at the level of (a) DNA copy numbers and the rates of (b) mRNA translation and (c) degradation. We find the minimal period of the oscillator is sensitive to DNA copy number asymmetry, but translation rate asymmetry has an identical effect as plasmid copy numbers, while modulating the asymmetry in mRNA degradation can improve the tunability of period of the oscillator, together with increased robustness to replication 'noise' and influence of the host cell cycle. Thus, our model predicts experimentally testable principles to redesign a potentially more robust oscillatory genetic network.
1712.01146 | Karel B\v{r}inda | Karel B\v{r}inda, Valentina Boeva, Gregory Kucherov | Ococo: an online variant and consensus caller | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Motivation: Identifying genomic variants is an essential step for connecting
genotype and phenotype. The usual approach consists of statistical inference of
variants from alignments of sequencing reads. State-of-the-art variant callers
can resolve a wide range of different variant types with high accuracy.
However, they require that all read alignments be available from the beginning
of variant calling and be sorted by coordinates. Sorting is computationally
expensive, both memory- and speed-wise, and the resulting pipelines suffer from
storing and retrieving large alignment files from external memory. Therefore,
there is interest in developing methods for resource-efficient variant calling.
Results: We present Ococo, the first program capable of inferring variants in
real time, as read alignments are fed in. Ococo inputs unsorted alignments
from a stream and infers single-nucleotide variants, together with a genomic
consensus, using statistics stored in compact several-bit counters. Ococo
provides a fast and memory-efficient alternative to the usual variant calling.
It is particularly advantageous when reads are sequenced or mapped
progressively, or when available computational resources are at a premium.
| [
{
"created": "Mon, 4 Dec 2017 15:30:24 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Nov 2018 23:24:33 GMT",
"version": "v2"
}
] | 2018-11-07 | [
[
"Břinda",
"Karel",
""
],
[
"Boeva",
"Valentina",
""
],
[
"Kucherov",
"Gregory",
""
]
] | Motivation: Identifying genomic variants is an essential step for connecting genotype and phenotype. The usual approach consists of statistical inference of variants from alignments of sequencing reads. State-of-the-art variant callers can resolve a wide range of different variant types with high accuracy. However, they require that all read alignments be available from the beginning of variant calling and be sorted by coordinates. Sorting is computationally expensive, both memory- and speed-wise, and the resulting pipelines suffer from storing and retrieving large alignment files from external memory. Therefore, there is interest in developing methods for resource-efficient variant calling. Results: We present Ococo, the first program capable of inferring variants in real time, as read alignments are fed in. Ococo inputs unsorted alignments from a stream and infers single-nucleotide variants, together with a genomic consensus, using statistics stored in compact several-bit counters. Ococo provides a fast and memory-efficient alternative to the usual variant calling. It is particularly advantageous when reads are sequenced or mapped progressively, or when available computational resources are at a premium.
1203.2430 | Taoyang Wu | Si Li, Kwok Pui Choi, Taoyang Wu, Louxin Zhang | Reconstruction of Network Evolutionary History from Extant Network
Topology and Duplication History | 15 pages, 5 figures, submitted to ISBRA 2012 | null | null | null | q-bio.PE math.CO q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genome-wide protein-protein interaction (PPI) data are readily available
thanks to recent breakthroughs in biotechnology. However, PPI networks of
extant organisms are only snapshots of the network evolution. How to infer the
whole evolution history becomes a challenging problem in computational biology.
In this paper, we present a likelihood-based approach to inferring network
evolution history from the topology of PPI networks and the duplication
relationship among the paralogs. Simulations show that our approach outperforms
the existing ones in terms of the accuracy of reconstruction. Moreover, the
growth parameters of several real PPI networks estimated by our method are more
consistent with the ones predicted in literature.
| [
{
"created": "Mon, 12 Mar 2012 09:20:34 GMT",
"version": "v1"
}
] | 2012-03-13 | [
[
"Li",
"Si",
""
],
[
"Choi",
"Kwok Pui",
""
],
[
"Wu",
"Taoyang",
""
],
[
"Zhang",
"Louxin",
""
]
] | Genome-wide protein-protein interaction (PPI) data are readily available thanks to recent breakthroughs in biotechnology. However, PPI networks of extant organisms are only snapshots of the network evolution. How to infer the whole evolution history becomes a challenging problem in computational biology. In this paper, we present a likelihood-based approach to inferring network evolution history from the topology of PPI networks and the duplication relationship among the paralogs. Simulations show that our approach outperforms the existing ones in terms of the accuracy of reconstruction. Moreover, the growth parameters of several real PPI networks estimated by our method are more consistent with the ones predicted in literature. |
2309.15326 | Alexander Browning | Alexander P Browning and Maria Tasc\u{a} and Carles Falc\'o and Ruth E
Baker | Structural identifiability analysis of linear
reaction-advection-diffusion processes in mathematical biology | null | null | null | null | q-bio.QM stat.ME | http://creativecommons.org/licenses/by/4.0/ | Effective application of mathematical models to interpret biological data and
make accurate predictions often requires that model parameters are
identifiable. Approaches to assess the so-called structural identifiability of
models are well-established for ordinary differential equation models, yet
there are no commonly adopted approaches that can be applied to assess the
structural identifiability of the partial differential equation (PDE) models
that are requisite to capture spatial features inherent to many phenomena. The
differential algebra approach to structural identifiability has recently been
demonstrated to be applicable to several specific PDE models. In this brief
article, we present general methodology for performing structural
identifiability analysis on partially observed reaction-advection-diffusion
(RAD) PDE models that are linear in the unobserved quantities. We show that the
differential algebra approach can always, in theory, be applied to such models.
Moreover, despite the perceived complexity introduced by the addition of
advection and diffusion terms, the structural identifiability of spatial
analogues of non-spatial models cannot decrease. We conclude
by discussing future possibilities and the computational cost of performing
structural identifiability analysis on more general PDE models.
| [
{
"created": "Wed, 27 Sep 2023 00:12:20 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Sep 2023 23:21:46 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Feb 2024 07:12:17 GMT",
"version": "v3"
}
] | 2024-02-28 | [
[
"Browning",
"Alexander P",
""
],
[
"Tască",
"Maria",
""
],
[
"Falcó",
"Carles",
""
],
[
"Baker",
"Ruth E",
""
]
] | Effective application of mathematical models to interpret biological data and make accurate predictions often requires that model parameters are identifiable. Approaches to assess the so-called structural identifiability of models are well-established for ordinary differential equation models, yet there are no commonly adopted approaches that can be applied to assess the structural identifiability of the partial differential equation (PDE) models that are requisite to capture spatial features inherent to many phenomena. The differential algebra approach to structural identifiability has recently been demonstrated to be applicable to several specific PDE models. In this brief article, we present general methodology for performing structural identifiability analysis on partially observed reaction-advection-diffusion (RAD) PDE models that are linear in the unobserved quantities. We show that the differential algebra approach can always, in theory, be applied to such models. Moreover, despite the perceived complexity introduced by the addition of advection and diffusion terms, the structural identifiability of spatial analogues of non-spatial models cannot decrease. We conclude by discussing future possibilities and the computational cost of performing structural identifiability analysis on more general PDE models.
2204.05919 | Lei Fang Mr | Lei Fang and Junren Li and Ming Zhao and Li Tan and Jian-Guang Lou | Leveraging Reaction-aware Substructures for Retrosynthesis Analysis | Work in progress | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Retrosynthesis analysis is a critical task in organic chemistry central to
many important industries. Previously, various machine learning approaches have
achieved promising results on this task by representing output molecules as
strings and decoding them autoregressively, token-by-token, with generative
models.
Text generation and machine translation models from natural language
processing were frequently utilized. The token-by-token decoding approach is
not intuitive from a chemistry perspective because some substructures are
relatively stable and remain unchanged during reactions. In this paper, we
propose a substructure-level decoding model, where the substructures are
reaction-aware and can be automatically extracted with a fully data-driven
approach. Our approach achieved improvement over previously reported models,
and we find that the performance can be further boosted if the accuracy of
substructure extraction is improved. The substructures extracted by our
approach can provide users with better insights for decision-making compared to
existing methods. We hope this work will generate interest in this fast-growing
and highly interdisciplinary area of retrosynthesis prediction and other
related topics.
| [
{
"created": "Tue, 12 Apr 2022 16:25:51 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Apr 2022 10:31:30 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Apr 2022 09:06:45 GMT",
"version": "v3"
},
{
"created": "Sun, 18 Sep 2022 10:13:49 GMT",
"version": "v4"
}
] | 2022-09-20 | [
[
"Fang",
"Lei",
""
],
[
"Li",
"Junren",
""
],
[
"Zhao",
"Ming",
""
],
[
"Tan",
"Li",
""
],
[
"Lou",
"Jian-Guang",
""
]
] | Retrosynthesis analysis is a critical task in organic chemistry central to many important industries. Previously, various machine learning approaches have achieved promising results on this task by representing output molecules as strings and decoding them autoregressively, token-by-token, with generative models. Text generation and machine translation models from natural language processing were frequently utilized. The token-by-token decoding approach is not intuitive from a chemistry perspective because some substructures are relatively stable and remain unchanged during reactions. In this paper, we propose a substructure-level decoding model, where the substructures are reaction-aware and can be automatically extracted with a fully data-driven approach. Our approach achieved improvement over previously reported models, and we find that the performance can be further boosted if the accuracy of substructure extraction is improved. The substructures extracted by our approach can provide users with better insights for decision-making compared to existing methods. We hope this work will generate interest in this fast-growing and highly interdisciplinary area of retrosynthesis prediction and other related topics.
1409.1199 | Stephen Plaza PhD | Stephen M. Plaza | Focused Proofreading: Efficiently Extracting Connectomes from Segmented
EM Images | null | null | null | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying complex neural circuitry from electron microscopic (EM) images
may help unlock the mysteries of the brain. However, identifying this circuitry
requires time-consuming, manual tracing (proofreading) due to the size and
intricacy of these image datasets, thus limiting state-of-the-art analysis to
very small brain regions. Potential avenues to improve scalability include
automatic image segmentation and crowd sourcing, but current efforts have had
limited success. In this paper, we propose a new strategy, focused
proofreading, that works with automatic segmentation and aims to limit
proofreading to the regions of a dataset that are most impactful to the
resulting circuit. We then introduce a novel workflow, which exploits
biological information such as synapses, and apply it to a large dataset in the
fly optic lobe. With our techniques, we achieve significant tracing speedups of
3-5x without sacrificing the quality of the resulting circuit. Furthermore, our
methodology makes the task of proofreading much more accessible and hence
potentially enhances the effectiveness of crowd sourcing.
| [
{
"created": "Wed, 3 Sep 2014 19:14:13 GMT",
"version": "v1"
}
] | 2014-09-04 | [
[
"Plaza",
"Stephen M.",
""
]
] | Identifying complex neural circuitry from electron microscopic (EM) images may help unlock the mysteries of the brain. However, identifying this circuitry requires time-consuming, manual tracing (proofreading) due to the size and intricacy of these image datasets, thus limiting state-of-the-art analysis to very small brain regions. Potential avenues to improve scalability include automatic image segmentation and crowd sourcing, but current efforts have had limited success. In this paper, we propose a new strategy, focused proofreading, that works with automatic segmentation and aims to limit proofreading to the regions of a dataset that are most impactful to the resulting circuit. We then introduce a novel workflow, which exploits biological information such as synapses, and apply it to a large dataset in the fly optic lobe. With our techniques, we achieve significant tracing speedups of 3-5x without sacrificing the quality of the resulting circuit. Furthermore, our methodology makes the task of proofreading much more accessible and hence potentially enhances the effectiveness of crowd sourcing. |
1902.03238 | James Larus | Sahand Kashani and Stuart Byma and James R. Larus | IMPACT: Interval-based Multi-pass Proteomic Alignment with Constant
Traceback | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Darwin is a genomics co-processor that achieved a 15000x acceleration on long
read assembly through innovative hardware and algorithm co-design. Darwin's
algorithms and hardware implementation were specifically designed for DNA
analysis pipelines. This paper analyzes the feasibility of applying Darwin's
algorithms to the problem of protein sequence alignment. In addition to a
behavioral analysis of Darwin when aligning proteins, we propose an algorithmic
improvement to Darwin's alignment algorithm, GACT, in the form of a multi-pass
variant that increases its accuracy on protein sequence alignment. Concretely,
our proposed multi-pass variant of GACT achieves on average 14% better
alignment scores.
| [
{
"created": "Sat, 9 Feb 2019 16:25:59 GMT",
"version": "v1"
}
] | 2019-02-12 | [
[
"Kashani",
"Sahand",
""
],
[
"Byma",
"Stuart",
""
],
[
"Larus",
"James R.",
""
]
] | Darwin is a genomics co-processor that achieved a 15000x acceleration on long read assembly through innovative hardware and algorithm co-design. Darwin's algorithms and hardware implementation were specifically designed for DNA analysis pipelines. This paper analyzes the feasibility of applying Darwin's algorithms to the problem of protein sequence alignment. In addition to a behavioral analysis of Darwin when aligning proteins, we propose an algorithmic improvement to Darwin's alignment algorithm, GACT, in the form of a multi-pass variant that increases its accuracy on protein sequence alignment. Concretely, our proposed multi-pass variant of GACT achieves on average 14% better alignment scores.
1304.4460 | Filippos Klironomos | Filippos D. Klironomos, Juliette de Meaux, Johannes Berg | Can we always sweep the details of RNA-processing under the carpet? | null | null | 10.1088/1478-3975/10/5/056007 | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA molecules follow a succession of enzyme-mediated processing steps from
transcription until maturation. The participating enzymes, for example the
spliceosome for mRNAs and Drosha and Dicer for microRNAs, are also produced in
the cell and their copy-numbers fluctuate over time. Enzyme copy-number changes
affect the processing rate of the substrate molecules; high enzyme numbers
increase the processing probability, low enzyme numbers decrease it. We study
different RNA processing cascades where enzyme copy-numbers are either fixed or
fluctuate. We find that for fixed enzyme-copy numbers the substrates at
steady-state are Poisson-distributed, and the whole RNA cascade dynamics can be
understood as a single birth-death process of the mature RNA product. In this
case, solely fluctuations in the timing of RNA processing lead to variation in
the number of RNA molecules. However, we show analytically and numerically that
when enzyme copy-numbers fluctuate, the strength of RNA fluctuations increases
linearly with the RNA transcription rate. This linear effect becomes stronger
as the speed of enzyme dynamics decreases relative to the speed of RNA
dynamics. Interestingly, we find that under certain conditions, the RNA cascade
can reduce the strength of fluctuations in the expression level of the mature
RNA product. Finally, by investigating the effects of processing polymorphisms
we show that it is possible for the effects of transcriptional polymorphisms to
be enhanced, reduced, or even reversed. Our results provide a framework to
understand the dynamics of RNA processing.
| [
{
"created": "Tue, 16 Apr 2013 14:26:24 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2013 08:13:03 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Sep 2013 12:33:03 GMT",
"version": "v3"
}
] | 2015-06-15 | [
[
"Klironomos",
"Filippos D.",
""
],
[
"de Meaux",
"Juliette",
""
],
[
"Berg",
"Johannes",
""
]
] | RNA molecules follow a succession of enzyme-mediated processing steps from transcription until maturation. The participating enzymes, for example the spliceosome for mRNAs and Drosha and Dicer for microRNAs, are also produced in the cell and their copy-numbers fluctuate over time. Enzyme copy-number changes affect the processing rate of the substrate molecules; high enzyme numbers increase the processing probability, low enzyme numbers decrease it. We study different RNA processing cascades where enzyme copy-numbers are either fixed or fluctuate. We find that for fixed enzyme-copy numbers the substrates at steady-state are Poisson-distributed, and the whole RNA cascade dynamics can be understood as a single birth-death process of the mature RNA product. In this case, solely fluctuations in the timing of RNA processing lead to variation in the number of RNA molecules. However, we show analytically and numerically that when enzyme copy-numbers fluctuate, the strength of RNA fluctuations increases linearly with the RNA transcription rate. This linear effect becomes stronger as the speed of enzyme dynamics decreases relative to the speed of RNA dynamics. Interestingly, we find that under certain conditions, the RNA cascade can reduce the strength of fluctuations in the expression level of the mature RNA product. Finally, by investigating the effects of processing polymorphisms we show that it is possible for the effects of transcriptional polymorphisms to be enhanced, reduced, or even reversed. Our results provide a framework to understand the dynamics of RNA processing. |
1910.09746 | Michael Phillips | Michael Phillips | Hysteresis Effects in Social Behavior with Parasitic Infection | 7 pages, 6 figures; accepted in Journal of Statistical Physics (2020) | null | 10.1007/s10955-020-02580-6 | null | q-bio.PE nlin.AO physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work has found that the behavior of an individual can be altered when
infected by a parasite. Here we explore the question: under what conditions, in
principle, can a general parasitic infection control system-wide social
behaviors? We analyze fixed points and hysteresis effects under the Master
Equation, with transitions between two behaviors given two different
subpopulations, healthy vs. parasitically-infected, within a population which
is kept fixed overall. The key model choices are: (i) the internal opinion of
infected humans may differ from that of the healthy population, (ii) the extent
that interaction drives behavioral changes may also differ, and (iii) indirect
interactions are most important. We find that the socio-configuration can be
controlled by the parasitically-infected population, under some conditions,
even if the healthy population is the majority and of opposite opinion.
| [
{
"created": "Sun, 20 Oct 2019 20:50:16 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jun 2020 23:13:12 GMT",
"version": "v2"
}
] | 2020-06-11 | [
[
"Phillips",
"Michael",
""
]
] | Recent work has found that the behavior of an individual can be altered when infected by a parasite. Here we explore the question: under what conditions, in principle, can a general parasitic infection control system-wide social behaviors? We analyze fixed points and hysteresis effects under the Master Equation, with transitions between two behaviors given two different subpopulations, healthy vs. parasitically-infected, within a population which is kept fixed overall. The key model choices are: (i) the internal opinion of infected humans may differ from that of the healthy population, (ii) the extent that interaction drives behavioral changes may also differ, and (iii) indirect interactions are most important. We find that the socio-configuration can be controlled by the parasitically-infected population, under some conditions, even if the healthy population is the majority and of opposite opinion. |
q-bio/0509042 | Atul Narang | Atul Narang | Comparative analysis of some models of mixed-substrate microbial growth | 5 figures | null | null | null | q-bio.MN | null | Mixed-substrate microbial growth is among the most intensely studied systems
in molecular microbiology. Several mathematical models have been developed to
account for the genetic regulation of such systems, especially those resulting
in diauxic growth. In this work, we compare the dynamics of three such models
(Narang, Biotech. Bioeng., 59, 116, 1998; Thattai & Shraiman, Biophys. J, 85,
744, 2003; Brandt et al, Water Research, 38, 1004, 2004). We show that these
models are dynamically similar - the initial motion of the inducible enzymes in
all the models is described by Lotka-Volterra equations for competing species.
The dynamic similarity occurs because in all the models, the inducible enzymes
possess properties characteristic of competing species: Their synthesis is
autocatalytic, and they inhibit each other. Despite this dynamic similarity,
the models vary with respect to the range of dynamics captured. The Brandt et
al model captures only the diauxic growth pattern, whereas the remaining two
models capture both diauxic and non-diauxic growth patterns. The models also
differ with respect to the mechanisms that generate the mutual inhibition
between the enzymes. In the Narang model, the mutual inhibition occurs because
the enzymes for each substrate enhance the dilution of the enzymes for the
other substrate. In the Thattai & Shraiman model, the mutual inhibition is
entirely due to competition for the phosphoryl groups.
| [
{
"created": "Thu, 29 Sep 2005 16:20:30 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Narang",
"Atul",
""
]
] | Mixed-substrate microbial growth is among the most intensely studied systems in molecular microbiology. Several mathematical models have been developed to account for the genetic regulation of such systems, especially those resulting in diauxic growth. In this work, we compare the dynamics of three such models (Narang, Biotech. Bioeng., 59, 116, 1998; Thattai & Shraiman, Biophys. J, 85, 744, 2003; Brandt et al, Water Research, 38, 1004, 2004). We show that these models are dynamically similar - the initial motion of the inducible enzymes in all the models is described by Lotka-Volterra equations for competing species. The dynamic similarity occurs because in all the models, the inducible enzymes possess properties characteristic of competing species: Their synthesis is autocatalytic, and they inhibit each other. Despite this dynamic similarity, the models vary with respect to the range of dynamics captured. The Brandt et al model captures only the diauxic growth pattern, whereas the remaining two models capture both diauxic and non-diauxic growth patterns. The models also differ with respect to the mechanisms that generate the mutual inhibition between the enzymes. In the Narang model, the mutual inhibition occurs because the enzymes for each substrate enhance the dilution of the enzymes for the other substrate. In the Thattai & Shraiman model, the mutual inhibition is entirely due to competition for the phosphoryl groups. |
0808.3996 | Marcelo Magnasco | Marcelo O. Magnasco, Oreste Piro, Guillermo A. Cecchi | Dynamical and Statistical Criticality in a Model of Neural Tissue | null | null | 10.1103/PhysRevLett.102.258102 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the nervous system to work at all, a delicate balance of excitation and
inhibition must be achieved. However, when such a balance is sought by global
strategies, only few modes remain balanced close to instability, and all other
modes are strongly stable. Here we present a simple model of neural tissue in
which this balance is sought locally by neurons following `anti-Hebbian'
behavior: {\sl all} degrees of freedom achieve a close balance of excitation
and inhibition and become "critical" in the dynamical sense. At long
timescales, the modes of our model oscillate around the instability line, so an
extremely complex "breakout" dynamics ensues in which different modes of the
system oscillate between prominence and extinction. We show the system develops
various anomalous statistical behaviours and hence becomes self-organized
critical in the statistical sense.
| [
{
"created": "Thu, 28 Aug 2008 21:30:22 GMT",
"version": "v1"
}
] | 2013-05-29 | [
[
"Magnasco",
"Marcelo O.",
""
],
[
"Piro",
"Oreste",
""
],
[
"Cecchi",
"Guillermo A.",
""
]
] | For the nervous system to work at all, a delicate balance of excitation and inhibition must be achieved. However, when such a balance is sought by global strategies, only few modes remain balanced close to instability, and all other modes are strongly stable. Here we present a simple model of neural tissue in which this balance is sought locally by neurons following `anti-Hebbian' behavior: {\sl all} degrees of freedom achieve a close balance of excitation and inhibition and become "critical" in the dynamical sense. At long timescales, the modes of our model oscillate around the instability line, so an extremely complex "breakout" dynamics ensues in which different modes of the system oscillate between prominence and extinction. We show the system develops various anomalous statistical behaviours and hence becomes self-organized critical in the statistical sense. |
1506.02087 | Min Xu | Min Xu | Global Gene Expression Analysis Using Machine Learning Methods | Author's master thesis (National University of Singapore, May 2003).
Adviser: Rudy Setiono | null | null | null | q-bio.QM cs.CE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microarray is a technology to quantitatively monitor the expression of a large
number of genes in parallel. It has become one of the main tools for global
gene expression analysis in molecular biology research in recent years. The
large amount of expression data generated by this technology makes the study of
certain complex biological problems possible and machine learning methods are
playing a crucial role in the analysis process. At present, many machine
learning methods have been or have the potential to be applied to major areas
of gene expression analysis. These areas include clustering, classification,
dynamic modeling and reverse engineering.
In this thesis, we focus our work on using machine learning methods to solve
the classification problems arising from microarray data. We first identify the
major types of the classification problems; then apply several machine learning
methods to solve the problems and perform systematic tests on real and
artificial datasets. We propose improvements to existing methods. Specifically,
we develop a multivariate and a hybrid feature selection method to obtain high
classification performance for high-dimensional classification problems. Using
the hybrid feature selection method, we are able to identify small sets of
features that give predictive accuracy that is as good as that from other
methods which require many more features.
| [
{
"created": "Fri, 5 Jun 2015 23:37:20 GMT",
"version": "v1"
}
] | 2015-06-18 | [
[
"Xu",
"Min",
""
]
] | Microarray is a technology to quantitatively monitor the expression of a large number of genes in parallel. It has become one of the main tools for global gene expression analysis in molecular biology research in recent years. The large amount of expression data generated by this technology makes the study of certain complex biological problems possible and machine learning methods are playing a crucial role in the analysis process. At present, many machine learning methods have been or have the potential to be applied to major areas of gene expression analysis. These areas include clustering, classification, dynamic modeling and reverse engineering. In this thesis, we focus our work on using machine learning methods to solve the classification problems arising from microarray data. We first identify the major types of the classification problems; then apply several machine learning methods to solve the problems and perform systematic tests on real and artificial datasets. We propose improvements to existing methods. Specifically, we develop a multivariate and a hybrid feature selection method to obtain high classification performance for high-dimensional classification problems. Using the hybrid feature selection method, we are able to identify small sets of features that give predictive accuracy that is as good as that from other methods which require many more features.
1901.06023 | Benjamin Kompa | Benjamin Kompa and Beau Coker | Learning a Generative Model of Cancer Metastasis | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce a Unified Disentanglement Network (UFDN) trained on The Cancer
Genome Atlas (TCGA). We demonstrate that the UFDN learns a biologically
relevant latent space of gene expression data by applying our network to two
classification tasks of cancer status and cancer type. Our UFDN specific
algorithms perform comparably to random forest methods. The UFDN allows for
continuous, partial interpolation between distinct cancer types. Furthermore,
we perform an analysis of differentially expressed genes between skin cutaneous
melanoma (SKCM) samples and the same samples interpolated into glioblastoma
(GBM). We demonstrate that our interpolations learn relevant metagenes that
recapitulate known glioblastoma mechanisms and suggest possible starting points
for investigations into the metastasis of SKCM into GBM.
| [
{
"created": "Thu, 17 Jan 2019 22:39:41 GMT",
"version": "v1"
}
] | 2019-01-21 | [
[
"Kompa",
"Benjamin",
""
],
[
"Coker",
"Beau",
""
]
] | We introduce a Unified Disentanglement Network (UFDN) trained on The Cancer Genome Atlas (TCGA). We demonstrate that the UFDN learns a biologically relevant latent space of gene expression data by applying our network to two classification tasks of cancer status and cancer type. Our UFDN specific algorithms perform comparably to random forest methods. The UFDN allows for continuous, partial interpolation between distinct cancer types. Furthermore, we perform an analysis of differentially expressed genes between skin cutaneous melanoma(SKCM) samples and the same samples interpolated into glioblastoma (GBM). We demonstrate that our interpolations learn relevant metagenes that recapitulate known glioblastoma mechanisms and suggest possible starting points for investigations into the metastasis of SKCM into GBM. |
2201.05259 | Andy Goldschmidt | Andy Goldschmidt, James Kunert-Graf, Adrian C. Scott, Zhihao Tan,
Aim\'ee M. Dudley, J. Nathan Kutz | Quantifying yeast colony morphologies with feature engineering from
time-lapse photography | 15 pages; 7 pages text, 8 pages tables and figures; 4 figures, 4
tables | null | 10.1038/s41597-022-01340-3 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Baker's yeast (Saccharomyces cerevisiae) is a model organism for studying the
morphology that emerges at the scale of multi-cell colonies. To look at how
morphology develops, we collect a dataset of time-lapse photographs of the
growth of different strains of S. cerevisiae. We discuss the general
statistical challenges that arise when using time-lapse photographs to extract
time-dependent features. In particular, we show how texture-based feature
engineering and representative clustering can be successfully applied to
categorize the development of yeast colony morphology using our dataset. The
local binary pattern (LBP) from image processing is used to score the surface
texture of colonies. This texture score develops along a smooth trajectory
during growth. The path taken depends on how the morphology emerges. A
hierarchical clustering of the colonies is performed according to their texture
development trajectories. The clustering method is designed for practical
interpretability; it obtains the best representative colony image for any
hierarchical sub-cluster.
| [
{
"created": "Fri, 14 Jan 2022 00:30:40 GMT",
"version": "v1"
}
] | 2022-06-07 | [
[
"Goldschmidt",
"Andy",
""
],
[
"Kunert-Graf",
"James",
""
],
[
"Scott",
"Adrian C.",
""
],
[
"Tan",
"Zhihao",
""
],
[
"Dudley",
"Aimée M.",
""
],
[
"Kutz",
"J. Nathan",
""
]
] | Baker's yeast (Saccharomyces cerevisiae) is a model organism for studying the morphology that emerges at the scale of multi-cell colonies. To look at how morphology develops, we collect a dataset of time-lapse photographs of the growth of different strains of S. cerevisiae. We discuss the general statistical challenges that arise when using time-lapse photographs to extract time-dependent features. In particular, we show how texture-based feature engineering and representative clustering can be successfully applied to categorize the development of yeast colony morphology using our dataset. The local binary pattern (LBP) from image processing is used to score the surface texture of colonies. This texture score develops along a smooth trajectory during growth. The path taken depends on how the morphology emerges. A hierarchical clustering of the colonies is performed according to their texture development trajectories. The clustering method is designed for practical interpretability; it obtains the best representative colony image for any hierarchical sub-cluster. |
1605.03660 | Anna Seigal | Anna Seigal, Portia Mira, Bernd Sturmfels, Miriam Barlow | Does Antibiotic Resistance Evolve in Hospitals? | 15 pages, 2 figures | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nosocomial outbreaks of bacteria are well-documented. Based on these
incidents, and the heavy usage of antibiotics in hospitals, it has been assumed
that antibiotic resistance evolves in hospital environments. To test this
assumption, we studied resistance phenotypes of bacteria collected from patient
isolates at a community hospital over a 2.5-year period. A graphical model
analysis shows no association between resistance and patient information other
than time of arrival. This allows us to focus on time course data.
We introduce a Hospital Transmission Model, based on negative binomial delay.
Our main contribution is a statistical hypothesis test called the Nosocomial
Evolution of Resistance Detector (NERD). It calculates the significance of
resistance trends occurring in a hospital. It can inform hospital staff about
the effects of various practices and interventions, can help detect clonal
outbreaks, and is available as an R-package.
We applied the NERD method to each of the 16 antibiotics in the study via 16
hypothesis tests. For 13 of the antibiotics, we found that the hospital
environment had no significant effect upon the evolution of resistance; the
hospital is merely a piece of the larger picture. The p-values obtained for the
other three antibiotics (Cefepime, Ceftazidime and Gentamicin) indicate that
particular care should be taken in hospital practices with these antibiotics.
One of the three, Ceftazidime, was significant after accounting for multiple
hypotheses, indicating a trend of decreased resistance for this drug.
| [
{
"created": "Thu, 12 May 2016 02:58:39 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jun 2016 17:23:55 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Oct 2016 21:03:09 GMT",
"version": "v3"
}
] | 2016-10-20 | [
[
"Seigal",
"Anna",
""
],
[
"Mira",
"Portia",
""
],
[
"Sturmfels",
"Bernd",
""
],
[
"Barlow",
"Miriam",
""
]
] | Nosocomial outbreaks of bacteria are well-documented. Based on these incidents, and the heavy usage of antibiotics in hospitals, it has been assumed that antibiotic resistance evolves in hospital environments. To test this assumption, we studied resistance phenotypes of bacteria collected from patient isolates at a community hospital over a 2.5-year period. A graphical model analysis shows no association between resistance and patient information other than time of arrival. This allows us to focus on time course data. We introduce a Hospital Transmission Model, based on negative binomial delay. Our main contribution is a statistical hypothesis test called the Nosocomial Evolution of Resistance Detector (NERD). It calculates the significance of resistance trends occurring in a hospital. It can inform hospital staff about the effects of various practices and interventions, can help detect clonal outbreaks, and is available as an R-package. We applied the NERD method to each of the 16 antibiotics in the study via 16 hypothesis tests. For 13 of the antibiotics, we found that the hospital environment had no significant effect upon the evolution of resistance; the hospital is merely a piece of the larger picture. The p-values obtained for the other three antibiotics (Cefepime, Ceftazidime and Gentamicin) indicate that particular care should be taken in hospital practices with these antibiotics. One of the three, Ceftazidime, was significant after accounting for multiple hypotheses, indicating a trend of decreased resistance for this drug. |
2108.13414 | Innokentiy Kastalskiy | Yuliya Tsybina, Innokentiy Kastalskiy, Mikhail Krivonosov, Alexey
Zaikin, Victor Kazantsev, Alexander Gorban and Susanna Gordleeva | Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic
network | 18 pages, 6 figures, 1 table, Appendix | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modeling the neuronal processes underlying short-term working memory remains
the focus of many theoretical studies in neuroscience. Here we propose a
mathematical model of spiking neuron network (SNN) demonstrating how a piece of
information can be maintained as a robust activity pattern for several seconds
then completely disappear if no other stimuli come. Such short-term memory
traces are preserved due to the activation of astrocytes accompanying the SNN.
The astrocytes exhibit calcium transients at a time scale of seconds. These
transients further modulate the efficiency of synaptic transmission and, hence,
the firing rate of neighboring neurons at diverse timescales through
gliotransmitter release. We show how such transients continuously encode
frequencies of neuronal discharges and provide robust short-term storage of
analogous information. This kind of short-term memory can keep operative
information for seconds, then completely forget it to avoid overlapping with
forthcoming patterns. The SNN is inter-connected with the astrocytic layer by
local inter-cellular diffusive connections. The astrocytes are activated only
when the neighboring neurons fire quite synchronously, e.g. when an information
pattern is loaded. For illustration, we took greyscale photos of people's faces
where the grey level encoded the level of applied current stimulating the
neurons. The astrocyte feedback modulates (facilitates) synaptic transmission
by varying the frequency of neuronal firing. We show how arbitrary patterns can
be loaded, then stored for a certain interval of time, and retrieved if the
appropriate clue pattern is applied to the input.
| [
{
"created": "Tue, 31 Aug 2021 16:13:15 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Tsybina",
"Yuliya",
""
],
[
"Kastalskiy",
"Innokentiy",
""
],
[
"Krivonosov",
"Mikhail",
""
],
[
"Zaikin",
"Alexey",
""
],
[
"Kazantsev",
"Victor",
""
],
[
"Gorban",
"Alexander",
""
],
[
"Gordleeva",
"Susanna"... | Modeling the neuronal processes underlying short-term working memory remains the focus of many theoretical studies in neuroscience. Here we propose a mathematical model of spiking neuron network (SNN) demonstrating how a piece of information can be maintained as a robust activity pattern for several seconds then completely disappear if no other stimuli come. Such short-term memory traces are preserved due to the activation of astrocytes accompanying the SNN. The astrocytes exhibit calcium transients at a time scale of seconds. These transients further modulate the efficiency of synaptic transmission and, hence, the firing rate of neighboring neurons at diverse timescales through gliotransmitter release. We show how such transients continuously encode frequencies of neuronal discharges and provide robust short-term storage of analogous information. This kind of short-term memory can keep operative information for seconds, then completely forget it to avoid overlapping with forthcoming patterns. The SNN is inter-connected with the astrocytic layer by local inter-cellular diffusive connections. The astrocytes are activated only when the neighboring neurons fire quite synchronously, e.g. when an information pattern is loaded. For illustration, we took greyscale photos of people's faces where the grey level encoded the level of applied current stimulating the neurons. The astrocyte feedback modulates (facilitates) synaptic transmission by varying the frequency of neuronal firing. We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input. |
1207.4968 | Chad M. Topaz | Chad M. Topaz, Maria R. D'Orsogna, Leah Edelstein-Keshet and Andrew J.
Bernoff | Locust Dynamics: Behavioral Phase Change and Swarming | Main text plus figures and supporting information; to appear in PLOS
Computational Biology | null | 10.1371/journal.pcbi.1002642 | null | q-bio.QM nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Locusts exhibit two interconvertible behavioral phases, solitarious and
gregarious. While solitarious individuals are repelled from other locusts,
gregarious insects are attracted to conspecifics and can form large
aggregations such as marching hopper bands. Numerous biological experiments at
the individual level have shown how crowding biases conversion towards the
gregarious form. To understand the formation of marching locust hopper bands,
we study phase change at the collective level, and in a quantitative framework.
Specifically, we construct a partial integrodifferential equation model
incorporating the interplay between phase change and spatial movement at the
individual level in order to predict the dynamics of hopper band formation at
the population level. Stability analysis of our model reveals conditions for an
outbreak, characterized by a large scale transition to the gregarious phase. A
model reduction enables quantification of the temporal dynamics of each phase,
of the proportion of the population that will eventually gregarize, and of the
time scale for this to occur. Numerical simulations provide descriptions of the
aggregation's structure and reveal transiently traveling clumps of gregarious
insects. Our predictions of aggregation and mass gregarization suggest several
possible future biological experiments.
| [
{
"created": "Fri, 20 Jul 2012 14:45:48 GMT",
"version": "v1"
}
] | 2015-06-05 | [
[
"Topaz",
"Chad M.",
""
],
[
"D'Orsogna",
"Maria R.",
""
],
[
"Edelstein-Keshet",
"Leah",
""
],
[
"Bernoff",
"Andrew J.",
""
]
] | Locusts exhibit two interconvertible behavioral phases, solitarious and gregarious. While solitarious individuals are repelled from other locusts, gregarious insects are attracted to conspecifics and can form large aggregations such as marching hopper bands. Numerous biological experiments at the individual level have shown how crowding biases conversion towards the gregarious form. To understand the formation of marching locust hopper bands, we study phase change at the collective level, and in a quantitative framework. Specifically, we construct a partial integrodifferential equation model incorporating the interplay between phase change and spatial movement at the individual level in order to predict the dynamics of hopper band formation at the population level. Stability analysis of our model reveals conditions for an outbreak, characterized by a large scale transition to the gregarious phase. A model reduction enables quantification of the temporal dynamics of each phase, of the proportion of the population that will eventually gregarize, and of the time scale for this to occur. Numerical simulations provide descriptions of the aggregation's structure and reveal transiently traveling clumps of gregarious insects. Our predictions of aggregation and mass gregarization suggest several possible future biological experiments. |
1004.3175 | Eva Kranz | Eva Kranz | Structural Stability and Immunogenicity of Peptides | null | null | null | null | q-bio.BM cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigated the role of peptide folding stability in peptide
immunogenicity. It was the aim of this thesis to implement a stability
criterion based on energy computations using an AMBER force field, and to test
the implementation with a large dataset.
| [
{
"created": "Mon, 19 Apr 2010 12:43:55 GMT",
"version": "v1"
}
] | 2010-04-20 | [
[
"Kranz",
"Eva",
""
]
] | We investigated the role of peptide folding stability in peptide immunogenicity. It was the aim of this thesis to implement a stability criterion based on energy computations using an AMBER force field, and to test the implementation with a large dataset. |
1303.3287 | Michael Okun | Michael Okun, Pierre Yger and Kenneth D. Harris | How (not) to assess the importance of correlations for the matching of
spontaneous and evoked activity: a response | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A response to a comment of Fiser et al.
| [
{
"created": "Wed, 13 Mar 2013 20:39:44 GMT",
"version": "v1"
}
] | 2013-03-15 | [
[
"Okun",
"Michael",
""
],
[
"Yger",
"Pierre",
""
],
[
"Harris",
"Kenneth D.",
""
]
] | A response to a comment of Fiser et al. |
1605.00021 | Philipp Boersch-Supan | Philipp H Boersch-Supan, Sadie J Ryan, Leah R Johnson | deBInfer: Bayesian inference for dynamical models of biological systems
in R | null | Methods in Ecology and Evolution 8 (2017) 511-518 | 10.1111/2041-210X.12679 | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 1. Understanding the mechanisms underlying biological systems, and
ultimately, predicting their behaviours in a changing environment requires
overcoming the gap between mathematical models and experimental or
observational data. Differential equations (DEs) are commonly used to model the
temporal evolution of biological systems, but statistical methods for comparing
DE models to data and for parameter inference are relatively poorly developed.
This is especially problematic in the context of biological systems where
observations are often noisy and only a small number of time points may be
available. 2. The Bayesian approach offers a coherent framework for parameter
inference that can account for multiple sources of uncertainty, while making
use of prior information. It offers a rigorous methodology for parameter
inference, as well as modelling the link between unobservable model states and
parameters, and observable quantities. 3. We present deBInfer, a package for
the statistical computing environment R, implementing a Bayesian framework for
parameter inference in DEs. deBInfer provides templates for the DE model, the
observation model and data likelihood, and the model parameters and their prior
distributions. A Markov chain Monte Carlo (MCMC) procedure processes these
inputs to estimate the posterior distributions of the parameters and any
derived quantities, including the model trajectories. Further functionality is
provided to facilitate MCMC diagnostics, the visualisation of the posterior
distributions of model parameters and trajectories, and the use of compiled DE
models for improved computational performance. 4. The templating approach makes
deBInfer applicable to a wide range of DE models. We demonstrate its
application to ordinary and delay DE models for population ecology.
| [
{
"created": "Fri, 29 Apr 2016 20:41:31 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jun 2016 03:24:21 GMT",
"version": "v2"
},
{
"created": "Sat, 15 Oct 2016 13:48:45 GMT",
"version": "v3"
}
] | 2017-04-19 | [
[
"Boersch-Supan",
"Philipp H",
""
],
[
"Ryan",
"Sadie J",
""
],
[
"Johnson",
"Leah R",
""
]
] | 1. Understanding the mechanisms underlying biological systems, and ultimately, predicting their behaviours in a changing environment requires overcoming the gap between mathematical models and experimental or observational data. Differential equations (DEs) are commonly used to model the temporal evolution of biological systems, but statistical methods for comparing DE models to data and for parameter inference are relatively poorly developed. This is especially problematic in the context of biological systems where observations are often noisy and only a small number of time points may be available. 2. The Bayesian approach offers a coherent framework for parameter inference that can account for multiple sources of uncertainty, while making use of prior information. It offers a rigorous methodology for parameter inference, as well as modelling the link between unobservable model states and parameters, and observable quantities. 3. We present deBInfer, a package for the statistical computing environment R, implementing a Bayesian framework for parameter inference in DEs. deBInfer provides templates for the DE model, the observation model and data likelihood, and the model parameters and their prior distributions. A Markov chain Monte Carlo (MCMC) procedure processes these inputs to estimate the posterior distributions of the parameters and any derived quantities, including the model trajectories. Further functionality is provided to facilitate MCMC diagnostics, the visualisation of the posterior distributions of model parameters and trajectories, and the use of compiled DE models for improved computational performance. 4. The templating approach makes deBInfer applicable to a wide range of DE models. We demonstrate its application to ordinary and delay DE models for population ecology. |
2107.03086 | Jakob Lykke Andersen | Jakob L. Andersen, Christoph Flamm, Daniel Merkle, Peter F. Stadler | Defining Autocatalysis in Chemical Reaction Networks | null | null | null | null | q-bio.MN cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autocatalysis is a deceptively simple concept, referring to the situation
that a chemical species $X$ catalyzes its own formation. From the perspective
of chemical kinetics, autocatalysts show a regime of super-linear growth. Given
a chemical reaction network, however, it is not at all straightforward to
identify species that are autocatalytic in the sense that there is a
sub-network that takes $X$ as input and produces more than one copy of $X$ as
output. The difficulty arises from the need to distinguish autocatalysis e.g.
from the superposition of a cycle that consumes and produces equal amounts of
$X$ and a pathway that produces $X$. To deal with this issue, a number of
competing notions, such as exclusive autocatalysis and autocatalytic cycles,
have been introduced. A closer inspection of concepts and their usage by
different authors shows, however, that subtle differences in the definitions
often make conceptually matching ideas difficult to bring together formally.
In this contribution we make some of the available approaches comparable by
translating them into a common formal framework that uses integer hyperflows as
a basis to study autocatalysis in large chemical reaction networks. As an
application we investigate the prevalence of autocatalysis in metabolic
networks.
| [
{
"created": "Wed, 7 Jul 2021 09:11:28 GMT",
"version": "v1"
}
] | 2021-07-08 | [
[
"Andersen",
"Jakob L.",
""
],
[
"Flamm",
"Christoph",
""
],
[
"Merkle",
"Daniel",
""
],
[
"Stadler",
"Peter F.",
""
]
] | Autocatalysis is a deceptively simple concept, referring to the situation that a chemical species $X$ catalyzes its own formation. From the perspective of chemical kinetics, autocatalysts show a regime of super-linear growth. Given a chemical reaction network, however, it is not at all straightforward to identify species that are autocatalytic in the sense that there is a sub-network that takes $X$ as input and produces more than one copy of $X$ as output. The difficulty arises from the need to distinguish autocatalysis e.g. from the superposition of a cycle that consumes and produces equal amounts of $X$ and a pathway that produces $X$. To deal with this issue, a number of competing notions, such as exclusive autocatalysis and autocatalytic cycles, have been introduced. A closer inspection of concepts and their usage by different authors shows, however, that subtle differences in the definitions often makes conceptually matching ideas difficult to bring together formally. In this contribution we make some of the available approaches comparable by translating them into a common formal framework that uses integer hyperflows as a basis to study autocatalysis in large chemical reaction networks. As an application we investigate the prevalence of autocatalysis in metabolic networks. |
2006.05504 | Clement de Chaisemartin | Cl\'ement de Chaisemartin, Luc de Chaisemartin | BCG vaccination in infancy does not protect against COVID-19. Evidence
from a natural experiment in Sweden | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Bacille Calmette-Gu\'erin (BCG) tuberculosis vaccine has immunity
benefits against respiratory infections. Accordingly, it has been hypothesized
that it may have a protective effect against COVID-19. Recent research found
that countries with universal Bacillus Calmette-Gu\'erin (BCG) childhood
vaccination policies tend to be less affected by the COVID-19 pandemic.
However, such ecological studies are biased by numerous confounders. Instead,
this paper takes advantage of a rare nationwide natural experiment that took
place in Sweden in 1975, where discontinuation of newborns BCG vaccination led
to a dramatic fall of the BCG coverage rate from 92% to 2%, thus allowing us
to estimate the BCG's effect without all the biases associated with
cross-country comparisons. Numbers of COVID-19 cases and hospitalizations were
recorded for birth cohorts born just before and just after that change,
representing 1,026,304 and 1,018,544 individuals, respectively. We used
regression discontinuity to assess the effect of BCG vaccination on Covid-19
related outcomes. This method used on such a large population allows for a high
precision that would be hard to achieve using a classical randomized controlled
trial. The odds ratio for Covid-19 cases and Covid-19 related hospitalizations
were 0.9997 (CI95: [0.8002-1.1992]) and 1.1931 (CI95: [0.7558-1.6304]),
respectively. We can thus reject with 95\% confidence that universal BCG
vaccination reduces the number of cases by more than 20% and the number of
hospitalizations by more than 24%. While the effect of a recent vaccination
must be evaluated, we provide strong evidence that receiving the BCG vaccine at
birth does not have a protective effect against COVID-19.
| [
{
"created": "Mon, 8 Jun 2020 16:41:13 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jun 2020 23:50:43 GMT",
"version": "v2"
}
] | 2020-06-24 | [
[
"de Chaisemartin",
"Clément",
""
],
[
"de Chaisemartin",
"Luc",
""
]
] | The Bacille Calmette-Gu\'erin (BCG) tuberculosis vaccine has immunity benefits against respiratory infections. Accordingly, it has been hypothesized that it may have a protective effect against COVID-19. Recent research found that countries with universal Bacillus Calmette-Gu\'erin (BCG) childhood vaccination policies tend to be less affected by the COVID-19 pandemic. However, such ecological studies are biased by numerous confounders. Instead, this paper takes advantage of a rare nationwide natural experiment that took place in Sweden in 1975, where discontinuation of newborns BCG vaccination led to a dramatic fall of the BCG coverage rate from 92% to 2% , thus allowing us to estimate the BCG's effect without all the biases associated with cross-country comparisons. Numbers of COVID-19 cases and hospitalizations were recorded for birth cohorts born just before and just after that change, representing 1,026,304 and 1,018,544 individuals, respectively. We used regression discontinuity to assess the effect of BCG vaccination on Covid-19 related outcomes. This method used on such a large population allows for a high precision that would be hard to achieve using a classical randomized controlled trial. The odds ratio for Covid-19 cases and Covid-19 related hospitalizations were 0.9997 (CI95: [0.8002-1.1992]) and 1.1931 (CI95: [0.7558-1.6304]), respectively. We can thus reject with 95\% confidence that universal BCG vaccination reduces the number of cases by more than 20% and the number of hospitalizations by more than 24%. While the effect of a recent vaccination must be evaluated, we provide strong evidence that receiving the BCG vaccine at birth does not have a protective effect against COVID-19. |
1504.02265 | Tiago Simas | Tiago Simas, Mario Chavez, Pablo Rodriguez, and Albert Diaz-Guilera | An Algebraic Topological Method for Multimodal Brain Networks
Comparisons | null | null | null | null | q-bio.NC nlin.AO physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding brain connectivity has become one of the most important issues
in neuroscience. But connectivity data can reflect either the functional
relationships of the brain activities or the anatomical properties between
brain areas. Although one should expect a clear relationship between both
representations it is not straightforward. Here we present a formalism that
allows for the comparison of structural (DTI) and functional (fMRI) networks by
embedding both in a common metric space. In this metric space one can then find
for which regions the two networks are significantly different. Our methodology
can be used not only to compare multimodal networks but also to extract
statistically significant aggregated networks of a set of subjects. Actually,
we use this procedure to aggregate a set of functional (fMRI) networks from
different subjects in an aggregated network that is compared with the
anatomical (DTI) connectivity. The comparison of the aggregated network reveals
some features that are not observed when the comparison is done with the
classical averaged network.
| [
{
"created": "Thu, 9 Apr 2015 11:32:44 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Apr 2015 17:04:25 GMT",
"version": "v2"
}
] | 2015-04-13 | [
[
"Simas",
"Tiago",
""
],
[
"Chavez",
"Mario",
""
],
[
"Rodriguez",
"Pablo",
""
],
[
"Diaz-Guilera",
"Albert",
""
]
] | Understanding brain connectivity has become one of the most important issues in neuroscience. But connectivity data can reflect either the functional relationships of the brain activities or the anatomical properties between brain areas. Although one should expect a clear relationship between both representations it is not straightforward. Here we present a formalism that allows for the comparison of structural (DTI) and functional (fMRI) networks by embedding both in a common metric space. In this metric space one can then find for which regions the two networks are significantly different. Our methodology can be used not only to compare multimodal networks but also to extract statistically significant aggregated networks of a set of subjects. Actually, we use this procedure to aggregate a set of functional (fMRI) networks from different subjects in an aggregated network that is compared with the anatomical (DTI) connectivity. The comparison of the aggregated network reveals some features that are not observed when the comparison is done with the classical averaged network. |
1906.06163 | Mareike Fischer | Mareike Fischer and Andrew Francis | How tree-based is my network? Proximity measures for unrooted
phylogenetic networks | null | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tree-based networks are a class of phylogenetic networks that attempt to
formally capture what is meant by "tree-like" evolution. A given non-tree-based
phylogenetic network, however, might appear to be very close to being
tree-based, or very far. In this paper, we formalise the notion of proximity to
tree-based for unrooted phylogenetic networks, with a range of proximity
measures. These measures also provide characterisations of tree-based networks.
One measure in particular, related to the nearest neighbour interchange
operation, allows us to define the notion of "tree-based rank". This provides a
subclassification within the tree-based networks themselves, identifying those
networks that are "very" tree-based. Finally, we prove results relating
tree-based networks in the settings of rooted and unrooted phylogenetic
networks, showing effectively that an unrooted network is tree-based if and
only if it can be made a rooted tree-based network by rooting it and orienting
the edges appropriately. This leads to a clarification of the contrasting
decision problems for tree-based networks, which are polynomial in the rooted
case but NP-complete in the unrooted.
| [
{
"created": "Fri, 14 Jun 2019 12:28:56 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jun 2019 05:22:27 GMT",
"version": "v2"
},
{
"created": "Sun, 24 Nov 2019 20:25:11 GMT",
"version": "v3"
},
{
"created": "Thu, 16 Jan 2020 09:55:15 GMT",
"version": "v4"
}
] | 2020-01-17 | [
[
"Fischer",
"Mareike",
""
],
[
"Francis",
"Andrew",
""
]
] | Tree-based networks are a class of phylogenetic networks that attempt to formally capture what is meant by "tree-like" evolution. A given non-tree-based phylogenetic network, however, might appear to be very close to being tree-based, or very far. In this paper, we formalise the notion of proximity to tree-based for unrooted phylogenetic networks, with a range of proximity measures. These measures also provide characterisations of tree-based networks. One measure in particular, related to the nearest neighbour interchange operation, allows us to define the notion of "tree-based rank". This provides a subclassification within the tree-based networks themselves, identifying those networks that are "very" tree-based. Finally, we prove results relating tree-based networks in the settings of rooted and unrooted phylogenetic networks, showing effectively that an unrooted network is tree-based if and only if it can be made a rooted tree-based network by rooting it and orienting the edges appropriately. This leads to a clarification of the contrasting decision problems for tree-based networks, which are polynomial in the rooted case but NP-complete in the unrooted. |
2003.03083 | Ramses Djidjou-Demasse | Rams\`es Djidjou-Demasse (IRD), Samuel Alizon (MIVEGEC), Mircea T.
Sofonea (MIVEGEC) | Within-host bacterial growth dynamics with both mutation and horizontal
gene transfer | null | null | null | null | q-bio.PE math.DS math.FA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution and emergence of antibiotic resistance is a major public health
concern. The understanding of the within-host microbial dynamics combining
mutational processes, horizontal gene transfer and resource consumption, is one
of the keys to solve this problem. We analyze a generic model to rigorously
describe interactions dynamics of four bacterial strains: one fully sensitive
to the drug, one with mutational resistance only, one with plasmidic resistance
only and one with both resistances. By defining thresholds numbers (i.e. each
strain's effective reproduction and each strain's transition thresholds
numbers), we first express conditions for the existence of non trivial
stationary states. We find that these thresholds mainly depend on bacteria
quantitative traits such as nutrient consumption ability, growth conversion
factor, death rate, mutation (forward or reverse) and segregational loss of
plasmid probabilities (for plasmid-bearing strains). Next, with respect to the
order in the set of strain's effective reproduction thresholds numbers, we show
that the qualitative
dynamics of the model range from the extinction of all strains, coexistence of
sensitive and mutational resistance strains to the coexistence of all strains
at equilibrium. Finally, we go through some applications of our general
analysis depending on whether bacteria strains interact without or with drug
action (either cytostatic or cytotoxic).
| [
{
"created": "Fri, 6 Mar 2020 08:56:05 GMT",
"version": "v1"
}
] | 2020-03-09 | [
[
"Djidjou-Demasse",
"Ramsès",
"",
"IRD"
],
[
"Alizon",
"Samuel",
"",
"MIVEGEC"
],
[
"Sofonea",
"Mircea T.",
"",
"MIVEGEC"
]
] | The evolution and emergence of antibiotic resistance is a major public health concern. The understanding of the within-host microbial dynamics combining mutational processes, horizontal gene transfer and resource consumption, is one of the keys to solve this problem. We analyze a generic model to rigorously describe interactions dynamics of four bacterial strains: one fully sensitive to the drug, one with mutational resistance only, one with plasmidic resistance only and one with both resistances. By defining thresholds numbers (i.e. each strain's effective reproduction and each strain's transition thresholds numbers), we first express conditions for the existence of non trivial stationary states. We find that these thresholds mainly depend on bacteria quantitative traits such as nutrient consumption ability, growth conversion factor, death rate, mutation (forward or reverse) and segregational loss of plasmid probabilities (for plasmid-bearing strains). Next, with respect to the order in the set of strain's effective reproduction thresholds numbers, we show that the qualitative dynamics of the model range from the extinction of all strains, coexistence of sensitive and mutational resistance strains to the coexistence of all strains at equilibrium. Finally, we go through some applications of our general analysis depending on whether bacteria strains interact without or with drug action (either cytostatic or cytotoxic). |
1501.03359 | Raoul Wadhwa | Raoul R. Wadhwa, Laszlo Zalanyi, Judit Szente, Laszlo Negyessy, Peter
Erdi | Stochastic kinetics of the circular gene hypothesis: feedback effects
and protein fluctuations | 16 pages, 6 figures | Math.Comput.Simul. 133 (2017) 326-336 | 10.1016/j.matcom.2015.08.006 | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Stochastic kinetic models of genetic expression are able to describe protein
fluctuations. A comparative study of the canonical and a feedback model is
given here by using stochastic simulation methods. The feedback model is a
skeleton model implementation of the circular gene hypothesis, which suggests
the interaction between the synthesis and degradation of mRNA. Qualitative and
quantitative changes in the shape and in the numerical characteristics of the
stationary distributions suggest that more combined experimental and
theoretical studies should be done to uncover the details of the kinetic
mechanism of gene expression.
| [
{
"created": "Mon, 12 Jan 2015 23:18:14 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Aug 2015 18:38:17 GMT",
"version": "v2"
}
] | 2018-09-06 | [
[
"Wadhwa",
"Raoul R.",
""
],
[
"Zalanyi",
"Laszlo",
""
],
[
"Szente",
"Judit",
""
],
[
"Negyessy",
"Laszlo",
""
],
[
"Erdi",
"Peter",
""
]
] | Stochastic kinetic models of genetic expression are able to describe protein fluctuations. A comparative study of the canonical and a feedback model is given here by using stochastic simulation methods. The feedback model is a skeleton model implementation of the circular gene hypothesis, which suggests the interaction between the synthesis and degradation of mRNA. Qualitative and quantitative changes in the shape and in the numerical characteristics of the stationary distributions suggest that more combined experimental and theoretical studies should be done to uncover the details of the kinetic mechanism of gene expression. |
1107.5338 | Elisa Loza-Reyes Dr | Elisa Loza-Reyes, Merrilee Hurn and Tony Robinson | Classification of molecular sequence data using Bayesian phylogenetic
mixture models | null | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rate variation among the sites of a molecular sequence is commonly found in
applications of phylogenetic inference. Several approaches exist to account for
this feature but they do not usually enable the investigator to pinpoint the
sites that evolve under one or another rate of evolution in a straightforward
manner. The focus is on Bayesian phylogenetic mixture models, augmented with
allocation variables, as tools for site classification and quantification of
classification uncertainty. The method does not rely on prior knowledge of site
membership to classes or even the number of classes. Furthermore, it does not
require correlated sites to be next to one another in the sequence alignment,
unlike some phylogenetic hidden Markov or change-point models. In the approach
presented, model selection on the number and type of mixture components is
conducted ahead of both model estimation and site classification; the
steppingstone sampler (SS) is used to select amongst competing mixture models.
Example applications of simulated data and mitochondrial DNA of primates
illustrate site classification via 'augmented' Bayesian phylogenetic mixtures.
In both examples, all mixtures outperform commonly-used models of among-site
rate variation and models that do not account for rate heterogeneity. The
examples further demonstrate how site classification is readily available from
the analysis output. The method is directly relevant to the choice of
partitions in Bayesian phylogenetics, and its application may lead to the
discovery of structure not otherwise recognised in a molecular sequence
alignment. Computational aspects of Bayesian phylogenetic model estimation are
discussed, including the use of simple Markov chain Monte Carlo (MCMC) moves
that mix efficiently without tempering the chains.
| [
{
"created": "Tue, 26 Jul 2011 21:24:16 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Aug 2012 17:29:49 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Apr 2013 10:51:23 GMT",
"version": "v3"
},
{
"created": "Tue, 9 Apr 2013 07:40:36 GMT",
"version": "v4"
},
{
"cre... | 2013-05-23 | [
[
"Loza-Reyes",
"Elisa",
""
],
[
"Hurn",
"Merrilee",
""
],
[
"Robinson",
"Tony",
""
]
] | Rate variation among the sites of a molecular sequence is commonly found in applications of phylogenetic inference. Several approaches exist to account for this feature but they do not usually enable the investigator to pinpoint the sites that evolve under one or another rate of evolution in a straightforward manner. The focus is on Bayesian phylogenetic mixture models, augmented with allocation variables, as tools for site classification and quantification of classification uncertainty. The method does not rely on prior knowledge of site membership to classes or even the number of classes. Furthermore, it does not require correlated sites to be next to one another in the sequence alignment, unlike some phylogenetic hidden Markov or change-point models. In the approach presented, model selection on the number and type of mixture components is conducted ahead of both model estimation and site classification; the steppingstone sampler (SS) is used to select amongst competing mixture models. Example applications of simulated data and mitochondrial DNA of primates illustrate site classification via 'augmented' Bayesian phylogenetic mixtures. In both examples, all mixtures outperform commonly-used models of among-site rate variation and models that do not account for rate heterogeneity. The examples further demonstrate how site classification is readily available from the analysis output. The method is directly relevant to the choice of partitions in Bayesian phylogenetics, and its application may lead to the discovery of structure not otherwise recognised in a molecular sequence alignment. Computational aspects of Bayesian phylogenetic model estimation are discussed, including the use of simple Markov chain Monte Carlo (MCMC) moves that mix efficiently without tempering the chains. |
2305.18089 | Natalie Maus | Natalie Maus and Yimeng Zeng and Daniel Allen Anderson and Phillip
Maffettone and Aaron Solomon and Peyton Greenside and Osbert Bastani and
Jacob R. Gardner | Inverse Protein Folding Using Deep Bayesian Optimization | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Inverse protein folding -- the task of predicting a protein sequence from its
backbone atom coordinates -- has surfaced as an important problem in the "top
down", de novo design of proteins. Contemporary approaches have cast this
problem as a conditional generative modelling problem, where a large generative
model over protein sequences is conditioned on the backbone. While these
generative models very rapidly produce promising sequences, independent draws
from generative models may fail to produce sequences that reliably fold to the
correct backbone. Furthermore, it is challenging to adapt pure generative
approaches to other settings, e.g., when constraints exist. In this paper, we
cast the problem of improving generated inverse folds as an optimization
problem that we solve using recent advances in "deep" or "latent space"
Bayesian optimization. Our approach consistently produces protein sequences
with greatly reduced structural error to the target backbone structure as
measured by TM score and RMSD while using fewer computational resources.
Additionally, we demonstrate other advantages of an optimization-based approach
to the problem, such as the ability to handle constraints.
| [
{
"created": "Thu, 25 May 2023 02:15:25 GMT",
"version": "v1"
}
] | 2023-05-30 | [
[
"Maus",
"Natalie",
""
],
[
"Zeng",
"Yimeng",
""
],
[
"Anderson",
"Daniel Allen",
""
],
[
"Maffettone",
"Phillip",
""
],
[
"Solomon",
"Aaron",
""
],
[
"Greenside",
"Peyton",
""
],
[
"Bastani",
"Osbert",
""
... | Inverse protein folding -- the task of predicting a protein sequence from its backbone atom coordinates -- has surfaced as an important problem in the "top down", de novo design of proteins. Contemporary approaches have cast this problem as a conditional generative modelling problem, where a large generative model over protein sequences is conditioned on the backbone. While these generative models very rapidly produce promising sequences, independent draws from generative models may fail to produce sequences that reliably fold to the correct backbone. Furthermore, it is challenging to adapt pure generative approaches to other settings, e.g., when constraints exist. In this paper, we cast the problem of improving generated inverse folds as an optimization problem that we solve using recent advances in "deep" or "latent space" Bayesian optimization. Our approach consistently produces protein sequences with greatly reduced structural error to the target backbone structure as measured by TM score and RMSD while using fewer computational resources. Additionally, we demonstrate other advantages of an optimization-based approach to the problem, such as the ability to handle constraints. |
1209.6607 | Pamela Reinagel | Kate S. Gaudry and Pamela Reinagel | Evidence for an additive inhibitory component of contrast adaptation | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The latency of visual responses generally decreases as contrast increases.
Recording in the lateral geniculate nucleus (LGN), we find that response
latency increases with increasing contrast in ON cells for some visual stimuli.
We propose that this surprising latency trend can be explained if ON cells rest
further from threshold at higher contrasts. Indeed, while contrast changes
caused a combination of multiplicative gain change and additive shift in LGN
cells, the additive shift predominated in ON cells. Modeling results supported
this theory: the ON cell latency trend was found when the distance-to-threshold
shifted with contrast, but not when distance-to-threshold was fixed across
contrasts. In the model, latency also increases as surround-to-center ratios
increase, which has been shown to occur at higher contrasts. We propose that
higher-contrast full-field stimuli can evoke more surround inhibition, shifting
the potential further from spiking threshold and thereby increasing response
latency.
| [
{
"created": "Fri, 28 Sep 2012 18:59:30 GMT",
"version": "v1"
}
] | 2012-10-01 | [
[
"Gaudry",
"Kate S.",
""
],
[
"Reinagel",
"Pamela",
""
]
] | The latency of visual responses generally decreases as contrast increases. Recording in the lateral geniculate nucleus (LGN), we find that response latency increases with increasing contrast in ON cells for some visual stimuli. We propose that this surprising latency trend can be explained if ON cells rest further from threshold at higher contrasts. Indeed, while contrast changes caused a combination of multiplicative gain change and additive shift in LGN cells, the additive shift predominated in ON cells. Modeling results supported this theory: the ON cell latency trend was found when the distance-to-threshold shifted with contrast, but not when distance-to-threshold was fixed across contrasts. In the model, latency also increases as surround-to-center ratios increase, which has been shown to occur at higher contrasts. We propose that higher-contrast full-field stimuli can evoke more surround inhibition, shifting the potential further from spiking threshold and thereby increasing response latency. |
2403.19718 | Piotr Ludynia | Micha{\l} Szafarczyk, Piotr Ludynia, Przemys{\l}aw Kukla | A Python library for efficient computation of molecular fingerprints | 56 pages | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning solutions are very popular in the field of chemoinformatics,
where they have numerous applications, such as novel drug discovery or
molecular property prediction. Molecular fingerprints are algorithms commonly
used for vectorizing chemical molecules as a part of preprocessing in this kind
of solution. However, despite their popularity, there are no libraries that
implement them efficiently for large datasets, utilizing modern, multicore
architectures. On top of that, most of them do not provide the user with an
intuitive interface, or one that would be compatible with other machine
learning tools.
In this project, we created a Python library that computes molecular
fingerprints efficiently and delivers an interface that is comprehensive and
enables the user to easily incorporate the library into their existing machine
learning workflow. The library enables the user to perform computation on large
datasets using parallelism. Because of that, it is possible to perform such
tasks as hyperparameter tuning in a reasonable time. We describe tools used in
implementation of the library and assess its time performance on example
benchmark datasets. Additionally, we show that using molecular fingerprints we
can achieve results comparable to state-of-the-art ML solutions even with very
simple models.
| [
{
"created": "Wed, 27 Mar 2024 19:02:09 GMT",
"version": "v1"
}
] | 2024-04-01 | [
[
"Szafarczyk",
"Michał",
""
],
[
"Ludynia",
"Piotr",
""
],
[
"Kukla",
"Przemysław",
""
]
] | Machine learning solutions are very popular in the field of chemoinformatics, where they have numerous applications, such as novel drug discovery or molecular property prediction. Molecular fingerprints are algorithms commonly used for vectorizing chemical molecules as a part of preprocessing in this kind of solution. However, despite their popularity, there are no libraries that implement them efficiently for large datasets, utilizing modern, multicore architectures. On top of that, most of them do not provide the user with an intuitive interface, or one that would be compatible with other machine learning tools. In this project, we created a Python library that computes molecular fingerprints efficiently and delivers an interface that is comprehensive and enables the user to easily incorporate the library into their existing machine learning workflow. The library enables the user to perform computation on large datasets using parallelism. Because of that, it is possible to perform such tasks as hyperparameter tuning in a reasonable time. We describe tools used in implementation of the library and assess its time performance on example benchmark datasets. Additionally, we show that using molecular fingerprints we can achieve results comparable to state-of-the-art ML solutions even with very simple models. |