id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2001.07811 | Hiroshi Tamura | Yuta Kanda, Kota S Sasaki, Izumi Ohzawa, Hiroshi Tamura | Deleting object selective units in a fully-connected layer of deep
convolutional networks improves classification performance | Number of pages: 18; Number of Figures: 6; Number of Tables: 1 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurons in the primate visual cortices show a wide range of stimulus
selectivity. Some neurons respond to only a small fraction of stimulus images,
whereas others respond to many stimulus images in a non-selective manner. It is
unclear how stimulus selective and non-selective neurons contribute to visual
object recognition. Herein, we examined the relationship between stimulus
selectivity and the effect of deletion of units on task performance using a
fully connected layer of two types of deep convolutional neural networks (DCNNs).
Deleting a stimulus selective unit caused slight improvements in task
performance, whereas deleting stimulus non-selective units caused a significant
decrease in task performance. However, these findings do not imply that
stimulus selective units have no use for the task. Indeed, better performance
was obtained when the networks consisted of both stimulus selective and
non-selective units.
| [
{
"created": "Tue, 21 Jan 2020 23:33:53 GMT",
"version": "v1"
}
] | 2020-01-23 | [
[
"Kanda",
"Yuta",
""
],
[
"Sasaki",
"Kota S",
""
],
[
"Ohzawa",
"Izumi",
""
],
[
"Tamura",
"Hiroshi",
""
]
] | Neurons in the primate visual cortices show a wide range of stimulus selectivity. Some neurons respond to only a small fraction of stimulus images, whereas others respond to many stimulus images in a non-selective manner. It is unclear how stimulus selective and non-selective neurons contribute to visual object recognition. Herein, we examined the relationship between stimulus selectivity and the effect of deletion of units on task performance using a fully connected layer of two types of deep convolutional neural networks (DCNNs). Deleting a stimulus selective unit caused slight improvements in task performance, whereas deleting stimulus non-selective units caused a significant decrease in task performance. However, these findings do not imply that stimulus selective units have no use for the task. Indeed, better performance was obtained when the networks consisted of both stimulus selective and non-selective units. |
2209.06862 | Gideon Kowadlo | Chandramouli Rajagopalan, David Rawlinson, Elkhonon Goldberg, Gideon
Kowadlo | Deep learning in a bilateral brain with hemispheric specialization | ACAIN 2024 | null | null | null | q-bio.NC cs.AI cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | The brains of all bilaterally symmetric animals on Earth are divided into
left and right hemispheres. The anatomy and functionality of the hemispheres
have a large degree of overlap, but there are asymmetries, and they specialise
in different attributes. Other authors have used computational models
to mimic hemispheric asymmetries with a focus on reproducing human data on
semantic and visual processing tasks. We took a different approach and aimed to
understand how dual hemispheres in a bilateral architecture interact to perform
well in a given task. We propose a bilateral artificial neural network that
imitates lateralisation observed in nature: that the left hemisphere
specialises in local features and the right in global features. We used
different training objectives to achieve the desired specialisation and tested
it on an image classification task with two different CNN backbones: ResNet and
VGG. Our analysis found that the hemispheres represent complementary features
that are exploited by a network head that implements a type of weighted
attention. The bilateral architecture outperformed a range of baselines of
similar representational capacity that do not exploit differential
specialisation, with the exception of a conventional ensemble of unilateral
networks trained on dual training objectives for local and global features. The
results demonstrate the efficacy of bilateralism, contribute to the discussion
of bilateralism in biological brains, and the principle may serve as an
inductive bias for new AI systems.
| [
{
"created": "Fri, 9 Sep 2022 00:40:14 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Sep 2022 03:11:43 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Sep 2022 04:34:31 GMT",
"version": "v3"
},
{
"created": "Fri, 23 Dec 2022 12:35:51 GMT",
"version": "v4"
},
{
"created": "Mon, 26 Dec 2022 18:51:51 GMT",
"version": "v5"
},
{
"created": "Wed, 25 Jan 2023 04:06:44 GMT",
"version": "v6"
},
{
"created": "Tue, 5 Mar 2024 01:18:45 GMT",
"version": "v7"
},
{
"created": "Fri, 15 Mar 2024 02:05:21 GMT",
"version": "v8"
},
{
"created": "Wed, 10 Jul 2024 12:20:11 GMT",
"version": "v9"
}
] | 2024-07-11 | [
[
"Rajagopalan",
"Chandramouli",
""
],
[
"Rawlinson",
"David",
""
],
[
"Goldberg",
"Elkhonon",
""
],
[
"Kowadlo",
"Gideon",
""
]
] | The brains of all bilaterally symmetric animals on Earth are divided into left and right hemispheres. The anatomy and functionality of the hemispheres have a large degree of overlap, but there are asymmetries, and they specialise in different attributes. Other authors have used computational models to mimic hemispheric asymmetries with a focus on reproducing human data on semantic and visual processing tasks. We took a different approach and aimed to understand how dual hemispheres in a bilateral architecture interact to perform well in a given task. We propose a bilateral artificial neural network that imitates lateralisation observed in nature: that the left hemisphere specialises in local features and the right in global features. We used different training objectives to achieve the desired specialisation and tested it on an image classification task with two different CNN backbones: ResNet and VGG. Our analysis found that the hemispheres represent complementary features that are exploited by a network head that implements a type of weighted attention. The bilateral architecture outperformed a range of baselines of similar representational capacity that do not exploit differential specialisation, with the exception of a conventional ensemble of unilateral networks trained on dual training objectives for local and global features. The results demonstrate the efficacy of bilateralism, contribute to the discussion of bilateralism in biological brains, and the principle may serve as an inductive bias for new AI systems. |
1805.11084 | David K. Lubensky | Alexander D. Golden, Joris Paijmans, and David K. Lubensky | Distinguishing Feedback Mechanisms in Clock Models | Figure numbering and cross-references corrected in v2 | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological oscillators are very diverse but can be classified based on
dynamical motifs such as the types of feedback loops present. The S. elongatus
circadian clock is a remarkable phosphorylation-based oscillator that can be
reconstituted in vitro with only 3 different purified proteins: the clock
proteins KaiA, KaiB, and KaiC. Despite a growing body of knowledge about the
biochemistry of the Kai proteins, basic questions about how their interactions
lead to sustained oscillations remain unanswered. Here, we compare models of
this system that make opposing assumptions about whether KaiA sequestration
introduces a positive or a negative feedback loop. We find that the two
different feedback mechanisms can be distinguished experimentally by the
introduction of a protein that binds competitively with KaiA. Understanding the
dynamical mechanism responsible for oscillations in the Kai system may shed
light on the broader question of what clock architectures have been selected by
evolution and why.
| [
{
"created": "Mon, 28 May 2018 17:53:59 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Jun 2018 03:25:19 GMT",
"version": "v2"
}
] | 2018-06-06 | [
[
"Golden",
"Alexander D.",
""
],
[
"Paijmans",
"Joris",
""
],
[
"Lubensky",
"David K.",
""
]
] | Biological oscillators are very diverse but can be classified based on dynamical motifs such as the types of feedback loops present. The S. elongatus circadian clock is a remarkable phosphorylation-based oscillator that can be reconstituted in vitro with only 3 different purified proteins: the clock proteins KaiA, KaiB, and KaiC. Despite a growing body of knowledge about the biochemistry of the Kai proteins, basic questions about how their interactions lead to sustained oscillations remain unanswered. Here, we compare models of this system that make opposing assumptions about whether KaiA sequestration introduces a positive or a negative feedback loop. We find that the two different feedback mechanisms can be distinguished experimentally by the introduction of a protein that binds competitively with KaiA. Understanding the dynamical mechanism responsible for oscillations in the Kai system may shed light on the broader question of what clock architectures have been selected by evolution and why. |
q-bio/0505051 | Nenad Pavin | Nenad Pavin, Hana Cipcic Paljetak, Vladimir Krstic | Min-protein oscillations in Escherichia coli with spontaneous formation
of two-stranded filaments in a three-dimensional stochastic
reaction-diffusion model | Corrected typos, changed content | Phys. Rev. E 73, 021904 (2006) | 10.1103/PhysRevE.73.021904 | null | q-bio.SC physics.bio-ph q-bio.OT | null | We introduce a three-dimensional stochastic reaction-diffusion model to
describe MinD/MinE dynamical structures in Escherichia coli. This model
spontaneously generates pole-to-pole oscillations of the membrane-associated
MinD proteins, MinE ring, as well as filaments of the membrane-associated MinD
proteins. Experimental data suggest MinD filaments are two-stranded. In order
to model them we assume that each membrane-associated MinD protein can form up
to three bonds with adjacent membrane-associated MinD molecules and that
MinE-induced hydrolysis strongly depends on the number of bonds MinD has
established.
| [
{
"created": "Thu, 26 May 2005 09:57:32 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Feb 2006 15:19:37 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Pavin",
"Nenad",
""
],
[
"Paljetak",
"Hana Cipcic",
""
],
[
"Krstic",
"Vladimir",
""
]
] | We introduce a three-dimensional stochastic reaction-diffusion model to describe MinD/MinE dynamical structures in Escherichia coli. This model spontaneously generates pole-to-pole oscillations of the membrane-associated MinD proteins, MinE ring, as well as filaments of the membrane-associated MinD proteins. Experimental data suggest MinD filaments are two-stranded. In order to model them we assume that each membrane-associated MinD protein can form up to three bonds with adjacent membrane-associated MinD molecules and that MinE-induced hydrolysis strongly depends on the number of bonds MinD has established. |
0905.4232 | Jose Emilio Jimenez | SA Wells, JE Jimenez-Roldan, RA R\"omer | Comparative analysis of rigidity across protein families | 15 pages, 7 figures comprising 21 subfigures | null | 10.1088/1478-3975/6/4/046005 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rigidity analysis using the "pebble game" has been applied to protein crystal
structures to obtain information on protein folding, assembly and the
structure-function relationship. However, previous work using this technique
has not made clear how the set of hydrogen-bond constraints included in the
rigidity analysis should be chosen, nor how sensitive the results of rigidity
analysis are to small structural variations. We present a comparative study in
which "pebble game" rigidity analysis is applied to multiple protein crystal
structures, for each of six different protein families. We find that the
mainchain rigidity of a protein structure at a given hydrogen-bond energy
cutoff is quite sensitive to small structural variations, and conclude that the
hydrogen bond constraints in rigidity analysis should be chosen so as to form
and test specific hypotheses about the rigidity of a particular protein. Our
comparative approach highlights two different characteristic patterns ("sudden"
or "gradual") for protein rigidity loss as constraints are removed, in line
with recent results on the rigidity transitions of glassy networks.
| [
{
"created": "Tue, 26 May 2009 16:04:14 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jun 2009 16:03:23 GMT",
"version": "v2"
}
] | 2015-05-13 | [
[
"Wells",
"SA",
""
],
[
"Jimenez-Roldan",
"JE",
""
],
[
"Römer",
"RA",
""
]
] | Rigidity analysis using the "pebble game" has been applied to protein crystal structures to obtain information on protein folding, assembly and the structure-function relationship. However, previous work using this technique has not made clear how the set of hydrogen-bond constraints included in the rigidity analysis should be chosen, nor how sensitive the results of rigidity analysis are to small structural variations. We present a comparative study in which "pebble game" rigidity analysis is applied to multiple protein crystal structures, for each of six different protein families. We find that the mainchain rigidity of a protein structure at a given hydrogen-bond energy cutoff is quite sensitive to small structural variations, and conclude that the hydrogen bond constraints in rigidity analysis should be chosen so as to form and test specific hypotheses about the rigidity of a particular protein. Our comparative approach highlights two different characteristic patterns ("sudden" or "gradual") for protein rigidity loss as constraints are removed, in line with recent results on the rigidity transitions of glassy networks. |
1610.03596 | Noriaki Ogawa Ph.D. | Noriaki Ogawa, Tetsuo Hatsuda, Atsushi Mochizuki and Masashi Tachikawa | Dynamical Pattern Selection of Growing Cellular Mosaic in Fish Retina | REVTeX, 12 pages, 7 figures, published in Phys.Rev.E (v2) | Phys. Rev. E 96, 032416 (2017) | 10.1103/PhysRevE.96.032416 | RIKEN-QHP-248, RIKEN-STAMP-29 | q-bio.CB cond-mat.soft cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Markovian lattice model for photoreceptor cells is introduced to describe
the growth of mosaic patterns on fish retina. The radial stripe pattern
observed in wild-type zebrafish is shown to be selected naturally during the
retina growth, against the geometrically equivalent, circular stripe pattern.
The mechanism of such dynamical pattern selection is clarified on the basis of
both numerical simulations and theoretical analyses, which find that the
successive emergence of local defects plays a critical role in the realization
of the wild-type pattern.
| [
{
"created": "Wed, 12 Oct 2016 04:21:26 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Oct 2017 07:17:46 GMT",
"version": "v2"
}
] | 2017-10-13 | [
[
"Ogawa",
"Noriaki",
""
],
[
"Hatsuda",
"Tetsuo",
""
],
[
"Mochizuki",
"Atsushi",
""
],
[
"Tachikawa",
"Masashi",
""
]
] | A Markovian lattice model for photoreceptor cells is introduced to describe the growth of mosaic patterns on fish retina. The radial stripe pattern observed in wild-type zebrafish is shown to be selected naturally during the retina growth, against the geometrically equivalent, circular stripe pattern. The mechanism of such dynamical pattern selection is clarified on the basis of both numerical simulations and theoretical analyses, which find that the successive emergence of local defects plays a critical role in the realization of the wild-type pattern. |
1811.09621 | Marwin Segler | Nathan Brown, Marco Fiscato, Marwin H.S. Segler, Alain C. Vaucher | GuacaMol: Benchmarking Models for De Novo Molecular Design | null | null | 10.1021/acs.jcim.8b00839 | null | q-bio.QM cs.LG physics.chem-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | De novo design seeks to generate molecules with required property profiles by
virtual design-make-test cycles. With the emergence of deep learning and neural
generative models in many application areas, models for molecular design based
on neural networks appeared recently and show promising results. However, the
new models have not been profiled on consistent tasks, and comparative studies
to well-established algorithms have only seldom been performed.
To standardize the assessment of both classical and neural models for de novo
molecular design, we propose an evaluation framework, GuacaMol, based on a
suite of standardized benchmarks. The benchmark tasks encompass measuring the
fidelity of the models to reproduce the property distribution of the training
sets, the ability to generate novel molecules, the exploration and exploitation
of chemical space, and a variety of single and multi-objective optimization
tasks. The benchmarking open-source Python code and a leaderboard can be found
at https://benevolent.ai/guacamol
| [
{
"created": "Thu, 22 Nov 2018 18:08:13 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Feb 2019 18:25:21 GMT",
"version": "v2"
}
] | 2021-01-05 | [
[
"Brown",
"Nathan",
""
],
[
"Fiscato",
"Marco",
""
],
[
"Segler",
"Marwin H. S.",
""
],
[
"Vaucher",
"Alain C.",
""
]
] | De novo design seeks to generate molecules with required property profiles by virtual design-make-test cycles. With the emergence of deep learning and neural generative models in many application areas, models for molecular design based on neural networks appeared recently and show promising results. However, the new models have not been profiled on consistent tasks, and comparative studies to well-established algorithms have only seldom been performed. To standardize the assessment of both classical and neural models for de novo molecular design, we propose an evaluation framework, GuacaMol, based on a suite of standardized benchmarks. The benchmark tasks encompass measuring the fidelity of the models to reproduce the property distribution of the training sets, the ability to generate novel molecules, the exploration and exploitation of chemical space, and a variety of single and multi-objective optimization tasks. The benchmarking open-source Python code and a leaderboard can be found at https://benevolent.ai/guacamol |
1403.0472 | Roberto Cavoretto | Iulia Martina Bulai, Roberto Cavoretto, Bruna Chialva, Davide Duma,
Ezio Venturino | Comparing disease control policies for interacting wild populations | null | null | null | null | q-bio.PE math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider interacting population systems of predator-prey type, presenting
four models of control strategies for epidemics among the prey. In particular
to contain the transmissible disease, safety niches are considered, assuming
they lessen the disease spread, but do not protect prey from predators. This
represents a novelty with respect to standard ecosystems where the refuge
prevents predators' attacks. The niche is assumed either to protect the healthy
individuals, or to hinder the infected ones to get in contact with the
susceptibles, or finally to reduce altogether contacts that might lead to new
cases of the infection. In addition a standard culling procedure is also
analysed. The effectiveness of the different strategies is compared. The
environments providing a place where disease carriers cannot come in
contact with the healthy individuals, or where their contact rates are lowered,
seem to be preferable for disease containment.
| [
{
"created": "Mon, 3 Mar 2014 16:07:10 GMT",
"version": "v1"
}
] | 2014-03-04 | [
[
"Bulai",
"Iulia Martina",
""
],
[
"Cavoretto",
"Roberto",
""
],
[
"Chialva",
"Bruna",
""
],
[
"Duma",
"Davide",
""
],
[
"Venturino",
"Ezio",
""
]
] | We consider interacting population systems of predator-prey type, presenting four models of control strategies for epidemics among the prey. In particular to contain the transmissible disease, safety niches are considered, assuming they lessen the disease spread, but do not protect prey from predators. This represents a novelty with respect to standard ecosystems where the refuge prevents predators' attacks. The niche is assumed either to protect the healthy individuals, or to hinder the infected ones to get in contact with the susceptibles, or finally to reduce altogether contacts that might lead to new cases of the infection. In addition a standard culling procedure is also analysed. The effectiveness of the different strategies is compared. The environments providing a place where disease carriers cannot come in contact with the healthy individuals, or where their contact rates are lowered, seem to be preferable for disease containment. |
1602.03244 | Dibya Ghosh | Dibya Jyoti Ghosh | Development of a Computationally Optimized Model of Cancer-induced
Angiogenesis through Specialized Cellular Mechanics | null | null | null | null | q-bio.CB stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Angiogenesis, the development of new vasculature, is a critical process in
the growth of new tumors. Driven by a goal to understand this aspect of cancer
proliferation, I develop a discrete computationally optimized mathematical
model of angiogenesis that specializes in intercellular interactions. I model
vascular endothelial growth factor spread and dynamics of endothelial cell
movement in a competitive environment, with parameters specific to our model
calculated through Dependent Variable Sensitivity Analysis (DVSA) and
experimentally observed data. Through simulation testing, we find the critical
limits of angiogenesis to be 102 m and 153 m respectively, beyond which
angiogenesis will not successfully occur. Cell density in the surrounding
region and the concentration of extracellular matrix fibers are also found to
directly inhibit angiogenesis. Through these three factors, we postulate a
method for establishing criticality of a tumor based upon the likelihood of
angiogenesis completing. This research expands on other work by choosing
factors that are patient-dependent through our specialized Cellular Potts model,
which serves to optimize and increase the accuracy of the model. In doing so, I
establish a theoretical framework for analyzing lesions using angiogenetic
properties, with the ability to potentially compute the criticality of tumors
with the aid of medical imaging technology.
| [
{
"created": "Wed, 10 Feb 2016 02:36:27 GMT",
"version": "v1"
}
] | 2016-02-11 | [
[
"Ghosh",
"Dibya Jyoti",
""
]
] | Angiogenesis, the development of new vasculature, is a critical process in the growth of new tumors. Driven by a goal to understand this aspect of cancer proliferation, I develop a discrete computationally optimized mathematical model of angiogenesis that specializes in intercellular interactions. I model vascular endothelial growth factor spread and dynamics of endothelial cell movement in a competitive environment, with parameters specific to our model calculated through Dependent Variable Sensitivity Analysis (DVSA) and experimentally observed data. Through simulation testing, we find the critical limits of angiogenesis to be 102 m and 153 m respectively, beyond which angiogenesis will not successfully occur. Cell density in the surrounding region and the concentration of extracellular matrix fibers are also found to directly inhibit angiogenesis. Through these three factors, we postulate a method for establishing criticality of a tumor based upon the likelihood of angiogenesis completing. This research expands on other work by choosing factors that are patient-dependent through our specialized Cellular Potts model, which serves to optimize and increase the accuracy of the model. In doing so, I establish a theoretical framework for analyzing lesions using angiogenetic properties, with the ability to potentially compute the criticality of tumors with the aid of medical imaging technology. |
1209.1927 | Juan Poyatos | Matteo Cavaliere and Juan F. Poyatos | Plasticity facilitates sustainable growth in the commons | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the commons, communities whose growth depends on public goods, individuals
often rely on surprisingly simple strategies, or heuristics, to decide whether
to contribute to the common good (at risk of exploitation by free-riders).
Although this appears a limitation, here we show how four heuristics lead to
sustainable growth by exploiting specific environmental constraints. The two
simplest ones --contribute permanently or switch stochastically between
contributing or not-- are first shown to bring sustainability when the public
good efficiently promotes growth. If efficiency declines and the commons is
structured in small groups, the most effective strategy resides in contributing
only when a majority of individuals are also contributors. In contrast, when
group size becomes large, the most effective behavior follows a minimal-effort
rule: contribute only when it is strictly necessary. Both plastic strategies
are observed in natural systems, which presents them as fundamental social motifs
to successfully manage sustainability.
| [
{
"created": "Mon, 10 Sep 2012 10:08:37 GMT",
"version": "v1"
}
] | 2012-09-11 | [
[
"Cavaliere",
"Matteo",
""
],
[
"Poyatos",
"Juan F.",
""
]
] | In the commons, communities whose growth depends on public goods, individuals often rely on surprisingly simple strategies, or heuristics, to decide whether to contribute to the common good (at risk of exploitation by free-riders). Although this appears a limitation, here we show how four heuristics lead to sustainable growth by exploiting specific environmental constraints. The two simplest ones --contribute permanently or switch stochastically between contributing or not-- are first shown to bring sustainability when the public good efficiently promotes growth. If efficiency declines and the commons is structured in small groups, the most effective strategy resides in contributing only when a majority of individuals are also contributors. In contrast, when group size becomes large, the most effective behavior follows a minimal-effort rule: contribute only when it is strictly necessary. Both plastic strategies are observed in natural systems, which presents them as fundamental social motifs to successfully manage sustainability. |
2305.01737 | Ariel Nikas | Ariel Nikas and Hasan Ahmed and Veronika I. Zarnitsyna | Competing Heterogeneities in Vaccine Effectiveness Estimation | null | null | 10.3390/vaccines11081312 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Understanding waning of vaccine-induced protection is important for both
immunology and public health. Population heterogeneities in underlying
(pre-vaccination) susceptibility and vaccine response can cause measured
vaccine effectiveness (mVE) to change over time even in the absence of pathogen
evolution and any actual waning of immune responses. We use multi-scale
agent-based models, parameterized using epidemiological and immunological data,
to investigate the effect of these heterogeneities on mVE as measured by the
hazard ratio. Based on our previous work, we consider waning of antibodies
according to a power law and link it to protection in two ways: 1) motivated by
correlates of risk data and 2) using a within-host model of stochastic viral
extinction. The effect of the heterogeneities is given by concise and
understandable formulas, one of which is essentially a generalization of
Fisher's fundamental theorem of natural selection to include higher
derivatives. Heterogeneity in underlying susceptibility accelerates apparent
waning, whereas heterogeneity in vaccine response slows down apparent waning.
Our models suggest that heterogeneity in underlying susceptibility is likely to
dominate. However, heterogeneity in vaccine response offsets <10% to >100%
(median of 29%) of this effect in our simulations. Our methodology and results
may be helpful in understanding competing heterogeneities and waning of
immunity and vaccine-induced protection. Our study suggests heterogeneity is
more likely to 'bias' mVE downwards towards faster waning of immunity but a
subtle bias in the opposite direction is also plausible.
| [
{
"created": "Tue, 2 May 2023 19:13:02 GMT",
"version": "v1"
},
{
"created": "Fri, 12 May 2023 19:37:40 GMT",
"version": "v2"
}
] | 2023-08-02 | [
[
"Nikas",
"Ariel",
""
],
[
"Ahmed",
"Hasan",
""
],
[
"Zarnitsyna",
"Veronika I.",
""
]
] | Understanding waning of vaccine-induced protection is important for both immunology and public health. Population heterogeneities in underlying (pre-vaccination) susceptibility and vaccine response can cause measured vaccine effectiveness (mVE) to change over time even in the absence of pathogen evolution and any actual waning of immune responses. We use multi-scale agent-based models, parameterized using epidemiological and immunological data, to investigate the effect of these heterogeneities on mVE as measured by the hazard ratio. Based on our previous work, we consider waning of antibodies according to a power law and link it to protection in two ways: 1) motivated by correlates of risk data and 2) using a within-host model of stochastic viral extinction. The effect of the heterogeneities is given by concise and understandable formulas, one of which is essentially a generalization of Fisher's fundamental theorem of natural selection to include higher derivatives. Heterogeneity in underlying susceptibility accelerates apparent waning, whereas heterogeneity in vaccine response slows down apparent waning. Our models suggest that heterogeneity in underlying susceptibility is likely to dominate. However, heterogeneity in vaccine response offsets <10% to >100% (median of 29%) of this effect in our simulations. Our methodology and results may be helpful in understanding competing heterogeneities and waning of immunity and vaccine-induced protection. Our study suggests heterogeneity is more likely to 'bias' mVE downwards towards faster waning of immunity but a subtle bias in the opposite direction is also plausible. |
2307.05338 | Eric Strobl | Eric V. Strobl | Root Causal Inference from Single Cell RNA Sequencing with the Negative
Binomial | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately inferring the root causes of disease from sequencing data can
improve the discovery of novel therapeutic targets. However, existing root
causal inference algorithms require perfectly measured continuous random
variables. Single cell RNA sequencing (scRNA-seq) datasets contain large
numbers of cells but non-negative counts measured by an error-prone process. We
therefore introduce an algorithm called Root Causal Inference with Negative
Binomials (RCI-NB) that accounts for count-based measurement error by
separating negative binomial distributions into their gamma and Poisson
components; the gamma distributions form a fully identifiable but latent post
non-linear causal model representing the true RNA expression levels, which we
only observe with Poisson corruption. RCI-NB identifies patient-specific root
causal contributions from scRNA-seq datasets by integrating novel sparse
regression and goodness of fit testing procedures that bypass Poisson
measurement error. Experiments demonstrate significant improvements over
existing alternatives.
| [
{
"created": "Mon, 10 Jul 2023 17:30:04 GMT",
"version": "v1"
}
] | 2023-07-12 | [
[
"Strobl",
"Eric V.",
""
]
] | Accurately inferring the root causes of disease from sequencing data can improve the discovery of novel therapeutic targets. However, existing root causal inference algorithms require perfectly measured continuous random variables. Single cell RNA sequencing (scRNA-seq) datasets contain large numbers of cells but non-negative counts measured by an error-prone process. We therefore introduce an algorithm called Root Causal Inference with Negative Binomials (RCI-NB) that accounts for count-based measurement error by separating negative binomial distributions into their gamma and Poisson components; the gamma distributions form a fully identifiable but latent post non-linear causal model representing the true RNA expression levels, which we only observe with Poisson corruption. RCI-NB identifies patient-specific root causal contributions from scRNA-seq datasets by integrating novel sparse regression and goodness of fit testing procedures that bypass Poisson measurement error. Experiments demonstrate significant improvements over existing alternatives. |
2307.00813 | Jun Guo | Ruoyang Zhao, Feng Gao, Maoyu Li, Xingkun Niu, Shihao Liu, Xinmin
Zhao, Liping Wang, Jun Guo and Feng Zhang | Regulating the Hydrophobic Domain in Peptide-Catecholamine Coassembled
Nanostructures for Fluorescence Enhancement | 19 pages, 5 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hydrophobic domains provide a specific microenvironment for essential
functional activities in life. Herein, we studied how the coassembling of
peptides with catecholamines regulates the hydrophobic domain-containing
nanostructures for fluorescence enhancement. By peptide encoding and
coassembling with catecholamines of different hydrophilicities, a series of
hierarchical assembling systems were constructed. In combination with molecular
dynamics simulation, we experimentally discovered that the hydrophobic domain of
the chromophore microenvironment regulates the fluorescence of coassembled
nanostructures. Our results shed light on the rational design of fluorescent
bio-coassembled nanoprobes for biomedical applications.
| [
{
"created": "Mon, 3 Jul 2023 07:53:05 GMT",
"version": "v1"
}
] | 2023-07-04 | [
[
"Zhao",
"Ruoyang",
""
],
[
"Gao",
"Feng",
""
],
[
"Li",
"Maoyu",
""
],
[
"Niu",
"Xingkun",
""
],
[
"Liu",
"Shihao",
""
],
[
"Zhao",
"Xinmin",
""
],
[
"Wang",
"Liping",
""
],
[
"Guo",
"Jun",
""
],
[
"Zhang",
"Feng",
""
]
] | Hydrophobic domains provide a specific microenvironment for essential functional activities in life. Herein, we studied how the coassembling of peptides with catecholamines regulates the hydrophobic domain-containing nanostructures for fluorescence enhancement. By peptide encoding and coassembling with catecholamines of different hydrophilicities, a series of hierarchical assembling systems were constructed. In combination with molecular dynamics simulation, we experimentally discovered that the hydrophobic domain of the chromophore microenvironment regulates the fluorescence of coassembled nanostructures. Our results shed light on the rational design of fluorescent bio-coassembled nanoprobes for biomedical applications. |
2201.05408 | Zhitong Bing | Dietrich Kong, Ke Wang, Qiu-Ning Zhang and Zhi-Tong Bing | Systematic analysis reveals key microRNAs as diagnostic and prognostic
factors in progressive stages of lung cancer | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MicroRNAs play an indispensable role in numerous biological processes ranging
from organismic development to tumor progression. In oncology, these microRNAs
play a fundamental regulatory role in the pathology of cancer, which
provides the basis for probing their influence on clinical features through
transcriptome data. Previous work focused on machine learning (ML) for
searching biomarkers in different cancer databases, but the functions of these
biomarkers are not fully clear. Taking lung cancer as a prototype case of
study, we integrated clinical information into the transcript expression
data and systematically analyzed the effect of microRNA on diagnostic and
prognostic factors in deteriorative lung adenocarcinoma (LUAD). After dimension
reduction, unsupervised hierarchical clustering was used to find the diagnostic
factors which represent the unique expression patterns of microRNA at various
patient stages. In addition, we developed a classification framework, Light
Gradient Boosting Machine (LightGBM) and SHAPley Additive explanation (SHAP)
algorithm, to screen out the prognostic factors. Enrichment analyses show that
the diagnostic and prognostic factors are not only enriched in cancer-related
pathways, but also involved in many vital cellular signal transduction and
immune responses. These key microRNAs also impact the survival risk of LUAD
patients at all (or a specific) stage(s) and some of them target some important
Transcription Factors (TFs). The key finding is that five microRNAs
(hsa-mir-196b, hsa-mir-31, hsa-mir-891a, hsa-mir-34c, and hsa-mir-653) can then
serve not only as potential diagnostic factors but also as prognostic tools in the
monitoring of lung cancer.
| [
{
"created": "Fri, 14 Jan 2022 11:59:34 GMT",
"version": "v1"
}
] | 2022-01-17 | [
[
"Kong",
"Dietrich",
""
],
[
"Wang",
"Ke",
""
],
[
"Zhang",
"Qiu-Ning",
""
],
[
"Bing",
"Zhi-Tong",
""
]
] | MicroRNAs play an indispensable role in numerous biological processes ranging from organismic development to tumor progression. In oncology, these microRNAs play a fundamental regulatory role in the pathology of cancer, which provides the basis for probing their influence on clinical features through transcriptome data. Previous work focused on machine learning (ML) for searching biomarkers in different cancer databases, but the functions of these biomarkers are not fully clear. Taking lung cancer as a prototype case of study, we integrated clinical information into the transcript expression data and systematically analyzed the effect of microRNA on diagnostic and prognostic factors in deteriorative lung adenocarcinoma (LUAD). After dimension reduction, unsupervised hierarchical clustering was used to find the diagnostic factors which represent the unique expression patterns of microRNA at various patient stages. In addition, we developed a classification framework, Light Gradient Boosting Machine (LightGBM) and SHAPley Additive explanation (SHAP) algorithm, to screen out the prognostic factors. Enrichment analyses show that the diagnostic and prognostic factors are not only enriched in cancer-related pathways, but also involved in many vital cellular signal transduction and immune responses. These key microRNAs also impact the survival risk of LUAD patients at all (or a specific) stage(s) and some of them target some important Transcription Factors (TFs). The key finding is that five microRNAs (hsa-mir-196b, hsa-mir-31, hsa-mir-891a, hsa-mir-34c, and hsa-mir-653) can then serve not only as potential diagnostic factors but also as prognostic tools in the monitoring of lung cancer. |
1906.07564 | Andrei D. Robu | Andrei D. Robu, Christoph Salge, Chrystopher L. Nehaniv, Daniel Polani | Measuring Time with Minimal Clocks | Article submitted to the Artificial Life journal selected papers from
ECAL 2017 issue. This document is the version after the review phase. Journal
version of arXiv:1706.07091 | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Being able to measure time, whether directly or indirectly, is a significant
advantage for an organism. It allows for the timely reaction to regular or
predicted events, reducing the pressure for fast processing of sensory input.
Thus, clocks are ubiquitous in biology. In the present paper, we consider
minimal abstract pure clocks in different configurations and investigate their
characteristic dynamics. We are especially interested in optimally
time-resolving clocks. Among these, we find fundamentally diametral clock
characteristics, such as oscillatory behavior for purely local time measurement
or decay-based clocks measuring time periods of a scale global to the problem.
We also include sets of independent clocks ("clock bags"), sequential cascades
of clocks and composite clocks with controlled dependency. Clock cascades show
a "condensation effect" and the composite clock shows various regimes of
markedly different dynamics.
| [
{
"created": "Mon, 17 Jun 2019 16:13:56 GMT",
"version": "v1"
}
] | 2019-06-19 | [
[
"Robu",
"Andrei D.",
""
],
[
"Salge",
"Christoph",
""
],
[
"Nehaniv",
"Chrystopher L.",
""
],
[
"Polani",
"Daniel",
""
]
] | Being able to measure time, whether directly or indirectly, is a significant advantage for an organism. It allows for the timely reaction to regular or predicted events, reducing the pressure for fast processing of sensory input. Thus, clocks are ubiquitous in biology. In the present paper, we consider minimal abstract pure clocks in different configurations and investigate their characteristic dynamics. We are especially interested in optimally time-resolving clocks. Among these, we find fundamentally diametral clock characteristics, such as oscillatory behavior for purely local time measurement or decay-based clocks measuring time periods of a scale global to the problem. We also include sets of independent clocks ("clock bags"), sequential cascades of clocks and composite clocks with controlled dependency. Clock cascades show a "condensation effect" and the composite clock shows various regimes of markedly different dynamics. |
q-bio/0612049 | L\'eonard G\'erard | L\'eonard G\'erard and Jean-Jacques Slotine | Neuronal networks and controlled symmetries, a generic framework | Last editorial changes, long version of the paper | null | null | null | q-bio.NC | null | The extraordinary computational power of the brain may be related in part to
the fact that each of the smaller neural networks that compose it can behave
transiently in many different ways, depending on its inputs. Mathematically,
input continuity helps to show how a large network, constructed recursively
from smaller blocks, can exhibit robust specific properties according to its
input. By extending earlier work on synchrony and symmetry, we exploit input
continuity of contracting systems to ensure robust control of diverse spatial
and spatio-temporal symmetries of the output signal in such a network.
| [
{
"created": "Thu, 28 Dec 2006 00:24:55 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jul 2007 11:23:46 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Dec 2007 02:39:14 GMT",
"version": "v3"
},
{
"created": "Sat, 29 Mar 2008 16:38:11 GMT",
"version": "v4"
}
] | 2008-03-29 | [
[
"Gérard",
"Léonard",
""
],
[
"Slotine",
"Jean-Jacques",
""
]
] | The extraordinary computational power of the brain may be related in part to the fact that each of the smaller neural networks that compose it can behave transiently in many different ways, depending on its inputs. Mathematically, input continuity helps to show how a large network, constructed recursively from smaller blocks, can exhibit robust specific properties according to its input. By extending earlier work on synchrony and symmetry, we exploit input continuity of contracting systems to ensure robust control of diverse spatial and spatio-temporal symmetries of the output signal in such a network. |
2104.11644 | Fan Zhang | Fan Zhang, Alessandro Daducci, Yong He, Simona Schiavi, Caio Seguin,
Robert Smith, Chun-Hung Yeh, Tengda Zhao, Lauren J. O'Donnell | Quantitative mapping of the brain's structural connectivity using
diffusion MRI tractography: a review | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Diffusion magnetic resonance imaging (dMRI) tractography is an advanced
imaging technique that enables in vivo mapping of the brain's white matter
connections at macro scale. Over the last two decades, the study of brain
connectivity using dMRI tractography has played a prominent role in the
neuroimaging research landscape. In this paper, we provide a high-level
overview of how tractography is used to enable quantitative analysis of the
brain's structural connectivity in health and disease. We first provide a
review of methodology involved in three main processing steps that are common
across most approaches for quantitative analysis of tractography, including
methods for tractography correction, segmentation and quantification. For each
step, we aim to describe methodological choices, their popularity, and
potential pros and cons. We then review studies that have used quantitative
tractography approaches to study the brain's white matter, focusing on
applications in neurodevelopment, aging, neurological disorders, mental
disorders, and neurosurgery. We conclude that, while there have been
considerable advancements in methodological technologies and breadth of
applications, there nevertheless remains no consensus about the "best"
methodology in quantitative analysis of tractography, and researchers should
remain cautious when interpreting results in research and clinical
applications.
| [
{
"created": "Fri, 23 Apr 2021 14:50:11 GMT",
"version": "v1"
}
] | 2021-04-26 | [
[
"Zhang",
"Fan",
""
],
[
"Daducci",
"Alessandro",
""
],
[
"He",
"Yong",
""
],
[
"Schiavi",
"Simona",
""
],
[
"Seguin",
"Caio",
""
],
[
"Smith",
"Robert",
""
],
[
"Yeh",
"Chun-Hung",
""
],
[
"Zhao",
"Tengda",
""
],
[
"O'Donnell",
"Lauren J.",
""
]
] | Diffusion magnetic resonance imaging (dMRI) tractography is an advanced imaging technique that enables in vivo mapping of the brain's white matter connections at macro scale. Over the last two decades, the study of brain connectivity using dMRI tractography has played a prominent role in the neuroimaging research landscape. In this paper, we provide a high-level overview of how tractography is used to enable quantitative analysis of the brain's structural connectivity in health and disease. We first provide a review of methodology involved in three main processing steps that are common across most approaches for quantitative analysis of tractography, including methods for tractography correction, segmentation and quantification. For each step, we aim to describe methodological choices, their popularity, and potential pros and cons. We then review studies that have used quantitative tractography approaches to study the brain's white matter, focusing on applications in neurodevelopment, aging, neurological disorders, mental disorders, and neurosurgery. We conclude that, while there have been considerable advancements in methodological technologies and breadth of applications, there nevertheless remains no consensus about the "best" methodology in quantitative analysis of tractography, and researchers should remain cautious when interpreting results in research and clinical applications. |
1206.5453 | Paolo Masucci | Paolo Masucci, Sophie Arnaud-Haond, V\'ictor M. Egu\'iluz, Emilio
Hern\'andez-Garc\'ia and Ester A. Serr\~ao | Genetic flow directionality and geographical segregation in a Cymodocea
nodosa genetic diversity network | null | EPJ Data Science, 1:11, 2012 | 10.1140/epjds11 | null | q-bio.PE physics.bio-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyse a large data set of genetic markers obtained from populations of
Cymodocea nodosa, a marine plant occurring from the East Mediterranean to the
Iberian-African coasts in the Atlantic Ocean. We fully develop and test a
recently introduced methodology to infer the directionality of gene flow based
on the concept of geographical segregation. Using the Jensen-Shannon
divergence, we are able to extract a directed network of gene flow describing
the evolutionary patterns of Cymodocea nodosa. In particular we recover the
genetic segregation that the marine plant underwent during its evolution. The
results are confirmed by natural evidence and are consistent with an
independent cross analysis.
| [
{
"created": "Sun, 24 Jun 2012 00:57:27 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jun 2012 10:34:58 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Oct 2012 14:41:09 GMT",
"version": "v3"
}
] | 2013-02-13 | [
[
"Masucci",
"Paolo",
""
],
[
"Arnaud-Haond",
"Sophie",
""
],
[
"Eguíluz",
"Víctor M.",
""
],
[
"Hernández-García",
"Emilio",
""
],
[
"Serrão",
"Ester A.",
""
]
] | We analyse a large data set of genetic markers obtained from populations of Cymodocea nodosa, a marine plant occurring from the East Mediterranean to the Iberian-African coasts in the Atlantic Ocean. We fully develop and test a recently introduced methodology to infer the directionality of gene flow based on the concept of geographical segregation. Using the Jensen-Shannon divergence, we are able to extract a directed network of gene flow describing the evolutionary patterns of Cymodocea nodosa. In particular we recover the genetic segregation that the marine plant underwent during its evolution. The results are confirmed by natural evidence and are consistent with an independent cross analysis. |
1310.7899 | Renquan Zhang | Renquan Zhang | Evolution of autocatalytic sets in a competitive percolation model | 16 pages,5 figures, 2 tables | null | 10.1088/1742-5468/2014/11/P11018 | null | q-bio.MN physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of autocatalytic sets (ACS) is a widespread process in
biological, chemical and ecological systems which is of great significance in
many applications, such as the evolution of new species or complex chemical
organizations. In this paper, we propose a competitive model with an m-selection
rule in which an abrupt emergence of a macroscopic independent ACS is observed.
By numerical simulations, we find that the maximal increase of the size grows
linearly with the system size. We analytically derive the threshold t_{\alpha}
where the abrupt jump happens and verify it by simulations. Moreover, our
analysis explains how this giant independent ACS grows and reveals that, as the
selection rule becomes more strict, the phase transition is dramatically
postponed, and the number of the largest independent ACSs coexisting in the
system increases accordingly. Our research work deepens the understanding of
the evolution of ACS and should provide useful information for designing
strategies to control the emergence of ACS in corresponding applications.
| [
{
"created": "Sat, 26 Oct 2013 03:31:54 GMT",
"version": "v1"
}
] | 2014-12-18 | [
[
"Zhang",
"Renquan",
""
]
] | The evolution of autocatalytic sets (ACS) is a widespread process in biological, chemical and ecological systems which is of great significance in many applications, such as the evolution of new species or complex chemical organizations. In this paper, we propose a competitive model with an m-selection rule in which an abrupt emergence of a macroscopic independent ACS is observed. By numerical simulations, we find that the maximal increase of the size grows linearly with the system size. We analytically derive the threshold t_{\alpha} where the abrupt jump happens and verify it by simulations. Moreover, our analysis explains how this giant independent ACS grows and reveals that, as the selection rule becomes more strict, the phase transition is dramatically postponed, and the number of the largest independent ACSs coexisting in the system increases accordingly. Our research work deepens the understanding of the evolution of ACS and should provide useful information for designing strategies to control the emergence of ACS in corresponding applications. |
0803.3467 | Tiago Ribeiro | Tiago L. Ribeiro and Mauro Copelli | Deterministic excitable media under Poisson drive: power law responses,
spiral waves and dynamic range | 10 pages, 5 figures, final version | Phys. Rev. E 77, 051911 (2008) | 10.1103/PhysRevE.77.051911 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | When each site of a spatially extended excitable medium is independently
driven by a Poisson stimulus with rate h, the interplay between creation and
annihilation of excitable waves leads to an average activity F. It has recently
been suggested that in the low-stimulus regime (h ~ 0) the response function
F(h) of hypercubic deterministic systems behaves as a power law, F ~ h^m.
Moreover the response exponent m has been predicted to depend only on the
dimensionality d of the lattice, m = 1/(1+d) [T. Ohta and T. Yoshimura, Physica
D 205, 189 (2005)]. In order to test this prediction, we study the response
function of excitable lattices modeled by either coupled Morris-Lecar equations
or Greenberg-Hastings cellular automata. We show that the prediction is
verified in our model systems for d = 1, 2, and 3, provided that a minimum set
of conditions is satisfied. Under these conditions, the dynamic range - which
measures the range of stimulus intensities that can be coded by the network
activity - increases with the dimensionality d of the network. The power law
scenario breaks down, however, if the system can exhibit self-sustained
activity (spiral waves). In this case, we recover a scenario that is common to
probabilistic excitable media: as a function of the conductance coupling G
among the excitable elements, the dynamic range is maximized precisely at the
critical value G_c above which self-sustained activity becomes stable. We
discuss the implications of these results in the context of neural coding.
| [
{
"created": "Mon, 24 Mar 2008 21:35:09 GMT",
"version": "v1"
},
{
"created": "Thu, 22 May 2008 18:06:01 GMT",
"version": "v2"
}
] | 2008-05-22 | [
[
"Ribeiro",
"Tiago L.",
""
],
[
"Copelli",
"Mauro",
""
]
] | When each site of a spatially extended excitable medium is independently driven by a Poisson stimulus with rate h, the interplay between creation and annihilation of excitable waves leads to an average activity F. It has recently been suggested that in the low-stimulus regime (h ~ 0) the response function F(h) of hypercubic deterministic systems behaves as a power law, F ~ h^m. Moreover the response exponent m has been predicted to depend only on the dimensionality d of the lattice, m = 1/(1+d) [T. Ohta and T. Yoshimura, Physica D 205, 189 (2005)]. In order to test this prediction, we study the response function of excitable lattices modeled by either coupled Morris-Lecar equations or Greenberg-Hastings cellular automata. We show that the prediction is verified in our model systems for d = 1, 2, and 3, provided that a minimum set of conditions is satisfied. Under these conditions, the dynamic range - which measures the range of stimulus intensities that can be coded by the network activity - increases with the dimensionality d of the network. The power law scenario breaks down, however, if the system can exhibit self-sustained activity (spiral waves). In this case, we recover a scenario that is common to probabilistic excitable media: as a function of the conductance coupling G among the excitable elements, the dynamic range is maximized precisely at the critical value G_c above which self-sustained activity becomes stable. We discuss the implications of these results in the context of neural coding. |
2306.17202 | Kazuma Inoue | Kazuma Inoue, Ryosuke Kojima, Mayumi Kamada, Yasushi Okuno | An end-to-end framework for gene expression classification by
integrating a background knowledge graph: application to cancer prognosis
prediction | 19 pages including supplementary materials | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Biological data may be separated into primary data, such as gene expression,
and secondary data, such as pathways and protein-protein interactions. Methods
using secondary data to enhance the analysis of primary data are promising,
because secondary data have background information that is not included in
primary data. In this study, we proposed an end-to-end framework to integrally
handle secondary data to construct a classification model for primary data. We
applied this framework to cancer prognosis prediction using gene expression
data and a biological network. Cross-validation results indicated that our
model achieved higher accuracy compared with a deep neural network model
without background biological network information. Experiments conducted in
patient groups by cancer type showed improvement in ROC-area under the curve
for many groups. Visualizations of high accuracy cancer types identified
contributing genes and pathways by enrichment analysis. Known biomarkers and
novel biomarker candidates were identified through these experiments.
| [
{
"created": "Thu, 29 Jun 2023 11:20:47 GMT",
"version": "v1"
}
] | 2023-07-03 | [
[
"Inoue",
"Kazuma",
""
],
[
"Kojima",
"Ryosuke",
""
],
[
"Kamada",
"Mayumi",
""
],
[
"Okuno",
"Yasushi",
""
]
] | Biological data may be separated into primary data, such as gene expression, and secondary data, such as pathways and protein-protein interactions. Methods using secondary data to enhance the analysis of primary data are promising, because secondary data have background information that is not included in primary data. In this study, we proposed an end-to-end framework to integrally handle secondary data to construct a classification model for primary data. We applied this framework to cancer prognosis prediction using gene expression data and a biological network. Cross-validation results indicated that our model achieved higher accuracy compared with a deep neural network model without background biological network information. Experiments conducted in patient groups by cancer type showed improvement in ROC-area under the curve for many groups. Visualizations of high accuracy cancer types identified contributing genes and pathways by enrichment analysis. Known biomarkers and novel biomarker candidates were identified through these experiments. |
1108.2091 | Subhadip Raychaudhuri | Subhadip Raychaudhuri | Low probability Bid-Bax reaction generates heterogeneity in apoptosis
resistance of cancer and cancer stem cells | 17 pages, 5 figures | null | null | null | q-bio.MN physics.bio-ph physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variability in the tumorigenic potential among cancer cells within a tumor
population is an unresolved fundamental issue in cancer biology. It is
important to know whether cancer cells with higher tumorigenic potential, such
as cancer stem cells, are only a small subpopulation. We attempt to address the
question of variability in tumorigenic potential based on the heterogeneity in
apoptosis resistance of cancer cells. We use stochastic differential equations
and kinetic Monte Carlo simulations to explore the mechanisms that generate
cell-to-cell variability in apoptosis resistance of cancer cells. In our model,
a simplified scheme of apoptosis signaling reactions is developed focusing on
the proapoptotic Bid-Bax reaction and its inhibition by Bcl-2-like
antiapoptotic proteins. We show how a combination of low probability Bid-Bax
reaction along with overexpressed reactant molecules allows specific killing of
cancer cells, especially under targeted therapy such as Bcl-2 inhibition. This
low probability Bid-Bax reaction protects normal cells from accidental
apoptosis but generates cell-to-cell stochastic variability in apoptotic
activation of cells equipped with overexpressed Bid and Bax molecules. We
further demonstrate that cellular variations in Bcl-2 / Bax ratio, within a
cancer cell population, can affect the intrinsic fluctuations arising from the
stochastic Bid-Bax reaction, and thereby provide a mechanism for the origin of
cells with higher tumorigenic potential. We discuss the implications of our
results for cancer therapy, such as optimal strategies to minimize stochastic
fluctuations in cancer cell death.
| [
{
"created": "Wed, 10 Aug 2011 03:58:28 GMT",
"version": "v1"
}
] | 2011-11-30 | [
[
"Raychaudhuri",
"Subhadip",
""
]
] | Variability in the tumorigenic potential among cancer cells within a tumor population is an unresolved fundamental issue in cancer biology. It is important to know whether cancer cells with higher tumorigenic potential, such as cancer stem cells, are only a small subpopulation. We attempt to address the question of variability in tumorigenic potential based on the heterogeneity in apoptosis resistance of cancer cells. We use stochastic differential equations and kinetic Monte Carlo simulations to explore the mechanisms that generate cell-to-cell variability in apoptosis resistance of cancer cells. In our model, a simplified scheme of apoptosis signaling reactions is developed focusing on the proapoptotic Bid-Bax reaction and its inhibition by Bcl-2-like antiapoptotic proteins. We show how a combination of low probability Bid-Bax reaction along with overexpressed reactant molecules allows specific killing of cancer cells, especially under targeted therapy such as Bcl-2 inhibition. This low probability Bid-Bax reaction protects normal cells from accidental apoptosis but generates cell-to-cell stochastic variability in apoptotic activation of cells equipped with overexpressed Bid and Bax molecules. We further demonstrate that cellular variations in Bcl-2 / Bax ratio, within a cancer cell population, can affect the intrinsic fluctuations arising from the stochastic Bid-Bax reaction, and thereby provide a mechanism for the origin of cells with higher tumorigenic potential. We discuss the implications of our results for cancer therapy, such as optimal strategies to minimize stochastic fluctuations in cancer cell death. |
1708.04695 | Carlos Floyd | Carlos S. Floyd, Christopher Jarzynski, Garegin A. Papoian | Low-Dimensional Manifold of Actin Polymerization Dynamics | null | null | 10.1088/1367-2630/aa9641 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Actin filaments are critical components of the eukaryotic cytoskeleton,
playing important roles in a number of cellular functions, such as cell
migration, organelle transport, and mechanosensation. They are helical polymers
with a well-defined polarity, composed of globular monomers that bind
nucleotides in one of three hydrolysis states (ATP, ADP-Pi, or ADP). Mean-field
models of the dynamics of actin polymerization have succeeded in, among other
things, determining the nucleotide profile of an average filament and resolving
the mechanisms of accessory proteins, however these models require numerical
solution of a high-dimensional system of nonlinear ODE's. By truncating a set
of recursion equations, the Brooks-Carlsson model reduces dimensionality to 11,
but it remains nonlinear and does not admit an analytical solution, hence,
significantly hindering understanding of its resulting dynamics. In this work,
by taking advantage of the fast timescales of the hydrolysis states of the
filament tips, we propose two model reduction schemes that achieve low
dimensionality and linearity. We provide an exact solution of the resulting
linear equations and use it to shed light on the dynamical behaviors of the
full BC model, highlighting the relative ordering of the timescales of various
collective processes, and explaining some unusual dependence of the
steady-state behavior on initial conditions.
| [
{
"created": "Tue, 15 Aug 2017 21:24:58 GMT",
"version": "v1"
}
] | 2018-01-17 | [
[
"Floyd",
"Carlos S.",
""
],
[
"Jarzynski",
"Christopher",
""
],
[
"Papoian",
"Garegin A.",
""
]
] | Actin filaments are critical components of the eukaryotic cytoskeleton, playing important roles in a number of cellular functions, such as cell migration, organelle transport, and mechanosensation. They are helical polymers with a well-defined polarity, composed of globular monomers that bind nucleotides in one of three hydrolysis states (ATP, ADP-Pi, or ADP). Mean-field models of the dynamics of actin polymerization have succeeded in, among other things, determining the nucleotide profile of an average filament and resolving the mechanisms of accessory proteins; however, these models require numerical solution of a high-dimensional system of nonlinear ODEs. By truncating a set of recursion equations, the Brooks-Carlsson model reduces dimensionality to 11, but it remains nonlinear and does not admit an analytical solution, which significantly hinders understanding of its resulting dynamics. In this work, by taking advantage of the fast timescales of the hydrolysis states of the filament tips, we propose two model reduction schemes that achieve low dimensionality and linearity. We provide an exact solution of the resulting linear equations and use it to shed light on the dynamical behaviors of the full BC model, highlighting the relative ordering of the timescales of various collective processes, and explaining some unusual dependence of the steady-state behavior on initial conditions. |
2004.06888 | Sheldon Tan | Sheldon X.D. Tan and Liang Chen | Real-Time Differential Epidemic Analysis and Prediction for COVID-19
Pandemic | 9 pages, first submission | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new real-time differential virus transmission
model, which can give more accurate and robust short-term predictions of
COVID-19 transmitted infectious disease with benefits of near-term trend
projection. Different from the existing Susceptible-Exposed-Infected-Removed
(SEIR) based virus transmission models, which fit well for pandemic modeling
with sufficient historical data, the new model, which is also SEIR based, uses
short history data to find the trend of the changing disease dynamics for the
infected, the dead and the recovered so that it can naturally accommodate the
adaptive real-time changes of disease mitigation, business activity and social
behavior of populations. As the parameters of the improved SEIR models are
trained by short history window data for accurate trend prediction, our
differential epidemic model is essentially a window-based time-varying SEIR
model. Since the SEIR model is still a physics-based disease transmission model,
its near-term (like one month) projection can still be very instrumental for
policy makers to guide their decision for disease mitigation and business
activity policy change in real time. This is especially useful if the
pandemic lasts more than one year with different phases across the world like
the 1918 flu pandemic. Numerical results on the recent COVID-19 data from China,
Italy and US, California and New York states have been analyzed.
| [
{
"created": "Wed, 15 Apr 2020 05:38:08 GMT",
"version": "v1"
},
{
"created": "Sun, 3 May 2020 23:31:39 GMT",
"version": "v2"
}
] | 2020-05-05 | [
[
"Tan",
"Sheldon X. D.",
""
],
[
"Chen",
"Liang",
""
]
] | In this paper, we propose a new real-time differential virus transmission model, which can give more accurate and robust short-term predictions of COVID-19 transmitted infectious disease with benefits of near-term trend projection. Different from the existing Susceptible-Exposed-Infected-Removed (SEIR) based virus transmission models, which fit well for pandemic modeling with sufficient historical data, the new model, which is also SEIR based, uses short history data to find the trend of the changing disease dynamics for the infected, the dead and the recovered so that it can naturally accommodate the adaptive real-time changes of disease mitigation, business activity and social behavior of populations. As the parameters of the improved SEIR models are trained by short history window data for accurate trend prediction, our differential epidemic model is essentially a window-based time-varying SEIR model. Since the SEIR model is still a physics-based disease transmission model, its near-term (like one month) projection can still be very instrumental for policy makers to guide their decision for disease mitigation and business activity policy change in real time. This is especially useful if the pandemic lasts more than one year with different phases across the world like the 1918 flu pandemic. Numerical results on the recent COVID-19 data from China, Italy and US, California and New York states have been analyzed.
2112.11722 | Nicola Parolini | Nicola Parolini, Luca Dede', Giovanni Ardenghi and Alfio Quarteroni | Modelling the COVID-19 epidemic and the vaccination campaign in Italy by
the SUIHTER model | 34 pages, 10 figures, 3 tables | null | null | null | q-bio.PE cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several epidemiological models have been proposed to study the evolution of
COVID-19 pandemic. In this paper, we propose an extension of the SUIHTER model,
first introduced in [Parolini et al, Proc R. Soc. A., 2021] to analyse the
COVID-19 spreading in Italy, which accounts for the vaccination campaign and
the presence of new variants when they become dominant. In particular, the
specific features of the variants (e.g. their increased transmission rate) and
vaccines (e.g. their efficacy to prevent transmission, hospitalization and
death) are modeled, based on clinical evidence. The new model is validated
comparing its near-future forecast capabilities with other epidemiological
models and exploring different scenario analyses.
| [
{
"created": "Wed, 22 Dec 2021 08:13:49 GMT",
"version": "v1"
}
] | 2021-12-23 | [
[
"Parolini",
"Nicola",
""
],
[
"Dede'",
"Luca",
""
],
[
"Ardenghi",
"Giovanni",
""
],
[
"Quarteroni",
"Alfio",
""
]
] | Several epidemiological models have been proposed to study the evolution of COVID-19 pandemic. In this paper, we propose an extension of the SUIHTER model, first introduced in [Parolini et al, Proc R. Soc. A., 2021] to analyse the COVID-19 spreading in Italy, which accounts for the vaccination campaign and the presence of new variants when they become dominant. In particular, the specific features of the variants (e.g. their increased transmission rate) and vaccines (e.g. their efficacy to prevent transmission, hospitalization and death) are modeled, based on clinical evidence. The new model is validated comparing its near-future forecast capabilities with other epidemiological models and exploring different scenario analyses. |
1505.00660 | Tam\'as Szabados | Tam\'as Szabados and G\'abor Tusn\'ady and L\'aszl\'o Varga and Tibor
Bak\'acs | A stochastic model of B cell affinity maturation and a network model of
immune memory | 20 pages, 10 figures, manuscript 1998 | null | null | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many events in the vertebrate immune system are influenced by some element of
chance. The objective of the present work is to describe affinity maturation of
B lymphocytes (in which random events are perhaps the most characteristic), and
to study a possible network model of immune memory. In our model stochastic
processes govern all events. A major novelty of this approach is that it
permits studying random variations in the immune process. Four basic components
are simulated in the model: non-immune self cells, nonself cells (pathogens), B
lymphocytes, and bone marrow cells that produce naive B lymphocytes. A point in
a generalized shape space plus the size of the corresponding population
represents nonself and non-immune self cells. On the other hand, each
individual B cell is represented by a disc that models its recognition region
in the shape space. Infection is simulated by an "injection" of nonself cells
into the system. Division of pathogens may instigate an attack of naive B
cells, which in turn may induce clonal proliferation and hypermutation in the
attacking B cells, and which eventually may slow down and stop the exponential
growth of pathogens. Affinity maturation of newly produced B cells becomes
expressed as a result of selection when the number of pathogens decreases.
Under favorable conditions, the expanded primary B cell clones may stimulate
the expansion of secondary B cell clones carrying complementary receptors to
the stimulating B cells. Like in a hall of mirrors, the image of pathogens in
the primary B cell clones then will be reflected in secondary B cell clones.
This "ping-pong" game may survive for a long time even in the absence of the
pathogen, creating a local network memory. This memory ensures that repeated
infection by the same pathogen will be eliminated more efficiently.
| [
{
"created": "Mon, 4 May 2015 14:42:35 GMT",
"version": "v1"
}
] | 2015-05-05 | [
[
"Szabados",
"Tamás",
""
],
[
"Tusnády",
"Gábor",
""
],
[
"Varga",
"László",
""
],
[
"Bakács",
"Tibor",
""
]
] | Many events in the vertebrate immune system are influenced by some element of chance. The objective of the present work is to describe affinity maturation of B lymphocytes (in which random events are perhaps the most characteristic), and to study a possible network model of immune memory. In our model stochastic processes govern all events. A major novelty of this approach is that it permits studying random variations in the immune process. Four basic components are simulated in the model: non-immune self cells, nonself cells (pathogens), B lymphocytes, and bone marrow cells that produce naive B lymphocytes. A point in a generalized shape space plus the size of the corresponding population represents nonself and non-immune self cells. On the other hand, each individual B cell is represented by a disc that models its recognition region in the shape space. Infection is simulated by an "injection" of nonself cells into the system. Division of pathogens may instigate an attack of naive B cells, which in turn may induce clonal proliferation and hypermutation in the attacking B cells, and which eventually may slow down and stop the exponential growth of pathogens. Affinity maturation of newly produced B cells becomes expressed as a result of selection when the number of pathogens decreases. Under favorable conditions, the expanded primary B cell clones may stimulate the expansion of secondary B cell clones carrying complementary receptors to the stimulating B cells. Like in a hall of mirrors, the image of pathogens in the primary B cell clones then will be reflected in secondary B cell clones. This "ping-pong" game may survive for a long time even in the absence of the pathogen, creating a local network memory. This memory ensures that repeated infection by the same pathogen will be eliminated more efficiently. |
2205.02650 | Jeremy Rothschild | Nava Leibovich, Jeremy Rothschild, Sidhartha Goyal, Anton Zilman | Phenomenology and dynamics of competitive ecosystems beyond the
niche-neutral regimes | Main: 13 pages, 5 figures. Supplementary: 12 pages, 6 figures | Proceedings of the National Academy of Sciences (PNAS). 119(43)
(2022) | 10.1073/pnas.2204394119 | null | q-bio.PE physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Structure, composition and stability of ecological populations are shaped by
the inter- and intra-species interactions within these communities. It remains
to be fully understood how the interplay of these interactions with other
factors, such as immigration, control the structure, diversity and the long
term stability of ecological systems in the presence of noise and fluctuations.
We address this problem using a minimal model of interacting multi-species
ecological communities that incorporates competition, immigration and
demographic noise. We find that the complete phase diagram exhibits rich
behavior with multiple regimes that go beyond the classical 'niche' and
'neutral' regimes, extending and modifying the 'neutral-like' or 'niche-like'
dichotomy. In particular, we observe novel regimes that cannot be characterized
as either 'niche' or 'neutral' where a multimodal species abundance
distribution is observed. We characterize the transitions between the different
regimes and show how they arise from the underlying kinetics of the species
turnover, extinction and invasion. Our model serves as a minimal null model of
noisy competitive ecological systems, against which more complex models that
include factors such as mutations and environmental noise can be compared.
| [
{
"created": "Thu, 5 May 2022 13:49:25 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Jan 2023 13:07:34 GMT",
"version": "v2"
}
] | 2023-01-16 | [
[
"Leibovich",
"Nava",
""
],
[
"Rothschild",
"Jeremy",
""
],
[
"Goyal",
"Sidhartha",
""
],
[
"Zilman",
"Anton",
""
]
] | Structure, composition and stability of ecological populations are shaped by the inter- and intra-species interactions within these communities. It remains to be fully understood how the interplay of these interactions with other factors, such as immigration, control the structure, diversity and the long term stability of ecological systems in the presence of noise and fluctuations. We address this problem using a minimal model of interacting multi-species ecological communities that incorporates competition, immigration and demographic noise. We find that the complete phase diagram exhibits rich behavior with multiple regimes that go beyond the classical 'niche' and 'neutral' regimes, extending and modifying the 'neutral-like' or 'niche-like' dichotomy. In particular, we observe novel regimes that cannot be characterized as either 'niche' or 'neutral' where a multimodal species abundance distribution is observed. We characterize the transitions between the different regimes and show how they arise from the underlying kinetics of the species turnover, extinction and invasion. Our model serves as a minimal null model of noisy competitive ecological systems, against which more complex models that include factors such as mutations and environmental noise can be compared. |
1909.02695 | Ayumi Kikkawa Dr. | Ayumi Kikkawa | Spectral analysis for gene communities in cancer cells | 15 pages, 7 figures, 1 table | null | null | null | q-bio.MN physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate gene interaction networks in various cancer cells by spectral
analysis of the adjacency matrices. We observe localization of the networks on
hub genes which have extraordinarily many links. The eigenvector centralities
take finite values only on special nodes when the hub degree exceeds a critical
value $d_c \simeq 40$. The degree correlation function shows the disassortative
behavior in the large degrees, and the nodes whose degrees $d \gtrsim 40$ have
tendencies to link to small degree nodes. The communities of the gene networks
centered at the hub genes are extracted by the amount of node degree
discrepancies between linked nodes. We verify the Wigner-Dyson distribution of
the nearest neighbor eigenvalues spacing distribution $P(s)$ in the small
degree discrepancy communities, and the Poisson $P(s)$ in the communities of
large degree discrepancies including the hubs.
| [
{
"created": "Fri, 6 Sep 2019 02:29:10 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Sep 2019 05:55:24 GMT",
"version": "v2"
}
] | 2019-09-25 | [
[
"Kikkawa",
"Ayumi",
""
]
] | We investigate gene interaction networks in various cancer cells by spectral analysis of the adjacency matrices. We observe localization of the networks on hub genes which have extraordinarily many links. The eigenvector centralities take finite values only on special nodes when the hub degree exceeds a critical value $d_c \simeq 40$. The degree correlation function shows the disassortative behavior in the large degrees, and the nodes whose degrees $d \gtrsim 40$ have tendencies to link to small degree nodes. The communities of the gene networks centered at the hub genes are extracted by the amount of node degree discrepancies between linked nodes. We verify the Wigner-Dyson distribution of the nearest neighbor eigenvalues spacing distribution $P(s)$ in the small degree discrepancy communities, and the Poisson $P(s)$ in the communities of large degree discrepancies including the hubs. |
1910.11182 | Christian Benar | Christian-G. B\'enar (INS, AMU), C. Grova, V. Jirsa (INS, AMU), J.
Lina (ETS) | Differences in MEG and EEG power-law scaling explained by a coupling
between spatial coherence and frequency: a simulation study | null | Journal of Computational Neuroscience, Springer Verlag, 2019, 47
(1), pp.31-41 | 10.1007/s10827-019-00721-9 | null | q-bio.NC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electrophysiological signals (electroencephalography, EEG, and
magnetoencephalography, MEG), like many natural processes, exhibit
scale-invariance properties resulting in a power-law (1/f) spectrum.
Interestingly, EEG and MEG differ in their slopes, which could be explained by
several mechanisms, including non-resistive properties of tissues. Our goal in
the present study is to estimate the impact of space/frequency structure of
source signals as a putative mechanism to explain spectral scaling properties
of neuroimaging signals. We performed simulations based on the summed
contribution of cortical patches with different sizes (ranging from 0.4 to
104.2 cm$^2$). Small patches were attributed signals of high frequencies, whereas
large patches were associated with signals of low frequencies, on a logarithmic
scale. The tested parameters included i) the space/frequency structure (range
of patch sizes and frequencies) and ii) the amplitude factor c parametrizing
the spatial scale ratios. We found that the space/frequency structure may cause
differences between EEG and MEG scale-free spectra that are compatible with
real data findings reported in previous studies. We also found that below a
certain spatial scale, there were no more differences between EEG and MEG,
suggesting a limit for the resolution of both methods. Our work provides an
explanation of experimental findings. This does not rule out other mechanisms
for differences between EEG and MEG, but suggests an important role of
spatio-temporal structure of neural dynamics. This can help the analysis and
interpretation of power-law measures in EEG and MEG, and we believe our results
can also impact computational modeling of brain dynamics, where different local
connectivity structures could be used at different frequencies.
| [
{
"created": "Thu, 24 Oct 2019 14:41:02 GMT",
"version": "v1"
}
] | 2019-10-25 | [
[
"Bénar",
"Christian-G.",
"",
"INS, AMU"
],
[
"Grova",
"C.",
"",
"INS, AMU"
],
[
"Jirsa",
"V.",
"",
"INS, AMU"
],
[
"Lina",
"J.",
"",
"ETS"
]
] | Electrophysiological signals (electroencephalography, EEG, and magnetoencephalography, MEG), like many natural processes, exhibit scale-invariance properties resulting in a power-law (1/f) spectrum. Interestingly, EEG and MEG differ in their slopes, which could be explained by several mechanisms, including non-resistive properties of tissues. Our goal in the present study is to estimate the impact of space/frequency structure of source signals as a putative mechanism to explain spectral scaling properties of neuroimaging signals. We performed simulations based on the summed contribution of cortical patches with different sizes (ranging from 0.4 to 104.2 cm$^2$). Small patches were attributed signals of high frequencies, whereas large patches were associated with signals of low frequencies, on a logarithmic scale. The tested parameters included i) the space/frequency structure (range of patch sizes and frequencies) and ii) the amplitude factor c parametrizing the spatial scale ratios. We found that the space/frequency structure may cause differences between EEG and MEG scale-free spectra that are compatible with real data findings reported in previous studies. We also found that below a certain spatial scale, there were no more differences between EEG and MEG, suggesting a limit for the resolution of both methods. Our work provides an explanation of experimental findings. This does not rule out other mechanisms for differences between EEG and MEG, but suggests an important role of spatio-temporal structure of neural dynamics. This can help the analysis and interpretation of power-law measures in EEG and MEG, and we believe our results can also impact computational modeling of brain dynamics, where different local connectivity structures could be used at different frequencies.
2003.05447 | Lin Jia | Lin Jia, Kewen Li, Yu Jiang, Xin Guo, Ting zhao | Prediction and analysis of Coronavirus Disease 2019 | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In December 2019, a novel coronavirus was found in a seafood wholesale market
in Wuhan, China. WHO officially named this coronavirus as COVID-19. Since the
first patient was hospitalized on December 12, 2019, China has reported a total
of 78,824 confirmed COVID-19 cases and 2,788 deaths as of February 28, 2020.
Wuhan's cumulative confirmed cases and deaths accounted for 61.1% and 76.5% of
the whole China mainland, making it the priority center for epidemic
prevention and control. Meanwhile, 51 countries and regions outside China have
reported 4,879 confirmed cases and 79 deaths as of February 28, 2020. COVID-19
epidemic does great harm to people's daily life and country's economic
development. This paper adopts three kinds of mathematical models, i.e.,
Logistic model, Bertalanffy model and Gompertz model. The epidemic trends of
SARS were first fitted and analyzed in order to prove the validity of the
existing mathematical models. The results were then used to fit and analyze the
situation of COVID-19. The prediction results of three different mathematical
models are different for different parameters and in different regions. In
general, the fitting effect of Logistic model may be the best among the three
models studied in this paper, while the fitting effect of Gompertz model may be
better than Bertalanffy model. According to the current trend, based on the
three models, the total number of people expected to be infected is 49852-57447
in Wuhan, 12972-13405 in non-Hubei areas and 80261-85140 in China respectively.
The total death toll is 2502-5108 in Wuhan, 107-125 in Non-Hubei areas and
3150-6286 in China respectively. COVID-19 will probably be over in late-April,
2020 in Wuhan and before late-March, 2020 in other areas respectively.
| [
{
"created": "Wed, 11 Mar 2020 09:23:10 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Mar 2020 23:40:07 GMT",
"version": "v2"
}
] | 2020-03-18 | [
[
"Jia",
"Lin",
""
],
[
"Li",
"Kewen",
""
],
[
"Jiang",
"Yu",
""
],
[
"Guo",
"Xin",
""
],
[
"zhao",
"Ting",
""
]
] | In December 2019, a novel coronavirus was found in a seafood wholesale market in Wuhan, China. WHO officially named this coronavirus as COVID-19. Since the first patient was hospitalized on December 12, 2019, China has reported a total of 78,824 confirmed COVID-19 cases and 2,788 deaths as of February 28, 2020. Wuhan's cumulative confirmed cases and deaths accounted for 61.1% and 76.5% of the whole China mainland, making it the priority center for epidemic prevention and control. Meanwhile, 51 countries and regions outside China have reported 4,879 confirmed cases and 79 deaths as of February 28, 2020. COVID-19 epidemic does great harm to people's daily life and country's economic development. This paper adopts three kinds of mathematical models, i.e., Logistic model, Bertalanffy model and Gompertz model. The epidemic trends of SARS were first fitted and analyzed in order to prove the validity of the existing mathematical models. The results were then used to fit and analyze the situation of COVID-19. The prediction results of three different mathematical models are different for different parameters and in different regions. In general, the fitting effect of Logistic model may be the best among the three models studied in this paper, while the fitting effect of Gompertz model may be better than Bertalanffy model. According to the current trend, based on the three models, the total number of people expected to be infected is 49852-57447 in Wuhan, 12972-13405 in non-Hubei areas and 80261-85140 in China respectively. The total death toll is 2502-5108 in Wuhan, 107-125 in Non-Hubei areas and 3150-6286 in China respectively. COVID-19 will probably be over in late-April, 2020 in Wuhan and before late-March, 2020 in other areas respectively.
2211.07105 | Derek Aguiar | Marjan Hosseini, Devin McConnell, Derek Aguiar | Bayesian Reconstruction and Differential Testing of Excised mRNA | null | null | null | null | q-bio.QM cs.LG q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterizing the differential excision of mRNA is critical for
understanding the functional complexity of a cell or tissue, from normal
developmental processes to disease pathogenesis. Most transcript reconstruction
methods infer full-length transcripts from high-throughput sequencing data.
However, this is a challenging task due to incomplete annotations and the
differential expression of transcripts across cell-types, tissues, and
experimental conditions. Several recent methods circumvent these difficulties
by considering local splicing events, but these methods lose transcript-level
splicing information and may conflate transcripts. We develop the first
probabilistic model that reconciles the transcript and local splicing
perspectives. First, we formalize the sequence of mRNA excisions (SME)
reconstruction problem, which aims to assemble variable-length sequences of
mRNA excisions from RNA-sequencing data. We then present a novel hierarchical
Bayesian admixture model for the Reconstruction of Excised mRNA (BREM). BREM
interpolates between local splicing events and full-length transcripts and thus
focuses only on SMEs that have high posterior probability. We develop posterior
inference algorithms based on Gibbs sampling and local search of independent
sets and characterize differential SME usage using generalized linear models
based on converged BREM model parameters. We show that BREM achieves higher F1
score for reconstruction tasks and improved accuracy and sensitivity in
differential splicing when compared with four state-of-the-art transcript and
local splicing methods on simulated data. Lastly, we evaluate BREM on both bulk
and scRNA sequencing data based on transcript reconstruction, novelty of
transcripts produced, model sensitivity to hyperparameters, and a functional
analysis of differentially expressed SMEs, demonstrating that BREM captures
relevant biological signal.
| [
{
"created": "Mon, 14 Nov 2022 04:46:33 GMT",
"version": "v1"
}
] | 2022-11-15 | [
[
"Hosseini",
"Marjan",
""
],
[
"McConnell",
"Devin",
""
],
[
"Aguiar",
"Derek",
""
]
] | Characterizing the differential excision of mRNA is critical for understanding the functional complexity of a cell or tissue, from normal developmental processes to disease pathogenesis. Most transcript reconstruction methods infer full-length transcripts from high-throughput sequencing data. However, this is a challenging task due to incomplete annotations and the differential expression of transcripts across cell-types, tissues, and experimental conditions. Several recent methods circumvent these difficulties by considering local splicing events, but these methods lose transcript-level splicing information and may conflate transcripts. We develop the first probabilistic model that reconciles the transcript and local splicing perspectives. First, we formalize the sequence of mRNA excisions (SME) reconstruction problem, which aims to assemble variable-length sequences of mRNA excisions from RNA-sequencing data. We then present a novel hierarchical Bayesian admixture model for the Reconstruction of Excised mRNA (BREM). BREM interpolates between local splicing events and full-length transcripts and thus focuses only on SMEs that have high posterior probability. We develop posterior inference algorithms based on Gibbs sampling and local search of independent sets and characterize differential SME usage using generalized linear models based on converged BREM model parameters. We show that BREM achieves higher F1 score for reconstruction tasks and improved accuracy and sensitivity in differential splicing when compared with four state-of-the-art transcript and local splicing methods on simulated data. Lastly, we evaluate BREM on both bulk and scRNA sequencing data based on transcript reconstruction, novelty of transcripts produced, model sensitivity to hyperparameters, and a functional analysis of differentially expressed SMEs, demonstrating that BREM captures relevant biological signal. |
1311.4921 | Joseph Heled | Joseph Heled and Alexei J.Drummond | Calibrated birth-death phylogenetic time-tree priors for Bayesian
inference | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Here we introduce a general class of multiple calibration birth-death tree
priors for use in Bayesian phylogenetic inference. All tree priors in this
class separate ancestral node heights into a set of "calibrated nodes" and
"uncalibrated nodes" such that the marginal distribution of the calibrated
nodes is user-specified whereas the density ratio of the birth-death prior is
retained for trees with equal values for the calibrated nodes. We describe two
formulations, one in which the calibration information informs the prior on
ranked tree topologies, through the (conditional) prior, and the other which
factorizes the prior on divergence times and ranked topologies, thus allowing
uniform, or any arbitrary prior distribution on ranked topologies. While the
first of these formulations has some attractive properties the algorithm we
present for computing its prior density is computationally intensive. On the
other hand, the second formulation is always computationally efficient. We
demonstrate the utility of the new class of multiple-calibration tree priors
using both small simulations and a real-world analysis and compare the results
to existing schemes. The two new calibrated tree priors described in this paper
offer greater flexibility and control of prior specification in calibrated
time-tree inference and divergence time dating, and will remove the need for
indirect approaches to the assessment of the combined effect of calibration
densities and tree process priors in Bayesian phylogenetic inference.
| [
{
"created": "Tue, 19 Nov 2013 23:54:16 GMT",
"version": "v1"
}
] | 2013-11-21 | [
[
"Heled",
"Joseph",
""
],
[
"Drummond",
"Alexei J.",
""
]
] | Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. While the first of these formulations has some attractive properties the algorithm we present for computing its prior density is computationally intensive. On the other hand, the second formulation is always computationally efficient. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this paper offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree process priors in Bayesian phylogenetic inference. |
q-bio/0609030 | Alain Destexhe | Martin Pospischil, Zuzanna Piwkowska, Michelle Rudolph, Thierry Bal
and Alain Destexhe | Calculating event-triggered average synaptic conductances from the
membrane potential | 10 pages, 8 figures; final version published in the Journal of
Neurophysiology | Journal of Neurophysiology, 97: 2544--2552 (2007) | null | null | q-bio.NC | null | The optimal patterns of synaptic conductances for spike generation in central
neurons is a subject of considerable interest. Ideally, such conductance time
courses should be extracted from membrane potential (Vm) activity, but this is
difficult because the nonlinear contribution of conductances to the Vm renders
their estimation from the membrane equation extremely sensitive. We outline
here a solution to this problem based on a discretization of the time axis.
This procedure can extract the time course of excitatory and inhibitory
conductances solely from the analysis of Vm activity. We test this method by
calculating spike-triggered averages of synaptic conductances using numerical
simulations of the integrate-and-fire model subject to colored conductance
noise. The procedure was also tested successfully in biological cortical
neurons using conductance noise injected with dynamic-clamp. This method should
allow the extraction of synaptic conductances from Vm recordings in vivo.
| [
{
"created": "Wed, 20 Sep 2006 22:19:54 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Nov 2006 14:07:22 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Mar 2007 13:18:16 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Pospischil",
"Martin",
""
],
[
"Piwkowska",
"Zuzanna",
""
],
[
"Rudolph",
"Michelle",
""
],
[
"Bal",
"Thierry",
""
],
[
"Destexhe",
"Alain",
""
]
] | The optimal patterns of synaptic conductances for spike generation in central neurons are a subject of considerable interest. Ideally, such conductance time courses should be extracted from membrane potential (Vm) activity, but this is difficult because the nonlinear contribution of conductances to the Vm renders their estimation from the membrane equation extremely sensitive. We outline here a solution to this problem based on a discretization of the time axis. This procedure can extract the time course of excitatory and inhibitory conductances solely from the analysis of Vm activity. We test this method by calculating spike-triggered averages of synaptic conductances using numerical simulations of the integrate-and-fire model subject to colored conductance noise. The procedure was also tested successfully in biological cortical neurons using conductance noise injected with dynamic-clamp. This method should allow the extraction of synaptic conductances from Vm recordings in vivo. |
2003.00110 | Serghei Mangul | Mohammed Alser, Jeremy Rotman, Kodi Taraszka, Huwenbo Shi, Pelin Icer
Baykal, Harry Taegyun Yang, Victor Xue, Sergey Knyazev, Benjamin D. Singer,
Brunilda Balliu, David Koslicki, Pavel Skums, Alex Zelikovsky, Can Alkan,
Onur Mutlu, and Serghei Mangul | Technology dictates algorithms: Recent developments in read alignment | null | Genome Biol . Aug 26;22(1):249, 2021 | 10.1186/s13059-021-02443-7 | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Massively parallel sequencing techniques have revolutionized biological and
medical sciences by providing unprecedented insight into the genomes of humans,
animals, and microbes. Modern sequencing platforms generate enormous amounts of
genomic data in the form of nucleotide sequences or reads. Aligning reads onto
reference genomes enables the identification of individual-specific genetic
variants and is an essential step of the majority of genomic analysis
pipelines. Aligned reads are essential for answering important biological
questions, such as detecting mutations driving various human diseases and
complex traits as well as identifying species present in metagenomic samples.
The read alignment problem is extremely challenging due to the large size of
analyzed datasets and numerous technological limitations of sequencing
platforms, and researchers have developed novel bioinformatics algorithms to
tackle these difficulties. Importantly, computational algorithms have evolved
and diversified in accordance with technological advances, leading to today's
diverse array of bioinformatics tools. Our review provides a survey of
algorithmic foundations and methodologies across 107 alignment methods
published between 1988 and 2020, for both short and long reads. We provide
rigorous experimental evaluation of 11 read aligners to demonstrate the effect
of these underlying algorithms on speed and efficiency of read aligners. We
separately discuss how longer read lengths produce unique advantages and
limitations to read alignment techniques. We also discuss how general alignment
algorithms have been tailored to the specific needs of various domains in
biology, including whole transcriptome, adaptive immune repertoire, and human
microbiome studies.
| [
{
"created": "Fri, 28 Feb 2020 23:15:29 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Apr 2020 19:28:22 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Jul 2020 22:26:29 GMT",
"version": "v3"
}
] | 2023-11-21 | [
[
"Alser",
"Mohammed",
""
],
[
"Rotman",
"Jeremy",
""
],
[
"Taraszka",
"Kodi",
""
],
[
"Shi",
"Huwenbo",
""
],
[
"Baykal",
"Pelin Icer",
""
],
[
"Yang",
"Harry Taegyun",
""
],
[
"Xue",
"Victor",
""
],
[
"Knyazev",
"Sergey",
""
],
[
"Singer",
"Benjamin D.",
""
],
[
"Balliu",
"Brunilda",
""
],
[
"Koslicki",
"David",
""
],
[
"Skums",
"Pavel",
""
],
[
"Zelikovsky",
"Alex",
""
],
[
"Alkan",
"Can",
""
],
[
"Mutlu",
"Onur",
""
],
[
"Mangul",
"Serghei",
""
]
] | Massively parallel sequencing techniques have revolutionized biological and medical sciences by providing unprecedented insight into the genomes of humans, animals, and microbes. Modern sequencing platforms generate enormous amounts of genomic data in the form of nucleotide sequences or reads. Aligning reads onto reference genomes enables the identification of individual-specific genetic variants and is an essential step of the majority of genomic analysis pipelines. Aligned reads are essential for answering important biological questions, such as detecting mutations driving various human diseases and complex traits as well as identifying species present in metagenomic samples. The read alignment problem is extremely challenging due to the large size of analyzed datasets and numerous technological limitations of sequencing platforms, and researchers have developed novel bioinformatics algorithms to tackle these difficulties. Importantly, computational algorithms have evolved and diversified in accordance with technological advances, leading to today's diverse array of bioinformatics tools. Our review provides a survey of algorithmic foundations and methodologies across 107 alignment methods published between 1988 and 2020, for both short and long reads. We provide rigorous experimental evaluation of 11 read aligners to demonstrate the effect of these underlying algorithms on speed and efficiency of read aligners. We separately discuss how longer read lengths produce unique advantages and limitations to read alignment techniques. We also discuss how general alignment algorithms have been tailored to the specific needs of various domains in biology, including whole transcriptome, adaptive immune repertoire, and human microbiome studies. |
2106.05538 | William Winlow Professor | William Winlow and Andrew Simon Johnson | Nerve Impulses Have Three Interdependent Functions: Communication,
Modulation And Computation | Keywords: Nerve impulse, Physiological Action potential, Soliton,
Action potential pulse, Computational action potential. 10 pages, 4 figures, 1
table | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Comprehending the nature of action potentials is fundamental to our
understanding of the functioning of nervous systems in general. Here we
consider their evolution and describe their functions of communication,
modulation and computation within nervous systems. The ionic mechanisms
underlying action potentials in the squid giant axon were first described by
Hodgkin and Huxley in 1952 and their findings have formed our orthodox view of
how the physiological action potential functions. However, substantial evidence
has now accumulated to show that the action potential is accompanied by a
synchronized coupled soliton pressure pulse in the cell membrane, the action
potential pulse (APPulse). Here we explore the interactions between the soliton
and the ionic mechanisms known to be associated with the action potential.
Computational models of the action potential usually describe it as a binary
event, but we suggest that it is a quantum ternary event known as the
computational action potential (CAP), whose temporal fixed point is threshold,
rather than the more plastic action potential peak used in other models. The
CAP accompanies the APPulse and the physiological action potential. Therefore,
we conclude that nerve impulses appear to be an ensemble of three inseparable,
interdependent, concurrent states: the physiological action potential, the
APPulse and the CAP.
| [
{
"created": "Thu, 10 Jun 2021 06:59:06 GMT",
"version": "v1"
}
] | 2021-06-11 | [
[
"Winlow",
"William",
""
],
[
"Johnson",
"Andrew Simon",
""
]
] | Comprehending the nature of action potentials is fundamental to our understanding of the functioning of nervous systems in general. Here we consider their evolution and describe their functions of communication, modulation and computation within nervous systems. The ionic mechanisms underlying action potentials in the squid giant axon were first described by Hodgkin and Huxley in 1952 and their findings have formed our orthodox view of how the physiological action potential functions. However, substantial evidence has now accumulated to show that the action potential is accompanied by a synchronized coupled soliton pressure pulse in the cell membrane, the action potential pulse (APPulse). Here we explore the interactions between the soliton and the ionic mechanisms known to be associated with the action potential. Computational models of the action potential usually describe it as a binary event, but we suggest that it is a quantum ternary event known as the computational action potential (CAP), whose temporal fixed point is threshold, rather than the more plastic action potential peak used in other models. The CAP accompanies the APPulse and the physiological action potential. Therefore, we conclude that nerve impulses appear to be an ensemble of three inseparable, interdependent, concurrent states: the physiological action potential, the APPulse and the CAP. |
1503.03258 | Pablo Villegas G\'ongora | Pablo Villegas, Jorge Hidalgo, Paolo Moretti, Miguel A. Mu\~noz | Complex synchronization patterns in the human connectome network | 12pages, 6 Figures | Proceedings of ECCS 2014: European Conference on Complex Systems
(2016) pp.69-80 | 10.1007/978-3-319-29228-1 | null | q-bio.NC cond-mat.dis-nn nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major challenge in neuroscience is posed by the need for relating the
emerging dynamical features of brain activity with the underlying modular
structure of neural connections, hierarchically organized throughout several
scales. The spontaneous emergence of coherence and synchronization across such
scales is crucial to neural function, while its anomalies often relate to
pathological conditions. Here we provide a numerical study of synchronization
dynamics in the human connectome network. Our purpose is to provide a detailed
characterization of the recently uncovered broad dynamic regime, interposed
between order and disorder, which stems from the hierarchical modular
organization of the human connectome. In this regime -similar in essence to a
Griffiths phase- synchronization dynamics are trapped within metastable
attractors of local coherence. Here we explore the role of noise, as an
effective description of external perturbations, and discuss how its presence
accounts for the ability of the system to escape intermittently from such
attractors and explore complex dynamic repertoires of locally coherent states,
in analogy with experimentally recorded patterns of cerebral activity.
| [
{
"created": "Wed, 11 Mar 2015 10:27:21 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Mar 2016 12:11:39 GMT",
"version": "v2"
}
] | 2016-06-03 | [
[
"Villegas",
"Pablo",
""
],
[
"Hidalgo",
"Jorge",
""
],
[
"Moretti",
"Paolo",
""
],
[
"Muñoz",
"Miguel A.",
""
]
] | A major challenge in neuroscience is posed by the need for relating the emerging dynamical features of brain activity with the underlying modular structure of neural connections, hierarchically organized throughout several scales. The spontaneous emergence of coherence and synchronization across such scales is crucial to neural function, while its anomalies often relate to pathological conditions. Here we provide a numerical study of synchronization dynamics in the human connectome network. Our purpose is to provide a detailed characterization of the recently uncovered broad dynamic regime, interposed between order and disorder, which stems from the hierarchical modular organization of the human connectome. In this regime -similar in essence to a Griffiths phase- synchronization dynamics are trapped within metastable attractors of local coherence. Here we explore the role of noise, as an effective description of external perturbations, and discuss how its presence accounts for the ability of the system to escape intermittently from such attractors and explore complex dynamic repertoires of locally coherent states, in analogy with experimentally recorded patterns of cerebral activity. |
1405.7965 | Viola Priesemann | Juhan Aru, Jaan Aru, Viola Priesemann, Michael Wibral, Luiz Lana,
Gordon Pipa, Wolf Singer, Raul Vicente | Untangling cross-frequency coupling in neuroscience | 47 pages, 12 figures, including supplementary material | null | 10.1016/j.conb.2014.08.002 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-frequency coupling (CFC) has been proposed to coordinate neural
dynamics across spatial and temporal scales. Despite its potential relevance
for understanding healthy and pathological brain function, the standard CFC
analysis and physiological interpretation come with fundamental problems. For
example, apparent CFC can appear because of spectral correlations due to common
non-stationarities that may arise in the total absence of interactions between
neural frequency components. To provide a road map towards an improved
mechanistic understanding of CFC, we organize the available and potential novel
statistical/modeling approaches according to their biophysical
interpretability. While we do not provide solutions for all the problems
described, we provide a list of practical recommendations to avoid common
errors and to enhance the interpretability of CFC analysis.
| [
{
"created": "Fri, 30 May 2014 19:34:37 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Aug 2014 12:04:47 GMT",
"version": "v2"
}
] | 2014-10-08 | [
[
"Aru",
"Juhan",
""
],
[
"Aru",
"Jaan",
""
],
[
"Priesemann",
"Viola",
""
],
[
"Wibral",
"Michael",
""
],
[
"Lana",
"Luiz",
""
],
[
"Pipa",
"Gordon",
""
],
[
"Singer",
"Wolf",
""
],
[
"Vicente",
"Raul",
""
]
] | Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can appear because of spectral correlations due to common non-stationarities that may arise in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis. |
1805.12313 | Yunlong Liu | Yunlong Liu and L. Mario Amzel | Conformation Clustering of Long MD Protein Dynamics with an Adversarial
Autoencoder | null | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in specialized computer hardware have greatly accelerated
atomic level Molecular Dynamics (MD) simulations. A single GPU-attached cluster
is capable of producing microsecond-length trajectories in reasonable amounts
of time. Multiple protein states and a large number of microstates associated
with folding and with the function of the protein can be observed as
conformations sampled in the trajectories. Clustering those conformations,
however, is needed for identifying protein states, evaluating transition rates
and understanding protein behavior. In this paper, we propose a novel
data-driven generative conformation clustering method based on the adversarial
autoencoder (AAE) and provide the associated software implementation Cong. The
method was tested using a 208-microsecond MD simulation of the fast-folding
peptide Trp-Cage (20 residues) obtained from the D.E. Shaw Research Group. The
proposed clustering algorithm identifies many of the salient features of the
folding process by grouping a large number of conformations that share common
features not easily identifiable in the trajectory.
| [
{
"created": "Thu, 31 May 2018 03:46:27 GMT",
"version": "v1"
}
] | 2018-06-01 | [
[
"Liu",
"Yunlong",
""
],
[
"Amzel",
"L. Mario",
""
]
] | Recent developments in specialized computer hardware have greatly accelerated atomic level Molecular Dynamics (MD) simulations. A single GPU-attached cluster is capable of producing microsecond-length trajectories in reasonable amounts of time. Multiple protein states and a large number of microstates associated with folding and with the function of the protein can be observed as conformations sampled in the trajectories. Clustering those conformations, however, is needed for identifying protein states, evaluating transition rates and understanding protein behavior. In this paper, we propose a novel data-driven generative conformation clustering method based on the adversarial autoencoder (AAE) and provide the associated software implementation Cong. The method was tested using a 208-microsecond MD simulation of the fast-folding peptide Trp-Cage (20 residues) obtained from the D.E. Shaw Research Group. The proposed clustering algorithm identifies many of the salient features of the folding process by grouping a large number of conformations that share common features not easily identifiable in the trajectory. |
1802.09169 | Tan Vu Van | Tan Van Vu, Yoshihiko Hasegawa | An algebraic method to calculate parameter regions for constrained
steady-state distribution in stochastic reaction networks | 17 pages, 4 figures | Chaos 29, 023123 (2019) | 10.1063/1.5047579 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Steady state is an essential concept in reaction networks. Its stability
reflects fundamental characteristics of several biological phenomena such as
cellular signal transduction and gene expression. Because biochemical reactions
occur at the cellular level, they are affected by unavoidable fluctuations.
Although several methods have been proposed to detect and analyze the stability
of steady states for deterministic models, these methods cannot be applied to
stochastic reaction networks. In this paper, we propose an algorithm based on
algebraic computations to calculate parameter regions for constrained
steady-state distribution of stochastic reaction networks, in which the means
and variances satisfy some given inequality constraints. To evaluate our
proposed method, we perform computer simulations for three typical chemical
reactions and demonstrate that the results obtained with our method are
consistent with the simulation results.
| [
{
"created": "Mon, 26 Feb 2018 05:36:25 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Mar 2018 11:58:54 GMT",
"version": "v2"
},
{
"created": "Fri, 6 Jul 2018 06:39:54 GMT",
"version": "v3"
},
{
"created": "Mon, 9 Jul 2018 10:26:13 GMT",
"version": "v4"
}
] | 2019-04-19 | [
[
"Van Vu",
"Tan",
""
],
[
"Hasegawa",
"Yoshihiko",
""
]
] | Steady state is an essential concept in reaction networks. Its stability reflects fundamental characteristics of several biological phenomena such as cellular signal transduction and gene expression. Because biochemical reactions occur at the cellular level, they are affected by unavoidable fluctuations. Although several methods have been proposed to detect and analyze the stability of steady states for deterministic models, these methods cannot be applied to stochastic reaction networks. In this paper, we propose an algorithm based on algebraic computations to calculate parameter regions for constrained steady-state distribution of stochastic reaction networks, in which the means and variances satisfy some given inequality constraints. To evaluate our proposed method, we perform computer simulations for three typical chemical reactions and demonstrate that the results obtained with our method are consistent with the simulation results. |
1508.00165 | Luca Mazzucato | Luca Mazzucato, Alfredo Fontanini, and Giancarlo La Camera | Dynamics of multi-stable states during ongoing and evoked cortical
activity | 34 pages, 11 figures; v2: typos in Methods section corrected; v3:
typos corrected | J Neurosci. 2015 May 27;35(21):8214-31 | 10.1523/JNEUROSCI.4819-14.2015 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single trial analyses of ensemble activity in alert animals demonstrate that
cortical circuit dynamics evolve through temporal sequences of metastable
states. Metastability has been studied for its potential role in sensory
coding, memory and decision-making. Yet, very little is known about the network
mechanisms responsible for its genesis. It is often assumed that the onset of
state sequences is triggered by an external stimulus. Here we show that state
sequences can be observed also in the absence of overt sensory stimulation.
Analysis of multielectrode recordings from the gustatory cortex of alert rats
revealed ongoing sequences of states, where single neurons spontaneously attain
several firing rates across different states. This single neuron
multi-stability represents a challenge to existing spiking network models,
where typically each neuron is at most bi-stable. We present a recurrent
spiking network model that accounts for both the spontaneous generation of
state sequences and the multi-stability in single neuron firing rates. Each
state results from the activation of neural clusters with potentiated
intra-cluster connections, with the firing rate in each cluster depending on
the number of active clusters. Simulations show that the model's ensemble
activity hops among the different states, reproducing the ongoing dynamics
observed in the data. When probed with external stimuli, the model predicts the
quenching of single neuron multi-stability into bi-stability and the reduction
of trial-by-trial variability. Both predictions were confirmed in the data.
Altogether, these results provide a theoretical framework that captures both
ongoing and evoked network dynamics in a single mechanistic model.
| [
{
"created": "Sat, 1 Aug 2015 20:23:12 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Feb 2016 04:48:05 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Mar 2016 14:41:18 GMT",
"version": "v3"
}
] | 2016-03-23 | [
[
"Mazzucato",
"Luca",
""
],
[
"Fontanini",
"Alfredo",
""
],
[
"La Camera",
"Giancarlo",
""
]
] | Single trial analyses of ensemble activity in alert animals demonstrate that cortical circuit dynamics evolve through temporal sequences of metastable states. Metastability has been studied for its potential role in sensory coding, memory and decision-making. Yet, very little is known about the network mechanisms responsible for its genesis. It is often assumed that the onset of state sequences is triggered by an external stimulus. Here we show that state sequences can be observed also in the absence of overt sensory stimulation. Analysis of multielectrode recordings from the gustatory cortex of alert rats revealed ongoing sequences of states, where single neurons spontaneously attain several firing rates across different states. This single neuron multi-stability represents a challenge to existing spiking network models, where typically each neuron is at most bi-stable. We present a recurrent spiking network model that accounts for both the spontaneous generation of state sequences and the multi-stability in single neuron firing rates. Each state results from the activation of neural clusters with potentiated intra-cluster connections, with the firing rate in each cluster depending on the number of active clusters. Simulations show that the model's ensemble activity hops among the different states, reproducing the ongoing dynamics observed in the data. When probed with external stimuli, the model predicts the quenching of single neuron multi-stability into bi-stability and the reduction of trial-by-trial variability. Both predictions were confirmed in the data. Altogether, these results provide a theoretical framework that captures both ongoing and evoked network dynamics in a single mechanistic model. |
1904.03653 | Eugene Shakhnovich | Eugene Serebryany, Rostam Razban and Eugene I Shakhnovich | Conformational catalysis of cataract-associated aggregation by
interacting intermediates in a human eye lens crystallin | 26 pages, 6 figures+Supplementary | null | null | null | q-bio.BM q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Most known proteins in nature consist of multiple domains. Interactions
between domains may lead to unexpected folding and misfolding phenomena. This
study of human {\gamma}D-crystallin, a two-domain protein in the eye lens,
revealed one such surprise: conformational catalysis of misfolding via
intermolecular domain interface ''stealing''. An intermolecular interface
between the more stable domains outcompetes the native intramolecular domain
interface. Loss of the native interface in turn promotes misfolding and
subsequent aggregation, especially in cataract-related {\gamma}D-crystallin
variants. This phenomenon is likely a contributing factor in the development of
cataract disease, the leading worldwide cause of blindness. However, interface
stealing likely occurs in many proteins composed of two or more interacting
domains.
| [
{
"created": "Sun, 7 Apr 2019 13:53:02 GMT",
"version": "v1"
}
] | 2019-04-09 | [
[
"Serebryany",
"Eugene",
""
],
[
"Razban",
"Rostam",
""
],
[
"Shakhnovich",
"Eugene I",
""
]
] | Most known proteins in nature consist of multiple domains. Interactions between domains may lead to unexpected folding and misfolding phenomena. This study of human {\gamma}D-crystallin, a two-domain protein in the eye lens, revealed one such surprise: conformational catalysis of misfolding via intermolecular domain interface ''stealing''. An intermolecular interface between the more stable domains outcompetes the native intramolecular domain interface. Loss of the native interface in turn promotes misfolding and subsequent aggregation, especially in cataract-related {\gamma}D-crystallin variants. This phenomenon is likely a contributing factor in the development of cataract disease, the leading worldwide cause of blindness. However, interface stealing likely occurs in many proteins composed of two or more interacting domains. |
2402.00312 | Trond Arne Undheim | Trond Arne Undheim | The whack-a-mole governance challenge for AI-enabled synthetic biology:
literature review and emerging frameworks | null | Front. Bioeng. Biotechnol. 12:1359768. | 10.3389/fbioe.2024.1359768 | null | q-bio.OT cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI-enabled synthetic biology has tremendous potential but also significantly
increases biorisks and brings about a new set of dual use concerns. The picture
is complicated given the vast innovations envisioned to emerge by combining
emerging technologies, as AI-enabled synthetic biology potentially scales up
bioengineering into industrial biomanufacturing. However, the literature review
indicates that goals such as maintaining a reasonable scope for innovation, or
more ambitiously to foster a huge bioeconomy don't necessarily contrast with
biosafety, but need to go hand in hand. This paper presents a literature review
of the issues and describes emerging frameworks for policy and practice that
traverse the options of command-and-control, stewardship, bottom-up, and
laissez-faire governance. How to achieve early warning systems that enable
prevention and mitigation of future AI-enabled biohazards from the lab, from
deliberate misuse, or from the public realm, will constantly need to evolve,
and adaptive, interactive approaches should emerge. Although biorisk is subject
to an established governance regime, and scientists generally adhere to
biosafety protocols, even experimental, but legitimate use by scientists could
lead to unexpected developments. Recent advances in chatbots enabled by
generative AI have revived fears that advanced biological insight can more
easily get into the hands of malignant individuals or organizations. Given
these sets of issues, society needs to rethink how AI-enabled synthetic biology
should be governed. The suggested way to visualize the challenge at hand is
whack-a-mole governance, although the emerging solutions are perhaps not so
different either.
| [
{
"created": "Thu, 1 Feb 2024 03:53:13 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Undheim",
"Trond Arne",
""
]
] | AI-enabled synthetic biology has tremendous potential but also significantly increases biorisks and brings about a new set of dual use concerns. The picture is complicated given the vast innovations envisioned to emerge by combining emerging technologies, as AI-enabled synthetic biology potentially scales up bioengineering into industrial biomanufacturing. However, the literature review indicates that goals such as maintaining a reasonable scope for innovation, or more ambitiously to foster a huge bioeconomy don't necessarily contrast with biosafety, but need to go hand in hand. This paper presents a literature review of the issues and describes emerging frameworks for policy and practice that traverse the options of command-and-control, stewardship, bottom-up, and laissez-faire governance. How to achieve early warning systems that enable prevention and mitigation of future AI-enabled biohazards from the lab, from deliberate misuse, or from the public realm, will constantly need to evolve, and adaptive, interactive approaches should emerge. Although biorisk is subject to an established governance regime, and scientists generally adhere to biosafety protocols, even experimental, but legitimate use by scientists could lead to unexpected developments. Recent advances in chatbots enabled by generative AI have revived fears that advanced biological insight can more easily get into the hands of malignant individuals or organizations. Given these sets of issues, society needs to rethink how AI-enabled synthetic biology should be governed. The suggested way to visualize the challenge at hand is whack-a-mole governance, although the emerging solutions are perhaps not so different either. |
2211.10182 | Ruben Boot | R.C. Boot, A. Roscani, L. van Buren, S. Maity, G.H. Koenderink and
P.E. Boukany | High-throughput mechanophenotyping of multicellular spheroids using a
microfluidic micropipette aspiration chip | null | null | null | null | q-bio.QM q-bio.TO | http://creativecommons.org/licenses/by-sa/4.0/ | Cell spheroids are in vitro multicellular model systems that mimic the
crowded micro-environment of biological tissues. Their mechanical
characterization can provide valuable insights in how single-cell mechanics and
cell-cell interactions control tissue mechanics and self-organization. However,
most measurement techniques are limited to probing one spheroid at a time,
require specialized equipment and are difficult to handle. Here, we developed a
microfluidic chip that follows the concept of glass capillary micropipette
aspiration in order to quantify the viscoelastic behavior of spheroids in an
easy-to-handle, high-throughput manner. Spheroids are loaded in parallel
pockets via a gentle flow, after which spheroid tongues are aspirated into
adjacent aspiration channels using hydrostatic pressure. After each experiment,
the spheroids are easily removed from the chip by reversing the pressure and
new spheroids can be injected. The presence of multiple pockets with a uniform
aspiration pressure, combined with the ease to conduct successive experiments,
allows for a high throughput of tens of spheroids per day. We demonstrate that
the chip provides accurate deformation data when working at different
aspiration pressures. Lastly, we measure the viscoelastic properties of
spheroids made of different cell lines and show how these are consistent with
previous studies using established experimental techniques. In summary, our
chip provides a high-throughput way to measure the viscoelastic deformation
behavior of cell spheroids, in order to mechanophenotype different tissue types
and examine the link between cell-intrinsic properties and overall tissue
behavior.
| [
{
"created": "Fri, 18 Nov 2022 12:00:50 GMT",
"version": "v1"
}
] | 2022-11-21 | [
[
"Boot",
"R. C.",
""
],
[
"Roscani",
"A.",
""
],
[
"van Buren",
"L.",
""
],
[
"Maity",
"S.",
""
],
[
"Koenderink",
"G. H.",
""
],
[
"Boukany",
"P. E.",
""
]
] | Cell spheroids are in vitro multicellular model systems that mimic the crowded micro-environment of biological tissues. Their mechanical characterization can provide valuable insights in how single-cell mechanics and cell-cell interactions control tissue mechanics and self-organization. However, most measurement techniques are limited to probing one spheroid at a time, require specialized equipment and are difficult to handle. Here, we developed a microfluidic chip that follows the concept of glass capillary micropipette aspiration in order to quantify the viscoelastic behavior of spheroids in an easy-to-handle, high-throughput manner. Spheroids are loaded in parallel pockets via a gentle flow, after which spheroid tongues are aspirated into adjacent aspiration channels using hydrostatic pressure. After each experiment, the spheroids are easily removed from the chip by reversing the pressure and new spheroids can be injected. The presence of multiple pockets with a uniform aspiration pressure, combined with the ease to conduct successive experiments, allows for a high throughput of tens of spheroids per day. We demonstrate that the chip provides accurate deformation data when working at different aspiration pressures. Lastly, we measure the viscoelastic properties of spheroids made of different cell lines and show how these are consistent with previous studies using established experimental techniques. In summary, our chip provides a high-throughput way to measure the viscoelastic deformation behavior of cell spheroids, in order to mechanophenotype different tissue types and examine the link between cell-intrinsic properties and overall tissue behavior. |
2005.07108 | Guido Tiana | M. Crippa, Y. Zhan, G. Tiana | Effective Model of Loop Extrusion Predicts Chromosomal Domains | null | Phys. Rev. E 102, 032414 (2020) | 10.1103/PhysRevE.102.032414 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An active loop-extrusion mechanism is regarded as the main
out--of--equilibrium mechanism responsible for the structuring of
megabase-sized domains in chromosomes. We developed a model to study the
dynamics of the chromosome fibre by solving the kinetic equations associated
with the motion of the extruder. By averaging out the position of the extruder
along the chain, we build an effective equilibrium model capable of reproducing
experimental contact maps based solely on the positions of extrusion--blocking
proteins. We assessed the quality of the effective model using numerical
simulations of chromosomal segments and comparing the results with
explicit-extruder models and experimental data.
| [
{
"created": "Thu, 14 May 2020 16:21:04 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Aug 2020 08:49:34 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Sep 2020 10:41:55 GMT",
"version": "v3"
}
] | 2020-09-30 | [
[
"Crippa",
"M.",
""
],
[
"Zhan",
"Y.",
""
],
[
"Tiana",
"G.",
""
]
] | An active loop-extrusion mechanism is regarded as the main out--of--equilibrium mechanism responsible for the structuring of megabase-sized domains in chromosomes. We developed a model to study the dynamics of the chromosome fibre by solving the kinetic equations associated with the motion of the extruder. By averaging out the position of the extruder along the chain, we build an effective equilibrium model capable of reproducing experimental contact maps based solely on the positions of extrusion--blocking proteins. We assessed the quality of the effective model using numerical simulations of chromosomal segments and comparing the results with explicit-extruder models and experimental data. |
2312.07422 | Anh Duong Vo | Anh Duong Vo, Elisabeth Abs, Pau Vilimelis Aceituno, Benjamin
Friedrich Grewe, Katharina Anna Wilmes | Exploring the functional hierarchy of different pyramidal cell types in
temporal processing | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Recent research has revealed the unique functionality of cortical pyramidal
cell subtypes, namely intratelencephalic neurons (IT) and pyramidal-tract
neurons (PT). How these two populations interact with each other to fulfill
their functional roles remains poorly understood. We propose the existence of a
functional hierarchy between IT and PT due to their unidirectional connection
and distinct roles in sensory discrimination and motor tasks. To investigate
this hypothesis, we conducted a literature review of recent studies that
explored the properties and functionalities of IT and PT, including causal
lesion studies, population-based encoding, and calcium imaging experiments.
Further, we suggest future experiments to determine the relevance of the
canonical IT-PT circuit motif for temporal processing. Our work provides a
novel perspective on the mechanistic role of IT and PT in temporal processing.
| [
{
"created": "Tue, 12 Dec 2023 16:45:12 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Vo",
"Anh Duong",
""
],
[
"Abs",
"Elisabeth",
""
],
[
"Aceituno",
"Pau Vilimelis",
""
],
[
"Grewe",
"Benjamin Friedrich",
""
],
[
"Wilmes",
"Katharina Anna",
""
]
] | Recent research has revealed the unique functionality of cortical pyramidal cell subtypes, namely intratelencephalic neurons (IT) and pyramidal-tract neurons (PT). How these two populations interact with each other to fulfill their functional roles remains poorly understood. We propose the existence of a functional hierarchy between IT and PT due to their unidirectional connection and distinct roles in sensory discrimination and motor tasks. To investigate this hypothesis, we conducted a literature review of recent studies that explored the properties and functionalities of IT and PT, including causal lesion studies, population-based encoding, and calcium imaging experiments. Further, we suggest future experiments to determine the relevance of the canonical IT-PT circuit motif for temporal processing. Our work provides a novel perspective on the mechanistic role of IT and PT in temporal processing. |
2101.03985 | Deborah Weighill | Deborah Weighill, Marouen Ben Guebila, Kimberly Glass, John Platig,
Jen Jen Yeh and John Quackenbush | Gene targeting in disease networks | null | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Profiling of whole transcriptomes has become a cornerstone of molecular
biology and an invaluable tool for the characterization of clinical phenotypes
and the identification of disease subtypes. Analyses of these data are becoming
ever more sophisticated as we move beyond simple comparisons to consider
networks of higher-order interactions and associations. Gene regulatory
networks model the regulatory relationships of transcription factors and genes
and have allowed the identification of differentially regulated processes in
disease systems. In this perspective we discuss gene targeting scores, which
measure changes in inferred regulatory network interactions, and their use in
identifying disease-relevant processes. In addition, we present an example
analysis of pancreatic ductal adenocarcinoma demonstrating the power of gene
targeting scores to identify differential processes between complex phenotypes;
processes which would have been missed by only performing differential
expression analysis. This example demonstrates that gene targeting scores are
an invaluable addition to gene expression analysis in the characterization of
diseases and other complex phenotypes.
| [
{
"created": "Mon, 11 Jan 2021 15:50:39 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Weighill",
"Deborah",
""
],
[
"Guebila",
"Marouen Ben",
""
],
[
"Glass",
"Kimberly",
""
],
[
"Platig",
"John",
""
],
[
"Yeh",
"Jen Jen",
""
],
[
"Quackenbush",
"John",
""
]
] | Profiling of whole transcriptomes has become a cornerstone of molecular biology and an invaluable tool for the characterization of clinical phenotypes and the identification of disease subtypes. Analyses of these data are becoming ever more sophisticated as we move beyond simple comparisons to consider networks of higher-order interactions and associations. Gene regulatory networks model the regulatory relationships of transcription factors and genes and have allowed the identification of differentially regulated processes in disease systems. In this perspective we discuss gene targeting scores, which measure changes in inferred regulatory network interactions, and their use in identifying disease-relevant processes. In addition, we present an example analysis of pancreatic ductal adenocarcinoma demonstrating the power of gene targeting scores to identify differential processes between complex phenotypes; processes which would have been missed by only performing differential expression analysis. This example demonstrates that gene targeting scores are an invaluable addition to gene expression analysis in the characterization of diseases and other complex phenotypes. |
1208.0747 | Sarbaz Khoshnaw | S. Khoshnaw | Iterative Approximate Solutions of Kinetic Equations for Reversible
Enzyme Reactions | 28 pages, 22 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | We study kinetic models of reversible enzyme reactions and compare two
techniques for analytic approximate solutions of the model. Analytic
approximate solutions of non-linear reaction equations for reversible enzyme
reactions are calculated using the Homotopy Perturbation Method (HPM) and the
Simple Iteration Method (SIM). The results of the approximations are similar.
The Matlab programs are included in appendices.
| [
{
"created": "Fri, 3 Aug 2012 14:06:22 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Jan 2013 23:39:16 GMT",
"version": "v2"
}
] | 2013-01-17 | [
[
"Khoshnaw",
"S.",
""
]
] | We study kinetic models of reversible enzyme reactions and compare two techniques for analytic approximate solutions of the model. Analytic approximate solutions of non-linear reaction equations for reversible enzyme reactions are calculated using the Homotopy Perturbation Method (HPM) and the Simple Iteration Method (SIM). The results of the approximations are similar. The Matlab programs are included in appendices. |
1206.6129 | Marcelo Magnasco | Thibaud Taillefumier and Marcelo O. Magnasco | A phase transition in the first passage of a Brownian process through a
fluctuating boundary: implications for neural coding | null | null | 10.1073/pnas.1212479110 | null | q-bio.NC math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding the first time a fluctuating quantity reaches a given boundary is a
deceptively simple-looking problem of vast practical importance in physics,
biology, chemistry, neuroscience, economics and industry. Problems in which the
bound to be traversed is itself a fluctuating function of time include widely
studied settings in neural coding, such as neuronal integrators with irregular
inputs and internal noise. We show that the probability p(t) that a
Gauss-Markov process will first exceed the boundary at time t suffers a phase
transition as a function of the roughness of the boundary, as measured by its
H\"older exponent H, with critical value Hc = 1/2. For smoother boundaries, H >
1/2, the probability density is a continuous function of time. For rougher
boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of
zero measure: the probability density becomes divergent, almost everywhere
either zero or infinity. The critical point Hc = 1/2 corresponds to a
widely-studied case in the theory of neural coding, where the external input
integrated by a model neuron is a white-noise process, such as uncorrelated but
precisely balanced excitatory and inhibitory inputs. We argue this transition
corresponds to a sharp boundary between rate codes, in which the neural firing
probability varies smoothly, and temporal codes, in which the neuron fires at
sharply-defined times regardless of the intensity of internal noise.
| [
{
"created": "Tue, 26 Jun 2012 21:48:53 GMT",
"version": "v1"
}
] | 2015-06-05 | [
[
"Taillefumier",
"Thibaud",
""
],
[
"Magnasco",
"Marcelo O.",
""
]
] | Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics and industry. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied settings in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its H\"older exponent H, with critical value Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely-studied case in the theory of neural coding, where the external input integrated by a model neuron is a white-noise process, such as uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply-defined times regardless of the intensity of internal noise. |
2404.14867 | Zhong Wang | Zhong Wang | How we Learn Concepts: A Review of Relevant Advances Since 2010 and Its
Inspirations for Teaching | null | null | 10.26689/jcer.v8i6.7049 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | This article reviews the psychological and neuroscience achievements in
concept learning since 2010 from the perspectives of individual learning and
social learning, and discusses several issues related to concept learning,
including how machine learning can assist the study of concept learning. (1) In
terms of individual learning, current evidence shows that the brain tends to
process concrete concepts through typical features (shared features), whereas
for abstract concepts, semantic processing is the most important cognitive
route. (2) In terms of social learning, Interpersonal Neuro Synchronization
(INS) is considered the
main indicator of efficient knowledge transfer (such as teaching activities
between teachers and students), but this phenomenon only broadens the channels
for concept sources and does not change the basic mode of individual concept
learning. Ultimately, this article argues that the way the human brain
processes concepts depends on the concept's own characteristics, so there are no
'better' strategies in teaching, only more 'suitable' strategies.
| [
{
"created": "Tue, 23 Apr 2024 09:42:59 GMT",
"version": "v1"
}
] | 2024-07-15 | [
[
"Wang",
"Zhong",
""
]
] | This article reviews the psychological and neuroscience achievements in concept learning since 2010 from the perspectives of individual learning and social learning, and discusses several issues related to concept learning, including how machine learning can assist the study of concept learning. (1) In terms of individual learning, current evidence shows that the brain tends to process concrete concepts through typical features (shared features), whereas for abstract concepts, semantic processing is the most important cognitive route. (2) In terms of social learning, Interpersonal Neuro Synchronization (INS) is considered the main indicator of efficient knowledge transfer (such as teaching activities between teachers and students), but this phenomenon only broadens the channels for concept sources and does not change the basic mode of individual concept learning. Ultimately, this article argues that the way the human brain processes concepts depends on the concept's own characteristics, so there are no 'better' strategies in teaching, only more 'suitable' strategies. |
1311.6717 | Jing Liu | Qian Wang, Yang Yu, Keqin Pan, and Jing Liu | Liquid Metal Angiography for Mega Contrast X-ray Visualization of
Vascular Network | null | null | 10.1109/TBME.2014.2317554 | null | q-bio.QM physics.bio-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visualizing the anatomical vessel networks plays a vital role in
physiological or pathological investigations. However, identifying the fine
structures of the smallest capillary vessels via conventional imaging methods
remains a big challenge. Here, the room temperature liquid metal angiography
was proposed for the first time to produce mega contrast X-ray images for
multi-scale vasculature mapping. Gallium was used as the room temperature
liquid metal contrast agent and perfused into the vessels of in vitro pig
hearts and kidneys. We scanned the samples under X-ray and compared the
angiograms with those obtained via conventional contrast agent--the iohexol. As
quantitatively proved by the gray scale histograms, the contrast of the vessels
to the surrounding tissues in the liquid metal angiograms is orders higher than
that of the iohexol enhanced images. And the resolution of the angiograms has
reached 100{\mu}m, which means the capillaries can be clearly distinguished in
the liquid metal enhanced images. With tomography from the micro-CT, we also
managed to reconstruct the 3-dimensional structures of the kidney vessels.
Tremendous clarity and efficiency of the method over existing approaches were
experimentally demonstrated. It was disclosed that the usually invisible
capillary networks now become distinctively clear in the gallium angiograms.
This mechanism can be generalized and extended to a wide spectrum of
3-dimensional computational tomographic areas. It provides a soft tool for
quickly reconstructing high resolution spatial channel networks for scientific
researches or engineering applications where complicated and time consuming
surgical procedures are no longer necessary.
| [
{
"created": "Tue, 26 Nov 2013 15:58:58 GMT",
"version": "v1"
}
] | 2014-05-20 | [
[
"Wang",
"Qian",
""
],
[
"Yu",
"Yang",
""
],
[
"Pan",
"Keqin",
""
],
[
"Liu",
"Jing",
""
]
] | Visualizing the anatomical vessel networks plays a vital role in physiological or pathological investigations. However, identifying the fine structures of the smallest capillary vessels via conventional imaging methods remains a big challenge. Here, the room temperature liquid metal angiography was proposed for the first time to produce mega contrast X-ray images for multi-scale vasculature mapping. Gallium was used as the room temperature liquid metal contrast agent and perfused into the vessels of in vitro pig hearts and kidneys. We scanned the samples under X-ray and compared the angiograms with those obtained via conventional contrast agent--the iohexol. As quantitatively proved by the gray scale histograms, the contrast of the vessels to the surrounding tissues in the liquid metal angiograms is orders higher than that of the iohexol enhanced images. And the resolution of the angiograms has reached 100{\mu}m, which means the capillaries can be clearly distinguished in the liquid metal enhanced images. With tomography from the micro-CT, we also managed to reconstruct the 3-dimensional structures of the kidney vessels. Tremendous clarity and efficiency of the method over existing approaches were experimentally demonstrated. It was disclosed that the usually invisible capillary networks now become distinctively clear in the gallium angiograms. This mechanism can be generalized and extended to a wide spectrum of 3-dimensional computational tomographic areas. It provides a soft tool for quickly reconstructing high resolution spatial channel networks for scientific researches or engineering applications where complicated and time consuming surgical procedures are no longer necessary. |
2004.11903 | Delfim F. M. Torres | Cristiana J. Silva, Delfim F. M. Torres | On SICA models for HIV transmission | This is a preprint of the following paper: C. J. Silva and D. F. M.
Torres, On SICA models for HIV transmission, published in 'Mathematical
Modelling and Analysis of Infectious Diseases', edited by K. Hattaf and H.
Dutta, Springer Nature Switzerland AG. Submitted 28/Dec/2019; revised
21/Apr/2020; accepted 24/Apr/2020. arXiv admin note: text overlap with
arXiv:1703.06446, arXiv:1612.00732, arXiv:1812.06965 | Studies in Systems, Decision and Control 302 (2020), 155--179 | 10.1007/978-3-030-49896-2_6 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the SICA (Susceptible-Infectious-Chronic-AIDS) mathematical model
for transmission dynamics of the human immunodeficiency virus (HIV) with
varying population size in a homogeneously mixing population. We consider SICA
models given by systems of ordinary differential equations and some
generalizations given by systems with fractional and stochastic differential
operators. Local and global stability results are proved for deterministic,
fractional, and stochastic-type SICA models. Two case studies, in Cape Verde
and Morocco, are investigated.
| [
{
"created": "Fri, 24 Apr 2020 12:08:34 GMT",
"version": "v1"
}
] | 2020-08-10 | [
[
"Silva",
"Cristiana J.",
""
],
[
"Torres",
"Delfim F. M.",
""
]
] | We revisit the SICA (Susceptible-Infectious-Chronic-AIDS) mathematical model for transmission dynamics of the human immunodeficiency virus (HIV) with varying population size in a homogeneously mixing population. We consider SICA models given by systems of ordinary differential equations and some generalizations given by systems with fractional and stochastic differential operators. Local and global stability results are proved for deterministic, fractional, and stochastic-type SICA models. Two case studies, in Cape Verde and Morocco, are investigated. |
2205.15718 | Adam Mielke | Adam Mielke | On the Role of Spatial Effects in Early Estimates of Disease
Infectiousness: A Second Quantization Approach | Main Article: 6 pages, 1 figure, 1 table Supplementary material: 9
pages, 2 figures Version 2 has been submitted for peer-review | null | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the covid-19 pandemic still ongoing and an enormous amount of test data
available, the lessons learned over the last two years need to be developed to
a point where they can provide understanding for tackling new variants and
future diseases. The SIR model, commonly used to model disease spread, predicts
exponential initial growth, which helps establish the infectiousness of a
disease in the early days of an outbreak. Unfortunately, the exponential growth
becomes muddied by spatial, finite-size, and non-equilibrium effects in
realistic systems, and robust estimates that may be used in prediction and
description are still lacking. I here establish a second quantization framework
that allows introduction of arbitrarily complicated spatial behavior, and I
show that a simplified version of this model is in good agreement with both the
growth of different covid-19 variants in Denmark and analytical results from
the theory of branched polymers. Denmark is well-suited for comparison, because
the number of tests with variant information in early December 2021 is very
high, so the spread of a single variant can be followed. I expect this model to
build bridges between the epidemic modeling and solid state communities. The
long-term goal of the particular analysis in this paper is to establish priors
that allow better early estimates for the infectiousness of a new disease.
| [
{
"created": "Tue, 31 May 2022 12:05:11 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jun 2022 23:26:10 GMT",
"version": "v2"
}
] | 2022-06-09 | [
[
"Mielke",
"Adam",
""
]
] | With the covid-19 pandemic still ongoing and an enormous amount of test data available, the lessons learned over the last two years need to be developed to a point where they can provide understanding for tackling new variants and future diseases. The SIR model, commonly used to model disease spread, predicts exponential initial growth, which helps establish the infectiousness of a disease in the early days of an outbreak. Unfortunately, the exponential growth becomes muddied by spatial, finite-size, and non-equilibrium effects in realistic systems, and robust estimates that may be used in prediction and description are still lacking. I here establish a second quantization framework that allows introduction of arbitrarily complicated spatial behavior, and I show that a simplified version of this model is in good agreement with both the growth of different covid-19 variants in Denmark and analytical results from the theory of branched polymers. Denmark is well-suited for comparison, because the number of tests with variant information in early December 2021 is very high, so the spread of a single variant can be followed. I expect this model to build bridges between the epidemic modeling and solid state communities. The long-term goal of the particular analysis in this paper is to establish priors that allow better early estimates for the infectiousness of a new disease. |
0809.1717 | Philippe Rondard | Jianfeng Liu (IGF), Damien Maurel (IGF), S\'ebastien Etzol (IGF),
Isabelle Brabet (IGF), Herv\'e Ansanay, Jean-Philippe Pin (IGF), Philippe
Rondard (IGF) | Molecular determinants involved in the allosteric control of agonist
affinity in the GABAB receptor by the GABAB2 subunit | null | The Journal of Biological Chemistry 279, 16 (2004) 15824-30 | 10.1074/jbc.M313639200 | null | q-bio.BM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The gamma-aminobutyric acid type B (GABAB) receptor is an allosteric complex
made of two subunits, GABAB1 (GB1) and GABAB2 (GB2). Both subunits are composed
of an extracellular Venus flytrap domain (VFT) and a heptahelical domain (HD).
GB1 binds GABA, and GB2 plays a major role in G-protein activation as well as
in the high agonist affinity state of GB1. How agonist affinity in GB1 is
regulated in the receptor remains unknown. Here, we demonstrate that GB2 VFT is
a major molecular determinant involved in this control. We show that isolated
versions of GB1 and GB2 VFTs in the absence of the HD and C-terminal tail can
form hetero-oligomers as shown by time-resolved fluorescence resonance energy
transfer (based on HTRF technology). GB2 VFT and its association with GB1 VFT
controlled agonist affinity in GB1 in two ways. First, GB2 VFT exerted a direct
action on GB1 VFT, as it slightly increased agonist affinity in isolated GB1
VFT. Second and most importantly, GB2 VFT prevented inhibitory interaction
between the two main domains (VFT and HD) of GB1. According to this model, we
propose that GB1 HD prevents the possible natural closure of GB1 VFT. In
contrast, GB2 VFT facilitates this closure. Finally, such inhibitory contacts
between HD and VFT in GB1 could be similar to those important to maintain the
inactive state of the receptor.
| [
{
"created": "Wed, 10 Sep 2008 07:04:48 GMT",
"version": "v1"
}
] | 2008-09-11 | [
[
"Liu",
"Jianfeng",
"",
"IGF"
],
[
"Maurel",
"Damien",
"",
"IGF"
],
[
"Etzol",
"Sébastien",
"",
"IGF"
],
[
"Brabet",
"Isabelle",
"",
"IGF"
],
[
"Ansanay",
"Hervé",
"",
"IGF"
],
[
"Pin",
"Jean-Philippe",
"",
"IGF"
],
[
"Rondard",
"Philippe",
"",
"IGF"
]
] | The gamma-aminobutyric acid type B (GABAB) receptor is an allosteric complex made of two subunits, GABAB1 (GB1) and GABAB2 (GB2). Both subunits are composed of an extracellular Venus flytrap domain (VFT) and a heptahelical domain (HD). GB1 binds GABA, and GB2 plays a major role in G-protein activation as well as in the high agonist affinity state of GB1. How agonist affinity in GB1 is regulated in the receptor remains unknown. Here, we demonstrate that GB2 VFT is a major molecular determinant involved in this control. We show that isolated versions of GB1 and GB2 VFTs in the absence of the HD and C-terminal tail can form hetero-oligomers as shown by time-resolved fluorescence resonance energy transfer (based on HTRF technology). GB2 VFT and its association with GB1 VFT controlled agonist affinity in GB1 in two ways. First, GB2 VFT exerted a direct action on GB1 VFT, as it slightly increased agonist affinity in isolated GB1 VFT. Second and most importantly, GB2 VFT prevented inhibitory interaction between the two main domains (VFT and HD) of GB1. According to this model, we propose that GB1 HD prevents the possible natural closure of GB1 VFT. In contrast, GB2 VFT facilitates this closure. Finally, such inhibitory contacts between HD and VFT in GB1 could be similar to those important to maintain the inactive state of the receptor. |
2301.07246 | Daniel Swartz | Daniel W. Swartz, Hyunseok Lee, Mehran Kardar, Kirill S. Korolev | Competition on the edge of an expanding population | 5 pages, 4 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In growing populations, the fate of mutations depends on their competitive
ability against the ancestor and their ability to colonize new territory. Here
we present a theory that integrates both aspects of mutant fitness by coupling
the classic description of one-dimensional competition (Fisher equation) to the
minimal model of front shape (KPZ equation). We solved these equations and
found three regimes, which are controlled solely by the expansion rates, solely
by the competitive abilities, or by both. Collectively, our results provide a
simple framework to study spatial competition.
| [
{
"created": "Wed, 18 Jan 2023 01:09:32 GMT",
"version": "v1"
}
] | 2023-01-19 | [
[
"Swartz",
"Daniel W.",
""
],
[
"Lee",
"Hyunseok",
""
],
[
"Kardar",
"Mehran",
""
],
[
"Korolev",
"Kirill S.",
""
]
] | In growing populations, the fate of mutations depends on their competitive ability against the ancestor and their ability to colonize new territory. Here we present a theory that integrates both aspects of mutant fitness by coupling the classic description of one-dimensional competition (Fisher equation) to the minimal model of front shape (KPZ equation). We solved these equations and found three regimes, which are controlled solely by the expansion rates, solely by the competitive abilities, or by both. Collectively, our results provide a simple framework to study spatial competition. |
0707.2376 | Francesc Rossell\'o | Gabriel Cardona, Francesc Rossello, Gabriel Valiente | Tripartitions do not always discriminate phylogenetic networks | 26 pages, 9 figures | null | null | null | q-bio.PE cs.CE cs.DM | null | Phylogenetic networks are a generalization of phylogenetic trees that allow
for the representation of non-treelike evolutionary events, like recombination,
hybridization, or lateral gene transfer. In a recent series of papers devoted
to the study of reconstructibility of phylogenetic networks, Moret, Nakhleh,
Warnow and collaborators introduced the so-called tripartition metric for
phylogenetic networks. In this paper we show that, in fact, this tripartition
metric does not satisfy the separation axiom of distances (zero distance means
isomorphism, or, in a more relaxed version, zero distance means
indistinguishability in some specific sense) in any of the subclasses of
phylogenetic networks where it is claimed to do so. We also present a subclass
of phylogenetic networks whose members can be singled out by means of their
sets of tripartitions (or even clusters), and hence where the latter can be
used to define a meaningful metric.
| [
{
"created": "Mon, 16 Jul 2007 19:59:42 GMT",
"version": "v1"
}
] | 2007-07-17 | [
[
"Cardona",
"Gabriel",
""
],
[
"Rossello",
"Francesc",
""
],
[
"Valiente",
"Gabriel",
""
]
] | Phylogenetic networks are a generalization of phylogenetic trees that allow for the representation of non-treelike evolutionary events, like recombination, hybridization, or lateral gene transfer. In a recent series of papers devoted to the study of reconstructibility of phylogenetic networks, Moret, Nakhleh, Warnow and collaborators introduced the so-called tripartition metric for phylogenetic networks. In this paper we show that, in fact, this tripartition metric does not satisfy the separation axiom of distances (zero distance means isomorphism, or, in a more relaxed version, zero distance means indistinguishability in some specific sense) in any of the subclasses of phylogenetic networks where it is claimed to do so. We also present a subclass of phylogenetic networks whose members can be singled out by means of their sets of tripartitions (or even clusters), and hence where the latter can be used to define a meaningful metric.
2209.05829 | Vito Dichio | Vito Dichio and Fabrizio De Vico Fallani | Statistical models of complex brain networks: a maximum entropy approach | 34 pages, 8 figures | null | null | null | q-bio.NC physics.bio-ph q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The brain is a highly complex system. Most of such complexity stems from the
intermingled connections between its parts, which give rise to rich dynamics
and to the emergence of high-level cognitive functions. Disentangling the
underlying network structure is crucial to understand the brain functioning
under both healthy and pathological conditions. Yet, analyzing brain networks
is challenging, in part because their structure represents only one possible
realization of a generative stochastic process which is in general unknown.
Having a formal way to cope with such intrinsic variability is therefore
central for the characterization of brain network properties. Addressing this
issue entails the development of appropriate tools mostly adapted from network
science and statistics. Here, we focus on a particular class of maximum entropy
models for networks, i.e. exponential random graph models (ERGMs), as a
parsimonious approach to identify the local connection mechanisms behind
observed global network structure. Efforts are reviewed on the quest for basic
organizational properties of human brain networks, as well as on the
identification of predictive biomarkers of neurological diseases such as
stroke. We conclude with a discussion on how emerging results and tools from
statistical graph modeling, associated with forthcoming improvements in
experimental data acquisition, could lead to a finer probabilistic description
of complex systems in network neuroscience.
| [
{
"created": "Tue, 13 Sep 2022 09:08:38 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Apr 2023 14:14:03 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Jun 2023 15:48:47 GMT",
"version": "v3"
},
{
"created": "Fri, 11 Aug 2023 08:22:43 GMT",
"version": "v4"
}
] | 2023-08-14 | [
[
"Dichio",
"Vito",
""
],
[
"Fallani",
"Fabrizio De Vico",
""
]
] | The brain is a highly complex system. Most of such complexity stems from the intermingled connections between its parts, which give rise to rich dynamics and to the emergence of high-level cognitive functions. Disentangling the underlying network structure is crucial to understand the brain functioning under both healthy and pathological conditions. Yet, analyzing brain networks is challenging, in part because their structure represents only one possible realization of a generative stochastic process which is in general unknown. Having a formal way to cope with such intrinsic variability is therefore central for the characterization of brain network properties. Addressing this issue entails the development of appropriate tools mostly adapted from network science and statistics. Here, we focus on a particular class of maximum entropy models for networks, i.e. exponential random graph models (ERGMs), as a parsimonious approach to identify the local connection mechanisms behind observed global network structure. Efforts are reviewed on the quest for basic organizational properties of human brain networks, as well as on the identification of predictive biomarkers of neurological diseases such as stroke. We conclude with a discussion on how emerging results and tools from statistical graph modeling, associated with forthcoming improvements in experimental data acquisition, could lead to a finer probabilistic description of complex systems in network neuroscience. |
q-bio/0407018 | Paolo Castorina | Paolo Castorina, Dario Zappala' | Tumor Gompertzian growth by cellular energetic balance | Major modifications and refinements | null | null | null | q-bio.CB physics.bio-ph | null | A macroscopic model of the tumor Gompertzian growth is proposed. The new
approach is based on the energetic balance among the different cell activities,
described by methods of statistical mechanics and related to the growth
inhibitor factors. The model is successfully applied to the multicellular tumor
spheroid data.
| [
{
"created": "Mon, 12 Jul 2004 17:43:54 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Dec 2004 09:16:31 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Castorina",
"Paolo",
""
],
[
"Zappala'",
"Dario",
""
]
] | A macroscopic model of the tumor Gompertzian growth is proposed. The new approach is based on the energetic balance among the different cell activities, described by methods of statistical mechanics and related to the growth inhibitor factors. The model is successfully applied to the multicellular tumor spheroid data. |
q-bio/0606035 | Santiago Schnell | Benjamin Ribba, Thierry Colin and Santiago Schnell | A multiscale mathematical model of cancer, and its use in analyzing
irradiation therapies | 19 pages, 14 figures. Article available at
http://www.tbiomed.com/content/3/1/7 Copyright 2006 Ribba et al; licensee
BioMed Central Ltd. This is an Open Access article distributed under the
terms of the Creative Commons Attribution License
(http://creativecommons.org/licenses/by/2.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is
properly cited | null | 10.1186/1742-4682-3-7 | null | q-bio.TO q-bio.SC | null | Background: Radiotherapy outcomes are usually predicted using the Linear
Quadratic model. However, this model does not integrate complex features of
tumor growth, in particular cell cycle regulation.
Methods: In this paper, we propose a multiscale model of cancer growth based
on the genetic and molecular features of the evolution of colorectal cancer.
The model includes key genes, cellular kinetics, tissue dynamics, macroscopic
tumor evolution and radiosensitivity dependence on the cell cycle phase. We
investigate the role of gene-dependent cell cycle regulation in the response of
tumors to therapeutic irradiation protocols.
Results: Simulation results emphasize the importance of tumor tissue features
and the need to consider regulating factors such as hypoxia, as well as tumor
geometry and tissue dynamics, in predicting and improving radiotherapeutic
efficacy.
Conclusion: This model provides insight into the coupling of complex
biological processes, which leads to a better understanding of oncogenesis.
This will hopefully lead to improved irradiation therapy.
| [
{
"created": "Sat, 24 Jun 2006 05:29:10 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ribba",
"Benjamin",
""
],
[
"Colin",
"Thierry",
""
],
[
"Schnell",
"Santiago",
""
]
] | Background: Radiotherapy outcomes are usually predicted using the Linear Quadratic model. However, this model does not integrate complex features of tumor growth, in particular cell cycle regulation. Methods: In this paper, we propose a multiscale model of cancer growth based on the genetic and molecular features of the evolution of colorectal cancer. The model includes key genes, cellular kinetics, tissue dynamics, macroscopic tumor evolution and radiosensitivity dependence on the cell cycle phase. We investigate the role of gene-dependent cell cycle regulation in the response of tumors to therapeutic irradiation protocols. Results: Simulation results emphasize the importance of tumor tissue features and the need to consider regulating factors such as hypoxia, as well as tumor geometry and tissue dynamics, in predicting and improving radiotherapeutic efficacy. Conclusion: This model provides insight into the coupling of complex biological processes, which leads to a better understanding of oncogenesis. This will hopefully lead to improved irradiation therapy. |
2108.06170 | Suman Das | Suman G Das, Joachim Krug, Muhittin Mungan | A Driven Disordered Systems Approach to Biological Evolution in Changing
Environments | null | Physical Review X 12, 031040 (2022) | null | null | q-bio.PE cond-mat.dis-nn cond-mat.soft | http://creativecommons.org/licenses/by/4.0/ | Biological evolution of a population is governed by the fitness landscape,
which is a map from genotype to fitness. However, a fitness landscape depends
on the organism's environment, and evolution in changing environments is still
poorly understood. We study a particular model of antibiotic resistance
evolution in bacteria where the antibiotic concentration is an environmental
parameter and the fitness landscapes incorporate tradeoffs between adaptation
to low and high antibiotic concentration. With evolutionary dynamics that
follow fitness gradients, the evolution of the system under slowly changing
antibiotic concentration resembles the athermal dynamics of disordered physical
systems under external drives. Exploiting this resemblance, we show that our
model can be described as a system with interacting hysteretic elements. As in
the case of the driven disordered systems, adaptive evolution under antibiotic
concentration cycling is found to exhibit hysteresis loops and memory
formation. We derive a number of analytical results for quasistatic
concentration changes. We also perform numerical simulations to study how these
effects are modified under driving protocols in which the concentration is
changed in discrete steps. Our approach provides a general framework for
studying motifs of evolutionary dynamics in biological systems in a changing
environment.
| [
{
"created": "Fri, 13 Aug 2021 11:19:14 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Sep 2022 19:16:45 GMT",
"version": "v2"
}
] | 2022-09-26 | [
[
"Das",
"Suman G",
""
],
[
"Krug",
"Joachim",
""
],
[
"Mungan",
"Muhittin",
""
]
] | Biological evolution of a population is governed by the fitness landscape, which is a map from genotype to fitness. However, a fitness landscape depends on the organism's environment, and evolution in changing environments is still poorly understood. We study a particular model of antibiotic resistance evolution in bacteria where the antibiotic concentration is an environmental parameter and the fitness landscapes incorporate tradeoffs between adaptation to low and high antibiotic concentration. With evolutionary dynamics that follow fitness gradients, the evolution of the system under slowly changing antibiotic concentration resembles the athermal dynamics of disordered physical systems under external drives. Exploiting this resemblance, we show that our model can be described as a system with interacting hysteretic elements. As in the case of the driven disordered systems, adaptive evolution under antibiotic concentration cycling is found to exhibit hysteresis loops and memory formation. We derive a number of analytical results for quasistatic concentration changes. We also perform numerical simulations to study how these effects are modified under driving protocols in which the concentration is changed in discrete steps. Our approach provides a general framework for studying motifs of evolutionary dynamics in biological systems in a changing environment.
0907.0335 | Ganna Rozhnova | Ganna Rozhnova, Ana Nunes | Population dynamics on random networks: simulations and analytical
models | 8 pages, 5 figures; we have expanded and rewritten the introduction,
slightly modified the abstract and the text in other sections; also, several
new references have been added in the revised manuscript (Refs.
[17-25,30,35]); | Eur. Phys. J. B Volume 74, Number 2, March II 2010 Pages 235 - 242 | 10.1140/epjb/e2010-00068-7 | null | q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the phase diagram of the standard pair approximation equations for
two different models in population dynamics, the
susceptible-infective-recovered-susceptible model of infection spread and a
predator-prey interaction model, on a network of homogeneous degree $k$. These
models have similar phase diagrams and represent two classes of systems for
which noisy oscillations, still largely unexplained, are observed in nature. We
show that for a certain range of the parameter $k$ both models exhibit an
oscillatory phase in a region of parameter space that corresponds to weak
driving. This oscillatory phase, however, disappears when $k$ is large. For
$k=3, 4$, we compare the phase diagram of the standard pair approximation
equations of both models with the results of simulations on regular random
graphs of the same degree. We show that for parameter values in the oscillatory
phase, and even for large system sizes, the simulations either die out or
exhibit damped oscillations, depending on the initial conditions. We discuss
this failure of the standard pair approximation model to capture even the
qualitative behavior of the simulations on large regular random graphs and the
relevance of the oscillatory phase in the pair approximation diagrams to
explain the cycling behavior found in real populations.
| [
{
"created": "Thu, 2 Jul 2009 10:41:47 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jan 2010 19:41:43 GMT",
"version": "v2"
}
] | 2010-07-07 | [
[
"Rozhnova",
"Ganna",
""
],
[
"Nunes",
"Ana",
""
]
] | We study the phase diagram of the standard pair approximation equations for two different models in population dynamics, the susceptible-infective-recovered-susceptible model of infection spread and a predator-prey interaction model, on a network of homogeneous degree $k$. These models have similar phase diagrams and represent two classes of systems for which noisy oscillations, still largely unexplained, are observed in nature. We show that for a certain range of the parameter $k$ both models exhibit an oscillatory phase in a region of parameter space that corresponds to weak driving. This oscillatory phase, however, disappears when $k$ is large. For $k=3, 4$, we compare the phase diagram of the standard pair approximation equations of both models with the results of simulations on regular random graphs of the same degree. We show that for parameter values in the oscillatory phase, and even for large system sizes, the simulations either die out or exhibit damped oscillations, depending on the initial conditions. We discuss this failure of the standard pair approximation model to capture even the qualitative behavior of the simulations on large regular random graphs and the relevance of the oscillatory phase in the pair approximation diagrams to explain the cycling behavior found in real populations. |
2002.12831 | Marcus Aguiar de | Rodrigo A. Caetano, Sergio Sanchez, Carolina L. N. Costa, Marcus A. M.
de Aguiar | Sympatric speciation based on pure assortative mating | 30 pages, 10 figures | null | 10.1088/1751-8121/ab7b9f | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although geographic isolation has been shown to play a key role in promoting
reproductive isolation, it is now believed that speciation can also happen in
sympatry and with considerable gene flow. Here we present a model of sympatric
speciation based on assortative mating that does not require a genetic
threshold for reproduction, i.e., that does not directly associate genetic
differences between individuals with reproductive incompatibilities. In the
model individuals mate with the most similar partner in their pool of potential
mates, irrespective of how dissimilar it might be. We show that assortativity
alone can lead to the formation of clusters of genetically similar individuals.
The absence of a minimal genetic similarity for mating implies the constant
generation of hybrids and brings up the old problem of species definition.
Here, we define species based on clustering of genetically similar individuals
but allowing genetic flow among different species. We show that the results
obtained with the present model are in good agreement with empirical data, in
which different species can still reproduce and generate hybrids.
| [
{
"created": "Fri, 28 Feb 2020 16:00:41 GMT",
"version": "v1"
}
] | 2020-05-20 | [
[
"Caetano",
"Rodrigo A.",
""
],
[
"Sanchez",
"Sergio",
""
],
[
"Costa",
"Carolina L. N.",
""
],
[
"de Aguiar",
"Marcus A. M.",
""
]
] | Although geographic isolation has been shown to play a key role in promoting reproductive isolation, it is now believed that speciation can also happen in sympatry and with considerable gene flow. Here we present a model of sympatric speciation based on assortative mating that does not require a genetic threshold for reproduction, i.e., that does not directly associate genetic differences between individuals with reproductive incompatibilities. In the model individuals mate with the most similar partner in their pool of potential mates, irrespective of how dissimilar it might be. We show that assortativity alone can lead to the formation of clusters of genetically similar individuals. The absence of a minimal genetic similarity for mating implies the constant generation of hybrids and brings up the old problem of species definition. Here, we define species based on clustering of genetically similar individuals but allowing genetic flow among different species. We show that the results obtained with the present model are in good agreement with empirical data, in which different species can still reproduce and generate hybrids. |
2102.08468 | Ezekiel Adebiyi | Yagoub Adam, Suraju Sadeeq, Judit Kumuthini, Olabode Ajayi, Gordon
Wells, Rotimi Solomon, Olubanke Ogunlana, Emmmanuel Adetiba, Emeka Iweala,
Benedikt Brors, Ezekiel Adebiyi | Polygenic Risk Score in Africa Population: Progress and challenges | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Polygenic risk score (PRS) analysis is a powerful method being used to
estimate an individual's genetic risk towards targeted traits. PRS analysis
could be used to obtain evidence of a genetic effect beyond Genome-Wide
Association Studies (GWAS) results i.e. when there are no significant markers.
PRS analysis has been widely applied to investigate the genetic basis of
several traits including rare diseases. However, the accuracy of PRS analysis
depends on the genomic data of the underlying population. For instance, several
studies showed that obtaining higher prediction power of PRS analysis is
challenging for non-Europeans. In this manuscript, we reviewed the conventional
PRS methods and their application to sub-Saharan African communities. We
concluded that the limiting factor in applying PRS analysis to sub-Saharan
populations is the lack of sufficient GWAS data. We also recommended
developing African-specific PRS tools.
| [
{
"created": "Tue, 16 Feb 2021 22:08:12 GMT",
"version": "v1"
}
] | 2021-02-18 | [
[
"Adam",
"Yagoub",
""
],
[
"Sadeeq",
"Suraju",
""
],
[
"Kumuthini",
"Judit",
""
],
[
"Ajayi",
"Olabode",
""
],
[
"Wells",
"Gordon",
""
],
[
"Solomon",
"Rotimi",
""
],
[
"Ogunlana",
"Olubanke",
""
],
[
"Adetiba",
"Emmmanuel",
""
],
[
"Iweala",
"Emeka",
""
],
[
"Brors",
"Benedikt",
""
],
[
"Adebiyi",
"Ezekiel",
""
]
] | Polygenic risk score (PRS) analysis is a powerful method being used to estimate an individual's genetic risk towards targeted traits. PRS analysis could be used to obtain evidence of a genetic effect beyond Genome-Wide Association Studies (GWAS) results i.e. when there are no significant markers. PRS analysis has been widely applied to investigate the genetic basis of several traits including rare diseases. However, the accuracy of PRS analysis depends on the genomic data of the underlying population. For instance, several studies showed that obtaining higher prediction power of PRS analysis is challenging for non-Europeans. In this manuscript, we reviewed the conventional PRS methods and their application to sub-Saharan African communities. We concluded that the limiting factor in applying PRS analysis to sub-Saharan populations is the lack of sufficient GWAS data. We also recommended developing African-specific PRS tools.
1911.00996 | Shigang Liu | Shigang Liu, Jun Zhang, Yang Xiang, Wanlei Zhou, Dongxi Xiang | A Study of Data Pre-processing Techniques for Imbalanced Biomedical Data
Classification | This paper is scheduled for inclusion in V16 N3 2020, International
Journal of Bioinformatics Research and Applications (IJBRA) | V16 N3, International Journal of Bioinformatics Research and
Applications (IJBRA), 2020 | null | null | q-bio.QM cs.LG stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | Biomedical data are widely accepted in developing prediction models for
identifying a specific tumor, drug discovery and classification of human
cancers. However, previous studies usually focused on different classifiers,
and overlooked the class imbalance problem in real-world biomedical datasets.
There is a lack of studies on the evaluation of data pre-processing techniques,
such as resampling and feature selection, on imbalanced biomedical data
learning. The relationship between data pre-processing techniques and the data
distributions has never been analysed in previous studies. This article mainly
focuses on reviewing and evaluating some popular and recently developed
resampling and feature selection methods for class imbalance learning. We
analyse the effectiveness of each technique from a data distribution perspective.
Extensive experiments have been done based on five classifiers, four
performance measures, eight learning techniques across twenty real-world
datasets. Experimental results show that: (1) resampling and feature selection
techniques exhibit better performance using support vector machine (SVM)
classifier. However, resampling and Feature Selection techniques perform poorly
when using C4.5 decision tree and Linear discriminant analysis classifiers; (2)
for datasets with different distributions, techniques such as Random
undersampling and Feature Selection perform better than other data
pre-processing methods with T Location-Scale distribution when using SVM and
KNN (K-nearest neighbours) classifiers. Random oversampling outperforms other
methods on Negative Binomial distribution using Random Forest classifier with
lower level of imbalance ratio; (3) Feature Selection outperforms other data
pre-processing methods in most cases, thus, Feature Selection with SVM
classifier is the best choice for imbalanced biomedical data learning.
| [
{
"created": "Mon, 4 Nov 2019 00:32:32 GMT",
"version": "v1"
}
] | 2019-11-05 | [
[
"Liu",
"Shigang",
""
],
[
"Zhang",
"Jun",
""
],
[
"Xiang",
"Yang",
""
],
[
"Zhou",
"Wanlei",
""
],
[
"Xiang",
"Dongxi",
""
]
] | Biomedical data are widely accepted in developing prediction models for identifying a specific tumor, drug discovery and classification of human cancers. However, previous studies usually focused on different classifiers, and overlooked the class imbalance problem in real-world biomedical datasets. There is a lack of studies on the evaluation of data pre-processing techniques, such as resampling and feature selection, on imbalanced biomedical data learning. The relationship between data pre-processing techniques and the data distributions has never been analysed in previous studies. This article mainly focuses on reviewing and evaluating some popular and recently developed resampling and feature selection methods for class imbalance learning. We analyse the effectiveness of each technique from a data distribution perspective. Extensive experiments have been done based on five classifiers, four performance measures, eight learning techniques across twenty real-world datasets. Experimental results show that: (1) resampling and feature selection techniques exhibit better performance using support vector machine (SVM) classifier. However, resampling and Feature Selection techniques perform poorly when using C4.5 decision tree and Linear discriminant analysis classifiers; (2) for datasets with different distributions, techniques such as Random undersampling and Feature Selection perform better than other data pre-processing methods with T Location-Scale distribution when using SVM and KNN (K-nearest neighbours) classifiers. Random oversampling outperforms other methods on Negative Binomial distribution using Random Forest classifier with lower level of imbalance ratio; (3) Feature Selection outperforms other data pre-processing methods in most cases, thus, Feature Selection with SVM classifier is the best choice for imbalanced biomedical data learning.
2010.11810 | R. James Cotton | R. James Cotton, Fabian H. Sinz, Andreas S. Tolias | Factorized Neural Processes for Neural Processes: $K$-Shot Prediction of
Neural Responses | 14 pages, 5 figures, NeurIPS 2020 conference paper | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, artificial neural networks have achieved state-of-the-art
performance for predicting the responses of neurons in the visual cortex to
natural stimuli. However, they require a time-consuming parameter optimization
process for accurately modeling the tuning function of newly observed neurons,
which prohibits many applications including real-time, closed-loop experiments.
We overcome this limitation by formulating the problem as $K$-shot prediction
to directly infer a neuron's tuning function from a small set of
stimulus-response pairs using a Neural Process. This required us to develop a
Factorized Neural Process, which embeds the observed set into a latent space
partitioned into the receptive field location and the tuning function
properties. We show on simulated responses that the predictions and
reconstructed receptive fields from the Factorized Neural Process approach
ground truth with increasing number of trials. Critically, the latent
representation that summarizes the tuning function of a neuron is inferred in a
quick, single forward pass through the network. Finally, we validate this
approach on real neural data from visual cortex and find that the predictive
accuracy is comparable to -- and for small $K$ even greater than --
optimization based approaches, while being substantially faster. We believe
this novel deep learning systems identification framework will facilitate
better real-time integration of artificial neural network modeling into
neuroscience experiments.
| [
{
"created": "Thu, 22 Oct 2020 15:43:59 GMT",
"version": "v1"
}
] | 2020-10-24 | [
[
"Cotton",
"R. James",
""
],
[
"Sinz",
"Fabian H.",
""
],
[
"Tolias",
"Andreas S.",
""
]
] | In recent years, artificial neural networks have achieved state-of-the-art performance for predicting the responses of neurons in the visual cortex to natural stimuli. However, they require a time-consuming parameter optimization process for accurately modeling the tuning function of newly observed neurons, which prohibits many applications including real-time, closed-loop experiments. We overcome this limitation by formulating the problem as $K$-shot prediction to directly infer a neuron's tuning function from a small set of stimulus-response pairs using a Neural Process. This required us to develop a Factorized Neural Process, which embeds the observed set into a latent space partitioned into the receptive field location and the tuning function properties. We show on simulated responses that the predictions and reconstructed receptive fields from the Factorized Neural Process approach ground truth with increasing number of trials. Critically, the latent representation that summarizes the tuning function of a neuron is inferred in a quick, single forward pass through the network. Finally, we validate this approach on real neural data from visual cortex and find that the predictive accuracy is comparable to -- and for small $K$ even greater than -- optimization based approaches, while being substantially faster. We believe this novel deep learning systems identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments.
1901.02702 | Fakhteh Ghanbarnejad | Fatemeh Zarei, Saman Moghimi-Araghi, Fakhteh Ghanbarnejad | Exact solution of generalized cooperative SIR dynamics | null | Phys. Rev. E 100, 012307 (2019) | 10.1103/PhysRevE.100.012307 | null | q-bio.PE cond-mat.stat-mech physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a general framework for co-infection as
cooperative SIR dynamics. We first solve analytically the CGCG model [1] and
then the generalized model in symmetric scenarios. We calculate the transition
points and the order parameter, i.e., the total number of infected hosts. We
also show analytically that there is a saddle-node bifurcation for two
cooperative SIR dynamics and that the transition is hybrid. Moreover, we
investigate where the symmetric solution is stable against initial
fluctuations. We then study asymmetric parameter cases. Greater asymmetry
between the primary and secondary infection rates of one pathogen relative to
the other can lead to fewer infected hosts, a higher epidemic threshold, and
continuous transitions. Our model and results for co-infection, in combination
with super-infection [2], can open a road to modeling disease ecology.
| [
{
"created": "Wed, 9 Jan 2019 12:52:18 GMT",
"version": "v1"
}
] | 2019-07-24 | [
[
"Zarei",
"Fatemeh",
""
],
[
"Moghimi-Araghi",
"Saman",
""
],
[
"Ghanbarnejad",
"Fakhteh",
""
]
] | In this paper, we introduce a general framework for co-infection as cooperative SIR dynamics. We first solve analytically CGCG model [1] and then the generalized model in symmetric scenarios. We calculate transition points, order parameter, i.e. total number of infected hosts. Also we show analytically there is a saddle-node bifurcation for two cooperative SIR dynamics and the transition is hybrid. Moreover, we investigate where symmetric solution is stable for initial fluctuations. Then we study asymmetric cases of parameters. The more asymmetry, for the primary and secondary infection rates of one pathogen in comparison to the other pathogen, can lead to the less infected hosts, the higher epidemic threshold and continuous transitions. Our model and results for co-infection in combination with super-infection [2] can open a road to model disease ecology. |
1907.01380 | Sergei Maslov | Veronika Dubinkina, Yulia Fridman, Parth Pratim Pandey, and Sergei
Maslov | Multistability and regime shifts in microbial communities explained by
competition for essential nutrients | 15 pages plus SI, 4 figures and 5 supplementary figures. arXiv admin
note: substantial text overlap with arXiv:1810.04726 | null | null | null | q-bio.PE cond-mat.stat-mech cs.GT math.DS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microbial communities routinely have several possible species compositions or
community states observed for the same environmental parameters. Changes in
these parameters can trigger abrupt and persistent transitions (regime shifts)
between such community states. Yet little is known about the main determinants
and mechanisms of multistability in microbial communities. Here we introduce
and study a resource-explicit model in which microbes compete for two types of
essential nutrients. We adapt game-theoretical methods of the stable matching
problem to identify all possible species compositions of a microbial community.
We then classify them by their resilience against three types of perturbations:
fluctuations in nutrient supply, invasions by new species, and small changes of
abundances of existing ones. We observe multistability and explore an intricate
network of regime shifts between stable states in our model. Our results
suggest that multistability requires microbial species to have different
stoichiometries of essential nutrients. We also find that balanced nutrient
supply promotes multistability and species diversity yet makes individual
community states less stable.
| [
{
"created": "Sat, 29 Jun 2019 14:39:32 GMT",
"version": "v1"
}
] | 2019-07-03 | [
[
"Dubinkina",
"Veronika",
""
],
[
"Fridman",
"Yulia",
""
],
[
"Pandey",
"Parth Pratim",
""
],
[
"Maslov",
"Sergei",
""
]
] | Microbial communities routinely have several possible species compositions or community states observed for the same environmental parameters. Changes in these parameters can trigger abrupt and persistent transitions (regime shifts) between such community states. Yet little is known about the main determinants and mechanisms of multistability in microbial communities. Here we introduce and study a resource-explicit model in which microbes compete for two types of essential nutrients. We adapt game-theoretical methods of the stable matching problem to identify all possible species compositions of a microbial community. We then classify them by their resilience against three types of perturbations: fluctuations in nutrient supply, invasions by new species, and small changes of abundances of existing ones. We observe multistability and explore an intricate network of regime shifts between stable states in our model. Our results suggest that multistability requires microbial species to have different stoichiometries of essential nutrients. We also find that balanced nutrient supply promotes multistability and species diversity yet makes individual community states less stable.
2004.02278 | George Yuan | George Xianzhi Yuan, Lan Di, Yudi Gu, Guoqi Qian, and Xiaosong Qian | The Framework for the Prediction of the Critical Turning Period for
Outbreak of COVID-19 Spread in China based on the iSEIR Model | 24 pages, 9 figures, 10 tables | Journal of Systems Science and Information, Vol.10, No.4: 309 -
337, (2022) | 10.21078/JSSI-2022-309-29 | null | q-bio.PE physics.soc-ph stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this study is to establish a general framework for predicting the
so-called critical Turning Period in an infectious disease epidemic such as the
COVID-19 outbreak in China early this year. This framework enabled a timely
prediction of the turning period when applied to the Wuhan COVID-19 epidemic and
informed the relevant authority for taking appropriate and timely actions to
control the epidemic. It is expected to provide insightful information on
turning period for the world's current battle against the COVID-19 pandemic.
The underlying mathematical model in our framework is the individual
Susceptible-Exposed-Infective-Removed (iSEIR) model, which is a set of
differential equations extending the classic SEIR model. We used the observed
daily cases of COVID-19 in Wuhan from February 6 to 10, 2020 as the input to
the iSEIR model and were able to generate the trajectory of COVID-19 cases
dynamics for the following days at midnight of February 10 based on the updated
model, from which we predicted that the turning period of the COVID-19 outbreak
in Wuhan would arrive within one week after February 14. This prediction turned
out to be timely and accurate, providing adequate time for the government,
hospitals, essential industry sectors and services to meet peak demands and to
prepare for the aftermath. Our study also supports the observed effectiveness of
flattening the epidemic curve by decisively imposing the Lockdown and Isolation
Control Program in Wuhan since January 23, 2020. The Wuhan experience provides
an exemplary lesson for the whole world to learn in combating COVID-19.
| [
{
"created": "Sun, 5 Apr 2020 18:43:48 GMT",
"version": "v1"
}
] | 2022-10-19 | [
[
"Yuan",
"George Xianzhi",
""
],
[
"Di",
"Lan",
""
],
[
"Gu",
"Yudi",
""
],
[
"Qian",
"Guoqi",
""
],
[
"Qian",
"Xiaosong",
""
]
] | The goal of this study is to establish a general framework for predicting the so-called critical Turning Period in an infectious disease epidemic such as the COVID-19 outbreak in China early this year. This framework enabled a timely prediction of the turning period when applied to the Wuhan COVID-19 epidemic and informed the relevant authority for taking appropriate and timely actions to control the epidemic. It is expected to provide insightful information on turning period for the world's current battle against the COVID-19 pandemic. The underlying mathematical model in our framework is the individual Susceptible-Exposed-Infective-Removed (iSEIR) model, which is a set of differential equations extending the classic SEIR model. We used the observed daily cases of COVID-19 in Wuhan from February 6 to 10, 2020 as the input to the iSEIR model and were able to generate the trajectory of COVID-19 cases dynamics for the following days at midnight of February 10 based on the updated model, from which we predicted that the turning period of the COVID-19 outbreak in Wuhan would arrive within one week after February 14. This prediction turned out to be timely and accurate, providing adequate time for the government, hospitals, essential industry sectors and services to meet peak demands and to prepare for the aftermath. Our study also supports the observed effectiveness of flattening the epidemic curve by decisively imposing the Lockdown and Isolation Control Program in Wuhan since January 23, 2020. The Wuhan experience provides an exemplary lesson for the whole world to learn in combating COVID-19.
1612.08116 | Anna Seigal | Anna Seigal, Mariano Beguerisse-D\'iaz, Birgit Schoeberl, Mario
Niepel, Heather A. Harrington | Tensor clustering with algebraic constraints gives interpretable groups
of crosstalk mechanisms in breast cancer | 22 pages, 12 figures, 4 tables | Journal of The Royal Society Interface, volume 16 (2019) issue
151, 20180661 | 10.1098/rsif.2018.0661 | null | q-bio.QM math.OC physics.soc-ph q-bio.MN stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a tensor-based clustering method to extract sparse,
low-dimensional structure from high-dimensional, multi-indexed datasets. This
framework is designed to enable detection of clusters of data in the presence
of structural requirements which we encode as algebraic constraints in a linear
program. Our clustering method is general and can be tailored to a variety of
applications in science and industry. We illustrate our method on a collection
of experiments measuring the response of genetically diverse breast cancer cell
lines to an array of ligands. Each experiment consists of a cell line-ligand
combination, and contains time-course measurements of the early-signalling
kinases MAPK and AKT at two different ligand dose levels. By imposing
appropriate structural constraints and respecting the multi-indexed structure
of the data, the analysis of clusters can be optimized for biological
interpretation and therapeutic understanding. We then perform a systematic,
large-scale exploration of mechanistic models of MAPK-AKT crosstalk for each
cluster. This analysis allows us to quantify the heterogeneity of breast cancer
cell subtypes, and leads to hypotheses about the signalling mechanisms that
mediate the response of the cell lines to ligands.
| [
{
"created": "Sat, 24 Dec 2016 00:00:43 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Apr 2017 14:49:37 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Feb 2019 18:16:47 GMT",
"version": "v3"
}
] | 2019-02-11 | [
[
"Seigal",
"Anna",
""
],
[
"Beguerisse-Díaz",
"Mariano",
""
],
[
"Schoeberl",
"Birgit",
""
],
[
"Niepel",
"Mario",
""
],
[
"Harrington",
"Heather A.",
""
]
] | We introduce a tensor-based clustering method to extract sparse, low-dimensional structure from high-dimensional, multi-indexed datasets. This framework is designed to enable detection of clusters of data in the presence of structural requirements which we encode as algebraic constraints in a linear program. Our clustering method is general and can be tailored to a variety of applications in science and industry. We illustrate our method on a collection of experiments measuring the response of genetically diverse breast cancer cell lines to an array of ligands. Each experiment consists of a cell line-ligand combination, and contains time-course measurements of the early-signalling kinases MAPK and AKT at two different ligand dose levels. By imposing appropriate structural constraints and respecting the multi-indexed structure of the data, the analysis of clusters can be optimized for biological interpretation and therapeutic understanding. We then perform a systematic, large-scale exploration of mechanistic models of MAPK-AKT crosstalk for each cluster. This analysis allows us to quantify the heterogeneity of breast cancer cell subtypes, and leads to hypotheses about the signalling mechanisms that mediate the response of the cell lines to ligands. |
1106.0863 | Andrea Barreiro | Andrea K. Barreiro, Evan L. Thilo, Eric Shea-Brown | The A-current and Type I / Type II transition determine collective
spiking from common input | 42 pages, 10 figures v1: Submitted June 4, 2011 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mechanisms and impact of correlated, or synchronous, firing among pairs
and groups of neurons are under intense investigation throughout the nervous
system. A ubiquitous circuit feature that can give rise to such correlations
consists of overlapping, or common, inputs to pairs and populations of cells,
leading to common spike train responses. Here, we use computational tools to
study how the transfer of common input currents into common spike outputs is
modulated by the physiology of the recipient cells. We focus on a key
conductance - gA, for the A-type potassium current - which drives neurons
between "Type II" excitability (low gA), and "Type I" excitability (high gA).
Regardless of gA, cells transform common input fluctuations into a tendency
to spike nearly simultaneously. However, this process is more pronounced at low
gA values, as previously predicted by reduced "phase" models. Thus, for a given
level of common input, Type II neurons produce spikes that are relatively more
correlated over short time scales. Over long time scales, the trend reverses,
with Type II neurons producing relatively less correlated spike trains. This is
because these cells' increased tendency for simultaneous spiking is balanced by
opposing tendencies at larger time lags. We demonstrate a novel implication for
neural signal processing: downstream cells with long time constants are
selectively driven by Type I cell populations upstream, and those with short
time constants by Type II cell populations. Our results are established via
high-throughput numerical simulations, and explained via the cells' filtering
properties and nonlinear dynamics.
| [
{
"created": "Sat, 4 Jun 2011 22:29:34 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Barreiro",
"Andrea K.",
""
],
[
"Thilo",
"Evan L.",
""
],
[
"Shea-Brown",
"Eric",
""
]
] | The mechanisms and impact of correlated, or synchronous, firing among pairs and groups of neurons are under intense investigation throughout the nervous system. A ubiquitous circuit feature that can give rise to such correlations consists of overlapping, or common, inputs to pairs and populations of cells, leading to common spike train responses. Here, we use computational tools to study how the transfer of common input currents into common spike outputs is modulated by the physiology of the recipient cells. We focus on a key conductance - gA, for the A-type potassium current - which drives neurons between "Type II" excitability (low gA), and "Type I" excitability (high gA). Regardless of gA, cells transform common input fluctuations into a tendency to spike nearly simultaneously. However, this process is more pronounced at low gA values, as previously predicted by reduced "phase" models. Thus, for a given level of common input, Type II neurons produce spikes that are relatively more correlated over short time scales. Over long time scales, the trend reverses, with Type II neurons producing relatively less correlated spike trains. This is because these cells' increased tendency for simultaneous spiking is balanced by opposing tendencies at larger time lags. We demonstrate a novel implication for neural signal processing: downstream cells with long time constants are selectively driven by Type I cell populations upstream, and those with short time constants by Type II cell populations. Our results are established via high-throughput numerical simulations, and explained via the cells' filtering properties and nonlinear dynamics.
q-bio/0412012 | Jon McAuliffe | Jon D. McAuliffe, Michael I. Jordan and Lior Pachter | Subtree power analysis finds optimal species for comparative genomics | 16 pages, 3 figures, 3 tables | null | null | UCB-Stat-TR-677 | q-bio.GN q-bio.QM | null | Sequence comparison across multiple organisms aids in the detection of
regions under selection. However, resource limitations require a prioritization
of genomes to be sequenced. This prioritization should be grounded in two
considerations: the lineal scope encompassing the biological phenomena of
interest, and the optimal species within that scope for detecting functional
elements. We introduce a statistical framework for optimal species subset
selection, based on maximizing power to detect conserved sites. In a study of
vertebrate species, we show that the optimal species subset is not in general
the most evolutionarily diverged subset. Our results suggest that marsupials
are prime sequencing candidates.
| [
{
"created": "Mon, 6 Dec 2004 19:06:11 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"McAuliffe",
"Jon D.",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Pachter",
"Lior",
""
]
] | Sequence comparison across multiple organisms aids in the detection of regions under selection. However, resource limitations require a prioritization of genomes to be sequenced. This prioritization should be grounded in two considerations: the lineal scope encompassing the biological phenomena of interest, and the optimal species within that scope for detecting functional elements. We introduce a statistical framework for optimal species subset selection, based on maximizing power to detect conserved sites. In a study of vertebrate species, we show that the optimal species subset is not in general the most evolutionarily diverged subset. Our results suggest that marsupials are prime sequencing candidates. |
2007.00865 | Zhenlin Wang | Zhenlin Wang, Xiaoxuan Zhang, Gregory Teichert, Mariana Carrasco-Teja
and Krishna Garikipati | System inference for the spatio-temporal evolution of infectious
diseases: Michigan in the time of COVID-19 | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend the classical SIR model of infectious disease spread to account for
time dependence in the parameters, which also include diffusivities. The
temporal dependence accounts for the changing characteristics of testing,
quarantine and treatment protocols, while diffusivity incorporates a mobile
population. This model has been applied to data on the evolution of the
COVID-19 pandemic in the US state of Michigan. For system inference, we use
recent advances; specifically our framework for Variational System
Identification (Wang et al., Comp. Meth. App. Mech. Eng., 356, 44-74, 2019;
arXiv:2001.04816 [cs.CE]) as well as Bayesian machine learning methods.
| [
{
"created": "Thu, 2 Jul 2020 04:17:30 GMT",
"version": "v1"
}
] | 2020-07-03 | [
[
"Wang",
"Zhenlin",
""
],
[
"Zhang",
"Xiaoxuan",
""
],
[
"Teichert",
"Gregory",
""
],
[
"Carrasco-Teja",
"Mariana",
""
],
[
"Garikipati",
"Krishna",
""
]
] | We extend the classical SIR model of infectious disease spread to account for time dependence in the parameters, which also include diffusivities. The temporal dependence accounts for the changing characteristics of testing, quarantine and treatment protocols, while diffusivity incorporates a mobile population. This model has been applied to data on the evolution of the COVID-19 pandemic in the US state of Michigan. For system inference, we use recent advances; specifically our framework for Variational System Identification (Wang et al., Comp. Meth. App. Mech. Eng., 356, 44-74, 2019; arXiv:2001.04816 [cs.CE]) as well as Bayesian machine learning methods. |
2006.08115 | Qianyi Li | Qianyi Li, Cengiz Pehlevan | Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory
and Inhibitory Neurons | There was a typo in Eq. 3 for the definition of firing rates, where
we had e^{-(t-t')/\tau_E} in the integrand, which should be e^{-t'/\tau_E},
it is corrected in this version | NeurIPS 2020 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Excitation-inhibition (E-I) balance is ubiquitously observed in the cortex.
Recent studies suggest an intriguing link between balance on fast timescales,
tight balance, and efficient information coding with spikes. We further this
connection by taking a principled approach to optimal balanced networks of
excitatory (E) and inhibitory (I) neurons. By deriving E-I spiking neural
networks from greedy spike-based optimizations of constrained minimax
objectives, we show that tight balance arises from correcting for deviations
from the minimax optima. We predict specific neuron firing rates in the network
by solving the minimax problem, going beyond statistical theories of balanced
networks. Finally, we design minimax objectives for reconstruction of an input
signal, associative memory, and storage of manifold attractors, and derive from
them E-I networks that perform the computation. Overall, we present a novel
normative modeling approach for spiking E-I networks, going beyond the
widely-used energy minimizing networks that violate Dale's law. Our networks
can be used to model cortical circuits and computations.
| [
{
"created": "Mon, 15 Jun 2020 03:54:32 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2020 22:17:48 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Jan 2021 19:43:48 GMT",
"version": "v3"
},
{
"created": "Fri, 30 Apr 2021 20:04:34 GMT",
"version": "v4"
}
] | 2021-05-04 | [
[
"Li",
"Qianyi",
""
],
[
"Pehlevan",
"Cengiz",
""
]
] | Excitation-inhibition (E-I) balance is ubiquitously observed in the cortex. Recent studies suggest an intriguing link between balance on fast timescales, tight balance, and efficient information coding with spikes. We further this connection by taking a principled approach to optimal balanced networks of excitatory (E) and inhibitory (I) neurons. By deriving E-I spiking neural networks from greedy spike-based optimizations of constrained minimax objectives, we show that tight balance arises from correcting for deviations from the minimax optima. We predict specific neuron firing rates in the network by solving the minimax problem, going beyond statistical theories of balanced networks. Finally, we design minimax objectives for reconstruction of an input signal, associative memory, and storage of manifold attractors, and derive from them E-I networks that perform the computation. Overall, we present a novel normative modeling approach for spiking E-I networks, going beyond the widely-used energy minimizing networks that violate Dale's law. Our networks can be used to model cortical circuits and computations. |
0704.2547 | Remi Monasson | Valentina Baldazzi (LPS), Serena Bradde (LPS), Simona Cocco (LPS),
Enzo Marinari, Remi Monasson (LPTENS) | Inferring DNA sequences from mechanical unzipping data: the
large-bandwidth case | null | Phys. Rev. E 75 (2007) 011904 | 10.1103/PhysRevE.75.011904 | null | q-bio.BM cond-mat.stat-mech | null | The complementary strands of DNA molecules can be separated when stretched
apart by a force; the unzipping signal is correlated to the base content of the
sequence but is affected by thermal and instrumental noise. We consider here
the ideal case where opening events are known to a very good time resolution
(very large bandwidth), and study how the sequence can be reconstructed from
the unzipping data. Our approach relies on the use of statistical Bayesian
inference and of the Viterbi decoding algorithm. Performances are studied
numerically on Monte Carlo generated data, and analytically. We show how
multiple unzippings of the same molecule may be exploited to improve the
quality of the prediction, and calculate analytically the number of required
unzippings as a function of the bandwidth, the sequence content, and the
elasticity parameters of the unzipped strands.
| [
{
"created": "Thu, 19 Apr 2007 14:45:29 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Baldazzi",
"Valentina",
"",
"LPS"
],
[
"Bradde",
"Serena",
"",
"LPS"
],
[
"Cocco",
"Simona",
"",
"LPS"
],
[
"Marinari",
"Enzo",
"",
"LPTENS"
],
[
"Monasson",
"Remi",
"",
"LPTENS"
]
] | The complementary strands of DNA molecules can be separated when stretched apart by a force; the unzipping signal is correlated to the base content of the sequence but is affected by thermal and instrumental noise. We consider here the ideal case where opening events are known to a very good time resolution (very large bandwidth), and study how the sequence can be reconstructed from the unzipping data. Our approach relies on the use of statistical Bayesian inference and of the Viterbi decoding algorithm. Performances are studied numerically on Monte Carlo generated data, and analytically. We show how multiple unzippings of the same molecule may be exploited to improve the quality of the prediction, and calculate analytically the number of required unzippings as a function of the bandwidth, the sequence content, and the elasticity parameters of the unzipped strands.
1902.07107 | Francesc Rossell\'o | Ricardo Alberich, Adri\`a Alcala, Merc\`e Llabr\'es, Francesc
Rossell\'o and Gabriel Valiente | AligNet: Alignment of Protein-Protein Interaction Networks | 30 pages, 11 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most difficult problems in systems biology is to
discover protein-protein interactions as well as their associated functions.
The analysis and alignment of protein-protein interaction networks (PPIN),
which are the standard model to describe protein-protein interactions, has
become a key ingredient to obtain functional orthologs as well as evolutionary
conserved pathways and protein complexes. Several methods have been proposed to
solve the PPIN alignment problem, aimed to match conserved subnetworks or
functionally related proteins. However, the right balance between considering
network topology and biological information is one of the most difficult and
key points in any PPIN alignment algorithm which, unfortunately, remains
unsolved. Therefore, in this work, we propose AligNet, a new method and
software tool for the pairwise global alignment of PPIN that produces
biologically meaningful alignments and more efficient computations than
state-of-the-art methods and tools, by achieving a good balance between
structural matching and protein function conservation as well as reasonable
running times.
| [
{
"created": "Tue, 19 Feb 2019 15:54:24 GMT",
"version": "v1"
}
] | 2019-02-20 | [
[
"Alberich",
"Ricardo",
""
],
[
"Alcala",
"Adrià",
""
],
[
"Llabrés",
"Mercè",
""
],
[
"Rosselló",
"Francesc",
""
],
[
"Valiente",
"Gabriel",
""
]
] | One of the most difficult problems in systems biology is to discover protein-protein interactions as well as their associated functions. The analysis and alignment of protein-protein interaction networks (PPIN), which are the standard model to describe protein-protein interactions, has become a key ingredient to obtain functional orthologs as well as evolutionary conserved pathways and protein complexes. Several methods have been proposed to solve the PPIN alignment problem, aimed to match conserved subnetworks or functionally related proteins. However, the right balance between considering network topology and biological information is one of the most difficult and key points in any PPIN alignment algorithm which, unfortunately, remains unsolved. Therefore, in this work, we propose AligNet, a new method and software tool for the pairwise global alignment of PPIN that produces biologically meaningful alignments and more efficient computations than state-of-the-art methods and tools, by achieving a good balance between structural matching and protein function conservation as well as reasonable running times.
2305.14609 | Yi-Ming Chen | Yiming Chen, and Sihui Wang, and Dong Xu | Association of stroke lesion distributions with atrial fibrillation
detected after stroke | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Background Atrial fibrillation is often missed by traditional intermittent
electrocardiogram monitoring after ischemic stroke due to its paroxysmal and
asymptomatic nature. The knowledge of the unique characteristics of the
population with atrial fibrillation detected after stroke (AFDAS) enables more
ischemic stroke patients to benefit from more aggressive anticoagulation
therapy and AF management. Method This is an observational, retrospective, MRI
imaging-based single-center study. Patients with AFDAS were matched in a 1:3
ratio with patients without AF (NoAF) and patients with known AF before
stroke (KAF) in a PSM model based on age, gender, and time from stroke onset to
admission. Multivariate logistic models were used to test the association of
MRI-based stroke lesion distribution, other clinical parameters and AF. A
backward stepwise elimination regression was conducted to identify the most
important variables. Results Compared to the NoAF group(n=103), the patients
with AFDAS (n=42) had more cortical involvement(p=0.016), as well as
temporal (p<0.001) and insular lobe (p=0.018) infarction. After performing a
backward stepwise elimination model in regression analysis, the temporal lobe
infarction (OR 0.274, 95%CI 0.090-0.838, p=0.023) remained independently
associated with the detection of AF. Compared to the KAF group(n=89), LAD(OR
1.113, 95%CI 1.022-1.211, p=0.014), Number of lobes infarction(p=0.012),
3-lobes involvement(OR 0.177, 95%CI 0.056-0.559, p=0.003), and left hemisphere
lobe involvement(OR 5.966, 95%CI 2.273-15.817, p<0.001) were independently
associated with AFDAS and KAF. Conclusions Ischemic stroke patients with AF
detected after stroke present more temporal lobe infarction and cortical
involvement. These lesion distribution characteristics, together with clinical
characteristics, may help in stratifying patients for long-term
cardiac monitoring after stroke.
| [
{
"created": "Wed, 24 May 2023 01:14:08 GMT",
"version": "v1"
}
] | 2023-05-25 | [
[
"Chen",
"Yiming",
""
],
[
"Wang",
"Sihui",
""
],
[
"Xu",
"Dong",
""
]
] | Background Atrial fibrillation is often missed by traditional intermittent electrocardiogram monitoring after ischemic stroke due to its paroxysmal and asymptomatic nature. The knowledge of the unique characteristics of the population with atrial fibrillation detected after stroke (AFDAS) enables more ischemic stroke patients to benefit from more aggressive anticoagulation therapy and AF management. Method This is an observational, retrospective, MRI imaging-based single-center study. Patients with AFDAS were matched in a 1:3 ratio with patients without AF (NoAF) and patients with known AF before stroke (KAF) in a PSM model based on age, gender, and time from stroke onset to admission. Multivariate logistic models were used to test the association of MRI-based stroke lesion distribution, other clinical parameters and AF. A backward stepwise elimination regression was conducted to identify the most important variables. Results Compared to the NoAF group(n=103), the patients with AFDAS (n=42) had more cortical involvement(p=0.016), as well as temporal (p<0.001) and insular lobe (p=0.018) infarction. After performing a backward stepwise elimination model in regression analysis, the temporal lobe infarction (OR 0.274, 95%CI 0.090-0.838, p=0.023) remained independently associated with the detection of AF. Compared to the KAF group(n=89), LAD(OR 1.113, 95%CI 1.022-1.211, p=0.014), Number of lobes infarction(p=0.012), 3-lobes involvement(OR 0.177, 95%CI 0.056-0.559, p=0.003), and left hemisphere lobe involvement(OR 5.966, 95%CI 2.273-15.817, p<0.001) were independently associated with AFDAS and KAF. Conclusions Ischemic stroke patients with AF detected after stroke present more temporal lobe infarction and cortical involvement. These lesion distribution characteristics, together with clinical characteristics, may help in stratifying patients for long-term cardiac monitoring after stroke.
2208.12876 | Yuri A. Dabaghian | Yuri Dabaghian | Grid Cell Percolation | 15 pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grid cells play a principal role in enabling mammalian cognitive
representations of ambient environments. The key property of these cells -- the
regular arrangement of their firing fields -- is commonly viewed as means for
establishing spatial scales or encoding specific locations. However, using grid
cells' spiking outputs for deducing spatial orderliness proves to be a
strenuous task, due to fairly irregular activation patterns triggered by the
animal's sporadic visits to the grid fields. The following discussion addresses
statistical mechanisms enabling emergent regularity of grid cell firing
activity, from the perspective of percolation theory. In particular, it is
shown that the range of neurophysiological parameters required for spiking
percolation phenomena matches experimental data, which points at biological
viability of the percolation approach and casts a new light on the role of grid
cells in organizing the hippocampal map.
| [
{
"created": "Fri, 26 Aug 2022 22:06:54 GMT",
"version": "v1"
}
] | 2022-08-30 | [
[
"Dabaghian",
"Yuri",
""
]
] | Grid cells play a principal role in enabling mammalian cognitive representations of ambient environments. The key property of these cells -- the regular arrangement of their firing fields -- is commonly viewed as means for establishing spatial scales or encoding specific locations. However, using grid cells' spiking outputs for deducing spatial orderliness proves to be a strenuous task, due to fairly irregular activation patterns triggered by the animal's sporadic visits to the grid fields. The following discussion addresses statistical mechanisms enabling emergent regularity of grid cell firing activity, from the perspective of percolation theory. In particular, it is shown that the range of neurophysiological parameters required for spiking percolation phenomena matches experimental data, which points at biological viability of the percolation approach and casts a new light on the role of grid cells in organizing the hippocampal map. |
2306.13699 | Cong Shen | Cong Shen, Pingjian Ding, Junjie Wee, Jialin Bi, Jiawei Luo and Kelin
Xia | Curvature-enhanced Graph Convolutional Network for Biomolecular
Interaction Prediction | null | null | null | null | q-bio.QM cs.AI cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geometric deep learning has demonstrated a great potential in non-Euclidean
data analysis. The incorporation of geometric insights into learning
architecture is vital to its success. Here we propose a curvature-enhanced
graph convolutional network (CGCN) for biomolecular interaction prediction, for
the first time. Our CGCN employs Ollivier-Ricci curvature (ORC) to characterize
network local structures and to enhance the learning capability of GCNs. More
specifically, ORCs are evaluated based on the local topology from node
neighborhoods, and further used as weights for the feature aggregation in
message-passing procedure. Our CGCN model is extensively validated on fourteen
real-world biomolecular interaction networks and a series of simulated data. It
has been found that our CGCN can achieve state-of-the-art results. It
outperforms all existing models, as far as we know, in thirteen out of the
fourteen real-world datasets and ranks second in the remaining one. The
results from the simulated data show that our CGCN model is superior to the
traditional GCN models regardless of the positive-to-negative curvature ratios,
network densities, and network sizes (when larger than 500).
| [
{
"created": "Fri, 23 Jun 2023 14:45:34 GMT",
"version": "v1"
}
] | 2023-06-27 | [
[
"Shen",
"Cong",
""
],
[
"Ding",
"Pingjian",
""
],
[
"Wee",
"Junjie",
""
],
[
"Bi",
"Jialin",
""
],
[
"Luo",
"Jiawei",
""
],
[
"Xia",
"Kelin",
""
]
] | Geometric deep learning has demonstrated a great potential in non-Euclidean data analysis. The incorporation of geometric insights into learning architecture is vital to its success. Here we propose a curvature-enhanced graph convolutional network (CGCN) for biomolecular interaction prediction, for the first time. Our CGCN employs Ollivier-Ricci curvature (ORC) to characterize network local structures and to enhance the learning capability of GCNs. More specifically, ORCs are evaluated based on the local topology from node neighborhoods, and further used as weights for the feature aggregation in message-passing procedure. Our CGCN model is extensively validated on fourteen real-world biomolecular interaction networks and a series of simulated data. It has been found that our CGCN can achieve state-of-the-art results. It outperforms all existing models, as far as we know, in thirteen out of the fourteen real-world datasets and ranks second in the remaining one. The results from the simulated data show that our CGCN model is superior to the traditional GCN models regardless of the positive-to-negative curvature ratios, network densities, and network sizes (when larger than 500). |
1208.6467 | Laurent Perrinet | Paula Sanz Leon (INT), Ivo Vanzetta (INCM), Guillaume S Masson (INT),
Laurent U Perrinet (INT) | Motion clouds: model-based stimulus synthesis of natural-like random
textures for the study of motion perception | null | Journal of Neurophysiology 107, 11 (2012) 3217-26 | 10.1152/jn.00737.2011 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Choosing an appropriate set of stimuli is essential to characterize the
response of a sensory system to a particular functional dimension, such as the
eye movement following the motion of a visual scene. Here, we describe a
framework to generate random texture movies with controlled information
content, i.e., Motion Clouds. These stimuli are defined using a generative
model that is based on controlled experimental parametrization. We show that
Motion Clouds correspond to dense mixing of localized moving gratings with
random positions. Their global envelope is similar to natural-like stimulation
with an approximate full-field translation corresponding to a retinal slip. We
describe the construction of these stimuli mathematically and propose an
open-source Python-based implementation. Examples of the use of this framework
are shown. We also propose extensions to other modalities such as color vision,
touch, and audition.
| [
{
"created": "Fri, 31 Aug 2012 11:47:23 GMT",
"version": "v1"
}
] | 2012-09-03 | [
[
"Leon",
"Paula Sanz",
"",
"INT"
],
[
"Vanzetta",
"Ivo",
"",
"INCM"
],
[
"Masson",
"Guillaume S",
"",
"INT"
],
[
"Perrinet",
"Laurent U",
"",
"INT"
]
] | Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movement following the motion of a visual scene. Here, we describe a framework to generate random texture movies with controlled information content, i.e., Motion Clouds. These stimuli are defined using a generative model that is based on controlled experimental parametrization. We show that Motion Clouds correspond to dense mixing of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically and propose an open-source Python-based implementation. Examples of the use of this framework are shown. We also propose extensions to other modalities such as color vision, touch, and audition. |
1509.01677 | Yuri A. Dabaghian | A. Babichev, D. Ji, F. Memoli and Y. Dabaghian | Combinatorics of Place Cell Coactivity and Hippocampal Maps | 22 pages, 9 Figures, 6 Supplementary Figures, 4 Supplementary Movies
available upon request | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is widely accepted that the hippocampal place cells' spiking activity
produces a cognitive map of space. However, many details of this
representation's physiological mechanism remain unknown. For example, it is
believed that the place cells exhibiting frequent coactivity form functionally
interconnected groups---place cell assemblies---that drive readout neurons in
the downstream networks. However, the sheer number of coactive combinations is
extremely large, which implies that only a small fraction of them actually
gives rise to cell assemblies. The physiological processes responsible for
selecting the winning combinations are highly complex and are usually modeled
via detailed synaptic and structural plasticity mechanisms. Here we propose an
alternative approach that allows modeling the cell assembly network directly,
based on a small number of phenomenological selection rules. We then
demonstrate that the selected population of place cell assemblies correctly
encodes the topology of the environment in biologically plausible time, and may
serve as a schematic model of the hippocampal network.
| [
{
"created": "Sat, 5 Sep 2015 08:31:44 GMT",
"version": "v1"
}
] | 2015-09-08 | [
[
"Babichev",
"A.",
""
],
[
"Ji",
"D.",
""
],
[
"Memoli",
"F.",
""
],
[
"Dabaghian",
"Y.",
""
]
] | It is widely accepted that the hippocampal place cells' spiking activity produces a cognitive map of space. However, many details of this representation's physiological mechanism remain unknown. For example, it is believed that the place cells exhibiting frequent coactivity form functionally interconnected groups---place cell assemblies---that drive readout neurons in the downstream networks. However, the sheer number of coactive combinations is extremely large, which implies that only a small fraction of them actually gives rise to cell assemblies. The physiological processes responsible for selecting the winning combinations are highly complex and are usually modeled via detailed synaptic and structural plasticity mechanisms. Here we propose an alternative approach that allows modeling the cell assembly network directly, based on a small number of phenomenological selection rules. We then demonstrate that the selected population of place cell assemblies correctly encodes the topology of the environment in biologically plausible time, and may serve as a schematic model of the hippocampal network. |
2210.17257 | Vladimir Chechetkin R. | Vladimir R. Chechetkin and Vasily V. Lobzin | On the number of nucleoproteins in the assembly of coronaviruses:
Consequences for COVID-19 | 6 pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The multifunctional nucleoproteins play an important role in the life cycle of
coronaviruses. The assessment of their quantities is of general interest for
the assembly of virions and medical applications. The proliferating
nucleoproteins induce the related (auto)immune response and via binding to host
RNA affect various regulation mechanisms. In this report we briefly summarize
and comment on the available experimental data on the subject concerned.
| [
{
"created": "Mon, 31 Oct 2022 12:27:58 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Nov 2022 13:48:29 GMT",
"version": "v2"
},
{
"created": "Thu, 17 Nov 2022 14:05:13 GMT",
"version": "v3"
},
{
"created": "Mon, 13 Feb 2023 09:40:56 GMT",
"version": "v4"
},
{
"created": "Fri, 24 Feb 2023 11:49:29 GMT",
"version": "v5"
}
] | 2023-02-27 | [
[
"Chechetkin",
"Vladimir R.",
""
],
[
"Lobzin",
"Vasily V.",
""
]
] | The multifunctional nucleoproteins play an important role in the life cycle of coronaviruses. The assessment of their quantities is of general interest for the assembly of virions and medical applications. The proliferating nucleoproteins induce the related (auto)immune response and via binding to host RNA affect various regulation mechanisms. In this report we briefly summarize and comment on the available experimental data on the subject concerned. |
2004.10117 | Michael Powell | Allison Koenecke, Michael Powell, Ruoxuan Xiong, Zhu Shen, Nicole
Fischer, Sakibul Huq, Adham M. Khalafallah, Marco Trevisan, P\"ar Sparen,
Juan J Carrero, Akihiko Nishimura, Brian Caffo, Elizabeth A. Stuart, Renyuan
Bai, Verena Staedtke, David L. Thomas, Nickolas Papadopoulos, Kenneth W.
Kinzler, Bert Vogelstein, Shibin Zhou, Chetan Bettegowda, Maximilian F.
Konig, Brett Mensh, Joshua T. Vogelstein, Susan Athey | Alpha-1 adrenergic receptor antagonists to prevent hyperinflammation and
death from lower respiratory tract infection | 31 pages, 10 figures | Elife 10 (2021): e61700 | 10.7554/eLife.61700 | null | q-bio.TO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In severe viral pneumonia, including Coronavirus disease 2019 (COVID-19), the
viral replication phase is often followed by hyperinflammation, which can lead
to acute respiratory distress syndrome, multi-organ failure, and death. We
previously demonstrated that alpha-1 adrenergic receptor ($\alpha_1$-AR)
antagonists can prevent hyperinflammation and death in mice. Here, we conducted
retrospective analyses in two cohorts of patients with acute respiratory
distress (ARD, n=18,547) and three cohorts with pneumonia (n=400,907).
Federated across two ARD cohorts, we find that patients exposed to
$\alpha_1$-AR antagonists, as compared to unexposed patients, had a 34%
relative risk reduction for mechanical ventilation and death (OR=0.70,
p=0.021). We replicated these methods on three pneumonia cohorts, all with
similar effects on both outcomes. All results were robust to sensitivity
analyses. These results highlight the urgent need for prospective trials
testing whether prophylactic use of $\alpha_1$-AR antagonists ameliorates lower
respiratory tract infection-associated hyperinflammation and death, as observed
in COVID-19.
| [
{
"created": "Tue, 21 Apr 2020 15:52:25 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Apr 2020 17:56:34 GMT",
"version": "v2"
},
{
"created": "Wed, 6 May 2020 15:25:18 GMT",
"version": "v3"
},
{
"created": "Tue, 12 May 2020 14:39:52 GMT",
"version": "v4"
},
{
"created": "Wed, 13 May 2020 12:27:32 GMT",
"version": "v5"
},
{
"created": "Tue, 4 Aug 2020 16:40:34 GMT",
"version": "v6"
},
{
"created": "Wed, 9 Sep 2020 02:41:16 GMT",
"version": "v7"
},
{
"created": "Mon, 2 Aug 2021 18:22:15 GMT",
"version": "v8"
}
] | 2021-08-04 | [
[
"Koenecke",
"Allison",
""
],
[
"Powell",
"Michael",
""
],
[
"Xiong",
"Ruoxuan",
""
],
[
"Shen",
"Zhu",
""
],
[
"Fischer",
"Nicole",
""
],
[
"Huq",
"Sakibul",
""
],
[
"Khalafallah",
"Adham M.",
""
],
[
"Trevisan",
"Marco",
""
],
[
"Sparen",
"Pär",
""
],
[
"Carrero",
"Juan J",
""
],
[
"Nishimura",
"Akihiko",
""
],
[
"Caffo",
"Brian",
""
],
[
"Stuart",
"Elizabeth A.",
""
],
[
"Bai",
"Renyuan",
""
],
[
"Staedtke",
"Verena",
""
],
[
"Thomas",
"David L.",
""
],
[
"Papadopoulos",
"Nickolas",
""
],
[
"Kinzler",
"Kenneth W.",
""
],
[
"Vogelstein",
"Bert",
""
],
[
"Zhou",
"Shibin",
""
],
[
"Bettegowda",
"Chetan",
""
],
[
"Konig",
"Maximilian F.",
""
],
[
"Mensh",
"Brett",
""
],
[
"Vogelstein",
"Joshua T.",
""
],
[
"Athey",
"Susan",
""
]
] | In severe viral pneumonia, including Coronavirus disease 2019 (COVID-19), the viral replication phase is often followed by hyperinflammation, which can lead to acute respiratory distress syndrome, multi-organ failure, and death. We previously demonstrated that alpha-1 adrenergic receptor ($\alpha_1$-AR) antagonists can prevent hyperinflammation and death in mice. Here, we conducted retrospective analyses in two cohorts of patients with acute respiratory distress (ARD, n=18,547) and three cohorts with pneumonia (n=400,907). Federated across two ARD cohorts, we find that patients exposed to $\alpha_1$-AR antagonists, as compared to unexposed patients, had a 34% relative risk reduction for mechanical ventilation and death (OR=0.70, p=0.021). We replicated these methods on three pneumonia cohorts, all with similar effects on both outcomes. All results were robust to sensitivity analyses. These results highlight the urgent need for prospective trials testing whether prophylactic use of $\alpha_1$-AR antagonists ameliorates lower respiratory tract infection-associated hyperinflammation and death, as observed in COVID-19. |
1512.03342 | Nihar Sheth | Hardik I. Parikh, Vishal N. Koparde, Steven P. Bradley, Gregory A.
Buck and Nihar U. Sheth | MeFiT: Merging and Filtering Tool for Illumina Paired-End Reads for 16S
rRNA Amplicon Sequencing | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in next-generation sequencing have revolutionized genomic
research. 16S rRNA amplicon sequencing using paired-end sequencing on the MiSeq
platform from Illumina, Inc., is being used to characterize the composition and
dynamics of extremely complex/diverse microbial communities. For this analysis
on the Illumina platform, merging and quality filtering of paired-end reads are
essential first steps in data analysis to ensure the accuracy and reliability
of downstream analysis. We have developed the Merging and Filtering Tool
(MeFiT) to combine these pre-processing steps into one simple, intuitive
pipeline. MeFiT provides an open-source solution that permits users to merge
and filter paired-end Illumina reads based on user-selected quality parameters.
The tool has been implemented in Python and the source code is freely available
at https://github.com/nisheth/MeFiT.
| [
{
"created": "Thu, 10 Dec 2015 17:43:48 GMT",
"version": "v1"
}
] | 2015-12-11 | [
[
"Parikh",
"Hardik I.",
""
],
[
"Koparde",
"Vishal N.",
""
],
[
"Bradley",
"Steven P.",
""
],
[
"Buck",
"Gregory A.",
""
],
[
"Sheth",
"Nihar U.",
""
]
] | Recent advances in next-generation sequencing have revolutionized genomic research. 16S rRNA amplicon sequencing using paired-end sequencing on the MiSeq platform from Illumina, Inc., is being used to characterize the composition and dynamics of extremely complex/diverse microbial communities. For this analysis on the Illumina platform, merging and quality filtering of paired-end reads are essential first steps in data analysis to ensure the accuracy and reliability of downstream analysis. We have developed the Merging and Filtering Tool (MeFiT) to combine these pre-processing steps into one simple, intuitive pipeline. MeFiT provides an open-source solution that permits users to merge and filter paired-end Illumina reads based on user-selected quality parameters. The tool has been implemented in Python and the source code is freely available at https://github.com/nisheth/MeFiT. |
2007.09297 | Alvaro Ovalle | Alvaro Ovalle and Simon M. Lucas | Modulation of viability signals for self-regulatory control | Accepted at the International Workshop on Active Inference 2020
(camera-ready version). Extended from 6 to 13 pages to include appendices and
a more comprehensive reference list | null | null | null | q-bio.NC cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | We revisit the role of instrumental value as a driver of adaptive behavior.
In active inference, instrumental or extrinsic value is quantified by the
information-theoretic surprisal of a set of observations measuring the extent
to which those observations conform to prior beliefs or preferences. That is,
an agent is expected to seek the type of evidence that is consistent with its
own model of the world. For reinforcement learning tasks, the distribution of
preferences replaces the notion of reward. We explore a scenario in which the
agent learns this distribution in a self-supervised manner. In particular, we
highlight the distinction between observations induced by the environment and
those pertaining more directly to the continuity of an agent in time. We
evaluate our methodology in a dynamic environment with discrete time and
actions. First with a surprisal minimizing model-free agent (in the RL sense)
and then expanding to the model-based case to minimize the expected free
energy.
| [
{
"created": "Sat, 18 Jul 2020 01:11:51 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Oct 2020 11:57:40 GMT",
"version": "v2"
}
] | 2020-10-14 | [
[
"Ovalle",
"Alvaro",
""
],
[
"Lucas",
"Simon M.",
""
]
] | We revisit the role of instrumental value as a driver of adaptive behavior. In active inference, instrumental or extrinsic value is quantified by the information-theoretic surprisal of a set of observations measuring the extent to which those observations conform to prior beliefs or preferences. That is, an agent is expected to seek the type of evidence that is consistent with its own model of the world. For reinforcement learning tasks, the distribution of preferences replaces the notion of reward. We explore a scenario in which the agent learns this distribution in a self-supervised manner. In particular, we highlight the distinction between observations induced by the environment and those pertaining more directly to the continuity of an agent in time. We evaluate our methodology in a dynamic environment with discrete time and actions. First with a surprisal minimizing model-free agent (in the RL sense) and then expanding to the model-based case to minimize the expected free energy. |
2306.07491 | Stuart Johnston | Stuart T. Johnston and Matthew J. Simpson | Exact sharp-fronted solutions for nonlinear diffusion on evolving
domains | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Models of diffusive processes that occur on evolving domains are frequently
employed to describe biological and physical phenomena, such as diffusion
within expanding tissues or substrates. Previous investigations into these
models either report numerical solutions or require an assumption of linear
diffusion to determine exact solutions. Unfortunately, numerical solutions do
not reveal the relationship between the model parameters and the solution
features. Additionally, experimental observations typically report the presence
of sharp fronts, which are not captured by linear diffusion. Here we address
both limitations by presenting exact sharp-fronted solutions to a model of
degenerate nonlinear diffusion on a growing domain. We obtain the solution by
identifying a series of transformations that converts the model of a nonlinear
diffusive process on an evolving domain to a nonlinear diffusion equation on a
fixed domain, which admits known exact solutions for certain choices of
diffusivity functions. We determine expressions for critical time scales and
domain growth rates such that the diffusive population never reaches the domain
boundaries and hence the solution remains valid.
| [
{
"created": "Tue, 13 Jun 2023 01:42:21 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Oct 2023 00:09:06 GMT",
"version": "v2"
}
] | 2023-10-09 | [
[
"Johnston",
"Stuart T.",
""
],
[
"Simpson",
"Matthew J.",
""
]
] | Models of diffusive processes that occur on evolving domains are frequently employed to describe biological and physical phenomena, such as diffusion within expanding tissues or substrates. Previous investigations into these models either report numerical solutions or require an assumption of linear diffusion to determine exact solutions. Unfortunately, numerical solutions do not reveal the relationship between the model parameters and the solution features. Additionally, experimental observations typically report the presence of sharp fronts, which are not captured by linear diffusion. Here we address both limitations by presenting exact sharp-fronted solutions to a model of degenerate nonlinear diffusion on a growing domain. We obtain the solution by identifying a series of transformations that converts the model of a nonlinear diffusive process on an evolving domain to a nonlinear diffusion equation on a fixed domain, which admits known exact solutions for certain choices of diffusivity functions. We determine expressions for critical time scales and domain growth rates such that the diffusive population never reaches the domain boundaries and hence the solution remains valid. |
2204.09042 | Samuel Hoffman | Vijil Chenthamarakshan, Samuel C. Hoffman, C. David Owen, Petra
Lukacik, Claire Strain-Damerell, Daren Fearon, Tika R. Malla, Anthony Tumber,
Christopher J. Schofield, Helen M.E. Duyvesteyn, Wanwisa Dejnirattisai, Loic
Carrique, Thomas S. Walter, Gavin R. Screaton, Tetiana Matviiuk, Aleksandra
Mojsilovic, Jason Crain, Martin A. Walsh, David I. Stuart, Payel Das | Accelerating Inhibitor Discovery With A Deep Generative Foundation
Model: Validation for SARS-CoV-2 Drug Targets | Revised title, abstract, and text; additional figures | null | null | null | q-bio.QM cs.LG q-bio.BM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of novel inhibitor molecules for emerging drug-target proteins
is widely acknowledged as a challenging inverse design problem: Exhaustive
exploration of the vast chemical search space is impractical, especially when
the target structure or active molecules are unknown. Here we validate
experimentally the broad utility of a deep generative framework trained
at-scale on protein sequences, small molecules, and their mutual interactions
-- that is unbiased toward any specific target. As demonstrators, we consider
two dissimilar and relevant SARS-CoV-2 targets: the main protease and the spike
protein (receptor binding domain, RBD). To perform target-aware design of novel
inhibitor molecules, a protein sequence-conditioned sampling on the generative
foundation model is performed. Despite using only the target sequence
information, and without performing any target-specific adaptation of the
generative model, micromolar-level inhibition was observed in in vitro
experiments for two candidates out of only four synthesized for each target.
The most potent spike RBD inhibitor also exhibited activity against several
variants in live virus neutralization assays. These results therefore establish
that a single, broadly deployable generative foundation model for accelerated
hit discovery is effective and efficient, even in the most general case where
neither target structure nor binder information is available.
| [
{
"created": "Tue, 19 Apr 2022 17:59:46 GMT",
"version": "v1"
},
{
"created": "Wed, 4 May 2022 15:10:57 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Oct 2022 19:36:16 GMT",
"version": "v3"
}
] | 2022-10-18 | [
[
"Chenthamarakshan",
"Vijil",
""
],
[
"Hoffman",
"Samuel C.",
""
],
[
"Owen",
"C. David",
""
],
[
"Lukacik",
"Petra",
""
],
[
"Strain-Damerell",
"Claire",
""
],
[
"Fearon",
"Daren",
""
],
[
"Malla",
"Tika R.",
""
],
[
"Tumber",
"Anthony",
""
],
[
"Schofield",
"Christopher J.",
""
],
[
"Duyvesteyn",
"Helen M. E.",
""
],
[
"Dejnirattisai",
"Wanwisa",
""
],
[
"Carrique",
"Loic",
""
],
[
"Walter",
"Thomas S.",
""
],
[
"Screaton",
"Gavin R.",
""
],
[
"Matviiuk",
"Tetiana",
""
],
[
"Mojsilovic",
"Aleksandra",
""
],
[
"Crain",
"Jason",
""
],
[
"Walsh",
"Martin A.",
""
],
[
"Stuart",
"David I.",
""
],
[
"Das",
"Payel",
""
]
] | The discovery of novel inhibitor molecules for emerging drug-target proteins is widely acknowledged as a challenging inverse design problem: Exhaustive exploration of the vast chemical search space is impractical, especially when the target structure or active molecules are unknown. Here we validate experimentally the broad utility of a deep generative framework trained at-scale on protein sequences, small molecules, and their mutual interactions -- that is unbiased toward any specific target. As demonstrators, we consider two dissimilar and relevant SARS-CoV-2 targets: the main protease and the spike protein (receptor binding domain, RBD). To perform target-aware design of novel inhibitor molecules, a protein sequence-conditioned sampling on the generative foundation model is performed. Despite using only the target sequence information, and without performing any target-specific adaptation of the generative model, micromolar-level inhibition was observed in in vitro experiments for two candidates out of only four synthesized for each target. The most potent spike RBD inhibitor also exhibited activity against several variants in live virus neutralization assays. These results therefore establish that a single, broadly deployable generative foundation model for accelerated hit discovery is effective and efficient, even in the most general case where neither target structure nor binder information is available. |
2406.01617 | Gianvito Grasso | Gabriele Maroni, Filip Stojceski, Lorenzo Pallante, Marco A. Deriu,
Dario Piga, Gianvito Grasso | LightCPPgen: An Explainable Machine Learning Pipeline for Rational
Design of Cell Penetrating Peptides | null | null | null | null | q-bio.BM cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Cell-penetrating peptides (CPPs) are powerful vectors for the intracellular
delivery of a diverse array of therapeutic molecules. Despite their potential,
the rational design of CPPs remains a challenging task that often requires
extensive experimental efforts and iterations. In this study, we introduce an
innovative approach for the de novo design of CPPs, leveraging the strengths of
machine learning (ML) and optimization algorithms. Our strategy, named
LightCPPgen, integrates a LightGBM-based predictive model with a genetic
algorithm (GA), enabling the systematic generation and optimization of CPP
sequences. At the core of our methodology is the development of an accurate,
efficient, and interpretable predictive model, which utilizes 20 explainable
features to shed light on the critical factors influencing CPP translocation
capacity. The CPP predictive model works synergistically with an optimization
algorithm, which is tuned to enhance computational efficiency while maintaining
optimization performance. The GA solutions specifically target the candidate
sequences' penetrability score, while trying to maximize similarity with the
original non-penetrating peptide in order to retain its original biological and
physicochemical properties. By prioritizing the synthesis of only the most
promising CPP candidates, LightCPPgen can drastically reduce the time and cost
associated with wet lab experiments. In summary, our research makes a
substantial contribution to the field of CPP design, offering a robust
framework that combines ML and optimization techniques to facilitate the
rational design of penetrating peptides, by enhancing the explainability and
interpretability of the design process.
| [
{
"created": "Fri, 31 May 2024 10:57:25 GMT",
"version": "v1"
}
] | 2024-06-05 | [
[
"Maroni",
"Gabriele",
""
],
[
"Stojceski",
"Filip",
""
],
[
"Pallante",
"Lorenzo",
""
],
[
"Deriu",
"Marco A.",
""
],
[
"Piga",
"Dario",
""
],
[
"Grasso",
"Gianvito",
""
]
] | Cell-penetrating peptides (CPPs) are powerful vectors for the intracellular delivery of a diverse array of therapeutic molecules. Despite their potential, the rational design of CPPs remains a challenging task that often requires extensive experimental efforts and iterations. In this study, we introduce an innovative approach for the de novo design of CPPs, leveraging the strengths of machine learning (ML) and optimization algorithms. Our strategy, named LightCPPgen, integrates a LightGBM-based predictive model with a genetic algorithm (GA), enabling the systematic generation and optimization of CPP sequences. At the core of our methodology is the development of an accurate, efficient, and interpretable predictive model, which utilizes 20 explainable features to shed light on the critical factors influencing CPP translocation capacity. The CPP predictive model works synergistically with an optimization algorithm, which is tuned to enhance computational efficiency while maintaining optimization performance. The GA solutions specifically target the candidate sequences' penetrability score, while trying to maximize similarity with the original non-penetrating peptide in order to retain its original biological and physicochemical properties. By prioritizing the synthesis of only the most promising CPP candidates, LightCPPgen can drastically reduce the time and cost associated with wet lab experiments. In summary, our research makes a substantial contribution to the field of CPP design, offering a robust framework that combines ML and optimization techniques to facilitate the rational design of penetrating peptides, by enhancing the explainability and interpretability of the design process. |
2109.14781 | Jerome Bartholome | J\'er\^ome Bartholom\'e, Parthiban Thathapalli Prakash and Joshua N.
Cobb | Genomic prediction: progress and perspectives for rice improvement | Book Chapter, 63 pages, 5 figures, 1 table | null | null | null | q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Genomic prediction can be a powerful tool to achieve greater rates of genetic
gain for quantitative traits if thoroughly integrated into a breeding strategy.
In rice as in other crops, the interest in genomic prediction is very strong
with a number of studies addressing multiple aspects of its use, ranging from
the more conceptual to the more practical. In this chapter, we review the
literature on rice (Oryza sativa) and summarize important considerations for
the integration of genomic prediction in breeding programs. The irrigated
breeding program at the International Rice Research Institute is used as a
concrete example on which we provide data and R scripts to reproduce the
analysis but also to highlight practical challenges regarding the use of
predictions. The adage: "To someone with a hammer, everything looks like a
nail" describes a common psychological pitfall that sometimes plagues the
integration and application of new technologies to a discipline. We have
designed this chapter to help rice breeders avoid that pitfall and appreciate
the benefits and limitations of applying genomic prediction, as it is not
always the best approach nor the first step to increasing the rate of genetic
gain in every context.
| [
{
"created": "Thu, 30 Sep 2021 01:11:21 GMT",
"version": "v1"
}
] | 2021-10-01 | [
[
"Bartholomé",
"Jérôme",
""
],
[
"Prakash",
"Parthiban Thathapalli",
""
],
[
"Cobb",
"Joshua N.",
""
]
] | Genomic prediction can be a powerful tool to achieve greater rates of genetic gain for quantitative traits if thoroughly integrated into a breeding strategy. In rice as in other crops, the interest in genomic prediction is very strong with a number of studies addressing multiple aspects of its use, ranging from the more conceptual to the more practical. In this chapter, we review the literature on rice (Oryza sativa) and summarize important considerations for the integration of genomic prediction in breeding programs. The irrigated breeding program at the International Rice Research Institute is used as a concrete example on which we provide data and R scripts to reproduce the analysis but also to highlight practical challenges regarding the use of predictions. The adage: "To someone with a hammer, everything looks like a nail" describes a common psychological pitfall that sometimes plagues the integration and application of new technologies to a discipline. We have designed this chapter to help rice breeders avoid that pitfall and appreciate the benefits and limitations of applying genomic prediction, as it is not always the best approach nor the first step to increasing the rate of genetic gain in every context. |
2107.10244 | Carina Curto | Caitlyn Parmelee, Juliana Londono Alvarez, Carina Curto, Katherine
Morrison | Sequential attractors in combinatorial threshold-linear networks | 41 pages, 23 figures | SIAM J. Applied Dynamical Systems, Vol. 21, No. 2, pp. 1597-1630,
2022 | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sequences of neural activity arise in many brain areas, including cortex,
hippocampus, and central pattern generator circuits that underlie rhythmic
behaviors like locomotion. While network architectures supporting sequence
generation vary considerably, a common feature is an abundance of inhibition.
In this work, we focus on architectures that support sequential activity in
recurrently connected networks with inhibition-dominated dynamics.
Specifically, we study emergent sequences in a special family of
threshold-linear networks, called combinatorial threshold-linear networks
(CTLNs), whose connectivity matrices are defined from directed graphs. Such
networks naturally give rise to an abundance of sequences whose dynamics are
tightly connected to the underlying graph. We find that architectures based on
generalizations of cycle graphs produce limit cycle attractors that can be
activated to generate transient or persistent (repeating) sequences. Each
architecture type gives rise to an infinite family of graphs that can be built
from arbitrary component subgraphs. Moreover, we prove a number of graph rules
for the corresponding CTLNs in each family. The graph rules allow us to
strongly constrain, and in some cases fully determine, the fixed points of the
network in terms of the fixed points of the component subnetworks. Finally, we
also show how the structure of certain architectures gives insight into the
sequential dynamics of the corresponding attractor.
| [
{
"created": "Wed, 21 Jul 2021 17:47:17 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Aug 2021 17:55:58 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Sep 2021 04:28:12 GMT",
"version": "v3"
},
{
"created": "Mon, 15 Aug 2022 01:42:31 GMT",
"version": "v4"
}
] | 2022-08-16 | [
[
"Parmelee",
"Caitlyn",
""
],
[
"Alvarez",
"Juliana Londono",
""
],
[
"Curto",
"Carina",
""
],
[
"Morrison",
"Katherine",
""
]
] | Sequences of neural activity arise in many brain areas, including cortex, hippocampus, and central pattern generator circuits that underlie rhythmic behaviors like locomotion. While network architectures supporting sequence generation vary considerably, a common feature is an abundance of inhibition. In this work, we focus on architectures that support sequential activity in recurrently connected networks with inhibition-dominated dynamics. Specifically, we study emergent sequences in a special family of threshold-linear networks, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. Such networks naturally give rise to an abundance of sequences whose dynamics are tightly connected to the underlying graph. We find that architectures based on generalizations of cycle graphs produce limit cycle attractors that can be activated to generate transient or persistent (repeating) sequences. Each architecture type gives rise to an infinite family of graphs that can be built from arbitrary component subgraphs. Moreover, we prove a number of graph rules for the corresponding CTLNs in each family. The graph rules allow us to strongly constrain, and in some cases fully determine, the fixed points of the network in terms of the fixed points of the component subnetworks. Finally, we also show how the structure of certain architectures gives insight into the sequential dynamics of the corresponding attractor. |
2012.06697 | Homayoun Valafar | Xijiang Miao, Michael G. Bryson, Homayoun Valafar | TALI: Protein Structure Alignment Using Backbone Torsion Angles | Seven pages | Published in BIOCOMP 2006: 3-9 | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | This article introduces a novel protein structure alignment method (named
TALI) based on the protein backbone torsion angle instead of the more
traditional distance matrix. Because the structural alignment of the two
proteins is based on the comparison of two sequences of numbers (backbone
torsion angles), we can take advantage of a large number of well-developed
methods such as Smith-Waterman or Needleman-Wunsch. Here we report the result
of TALI in comparison to other structure alignment methods such as DALI, CE,
and SSM as well as sequence alignment based on PSI-BLAST. TALI demonstrated
great success over all other methods in application to challenging proteins.
TALI was more successful in recognizing remote structural homology. TALI also
demonstrated an ability to identify structural homology between two proteins
where the structural difference was due to a rotation of internal domains by
nearly 180$^\circ$.
| [
{
"created": "Sat, 12 Dec 2020 01:45:30 GMT",
"version": "v1"
}
] | 2020-12-15 | [
[
"Miao",
"Xijiang",
""
],
[
"Bryson",
"Michael G.",
""
],
[
"Valafar",
"Homayoun",
""
]
] | This article introduces a novel protein structure alignment method (named TALI) based on the protein backbone torsion angle instead of the more traditional distance matrix. Because the structural alignment of the two proteins is based on the comparison of two sequences of numbers (backbone torsion angles), we can take advantage of a large number of well-developed methods such as Smith-Waterman or Needleman-Wunsch. Here we report the result of TALI in comparison to other structure alignment methods such as DALI, CE, and SSM as well as sequence alignment based on PSI-BLAST. TALI demonstrated great success over all other methods in application to challenging proteins. TALI was more successful in recognizing remote structural homology. TALI also demonstrated an ability to identify structural homology between two proteins where the structural difference was due to a rotation of internal domains by nearly 180$^\circ$. |
2004.08207 | Marco Paggi | Marco Paggi | Simulation of Covid-19 epidemic evolution: are compartmental models
really predictive? | 12 pages, 2 figures | null | null | null | q-bio.PE cs.LG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational models for the simulation of the severe acute respiratory
syndrome coronavirus 2 (SARS-CoV-2) epidemic evolution would be extremely
useful to support authorities in designing healthcare policies and lockdown
measures to contain its impact on public health and economy. In Italy, the
devised forecasts have been mostly based on a pure data-driven approach, by
fitting and extrapolating open data on the epidemic evolution collected by the
Italian Civil Protection Center. In this respect, SIR epidemiological models,
which start from the description of the nonlinear interactions between
population compartments, would be a much more desirable approach to understand
and predict the collective emergent response. The present contribution
addresses the fundamental question whether a SIR epidemiological model,
suitably enriched with asymptomatic and dead individual compartments, could be
able to provide reliable predictions on the epidemic evolution. To this aim, a
machine learning approach based on particle swarm optimization (PSO) is
proposed to automatically identify the model parameters based on a training set
of data of progressive increasing size, considering Lombardy in Italy as a case
study. The analysis of the scatter in the forecasts shows that model
predictions are quite sensitive to the size of the dataset used for training,
and that further data are still required to achieve convergent -- and therefore
reliable -- predictions.
| [
{
"created": "Tue, 14 Apr 2020 08:42:11 GMT",
"version": "v1"
}
] | 2020-04-20 | [
[
"Paggi",
"Marco",
""
]
] | Computational models for the simulation of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) epidemic evolution would be extremely useful to support authorities in designing healthcare policies and lockdown measures to contain its impact on public health and economy. In Italy, the devised forecasts have been mostly based on a pure data-driven approach, by fitting and extrapolating open data on the epidemic evolution collected by the Italian Civil Protection Center. In this respect, SIR epidemiological models, which start from the description of the nonlinear interactions between population compartments, would be a much more desirable approach to understand and predict the collective emergent response. The present contribution addresses the fundamental question whether a SIR epidemiological model, suitably enriched with asymptomatic and dead individual compartments, could be able to provide reliable predictions on the epidemic evolution. To this aim, a machine learning approach based on particle swarm optimization (PSO) is proposed to automatically identify the model parameters based on a training set of data of progressive increasing size, considering Lombardy in Italy as a case study. The analysis of the scatter in the forecasts shows that model predictions are quite sensitive to the size of the dataset used for training, and that further data are still required to achieve convergent -- and therefore reliable -- predictions. |
2308.05777 | Martin Buttenschoen | Martin Buttenschoen, Garrett M. Morris, Charlotte M. Deane | PoseBusters: AI-based docking methods fail to generate physically valid
poses or generalise to novel sequences | 10 pages, 6 figures, version 2 added an additional filter to the
PoseBusters Benchmark set to remove ligands with crystal contacts, version 3
corrected the description of the binding site used for Uni-Mol | null | null | null | q-bio.QM physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | The last few years have seen the development of numerous deep learning-based
protein-ligand docking methods. They offer huge promise in terms of speed and
accuracy. However, despite claims of state-of-the-art performance in terms of
crystallographic root-mean-square deviation (RMSD), upon closer inspection, it
has become apparent that they often produce physically implausible molecular
structures. It is therefore not sufficient to evaluate these methods solely by
RMSD to a native binding mode. It is vital, particularly for deep
learning-based methods, that they are also evaluated on steric and energetic
criteria. We present PoseBusters, a Python package that performs a series of
standard quality checks using the well-established cheminformatics toolkit
RDKit. Only methods that both pass these checks and predict native-like binding
modes should be classed as having "state-of-the-art" performance. We use
PoseBusters to compare five deep learning-based docking methods (DeepDock,
DiffDock, EquiBind, TankBind, and Uni-Mol) and two well-established standard
docking methods (AutoDock Vina and CCDC Gold) with and without an additional
post-prediction energy minimisation step using a molecular mechanics force
field. We show that both in terms of physical plausibility and the ability to
generalise to examples that are distinct from the training data, no deep
learning-based method yet outperforms classical docking tools. In addition, we
find that molecular mechanics force fields contain docking-relevant physics
missing from deep-learning methods. PoseBusters allows practitioners to assess
docking and molecular generation methods and may inspire new inductive biases
still required to improve deep learning-based methods, which will help drive
the development of more accurate and more realistic predictions.
| [
{
"created": "Thu, 10 Aug 2023 11:28:48 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Nov 2023 11:09:55 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Nov 2023 12:01:36 GMT",
"version": "v3"
}
] | 2023-11-29 | [
[
"Buttenschoen",
"Martin",
""
],
[
"Morris",
"Garrett M.",
""
],
[
"Deane",
"Charlotte M.",
""
]
] | The last few years have seen the development of numerous deep learning-based protein-ligand docking methods. They offer huge promise in terms of speed and accuracy. However, despite claims of state-of-the-art performance in terms of crystallographic root-mean-square deviation (RMSD), upon closer inspection, it has become apparent that they often produce physically implausible molecular structures. It is therefore not sufficient to evaluate these methods solely by RMSD to a native binding mode. It is vital, particularly for deep learning-based methods, that they are also evaluated on steric and energetic criteria. We present PoseBusters, a Python package that performs a series of standard quality checks using the well-established cheminformatics toolkit RDKit. Only methods that both pass these checks and predict native-like binding modes should be classed as having "state-of-the-art" performance. We use PoseBusters to compare five deep learning-based docking methods (DeepDock, DiffDock, EquiBind, TankBind, and Uni-Mol) and two well-established standard docking methods (AutoDock Vina and CCDC Gold) with and without an additional post-prediction energy minimisation step using a molecular mechanics force field. We show that both in terms of physical plausibility and the ability to generalise to examples that are distinct from the training data, no deep learning-based method yet outperforms classical docking tools. In addition, we find that molecular mechanics force fields contain docking-relevant physics missing from deep-learning methods. PoseBusters allows practitioners to assess docking and molecular generation methods and may inspire new inductive biases still required to improve deep learning-based methods, which will help drive the development of more accurate and more realistic predictions. |
1810.13342 | Diederik Aerts | Diederik Aerts, Lester Beltran, Suzette Geriente, Massimiliano Sassoli
de Bianchi, Sandro Sozzo, Rembrandt Van Sprundel and Tomas Veloz | Quantum Theory Methods as a Possible Alternative for the Double-Blind
Gold Standard of Evidence-Based Medicine: Outlining a New Research Program | 9 pages, no figures | Foundations of Science 24, pp. 217-225 (2019) | 10.1007/s10699-018-9572-0 | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We motivate the possibility of using notions and methods derived from quantum
physics, and more specifically from the research field known as 'quantum
cognition', to optimally model different situations in the field of medicine,
its decision-making processes and ensuing practices, particularly in relation
to chronic and rare diseases. This is also a way to devise alternative
approaches to the generally adopted double-blind gold standard.
| [
{
"created": "Wed, 31 Oct 2018 15:33:24 GMT",
"version": "v1"
}
] | 2023-02-27 | [
[
"Aerts",
"Diederik",
""
],
[
"Beltran",
"Lester",
""
],
[
"Geriente",
"Suzette",
""
],
[
"de Bianchi",
"Massimiliano Sassoli",
""
],
[
"Sozzo",
"Sandro",
""
],
[
"Van Sprundel",
"Rembrandt",
""
],
[
"Veloz",
"Tomas",
""
]
] | We motivate the possibility of using notions and methods derived from quantum physics, and more specifically from the research field known as 'quantum cognition', to optimally model different situations in the field of medicine, its decision-making processes and ensuing practices, particularly in relation to chronic and rare diseases. This is also a way to devise alternative approaches to the generally adopted double-blind gold standard. |
1608.08565 | Fredrik Vannberg | Swetha Srinivasan, Michelle Su, Shashidhar Ravishankar, James Moore,
PamelaSara E Head, J. Brandon Dixon, Fredrik O Vannberg | TLR-exosomes exhibit distinct kinetics and effector function | null | null | null | null | q-bio.CB q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The innate immune system is vital to rapidly responding to pathogens and
Toll-like receptors (TLRs) are a critical component of this response.
Nanovesicular exosomes play a role in immunity, but to date their exact
contribution to the dissemination of the TLR response is unknown. Here we show
that exosomes from TLR stimulated cells (TLR-exosomes) can largely recapitulate
TLR activation in distal cells in vitro. We can abrogate the
action-at-a-distance signaling of exosomes by UV irradiation, demonstrating
that RNA is crucial for their effector function. We are the first to show that
exosomes derived from poly(I:C) stimulated cells induce in vivo macrophage
M1-like polarization within murine lymph nodes. These TLR-exosomes demonstrate
enhanced trafficking to the node and preferentially recruit neutrophils as
compared to control-exosomes. This work definitively establishes the
differential effector function for TLR-exosomes in communicating the activation
state of the cell of origin.
| [
{
"created": "Wed, 10 Aug 2016 00:24:01 GMT",
"version": "v1"
}
] | 2016-08-31 | [
[
"Srinivasan",
"Swetha",
""
],
[
"Su",
"Michelle",
""
],
[
"Ravishankar",
"Shashidhar",
""
],
[
"Moore",
"James",
""
],
[
"Head",
"PamelaSara E",
""
],
[
"Dixon",
"J. Brandon",
""
],
[
"Vannberg",
"Fredrik O",
""
]
] | The innate immune system is vital to rapidly responding to pathogens and Toll-like receptors (TLRs) are a critical component of this response. Nanovesicular exosomes play a role in immunity, but to date their exact contribution to the dissemination of the TLR response is unknown. Here we show that exosomes from TLR stimulated cells (TLR-exosomes) can largely recapitulate TLR activation in distal cells in vitro. We can abrogate the action-at-a-distance signaling of exosomes by UV irradiation, demonstrating that RNA is crucial for their effector function. We are the first to show that exosomes derived from poly(I:C) stimulated cells induce in vivo macrophage M1-like polarization within murine lymph nodes. These TLR-exosomes demonstrate enhanced trafficking to the node and preferentially recruit neutrophils as compared to control-exosomes. This work definitively establishes the differential effector function for TLR-exosomes in communicating the activation state of the cell of origin. |
2402.03888 | Luis Sanz | Luis Sanz, Rafael Bravo de la Parra | Stochastic matrix metapopulation models with fast migration: re-scaling
survival to the fast scale | null | Sanz, L., Bravo de la Parra, R., 2020, Stochastic matrix
metapopulation models with fast migration: Re-scaling survival to the fast
scale. Ecological Modelling, 418, 108829 | 10.1016/j.ecolmodel.2019.108829 | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this work we address the analysis of discrete-time models of structured
metapopulations subject to environmental stochasticity. Previous works on these
models made use of the fact that migrations between the patches can be
considered fast with respect to demography (maturation, survival, reproduction)
in the population. It was assumed that, within each time step of the model,
there are many fast migration steps followed by one slow demographic event.
This assumption allowed one to apply approximate reduction techniques that
eased the model analysis. It is however a questionable issue in some cases
since, in particular, individuals can die at any moment of the time step. We
propose new non-equivalent models in which we re-scale survival to consider its
effect on the fast scale. We propose a more general formulation of the
approximate reduction techniques so that they also apply to the proposed new
models. We prove that the main asymptotic elements in this kind of stochastic
models, the Stochastic Growth Rate (SGR) and the Scaled Logarithmic Variance
(SLV), can be related between the original and the reduced systems, so that the
analysis of the latter allows us to ascertain the population fate in the former.
Then we go on to consider some cases where we illustrate the reduction
technique and show the differences between both modelling options. In some
cases using one option represents exponential growth, whereas the other yields
extinction.
| [
{
"created": "Tue, 6 Feb 2024 10:53:31 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Sanz",
"Luis",
""
],
[
"de la Parra",
"Rafael Bravo",
""
]
] | In this work we address the analysis of discrete-time models of structured metapopulations subject to environmental stochasticity. Previous works on these models made use of the fact that migrations between the patches can be considered fast with respect to demography (maturation, survival, reproduction) in the population. It was assumed that, within each time step of the model, there are many fast migration steps followed by one slow demographic event. This assumption allowed one to apply approximate reduction techniques that eased the model analysis. It is however a questionable issue in some cases since, in particular, individuals can die at any moment of the time step. We propose new non-equivalent models in which we re-scale survival to consider its effect on the fast scale. We propose a more general formulation of the approximate reduction techniques so that they also apply to the proposed new models. We prove that the main asymptotic elements in this kind of stochastic models, the Stochastic Growth Rate (SGR) and the Scaled Logarithmic Variance (SLV), can be related between the original and the reduced systems, so that the analysis of the latter allows us to ascertain the population fate in the former. Then we go on to consider some cases where we illustrate the reduction technique and show the differences between both modelling options. In some cases using one option represents exponential growth, whereas the other yields extinction. |
2202.01933 | Jeremy Manning | Jeremy R. Manning | Identifying stimulus-driven neural activity patterns in multi-patient
intracranial recordings | Forthcoming chapter in "Intracranial EEG for Cognitive Neuroscience" | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying stimulus-driven neural activity patterns is critical for studying
the neural basis of cognition. This can be particularly challenging in
intracranial datasets, where electrode locations typically vary across
patients. This chapter first presents an overview of the major challenges to
identifying stimulus-driven neural activity patterns in the general case. Next,
we will review several modality-specific considerations and approaches, along
with a discussion of several issues that are particular to intracranial
recordings. Against this backdrop, we will consider a variety of within-subject
and across-subject approaches to identifying and modeling stimulus-driven
neural activity patterns in multi-patient intracranial recordings. These
approaches include generalized linear models, multivariate pattern analysis,
representational similarity analysis, joint stimulus-activity models,
hierarchical matrix factorization models, Gaussian process models, geometric
alignment models, inter-subject correlations, and inter-subject functional
correlations. Examples from the recent literature serve to illustrate the major
concepts and provide the conceptual intuitions for each approach.
| [
{
"created": "Fri, 4 Feb 2022 01:29:57 GMT",
"version": "v1"
}
] | 2022-02-07 | [
[
"Manning",
"Jeremy R.",
""
]
] | Identifying stimulus-driven neural activity patterns is critical for studying the neural basis of cognition. This can be particularly challenging in intracranial datasets, where electrode locations typically vary across patients. This chapter first presents an overview of the major challenges to identifying stimulus-driven neural activity patterns in the general case. Next, we will review several modality-specific considerations and approaches, along with a discussion of several issues that are particular to intracranial recordings. Against this backdrop, we will consider a variety of within-subject and across-subject approaches to identifying and modeling stimulus-driven neural activity patterns in multi-patient intracranial recordings. These approaches include generalized linear models, multivariate pattern analysis, representational similarity analysis, joint stimulus-activity models, hierarchical matrix factorization models, Gaussian process models, geometric alignment models, inter-subject correlations, and inter-subject functional correlations. Examples from the recent literature serve to illustrate the major concepts and provide the conceptual intuitions for each approach. |
1805.10677 | {\AA}ke Svensson | {\AA}ke Svensson | On a stochastic model of epidemic spread with an application to
competing infections | null | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simple, but ``classical``, stochastic model for epidemic spread in a
finite, but large, population is studied. The progress of the epidemic can be
divided into three different phases that require different tools to analyse.
Initially the process is approximated by a branching process. It is discussed
for how long this approximation is valid. When a non-negligible proportion
of the population is already infected the process can be studied using
differential equations. In a final phase the spread will fade out. The results
are used to investigate what happens if two strains of infectious agents, with
different potential for spread, are simultaneously introduced in a totally
susceptible population. It is assumed that an infection causes immunity, and
that a person can only be infected by one strain. The two epidemics will
initially develop approximately as independent branching processes. However, if
both strains cause large epidemics they will, due to immunity, eventually
interact. We will mainly be interested in the final outcome of the spread,
i.e., how large a proportion of the population is infected by the different
strains.
| [
{
"created": "Sun, 27 May 2018 19:54:17 GMT",
"version": "v1"
}
] | 2018-05-29 | [
[
"Svensson",
"Åke",
""
]
] | A simple, but ``classical'', stochastic model for epidemic spread in a finite, but large, population is studied. The progress of the epidemic can be divided into three different phases that require different tools to analyse. Initially the process is approximated by a branching process. It is discussed for how long this approximation is valid. When a non-negligible proportion of the population is already infected the process can be studied using differential equations. In a final phase the spread will fade out. The results are used to investigate what happens if two strains of infectious agents, with different potential for spread, are simultaneously introduced in a totally susceptible population. It is assumed that an infection causes immunity, and that a person can only be infected by one strain. The two epidemics will initially develop approximately as independent branching processes. However, if both strains cause large epidemics they will, due to immunity, eventually interact. We will mainly be interested in the final outcome of the spread, i.e., how large a proportion of the population is infected by the different strains. |
2407.09976 | Dena Clink | Dena J. Clink, Jinsung Kim, Hope Cross-Jaya, Abdul Hamid Ahmad, Moeurk
Hong, Roeun Sala, H\'el\`ene Birot, Cain Agger, Thinh Tien Vu, Hoa Nguyen
Thi, Thanh Nguyen Chi, and Holger Klinck | Automated detection of gibbon calls from passive acoustic monitoring
data using convolutional neural networks in the "torch for R" ecosystem | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Automated detection of acoustic signals is crucial for effective monitoring
of vocal animals and their habitats across ecologically-relevant spatial and
temporal scales. Recent advances in deep learning have made these approaches
more accessible. However, there are few deep learning approaches that can be
implemented natively in the R programming environment; approaches that run
natively in R may be more accessible for ecologists. The "torch for R"
ecosystem has made the use of transfer learning with convolutional neural
networks accessible for R users. Here, we evaluate a workflow that uses
transfer learning for the automated detection of acoustic signals from passive
acoustic monitoring (PAM) data. Our specific goals include: 1) present a method
for automated detection of gibbon calls from PAM data using the "torch for R"
ecosystem; 2) compare the results of transfer learning for six pretrained CNN
architectures; and 3) investigate how well the different architectures perform
on datasets of the female calls from two different gibbon species: the northern
grey gibbon (Hylobates funereus) and the southern yellow-cheeked crested gibbon
(Nomascus gabriellae). We found that the highest performing architecture
depended on the test dataset. We successfully deployed the top performing model
for each gibbon species to investigate spatial variation in gibbon calling
behavior across two grids of autonomous recording units in Danum Valley
Conservation Area, Malaysia and Keo Seima Wildlife Sanctuary, Cambodia. The
fields of deep learning and automated detection are rapidly evolving, and we
provide the methods and datasets as benchmarks for future work.
| [
{
"created": "Sat, 13 Jul 2024 18:44:04 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jul 2024 16:06:08 GMT",
"version": "v2"
}
] | 2024-07-29 | [
[
"Clink",
"Dena J.",
""
],
[
"Kim",
"Jinsung",
""
],
[
"Cross-Jaya",
"Hope",
""
],
[
"Ahmad",
"Abdul Hamid",
""
],
[
"Hong",
"Moeurk",
""
],
[
"Sala",
"Roeun",
""
],
[
"Birot",
"Hélène",
""
],
[
"Agger",
"Cain",
""
],
[
"Vu",
"Thinh Tien",
""
],
[
"Thi",
"Hoa Nguyen",
""
],
[
"Chi",
"Thanh Nguyen",
""
],
[
"Klinck",
"Holger",
""
]
] | Automated detection of acoustic signals is crucial for effective monitoring of vocal animals and their habitats across ecologically-relevant spatial and temporal scales. Recent advances in deep learning have made these approaches more accessible. However, there are few deep learning approaches that can be implemented natively in the R programming environment; approaches that run natively in R may be more accessible for ecologists. The "torch for R" ecosystem has made the use of transfer learning with convolutional neural networks accessible for R users. Here, we evaluate a workflow that uses transfer learning for the automated detection of acoustic signals from passive acoustic monitoring (PAM) data. Our specific goals include: 1) present a method for automated detection of gibbon calls from PAM data using the "torch for R" ecosystem; 2) compare the results of transfer learning for six pretrained CNN architectures; and 3) investigate how well the different architectures perform on datasets of the female calls from two different gibbon species: the northern grey gibbon (Hylobates funereus) and the southern yellow-cheeked crested gibbon (Nomascus gabriellae). We found that the highest performing architecture depended on the test dataset. We successfully deployed the top performing model for each gibbon species to investigate spatial variation in gibbon calling behavior across two grids of autonomous recording units in Danum Valley Conservation Area, Malaysia and Keo Seima Wildlife Sanctuary, Cambodia. The fields of deep learning and automated detection are rapidly evolving, and we provide the methods and datasets as benchmarks for future work. |
0709.1947 | Garrett Kenyon | Garrett T. Kenyon | Extreme Synergy in a Retinal Code: Spatiotemporal Correlations Enable
Rapid Image Reconstruction | null | null | null | null | q-bio.NC | null | Over the brief time intervals available for processing retinal output,
roughly 50 to 300 msec, the number of extra spikes generated by individual
ganglion cells can be quite variable. Here, computer-generated spike trains
were used to investigate how signal/noise might be improved by utilizing
spatiotemporal correlations among retinal neurons responding to large,
contiguous stimuli. Realistic correlations were produced by modulating the
instantaneous firing probabilities of all stimulated neurons by a common
oscillatory input whose amplitude and temporal structure were consistent with
experimentally measured field potentials and correlograms. Whereas previous
studies have typically measured synergy between pairs of ganglion cells
examined one at a time, or alternatively have employed optimized linear filters
to decode activity across larger populations, the present study investigated a
distributed, non-linear encoding strategy by using Principal Components
Analysis (PCA) to reconstruct simple visual stimuli from up to one million
oscillatory pairwise correlations extracted on single trials from
massively-parallel spike trains as short as 25 msec in duration. By integrating
signals across retinal neighborhoods commensurate in size to classical
antagonistic surrounds, the first principal component of the pairwise
correlation matrix yielded dramatic improvements in signal/noise without
sacrificing fine spatial detail. These results demonstrate how local intensity
information can be distributed across hundreds of neurons linked by a common,
stimulus-dependent oscillatory modulation, a strategy that might have evolved
to minimize the number of spikes required to support rapid image
reconstruction.
| [
{
"created": "Wed, 12 Sep 2007 20:12:20 GMT",
"version": "v1"
}
] | 2007-09-14 | [
[
"Kenyon",
"Garrett T.",
""
]
] | Over the brief time intervals available for processing retinal output, roughly 50 to 300 msec, the number of extra spikes generated by individual ganglion cells can be quite variable. Here, computer-generated spike trains were used to investigate how signal/noise might be improved by utilizing spatiotemporal correlations among retinal neurons responding to large, contiguous stimuli. Realistic correlations were produced by modulating the instantaneous firing probabilities of all stimulated neurons by a common oscillatory input whose amplitude and temporal structure were consistent with experimentally measured field potentials and correlograms. Whereas previous studies have typically measured synergy between pairs of ganglion cells examined one at a time, or alternatively have employed optimized linear filters to decode activity across larger populations, the present study investigated a distributed, non-linear encoding strategy by using Principal Components Analysis (PCA) to reconstruct simple visual stimuli from up to one million oscillatory pairwise correlations extracted on single trials from massively-parallel spike trains as short as 25 msec in duration. By integrating signals across retinal neighborhoods commensurate in size to classical antagonistic surrounds, the first principal component of the pairwise correlation matrix yielded dramatic improvements in signal/noise without sacrificing fine spatial detail. These results demonstrate how local intensity information can be distributed across hundreds of neurons linked by a common, stimulus-dependent oscillatory modulation, a strategy that might have evolved to minimize the number of spikes required to support rapid image reconstruction. |
0803.3143 | Denis Semenov A. | Denis A. Semenov | Evolution of the genetic code. Emergence of DNA | 3 pages | null | null | null | q-bio.BM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This hypothesis can provide an opportunity to trace logically the process of
the emergence of the DNA double helix. In this hypothesis, AT-enrichment is the
main factor in the evolution of the DNA double helix from the RNA double helix.
| [
{
"created": "Fri, 21 Mar 2008 11:04:19 GMT",
"version": "v1"
}
] | 2008-03-24 | [
[
"Semenov",
"Denis A.",
""
]
] | This hypothesis can provide an opportunity to trace logically the process of the emergence of the DNA double helix. In this hypothesis, AT-enrichment is the main factor in the evolution of the DNA double helix from the RNA double helix. |
1506.01112 | Sylvain Tollis | Sylvain Tollis | A Jump Distance-based Bayesian analysis method to unveil fine single
molecule transport features | 27 pages, 4 figures, 4 supplementary figures | null | null | null | q-bio.QM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single-molecule tracking (SMT) methods are under considerable expansion in
many fields of cell biology, as the dynamics of cellular components in
biological mechanisms becomes increasingly relevant. Despite the development of
SMT technologies, it is still difficult to reconcile a sparse signal at all
times (required to distinguish single molecules) with long individual
trajectories, within confined regions of the cell and given experimental
limitations. This strongly reduces the performance of current data analysis
methods in extracting meaningful transport features from single-molecule
trajectories. In this work, we develop and implement a new mathematical
analysis method of SMT data, which takes advantage of the large number of
(short) trajectories that are typically obtained with cellular systems in vivo.
The method is based on the fitting of the jump distance distribution, i.e. the
distribution that represents how far molecules travel in a set time interval;
it uses a Bayesian approach to compare plausible molecule motion models and
extract both qualitative and quantitative information. Finally, the method is
tested on in silico trajectories simulated using Monte Carlo algorithms, and
ranges of parameters for which the method yields accurate results are
determined.
| [
{
"created": "Wed, 3 Jun 2015 03:40:44 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jun 2015 04:09:43 GMT",
"version": "v2"
}
] | 2015-06-08 | [
[
"Tollis",
"Sylvain",
""
]
] | Single-molecule tracking (SMT) methods are under considerable expansion in many fields of cell biology, as the dynamics of cellular components in biological mechanisms becomes increasingly relevant. Despite the development of SMT technologies, it is still difficult to reconcile a sparse signal at all times (required to distinguish single molecules) with long individual trajectories, within confined regions of the cell and given experimental limitations. This strongly reduces the performance of current data analysis methods in extracting meaningful transport features from single-molecule trajectories. In this work, we develop and implement a new mathematical analysis method of SMT data, which takes advantage of the large number of (short) trajectories that are typically obtained with cellular systems in vivo. The method is based on the fitting of the jump distance distribution, i.e. the distribution that represents how far molecules travel in a set time interval; it uses a Bayesian approach to compare plausible molecule motion models and extract both qualitative and quantitative information. Finally, the method is tested on in silico trajectories simulated using Monte Carlo algorithms, and ranges of parameters for which the method yields accurate results are determined. |