| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1411.7557 | Liu Hong | Liu Hong, Ya-Jing Huang, Wen-An Yong | A Kinetic Model for Cell Damage Caused by Oligomer Formation | 16 pages+ 5 figures for maintext; 8 pages+ 4 figures for Supporting
Materials | null | 10.1016/j.bpj.2015.08.007 | null | q-bio.BM q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is well known that the formation of amyloid fibers may cause irreversible
damage to cells, yet the underlying mechanism has not been fully uncovered.
In this paper, we construct a mathematical model consisting of infinitely many
ODEs in the form of mass-action equations together with two reaction-convection
PDEs, and then simplify it to a system of five ODEs using the maximum entropy
principle. This model is based on four simple assumptions, one of which is that
cell damage is caused by oligomers rather than by mature fibrils. With the
simplified model, the effects of nucleation and elongation, fragmentation, and
protein and seed concentrations on amyloid formation and cell damage are
extensively explored and compared with experiments. We hope that our results
can provide valuable insight into the processes of amyloid formation and the
cell damage it causes.
| [
{
"created": "Thu, 27 Nov 2014 11:46:33 GMT",
"version": "v1"
}
] | 2023-07-19 | [
[
"Hong",
"Liu",
""
],
[
"Huang",
"Ya-Jing",
""
],
[
"Yong",
"Wen-An",
""
]
] | It is well known that the formation of amyloid fibers may cause irreversible damage to cells, yet the underlying mechanism has not been fully uncovered. In this paper, we construct a mathematical model consisting of infinitely many ODEs in the form of mass-action equations together with two reaction-convection PDEs, and then simplify it to a system of five ODEs using the maximum entropy principle. This model is based on four simple assumptions, one of which is that cell damage is caused by oligomers rather than by mature fibrils. With the simplified model, the effects of nucleation and elongation, fragmentation, and protein and seed concentrations on amyloid formation and cell damage are extensively explored and compared with experiments. We hope that our results can provide valuable insight into the processes of amyloid formation and the cell damage it causes. |
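The entry above reduces infinitely many mass-action ODEs to a small closed system. As a hedged illustration only, not the paper's actual equations, a standard nucleation-elongation-fragmentation moment closure (monomer `m`, fibril number `P`, fibril mass `M`) can be integrated as below; all rate constants and the nucleus size `N_C` are placeholder values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants, not the values used in the paper.
K_N, K_P, K_F, N_C = 1e-4, 5.0, 1e-2, 2  # nucleation, elongation, fragmentation, nucleus size

def rhs(t, y):
    m, P, M = y                        # free monomer, fibril number, fibril mass
    nucleation = K_N * m**N_C
    dP = nucleation + K_F * M          # new fibrils from primary nucleation and breakage
    dM = 2 * K_P * m * P + N_C * nucleation
    return [-dM, dP, dM]               # monomer is consumed as fibril mass grows

m0 = 1.0
sol = solve_ivp(rhs, (0.0, 50.0), [m0, 0.0, 0.0], rtol=1e-8)
m, P, M = sol.y[:, -1]
print(f"final fibril mass fraction: {M / m0:.3f}")
```

Total monomer mass `m + M` is conserved by construction, which is a quick sanity check on any such closure.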
1811.00477 | Jennifer McManus | Amir R. Khan, Susan James, Michelle K. Quinn, Irem Altan, Patrick
Charbonneau, Jennifer J. McManus | Temperature-dependent non-covalent protein-protein interactions explain
normal and inverted solubility in a mutant of human gamma D-crystallin | null | Biophys. J. 117, 930-937 (2019) | 10.1016/j.bpj.2019.07.019 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein crystal production is a major bottleneck for the structural
characterisation of proteins. To advance beyond large-scale screening, rational
strategies for protein crystallization are crucial. Understanding how chemical
anisotropy (or patchiness) of the protein surface, due to the variety of amino
acid side chains in contact with solvent, contributes to protein-protein
contact formation in the crystal lattice is a major obstacle to predicting and
optimising crystallization. The relative scarcity of sophisticated theoretical
models that include sufficient detail to link collective behaviour, captured in
protein phase diagrams, with molecular-level details, determined from
high-resolution structural information, is a further barrier. Here we present
two crystal structures for the P23TR36S mutant of gamma D-crystallin with
opposite solubility behaviour: one melts when heated, the other when
cooled. When combined with the protein phase diagram and a tailored patchy
particle model, we show that a single temperature-dependent interaction
sufficient to stabilise the inverted solubility crystal. This contact, at the
P23T substitution site, relates to a genetic cataract and reveals, at a
molecular level, the origin of the lowered and retrograde solubility of the
protein. Our results show that the approach employed here may present an
alternative strategy for the rationalization of protein crystallization.
| [
{
"created": "Thu, 1 Nov 2018 16:21:54 GMT",
"version": "v1"
}
] | 2022-05-03 | [
[
"Khan",
"Amir R.",
""
],
[
"James",
"Susan",
""
],
[
"Quinn",
"Michelle K.",
""
],
[
"Altan",
"Irem",
""
],
[
"Charbonneau",
"Patrick",
""
],
[
"McManus",
"Jennifer J.",
""
]
] | Protein crystal production is a major bottleneck for the structural characterisation of proteins. To advance beyond large-scale screening, rational strategies for protein crystallization are crucial. Understanding how chemical anisotropy (or patchiness) of the protein surface, due to the variety of amino acid side chains in contact with solvent, contributes to protein-protein contact formation in the crystal lattice is a major obstacle to predicting and optimising crystallization. The relative scarcity of sophisticated theoretical models that include sufficient detail to link collective behaviour, captured in protein phase diagrams, with molecular-level details, determined from high-resolution structural information, is a further barrier. Here we present two crystal structures for the P23TR36S mutant of gamma D-crystallin with opposite solubility behaviour: one melts when heated, the other when cooled. When combined with the protein phase diagram and a tailored patchy particle model, we show that a single temperature-dependent interaction is sufficient to stabilise the inverted solubility crystal. This contact, at the P23T substitution site, relates to a genetic cataract and reveals, at a molecular level, the origin of the lowered and retrograde solubility of the protein. Our results show that the approach employed here may present an alternative strategy for the rationalization of protein crystallization. |
1202.2702 | Alexander Teplukhin | Alexander V. Teplukhin, Valery I. Poltev and Victor B. Zhurkin | DNA bending and "structural" waters in major and minor grooves of
A-tracts. Monte Carlo computer simulations | The MS was written in 1996. Unpublished data; 46 pages, 9 figures, 5
tables | null | null | null | q-bio.BM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To elucidate the possible role of structural waters in stabilizing bent DNA,
various conformations of AT-containing decamers, (A5T5)2 and A10:T10, were
studied by Monte Carlo simulations. The duplexes were constrained to reproduce
the NMR inter-proton distances for the A-tracts at two temperatures: 5 and 35C.
Analysis of the water shell structures revealed a strong correlation between
the groove widths on the one hand, and the types of hydration patterns and
their probabilities on the other hand. Depending on the minor groove width, the
following patterns were observed in this groove: either an interstrand
"hydration spine", or interstrand two-water bridges, or a double ribbon of
intrastrand sugar-base "water strings". The hydration shell in the major groove is
less regular than in the minor groove, which agrees with crystallographic data.
As in previous studies, energetically advantageous hydration is found for the
A-tract conformations with narrow minor groove and wide major groove (B'-like
DNA), known to be correlated with DNA bending and curvature. In addition, our
calculations indicate that the advantage of such DNA conformations is coupled
with the increase in adenine N7 hydration in the major groove. As the major
groove is opened wider, its hydration shell is enriched by energetically
favorable "trident waters" hydrogen bonded to three hydrophilic centers
belonging to two adjacent adenines: two N7 atoms and one amino-proton H(N6).
Based on these results, we suggest that formation of the novel hydration
pattern in the major groove is one of the factors responsible for stabilization
of the B'-like conformations of A-tracts at low temperature in the absence of
dehydrating agents, leading in turn to the strong intrinsic curvature of DNA
containing alternating A-tracts and "mixed" GC-rich sequences.
| [
{
"created": "Mon, 13 Feb 2012 12:26:21 GMT",
"version": "v1"
}
] | 2012-02-14 | [
[
"Teplukhin",
"Alexander V.",
""
],
[
"Poltev",
"Valery I.",
""
],
[
"Zhurkin",
"Victor B.",
""
]
] | To elucidate the possible role of structural waters in stabilizing bent DNA, various conformations of AT-containing decamers, (A5T5)2 and A10:T10, were studied by Monte Carlo simulations. The duplexes were constrained to reproduce the NMR inter-proton distances for the A-tracts at two temperatures: 5 and 35C. Analysis of the water shell structures revealed a strong correlation between the groove widths on the one hand, and the types of hydration patterns and their probabilities on the other hand. Depending on the minor groove width, the following patterns were observed in this groove: either an interstrand "hydration spine", or interstrand two-water bridges, or a double ribbon of intrastrand sugar-base "water strings". The hydration shell in the major groove is less regular than in the minor groove, which agrees with crystallographic data. As in previous studies, energetically advantageous hydration is found for the A-tract conformations with narrow minor groove and wide major groove (B'-like DNA), known to be correlated with DNA bending and curvature. In addition, our calculations indicate that the advantage of such DNA conformations is coupled with the increase in adenine N7 hydration in the major groove. As the major groove is opened wider, its hydration shell is enriched by energetically favorable "trident waters" hydrogen bonded to three hydrophilic centers belonging to two adjacent adenines: two N7 atoms and one amino-proton H(N6). Based on these results, we suggest that formation of the novel hydration pattern in the major groove is one of the factors responsible for stabilization of the B'-like conformations of A-tracts at low temperature in the absence of dehydrating agents, leading in turn to the strong intrinsic curvature of DNA containing alternating A-tracts and "mixed" GC-rich sequences. |
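The hydration study in the entry above relies on Metropolis Monte Carlo sampling. As a generic sketch only, with a toy one-dimensional double-well energy standing in for the real water-DNA potential, the Metropolis accept/reject rule looks like this:

```python
import math, random

random.seed(2)

# Toy double-well energy: a stand-in for the real interaction potential,
# chosen only so that sampling concentrates in two wells at x = -1 and x = +1.
def energy(x):
    return (x**2 - 1.0)**2

kT, x, samples = 0.2, 0.0, []
for step in range(20000):
    x_new = x + random.uniform(-0.3, 0.3)              # small trial move
    # Accept with probability min(1, exp(-(E_new - E_old)/kT)); the min(0, .)
    # inside exp caps the acceptance probability at 1 and avoids overflow.
    if random.random() < math.exp(min(0.0, (energy(x) - energy(x_new)) / kT)):
        x = x_new
    samples.append(x)

mean_abs = sum(abs(s) for s in samples) / len(samples)
print(f"mean |x| = {mean_abs:.2f}")
```

At this temperature the chain spends almost all its time near the two wells, so the mean of |x| sits close to 1.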
1508.06966 | Andrew Mugler | Garrett D Potter, Tommy A Byrd, Andrew Mugler, Bo Sun | Communication shapes sensory response in multicellular networks | 23 pages, 18 figures | null | 10.1073/pnas.1605559113 | null | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collective sensing by interacting cells is observed in a variety of
biological systems, and yet a quantitative understanding of how sensory
information is collectively encoded is lacking. Here we investigate the
ATP-induced calcium dynamics of monolayers of fibroblast cells that communicate
via gap junctions. Combining experiments and stochastic modeling, we find that
increasing the ATP stimulus increases the propensity for calcium oscillations
despite large cell-to-cell variability. The model further predicts that the
oscillation propensity increases not only with the stimulus, but also with the
cell density due to increased communication. Experiments confirm this
prediction, showing that cell density modulates the collective sensory
response. We further implicate cell-cell communication by coculturing the
fibroblasts with cancer cells, which we show act as "defects" in the
communication network, thereby reducing the oscillation propensity. These
results suggest that multicellular networks sit at a point in parameter space
where cell-cell communication has a significant effect on the sensory response,
allowing cells to simultaneously respond to a sensory input and to the presence
of neighbors.
| [
{
"created": "Thu, 27 Aug 2015 18:47:36 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Mar 2016 18:23:54 GMT",
"version": "v2"
}
] | 2017-09-20 | [
[
"Potter",
"Garrett D",
""
],
[
"Byrd",
"Tommy A",
""
],
[
"Mugler",
"Andrew",
""
],
[
"Sun",
"Bo",
""
]
] | Collective sensing by interacting cells is observed in a variety of biological systems, and yet a quantitative understanding of how sensory information is collectively encoded is lacking. Here we investigate the ATP-induced calcium dynamics of monolayers of fibroblast cells that communicate via gap junctions. Combining experiments and stochastic modeling, we find that increasing the ATP stimulus increases the propensity for calcium oscillations despite large cell-to-cell variability. The model further predicts that the oscillation propensity increases not only with the stimulus, but also with the cell density due to increased communication. Experiments confirm this prediction, showing that cell density modulates the collective sensory response. We further implicate cell-cell communication by coculturing the fibroblasts with cancer cells, which we show act as "defects" in the communication network, thereby reducing the oscillation propensity. These results suggest that multicellular networks sit at a point in parameter space where cell-cell communication has a significant effect on the sensory response, allowing cells to simultaneously respond to a sensory input and to the presence of neighbors. |
1109.5979 | Rudolf A. Roemer | Chi-Tin Shih, Stephen A. Wells, Ching-Ling Hsu, Yun-Yin Cheng, Rudolf
A. R\"omer | The interplay of mutations and electronic properties in disease-related
genes | null | Scientific Reports 2, 272-9 (2012) | 10.1038/srep00272 | null | q-bio.OT physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic properties of DNA are believed to play a crucial role in many
phenomena in living organisms, for example the location of DNA lesions by base
excision repair (BER) glycosylases and the regulation of tumor-suppressor genes
such as p53 by detection of oxidative damage. However, the reproducible
measurement and modelling of charge migration through DNA molecules at the
nanometer scale remains a challenging and controversial subject even after more
than a decade of intense efforts. Here we show, by analysing 162
disease-related genes from a variety of medical databases with a total of
almost 20,000 observed pathogenic mutations, a significant difference in the
electronic properties of the population of observed mutations compared to the
set of all possible mutations. Our results have implications for the role of
the electronic properties of DNA in cellular processes, and hint at the
possibility of prediction, early diagnosis and detection of mutation hotspots.
| [
{
"created": "Tue, 27 Sep 2011 17:56:26 GMT",
"version": "v1"
}
] | 2012-02-21 | [
[
"Shih",
"Chi-Tin",
""
],
[
"Wells",
"Stephen A.",
""
],
[
"Hsu",
"Ching-Ling",
""
],
[
"Cheng",
"Yun-Yin",
""
],
[
"Römer",
"Rudolf A.",
""
]
] | Electronic properties of DNA are believed to play a crucial role in many phenomena in living organisms, for example the location of DNA lesions by base excision repair (BER) glycosylases and the regulation of tumor-suppressor genes such as p53 by detection of oxidative damage. However, the reproducible measurement and modelling of charge migration through DNA molecules at the nanometer scale remains a challenging and controversial subject even after more than a decade of intense efforts. Here we show, by analysing 162 disease-related genes from a variety of medical databases with a total of almost 20,000 observed pathogenic mutations, a significant difference in the electronic properties of the population of observed mutations compared to the set of all possible mutations. Our results have implications for the role of the electronic properties of DNA in cellular processes, and hint at the possibility of prediction, early diagnosis and detection of mutation hotspots. |
1006.2908 | Chris Adami | Bj{\o}rn {\O}stman, Arend Hintze, and Christoph Adami (KGI) | Critical properties of complex fitness landscapes | 7 pages, 6 figures, requires alifex11.sty. To appear in Proceedings
of 12th International Conference on Artificial Life | Proc. 12th Intern. Conf. on Artificial Life, H. Fellerman et al.,
eds. (MIT Press, 2010), pp. 126-132 | null | null | q-bio.PE q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolutionary adaptation is the process that increases the fit of a population
to the fitness landscape it inhabits. As a consequence, evolutionary dynamics
is shaped, constrained, and channeled by that fitness landscape. Much work has
been expended to understand the evolutionary dynamics of adapting populations,
but much less is known about the structure of the landscapes. Here, we study
the global and local structure of complex fitness landscapes of interacting
loci that describe protein folds or sets of interacting genes forming pathways
or modules. We find that in these landscapes, high peaks are more likely to be
found near other high peaks, corroborating Kauffman's "Massif Central"
hypothesis. We study the clusters of peaks as a function of the ruggedness of
the landscape and find that this clustering allows peaks to form interconnected
networks. These networks undergo a percolation phase transition as a function
of minimum peak height, which indicates that evolutionary trajectories that
take no more than two mutations to shift from peak to peak can span the entire
genetic space. These networks have implications for evolution in rugged
landscapes, allowing adaptation to proceed after a local fitness peak has been
ascended.
| [
{
"created": "Tue, 15 Jun 2010 07:31:50 GMT",
"version": "v1"
}
] | 2010-12-17 | [
[
"Østman",
"Bjørn",
"",
"KGI"
],
[
"Hintze",
"Arend",
"",
"KGI"
],
[
"Adami",
"Christoph",
"",
"KGI"
]
] | Evolutionary adaptation is the process that increases the fit of a population to the fitness landscape it inhabits. As a consequence, evolutionary dynamics is shaped, constrained, and channeled by that fitness landscape. Much work has been expended to understand the evolutionary dynamics of adapting populations, but much less is known about the structure of the landscapes. Here, we study the global and local structure of complex fitness landscapes of interacting loci that describe protein folds or sets of interacting genes forming pathways or modules. We find that in these landscapes, high peaks are more likely to be found near other high peaks, corroborating Kauffman's "Massif Central" hypothesis. We study the clusters of peaks as a function of the ruggedness of the landscape and find that this clustering allows peaks to form interconnected networks. These networks undergo a percolation phase transition as a function of minimum peak height, which indicates that evolutionary trajectories that take no more than two mutations to shift from peak to peak can span the entire genetic space. These networks have implications for evolution in rugged landscapes, allowing adaptation to proceed after a local fitness peak has been ascended. |
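The peak statistics discussed in the entry above come from rugged landscapes of interacting loci. A minimal sketch of a Kauffman-style NK landscape and a census of its local peaks, using illustrative values of N and K rather than the paper's settings, is:

```python
import itertools, random

random.seed(0)
N, K = 10, 3  # genome length and epistasis parameter (illustrative sizes)

# One random lookup table per locus: each locus's fitness contribution
# depends on its own state plus its K right-hand neighbours (circular).
tables = [{} for _ in range(N)]

def contribution(i, g):
    key = tuple(g[(i + j) % N] for j in range(K + 1))
    if key not in tables[i]:
        tables[i][key] = random.random()  # memoized so repeated calls agree
    return tables[i][key]

def fitness(g):
    return sum(contribution(i, g) for i in range(N)) / N

def neighbours(g):
    for i in range(N):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

# A local peak is a genotype no single mutation can improve.
peaks = [g for g in itertools.product((0, 1), repeat=N)
         if all(fitness(g) >= fitness(n) for n in neighbours(g))]
print(f"{len(peaks)} local peaks on a 2^{N} landscape")
```

Increasing K makes the landscape more rugged and the peak count grows accordingly, which is the ruggedness knob the abstract refers to.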
0907.2005 | Song Yang | Herbert Sauro, Song Yang | Fundamental Dynamic Units: Feedforward Networks and Adjustable Gates | 26 pages, 15 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The activation/repression of a given gene is typically regulated by multiple
transcription factors (TFs) that bind at the gene regulatory region and recruit
RNA polymerase (RNAP). The interactions between the promoter region and TFs and
between different TFs specify the dynamic responses of the gene under different
physiological conditions. By choosing specific regulatory interactions with up
to three transcription factors, we designed several functional motifs, each of
which is shown to perform a certain function and can be integrated into larger
networks. We analyzed three kinds of networks: (i) Motifs derived from
incoherent feedforward motifs, which behave as `amplitude filters', or
`concentration detectors'. These motifs respond maximally to input
transcription factors with concentrations within a certain range. From these
motifs homeostatic and pulse generating networks are derived. (ii) Tunable
network motifs, which can behave as oscillators or switches for low and high
concentrations of an input transcription factor, respectively. (iii)
Transcription factor controlled adjustable gates, which switch between AND/OR
gate characteristics, depending on the concentration of the input transcription
factor. This study has demonstrated the utility of feedforward networks and the
flexibility of specific transcriptional binding kinetics in generating
novel behaviors. The flexibility of feedforward networks as dynamic units may
explain the apparent frequency with which such motifs are found in real biological
networks.
| [
{
"created": "Sun, 12 Jul 2009 04:49:24 GMT",
"version": "v1"
}
] | 2009-07-14 | [
[
"Sauro",
"Herbert",
""
],
[
"Yang",
"Song",
""
]
] | The activation/repression of a given gene is typically regulated by multiple transcription factors (TFs) that bind at the gene regulatory region and recruit RNA polymerase (RNAP). The interactions between the promoter region and TFs and between different TFs specify the dynamic responses of the gene under different physiological conditions. By choosing specific regulatory interactions with up to three transcription factors, we designed several functional motifs, each of which is shown to perform a certain function and can be integrated into larger networks. We analyzed three kinds of networks: (i) Motifs derived from incoherent feedforward motifs, which behave as `amplitude filters', or `concentration detectors'. These motifs respond maximally to input transcription factors with concentrations within a certain range. From these motifs homeostatic and pulse generating networks are derived. (ii) Tunable network motifs, which can behave as oscillators or switches for low and high concentrations of an input transcription factor, respectively. (iii) Transcription factor controlled adjustable gates, which switch between AND/OR gate characteristics, depending on the concentration of the input transcription factor. This study has demonstrated the utility of feedforward networks and the flexibility of specific transcriptional binding kinetics in generating novel behaviors. The flexibility of feedforward networks as dynamic units may explain the apparent frequency with which such motifs are found in real biological networks. |
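The "amplitude filter" behaviour of the incoherent feedforward motif described in the entry above can be illustrated with a toy steady-state calculation: the input X activates the target Z while also producing a repressor Y, so Z responds maximally only at intermediate X. The Hill parameters below are assumptions for illustration, not fitted promoter values:

```python
import numpy as np

def z_steady(x, kx=1.0, ky=1.0, n=4):
    activation = x / (kx + x)              # X directly activating Z
    y = x                                  # repressor Y tracks X at steady state
    repression = ky**n / (ky**n + y**n)    # Y repressing Z (Hill coefficient n)
    return activation * repression

x = np.logspace(-2, 2, 200)
z = z_steady(x)
x_peak = x[np.argmax(z)]
print(f"maximal response at X = {x_peak:.2f}")
```

The product of an increasing activation curve and a decreasing repression curve is band-pass shaped, which is what makes the motif act as a concentration detector.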
2202.10021 | Daniele Marinazzo | Katharina Wegner, Charles R.E. Wilson, Emmanuel Procyk, Karl J.
Friston, Frederik Van de Steen, Dimitris A. Pinotsis, and Daniele Marinazzo | Frontal effective connectivity increases with task demands and time on
task: a Dynamic Causal Model of electrocorticogram in macaque monkeys | null | Neurons, Behavior, Data analysis, and Theory, Volume 6, Issue 1,
2023 | 10.51628/001c.68433 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | We apply Dynamic Causal Models to electrocorticogram recordings from two
macaque monkeys performing a problem-solving task that engages working memory,
and induces time-on-task effects. We thus provide a computational account of
changes in effective connectivity within two regions of the fronto-parietal
network, the dorsolateral prefrontal cortex and the pre-supplementary motor
area. We find that forward connections between the two regions increased in
strength when task demands increased, and as the experimental session
progressed. Similarities in the effects of task demands and time on task allow
us to interpret changes in frontal connectivity in terms of increased
attentional effort allocation that compensates for cognitive fatigue.
| [
{
"created": "Mon, 21 Feb 2022 07:26:11 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Jan 2023 11:22:09 GMT",
"version": "v2"
},
{
"created": "Wed, 1 Feb 2023 08:16:21 GMT",
"version": "v3"
}
] | 2023-02-02 | [
[
"Wegner",
"Katharina",
""
],
[
"Wilson",
"Charles R. E.",
""
],
[
"Procyk",
"Emmanuel",
""
],
[
"Friston",
"Karl J.",
""
],
[
"Van de Steen",
"Frederik",
""
],
[
"Pinotsis",
"Dimitris A.",
""
],
[
"Marinazzo",
"Daniele",
""
]
] | We apply Dynamic Causal Models to electrocorticogram recordings from two macaque monkeys performing a problem-solving task that engages working memory, and induces time-on-task effects. We thus provide a computational account of changes in effective connectivity within two regions of the fronto-parietal network, the dorsolateral prefrontal cortex and the pre-supplementary motor area. We find that forward connections between the two regions increased in strength when task demands increased, and as the experimental session progressed. Similarities in the effects of task demands and time on task allow us to interpret changes in frontal connectivity in terms of increased attentional effort allocation that compensates for cognitive fatigue. |
2110.07746 | Zhou Fang | Zhou Fang, Ankit Gupta, Mustafa Khammash | Convergence of regularized particle filters for stochastic reaction
networks | 28 pages, 6 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Filtering for stochastic reaction networks (SRNs) is an important problem in
systems/synthetic biology aiming to estimate the state of unobserved chemical
species. A good solution to it can provide scientists valuable information
about the hidden dynamic state and enable optimal feedback control. Usually,
the model parameters need to be inferred simultaneously with state variables,
and a conventional particle filter can fail to solve this problem accurately
due to sample degeneracy. In this case, the regularized particle filter (RPF)
is preferred to the conventional ones, as the RPF can mitigate sample
degeneracy by perturbing particles with artificial noise. However, the
artificial noise introduces an additional bias to the estimate, and, thus, it
is questionable whether the RPF can provide reliable results for SRNs. In this
paper, we aim to identify conditions under which the RPF converges to the exact
filter in the filtering problem determined by a bimolecular network. First, we
establish computationally efficient RPFs for SRNs on different scales using
different dynamical models, including the continuous-time Markov process,
tau-leaping model, and piecewise deterministic process. Then, by parameter
sensitivity analyses, we show that the established RPFs converge to the exact
filters if all reactions leading to an increase of the molecular population
have linearly growing propensities and some other mild conditions are satisfied
simultaneously. This ensures the performance of the RPF for a large class of
SRNs, and several numerical examples are presented to illustrate our results.
| [
{
"created": "Thu, 14 Oct 2021 22:14:10 GMT",
"version": "v1"
}
] | 2021-10-18 | [
[
"Fang",
"Zhou",
""
],
[
"Gupta",
"Ankit",
""
],
[
"Khammash",
"Mustafa",
""
]
] | Filtering for stochastic reaction networks (SRNs) is an important problem in systems/synthetic biology aiming to estimate the state of unobserved chemical species. A good solution to it can provide scientists valuable information about the hidden dynamic state and enable optimal feedback control. Usually, the model parameters need to be inferred simultaneously with state variables, and a conventional particle filter can fail to solve this problem accurately due to sample degeneracy. In this case, the regularized particle filter (RPF) is preferred to the conventional ones, as the RPF can mitigate sample degeneracy by perturbing particles with artificial noise. However, the artificial noise introduces an additional bias to the estimate, and, thus, it is questionable whether the RPF can provide reliable results for SRNs. In this paper, we aim to identify conditions under which the RPF converges to the exact filter in the filtering problem determined by a bimolecular network. First, we establish computationally efficient RPFs for SRNs on different scales using different dynamical models, including the continuous-time Markov process, tau-leaping model, and piecewise deterministic process. Then, by parameter sensitivity analyses, we show that the established RPFs converge to the exact filters if all reactions leading to an increase of the molecular population have linearly growing propensities and some other mild conditions are satisfied simultaneously. This ensures the performance of the RPF for a large class of SRNs, and several numerical examples are presented to illustrate our results. |
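The regularization step that distinguishes the RPF from a bootstrap filter in the entry above, jittering resampled particles with kernel noise, can be sketched on a toy problem: estimating a static Poisson rate, where a plain particle filter degenerates because static parameters never move. The Poisson observation model and the jitter bandwidth are illustrative choices, not the networks or kernel rule analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

true_theta, T, N = 4.0, 200, 1000
obs = rng.poisson(true_theta, size=T)     # synthetic count observations

theta = rng.uniform(0.1, 10.0, size=N)    # initial particle cloud over the rate
for y in obs:
    logw = y * np.log(theta) - theta      # Poisson log-likelihood (up to a constant)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)      # multinomial resampling
    theta = theta[idx]
    # Regularization: perturb resampled particles with Gaussian kernel noise,
    # reflecting at zero to keep rates positive. Without this step the cloud
    # collapses onto a few values (sample degeneracy).
    theta = np.abs(theta + 0.05 * rng.standard_normal(N))

print(f"posterior mean rate: {theta.mean():.2f}")
```

The jitter keeps particle diversity at the cost of a small bias, which is exactly the trade-off whose convergence the paper investigates.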
1209.5466 | Jeffrey Ross-Ibarra | Lisa B. Kanizay, Tanja Pyh\"aj\"arvi, Elizabeth G. Lowry, Matthew B.
Hufford, Daniel G. Peterson, Jeffrey Ross-Ibarra, R. Kelly Dawe | Diversity and abundance of the Abnormal chromosome 10 meiotic drive
complex in Zea mays | null | Heredity 110, 570-577 (June 2013) | 10.1038/hdy.2013.2 | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maize Abnormal chromosome 10 (Ab10) contains a classic meiotic drive system
that exploits asymmetry of meiosis to preferentially transmit itself and other
chromosomes containing specialized heterochromatic regions called knobs. The
structure and diversity of the Ab10 meiotic drive haplotype is poorly
understood. We developed a BAC library from an Ab10 line and used the data to
develop sequence-based markers, focusing on the proximal portion of the
haplotype that shows partial homology to normal chromosome 10. These molecular
and additional cytological data demonstrate that two previously identified Ab10
variants (Ab10-I and Ab10-II) share a common origin. Dominant PCR markers were
used with FISH to assay 160 diverse teosinte and maize landrace populations
from across the Americas, resulting in the identification of a previously
unknown but prevalent form of Ab10 (Ab10-III). We find that Ab10 occurs in at
least 75% of teosinte populations at a mean frequency of 15%. Ab10 was also
found in 13% of the maize landraces, but does not appear to be fixed in any
wild or cultivated population. Quantitative analyses suggest that the abundance
and distribution of Ab10 is governed by a complex combination of intrinsic
fitness effects as well as extrinsic environmental variability.
| [
{
"created": "Tue, 25 Sep 2012 01:10:25 GMT",
"version": "v1"
}
] | 2013-07-30 | [
[
"Kanizay",
"Lisa B.",
""
],
[
"Pyhäjärvi",
"Tanja",
""
],
[
"Lowry",
"Elizabeth G.",
""
],
[
"Hufford",
"Matthew B.",
""
],
[
"Peterson",
"Daniel G.",
""
],
[
"Ross-Ibarra",
"Jeffrey",
""
],
[
"Dawe",
"R. Kelly",
""
]
] | Maize Abnormal chromosome 10 (Ab10) contains a classic meiotic drive system that exploits asymmetry of meiosis to preferentially transmit itself and other chromosomes containing specialized heterochromatic regions called knobs. The structure and diversity of the Ab10 meiotic drive haplotype is poorly understood. We developed a BAC library from an Ab10 line and used the data to develop sequence-based markers, focusing on the proximal portion of the haplotype that shows partial homology to normal chromosome 10. These molecular and additional cytological data demonstrate that two previously identified Ab10 variants (Ab10-I and Ab10-II) share a common origin. Dominant PCR markers were used with FISH to assay 160 diverse teosinte and maize landrace populations from across the Americas, resulting in the identification of a previously unknown but prevalent form of Ab10 (Ab10-III). We find that Ab10 occurs in at least 75% of teosinte populations at a mean frequency of 15%. Ab10 was also found in 13% of the maize landraces, but does not appear to be fixed in any wild or cultivated population. Quantitative analyses suggest that the abundance and distribution of Ab10 is governed by a complex combination of intrinsic fitness effects as well as extrinsic environmental variability. |
0901.4598 | Partha Mitra | Jason W. Bohland, Caizhi Wu, Helen Barbas, Hemant Bokil, Mihail Bota,
Hans C. Breiter, Hollis T. Cline, John C. Doyle, Peter J. Freed, Ralph J.
Greenspan, Suzanne N. Haber, Michael Hawrylycz, Daniel G. Herrera, Claus C.
Hilgetag, Z. Josh Huang, Allan Jones, Edward G. Jones, Harvey J. Karten,
David Kleinfeld, Rolf Kotter, Henry A. Lester, John M. Lin, Brett D. Mensh,
Shawn Mikula, Jaak Panksepp, Joseph L. Price, Joseph Safdieh, Clifford B.
Saper, Nicholas D. Schiff, Jeremy D. Schmahmann, Bruce W. Stillman, Karel
Svoboda, Larry W. Swanson, Arthur W. Toga, David C. Van Essen, James D.
Watson, Partha P. Mitra | A proposal for a coordinated effort for the determination of brainwide
neuroanatomical connectivity in model organisms at a mesoscopic scale | 41 pages | null | 10.1371/journal.pcbi.1000334 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this era of complete genomes, our knowledge of neuroanatomical circuitry
remains surprisingly sparse. Such knowledge is however critical both for basic
and clinical research into brain function. Here we advocate for a concerted
effort to fill this gap, through systematic, experimental mapping of neural
circuits at a mesoscopic scale of resolution suitable for comprehensive,
brain-wide coverage, using injections of tracers or viral vectors. We detail
the scientific and medical rationale and briefly review existing knowledge and
experimental techniques. We define a set of desiderata, including brain-wide
coverage; validated and extensible experimental techniques suitable for
standardization and automation; centralized, open access data repository;
compatibility with existing resources, and tractability with current
informatics technology. We discuss a hypothetical but tractable plan for mouse,
additional efforts for the macaque, and technique development for human. We
estimate that the mouse connectivity project could be completed within five
years with a comparatively modest budget.
| [
{
"created": "Thu, 29 Jan 2009 02:56:17 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Bohland",
"Jason W.",
""
],
[
"Wu",
"Caizhi",
""
],
[
"Barbas",
"Helen",
""
],
[
"Bokil",
"Hemant",
""
],
[
"Bota",
"Mihail",
""
],
[
"Breiter",
"Hans C.",
""
],
[
"Cline",
"Hollis T.",
""
],
[
"Doyle",
"John C.",
""
],
[
"Freed",
"Peter J.",
""
],
[
"Greenspan",
"Ralph J.",
""
],
[
"Haber",
"Suzanne N.",
""
],
[
"Hawrylycz",
"Michael",
""
],
[
"Herrera",
"Daniel G.",
""
],
[
"Hilgetag",
"Claus C.",
""
],
[
"Huang",
"Z. Josh",
""
],
[
"Jones",
"Allan",
""
],
[
"Jones",
"Edward G.",
""
],
[
"Karten",
"Harvey J.",
""
],
[
"Kleinfeld",
"David",
""
],
[
"Kotter",
"Rolf",
""
],
[
"Lester",
"Henry A.",
""
],
[
"Lin",
"John M.",
""
],
[
"Mensh",
"Brett D.",
""
],
[
"Mikula",
"Shawn",
""
],
[
"Panksepp",
"Jaak",
""
],
[
"Price",
"Joseph L.",
""
],
[
"Safdieh",
"Joseph",
""
],
[
"Saper",
"Clifford B.",
""
],
[
"Schiff",
"Nicholas D.",
""
],
[
"Schmahmann",
"Jeremy D.",
""
],
[
"Stillman",
"Bruce W.",
""
],
[
"Svoboda",
"Karel",
""
],
[
"Swanson",
"Larry W.",
""
],
[
"Toga",
"Arthur W.",
""
],
[
"Van Essen",
"David C.",
""
],
[
"Watson",
"James D.",
""
],
[
"Mitra",
"Partha P.",
""
]
] | In this era of complete genomes, our knowledge of neuroanatomical circuitry remains surprisingly sparse. Such knowledge is however critical both for basic and clinical research into brain function. Here we advocate for a concerted effort to fill this gap, through systematic, experimental mapping of neural circuits at a mesoscopic scale of resolution suitable for comprehensive, brain-wide coverage, using injections of tracers or viral vectors. We detail the scientific and medical rationale and briefly review existing knowledge and experimental techniques. We define a set of desiderata, including brain-wide coverage; validated and extensible experimental techniques suitable for standardization and automation; centralized, open access data repository; compatibility with existing resources, and tractability with current informatics technology. We discuss a hypothetical but tractable plan for mouse, additional efforts for the macaque, and technique development for human. We estimate that the mouse connectivity project could be completed within five years with a comparatively modest budget. |
q-bio/0404014 | Artem Badasyan | Vladimir F. Morozov, Artem V. Badasyan, Arsen V. Grigoryan, Mihran A.
Sahakyan, Evgeni Sh. Mamasakhlisov | Stacking and Hydrogen Bonding. DNA Cooperativity at Melting | 14 pages, 5 figures. Submitted to Biopolymers | Biopolymers 75, 434 (2004) | 10.1002/bip.20143 | null | q-bio.BM q-bio.GN | null | By taking into account base-base stacking interactions we improve the
Generalized Model of Polypeptide Chain (GMPC). Based on a one-dimensional
Potts-like model with many-particle interactions, the GMPC describes the
helix-coil transition in both polypeptides and polynucleotides. In the
framework of the GMPC we show that correctly introduced nearest-neighbor
stacking interactions against the background of hydrogen bonding lead to
increased stability (melting temperature) and, unexpectedly, to decreased
cooperativity (maximal correlation length). The increase in stability is
explained as due to an additional stabilizing interaction (stacking) and the
surprising decrease in cooperativity is seen as a result of mixing of
contributions of hydrogen bonding and stacking.
| [
{
"created": "Mon, 12 Apr 2004 12:40:38 GMT",
"version": "v1"
}
] | 2012-08-22 | [
[
"Morozov",
"Vladimir F.",
""
],
[
"Badasyan",
"Artem V.",
""
],
[
"Grigoryan",
"Arsen V.",
""
],
[
"Sahakyan",
"Mihran A.",
""
],
[
"Mamasakhlisov",
"Evgeni Sh.",
""
]
] | By taking into account base-base stacking interactions we improve the Generalized Model of Polypeptide Chain (GMPC). Based on a one-dimensional Potts-like model with many-particle interactions, the GMPC describes the helix-coil transition in both polypeptides and polynucleotides. In the framework of the GMPC we show that correctly introduced nearest-neighbor stacking interactions against the background of hydrogen bonding lead to increased stability (melting temperature) and, unexpectedly, to decreased cooperativity (maximal correlation length). The increase in stability is explained as due to an additional stabilizing interaction (stacking) and the surprising decrease in cooperativity is seen as a result of mixing of contributions of hydrogen bonding and stacking. |
1811.01043 | Fatma Deniz PhD | Michael C.-K. Wu and Fatma Deniz and Ryan J. Prenger and Jack L.
Gallant | The unified maximum a posteriori (MAP) framework for neuronal system
identification | affiliations changed | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The functional relationship between an input and a sensory neuron's response
can be described by the neuron's stimulus-response mapping function. A general
approach for characterizing the stimulus-response mapping function is called
system identification. Many different names have been used for the
stimulus-response mapping function: kernel or transfer function, transducer,
spatiotemporal receptive field. Many algorithms have been developed to estimate
a neuron's mapping function from an ensemble of stimulus-response pairs. These
include the spike-triggered average, normalized reverse correlation, linearized
reverse correlation, ridge regression, local spectral reverse correlation,
spike-triggered covariance, artificial neural networks, maximally informative
dimensions, kernel regression, boosting, and models based on leaky
integrate-and-fire neurons. Because many of these system identification
algorithms were developed in other disciplines, they seem very different
superficially and bear little relationship with each other. Each algorithm
makes different assumptions about the neuron and how the data is generated.
Without a unified framework it is difficult to select the most suitable
algorithm for estimating the neuron's mapping function. In this review, we
present a unified framework for describing these algorithms called maximum a
posteriori estimation (MAP). In the MAP framework, the implicit assumptions
built into any system identification algorithm are made explicit in three MAP
constituents: model class, noise distributions, and priors. Understanding the
interplay between these three MAP constituents will simplify the task of
selecting the most appropriate algorithms for a given data set. The MAP
framework can also facilitate the development of novel system identification
algorithms by incorporating biophysically plausible assumptions and mechanisms
into the MAP constituents.
| [
{
"created": "Wed, 31 Oct 2018 15:13:09 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Nov 2018 20:16:30 GMT",
"version": "v2"
}
] | 2018-11-08 | [
[
"Wu",
"Michael C. -K.",
""
],
[
"Deniz",
"Fatma",
""
],
[
"Prenger",
"Ryan J.",
""
],
[
"Gallant",
"Jack L.",
""
]
] | The functional relationship between an input and a sensory neuron's response can be described by the neuron's stimulus-response mapping function. A general approach for characterizing the stimulus-response mapping function is called system identification. Many different names have been used for the stimulus-response mapping function: kernel or transfer function, transducer, spatiotemporal receptive field. Many algorithms have been developed to estimate a neuron's mapping function from an ensemble of stimulus-response pairs. These include the spike-triggered average, normalized reverse correlation, linearized reverse correlation, ridge regression, local spectral reverse correlation, spike-triggered covariance, artificial neural networks, maximally informative dimensions, kernel regression, boosting, and models based on leaky integrate-and-fire neurons. Because many of these system identification algorithms were developed in other disciplines, they seem very different superficially and bear little relationship with each other. Each algorithm makes different assumptions about the neuron and how the data is generated. Without a unified framework it is difficult to select the most suitable algorithm for estimating the neuron's mapping function. In this review, we present a unified framework for describing these algorithms called maximum a posteriori estimation (MAP). In the MAP framework, the implicit assumptions built into any system identification algorithm are made explicit in three MAP constituents: model class, noise distributions, and priors. Understanding the interplay between these three MAP constituents will simplify the task of selecting the most appropriate algorithms for a given data set. The MAP framework can also facilitate the development of novel system identification algorithms by incorporating biophysically plausible assumptions and mechanisms into the MAP constituents. |
2210.08949 | Li Chen | Jing Zhang, Zhao Li, Jiqiang Zhang, Lin Ma, Guozhong Zheng and Li Chen | Oscillatory cooperation prevalence emerges from misperception | 8 pages, 6 figures | Physica A 617, 128682 (2023) | 10.1016/j.physa.2023.128682 | null | q-bio.PE cond-mat.stat-mech nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Oscillatory behaviors are ubiquitous in nature and the human society.
However, most previous works fail to reproduce them in the two-strategy
game-theoretical models. Here we show that oscillatory behaviors naturally
emerge if incomplete information is incorporated into the cooperation evolution
of a non-Markov model. Specifically, we consider a population playing
prisoner's dilemma game, where each individual can only probabilistically get
access to their neighbors' payoff information and store them within their
memory with a given length. They make their decisions based upon these
memories. Interestingly, we find that the level of cooperation generally does not
stabilize but instead exhibits quasi-periodic oscillations, and this observation is
strengthened for a longer memory and a smaller information acquisition
probability. The mechanism uncovered shows that there are misperceived payoffs
about the player's neighborhood, facilitating the growth of cooperators and
defectors at different stages that leads to oscillatory behaviors as a result.
Our findings are robust to the underlying structure of the population. Given
the omnipresence of incomplete information, our findings may provide a
plausible explanation for the phenomenon of oscillatory behaviors in the real
world.
| [
{
"created": "Mon, 17 Oct 2022 11:26:40 GMT",
"version": "v1"
}
] | 2023-06-27 | [
[
"Zhang",
"Jing",
""
],
[
"Li",
"Zhao",
""
],
[
"Zhang",
"Jiqiang",
""
],
[
"Ma",
"Lin",
""
],
[
"Zheng",
"Guozhong",
""
],
[
"Chen",
"Li",
""
]
] | Oscillatory behaviors are ubiquitous in nature and the human society. However, most previous works fail to reproduce them in the two-strategy game-theoretical models. Here we show that oscillatory behaviors naturally emerge if incomplete information is incorporated into the cooperation evolution of a non-Markov model. Specifically, we consider a population playing prisoner's dilemma game, where each individual can only probabilistically get access to their neighbors' payoff information and store them within their memory with a given length. They make their decisions based upon these memories. Interestingly, we find that the level of cooperation generally does not stabilize but instead exhibits quasi-periodic oscillations, and this observation is strengthened for a longer memory and a smaller information acquisition probability. The mechanism uncovered shows that there are misperceived payoffs about the player's neighborhood, facilitating the growth of cooperators and defectors at different stages that leads to oscillatory behaviors as a result. Our findings are robust to the underlying structure of the population. Given the omnipresence of incomplete information, our findings may provide a plausible explanation for the phenomenon of oscillatory behaviors in the real world.
1905.00378 | Will Xiao | Will Xiao and Gabriel Kreiman | Gradient-free activation maximization for identifying effective stimuli | 16 pages, 8 figures, 3 tables | PLOS Comp Biol 2020 16(6): e1007973 | 10.1371/journal.pcbi.1007973 | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental question for understanding brain function is what types of
stimuli drive neurons to fire. In visual neuroscience, this question has also
been posed as characterizing the receptive field of a neuron. The search for
effective stimuli has traditionally been based on a combination of insights
from previous studies, intuition, and luck. Recently, the same question has
emerged in the study of units in convolutional neural networks (ConvNets), and
together with this question a family of solutions were developed that are
generally referred to as "feature visualization by activation maximization."
We sought to bring in tools and techniques developed for studying ConvNets to
the study of biological neural networks. However, one key difference that
impedes direct translation of tools is that gradients can be obtained from
ConvNets using backpropagation, but such gradients are not available from the
brain. To circumvent this problem, we developed a method for gradient-free
activation maximization by combining a generative neural network with a genetic
algorithm. We termed this method XDream (EXtending DeepDream with real-time
evolution for activation maximization), and we have shown that this method can
reliably create strong stimuli for neurons in the macaque visual cortex (Ponce
et al., 2019). In this paper, we describe extensive experiments characterizing
the XDream method by using ConvNet units as in silico models of neurons. We
show that XDream is applicable across network layers, architectures, and
training sets; examine design choices in the algorithm; and provide practical
guides for choosing hyperparameters in the algorithm. XDream is an efficient
algorithm for uncovering neuronal tuning preferences in black-box networks
using a vast and diverse stimulus space.
| [
{
"created": "Wed, 1 May 2019 16:56:57 GMT",
"version": "v1"
}
] | 2020-09-01 | [
[
"Xiao",
"Will",
""
],
[
"Kreiman",
"Gabriel",
""
]
] | A fundamental question for understanding brain function is what types of stimuli drive neurons to fire. In visual neuroscience, this question has also been posed as characterizing the receptive field of a neuron. The search for effective stimuli has traditionally been based on a combination of insights from previous studies, intuition, and luck. Recently, the same question has emerged in the study of units in convolutional neural networks (ConvNets), and together with this question a family of solutions were developed that are generally referred to as "feature visualization by activation maximization." We sought to bring in tools and techniques developed for studying ConvNets to the study of biological neural networks. However, one key difference that impedes direct translation of tools is that gradients can be obtained from ConvNets using backpropagation, but such gradients are not available from the brain. To circumvent this problem, we developed a method for gradient-free activation maximization by combining a generative neural network with a genetic algorithm. We termed this method XDream (EXtending DeepDream with real-time evolution for activation maximization), and we have shown that this method can reliably create strong stimuli for neurons in the macaque visual cortex (Ponce et al., 2019). In this paper, we describe extensive experiments characterizing the XDream method by using ConvNet units as in silico models of neurons. We show that XDream is applicable across network layers, architectures, and training sets; examine design choices in the algorithm; and provide practical guides for choosing hyperparameters in the algorithm. XDream is an efficient algorithm for uncovering neuronal tuning preferences in black-box networks using a vast and diverse stimulus space.
2310.09167 | Florian F\"uhrer | Florian F\"uhrer, Andrea Gruber, Holger Diedam, Andreas H. G\"oller,
Stephan Menz, Sebastian Schneckener | A Deep Neural Network -- Mechanistic Hybrid Model to Predict
Pharmacokinetics in Rat | Version accepted by Journal of Computer-Aided Molecular Design | null | null | null | q-bio.QM cs.CE cs.LG q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | An important aspect in the development of small molecules as drugs or
agro-chemicals is their systemic availability after intravenous and oral
administration. The prediction of the systemic availability from the chemical
structure of a potential candidate is highly desirable, as it allows drug or
agrochemical development to focus on compounds with a favorable kinetic
profile. However, such predictions are challenging as the availability is the
result of the complex interplay between molecular properties, biology and
physiology and training data is rare. In this work we improve the hybrid model
developed earlier [1]. We reduce the median fold change error for the total
oral exposure from 2.85 to 2.35 and for intravenous administration from 1.95 to
1.62. This is achieved by training on a larger data set, improving the neural
network architecture as well as the parametrization of the mechanistic model.
Further, we extend our approach to predict additional endpoints and to handle
different covariates, like sex and dosage form. In contrast to a pure machine
learning model, our model is able to predict new end points on which it has not
been trained. We demonstrate this feature by predicting the exposure over the
first 24h, while the model has only been trained on the total exposure.
| [
{
"created": "Fri, 13 Oct 2023 15:01:55 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jan 2024 11:48:54 GMT",
"version": "v2"
}
] | 2024-01-03 | [
[
"Führer",
"Florian",
""
],
[
"Gruber",
"Andrea",
""
],
[
"Diedam",
"Holger",
""
],
[
"Göller",
"Andreas H.",
""
],
[
"Menz",
"Stephan",
""
],
[
"Schneckener",
"Sebastian",
""
]
] | An important aspect in the development of small molecules as drugs or agro-chemicals is their systemic availability after intravenous and oral administration. The prediction of the systemic availability from the chemical structure of a potential candidate is highly desirable, as it allows drug or agrochemical development to focus on compounds with a favorable kinetic profile. However, such predictions are challenging as the availability is the result of the complex interplay between molecular properties, biology and physiology and training data is rare. In this work we improve the hybrid model developed earlier [1]. We reduce the median fold change error for the total oral exposure from 2.85 to 2.35 and for intravenous administration from 1.95 to 1.62. This is achieved by training on a larger data set, improving the neural network architecture as well as the parametrization of the mechanistic model. Further, we extend our approach to predict additional endpoints and to handle different covariates, like sex and dosage form. In contrast to a pure machine learning model, our model is able to predict new endpoints on which it has not been trained. We demonstrate this feature by predicting the exposure over the first 24h, while the model has only been trained on the total exposure.
1205.4759 | Bjoern Peters | David A. Ostrov, Barry J. Grant, Yuri A. Pompeu, John Sidney, Mikkel
Harndahl, Scott Southwood, Carla Oseroff, Shun Lu, Jean Jakoncic, Cesar
Augusto. F. de Oliveira, Lun Yang, Hu Mei, Leming Shi, Jeffrey Shabanowitz,
A. Michelle English, Amanda Wriston, Andrew Lucas, Elizabeth Phillips, Simon
Mallal, Howard Grey, Alessandro Sette, Donald F. Hunt, Soren Buus and Bjoern
Peters | Drug hypersensitivity caused by alteration of the MHC-presented
self-peptide repertoire | null | null | 10.1073/pnas.1207934109 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Idiosyncratic adverse drug reactions are unpredictable, dose independent and
potentially life threatening; this makes them a major factor contributing to
the cost and uncertainty of drug development. Clinical data suggest that many
such reactions involve immune mechanisms, and genetic association studies have
identified strong linkage between drug hypersensitivity reactions to several
drugs and specific HLA alleles. One of the strongest such genetic associations
found has been for the antiviral drug abacavir, which causes severe adverse
reactions exclusively in patients expressing the HLA molecular variant B*57:01.
Abacavir adverse reactions were recently shown to be driven by drug-specific
activation of cytokine-producing, cytotoxic CD8+ T cells that required
HLA-B*57:01 molecules for their function. However, the mechanism by which
abacavir induces this pathologic T cell response remains unclear. Here we show
that abacavir can bind within the F-pocket of the peptide-binding groove of
HLA-B*57:01 thereby altering its specificity. This supports a novel explanation
for HLA-linked idiosyncratic adverse drug reactions; namely that drugs can
alter the repertoire of self-peptides presented to T cells thus causing the
equivalent of an alloreactive T cell response. Indeed, we identified specific
self-peptides that are presented only in the presence of abacavir, and that
were recognized by T cells of hypersensitive patients. The assays we have
established can be applied to test additional compounds with suspected HLA
linked hypersensitivities in vitro. Where successful, these assays could speed
up the discovery and mechanistic understanding of HLA linked hypersensitivities
as well as guide the development of safer drugs.
| [
{
"created": "Mon, 21 May 2012 21:51:08 GMT",
"version": "v1"
}
] | 2015-06-05 | [
[
"Ostrov",
"David A.",
""
],
[
"Grant",
"Barry J.",
""
],
[
"Pompeu",
"Yuri A.",
""
],
[
"Sidney",
"John",
""
],
[
"Harndahl",
"Mikkel",
""
],
[
"Southwood",
"Scott",
""
],
[
"Oseroff",
"Carla",
""
],
[
"Lu",
"Shun",
""
],
[
"Jakoncic",
"Jean",
""
],
[
"de Oliveira",
"Cesar Augusto. F.",
""
],
[
"Yang",
"Lun",
""
],
[
"Mei",
"Hu",
""
],
[
"Shi",
"Leming",
""
],
[
"Shabanowitz",
"Jeffrey",
""
],
[
"English",
"A. Michelle",
""
],
[
"Wriston",
"Amanda",
""
],
[
"Lucas",
"Andrew",
""
],
[
"Phillips",
"Elizabeth",
""
],
[
"Mallal",
"Simon",
""
],
[
"Grey",
"Howard",
""
],
[
"Sette",
"Alessandro",
""
],
[
"Hunt",
"Donald F.",
""
],
[
"Buus",
"Soren",
""
],
[
"Peters",
"Bjoern",
""
]
] | Idiosyncratic adverse drug reactions are unpredictable, dose independent and potentially life threatening; this makes them a major factor contributing to the cost and uncertainty of drug development. Clinical data suggest that many such reactions involve immune mechanisms, and genetic association studies have identified strong linkage between drug hypersensitivity reactions to several drugs and specific HLA alleles. One of the strongest such genetic associations found has been for the antiviral drug abacavir, which causes severe adverse reactions exclusively in patients expressing the HLA molecular variant B*57:01. Abacavir adverse reactions were recently shown to be driven by drug-specific activation of cytokine-producing, cytotoxic CD8+ T cells that required HLA-B*57:01 molecules for their function. However, the mechanism by which abacavir induces this pathologic T cell response remains unclear. Here we show that abacavir can bind within the F-pocket of the peptide-binding groove of HLA-B*57:01 thereby altering its specificity. This supports a novel explanation for HLA-linked idiosyncratic adverse drug reactions; namely that drugs can alter the repertoire of self-peptides presented to T cells thus causing the equivalent of an alloreactive T cell response. Indeed, we identified specific self-peptides that are presented only in the presence of abacavir, and that were recognized by T cells of hypersensitive patients. The assays we have established can be applied to test additional compounds with suspected HLA linked hypersensitivities in vitro. Where successful, these assays could speed up the discovery and mechanistic understanding of HLA linked hypersensitivities as well as guide the development of safer drugs. |
q-bio/0311036 | Thorsten Poeschel | Alexei Zaikin and Thorsten Poeschel | Peptide size dependent active transport in the proteasome | 4 pages, 4 figures | null | null | null | q-bio.BM cond-mat.soft | null | We investigate the transport of proteins inside the proteasome and propose an
active transport mechanism based on a spatially asymmetric interaction
potential of peptide chains. The transport is driven by fluctuations which are
always present in such systems. We compute the peptide-size dependent transport
rate which is essential for the functioning of the proteasome. In agreement
with recent experiments, varying temperature changes the transport mechanism
qualitatively.
| [
{
"created": "Wed, 26 Nov 2003 16:07:03 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Jun 2004 11:26:27 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Zaikin",
"Alexei",
""
],
[
"Poeschel",
"Thorsten",
""
]
] | We investigate the transport of proteins inside the proteasome and propose an active transport mechanism based on a spatially asymmetric interaction potential of peptide chains. The transport is driven by fluctuations which are always present in such systems. We compute the peptide-size dependent transport rate which is essential for the functioning of the proteasome. In agreement with recent experiments, varying temperature changes the transport mechanism qualitatively. |
1311.3573 | S\'ebastien Gigu\`ere | S\'ebastien Gigu\`ere, Fran\c{c}ois Laviolette, Mario Marchand, Denise
Tremblay, Sylvain Moineau, \'Eric Biron and Jacques Corbeil | Improved design and screening of high bioactivity peptides for drug
discovery | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of peptides having high biological activity is very challenging
mainly because there is an enormous diversity of compounds and only a minority
have the desired properties. To lower cost and reduce the time to obtain
promising compounds, machine learning approaches can greatly assist in the
process and even replace expensive laboratory experiments by learning a
predictor with existing data. Unfortunately, selecting ligands having the
greatest predicted bioactivity requires a prohibitive amount of computational
time. For this combinatorial problem, heuristics and stochastic optimization
methods are not guaranteed to find adequate compounds.
We propose an efficient algorithm based on De Bruijn graphs, guaranteed to
find the peptides of maximal predicted bioactivity. We demonstrate how this
algorithm can be part of an iterative combinatorial chemistry procedure to
speed up the discovery and the validation of peptide leads. Moreover, the
proposed approach does not require the use of known ligands for the target
protein since it can leverage recent multi-target machine learning predictors
where ligands for similar targets can serve as initial training data. Finally,
we validated the proposed approach in vitro with the discovery of new cationic
anti-microbial peptides.
Source code is freely available at
http://graal.ift.ulaval.ca/peptide-design/.
| [
{
"created": "Thu, 14 Nov 2013 16:45:01 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jan 2014 20:53:41 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Apr 2014 12:03:35 GMT",
"version": "v3"
}
] | 2014-04-11 | [
[
"Giguère",
"Sébastien",
""
],
[
"Laviolette",
"François",
""
],
[
"Marchand",
"Mario",
""
],
[
"Tremblay",
"Denise",
""
],
[
"Moineau",
"Sylvain",
""
],
[
"Biron",
"Éric",
""
],
[
"Corbeil",
"Jacques",
""
]
] | The discovery of peptides having high biological activity is very challenging mainly because there is an enormous diversity of compounds and only a minority have the desired properties. To lower cost and reduce the time to obtain promising compounds, machine learning approaches can greatly assist in the process and even replace expensive laboratory experiments by learning a predictor with existing data. Unfortunately, selecting ligands having the greatest predicted bioactivity requires a prohibitive amount of computational time. For this combinatorial problem, heuristics and stochastic optimization methods are not guaranteed to find adequate compounds. We propose an efficient algorithm based on De Bruijn graphs, guaranteed to find the peptides of maximal predicted bioactivity. We demonstrate how this algorithm can be part of an iterative combinatorial chemistry procedure to speed up the discovery and the validation of peptide leads. Moreover, the proposed approach does not require the use of known ligands for the target protein since it can leverage recent multi-target machine learning predictors where ligands for similar targets can serve as initial training data. Finally, we validated the proposed approach in vitro with the discovery of new cationic anti-microbial peptides. Source code is freely available at http://graal.ift.ulaval.ca/peptide-design/. |
1512.03126 | Iain Hepburn | I. Hepburn, W. Chen, E. De Schutter | Accurate Reaction-Diffusion Operator Splitting on Tetrahedral Meshes for
Parallel Stochastic Molecular Simulations | 33 pages, 10 figures | null | 10.1063/1.4960034 | null | q-bio.QM cs.DC physics.bio-ph physics.chem-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial stochastic molecular simulations in biology are limited by the
intense computation required to track molecules in space either in a discrete
time or discrete space framework, meaning that the serial limit has already
been reached in sub-cellular models. This calls for parallel simulations that
can take advantage of the power of modern supercomputers; however exact methods
are known to be inherently serial. We introduce an operator splitting
implementation for irregular grids with a novel method to improve accuracy, and
demonstrate potential for scalable parallel simulations in an initial MPI
version. We foresee that this groundwork will enable larger scale, whole-cell
stochastic simulations in the near future.
| [
{
"created": "Thu, 10 Dec 2015 02:21:19 GMT",
"version": "v1"
}
] | 2016-08-24 | [
[
"Hepburn",
"I.",
""
],
[
"Chen",
"W.",
""
],
[
"De Schutter",
"E.",
""
]
] | Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, meaning that the serial limit has already been reached in sub-cellular models. This calls for parallel simulations that can take advantage of the power of modern supercomputers; however exact methods are known to be inherently serial. We introduce an operator splitting implementation for irregular grids with a novel method to improve accuracy, and demonstrate potential for scalable parallel simulations in an initial MPI version. We foresee that this groundwork will enable larger scale, whole-cell stochastic simulations in the near future. |
2011.11088 | Hamza Saad | Hamza Saad and Nagendra Nagarur | Data Mining Techniques in Predicting Breast Cancer | 9 pages, 4 figures, paper published in journal | J. Applied Sci., 20 (4): 124-133, 2020 | 10.3923/jas.2020.124.133 | null | q-bio.QM cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Background and Objective: Breast cancer, which accounts for 23% of all
cancers, is threatening the communities of developing countries because of poor
awareness and treatment. Early diagnosis helps a lot in the treatment of the
disease. The present study was conducted to improve the prediction process
and to extract the main causes that impact breast cancer. Materials and Methods:
Data were collected based on eight attributes for 130 Libyan women in the
clinical stages infected with this disease. Data mining was used by applying
six algorithms to predict disease based on clinical stages. All the algorithms
gained high accuracy, but the decision tree provides the highest accuracy; a
diagram of the decision tree was utilized to build rules from each leaf node.
Variable ranking was applied to extract significant variables and support final
rules to predict disease. Results: All applied algorithms achieved high prediction performance with
different accuracies. Rules 1, 3, 4, 5 and 9 provided a pure subset to be
confirmed as significant rules. Only five input variables contributed to
building rules, but not all variables have a significant impact. Conclusion:
Tumor size plays a vital role in constructing all rules with a significant
impact. Variables of inheritance, breast side and menopausal status have an
insignificant impact in the analysis, but they may yield remarkable findings
using a different strategy of data analysis.
| [
{
"created": "Sun, 22 Nov 2020 19:12:15 GMT",
"version": "v1"
}
] | 2020-11-24 | [
[
"Saad",
"Hamza",
""
],
[
"Nagarur",
"Nagendra",
""
]
] | Background and Objective: Breast cancer, which accounts for 23% of all cancers, is threatening the communities of developing countries because of poor awareness and treatment. Early diagnosis helps a lot in the treatment of the disease. The present study was conducted to improve the prediction process and to extract the main causes that impact breast cancer. Materials and Methods: Data were collected based on eight attributes for 130 Libyan women in the clinical stages infected with this disease. Data mining was used by applying six algorithms to predict disease based on clinical stages. All the algorithms gained high accuracy, but the decision tree provides the highest accuracy; a diagram of the decision tree was utilized to build rules from each leaf node. Variable ranking was applied to extract significant variables and support final rules to predict disease. Results: All applied algorithms achieved high prediction performance with different accuracies. Rules 1, 3, 4, 5 and 9 provided a pure subset to be confirmed as significant rules. Only five input variables contributed to building rules, but not all variables have a significant impact. Conclusion: Tumor size plays a vital role in constructing all rules with a significant impact. Variables of inheritance, breast side and menopausal status have an insignificant impact in the analysis, but they may yield remarkable findings using a different strategy of data analysis.
1908.03514 | Teresa Karrer | Teresa M. Karrer, Jason Z. Kim, Jennifer Stiso, Ari E. Kahn, Fabio
Pasqualetti, Ute Habel, Danielle S. Bassett | A practical guide to methodological considerations in the
controllability of structural brain networks | null | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting how the brain can be driven to specific states by means of
internal or external control requires a fundamental understanding of the
relationship between neural connectivity and activity. Network control theory
is a powerful tool from the physical and engineering sciences that can provide
insights regarding that relationship; it formalizes the study of how the
dynamics of a complex system can arise from its underlying structure of
interconnected units. Given the recent use of network control theory in
neuroscience, it is now timely to offer a practical guide to methodological
considerations in the controllability of structural brain networks. Here we
provide a systematic overview of the framework, examine the impact of modeling
choices on frequently studied control metrics, and suggest potentially useful
theoretical extensions. We ground our discussions, numerical demonstrations,
and theoretical advances in a dataset of high-resolution diffusion imaging with
730 diffusion directions acquired over approximately 1 hour of scanning from
ten healthy young adults. Following a didactic introduction of the theory, we
probe how a selection of modeling choices affects four common statistics:
average controllability, modal controllability, minimum control energy, and
optimal control energy. Next, we extend the current state of the art in two
ways: first, by developing an alternative measure of structural connectivity
that accounts for radial propagation of activity through abutting tissue, and
second, by defining a complementary metric quantifying the complexity of the
energy landscape of a system. We close with specific modeling recommendations
and a discussion of methodological constraints.
| [
{
"created": "Fri, 9 Aug 2019 16:13:12 GMT",
"version": "v1"
}
] | 2019-08-12 | [
[
"Karrer",
"Teresa M.",
""
],
[
"Kim",
"Jason Z.",
""
],
[
"Stiso",
"Jennifer",
""
],
[
"Kahn",
"Ari E.",
""
],
[
"Pasqualetti",
"Fabio",
""
],
[
"Habel",
"Ute",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Predicting how the brain can be driven to specific states by means of internal or external control requires a fundamental understanding of the relationship between neural connectivity and activity. Network control theory is a powerful tool from the physical and engineering sciences that can provide insights regarding that relationship; it formalizes the study of how the dynamics of a complex system can arise from its underlying structure of interconnected units. Given the recent use of network control theory in neuroscience, it is now timely to offer a practical guide to methodological considerations in the controllability of structural brain networks. Here we provide a systematic overview of the framework, examine the impact of modeling choices on frequently studied control metrics, and suggest potentially useful theoretical extensions. We ground our discussions, numerical demonstrations, and theoretical advances in a dataset of high-resolution diffusion imaging with 730 diffusion directions acquired over approximately 1 hour of scanning from ten healthy young adults. Following a didactic introduction of the theory, we probe how a selection of modeling choices affects four common statistics: average controllability, modal controllability, minimum control energy, and optimal control energy. Next, we extend the current state of the art in two ways: first, by developing an alternative measure of structural connectivity that accounts for radial propagation of activity through abutting tissue, and second, by defining a complementary metric quantifying the complexity of the energy landscape of a system. We close with specific modeling recommendations and a discussion of methodological constraints. |
2106.07262 | Giulia Bertaglia | Giulia Bertaglia, Walter Boscheri, Giacomo Dimarco, Lorenzo Pareschi | Spatial spread of COVID-19 outbreak in Italy using multiscale kinetic
transport equations with uncertainty | null | Math. Biosci. Eng. 18 (2021) 7028-7059 | 10.3934/mbe.2021350 | null | q-bio.PE cs.NA math.NA physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce a space-dependent multiscale model to describe the
spatial spread of an infectious disease under uncertain data with particular
interest in simulating the onset of the COVID-19 epidemic in Italy. While virus
transmission is ruled by a SEIAR type compartmental model, within our approach
the population is given by a sum of commuters moving on an extra-urban scale and
non-commuters interacting only on the smaller urban scale. A transport dynamic
of the commuter population at large spatial scales, based on kinetic equations,
is coupled with a diffusion model for non-commuters at the urban scale. Thanks
to a suitable scaling limit, the kinetic transport model used to describe the
dynamics of commuters within a given urban area coincides with the diffusion
equations that characterize the movement of non-commuting individuals. Because
of the high uncertainty in the data reported in the early phase of the
epidemic, the presence of random inputs in both the initial data and the
epidemic parameters is included in the model. A robust numerical method is
designed to deal with the presence of multiple scales and the uncertainty
quantification process. In our simulations, we considered a realistic
geographical domain, describing the Lombardy region, in which the size of the
cities, the number of infected individuals, the average number of daily
commuters moving from one city to another, and the epidemic aspects are taken
into account through a calibration of the model parameters based on the actual
available data. The results show that the model is able to describe correctly
the main features of the spatial expansion of the first wave of COVID-19 in
northern Italy.
| [
{
"created": "Mon, 14 Jun 2021 09:30:43 GMT",
"version": "v1"
}
] | 2021-08-31 | [
[
"Bertaglia",
"Giulia",
""
],
[
"Boscheri",
"Walter",
""
],
[
"Dimarco",
"Giacomo",
""
],
[
"Pareschi",
"Lorenzo",
""
]
] | In this paper we introduce a space-dependent multiscale model to describe the spatial spread of an infectious disease under uncertain data with particular interest in simulating the onset of the COVID-19 epidemic in Italy. While virus transmission is ruled by a SEIAR type compartmental model, within our approach the population is given by a sum of commuters moving on an extra-urban scale and non-commuters interacting only on the smaller urban scale. A transport dynamic of the commuter population at large spatial scales, based on kinetic equations, is coupled with a diffusion model for non-commuters at the urban scale. Thanks to a suitable scaling limit, the kinetic transport model used to describe the dynamics of commuters within a given urban area coincides with the diffusion equations that characterize the movement of non-commuting individuals. Because of the high uncertainty in the data reported in the early phase of the epidemic, the presence of random inputs in both the initial data and the epidemic parameters is included in the model. A robust numerical method is designed to deal with the presence of multiple scales and the uncertainty quantification process. In our simulations, we considered a realistic geographical domain, describing the Lombardy region, in which the size of the cities, the number of infected individuals, the average number of daily commuters moving from one city to another, and the epidemic aspects are taken into account through a calibration of the model parameters based on the actual available data. The results show that the model is able to describe correctly the main features of the spatial expansion of the first wave of COVID-19 in northern Italy.
1402.6303 | Jamie Oaks | Jamie R. Oaks | An Improved Approximate-Bayesian Model-choice Method for Estimating
Shared Evolutionary History | 48 pages, 8 figures, 4 tables, 35 pages of supporting information
with 1 supporting table and 33 supporting figures | Oaks, Jamie R. 2014. An improved approximate-Bayesian model-choice
method for estimating shared evolutionary history. BMC Evolutionary Biology
14:150 | 10.1186/1471-2148-14-150 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand biological diversification, it is important to account for
large-scale processes that affect the evolutionary history of groups of
co-distributed populations of organisms. Such events predict temporally
clustered divergence times, a pattern that can be estimated using genetic data
from co-distributed species. I introduce a new approximate-Bayesian method for
comparative phylogeographical model-choice that estimates the temporal
distribution of divergences across taxa from multi-locus DNA sequence data. The
model is an extension of that implemented in msBayes. By reparameterizing the
model, introducing more flexible priors on demographic and divergence-time
parameters, and implementing a non-parametric Dirichlet-process prior over
divergence models, I improved the robustness, accuracy, and power of the method
for estimating shared evolutionary history across taxa. The results demonstrate
the improved performance of the new method is due to (1) more appropriate
priors on divergence-time and demographic parameters that avoid prohibitively
small marginal likelihoods for models with more divergence events, and (2) the
Dirichlet-process providing a flexible prior on divergence histories that does
not strongly disfavor models with intermediate numbers of divergence events.
The new method yields more robust estimates of posterior uncertainty, and thus
greatly reduces the tendency to incorrectly estimate models of shared
evolutionary history with strong support.
| [
{
"created": "Tue, 25 Feb 2014 20:29:58 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Aug 2014 00:45:03 GMT",
"version": "v2"
}
] | 2014-08-11 | [
[
"Oaks",
"Jamie R.",
""
]
] | To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support.
2408.01316 | Giacomo Vedovati | Giacomo Vedovati and ShiNung Ching | Synergistic pathways of modulation enable robust task packing within
neural dynamics | 24 pages, 6 figures | null | null | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how brain networks learn and manage multiple tasks
simultaneously is of interest in both neuroscience and artificial intelligence.
In this regard, a recent research thread in theoretical neuroscience has
focused on how recurrent neural network models and their internal dynamics
enact multi-task learning. To manage different tasks requires a mechanism to
convey information about task identity or context into the model, which from a
biological perspective may involve mechanisms of neuromodulation. In this
study, we use recurrent network models to probe the distinctions between two
forms of contextual modulation of neural dynamics, at the level of neuronal
excitability and at the level of synaptic strength. We characterize these
mechanisms in terms of their functional outcomes, focusing on their robustness
to context ambiguity and, relatedly, their efficiency with respect to packing
multiple tasks into finite-size networks. We also demonstrate a distinction
between these mechanisms at the level of the neuronal dynamics they induce.
Together, these characterizations indicate complementarity and synergy in how
these mechanisms act, potentially over multiple time-scales, toward enhancing
robustness of multi-task learning.
| [
{
"created": "Fri, 2 Aug 2024 15:12:01 GMT",
"version": "v1"
}
] | 2024-08-05 | [
[
"Vedovati",
"Giacomo",
""
],
[
"Ching",
"ShiNung",
""
]
] | Understanding how brain networks learn and manage multiple tasks simultaneously is of interest in both neuroscience and artificial intelligence. In this regard, a recent research thread in theoretical neuroscience has focused on how recurrent neural network models and their internal dynamics enact multi-task learning. To manage different tasks requires a mechanism to convey information about task identity or context into the model, which from a biological perspective may involve mechanisms of neuromodulation. In this study, we use recurrent network models to probe the distinctions between two forms of contextual modulation of neural dynamics, at the level of neuronal excitability and at the level of synaptic strength. We characterize these mechanisms in terms of their functional outcomes, focusing on their robustness to context ambiguity and, relatedly, their efficiency with respect to packing multiple tasks into finite size networks. We also demonstrate distinction between these mechanisms at the level of the neuronal dynamics they induce. Together, these characterizations indicate complementarity and synergy in how these mechanisms act, potentially over multiple time-scales, toward enhancing robustness of multi-task learning. |
1105.5198 | Margaret Cheung | Qian Wang, Kao-Chen Liang, Arkadiusz Czader, M. Neal Waxham and
Margaret S. Cheung | The Effect of Macromolecular Crowding, Ionic Strength and Calcium
Binding on Calmodulin Dynamics | Accepted to PLoS Comp Biol, 2011 | null | 10.1371/journal.pcbi.1002114 | null | q-bio.BM cond-mat.soft cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The flexibility in the structure of calmodulin (CaM) allows its binding to
over 300 target proteins in the cell. To investigate the structure-function
relationship of CaM, we combined methods of computer simulation and experiments
based on circular dichroism (CD) to investigate the structural characteristics
of CaM that influence its target recognition in crowded cell-like conditions.
We developed a unique multiscale solution of charges computed from quantum
chemistry, together with protein reconstruction, coarse-grained molecular
simulations, and statistical physics, to represent the charge distribution in
the transition from apoCaM to holoCaM upon calcium binding. Computationally, we
found that increased levels of macromolecular crowding, in addition to calcium
binding and ionic strength typical of that found inside cells, can impact the
conformation, helicity and the EF hand orientation of CaM. Because EF hand
orientation impacts the affinity of calcium binding and the specificity of
CaM's target selection, our results may provide unique insight into
understanding the promiscuous behavior of calmodulin in target selection inside
cells.
| [
{
"created": "Thu, 26 May 2011 04:26:20 GMT",
"version": "v1"
}
] | 2015-05-28 | [
[
"Wang",
"Qian",
""
],
[
"Liang",
"Kao-Chen",
""
],
[
"Czader",
"Arkadiusz",
""
],
[
"Waxham",
"M. Neal",
""
],
[
"Cheung",
"Margaret S.",
""
]
] | The flexibility in the structure of calmodulin (CaM) allows its binding to over 300 target proteins in the cell. To investigate the structure-function relationship of CaM, we combined methods of computer simulation and experiments based on circular dichroism (CD) to investigate the structural characteristics of CaM that influence its target recognition in crowded cell-like conditions. We developed a unique multiscale solution of charges computed from quantum chemistry, together with protein reconstruction, coarse-grained molecular simulations, and statistical physics, to represent the charge distribution in the transition from apoCaM to holoCaM upon calcium binding. Computationally, we found that increased levels of macromolecular crowding, in addition to calcium binding and ionic strength typical of that found inside cells, can impact the conformation, helicity and the EF hand orientation of CaM. Because EF hand orientation impacts the affinity of calcium binding and the specificity of CaM's target selection, our results may provide unique insight into understanding the promiscuous behavior of calmodulin in target selection inside cells. |
1401.2231 | Daqing Guo | Mingming Chen, Daqing Guo, Tiebin Wang, Wei Jing, Yang Xia, Peng Xu,
Cheng Luo, Pedro A. Valdes-Sosa, and Dezhong Yao | Bidirectional Control of Absence Seizures by the Basal Ganglia: A
Computational Evidence | 10 figures and 1 table. This paper has been accepted by PLoS
Computational Biology | null | 10.1371/journal.pcbi.1003495 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Absence epilepsy is believed to be associated with the abnormal interactions
between the cerebral cortex and thalamus. Besides the direct coupling,
anatomical evidence indicates that the cerebral cortex and thalamus also
communicate indirectly through an important intermediate bridge--basal ganglia.
It has been thus postulated that the basal ganglia might play key roles in the
modulation of absence seizures, but the relevant biophysical mechanisms are
still not completely established. Using a biophysically based model, we
demonstrate here that the typical absence seizure activities can be controlled
and modulated by the direct GABAergic projections from the substantia nigra
pars reticulata (SNr) to either the thalamic reticular nucleus (TRN) or the
specific relay nuclei (SRN) of thalamus, through different biophysical
mechanisms. Under certain conditions, these two types of seizure control are
observed to coexist in the same network. More importantly, due to the
competition between the inhibitory SNr-TRN and SNr-SRN pathways, we find that
both decreasing and increasing the activation of SNr neurons from the normal
level may considerably suppress the generation of spike-wave discharges (SWDs) in the coexistence
region. Overall, these results highlight the bidirectional functional roles of
basal ganglia in controlling and modulating absence seizures, and might provide
novel insights into the therapeutic treatments of this brain disorder.
| [
{
"created": "Fri, 10 Jan 2014 05:54:11 GMT",
"version": "v1"
}
] | 2015-06-18 | [
[
"Chen",
"Mingming",
""
],
[
"Guo",
"Daqing",
""
],
[
"Wang",
"Tiebin",
""
],
[
"Jing",
"Wei",
""
],
[
"Xia",
"Yang",
""
],
[
"Xu",
"Peng",
""
],
[
"Luo",
"Cheng",
""
],
[
"Valdes-Sosa",
"Pedro A.",
""
],
[
"Yao",
"Dezhong",
""
]
] | Absence epilepsy is believed to be associated with the abnormal interactions between the cerebral cortex and thalamus. Besides the direct coupling, anatomical evidence indicates that the cerebral cortex and thalamus also communicate indirectly through an important intermediate bridge--basal ganglia. It has been thus postulated that the basal ganglia might play key roles in the modulation of absence seizures, but the relevant biophysical mechanisms are still not completely established. Using a biophysically based model, we demonstrate here that the typical absence seizure activities can be controlled and modulated by the direct GABAergic projections from the substantia nigra pars reticulata (SNr) to either the thalamic reticular nucleus (TRN) or the specific relay nuclei (SRN) of thalamus, through different biophysical mechanisms. Under certain conditions, these two types of seizure control are observed to coexist in the same network. More importantly, due to the competition between the inhibitory SNr-TRN and SNr-SRN pathways, we find that both decreasing and increasing the activation of SNr neurons from the normal level may considerably suppress the generation of SWDs in the coexistence region. Overall, these results highlight the bidirectional functional roles of basal ganglia in controlling and modulating absence seizures, and might provide novel insights into the therapeutic treatments of this brain disorder. |
2312.10122 | Andrey Timashkov | A.Y. Timashkov, I.N. Abrosimov, V.M. Yaltonsky | Illness perception and self-management in patients with type 2 diabetes | 13 pages, 1 figure | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | This paper presents the results of a study on the perception of illness and
adaptation parameters in patients with type 2 diabetes. The study involved 173
patients diagnosed with "Type 2 Diabetes" (ICD-11 code 5A11). The average age
of the patients was 55.21+/-13.47 years and the average duration of the disease
was 11.79+/-8.16 years. Two profiles of illness perception were identified: Profile 1 -
"Perception of illness threat" and Profile 2 - "Perception of illness and
treatment controllability". Three types of illness perception were also
identified: Type 1 - "Formed illness threat and negative beliefs about illness
and treatment control" (Group 1); Type 2 - "Unformed illness threat and neutral
beliefs about illness and treatment control" (Group 2); Type 3 - "Formed
illness threat and positive beliefs about illness and treatment control" (Group
3). Targets for further psychological interventions were formulated for each
identified type.
| [
{
"created": "Fri, 15 Dec 2023 12:32:22 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Timashkov",
"A. Y.",
""
],
[
"Abrosimov",
"I. N.",
""
],
[
"Yaltonsky",
"V. M.",
""
]
] | This paper presents the results of a study on the perception of illness and adaptation parameters in patients with type 2 diabetes. The study involved 173 patients diagnosed with "Type 2 Diabetes" (ICD-11 code 5 A 11). The average age of the patients was 55.21+/-13.47 and the average duration of the disease was 11.79+/-8.16. Two profiles of illness perception were identified: Profile 1 - "Perception of illness threat" and Profile 2 - "Perception of illness and treatment controllability". Three types of illness perception were also identified: Type 1 - "Formed illness threat and negative beliefs about illness and treatment control" (Group 1); Type 2 - "Unformed illness threat and neutral beliefs about illness and treatment control" (Group 2); Type 3 - "Formed illness threat and positive beliefs about illness and treatment control" (Group 3). Targets for further psychological interventions were formulated for each identified type. |
0909.1918 | Edoardo Milotti | Roberto Chignola, Alessio Del Fabbro, Edoardo Milotti | Dynamics of intracellular Ca$^{2+}$ oscillations in the presence of
multisite Ca$^{2+}$-binding proteins | 4 figures | null | 10.1016/j.physa.2010.03.047 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the dynamics of intracellular calcium oscillations in the presence
of proteins that bind calcium on multiple sites and that are generally believed
to act as passive calcium buffers in cells. We find that multisite
calcium-binding proteins set a sharp threshold for calcium oscillations. Even
with high concentrations of calcium-binding proteins, internal noise, which
shows up spontaneously in cells in the process of calcium wave formation, can
lead to self-oscillations. This produces oscillatory behaviors strikingly
similar to those observed in real cells. In addition, for given intracellular
concentrations of both calcium and calcium-binding proteins the regularity of
these oscillations changes and reaches a maximum as a function of noise variance,
and the overall system dynamics displays stochastic coherence. We conclude that
calcium-binding proteins may have an important and active role in cellular
communication.
| [
{
"created": "Thu, 10 Sep 2009 10:44:41 GMT",
"version": "v1"
}
] | 2015-05-14 | [
[
"Chignola",
"Roberto",
""
],
[
"Del Fabbro",
"Alessio",
""
],
[
"Milotti",
"Edoardo",
""
]
] | We study the dynamics of intracellular calcium oscillations in the presence of proteins that bind calcium on multiple sites and that are generally believed to act as passive calcium buffers in cells. We find that multisite calcium-binding proteins set a sharp threshold for calcium oscillations. Even with high concentrations of calcium-binding proteins, internal noise, which shows up spontaneously in cells in the process of calcium wave formation, can lead to self-oscillations. This produces oscillatory behaviors strikingly similar to those observed in real cells. In addition, for given intracellular concentrations of both calcium and calcium-binding proteins the regularity of these oscillations changes and reaches a maximum as a function of noise variance, and the overall system dynamics displays stochastic coherence. We conclude that calcium-binding proteins may have an important and active role in cellular communication.
1109.1462 | Luke Jostins | Luke Jostins | Inferring genotyping error rates from genotyped trios | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genotyping errors are known to influence the power of both family-based and
case-control studies in the genetics of complex disease. Estimating genotyping
error rate in a given dataset can be complex, but when family information is
available error rates can be inferred from the patterns of Mendelian
inheritance between parents and offspring. I introduce a novel likelihood-based
method for calculating error rates from family data, given known allele
frequencies. I apply this to an example dataset, demonstrating a low genotyping
error rate in genotyping data from a personal genomics company.
| [
{
"created": "Wed, 7 Sep 2011 14:02:46 GMT",
"version": "v1"
}
] | 2011-09-08 | [
[
"Jostins",
"Luke",
""
]
] | Genotyping errors are known to influence the power of both family-based and case-control studies in the genetics of complex disease. Estimating genotyping error rate in a given dataset can be complex, but when family information is available error rates can be inferred from the patterns of Mendelian inheritance between parents and offspring. I introduce a novel likelihood-based method for calculating error rates from family data, given known allele frequencies. I apply this to an example dataset, demonstrating a low genotyping error rate in genotyping data from a personal genomics company. |
0705.1057 | Jos K\"afer | Jos K\"afer, Takashi Hayashi, Athanasius F.M. Mar\'ee, Richard W.
Carthew and Fran\c{c}ois Graner | Cell adhesion and cortex contractility determine cell patterning in the
Drosophila retina | revised manuscript; 8 pages, 6 figures; supplementary information not
included | Proc. Natl. Acad. Sci. U.S.A. (2007), 104 (47), 18549-18554 | 10.1073/pnas.0704235104 | null | q-bio.CB q-bio.TO | null | Hayashi and Carthew (Nature 431 [2004], 647) have shown that the packing of
cone cells in the Drosophila retina resembles soap bubble packing, and that
changing E- and N-cadherin expression can change this packing, as well as cell
shape.
The analogy with bubbles suggests that cell packing is driven by surface
minimization. We find that this assumption is insufficient to model the
experimentally observed shapes and packing of the cells based on their cadherin
expression. We then consider a model in which adhesion leads to a surface
increase, balanced by cell cortex contraction. Using the experimentally
observed distributions of E- and N-cadherin, we simulate the packing and cell
shapes in the wildtype eye. Furthermore, by changing only the corresponding
parameters, this model can describe the mutants with different numbers of
cells, or changes in cadherin expression.
| [
{
"created": "Tue, 8 May 2007 09:51:02 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Sep 2007 12:01:49 GMT",
"version": "v2"
}
] | 2007-11-15 | [
[
"Käfer",
"Jos",
""
],
[
"Hayashi",
"Takashi",
""
],
[
"Marée",
"Athanasius F. M.",
""
],
[
"Carthew",
"Richard W.",
""
],
[
"Graner",
"François",
""
]
] | Hayashi and Carthew (Nature 431 [2004], 647) have shown that the packing of cone cells in the Drosophila retina resembles soap bubble packing, and that changing E- and N-cadherin expression can change this packing, as well as cell shape. The analogy with bubbles suggests that cell packing is driven by surface minimization. We find that this assumption is insufficient to model the experimentally observed shapes and packing of the cells based on their cadherin expression. We then consider a model in which adhesion leads to a surface increase, balanced by cell cortex contraction. Using the experimentally observed distributions of E- and N-cadherin, we simulate the packing and cell shapes in the wildtype eye. Furthermore, by changing only the corresponding parameters, this model can describe the mutants with different numbers of cells, or changes in cadherin expression. |
1407.7198 | Liaofu Luo | Liaofu Luo | Quantum Theory on Glucose Transport Across Membrane | 18 pages,3 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After a brief review of the quantum theory of protein folding and a short
discussion of its experimental evidence, the mechanism of glucose transport
across the membrane is studied from the viewpoint of quantum conformational
transitions. The structural variations among four conformations of the human
glucose transporter GLUT1 (ligand-free occluded, outward open, ligand-bound
occluded and inward open) are regarded as quantum transitions. Comparative
studies of the mechanisms of the uniporter (GLUT1) and the symporters (XylE and
GlcP) are given. The transition rates are calculated from the fundamental
theory, and a monosaccharide transport kinetics is proposed. The steady state
of the transporter is found and its stability is studied. The glucose (xylose)
translocation rates in the two directions and in different steps are compared.
The mean transport time in a cycle is calculated, and from it the transport
times of GLUT1, GlcP and XylE can be compared. A non-Arrhenius temperature
dependence of the transition rate and the mean transport time is predicted. It
is suggested that direct measurement of this temperature dependence is a useful
tool for understanding the transmembrane transport mechanism in depth.
| [
{
"created": "Sun, 27 Jul 2014 08:27:57 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Nov 2014 06:47:46 GMT",
"version": "v2"
}
] | 2014-11-27 | [
[
"Luo",
"Liaofu",
""
]
] | After a brief review of the quantum theory of protein folding and a short discussion of its experimental evidence, the mechanism of glucose transport across the membrane is studied from the viewpoint of quantum conformational transitions. The structural variations among four conformations of the human glucose transporter GLUT1 (ligand-free occluded, outward open, ligand-bound occluded and inward open) are regarded as quantum transitions. Comparative studies of the mechanisms of the uniporter (GLUT1) and the symporters (XylE and GlcP) are given. The transition rates are calculated from the fundamental theory, and a monosaccharide transport kinetics is proposed. The steady state of the transporter is found and its stability is studied. The glucose (xylose) translocation rates in the two directions and in different steps are compared. The mean transport time in a cycle is calculated, and from it the transport times of GLUT1, GlcP and XylE can be compared. A non-Arrhenius temperature dependence of the transition rate and the mean transport time is predicted. It is suggested that direct measurement of this temperature dependence is a useful tool for understanding the transmembrane transport mechanism in depth. |
1602.05953 | Luca Sgheri | F. Clarelli and L. Sgheri | Looking for central tendencies in the conformational freedom of proteins
using NMR measurements | null | null | 10.1088/1361-6420/aa54ea | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the conformational freedom of a protein made of two rigid domains
connected by a flexible linker. The conformational freedom is represented as an
unknown probability distribution on the space of allowed states. A new
algorithm for the calculation of the Maximum Allowable Probability is proposed,
which can be extended to any type of measurements. In this paper we use Pseudo
Contact Shifts and Residual Dipolar Coupling. We reconstruct a single central
tendency in the distribution and discuss the results in depth.
| [
{
"created": "Thu, 18 Feb 2016 12:03:19 GMT",
"version": "v1"
}
] | 2017-02-01 | [
[
"Clarelli",
"F.",
""
],
[
"Sgheri",
"L.",
""
]
] | We study the conformational freedom of a protein made of two rigid domains connected by a flexible linker. The conformational freedom is represented as an unknown probability distribution on the space of allowed states. A new algorithm for the calculation of the Maximum Allowable Probability is proposed, which can be extended to any type of measurement. In this paper we use Pseudo Contact Shifts and Residual Dipolar Coupling. We reconstruct a single central tendency in the distribution and discuss the results in depth. |
1811.11341 | Fangting Li | Yunsheng Sun, Congjian Ni, Yingda Ge, Hong Qian, Qi Ouyang, and
Fangting Li | Both cellular ATP level and ATP hydrolysis free energy determine
energetically the calcium oscillation in pancreatic $\beta$-cell | 12 pages, 7 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In pancreatic $\beta$-cells, calcium oscillation signal is the core part of
glucose-stimulated insulin secretion. Intracellular calcium concentration
oscillates in response to the intake of glucose, which triggers the exocytosis
of insulin secretory granules. ATP plays a crucial part in this process. ATP
increases as the result of glucose intake, then ATP binds to ATP-sensitive
$K^+$ channels ($K_{ATP}$), depolarizes the cell and triggers calcium
oscillation, while the ion pumps on the cell membrane consume the free energy
from ATP hydrolysis. Based on the Bertram et al. 2004 model, we construct a
kinetic model to analyze the thermodynamic characteristics of this system and
to reveal how the ATP hydrolysis free energy affects the calcium oscillation.
Our results suggest that the bifurcation point is sensitive to both the free
energy level and the cellular ATP level, and that an insufficient ATP energy
supply would cause dysfunction of the calcium oscillation.
| [
{
"created": "Wed, 28 Nov 2018 01:48:20 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Nov 2018 10:08:27 GMT",
"version": "v2"
}
] | 2018-11-30 | [
[
"Sun",
"Yunsheng",
""
],
[
"Ni",
"Congjian",
""
],
[
"Ge",
"Yingda",
""
],
[
"Qian",
"Hong",
""
],
[
"Ouyang",
"Qi",
""
],
[
"Li",
"Fangting",
""
]
] | In pancreatic $\beta$-cells, calcium oscillation signal is the core part of glucose-stimulated insulin secretion. Intracellular calcium concentration oscillates in response to the intake of glucose, which triggers the exocytosis of insulin secretory granules. ATP plays a crucial part in this process. ATP increases as the result of glucose intake, then ATP binds to ATP-sensitive $K^+$ channels ($K_{ATP}$), depolarizes the cell and triggers calcium oscillation, while the ion pumps on the cell membrane consume the free energy from ATP hydrolysis. Based on the Bertram et al. 2004 model, we construct a kinetic model to analyze the thermodynamic characteristics of this system and to reveal how the ATP hydrolysis free energy affects the calcium oscillation. Our results suggest that the bifurcation point is sensitive to both the free energy level and the cellular ATP level, and that an insufficient ATP energy supply would cause dysfunction of the calcium oscillation. |
1804.05964 | Jesus Malo | Jesus Malo and Marcelo Bertalmio | Appropriate kernels for Divisive Normalization explained by Wilson-Cowan
equations | MODVIS-18 and Celebration of Cowan's 50th anniv. at Univ. Chicago | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interaction between wavelet-like sensors in Divisive Normalization is
classically described through Gaussian kernels that decay with spatial
distance, angular distance and frequency distance. However, simultaneous
explanation of (a) distortion perception in natural image databases and (b)
contrast perception of artificial stimuli requires very specific modifications
in classical Divisive Normalization. First, the wavelet response has to be
high-pass filtered before the Gaussian interaction is applied. Then, distinct
weights per subband are also required after the Gaussian interaction. In
summary, the classical Gaussian kernel has to be left- and right-multiplied by
two extra diagonal matrices.
In this paper we provide a lower-level justification for this specific
empirical modification required in the Gaussian kernel of Divisive
Normalization. Here we assume that the psychophysical behavior described by
Divisive Normalization comes from neural interactions following the
Wilson-Cowan equations. In particular, we identify the Divisive Normalization
response with the stationary regime of a Wilson-Cowan model. From this
identification we derive an expression for the Divisive Normalization kernel in
terms of the interaction kernel of the Wilson-Cowan equations. It turns out
that the Wilson-Cowan kernel is left- and right-multiplied by diagonal matrices
with high-pass structure. In conclusion, symmetric Gaussian inhibitory
relations between wavelet-like sensors wired in the lower-level Wilson-Cowan
model lead to the appropriate non-symmetric kernel that has to be empirically
included in Divisive Normalization to explain a wider range of phenomena.
| [
{
"created": "Mon, 16 Apr 2018 22:28:58 GMT",
"version": "v1"
}
] | 2018-04-18 | [
[
"Malo",
"Jesus",
""
],
[
"Bertalmio",
"Marcelo",
""
]
] | The interaction between wavelet-like sensors in Divisive Normalization is classically described through Gaussian kernels that decay with spatial distance, angular distance and frequency distance. However, simultaneous explanation of (a) distortion perception in natural image databases and (b) contrast perception of artificial stimuli requires very specific modifications in classical Divisive Normalization. First, the wavelet response has to be high-pass filtered before the Gaussian interaction is applied. Then, distinct weights per subband are also required after the Gaussian interaction. In summary, the classical Gaussian kernel has to be left- and right-multiplied by two extra diagonal matrices. In this paper we provide a lower-level justification for this specific empirical modification required in the Gaussian kernel of Divisive Normalization. Here we assume that the psychophysical behavior described by Divisive Normalization comes from neural interactions following the Wilson-Cowan equations. In particular, we identify the Divisive Normalization response with the stationary regime of a Wilson-Cowan model. From this identification we derive an expression for the Divisive Normalization kernel in terms of the interaction kernel of the Wilson-Cowan equations. It turns out that the Wilson-Cowan kernel is left- and right-multiplied by diagonal matrices with high-pass structure. In conclusion, symmetric Gaussian inhibitory relations between wavelet-like sensors wired in the lower-level Wilson-Cowan model lead to the appropriate non-symmetric kernel that has to be empirically included in Divisive Normalization to explain a wider range of phenomena. |
2308.11836 | Davood Karimi | Yihan Wu, Lana Vasung, Camilo Calixto, Ali Gholipour, Davood Karimi | Characterizing normal perinatal development of the human brain
structural connectivity | null | null | null | null | q-bio.NC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Early brain development is characterized by the formation of a highly
organized structural connectome. The interconnected nature of this connectome
underlies the brain's cognitive abilities and influences its response to
diseases and environmental factors. Hence, quantitative assessment of
structural connectivity in the perinatal stage is useful for studying normal
and abnormal neurodevelopment. However, estimation of the connectome from
diffusion MRI data involves complex computations. For the perinatal period,
these computations are further challenged by the rapid brain development and
imaging difficulties. Combined with high inter-subject variability, these
factors make it difficult to chart the normal development of the structural
connectome. As a result, there is a lack of reliable normative baselines of
structural connectivity metrics at this critical stage in brain development. In
this study, we developed a computational framework, based on spatio-temporal
averaging, for determining such baselines. We used this framework to analyze
the structural connectivity between 33 and 44 postmenstrual weeks using data
from 166 subjects. Our results unveiled clear and strong trends in the
development of structural connectivity in the perinatal stage. Connection weighting
based on fractional anisotropy and neurite density produced the most consistent
results. We observed increases in global and local efficiency, a decrease in
characteristic path length, and widespread strengthening of the connections
within and across brain lobes and hemispheres. We also observed asymmetry
patterns that were consistent between different connection weighting
approaches. The new computational method and results are useful for assessing
normal and abnormal development of the structural connectome early in life.
| [
{
"created": "Tue, 22 Aug 2023 23:49:04 GMT",
"version": "v1"
}
] | 2023-08-24 | [
[
"Wu",
"Yihan",
""
],
[
"Vasung",
"Lana",
""
],
[
"Calixto",
"Camilo",
""
],
[
"Gholipour",
"Ali",
""
],
[
"Karimi",
"Davood",
""
]
] | Early brain development is characterized by the formation of a highly organized structural connectome. The interconnected nature of this connectome underlies the brain's cognitive abilities and influences its response to diseases and environmental factors. Hence, quantitative assessment of structural connectivity in the perinatal stage is useful for studying normal and abnormal neurodevelopment. However, estimation of the connectome from diffusion MRI data involves complex computations. For the perinatal period, these computations are further challenged by the rapid brain development and imaging difficulties. Combined with high inter-subject variability, these factors make it difficult to chart the normal development of the structural connectome. As a result, there is a lack of reliable normative baselines of structural connectivity metrics at this critical stage in brain development. In this study, we developed a computational framework, based on spatio-temporal averaging, for determining such baselines. We used this framework to analyze the structural connectivity between 33 and 44 postmenstrual weeks using data from 166 subjects. Our results unveiled clear and strong trends in the development of structural connectivity in the perinatal stage. Connection weighting based on fractional anisotropy and neurite density produced the most consistent results. We observed increases in global and local efficiency, a decrease in characteristic path length, and widespread strengthening of the connections within and across brain lobes and hemispheres. We also observed asymmetry patterns that were consistent between different connection weighting approaches. The new computational method and results are useful for assessing normal and abnormal development of the structural connectome early in life. |
0706.1293 | Natalia Kudryavtseva | N.P. Bondar, I.L. Kovalenko, D.F. Avgustinovich, N.N. Kudryavtseva | Influence of experimental context on the development of anhedonia in
male mice imposed to chronic social stress | 9 pages, 3 figures, 1 table | null | null | null | q-bio.OT q-bio.QM | null | Anhedonia is one of the key symptoms of depression in humans. Consumption of
1% sucrose solution supplemented with 0.2% vanillin was studied in two
experimental contexts in male mice living under chronic social stress induced
by daily experience of defeats in agonistic interactions and leading to
development of depression. In the first experiment, vanillin sucrose solution
was made available as an alternative to water for 10 days to mice living in group
home cages. Then the mice were subjected to social defeat stress and during
stress exposure they were provided with both vanillin sucrose solution and
water using a free two bottles choice paradigm. In the other experiment,
vanillin sucrose solution was first offered to mice after 8 days of exposure
to social defeat stress. Males familiar with vanillin sucrose solution showed
vanillin sucrose preference while experiencing defeat stress: consumption of
vanillin sucrose solution was about 70% of total liquid consumption. However,
the consumption of vanillin sucrose solution per gram of body weight in mice
imposed to social stress during 20 days was significantly lower than in control
males. In the second experiment, males after 8 days of social defeat stress
were found to consume significantly less vanillin sucrose solution as compared
with control males. On average during two weeks of measurements, vanillin
sucrose solution intake was less than 20% of total liquid consumption in males
with symptoms of depression and anxiety. Consumption per gram of body weight
also appeared to be significantly lower than in the control group. The influence
of the experimental context on the development of anhedonia, measured as the
reduction in sucrose solution intake by chronically stressed male mice, is
discussed.
| [
{
"created": "Sat, 9 Jun 2007 08:12:55 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Jun 2007 08:20:15 GMT",
"version": "v2"
},
{
"created": "Sat, 10 May 2008 08:45:09 GMT",
"version": "v3"
}
] | 2008-05-10 | [
[
"Bondar",
"N. P.",
""
],
[
"Kovalenko",
"I. L.",
""
],
[
"Avgustinovich",
"D. F.",
""
],
[
"Kudryavtseva",
"N. N.",
""
]
] | Anhedonia is one of the key symptoms of depression in humans. Consumption of 1% sucrose solution supplemented with 0.2% vanillin was studied in two experimental contexts in male mice living under chronic social stress induced by daily experience of defeats in agonistic interactions and leading to development of depression. In the first experiment, vanillin sucrose solution was made available as an alternative to water for 10 days to mice living in group home cages. Then the mice were subjected to social defeat stress and during stress exposure they were provided with both vanillin sucrose solution and water using a free two bottles choice paradigm. In the other experiment, vanillin sucrose solution was first offered to mice after 8 days of exposure to social defeat stress. Males familiar with vanillin sucrose solution showed vanillin sucrose preference while experiencing defeat stress: consumption of vanillin sucrose solution was about 70% of total liquid consumption. However, the consumption of vanillin sucrose solution per gram of body weight in mice subjected to social stress for 20 days was significantly lower than in control males. In the second experiment, males after 8 days of social defeat stress were found to consume significantly less vanillin sucrose solution as compared with control males. On average during two weeks of measurements, vanillin sucrose solution intake was less than 20% of total liquid consumption in males with symptoms of depression and anxiety. Consumption per gram of body weight also appeared to be significantly lower than in the control group. The influence of the experimental context on the development of anhedonia, measured as the reduction in sucrose solution intake by chronically stressed male mice, is discussed. |
2301.08382 | Bo Zhang | Luyao Chen, Zhiqiang Chen, Longsheng Jiang, Xiang Liu, Linlu Xu, Bo
Zhang, Xiaolong Zou, Jinying Gao, Yu Zhu, Xizi Gong, Shan Yu, Sen Song,
Liangyi Chen, Fang Fang, Si Wu, Jia Liu | AI of Brain and Cognitive Sciences: From the Perspective of First
Principles | 59 pages, 5 figures, review article | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Nowadays, we have witnessed the great success of AI in various applications,
including image classification, game playing, protein structure analysis,
language translation, and content generation. Despite these powerful
applications, there are still many tasks in our daily life that are rather
simple to humans but pose great challenges to AI. These include image and
language understanding, few-shot learning, abstract concepts, and low-energy
cost computing. Thus, learning from the brain is still a promising way that can
shed light on the development of next-generation AI. The brain is arguably the
only known intelligent machine in the universe, which is the product of
evolution for animals surviving in the natural environment. At the behavior
level, psychology and cognitive sciences have demonstrated that human and
animal brains can execute very intelligent high-level cognitive functions. At
the structure level, cognitive and computational neurosciences have unveiled
that the brain has extremely complicated but elegant network forms to support
its functions. Over the years, researchers have been gathering knowledge about
the structure and functions of the brain, and this process has recently
accelerated with the initiation of giant brain projects worldwide. Here, we
argue that the
general principles of brain functions are the most valuable things to inspire
the development of AI. These general principles are the standard rules of the
brain extracting, representing, manipulating, and retrieving information, and
here we call them the first principles of the brain. This paper collects six
such first principles. They are attractor network, criticality, random network,
sparse coding, relational memory, and perceptual learning. On each topic, we
review its biological background, fundamental property, potential application
to AI, and future development.
| [
{
"created": "Fri, 20 Jan 2023 01:31:24 GMT",
"version": "v1"
}
] | 2023-01-23 | [
[
"Chen",
"Luyao",
""
],
[
"Chen",
"Zhiqiang",
""
],
[
"Jiang",
"Longsheng",
""
],
[
"Liu",
"Xiang",
""
],
[
"Xu",
"Linlu",
""
],
[
"Zhang",
"Bo",
""
],
[
"Zou",
"Xiaolong",
""
],
[
"Gao",
"Jinying",
""
],
[
"Zhu",
"Yu",
""
],
[
"Gong",
"Xizi",
""
],
[
"Yu",
"Shan",
""
],
[
"Song",
"Sen",
""
],
[
"Chen",
"Liangyi",
""
],
[
"Fang",
"Fang",
""
],
[
"Wu",
"Si",
""
],
[
"Liu",
"Jia",
""
]
] | Nowadays, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation. Despite these powerful applications, there are still many tasks in our daily life that are rather simple to humans but pose great challenges to AI. These include image and language understanding, few-shot learning, abstract concepts, and low-energy cost computing. Thus, learning from the brain is still a promising way that can shed light on the development of next-generation AI. The brain is arguably the only known intelligent machine in the universe, which is the product of evolution for animals surviving in the natural environment. At the behavior level, psychology and cognitive sciences have demonstrated that human and animal brains can execute very intelligent high-level cognitive functions. At the structure level, cognitive and computational neurosciences have unveiled that the brain has extremely complicated but elegant network forms to support its functions. Over the years, researchers have been gathering knowledge about the structure and functions of the brain, and this process has recently accelerated with the initiation of giant brain projects worldwide. Here, we argue that the general principles of brain functions are the most valuable things to inspire the development of AI. These general principles are the standard rules of the brain extracting, representing, manipulating, and retrieving information, and here we call them the first principles of the brain. This paper collects six such first principles. They are attractor network, criticality, random network, sparse coding, relational memory, and perceptual learning. On each topic, we review its biological background, fundamental property, potential application to AI, and future development. |
1109.0029 | Andre X. C. N. Valente | A. X. C. N. Valente, J. H. Shin, A. Sarkar, Y. Gao | Rare coding SNP in DZIP1 gene associated with late-onset sporadic
Parkinson's disease | 14 pages | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the first application of the hypothesis-rich mathematical theory
to genome-wide association data. The Hamza et al. late-onset sporadic
Parkinson's disease genome-wide association study dataset was analyzed. We
found a rare, coding, non-synonymous SNP variant in the gene DZIP1 that confers
increased susceptibility to Parkinson's disease. The association of DZIP1 with
Parkinson's disease is consistent with a Parkinson's disease stem-cell ageing
theory.
| [
{
"created": "Wed, 31 Aug 2011 20:30:26 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Sep 2011 09:24:57 GMT",
"version": "v2"
}
] | 2011-09-13 | [
[
"Valente",
"A. X. C. N.",
""
],
[
"Shin",
"J. H.",
""
],
[
"Sarkar",
"A.",
""
],
[
"Gao",
"Y.",
""
]
] | We present the first application of the hypothesis-rich mathematical theory to genome-wide association data. The Hamza et al. late-onset sporadic Parkinson's disease genome-wide association study dataset was analyzed. We found a rare, coding, non-synonymous SNP variant in the gene DZIP1 that confers increased susceptibility to Parkinson's disease. The association of DZIP1 with Parkinson's disease is consistent with a Parkinson's disease stem-cell ageing theory. |
1001.0685 | Franco Bagnoli | Viet-Anh Nguyen, Zdena Koukolikova-Nicola, Franco Bagnoli, Pietro Lio | Noise and nonlinearities in high-throughput data | 12 pages, 3 figures | J. Stat. Mech. (2009) P01014 | 10.1088/1742-5468/2009/01/P01014 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-throughput data analyses are becoming common in biology, communications,
economics and sociology. The vast amounts of data are usually represented in
the form of matrices and can be considered as knowledge networks. Spectra-based
approaches have proved useful in extracting hidden information within such
networks and for estimating missing data, but these methods are based
essentially on linear assumptions. The physical models of matching, when
applicable, often suggest non-linear mechanisms, that may sometimes be
identified as noise. The use of non-linear models in data analysis, however,
may require the introduction of many parameters, which lowers the statistical
weight of the model. Depending on the quality of the data, a simpler linear
analysis may be more convenient than more complex approaches.
In this paper, we show how a simple non-parametric Bayesian model may be used
to explore the role of non-linearities and noise in synthetic and experimental
data sets.
| [
{
"created": "Tue, 5 Jan 2010 11:52:01 GMT",
"version": "v1"
}
] | 2010-01-06 | [
[
"Nguyen",
"Viet-Anh",
""
],
[
"Koukolikova-Nicola",
"Zdena",
""
],
[
"Bagnoli",
"Franco",
""
],
[
"Lio",
"Pietro",
""
]
] | High-throughput data analyses are becoming common in biology, communications, economics and sociology. The vast amounts of data are usually represented in the form of matrices and can be considered as knowledge networks. Spectra-based approaches have proved useful in extracting hidden information within such networks and for estimating missing data, but these methods are based essentially on linear assumptions. The physical models of matching, when applicable, often suggest non-linear mechanisms, that may sometimes be identified as noise. The use of non-linear models in data analysis, however, may require the introduction of many parameters, which lowers the statistical weight of the model. Depending on the quality of the data, a simpler linear analysis may be more convenient than more complex approaches. In this paper, we show how a simple non-parametric Bayesian model may be used to explore the role of non-linearities and noise in synthetic and experimental data sets. |
1807.11120 | Zixuan Cang | Zixuan Cang, Guo-Wei Wei | Persistent cohomology for data with multicomponent heterogeneous
information | null | null | null | null | q-bio.QM math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Persistent homology is a powerful tool for characterizing the topology of a
data set at various geometric scales. When applied to the description of
molecular structures, persistent homology can capture the multiscale geometric
features and reveal certain interaction patterns in terms of topological
invariants. However, in addition to the geometric information, there is a wide
variety of non-geometric information of molecular structures, such as element
types, atomic partial charges, atomic pairwise interactions, and electrostatic
potential function, that is not described by persistent homology. Although
element specific homology and electrostatic persistent homology can encode some
non-geometric information into geometry based topological invariants, it is
desirable to have a mathematical framework to systematically embed both
geometric and non-geometric information, i.e., multicomponent heterogeneous
information, into unified topological descriptions. To this end, we propose a
mathematical framework based on persistent cohomology. In our framework,
non-geometric information can either be distributed globally or reside locally
on the datasets in the geometric sense and can be properly defined on
topological spaces, i.e., simplicial complexes. Using the proposed persistent
cohomology based framework, enriched barcodes are extracted from datasets to
represent heterogeneous information. We consider a variety of datasets to
validate the present formulation and illustrate the usefulness of the proposed
persistent cohomology. It is found that the proposed framework using cohomology
boosts the performance of persistent homology based methods in the
protein-ligand binding affinity prediction on massive biomolecular datasets.
| [
{
"created": "Sun, 29 Jul 2018 23:10:08 GMT",
"version": "v1"
}
] | 2018-07-31 | [
[
"Cang",
"Zixuan",
""
],
[
"Wei",
"Guo-Wei",
""
]
] ] | Persistent homology is a powerful tool for characterizing the topology of a data set at various geometric scales. When applied to the description of molecular structures, persistent homology can capture the multiscale geometric features and reveal certain interaction patterns in terms of topological invariants. However, in addition to the geometric information, there is a wide variety of non-geometric information of molecular structures, such as element types, atomic partial charges, atomic pairwise interactions, and electrostatic potential function, that is not described by persistent homology. Although element specific homology and electrostatic persistent homology can encode some non-geometric information into geometry based topological invariants, it is desirable to have a mathematical framework to systematically embed both geometric and non-geometric information, i.e., multicomponent heterogeneous information, into unified topological descriptions. To this end, we propose a mathematical framework based on persistent cohomology. In our framework, non-geometric information can either be distributed globally or reside locally on the datasets in the geometric sense and can be properly defined on topological spaces, i.e., simplicial complexes. Using the proposed persistent cohomology based framework, enriched barcodes are extracted from datasets to represent heterogeneous information. We consider a variety of datasets to validate the present formulation and illustrate the usefulness of the proposed persistent cohomology. It is found that the proposed framework using cohomology boosts the performance of persistent homology based methods in the protein-ligand binding affinity prediction on massive biomolecular datasets. |
1706.01345 | Antti Niemi | Yanzhen Hou, Jin Dai, Nevena Ilieva, Antti J. Niemi, Xubiao Peng,
Jianfeng He | Virtual reality analysis of intrinsic protein geometry with applications
to cis peptide planes | 25 figures | null | null | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A protein is traditionally visualised as a piecewise linear discrete curve,
and its geometry is conventionally characterised by the extrinsically
determined Ramachandran angles. However, a protein backbone also has two
independent intrinsic geometric structures, due to the peptide planes and the
side chains. Here we adapt and develop modern 3D virtual reality techniques to
scrutinize the atomic geometry along a protein backbone, in the vicinity of a
peptide plane. For this we compare backbone geometry-based (extrinsic) and
structure-based (intrinsic) coordinate systems, and as an example we inspect
the trans and cis peptide planes. We reveal systematics in how a cis
peptide plane deforms the neighbouring atomic geometry, and we develop a
virtual reality based visual methodology that can identify the presence of a
cis peptide plane from the arrangement of atoms in its vicinity. Our approach
can easily detect exceptionally placed atoms in crystallographic structures.
Thus it can be employed as a powerful visual refinement tool which is
also applicable when the resolution of the protein structure is limited
and whenever refinement is needed. As concrete examples we identify a number of
crystallographic protein structures in Protein Data Bank (PDB) that display
exceptional atomic positions around their cis peptide planes.
| [
{
"created": "Mon, 5 Jun 2017 14:24:42 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jun 2017 07:22:11 GMT",
"version": "v2"
}
] | 2017-06-07 | [
[
"Hou",
"Yanzhen",
""
],
[
"Dai",
"Jin",
""
],
[
"Ilieva",
"Nevena",
""
],
[
"Niemi",
"Antti J.",
""
],
[
"Peng",
"Xubiao",
""
],
[
"He",
"Jianfeng",
""
]
] ] | A protein is traditionally visualised as a piecewise linear discrete curve, and its geometry is conventionally characterised by the extrinsically determined Ramachandran angles. However, a protein backbone also has two independent intrinsic geometric structures, due to the peptide planes and the side chains. Here we adapt and develop modern 3D virtual reality techniques to scrutinize the atomic geometry along a protein backbone, in the vicinity of a peptide plane. For this we compare backbone geometry-based (extrinsic) and structure-based (intrinsic) coordinate systems, and as an example we inspect the trans and cis peptide planes. We reveal systematics in how a cis peptide plane deforms the neighbouring atomic geometry, and we develop a virtual reality based visual methodology that can identify the presence of a cis peptide plane from the arrangement of atoms in its vicinity. Our approach can easily detect exceptionally placed atoms in crystallographic structures. Thus it can be employed as a powerful visual refinement tool which is also applicable when the resolution of the protein structure is limited and whenever refinement is needed. As concrete examples we identify a number of crystallographic protein structures in Protein Data Bank (PDB) that display exceptional atomic positions around their cis peptide planes. |
q-bio/0502043 | Kevin E. Cahill | Kevin Cahill | Helices in Biomolecules | five pages, no figures | Physical Review E 72, 062901 (2005) | 10.1103/PhysRevE.72.062901 | null | q-bio.BM | null | Identical objects, regularly assembled, form a helix, which is the principal
motif of nucleic acids, proteins, and viral capsids.
| [
{
"created": "Sun, 27 Feb 2005 00:25:52 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Mar 2005 05:22:08 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Jun 2005 06:36:40 GMT",
"version": "v3"
},
{
"created": "Sat, 27 Aug 2005 22:15:37 GMT",
"version": "v4"
},
{
"created": "Sat, 15 Oct 2005 04:46:43 GMT",
"version": "v5"
}
] | 2009-11-11 | [
[
"Cahill",
"Kevin",
""
]
] | Identical objects, regularly assembled, form a helix, which is the principal motif of nucleic acids, proteins, and viral capsids. |
2102.03422 | Jordi Garcia-Ojalvo | Gabriel Torregrosa and Jordi Garcia-Ojalvo | Mechanistic models of cell-fate transitions from single-cell data | 6 pages, 4 figures | null | null | null | q-bio.CB nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Our knowledge of how individual cells self-organize to form complex
multicellular systems is being revolutionized by a data outburst, coming from
high-throughput experimental breakthroughs such as single-cell RNA sequencing
and spatially resolved single-molecule FISH. This information is starting to be
leveraged by machine learning approaches that are helping us establish a census
and timeline of cell types in developing organisms, shedding light on how
biochemistry regulates cell-fate decisions. In parallel, imaging tools such as
light-sheet microscopy are revealing how cells self-assemble in space and time
as the organism forms, thereby elucidating the role of cell mechanics in
development. Here we argue that mathematical modeling can bring together these
two perspectives, by enabling us to test hypotheses about specific mechanisms,
which can be further validated experimentally. We review the recent literature
on this subject, focusing on representative examples that use modeling to
better understand how single-cell behavior shapes multicellular organisms.
| [
{
"created": "Fri, 5 Feb 2021 21:17:27 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Apr 2021 09:01:33 GMT",
"version": "v2"
}
] | 2021-04-20 | [
[
"Torregrosa",
"Gabriel",
""
],
[
"Garcia-Ojalvo",
"Jordi",
""
]
] | Our knowledge of how individual cells self-organize to form complex multicellular systems is being revolutionized by a data outburst, coming from high-throughput experimental breakthroughs such as single-cell RNA sequencing and spatially resolved single-molecule FISH. This information is starting to be leveraged by machine learning approaches that are helping us establish a census and timeline of cell types in developing organisms, shedding light on how biochemistry regulates cell-fate decisions. In parallel, imaging tools such as light-sheet microscopy are revealing how cells self-assemble in space and time as the organism forms, thereby elucidating the role of cell mechanics in development. Here we argue that mathematical modeling can bring together these two perspectives, by enabling us to test hypotheses about specific mechanisms, which can be further validated experimentally. We review the recent literature on this subject, focusing on representative examples that use modeling to better understand how single-cell behavior shapes multicellular organisms. |
1703.07401 | Evgeny Ivanko | Evgeny Ivanko | Big-data approach in abundance estimation of non-identifiable animals
with camera-traps at the spots of attraction | 4 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Camera-traps are a relatively new but already popular instrument in the
estimation of abundance of non-identifiable animals. Although camera-traps are
convenient in application, there remain both theoretical complications such as
spatial autocorrelation or the false-negative problem and practical difficulties,
for example, laborious random sampling. In the article we propose an
alternative way to bypass the mentioned problems.
In the proposed approach, the raw video information collected from the
camera-traps situated at the spots of natural attraction is turned into the
frequency of visits, and the latter is transformed into the desired abundance
estimate. The key for such a transformation is the application of the
correction coefficients, computed for each particular observation environment
using the Bayesian approach and the massive database (DB) of observations under
various conditions.
The main result of the article is a new method of census based on video-data
from camera-traps at the spots of natural attraction and information from a
special community-driven database.
The proposed method is based on automated video-capturing at a moderate
number of easy to reach spots, so in the long term many laborious census works
may be conducted more easily and cheaply and cause less disturbance to wildlife.
Information post-processing is strictly formalized, which leaves little chance
for subjective alterations. However, the method heavily relies on the volume
and quality of the DB, which in turn heavily relies on the efforts of the
community. There is realistic hope that the community of zoologists and
environment specialists could create and maintain a DB similar to the proposed
one. Such a rich DB of visits might benefit not only censuses, but also many
behavioral studies.
| [
{
"created": "Tue, 21 Mar 2017 19:39:09 GMT",
"version": "v1"
}
] | 2017-03-23 | [
[
"Ivanko",
"Evgeny",
""
]
] ] | Camera-traps are a relatively new but already popular instrument in the estimation of abundance of non-identifiable animals. Although camera-traps are convenient in application, there remain both theoretical complications such as spatial autocorrelation or the false-negative problem and practical difficulties, for example, laborious random sampling. In the article we propose an alternative way to bypass the mentioned problems. In the proposed approach, the raw video information collected from the camera-traps situated at the spots of natural attraction is turned into the frequency of visits, and the latter is transformed into the desired abundance estimate. The key for such a transformation is the application of the correction coefficients, computed for each particular observation environment using the Bayesian approach and the massive database (DB) of observations under various conditions. The main result of the article is a new method of census based on video-data from camera-traps at the spots of natural attraction and information from a special community-driven database. The proposed method is based on automated video-capturing at a moderate number of easy to reach spots, so in the long term many laborious census works may be conducted more easily and cheaply and cause less disturbance to wildlife. Information post-processing is strictly formalized, which leaves little chance for subjective alterations. However, the method heavily relies on the volume and quality of the DB, which in turn heavily relies on the efforts of the community. There is realistic hope that the community of zoologists and environment specialists could create and maintain a DB similar to the proposed one. Such a rich DB of visits might benefit not only censuses, but also many behavioral studies. |
1012.2422 | Max Alekseyev | Shuai Jiang and Max A. Alekseyev | Weighted genomic distance can hardly impose a bound on the proportion of
transpositions | The 15th Annual International Conference on Research in Computational
Molecular Biology (RECOMB), 2011. (to appear) | Lecture Notes in Computer Science 6577 (2011), pp. 124-133 | 10.1007/978-3-642-20036-6_13 | null | q-bio.GN cs.DM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genomic distance between two genomes, i.e., the smallest number of genome
rearrangements required to transform one genome into the other, is often used
as a measure of evolutionary closeness of the genomes in comparative genomics
studies. However, in models that include rearrangements of significantly
different "power" such as reversals (that are "weak" and most frequent
rearrangements) and transpositions (that are more "powerful" but rare), the
genomic distance typically corresponds to a transformation with a large
proportion of transpositions, which is not biologically adequate.
Weighted genomic distance is a traditional approach to bounding the
proportion of transpositions by assigning them a relative weight {\alpha} > 1.
A number of previous studies addressed the problem of computing weighted
genomic distance with {\alpha} \leq 2.
Employing the model of multi-break rearrangements on circular genomes, that
captures both reversals (modelled as 2-breaks) and transpositions (modelled as
3-breaks), we prove that for {\alpha} \in (1,2], a minimum-weight
transformation may entirely consist of transpositions, implying that the
corresponding weighted genomic distance does not actually achieve its purpose
of bounding the proportion of transpositions. We further prove that for
{\alpha} \in (1,2), the minimum-weight transformations do not depend on a
particular choice of {\alpha} from this interval. We give a complete
characterization of such transformations and show that they coincide with the
transformations that at the same time have the shortest length and make the
smallest number of breakages in the genomes.
Our results also provide a theoretical foundation for the empirical
observation that for {\alpha} < 2, transpositions are favored over reversals in
the minimum-weight transformations.
| [
{
"created": "Sat, 11 Dec 2010 02:26:27 GMT",
"version": "v1"
}
] | 2011-03-30 | [
[
"Jiang",
"Shuai",
""
],
[
"Alekseyev",
"Max A.",
""
]
] | Genomic distance between two genomes, i.e., the smallest number of genome rearrangements required to transform one genome into the other, is often used as a measure of evolutionary closeness of the genomes in comparative genomics studies. However, in models that include rearrangements of significantly different "power" such as reversals (that are "weak" and most frequent rearrangements) and transpositions (that are more "powerful" but rare), the genomic distance typically corresponds to a transformation with a large proportion of transpositions, which is not biologically adequate. Weighted genomic distance is a traditional approach to bounding the proportion of transpositions by assigning them a relative weight {\alpha} > 1. A number of previous studies addressed the problem of computing weighted genomic distance with {\alpha} \leq 2. Employing the model of multi-break rearrangements on circular genomes, that captures both reversals (modelled as 2-breaks) and transpositions (modelled as 3-breaks), we prove that for {\alpha} \in (1,2], a minimum-weight transformation may entirely consist of transpositions, implying that the corresponding weighted genomic distance does not actually achieve its purpose of bounding the proportion of transpositions. We further prove that for {\alpha} \in (1,2), the minimum-weight transformations do not depend on a particular choice of {\alpha} from this interval. We give a complete characterization of such transformations and show that they coincide with the transformations that at the same time have the shortest length and make the smallest number of breakages in the genomes. Our results also provide a theoretical foundation for the empirical observation that for {\alpha} < 2, transpositions are favored over reversals in the minimum-weight transformations. |
1401.4668 | Sadra Sadeh | Sadra Sadeh, Stefano Cardanobile and Stefan Rotter | Mean-Field Analysis of Orientation Selectivity in Inhibition-Dominated
Networks of Spiking Neurons | 19 figures | SpringerPlus 3: 148, 2014 | 10.1186/2193-1801-3-148. | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mechanisms underlying the emergence of orientation selectivity in the primary
visual cortex are highly debated. Here we study the contribution of
inhibition-dominated random recurrent networks to orientation selectivity, and
more generally to sensory processing. By simulating and analyzing large-scale
networks of spiking neurons, we investigate tuning amplification and contrast
invariance of orientation selectivity in these networks. In particular, we show
how selective attenuation of the common mode and amplification of the
modulation component take place in these networks. Selective attenuation of the
baseline, which is governed by the exceptional eigenvalue of the connectivity
matrix, removes the unspecific, redundant signal component and ensures the
invariance of selectivity across different contrasts. Selective amplification
of modulation, which is governed by the operating regime of the network and
depends on the strength of coupling, amplifies the informative signal component
and thus increases the signal-to-noise ratio. Here, we perform a mean-field
analysis which accounts for this process.
| [
{
"created": "Sun, 19 Jan 2014 13:59:09 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Jun 2014 11:40:10 GMT",
"version": "v2"
}
] | 2014-06-03 | [
[
"Sadeh",
"Sadra",
""
],
[
"Cardanobile",
"Stefano",
""
],
[
"Rotter",
"Stefan",
""
]
] | Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process. |
1009.5962 | Raul Isea | Raul Isea, Juan L. Chaves, Fernando Blanco, Rafael Mayo | Phylogenetic tree calculations using the Grid with DAG job | 5 pages, 2 figures | Revista RET (2009). Vol. 1(2), pp. 21-25 | null | null | q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | The goal of the work is to implement molecular phylogenetic calculations
using the Grid paradigm by means of the MrBayes software with Directed Acyclic
Graph (DAG) jobs. In this method, a set of jobs depends on the input or the
output of other jobs. Once the runs have been successfully done, all the
results can be collected by a specific Perl script inside the defined DAG job.
For testing this methodology, we calculate the evolution of papillomavirus with
121 sequences.
| [
{
"created": "Wed, 29 Sep 2010 18:02:41 GMT",
"version": "v1"
}
] | 2010-09-30 | [
[
"Isea",
"Raul",
""
],
[
"Chaves",
"Juan L.",
""
],
[
"Blanco",
"Fernando",
""
],
[
"Mayo",
"Rafael",
""
]
] ] | The goal of the work is to implement molecular phylogenetic calculations using the Grid paradigm by means of the MrBayes software with Directed Acyclic Graph (DAG) jobs. In this method, a set of jobs depends on the input or the output of other jobs. Once the runs have been successfully done, all the results can be collected by a specific Perl script inside the defined DAG job. For testing this methodology, we calculate the evolution of papillomavirus with 121 sequences. |
1511.00107 | Thierry Mora | Yuval Elhanati, Quentin Marcou, Thierry Mora, and Aleksandra M.
Walczak | repgenHMM: a dynamic programming tool to infer the rules of immune
receptor generation from sequence data | null | Bioinformatics (2016) 32 (13): 1943-1951 | 10.1093/bioinformatics/btw112 | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The diversity of the immune repertoire is initially generated by random
rearrangements of the receptor gene during early T and B cell development.
Rearrangement scenarios are composed of random events -- choices of gene
templates, base pair deletions and insertions -- described by probability
distributions. Not all scenarios are equally likely, and the same receptor
sequence may be obtained in several different ways. Quantifying the
distribution of these rearrangements is an essential baseline for studying the
immune system diversity. Inferring the properties of the distributions from
receptor sequences is a computationally hard problem, requiring enumerating
every possible scenario for every sampled receptor sequence. We present a
Hidden Markov model, which accounts for all plausible scenarios that can
generate the receptor sequences. We developed and implemented a method based on
the Baum-Welch algorithm that can efficiently infer the parameters for the
different events of the rearrangement process. We tested our software tool on
sequence data for both the alpha and beta chains of the T cell receptor. To
test the validity of our algorithm, we also generated synthetic sequences
produced by a known model, and confirmed that its parameters could be
accurately inferred back from the sequences. The inferred model can be used to
generate synthetic sequences, to calculate the probability of generation of any
receptor sequence, as well as the theoretical diversity of the repertoire. We
estimate this diversity to be $\approx 10^{23}$ for human T cells. The model
gives a baseline to investigate the selection and dynamics of immune
repertoires.
| [
{
"created": "Sat, 31 Oct 2015 10:27:12 GMT",
"version": "v1"
}
] | 2016-06-29 | [
[
"Elhanati",
"Yuval",
""
],
[
"Marcou",
"Quentin",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
]
] | The diversity of the immune repertoire is initially generated by random rearrangements of the receptor gene during early T and B cell development. Rearrangement scenarios are composed of random events -- choices of gene templates, base pair deletions and insertions -- described by probability distributions. Not all scenarios are equally likely, and the same receptor sequence may be obtained in several different ways. Quantifying the distribution of these rearrangements is an essential baseline for studying the immune system diversity. Inferring the properties of the distributions from receptor sequences is a computationally hard problem, requiring enumerating every possible scenario for every sampled receptor sequence. We present a Hidden Markov model, which accounts for all plausible scenarios that can generate the receptor sequences. We developed and implemented a method based on the Baum-Welch algorithm that can efficiently infer the parameters for the different events of the rearrangement process. We tested our software tool on sequence data for both the alpha and beta chains of the T cell receptor. To test the validity of our algorithm, we also generated synthetic sequences produced by a known model, and confirmed that its parameters could be accurately inferred back from the sequences. The inferred model can be used to generate synthetic sequences, to calculate the probability of generation of any receptor sequence, as well as the theoretical diversity of the repertoire. We estimate this diversity to be $\approx 10^{23}$ for human T cells. The model gives a baseline to investigate the selection and dynamics of immune repertoires. |
1712.00194 | Fangwei Si | Suckjoon Jun, Fangwei Si, Rami Pugatch, Matthew Scott | Fundamental Principles in Bacterial Physiology - History, Recent
progress, and the Future with Focus on Cell Size Control: A Review | Published in Reports on Progress in Physics.
(https://doi.org/10.1088/1361-6633/aaa628) 96 pages, 48 figures, 7 boxes, 715
references | Suckjoon Jun et al 2018 Rep. Prog. Phys. 81 056601 | 10.1088/1361-6633/aaa628 | null | q-bio.CB q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacterial physiology is a branch of biology that aims to understand
overarching principles of cellular reproduction. Many important issues in
bacterial physiology are inherently quantitative, and major contributors to the
field have often brought together tools and ways of thinking from multiple
disciplines. This article presents a comprehensive overview of major ideas and
approaches developed since the early 20th century for anyone who is interested
in the fundamental problems in bacterial physiology. This article is divided
into two parts. In the first part (Sections 1 to 3), we review the first
`golden era' of bacterial physiology from the 1940s to early 1970s and provide
a complete list of major references from that period. In the second part
(Sections 4 to 7), we explain how the pioneering work from the first golden era
has influenced various rediscoveries of general quantitative principles and
significant further development in modern bacterial physiology. Specifically,
Section 4 presents the history and current progress of the `adder' principle of
cell size homeostasis. Section 5 discusses the implications of coarse-graining
the cellular protein composition, and how the coarse-grained proteome `sectors'
re-balance under different growth conditions. Section 6 focuses on
physiological invariants, and explains how they are the key to understanding
the coordination between growth and the cell cycle underlying cell size control
in steady-state growth. Section 7 overviews how the temporal organization of
all the internal processes enables balanced growth. In the final Section 8, we
conclude by discussing the remaining challenges for the future in the field.
| [
{
"created": "Fri, 1 Dec 2017 05:02:51 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jan 2018 05:48:34 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Mar 2018 05:00:54 GMT",
"version": "v3"
}
] | 2018-03-15 | [
[
"Jun",
"Suckjoon",
""
],
[
"Si",
"Fangwei",
""
],
[
"Pugatch",
"Rami",
""
],
[
"Scott",
"Matthew",
""
]
] | Bacterial physiology is a branch of biology that aims to understand overarching principles of cellular reproduction. Many important issues in bacterial physiology are inherently quantitative, and major contributors to the field have often brought together tools and ways of thinking from multiple disciplines. This article presents a comprehensive overview of major ideas and approaches developed since the early 20th century for anyone who is interested in the fundamental problems in bacterial physiology. This article is divided into two parts. In the first part (Sections 1 to 3), we review the first `golden era' of bacterial physiology from the 1940s to early 1970s and provide a complete list of major references from that period. In the second part (Sections 4 to 7), we explain how the pioneering work from the first golden era has influenced various rediscoveries of general quantitative principles and significant further development in modern bacterial physiology. Specifically, Section 4 presents the history and current progress of the `adder' principle of cell size homeostasis. Section 5 discusses the implications of coarse-graining the cellular protein composition, and how the coarse-grained proteome `sectors' re-balance under different growth conditions. Section 6 focuses on physiological invariants, and explains how they are the key to understanding the coordination between growth and the cell cycle underlying cell size control in steady-state growth. Section 7 overviews how the temporal organization of all the internal processes enables balanced growth. In the final Section 8, we conclude by discussing the remaining challenges for the future in the field. |
1903.07103 | Kabir Husain | Kabir Husain, Weerapat Pittayakanchit, Gopal Pattanayak, Michael J
Rust, Arvind Murugan | The self-tuned sensitivity of circadian clocks | Main text (6 pages, 4 Figures) + SI (6 pages, 1 Figure) | null | null | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living organisms need to be sensitive to a changing environment while also
ignoring uninformative environmental fluctuations. Here, we show that the
circadian clock in \textit{Synechococcus elongatus} can naturally tune its
environmental sensitivity, through a clock-metabolism coupling quantified in
recent experiments. The metabolic coupling can detect mismatch between clock
predictions and the day-night light cycle, and temporarily raise the clock's
sensitivity to light changes and thus entrain faster. We also analyze analogous
behavior in recent experiments on switching between slow and fast osmotic
stress response pathways in yeast. In both cases, cells can raise their
sensitivity to new external information in epochs of frequent challenging
stress, much like a Kalman filter with adaptive gain in signal processing. Our
work suggests a new class of experiments that probe the history-dependence of
environmental sensitivity in biophysical sensing mechanisms.
| [
{
"created": "Sun, 17 Mar 2019 15:06:23 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Husain",
"Kabir",
""
],
[
"Pittayakanchit",
"Weerapat",
""
],
[
"Pattanayak",
"Gopal",
""
],
[
"Rust",
"Michael J",
""
],
[
"Murugan",
"Arvind",
""
]
] | Living organisms need to be sensitive to a changing environment while also ignoring uninformative environmental fluctuations. Here, we show that the circadian clock in \textit{Synechococcus elongatus} can naturally tune its environmental sensitivity, through a clock-metabolism coupling quantified in recent experiments. The metabolic coupling can detect mismatch between clock predictions and the day-night light cycle, and temporarily raise the clock's sensitivity to light changes and thus entrain faster. We also analyze analogous behavior in recent experiments on switching between slow and fast osmotic stress response pathways in yeast. In both cases, cells can raise their sensitivity to new external information in epochs of frequent challenging stress, much like a Kalman filter with adaptive gain in signal processing. Our work suggests a new class of experiments that probe the history-dependence of environmental sensitivity in biophysical sensing mechanisms. |
1404.5072 | Wolfram Liebermeister | Wolfram Liebermeister | Metabolic fluxes and value production | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metabolic fluxes in cells are governed by physical, physiological, and
economic principles. Here I assume an optimal allocation of enzyme resources
and postulate a general principle for metabolism: each enzyme must convert less
valuable into more valuable metabolites to justify its own cost. The "values",
called economic potentials, describe the individual contributions of
metabolites to cell fitness. Local value production implies that the cost of an
enzyme must be balanced by a benefit, given by the economic potential
difference of the catalysed reaction multiplied by the flux. Flux profiles that
satisfy this principle - i.e. for which consistent potentials can be found -
are called economical. Economical fluxes must lead from lower to higher
economic potentials, so certain flux cycles are incompatible with any choice of
economic potentials and can be excluded. To obtain economical flux profiles,
non-beneficial local patterns, called futile motifs, can be systematically
removed from a given flux distribution. The principle of local value production
resembles thermodynamic principles and complements them in models. Here I
describe a modelling framework called Value Balance Analysis (VBA) that uses
the two principles and yields the same solution as enzyme cost minimisation (in
kinetic models) and flux cost minimisation (in FBA). Given an economical flux
distribution, kinetic models in enzyme-optimal states and with these fluxes can
be constructed systematically. VBA justifies the principle of minimal fluxes
and the exclusion of futile cycles, predicts enzymes that could be plausible
targets for regulation, provides criteria for the usage of enzymes and
pathways, and explains the choice between high-yield and low-yield flux modes.
| [
{
"created": "Sun, 20 Apr 2014 22:04:50 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Apr 2014 14:21:35 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Oct 2022 18:17:41 GMT",
"version": "v3"
}
] | 2022-10-06 | [
[
"Liebermeister",
"Wolfram",
""
]
] | Metabolic fluxes in cells are governed by physical, physiological, and economic principles. Here I assume an optimal allocation of enzyme resources and postulate a general principle for metabolism: each enzyme must convert less valuable into more valuable metabolites to justify its own cost. The "values", called economic potentials, describe the individual contributions of metabolites to cell fitness. Local value production implies that the cost of an enzyme must be balanced by a benefit, given by the economic potential difference of the catalysed reaction multiplied by the flux. Flux profiles that satisfy this principle - i.e. for which consistent potentials can be found - are called economical. Economical fluxes must lead from lower to higher economic potentials, so certain flux cycles are incompatible with any choice of economic potentials and can be excluded. To obtain economical flux profiles, non-beneficial local patterns, called futile motifs, can be systematically removed from a given flux distribution. The principle of local value production resembles thermodynamic principles and complements them in models. Here I describe a modelling framework called Value Balance Analysis (VBA) that uses the two principles and yields the same solution as enzyme cost minimisation (in kinetic models) and flux cost minimisation (in FBA). Given an economical flux distribution, kinetic models in enzyme-optimal states and with these fluxes can be constructed systematically. VBA justifies the principle of minimal fluxes and the exclusion of futile cycles, predicts enzymes that could be plausible targets for regulation, provides criteria for the usage of enzymes and pathways, and explains the choice between high-yield and low-yield flux modes.
1507.00235 | Stefano Fusi | Daniel Mart\'i, Mattia Rigotti, Mingoo Seok, Stefano Fusi | Energy-efficient neuromorphic classifiers | 11 pages, 7 figures | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuromorphic engineering combines the architectural and computational
principles of systems neuroscience with semiconductor electronics, with the aim
of building efficient and compact devices that mimic the synaptic and neural
machinery of the brain. Neuromorphic engineering promises extremely low energy
consumptions, comparable to those of the nervous system. However, until now the
neuromorphic approach has been restricted to relatively simple circuits and
specialized functions, rendering elusive a direct comparison of their energy
consumption to that used by conventional von Neumann digital machines solving
real-world tasks. Here we show that a recent technology developed by IBM can be
leveraged to realize neuromorphic circuits that operate as classifiers of
complex real-world stimuli. These circuits emulate enough neurons to compete
with state-of-the-art classifiers. We also show that the energy consumption of
the IBM chip is typically 2 or more orders of magnitude lower than that of
conventional digital machines when implementing classifiers with comparable
performance. Moreover, the spike-based dynamics display a trade-off between
integration time and accuracy, which naturally translates into algorithms that
can be flexibly deployed for either fast and approximate classifications, or
more accurate classifications at the mere expense of longer running times and
higher energy costs. This work finally proves that the neuromorphic approach
can be efficiently used in real-world applications and it has significant
advantages over conventional digital devices when energy consumption is
considered.
| [
{
"created": "Wed, 1 Jul 2015 13:52:07 GMT",
"version": "v1"
}
] | 2015-07-02 | [
[
"Martí",
"Daniel",
""
],
[
"Rigotti",
"Mattia",
""
],
[
"Seok",
"Mingoo",
""
],
[
"Fusi",
"Stefano",
""
]
] | Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. Neuromorphic engineering promises extremely low energy consumptions, comparable to those of the nervous system. However, until now the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, rendering elusive a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. These circuits emulate enough neurons to compete with state-of-the-art classifiers. We also show that the energy consumption of the IBM chip is typically 2 or more orders of magnitude lower than that of conventional digital machines when implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and it has significant advantages over conventional digital devices when energy consumption is considered. |
2011.03260 | Muktish Acharyya | Agniva Datta and Muktish Acharyya | Phase transition in Kermack-McKendrick Model of Epidemic: Effects of
Additional Nonlinearity and Introduction of Medicated Immunity | A few more references added | null | null | PU-Phys-6-11-2020 | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical modelling of the spread of epidemics has been an interesting
challenge in the field of epidemiology. The SIR Model proposed by Kermack and
McKendrick in 1927 is a prototypical model of epidemiology. However, it has its
limitations. In this paper, we show two independent ways of generalizing this
model, the first one if the vaccine isn't discovered or ready to use and the
next one, if the vaccine is discovered and ready to use. In the first part, we
have pointed out a major over-simplification, i.e., assumption of variation of
the time derivatives of the variables with the linear or quadratic powers of
the individual variables and introduce two new parameters to incorporate
further nonlinearity in the number of infected people in the model. As a result
of this, we show how this additional nonlinearity, in the newly introduced
parameters, can bring a significant shift in the peak time of infection, i.e.,
the time at which the infected population reaches its maximum. We show that in
special cases, we can even obtain a transition from an epidemic to a non-epidemic
stage of a particular infectious disease. We further study one such special
case and treat it as a problem of phase transition. Then, we investigate all
the necessary parameters of this phase transition, like the order parameter and
critical exponent. We observe that $O_p \sim (q_c-q)^{\beta}$. {\it As far as
we know, the phase transition and its quantification in terms of the scaling
behaviour are not yet known in the context of a pandemic}. In the second part, we
incorporate into the model a consideration of artificial herd immunity and show
how we can decrease the peak time of infection with a subsequent decrease in
the maximum number of infected people.
| [
{
"created": "Fri, 6 Nov 2020 10:08:10 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2020 08:23:37 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Dec 2020 15:25:27 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Mar 2021 11:07:26 GMT",
"version": "v4"
},
{
"created": "Sun, 11 Apr 2021 06:17:44 GMT",
"version": "v5"
}
] | 2021-04-13 | [
[
"Datta",
"Agniva",
""
],
[
"Acharyya",
"Muktish",
""
]
] | Mathematical modelling of the spread of epidemics has been an interesting challenge in the field of epidemiology. The SIR Model proposed by Kermack and McKendrick in 1927 is a prototypical model of epidemiology. However, it has its limitations. In this paper, we show two independent ways of generalizing this model, the first one if the vaccine isn't discovered or ready to use and the next one, if the vaccine is discovered and ready to use. In the first part, we have pointed out a major over-simplification, i.e., assumption of variation of the time derivatives of the variables with the linear or quadratic powers of the individual variables and introduce two new parameters to incorporate further nonlinearity in the number of infected people in the model. As a result of this, we show how this additional nonlinearity, in the newly introduced parameters, can bring a significant shift in the peak time of infection, i.e., the time at which the infected population reaches its maximum. We show that in special cases, we can even obtain a transition from an epidemic to a non-epidemic stage of a particular infectious disease. We further study one such special case and treat it as a problem of phase transition. Then, we investigate all the necessary parameters of this phase transition, like the order parameter and critical exponent. We observe that $O_p \sim (q_c-q)^{\beta}$. {\it As far as we know, the phase transition and its quantification in terms of the scaling behaviour are not yet known in the context of a pandemic}. In the second part, we incorporate into the model a consideration of artificial herd immunity and show how we can decrease the peak time of infection with a subsequent decrease in the maximum number of infected people.
1311.3237 | Sandro Ely de Souza Pinto | Pedro J. Miranda, Murilo Delgobo, Giovanni M. Favero, K\'atia S.
Paludo, Murilo S. Baptista and Sandro E. de S. Pinto | The oral tolerance as a complex network phenomenon | 21 pages, 2 figures, 2 tables | null | null | null | q-bio.MN nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The phenomenon of oral tolerance refers to a local and systemic state of
tolerance, induced in the gut associated lymphoid tissues, after its exposure
to innocuous antigens, such as food proteins. While recent findings shed light
on the cellular and molecular basis of oral tolerance, the network of
interactions between the components mediating oral tolerance has not been
investigated yet. Our work brings a complex systems theory approach, aiming to
identify the contribution of each element in an oral tolerance network. We also
propose a model that allows both dynamical and topological quantification,
encompassing the functional responses of the local host involved in oral
tolerance. To keep our model grounded in reality, we test knockouts (KO) of
immunological components (i.e., silencing a vertex) and see how the dynamics
diverge from those of the topologically healthy system. The results from these
simulated KOs are then compared to real molecular knockouts. To draw inferences
from this procedure, we apply a new implementation of a random-walk algorithm
for directed graphs, which ultimately generates statistical quantities derived
from the dynamical behaviour of the simulated KOs. It was observed that a
specific KO caused the greatest impact on the network's standard flux. In a
brief analysis, the results obtained correspond to biological data. Our model
addresses both topological properties and dynamical relations. The construction
of a qualitative dynamic model for oral tolerance can reflect empirical
observations, through the standard flux results and the relative error based on
individual knockouts.
| [
{
"created": "Wed, 13 Nov 2013 18:16:28 GMT",
"version": "v1"
}
] | 2013-11-14 | [
[
"Miranda",
"Pedro J.",
""
],
[
"Delgobo",
"Murilo",
""
],
[
"Favero",
"Giovanni M.",
""
],
[
"Paludo",
"Kátia S.",
""
],
[
"Baptista",
"Murilo S.",
""
],
[
"Pinto",
"Sandro E. de S.",
""
]
] | The phenomenon of oral tolerance refers to a local and systemic state of tolerance, induced in the gut associated lymphoid tissues, after its exposure to innocuous antigens, such as food proteins. While recent findings shed light on the cellular and molecular basis of oral tolerance, the network of interactions between the components mediating oral tolerance has not been investigated yet. Our work brings a complex systems theory approach, aiming to identify the contribution of each element in an oral tolerance network. We also propose a model that allows both dynamical and topological quantification, encompassing the functional responses of the local host involved in oral tolerance. To keep our model grounded in reality, we test knockouts (KO) of immunological components (i.e., silencing a vertex) and see how the dynamics diverge from those of the topologically healthy system. The results from these simulated KOs are then compared to real molecular knockouts. To draw inferences from this procedure, we apply a new implementation of a random-walk algorithm for directed graphs, which ultimately generates statistical quantities derived from the dynamical behaviour of the simulated KOs. It was observed that a specific KO caused the greatest impact on the network's standard flux. In a brief analysis, the results obtained correspond to biological data. Our model addresses both topological properties and dynamical relations. The construction of a qualitative dynamic model for oral tolerance can reflect empirical observations, through the standard flux results and the relative error based on individual knockouts.
0901.2521 | Luis Diambra | P.S. Gutierrez, D. Monteoliva and L. Diambra | The role of cooperative binding on noise expression | 11 pages, 3 figures | PHYSICAL REVIEW E 80, 011914, 2009 | null | null | q-bio.SC q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The origin of stochastic fluctuations in gene expression has received
considerable attention recently. Fluctuations in gene expression are
particularly pronounced in cellular systems because of the small copy number of
species undergoing transitions between discrete chemical states and the small
size of biological compartments. In this paper, we propose a stochastic model
for gene expression regulation including several binding sites, considering
elementary reactions only. The model is used to investigate the role of
cooperativity on the intrinsic fluctuations of gene expression, by means of
master equation formalism. We found that the Hill coefficient and the level of
noise increase as the interaction energy between activators increases.
Additionally, we show that the model allows one to distinguish between two
cooperative binding mechanisms.
| [
{
"created": "Fri, 16 Jan 2009 16:31:01 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Aug 2009 18:12:48 GMT",
"version": "v2"
}
] | 2009-08-02 | [
[
"Gutierrez",
"P. S.",
""
],
[
"Monteoliva",
"D.",
""
],
[
"Diambra",
"L.",
""
]
] | The origin of stochastic fluctuations in gene expression has received considerable attention recently. Fluctuations in gene expression are particularly pronounced in cellular systems because of the small copy number of species undergoing transitions between discrete chemical states and the small size of biological compartments. In this paper, we propose a stochastic model for gene expression regulation including several binding sites, considering elementary reactions only. The model is used to investigate the role of cooperativity on the intrinsic fluctuations of gene expression, by means of master equation formalism. We found that the Hill coefficient and the level of noise increase as the interaction energy between activators increases. Additionally, we show that the model allows one to distinguish between two cooperative binding mechanisms.
2205.03905 | Ruiyang Zhou | Jiawei Wang, Ruiyang Zhou and Fengying Wei | Single-species population models with age structure and psychological
effect in a polluted environment | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | This paper considers a single-population model with age structure and
psychological effects in a polluted environment. We divide the single
population into two stages, larvae and adults. The model uses logistic input,
and larvae are converted into adults at a constant ratio. The psychological
effect is considered only for adults: it makes the contact between adults and
environmental toxins take a functional form, while the contact between larvae
and environmental toxins is linear.
  For the deterministic model, embodied as a nonlinear time-varying system, we
discuss the asymptotic stability of the system by Lyapunov first-approximation
theory and give a sufficient condition for stability to hold.
  Considering that the contact rate between organisms and environmental toxins
in nature is not always constant, we perturb the contact rate with white noise,
thus turning it into a stochastic process and establishing a corresponding
stochastic single-population model. Using the It\^o formula and the Lyapunov
function method, we first prove the existence of a globally unique positive
solution of the stochastic model under arbitrary initial conditions, and then
give sufficient conditions, in the expected sense, for weak average long-term
survival and random long-term survival of the single population.
| [
{
"created": "Sun, 8 May 2022 16:12:14 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Aug 2022 06:19:51 GMT",
"version": "v2"
},
{
"created": "Tue, 20 Feb 2024 16:33:35 GMT",
"version": "v3"
}
] | 2024-02-21 | [
[
"Wang",
"Jiawei",
""
],
[
"Zhou",
"Ruiyang",
""
],
[
"Wei",
"Fengying",
""
]
] | This paper considers a single-population model with age structure and psychological effects in a polluted environment. We divide the single population into two stages, larvae and adults. The model uses logistic input, and larvae are converted into adults at a constant ratio. The psychological effect is considered only for adults: it makes the contact between adults and environmental toxins take a functional form, while the contact between larvae and environmental toxins is linear. For the deterministic model, embodied as a nonlinear time-varying system, we discuss the asymptotic stability of the system by Lyapunov first-approximation theory and give a sufficient condition for stability to hold. Considering that the contact rate between organisms and environmental toxins in nature is not always constant, we perturb the contact rate with white noise, thus turning it into a stochastic process and establishing a corresponding stochastic single-population model. Using the It\^o formula and the Lyapunov function method, we first prove the existence of a globally unique positive solution of the stochastic model under arbitrary initial conditions, and then give sufficient conditions, in the expected sense, for weak average long-term survival and random long-term survival of the single population.
2010.16284 | Oleg Kogan | Steffanie Stanley, Oleg Kogan | FKPP dynamics mediated by a parent field with a delay | null | Phys. Rev. E 104, 034415 (2021) | 10.1103/PhysRevE.104.034415 | null | q-bio.PE cond-mat.stat-mech math-ph math.MP nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine a modification of the Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP)
process in which the diffusing substance requires a parent density field for
reproduction. A biological example would be the density of diffusing spores
(propagules) and the density of a stationary fungus (parent). The parent
produces propagules at a certain rate, and the propagules turn into the parent
substance at another rate. We model this evolution by the FKPP process with
delay, which reflects a finite time typically required for a new parent to
mature before it begins to produce propagules. While the FKPP process with
other types of delays has been considered in the past as a pure mathematical
construct, in our work a delay in the FKPP model arises in a natural science
setting. The speed of the resulting density fronts is shown to decrease with
increasing delay time, and has a non-trivial dependence on the rate of
conversion of propagules into the parent substance. Remarkably, the fronts in
this model are always slower than Fisher waves of the classical FKPP model. The
largest speed is half of the classical value, and it is achieved at zero delay
and when the two rates are matched.
| [
{
"created": "Fri, 30 Oct 2020 14:14:51 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Nov 2020 22:25:35 GMT",
"version": "v2"
}
] | 2021-09-29 | [
[
"Stanley",
"Steffanie",
""
],
[
"Kogan",
"Oleg",
""
]
] | We examine a modification of the Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP) process in which the diffusing substance requires a parent density field for reproduction. A biological example would be the density of diffusing spores (propagules) and the density of a stationary fungus (parent). The parent produces propagules at a certain rate, and the propagules turn into the parent substance at another rate. We model this evolution by the FKPP process with delay, which reflects a finite time typically required for a new parent to mature before it begins to produce propagules. While the FKPP process with other types of delays has been considered in the past as a pure mathematical construct, in our work a delay in the FKPP model arises in a natural science setting. The speed of the resulting density fronts is shown to decrease with increasing delay time, and has a non-trivial dependence on the rate of conversion of propagules into the parent substance. Remarkably, the fronts in this model are always slower than Fisher waves of the classical FKPP model. The largest speed is half of the classical value, and it is achieved at zero delay and when the two rates are matched.
2304.03239 | Hamid Usefi | Ali Amelia, Lourdes Pena-Castillo, Hamid Usefi | Assessing the Reproducibility of Machine-learning-based Biomarker
Discovery in Parkinson's Disease | 20 pages, 4 figures | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Genome-Wide Association Studies (GWAS) help identify genetic variations in
people with diseases such as Parkinson's disease (PD), which are less common in
those without the disease. Thus, GWAS data can be used to identify genetic
variations associated with the disease. Feature selection and machine learning
approaches can be used to analyze GWAS data and identify potential disease
biomarkers. However, GWAS studies have technical variations that affect the
reproducibility of identified biomarkers, such as differences in genotyping
platforms and selection criteria for individuals to be genotyped. To address
this issue, we collected five GWAS datasets from the database of Genotypes and
Phenotypes (dbGaP) and explored several data integration strategies. We
evaluated the agreement among different strategies in terms of the Single
Nucleotide Polymorphisms (SNPs) that were identified as potential PD
biomarkers. Our results showed a low concordance of biomarkers discovered using
different datasets or integration strategies. However, we identified fifty SNPs
that were identified at least twice, which could potentially serve as novel PD
biomarkers. These SNPs are indirectly linked to PD in the literature but have
not been directly associated with PD before. These findings open up new
potential avenues of investigation.
| [
{
"created": "Thu, 6 Apr 2023 17:21:10 GMT",
"version": "v1"
}
] | 2023-04-07 | [
[
"Amelia",
"Ali",
""
],
[
"Pena-Castillo",
"Lourdes",
""
],
[
"Usefi",
"Hamid",
""
]
] | Genome-Wide Association Studies (GWAS) help identify genetic variations in people with diseases such as Parkinson's disease (PD), which are less common in those without the disease. Thus, GWAS data can be used to identify genetic variations associated with the disease. Feature selection and machine learning approaches can be used to analyze GWAS data and identify potential disease biomarkers. However, GWAS studies have technical variations that affect the reproducibility of identified biomarkers, such as differences in genotyping platforms and selection criteria for individuals to be genotyped. To address this issue, we collected five GWAS datasets from the database of Genotypes and Phenotypes (dbGaP) and explored several data integration strategies. We evaluated the agreement among different strategies in terms of the Single Nucleotide Polymorphisms (SNPs) that were identified as potential PD biomarkers. Our results showed a low concordance of biomarkers discovered using different datasets or integration strategies. However, we identified fifty SNPs that were identified at least twice, which could potentially serve as novel PD biomarkers. These SNPs are indirectly linked to PD in the literature but have not been directly associated with PD before. These findings open up new potential avenues of investigation. |
2405.17032 | Aaron King | Aaron A. King, Qianying Lin, Edward L. Ionides | Exact phylodynamic likelihood via structured Markov genealogy processes | null | null | null | null | q-bio.QM math.PR q-bio.PE stat.AP | http://creativecommons.org/licenses/by/4.0/ | We consider genealogies arising from a Markov population process in which
individuals are categorized into a discrete collection of compartments, with
the requirement that individuals within the same compartment are statistically
exchangeable. When equipped with a sampling process, each such population
process induces a time-evolving tree-valued process defined as the genealogy of
all sampled individuals. We provide a construction of this genealogy process
and derive exact expressions for the likelihood of an observed genealogy in
terms of filter equations. These filter equations can be numerically solved
using standard Monte Carlo integration methods. Thus, we obtain statistically
efficient likelihood-based inference for essentially arbitrary compartment
models based on an observed genealogy of individuals sampled from the
population.
| [
{
"created": "Mon, 27 May 2024 10:39:18 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"King",
"Aaron A.",
""
],
[
"Lin",
"Qianying",
""
],
[
"Ionides",
"Edward L.",
""
]
] | We consider genealogies arising from a Markov population process in which individuals are categorized into a discrete collection of compartments, with the requirement that individuals within the same compartment are statistically exchangeable. When equipped with a sampling process, each such population process induces a time-evolving tree-valued process defined as the genealogy of all sampled individuals. We provide a construction of this genealogy process and derive exact expressions for the likelihood of an observed genealogy in terms of filter equations. These filter equations can be numerically solved using standard Monte Carlo integration methods. Thus, we obtain statistically efficient likelihood-based inference for essentially arbitrary compartment models based on an observed genealogy of individuals sampled from the population. |
1908.11865 | Katherine Newhall | Pamela B Pyzza, Katherine A Newhall, Douglas Zhou, Gregor Kovacic,
David Cai | Network Mechanism for Insect Olfaction | 43 pages with 11 figures | null | null | null | q-bio.NC math.DS nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Early olfactory pathway responses to the presentation of an odor exhibit
remarkably similar dynamical behavior across phyla from insects to mammals, and
frequently involve transitions among quiescence, collective network
oscillations, and asynchronous firing. We hypothesize that the time scales of
fast excitation and fast and slow inhibition present in these networks may be
the essential element underlying this similar behavior, and design an
idealized, conductance-based integrate-and-fire (I&F) model to verify this
hypothesis via numerical simulations. To better understand the mathematical
structure underlying the common dynamical behavior across species, we derive a
firing-rate (FR) model and use it to extract a slow passage through a
saddle-node-on-an-invariant-circle (SNIC) bifurcation structure. We expect this
bifurcation structure to provide new insights into the understanding of the
dynamical behavior of neuronal assemblies and that a similar structure can be
found in other sensory systems.
| [
{
"created": "Fri, 30 Aug 2019 17:50:55 GMT",
"version": "v1"
},
{
"created": "Sun, 27 Sep 2020 14:51:03 GMT",
"version": "v2"
}
] | 2020-09-29 | [
[
"Pyzza",
"Pamela B",
""
],
[
"Newhall",
"Katherine A",
""
],
[
"Zhou",
"Douglas",
""
],
[
"Kovacic",
"Gregor",
""
],
[
"Cai",
"David",
""
]
] | Early olfactory pathway responses to the presentation of an odor exhibit remarkably similar dynamical behavior across phyla from insects to mammals, and frequently involve transitions among quiescence, collective network oscillations, and asynchronous firing. We hypothesize that the time scales of fast excitation and fast and slow inhibition present in these networks may be the essential element underlying this similar behavior, and design an idealized, conductance-based integrate-and-fire (I&F) model to verify this hypothesis via numerical simulations. To better understand the mathematical structure underlying the common dynamical behavior across species, we derive a firing-rate (FR) model and use it to extract a slow passage through a saddle-node-on-an-invariant-circle (SNIC) bifurcation structure. We expect this bifurcation structure to provide new insights into the understanding of the dynamical behavior of neuronal assemblies and that a similar structure can be found in other sensory systems. |
1807.02390 | Nida Obatake | Molly Hoch and Samuel Muthiah and Nida Obatake | On the identification of $k$-inductively pierced codes using toric
ideals | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural codes are binary codes in $\{0,1\}^n$; here we focus on the ones which
represent the firing patterns of a type of neurons called place cells. There is
much interest in determining which neural codes can be realized by a collection
of convex sets. However, drawing representations of these convex sets,
particularly as the number of neurons in a code increases, can be very
difficult. Nevertheless, for a class of codes that are said to be
$k$-inductively pierced for $k=0,1,2$ there is an algorithm for drawing Euler
diagrams. Here we use the toric ideal of a code to show sufficient conditions
for a code to be 1- or 2-inductively pierced, so that we may use the existing
algorithm to draw realizations of such codes.
| [
{
"created": "Fri, 6 Jul 2018 13:06:11 GMT",
"version": "v1"
}
] | 2018-07-09 | [
[
"Hoch",
"Molly",
""
],
[
"Muthiah",
"Samuel",
""
],
[
"Obatake",
"Nida",
""
]
] | Neural codes are binary codes in $\{0,1\}^n$; here we focus on the ones which represent the firing patterns of a type of neurons called place cells. There is much interest in determining which neural codes can be realized by a collection of convex sets. However, drawing representations of these convex sets, particularly as the number of neurons in a code increases, can be very difficult. Nevertheless, for a class of codes that are said to be $k$-inductively pierced for $k=0,1,2$ there is an algorithm for drawing Euler diagrams. Here we use the toric ideal of a code to show sufficient conditions for a code to be 1- or 2-inductively pierced, so that we may use the existing algorithm to draw realizations of such codes. |
1911.04818 | Luciana Ines Oklander Dr. | L. I. Oklander (1), M. Caputo (2), A. Solari (3), D. Corach (2) | Genetic assignment of illegally trafficked neotropical primates and
implications for reintroduction programs | 16 pages, 2 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The black and gold howler monkey (Alouatta caraya) is a neotropical primate
that faces the highest capture pressure for illegal trade in Argentina. We
evaluate the applicability of genetic assignment tests based on microsatellite
genotypic data to accurately assign individuals to their site of origin. The
search was conducted on a genetic database to determine the nearest sampled
population or to associate them to three clusters described here for the
Argentinean populations of A. caraya. We correctly assign 73% of the
individuals in the database to the nearest population of origin, and 93.3% to their
cluster of origin. With this database, we were able to determine the probable
origin of 17 confiscated individuals, 12 of which were reintroduced in the
province of Misiones and 5 confiscated individuals reintroduced in the province
of Santa Fe. Moreover, we also determined the probable origin of 3 individuals
found dead in cities in northern Argentina. This approach highlights the
relevance of generating genotype indexing databases of species to assist with
in-situ and ex-situ conservation and management programs. Our results
underscore the importance of knowing the origin of individuals for
reintroduction and/or species recovery programs and to pinpoint the hotspots of
illegal capture of various species.
| [
{
"created": "Tue, 12 Nov 2019 12:29:18 GMT",
"version": "v1"
}
] | 2019-11-13 | [
[
"Oklander",
"L. I.",
""
],
[
"Caputo",
"M.",
""
],
[
"Solari",
"A.",
""
],
[
"Corach",
"D.",
""
]
] | The black and gold howler monkey (Alouatta caraya) is a neotropical primate that faces the highest capture pressure for illegal trade in Argentina. We evaluate the applicability of genetic assignment tests based on microsatellite genotypic data to accurately assign individuals to their site of origin. The search was conducted on a genetic database to determine the nearest sampled population or to associate them to three clusters described here for the Argentinean populations of A. caraya. We correctly assign 73% of the individuals in the database to nearest population of origin, and 93.3% to their cluster of origin. With this database, we were able to determine the probable origin of 17 confiscated individuals, 12 of which were reintroduced in the province of Misiones and 5 confiscated individuals reintroduced in the province of Santa Fe. Moreover, we also determined the probable origin of 3 individuals found dead in cities in northern Argentina. This approach highlights the relevance of generating genotype indexing databases of species to assist with in-situ and ex-situ conservation and management programs. Our results underscore the importance of knowing the origin of individuals for reintroduction and/or species recovery programs and to pinpoint the hotspots of illegal capture of various species. |
1507.06230 | Roeland M.H. Merks | Sonja E. M. Boas and Roeland M.H. Merks | Tip cell overtaking occurs as a side effect of sprouting in
computational models of angiogenesis | 20 pages, 6 figures, 4 supplementary figures | BMC Systems Biology 2015, 9:86 | 10.1186/s12918-015-0230-7 | null | q-bio.CB q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During angiogenesis, endothelial cells compete for the tip position: a
phenomenon named tip cell overtaking. It is still unclear to what extent tip
cell overtaking is a side effect of sprouting or to what extent it serves a
biological function. To address this question, we studied tip cell overtaking
in two existing cellular Potts models of angiogenic sprouting. In these models
angiogenic sprouting-like behavior emerges from a small set of plausible cell
behaviors and the endothelial cells spontaneously migrate forwards and
backwards within sprouts, suggesting that tip cell overtaking might occur as a
side effect of sprouting. In accordance with experimental observations, in our
simulations the cells' tendency to occupy the tip position can be regulated
when two cell lines with different levels of Vegfr2 expression are contributing
to sprouting (mosaic sprouting assay), where cell behavior is regulated by a
simple VEGF-Dll4-Notch signaling network. Our modeling results suggest that tip
cell overtaking occurs spontaneously due to the stochastic motion of cells
during sprouting. Thus, tip cell overtaking and sprouting dynamics may be
interdependent and should be studied and interpreted in combination.
VEGF-Dll4-Notch can regulate the ability of cells to occupy the tip cell
position, but only when cells in the simulation strongly differ in their levels
of Vegfr2. We propose that VEGF-Dll4-Notch signaling might not regulate which
cell ends up at the tip, but assures that the cell that randomly ends up at the
tip position acquires the tip cell phenotype.
| [
{
"created": "Wed, 22 Jul 2015 15:48:19 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Nov 2015 12:17:11 GMT",
"version": "v2"
}
] | 2015-11-24 | [
[
"Boas",
"Sonja E. M.",
""
],
[
"Merks",
"Roeland M. H.",
""
]
] | During angiogenesis, endothelial cells compete for the tip position: a phenomenon named tip cell overtaking. It is still unclear to what extent tip cell overtaking is a side effect of sprouting or to what extent it serves a biological function. To address this question, we studied tip cell overtaking in two existing cellular Potts models of angiogenic sprouting. In these models angiogenic sprouting-like behavior emerges from a small set of plausible cell behaviors and the endothelial cells spontaneously migrate forwards and backwards within sprouts, suggesting that tip cell overtaking might occur as a side effect of sprouting. In accordance with experimental observations, in our simulations the cells' tendency to occupy the tip position can be regulated when two cell lines with different levels of Vegfr2 expression are contributing to sprouting (mosaic sprouting assay), where cell behavior is regulated by a simple VEGF-Dll4-Notch signaling network. Our modeling results suggest that tip cell overtaking occurs spontaneously due to the stochastic motion of cells during sprouting. Thus, tip cell overtaking and sprouting dynamics may be interdependent and should be studied and interpreted in combination. VEGF-Dll4-Notch can regulate the ability of cells to occupy the tip cell position, but only when cells in the simulation strongly differ in their levels of Vegfr2. We propose that VEGF-Dll4-Notch signaling might not regulate which cell ends up at the tip, but assures that the cell that randomly ends up at the tip position acquires the tip cell phenotype. |
2312.06241 | Daniel Evans-Yamamoto | Shuo Jiang, Daniel Evans-Yamamoto, Dennis Bersenev, Sucheendra K.
Palaniappan, Ayako Yachie-Kinoshita | ProtoCode: Leveraging Large Language Models for Automated Generation of
Machine-Readable Protocols from Scientific Publications | 12 pages, 3 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Protocol standardization and sharing are crucial for reproducibility in life
sciences. In spite of numerous efforts for standardized protocol description,
adherence to these standards in the literature remains largely inconsistent.
Curation of protocols is especially challenging due to the labor-intensive
process, requiring expert domain knowledge of each experimental procedure.
Recent advancements in Large Language Models (LLMs) offer a promising solution
to interpret and curate knowledge from complex scientific literature. In this
work, we develop ProtoCode, a tool leveraging fine-tuned LLMs to curate
protocols that are interpretable by both human and machine interfaces. Our
proof-of-concept, focused on polymerase chain reaction (PCR) protocols,
retrieves information from PCR protocols with accuracy ranging from 69% to 100%
depending on the information content. In all the tested protocols, we
demonstrate that ProtoCode successfully converts literature-based protocols
into correct operational files for multiple thermal cycler systems. In
conclusion, ProtoCode can alleviate labor-intensive curation and
standardization of life science protocols to enhance research reproducibility
by providing a reliable, automated means to process and standardize protocols.
ProtoCode is freely available as a web server at
https://curation.taxila.io/ProtoCode/.
| [
{
"created": "Mon, 11 Dec 2023 09:28:47 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Jiang",
"Shuo",
""
],
[
"Evans-Yamamoto",
"Daniel",
""
],
[
"Bersenev",
"Dennis",
""
],
[
"Palaniappan",
"Sucheendra K.",
""
],
[
"Yachie-Kinoshita",
"Ayako",
""
]
] | Protocol standardization and sharing are crucial for reproducibility in life sciences. In spite of numerous efforts for standardized protocol description, adherence to these standards in the literature remains largely inconsistent. Curation of protocols is especially challenging due to the labor-intensive process, requiring expert domain knowledge of each experimental procedure. Recent advancements in Large Language Models (LLMs) offer a promising solution to interpret and curate knowledge from complex scientific literature. In this work, we develop ProtoCode, a tool leveraging fine-tuned LLMs to curate protocols that are interpretable by both human and machine interfaces. Our proof-of-concept, focused on polymerase chain reaction (PCR) protocols, retrieves information from PCR protocols with accuracy ranging from 69% to 100% depending on the information content. In all the tested protocols, we demonstrate that ProtoCode successfully converts literature-based protocols into correct operational files for multiple thermal cycler systems. In conclusion, ProtoCode can alleviate labor-intensive curation and standardization of life science protocols to enhance research reproducibility by providing a reliable, automated means to process and standardize protocols. ProtoCode is freely available as a web server at https://curation.taxila.io/ProtoCode/. |
1702.03762 | Etienne Tanr\'e | Alexandre Richard, Patricio Orio and Etienne Tanr\'e | An integrate-and-fire model to generate spike trains with long-range
dependence | null | null | 10.1007/s10827-018-0680-1 | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-range dependence (LRD) has been observed in a variety of phenomena in
nature, and for several years also in the spiking activity of neurons. Often,
this is interpreted as originating from a non-Markovian system. Here we show
that a purely Markovian integrate-and-fire (IF) model, with a noisy slow
adaptation term, can generate interspike intervals (ISIs) that appear as having
LRD. However, a proper analysis shows that this is not the case asymptotically.
For comparison, we also consider a new model of an individual IF neuron with
fractional (non-Markovian) noise. The correlations of its spike trains are
studied and proven to have LRD, unlike classical IF models. On the other hand,
to correctly measure long-range dependence, it is usually necessary to know if
the data are stationary. Thus, a methodology to evaluate stationarity of the
ISIs is presented and applied to the various IF models. We explain that
Markovian IF models may seem to have LRD because of non-stationarities.
| [
{
"created": "Mon, 13 Feb 2017 13:19:07 GMT",
"version": "v1"
},
{
"created": "Thu, 4 May 2017 16:30:12 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Mar 2018 04:50:58 GMT",
"version": "v3"
}
] | 2018-03-29 | [
[
"Richard",
"Alexandre",
""
],
[
"Orio",
"Patricio",
""
],
[
"Tanré",
"Etienne",
""
]
] | Long-range dependence (LRD) has been observed in a variety of phenomena in nature, and for several years also in the spiking activity of neurons. Often, this is interpreted as originating from a non-Markovian system. Here we show that a purely Markovian integrate-and-fire (IF) model, with a noisy slow adaptation term, can generate interspike intervals (ISIs) that appear as having LRD. However a proper analysis shows that this is not the case asymptotically. For comparison, we also consider a new model of individual IF neuron with fractional (non-Markovian) noise. The correlations of its spike trains are studied and proven to have LRD, unlike classical IF models. On the other hand, to correctly measure long-range dependence, it is usually necessary to know if the data are stationary. Thus, a methodology to evaluate stationarity of the ISIs is presented and applied to the various IF models. We explain that Markovian IF models may seem to have LRD because of non-stationarities. |
1810.02046 | David Sivak | Steven Blaber and David A. Sivak | Optimal control of protein copy number | 10 pages, 4 figures | Phys. Rev. E 101, 022118 (2020) | 10.1103/PhysRevE.101.022118 | null | q-bio.SC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell-cell communication is often achieved by secreted signaling molecules
that bind membrane-bound receptors. A common class of such receptors are
G-protein coupled receptors, where extracellular binding induces changes on the
membrane affinity near the receptor for certain cytosolic proteins, effectively
altering their chemical potential. We analyze the minimum-dissipation schedules
for dynamically changing chemical potential to induce steady-state changes in
protein copy-number distributions, and illustrate with analytic solutions for
linear chemical reaction networks. Protocols that change chemical potential on
biologically relevant timescales are experimentally accessible using
optogenetic manipulations, and our framework provides non-trivial predictions
about functional dynamical cell-cell interactions.
| [
{
"created": "Thu, 4 Oct 2018 03:55:55 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Feb 2020 00:05:19 GMT",
"version": "v2"
}
] | 2020-03-04 | [
[
"Blaber",
"Steven",
""
],
[
"Sivak",
"David A.",
""
]
] | Cell-cell communication is often achieved by secreted signaling molecules that bind membrane-bound receptors. A common class of such receptors are G-protein coupled receptors, where extracellular binding induces changes on the membrane affinity near the receptor for certain cytosolic proteins, effectively altering their chemical potential. We analyze the minimum-dissipation schedules for dynamically changing chemical potential to induce steady-state changes in protein copy-number distributions, and illustrate with analytic solutions for linear chemical reaction networks. Protocols that change chemical potential on biologically relevant timescales are experimentally accessible using optogenetic manipulations, and our framework provides non-trivial predictions about functional dynamical cell-cell interactions. |
1307.7678 | Jeffrey Ross-Ibarra | Amanda J. Waters, Paul Bilinski, Steve R. Eichten, Matthew W. Vaughn,
Jeffrey Ross-Ibarra, Mary Gehring, Nathan M. Springer | Comprehensive analysis of imprinted genes in maize reveals limited
conservation with other species and allelic variation for imprinting | null | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In plants, a subset of genes exhibit imprinting in endosperm tissue such that
expression is primarily from the maternal or paternal allele. Imprinting may
arise as a consequence of mechanisms for silencing of transposons during
reproduction, and in some cases imprinted expression of particular genes may
provide a selective advantage such that it is conserved across species.
Separate mechanisms for the origin of imprinted expression patterns and
maintenance of these patterns may result in substantial variation in the
targets of imprinting in different species. Here we present deep sequencing of
RNAs isolated from reciprocal crosses of four diverse maize genotypes,
providing a comprehensive analysis of imprinting in maize that allows
evaluation of imprinting at more than 95% of endosperm-expressed genes. We find
that over 500 genes exhibit statistically significant parent-of-origin effects
in maize endosperm tissue, but focused our analyses on a subset of these genes
that had >90% expression from the maternal allele (69 genes) or from the
paternal allele (108 genes) in at least one reciprocal cross. Over 10% of
imprinted genes show evidence of allelic variation for imprinting. A comparison
of imprinting in maize and rice reveals that only 13% of genes with syntenic
orthologs in both species exhibit conserved imprinting. Genes that exhibit
conserved imprinting in maize relative to rice have elevated dN/dS ratios
compared to other imprinted genes, suggesting a history of more rapid
evolution. Together, these data suggest that imprinting only has functional
relevance at a subset of loci that currently exhibit imprinting in maize.
| [
{
"created": "Mon, 29 Jul 2013 18:40:25 GMT",
"version": "v1"
}
] | 2013-07-30 | [
[
"Waters",
"Amanda J.",
""
],
[
"Bilinski",
"Paul",
""
],
[
"Eichten",
"Steve R.",
""
],
[
"Vaughn",
"Matthew W.",
""
],
[
"Ross-Ibarra",
"Jeffrey",
""
],
[
"Gehring",
"Mary",
""
],
[
"Springer",
"Nathan M.",
""
]
] | In plants, a subset of genes exhibit imprinting in endosperm tissue such that expression is primarily from the maternal or paternal allele. Imprinting may arise as a consequence of mechanisms for silencing of transposons during reproduction, and in some cases imprinted expression of particular genes may provide a selective advantage such that it is conserved across species. Separate mechanisms for the origin of imprinted expression patterns and maintenance of these patterns may result in substantial variation in the targets of imprinting in different species. Here we present deep sequencing of RNAs isolated from reciprocal crosses of four diverse maize genotypes, providing a comprehensive analysis of imprinting in maize that allows evaluation of imprinting at more than 95% of endosperm-expressed genes. We find that over 500 genes exhibit statistically significant parent-of-origin effects in maize endosperm tissue, but focused our analyses on a subset of these genes that had >90% expression from the maternal allele (69 genes) or from the paternal allele (108 genes) in at least one reciprocal cross. Over 10% of imprinted genes show evidence of allelic variation for imprinting. A comparison of imprinting in maize and rice reveals that only 13% of genes with syntenic orthologs in both species exhibit conserved imprinting. Genes that exhibit conserved imprinting in maize relative to rice have elevated dN/dS ratios compared to other imprinted genes, suggesting a history of more rapid evolution. Together, these data suggest that imprinting only has functional relevance at a subset of loci that currently exhibit imprinting in maize. |
1611.02538 | Jeethu Devasia | Jeethu V. Devasia and Priya Chandran | Inferring disease causing genes and their pathways: A mathematical
perspective | This article had been submitted to the journals Bioinformatics and
Computer Methods and Programs in Biomedicine, but it was not accepted for
publication | null | null | null | q-bio.MN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A system level view of cellular processes for human and several organisms can
be captured by analyzing molecular interaction networks. A molecular
interaction network formed of differentially expressed genes and their
interactions helps to understand key players behind disease development. So, if
the functions of these genes are blocked by altering their interactions, it
would have a great impact in controlling the disease. Due to this promising
consequence, the problem of inferring disease causing genes and their pathways
has attained a crucial position in computational biology research. However,
considering the huge size of interaction networks, executing computations can
be costly. A review of the literature shows that the methods proposed for
finding the set of disease-causing genes can be assessed in terms of their
accuracy relative to the set a perfect algorithm would find. Along with
accuracy, the time complexity
of the method is also important, as high time complexities would limit the
number of pathways that could be found within a pragmatic time interval.
| [
{
"created": "Sun, 30 Oct 2016 10:16:22 GMT",
"version": "v1"
}
] | 2016-11-09 | [
[
"Devasia",
"Jeethu V.",
""
],
[
"Chandran",
"Priya",
""
]
] | A system level view of cellular processes for human and several organisms can be captured by analyzing molecular interaction networks. A molecular interaction network formed of differentially expressed genes and their interactions helps to understand key players behind disease development. So, if the functions of these genes are blocked by altering their interactions, it would have a great impact in controlling the disease. Due to this promising consequence, the problem of inferring disease causing genes and their pathways has attained a crucial position in computational biology research. However, considering the huge size of interaction networks, executing computations can be costly. A review of the literature shows that the methods proposed for finding the set of disease-causing genes can be assessed in terms of their accuracy relative to the set a perfect algorithm would find. Along with accuracy, the time complexity of the method is also important, as high time complexities would limit the number of pathways that could be found within a pragmatic time interval. |
2102.00562 | Nasim Farajpour | Nasim Farajpour, Ram Deivanayagam, Abhijit Phakatkar, Surya Narayanan,
Reza Shahbazian-Yassar, Tolou Shokuhfar | A Novel Antimicrobial Electrochemical Glucose Biosensor Based on
Silver-Prussian Blue Modified TiO$_2$ Nanotube Arrays | null | Med Devices Sens. 2020;3:e10061 | 10.1002/mds3.10061 | null | q-bio.BM physics.app-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Glucose biosensors play an important role in the diagnosis and continued
monitoring of the disease, diabetes mellitus. This report proposes the
development of a novel enzymatic electrochemical glucose biosensor based on
TiO$_2$ nanotubes modified by AgO and Prussian blue (PB) nanoparticles (NPs),
which has an additional advantage of possessing antimicrobial properties for
implantable biosensor applications. In this study, we developed two high
performance glucose biosensors based on the immobilization of glucose oxidase
(GOx) onto Prussian blue (PB) modified TiO$_2$ nanotube arrays functionalized
by Au and AgO NPs. AgO-deposited TiO$_2$ nanotubes were synthesized through an
electrochemical anodization process followed by Ag electroplating process in
the same electrolyte. Deposition of PB particles was performed from an acidic
ferricyanide solution. The surface morphology and elemental composition of the
two fabricated biosensors were investigated by scanning electron microscopy
(SEM) and energy-dispersive X-ray spectroscopy (EDS) which indicate the
successful deposition of Au and AgO nanoparticles as well as PB nanocrystals.
Cyclic voltammetry and chronoamperometry were used to investigate the
performance of the modified electrochemical biosensors. The results show that
the developed electrochemical biosensors display excellent properties in terms
of electron transmission, low detection limit as well as high stability for the
determination of glucose. Under the optimized conditions, the amperometric
response shows a linear dependence on the glucose concentration to a detection
limit down to 4.91 $\mu$M with sensitivity of 185.1 mA M$^{-1}$ cm$^{-2}$ in Au
modified biosensor and detection limit of 58.7 $\mu$M with 29.1 mA M$^{-1}$
cm$^{-2}$ sensitivity in AgO modified biosensor.
| [
{
"created": "Sun, 31 Jan 2021 23:39:40 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Farajpour",
"Nasim",
""
],
[
"Deivanayagam",
"Ram",
""
],
[
"Phakatkar",
"Abhijit",
""
],
[
"Narayanan",
"Surya",
""
],
[
"Shahbazian-Yassar",
"Reza",
""
],
[
"Shokuhfar",
"Tolou",
""
]
] | Glucose biosensors play an important role in the diagnosis and continued monitoring of the disease, diabetes mellitus. This report proposes the development of a novel enzymatic electrochemical glucose biosensor based on TiO$_2$ nanotubes modified by AgO and Prussian blue (PB) nanoparticles (NPs), which has an additional advantage of possessing antimicrobial properties for implantable biosensor applications. In this study, we developed two high performance glucose biosensors based on the immobilization of glucose oxidase (GOx) onto Prussian blue (PB) modified TiO$_2$ nanotube arrays functionalized by Au and AgO NPs. AgO-deposited TiO$_2$ nanotubes were synthesized through an electrochemical anodization process followed by Ag electroplating process in the same electrolyte. Deposition of PB particles was performed from an acidic ferricyanide solution. The surface morphology and elemental composition of the two fabricated biosensors were investigated by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) which indicate the successful deposition of Au and AgO nanoparticles as well as PB nanocrystals. Cyclic voltammetry and chronoamperometry were used to investigate the performance of the modified electrochemical biosensors. The results show that the developed electrochemical biosensors display excellent properties in terms of electron transmission, low detection limit as well as high stability for the determination of glucose. Under the optimized conditions, the amperometric response shows a linear dependence on the glucose concentration to a detection limit down to 4.91 $\mu$M with sensitivity of 185.1 mA M$^{-1}$ cm$^{-2}$ in Au modified biosensor and detection limit of 58.7 $\mu$M with 29.1 mA M$^{-1}$ cm$^{-2}$ sensitivity in AgO modified biosensor. |
0707.3811 | John Herrick | John Herrick and Aaron Bensimon | Global regulation of genome duplication in eukaryotes: an overview from
the epifluorescence microscope | 57 pages 5 figures fourth version references corrected | null | null | null | q-bio.GN | null | In eukaryotes, DNA replication is initiated along each chromosome at multiple
sites called replication origins. Locally, each replication origin is
"licensed", or specified, at the end of the M and the beginning of G1 phases of
the cell cycle. During S phase when DNA synthesis takes place, origins are
activated in stages corresponding to early and late replicating domains. The
staged and progressive activation of replication origins reflects the need to
maintain a strict balance between the number of active replication forks and
the rate at which DNA synthesis proceeds. This suggests that origin densities
(frequency of initiation) and replication fork movement (rates of elongation)
must be co-regulated in order to guarantee the efficient and complete
duplication of each subchromosomal domain. Emerging evidence supports this
proposal and suggests that the ATM/ATR intra-S phase checkpoint plays an
important role in the co-regulation of initiation frequencies and rates of
elongation. In the following, we review recent results concerning the
mechanisms governing the global regulation of DNA replication and discuss the
roles these mechanisms play in maintaining genome stability during both a
normal and perturbed S phase.
| [
{
"created": "Wed, 25 Jul 2007 19:59:44 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Oct 2007 21:57:14 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Jan 2008 17:18:34 GMT",
"version": "v3"
},
{
"created": "Mon, 21 Jan 2008 10:03:38 GMT",
"version": "v4"
}
] | 2008-01-21 | [
[
"Herrick",
"John",
""
],
[
"Bensimon",
"Aaron",
""
]
] | In eukaryotes, DNA replication is initiated along each chromosome at multiple sites called replication origins. Locally, each replication origin is "licensed", or specified, at the end of the M and the beginning of G1 phases of the cell cycle. During S phase when DNA synthesis takes place, origins are activated in stages corresponding to early and late replicating domains. The staged and progressive activation of replication origins reflects the need to maintain a strict balance between the number of active replication forks and the rate at which DNA synthesis proceeds. This suggests that origin densities (frequency of initiation) and replication fork movement (rates of elongation) must be co-regulated in order to guarantee the efficient and complete duplication of each subchromosomal domain. Emerging evidence supports this proposal and suggests that the ATM/ATR intra-S phase checkpoint plays an important role in the co-regulation of initiation frequencies and rates of elongation. In the following, we review recent results concerning the mechanisms governing the global regulation of DNA replication and discuss the roles these mechanisms play in maintaining genome stability during both a normal and perturbed S phase. |
2104.00786 | Erhard Scholz | Matthias Kreck, Erhard Scholz | Back to the roots: A discrete Kermack-McKendrick model adapted to
Covid-19 | Changes in v3: p. 3+6. Explication of a prehistory added p. 7+8.
Discussion of a similar discrete model added p. 9-11 discussion how to choose
the medical function gamma p. 14 explanation added p. 15-18 Section 6
completely rewritten p. 21+22 estimation of increase of infectivity of the
delta mutant based on statistical data p. 29 Point 5 of the discussion
adapted to the change of sec. 6 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A widely used tool for analysing the Covid-19 pandemic is the standard SIR
model. It seems often to be used as a black box, not taking into account that
this model was derived as a special case of the seminal Kermack-McKendrick
theory from 1927. This is our starting point. We explain the setup of the
Kermack-McKendrick theory (passing to a discrete approach) and use medical
information for specializing to a model which we call {\em adapted
K-McK-model}. This includes effects of vaccination, mass testing and mutants.
We demonstrate the use of the model by applying it to the development in
Germany. As a striking application we demonstrate that a comparatively mild
intervention reducing the time until quarantine by one day leads to a drastic
improvement. A similar effect can be obtained by certain mass testings as we
will demonstrate. We discuss possibilities to apply the model both for
predictions and as an analysis tool. We compare the adapted K-McK-model to the
standard SIR model and observe considerable differences if the contact rates
are not constant. Finally we compare the model reproduction rate with the
empirical reproduction rate determined by the Robert Koch-Institut.
| [
{
"created": "Thu, 1 Apr 2021 22:14:26 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Apr 2021 20:29:08 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Nov 2021 12:44:26 GMT",
"version": "v3"
}
] | 2021-11-16 | [
[
"Kreck",
"Matthias",
""
],
[
"Scholz",
"Erhard",
""
]
] ] | A widely used tool for analysing the Covid-19 pandemic is the standard SIR model. It seems often to be used as a black box, not taking into account that this model was derived as a special case of the seminal Kermack-McKendrick theory from 1927. This is our starting point. We explain the setup of the Kermack-McKendrick theory (passing to a discrete approach) and use medical information for specializing to a model which we call {\em adapted K-McK-model}. This includes effects of vaccination, mass testing and mutants. We demonstrate the use of the model by applying it to the development in Germany. As a striking application we demonstrate that a comparatively mild intervention reducing the time until quarantine by one day leads to a drastic improvement. A similar effect can be obtained by certain mass testings as we will demonstrate. We discuss possibilities to apply the model both for predictions and as an analysis tool. We compare the adapted K-McK-model to the standard SIR model and observe considerable differences if the contact rates are not constant. Finally we compare the model reproduction rate with the empirical reproduction rate determined by the Robert Koch-Institut. |
2004.09009 | Emmanuel De-Graft Johnson Owusu-Ansah PhD | Emmanuel de-Graft Johnson Owusu-Ansah Atinuke O. Adebanji Eric
Nimako-Aidoo | Data Driven Modeling of Projected Mitigation and Suppressing Strategy
Interventions for SARS-COV 2 in Ghana | 21 Pages, 6 Figures 5 Tables This study assess the mitigation and
suppression measures undertaken by the government of Ghana for managing
COVID-19 | null | null | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the midst of a pandemic of respiratory illness, the call for
non-pharmaceutical interventions becomes the highest priority for infectious
disease and public health experts, while the race towards a vaccine or medical
intervention is ongoing. Individuals may modify their behavior and take
preventative steps to reduce infection risk in the bid to adhere to the call by
government officials and experts. As a result, the relationship between the
preliminary and the final transmission rates becomes feeble. This study
evaluates the behavioral changes (mitigation and suppression measures)
proposed by public health experts for COVID-19, which have altered human
behavior and day-to-day lives. The dynamics underlying the mitigation and
suppression measures reduce contacts among citizens and significantly
interfere with their physical and social behavior. The results show all the
measures have a significant impact on the decline of the transmission rate.
However, the mitigation measures might prolong the elimination of
transmission, which might lead to a severe economic meltdown; yet, a combination
of the measures shows a possibility of rooting out transmission within 30 days
if adhered to in an extreme manner. The results show the peak period of infection
for Ghana ranges from the 64th to the 74th day of the infection time period.
| [
{
"created": "Mon, 20 Apr 2020 01:14:55 GMT",
"version": "v1"
}
] | 2020-04-21 | [
[
"Nimako-Aidoo",
"Emmanuel de-Graft Johnson Owusu-Ansah Atinuke O. Adebanji Eric",
""
]
] ] | In the midst of a pandemic of respiratory illness, the call for non-pharmaceutical interventions becomes the highest priority for infectious disease and public health experts, while the race towards a vaccine or medical intervention is ongoing. Individuals may modify their behavior and take preventative steps to reduce infection risk in the bid to adhere to the call by government officials and experts. As a result, the relationship between the preliminary and the final transmission rates becomes feeble. This study evaluates the behavioral changes (mitigation and suppression measures) proposed by public health experts for COVID-19, which have altered human behavior and day-to-day lives. The dynamics underlying the mitigation and suppression measures reduce contacts among citizens and significantly interfere with their physical and social behavior. The results show all the measures have a significant impact on the decline of the transmission rate. However, the mitigation measures might prolong the elimination of transmission, which might lead to a severe economic meltdown; yet, a combination of the measures shows a possibility of rooting out transmission within 30 days if adhered to in an extreme manner. The results show the peak period of infection for Ghana ranges from the 64th to the 74th day of the infection time period. |
1701.05809 | Shoudong Zhao | Shoudong Zhao, Yuan Jiang, Manyu Dong, Hui Xu, Neil Pederson | Early monsoon drought and mid-summer vapor pressure deficit induce
growth cessation of lower margin Picea crassifolia | 33 pages, 8 figures, 2 tables | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extreme climatic events have been shown to be strong drivers of tree growth,
forest dynamics, and range contraction. Here we study the climatic drivers of
Picea crassifolia Kom., an endemic to northwest China where climate has
significantly warmed. Picea crassifolia was sampled from its lower
distributional margin to its upper distributional margin on the Helan Mountains
to test the hypothesis that 1) growth at the upper limit is limited by cool
temperatures and 2) is limited by drought at its lower limit. We found that
trees at the lower distributional margin have experienced a higher rate of
stem-growth cessation events since 2001 compared to trees at other elevations.
While all populations have a similar climatic sensitivity, stem-growth
cessation events in trees at lower distributional margin appear to be driven by
low precipitation in June as the monsoon begins to deliver moisture to the
region. Evidence indicates that mid-summer (July) vapor pressure deficit (VPD)
exacerbates the frequency of these events. These data and our analysis make it
evident that an increase in severity and frequency of drought early in the
monsoon season could increase the frequency and severity of stem-growth
cessation in Picea crassifolia trees at lower elevations. Increases in VPD and
warming would likely exacerbate the growth stress of this species on Helan
Mountain. Hypothetically, if the combinations of low moisture and increased VPD
stress becomes more common, the mortality rate of lower distributional margin
trees could increase, especially of those that are already experiencing events
of temporary growth cessation.
| [
{
"created": "Fri, 20 Jan 2017 16:18:19 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Nov 2017 08:54:43 GMT",
"version": "v2"
}
] | 2017-11-27 | [
[
"Zhao",
"Shoudong",
""
],
[
"Jiang",
"Yuan",
""
],
[
"Dong",
"Manyu",
""
],
[
"Xu",
"Hui",
""
],
[
"Pederson",
"Neil",
""
]
] ] | Extreme climatic events have been shown to be strong drivers of tree growth, forest dynamics, and range contraction. Here we study the climatic drivers of Picea crassifolia Kom., an endemic to northwest China where climate has significantly warmed. Picea crassifolia was sampled from its lower distributional margin to its upper distributional margin on the Helan Mountains to test the hypothesis that 1) growth at the upper limit is limited by cool temperatures and 2) is limited by drought at its lower limit. We found that trees at the lower distributional margin have experienced a higher rate of stem-growth cessation events since 2001 compared to trees at other elevations. While all populations have a similar climatic sensitivity, stem-growth cessation events in trees at lower distributional margin appear to be driven by low precipitation in June as the monsoon begins to deliver moisture to the region. Evidence indicates that mid-summer (July) vapor pressure deficit (VPD) exacerbates the frequency of these events. These data and our analysis make it evident that an increase in severity and frequency of drought early in the monsoon season could increase the frequency and severity of stem-growth cessation in Picea crassifolia trees at lower elevations. Increases in VPD and warming would likely exacerbate the growth stress of this species on Helan Mountain. Hypothetically, if the combination of low moisture and increased VPD stress becomes more common, the mortality rate of lower distributional margin trees could increase, especially of those that are already experiencing events of temporary growth cessation. |
2110.04299 | Daniel Han Mr. | Sergei Fedotov and Daniel Han and Alexey O Ivanov and Marco A A da
Silva | Superdiffusion in self-reinforcing run-and-tumble model with rests | null | null | 10.1103/PhysRevE.105.014126 | null | q-bio.QM cond-mat.stat-mech physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a run-and-tumble model with self-reinforcing
directionality and rests. We derive a single governing hyperbolic partial
differential equation for the probability density of random walk position, from
which we obtain the second moment in the long time limit. We find the criteria
for the transition between superdiffusion and diffusion caused by the addition
of a rest state. The emergence of superdiffusion depends on both the parameter
representing the strength of self-reinforcement and the ratio between mean
running and resting times. The mean running time must be at least $2/3$ of the
mean resting time for superdiffusion to be possible. Monte Carlo simulations
validate this theoretical result. This work demonstrates the possibility of
extending the telegrapher's (or Cattaneo) equation by adding self-reinforcing
directionality so that superdiffusion occurs even when rests are introduced.
| [
{
"created": "Fri, 8 Oct 2021 11:48:48 GMT",
"version": "v1"
}
] | 2022-02-09 | [
[
"Fedotov",
"Sergei",
""
],
[
"Han",
"Daniel",
""
],
[
"Ivanov",
"Alexey O",
""
],
[
"da Silva",
"Marco A A",
""
]
] | This paper introduces a run-and-tumble model with self-reinforcing directionality and rests. We derive a single governing hyperbolic partial differential equation for the probability density of random walk position, from which we obtain the second moment in the long time limit. We find the criteria for the transition between superdiffusion and diffusion caused by the addition of a rest state. The emergence of superdiffusion depends on both the parameter representing the strength of self-reinforcement and the ratio between mean running and resting times. The mean running time must be at least $2/3$ of the mean resting time for superdiffusion to be possible. Monte Carlo simulations validate this theoretical result. This work demonstrates the possibility of extending the telegrapher's (or Cattaneo) equation by adding self-reinforcing directionality so that superdiffusion occurs even when rests are introduced. |
1601.05065 | James Shine | James M. Shine, Oluwasanmi Koyejo, Russell A. Poldrack | Temporal meta-states are associated with differential patterns of
dynamic connectivity, network topology and attention | 7 pages, 5 figures | null | 10.1073/pnas.1604898113 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Little is currently known about the coordination of neural activity over
longitudinal time-scales and how these changes relate to behavior. To
investigate this issue, we used resting-state fMRI data from a single
individual to identify the presence of two distinct temporal states that
fluctuated over the course of 18 months. We then demonstrated that these
temporal states were associated with distinct neural dynamics within individual
scanning sessions. In addition, the temporal states were also related to
significant alterations in global efficiency, as well as differences in
self-reported attention. These patterns were replicated in a separate
longitudinal dataset, providing further supportive evidence for the presence of
fluctuations in functional network topology over time. Together, our results
underscore the importance of longitudinal phenotyping in cognitive
neuroscience.
| [
{
"created": "Tue, 19 Jan 2016 20:28:00 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Aug 2016 23:53:14 GMT",
"version": "v2"
}
] | 2017-05-30 | [
[
"Shine",
"James M.",
""
],
[
"Koyejo",
"Oluwasanmi",
""
],
[
"Poldrack",
"Russell A.",
""
]
] | Little is currently known about the coordination of neural activity over longitudinal time-scales and how these changes relate to behavior. To investigate this issue, we used resting-state fMRI data from a single individual to identify the presence of two distinct temporal states that fluctuated over the course of 18 months. We then demonstrated that these temporal states were associated with distinct neural dynamics within individual scanning sessions. In addition, the temporal states were also related to significant alterations in global efficiency, as well as differences in self-reported attention. These patterns were replicated in a separate longitudinal dataset, providing further supportive evidence for the presence of fluctuations in functional network topology over time. Together, our results underscore the importance of longitudinal phenotyping in cognitive neuroscience. |
1601.07144 | \'Alvaro Garc\'ia L\'opez | \'Alvaro G. L\'opez, Jes\'us M. Seoane and Miguel A. F. Sanju\'an | On the fractional cell kill law governing the lysis of solid tumors | null | null | null | null | q-bio.TO nlin.CD q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present in silico simulations and mathematical analyses supporting several
hypotheses that explain the saturation expressed in the fractional cell kill
law that governs the lysis of tumor cells by cytotoxic CD8 + T cells (CTLs). In
order to give insight into the significance of the parameters appearing in such
law, a hybrid cellular automaton model describing the spatio-temporal evolution
of tumor growth and its interaction with the cell-mediated immune response is
used. When the CTLs eradicate efficiently the tumor cells, the model predicts a
correlation between the morphology of the tumors and the rate at which they are
lysed. As the effectiveness of the effector cells is decreased, the saturation
gradually disappears. This limit is thoroughly discussed and a new fractional
cell kill is proposed.
| [
{
"created": "Mon, 25 Jan 2016 16:08:00 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jul 2016 16:32:51 GMT",
"version": "v2"
},
{
"created": "Mon, 29 Aug 2016 10:13:57 GMT",
"version": "v3"
}
] | 2016-08-30 | [
[
"López",
"Álvaro G.",
""
],
[
"Seoane",
"Jesús M.",
""
],
[
"Sanjuán",
"Miguel A. F.",
""
]
] | We present in silico simulations and mathematical analyses supporting several hypotheses that explain the saturation expressed in the fractional cell kill law that governs the lysis of tumor cells by cytotoxic CD8 + T cells (CTLs). In order to give insight into the significance of the parameters appearing in such law, a hybrid cellular automaton model describing the spatio-temporal evolution of tumor growth and its interaction with the cell-mediated immune response is used. When the CTLs eradicate efficiently the tumor cells, the model predicts a correlation between the morphology of the tumors and the rate at which they are lysed. As the effectiveness of the effector cells is decreased, the saturation gradually disappears. This limit is thoroughly discussed and a new fractional cell kill is proposed. |
1604.03409 | Carla Molteni | Federico Comitani, Vittorio Limongelli and Carla Molteni | The free energy landscape of GABA binding to a pentameric ligand-gated
ion channel and its disruption by mutations | accepted (May 2016); 27 pages, 6 figures, Table of contents graphic,
Journal of Chemical Theory and Computation (2016) | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pentameric ligand-gated ion channels (pLGICs) of the Cys-loop superfamily are
important neuroreceptors that mediate fast synaptic transmission. They are
activated by the binding of a neurotransmitter, but the details of this process
are still not fully understood. As a prototypical pLGIC, here we choose the
insect resistance to dieldrin (RDL) receptor, involved in the resistance to
insecticides, and investigate the binding of the neurotransmitter GABA to its
extracellular domain at the atomistic level. We achieve this by means of
$\mu$-sec funnel-metadynamics simulations, which efficiently enhance the
sampling of bound and unbound states by using a funnel-shaped restraining
potential to limit the exploration in the solvent. We reveal the sequence of
events in the binding process, from the capture of GABA from the solvent to its
pinning between the charged residues Arg111 and Glu204 in the binding pocket.
We characterize the associated free energy landscapes in the wild-type RDL
receptor and in two mutant forms, where the key residues Arg111 and Glu204 are
mutated to Ala. Experimentally these mutations produce non-functional channels,
which is reflected in the reduced ligand binding affinities, due to the loss of
essential interactions. We also analyze the dynamical behaviour of the crucial
loop C, whose opening allows the access of GABA to the binding site, while its
closure locks the ligand into the protein. The RDL receptor shares structural
and functional features with other pLGICs, hence our work outlines a valuable
protocol to study the binding of ligands to pLGICs beyond conventional docking
and molecular dynamics techniques.
| [
{
"created": "Tue, 12 Apr 2016 13:50:26 GMT",
"version": "v1"
},
{
"created": "Fri, 27 May 2016 14:05:57 GMT",
"version": "v2"
}
] | 2016-05-30 | [
[
"Comitani",
"Federico",
""
],
[
"Limongelli",
"Vittorio",
""
],
[
"Molteni",
"Carla",
""
]
] | Pentameric ligand-gated ion channels (pLGICs) of the Cys-loop superfamily are important neuroreceptors that mediate fast synaptic transmission. They are activated by the binding of a neurotransmitter, but the details of this process are still not fully understood. As a prototypical pLGIC, here we choose the insect resistance to dieldrin (RDL) receptor, involved in the resistance to insecticides, and investigate the binding of the neurotransmitter GABA to its extracellular domain at the atomistic level. We achieve this by means of $\mu$-sec funnel-metadynamics simulations, which efficiently enhance the sampling of bound and unbound states by using a funnel-shaped restraining potential to limit the exploration in the solvent. We reveal the sequence of events in the binding process, from the capture of GABA from the solvent to its pinning between the charged residues Arg111 and Glu204 in the binding pocket. We characterize the associated free energy landscapes in the wild-type RDL receptor and in two mutant forms, where the key residues Arg111 and Glu204 are mutated to Ala. Experimentally these mutations produce non-functional channels, which is reflected in the reduced ligand binding affinities, due to the loss of essential interactions. We also analyze the dynamical behaviour of the crucial loop C, whose opening allows the access of GABA to the binding site, while its closure locks the ligand into the protein. The RDL receptor shares structural and functional features with other pLGICs, hence our work outlines a valuable protocol to study the binding of ligands to pLGICs beyond conventional docking and molecular dynamics techniques. |
2208.11126 | Weili Nie | Zichao Wang, Weili Nie, Zhuoran Qiao, Chaowei Xiao, Richard Baraniuk,
Anima Anandkumar | Retrieval-based Controllable Molecule Generation | ICLR 2023 | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating new molecules with specified chemical and biological properties
via generative models has emerged as a promising direction for drug discovery.
However, existing methods require extensive training/fine-tuning with a large
dataset, often unavailable in real-world generation tasks. In this work, we
propose a new retrieval-based framework for controllable molecule generation.
We use a small set of exemplar molecules, i.e., those that (partially) satisfy
the design criteria, to steer the pre-trained generative model towards
synthesizing molecules that satisfy the given design criteria. We design a
retrieval mechanism that retrieves and fuses the exemplar molecules with the
input molecule, which is trained by a new self-supervised objective that
predicts the nearest neighbor of the input molecule. We also propose an
iterative refinement process to dynamically update the generated molecules and
retrieval database for better generalization. Our approach is agnostic to the
choice of generative models and requires no task-specific fine-tuning. On
various tasks ranging from simple design criteria to a challenging real-world
scenario for designing lead compounds that bind to the SARS-CoV-2 main
protease, we demonstrate our approach extrapolates well beyond the retrieval
database, and achieves better performance and wider applicability than previous
methods. Code is available at https://github.com/NVlabs/RetMol.
| [
{
"created": "Tue, 23 Aug 2022 17:01:16 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Sep 2022 16:47:41 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Apr 2023 17:50:51 GMT",
"version": "v3"
}
] | 2023-04-25 | [
[
"Wang",
"Zichao",
""
],
[
"Nie",
"Weili",
""
],
[
"Qiao",
"Zhuoran",
""
],
[
"Xiao",
"Chaowei",
""
],
[
"Baraniuk",
"Richard",
""
],
[
"Anandkumar",
"Anima",
""
]
] | Generating new molecules with specified chemical and biological properties via generative models has emerged as a promising direction for drug discovery. However, existing methods require extensive training/fine-tuning with a large dataset, often unavailable in real-world generation tasks. In this work, we propose a new retrieval-based framework for controllable molecule generation. We use a small set of exemplar molecules, i.e., those that (partially) satisfy the design criteria, to steer the pre-trained generative model towards synthesizing molecules that satisfy the given design criteria. We design a retrieval mechanism that retrieves and fuses the exemplar molecules with the input molecule, which is trained by a new self-supervised objective that predicts the nearest neighbor of the input molecule. We also propose an iterative refinement process to dynamically update the generated molecules and retrieval database for better generalization. Our approach is agnostic to the choice of generative models and requires no task-specific fine-tuning. On various tasks ranging from simple design criteria to a challenging real-world scenario for designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate our approach extrapolates well beyond the retrieval database, and achieves better performance and wider applicability than previous methods. Code is available at https://github.com/NVlabs/RetMol. |
1802.07905 | Hanna Keren | Hanna Keren, Johannes Partzsch, Shimon Marom, and Christian Mayr | Closed-loop control of a modular neuromorphic biohybrid | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network modularity is a major challenge for the development of
control circuits of neural activity. Under physiological limitations, the
accessible regions for external stimulation are possibly different from the
functionally relevant ones, requiring complex indirect control designs.
Moreover, control over one region might affect activity of other downstream
networks, once sparse connections exist. We address these questions by
developing a hybrid device of a cortical culture functionally integrated with a
biomimetic hardware neural network. This design enables the study of modular
networks controllability, while connectivity is well-defined and key features
of cortical networks are accessible. Using a closed-loop control to monitor the
activity of the coupled hybrid, we show that both modules are congruently
modified, in the macroscopic as well as the microscopic activity levels.
Control impacts efficiently the activity on both sides whether the control
circuit is an indirect series one, or implemented independently only on one of
the modules. Hence, these results present global functional impacts of a local
control intervention. Overall, this strategy provides an experimental access to
the controllability of neural activity irregularities, when embedded in a
modular organization.
| [
{
"created": "Thu, 22 Feb 2018 05:08:48 GMT",
"version": "v1"
}
] | 2018-02-23 | [
[
"Keren",
"Hanna",
""
],
[
"Partzsch",
"Johannes",
""
],
[
"Marom",
"Shimon",
""
],
[
"Mayr",
"Christian",
""
]
] ] | Neural network modularity is a major challenge for the development of control circuits of neural activity. Under physiological limitations, the accessible regions for external stimulation are possibly different from the functionally relevant ones, requiring complex indirect control designs. Moreover, control over one region might affect activity of other downstream networks, once sparse connections exist. We address these questions by developing a hybrid device of a cortical culture functionally integrated with a biomimetic hardware neural network. This design enables the study of modular networks controllability, while connectivity is well-defined and key features of cortical networks are accessible. Using a closed-loop control to monitor the activity of the coupled hybrid, we show that both modules are congruently modified, in the macroscopic as well as the microscopic activity levels. Control impacts efficiently the activity on both sides whether the control circuit is an indirect series one, or implemented independently only on one of the modules. Hence, these results present global functional impacts of a local control intervention. Overall, this strategy provides an experimental access to the controllability of neural activity irregularities, when embedded in a modular organization. |
2212.09870 | Yuan Luo | Yiming Li, Yuan Luo | Metabolomics of Aging and Alzheimer's Disease: From Single-Omics to
Multi-Omics | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Aging is a multifactorial process and a key factor of morbidity and
mortality. Alzheimer's disease (AD) is an age-related disorder and a main cause
of worldwide disability. Both aging and AD can be characterized by metabolic
dysfunction. Metabolomics can quantify the complete set of metabolites in a
studied sample and is helpful for studying metabolic alterations in aging and
AD. In this review, we summarize the metabolomic changes regarding aging and
AD, discuss their biological functions, and highlight their potential
application as diagnostic biomarkers or therapeutic targets. Recent advances in
multi-omics approaches for understanding the metabolic mechanism of aging and
AD are also reviewed.
| [
{
"created": "Mon, 19 Dec 2022 21:48:32 GMT",
"version": "v1"
}
] | 2022-12-21 | [
[
"Li",
"Yiming",
""
],
[
"Luo",
"Yuan",
""
]
] | Aging is a multifactorial process and a key factor of morbidity and mortality. Alzheimer's disease (AD) is an age-related disorder and a main cause of worldwide disability. Both aging and AD can be characterized by metabolic dysfunction. Metabolomics can quantify the complete set of metabolites in a studied sample and is helpful for studying metabolic alterations in aging and AD. In this review, we summarize the metabolomic changes regarding aging and AD, discuss their biological functions, and highlight their potential application as diagnostic biomarkers or therapeutic targets. Recent advances in multi-omics approaches for understanding the metabolic mechanism of aging and AD are also reviewed. |
1806.07879 | Miguel Aguilera | Miguel Aguilera, Ezequiel Di Paolo | Integrated information in the thermodynamic limit | arXiv admin note: substantial text overlap with arXiv:1805.00393 | null | 10.1016/j.neunet.2019.03.001 | Neural Networks, 2019, Volume 114, pp 136-146 | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.AO physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | The capacity to integrate information is a prominent feature of biological
and cognitive systems. Integrated Information Theory (IIT) provides a
mathematical approach to quantify the level of integration in a system, yet its
computational cost generally precludes its applications beyond relatively small
models. In consequence, it is not yet well understood how integration scales up
with the size of a system or with different temporal scales of activity, nor
how a system maintains its integration as it interacts with its environment.
Here, we show for the first time how measures of information integration scale
when systems become very large. Using kinetic Ising models and mean-field
approximations from statistical mechanics, we show that information integration
diverges in the thermodynamic limit at certain critical points. Moreover, by
comparing different divergent tendencies of blocks of a system at these
critical points, we delimit the boundary between an integrated unit and its
environment. Finally, we present a model that adaptively maintains its
integration despite changes in its environment by generating a critical surface
where its integrity is preserved. We argue that the exploration of integrated
information for these limit cases helps in addressing a variety of poorly
understood questions about the organization of biological, neural, and
cognitive systems.
| [
{
"created": "Wed, 20 Jun 2018 08:30:37 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jul 2018 11:11:27 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Jul 2018 07:42:45 GMT",
"version": "v3"
}
] | 2020-08-31 | [
[
"Aguilera",
"Miguel",
""
],
[
"Di Paolo",
"Ezequiel",
""
]
] ] | The capacity to integrate information is a prominent feature of biological and cognitive systems. Integrated Information Theory (IIT) provides a mathematical approach to quantify the level of integration in a system, yet its computational cost generally precludes its applications beyond relatively small models. In consequence, it is not yet well understood how integration scales up with the size of a system or with different temporal scales of activity, nor how a system maintains its integration as it interacts with its environment. Here, we show for the first time how measures of information integration scale when systems become very large. Using kinetic Ising models and mean-field approximations from statistical mechanics, we show that information integration diverges in the thermodynamic limit at certain critical points. Moreover, by comparing different divergent tendencies of blocks of a system at these critical points, we delimit the boundary between an integrated unit and its environment. Finally, we present a model that adaptively maintains its integration despite changes in its environment by generating a critical surface where its integrity is preserved. We argue that the exploration of integrated information for these limit cases helps in addressing a variety of poorly understood questions about the organization of biological, neural, and cognitive systems. |
1209.3974 | Francisco-Jose Perez-Reche | F.J. Perez-Reche, S.N. Taraskin, W. Otten, M.P. Viana, L. da F. Costa
and C.A. Gilligan | Prominent effect of soil network heterogeneity on microbial invasion | Main text: 6 pages, 4 figures. Supporting information: 19 pages, 15
figures | Phys. Rev. Lett. 109, 098102 (2012) | 10.1103/PhysRevLett.109.098102 | null | q-bio.PE physics.bio-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using a network representation for real soil samples and mathematical models
for microbial spread, we show that the structural heterogeneity of the soil
habitat may have a very significant influence on the size of microbial
invasions of the soil pore space. In particular, neglecting the soil structural
heterogeneity may lead to a substantial underestimation of microbial invasion.
Such effects are explained in terms of a crucial interplay between
heterogeneity in microbial spread and heterogeneity in the topology of soil
networks. The main influence of network topology on invasion is linked to the
existence of long channels in soil networks that may act as bridges for
transmission of microorganisms between distant parts of soil.
| [
{
"created": "Tue, 18 Sep 2012 14:21:43 GMT",
"version": "v1"
}
] | 2012-09-19 | [
[
"Perez-Reche",
"F. J.",
""
],
[
"Taraskin",
"S. N.",
""
],
[
"Otten",
"W.",
""
],
[
"Viana",
"M. P.",
""
],
[
"Costa",
"L. da F.",
""
],
[
"Gilligan",
"C. A.",
""
]
] | Using a network representation for real soil samples and mathematical models for microbial spread, we show that the structural heterogeneity of the soil habitat may have a very significant influence on the size of microbial invasions of the soil pore space. In particular, neglecting the soil structural heterogeneity may lead to a substantial underestimation of microbial invasion. Such effects are explained in terms of a crucial interplay between heterogeneity in microbial spread and heterogeneity in the topology of soil networks. The main influence of network topology on invasion is linked to the existence of long channels in soil networks that may act as bridges for transmission of microorganisms between distant parts of soil. |
2105.07865 | Jacques Van Helden | Jacques van Helden, Colin D Butler, Bruno Canard, Guillaume Achaz,
Fran\c{c}ois Graner, Rossana Segreto, Yuri Deigin, Fabien Colombo, Serge
Morand, Didier Casane, Dan Sirotkin, Karl Sirotkin, Etienne Decroly, Jos\'e
Halloy | An appeal for an open scientific debate about the proximal origin of
SARS-CoV-2 | This letter was submitted to The Lancet on January 6, 2021. When the
letter was rejected, we asked the editors whether this refusal meant that
they consider this scientific evaluation of the alternative hypotheses should
not be hosted by scientific journals. The submission was reassessed by the
chief editor, who confirmed the decision to reject the letter without peer
review | null | 10.13140/RG.2.2.15356.46727 | null | q-bio.OT physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | One year after the onset of the COVID-19 pandemic, the origin of SARS-CoV-2
still eludes humanity. Early publications firmly stated that the virus was of
natural origin, and the possibility that the virus might have escaped from a
lab was discarded in most subsequent publications. However, based on a
re-analysis of the initial arguments, highlighted by the current knowledge
about the virus, we show that the natural origin is not supported by conclusive
arguments, and that a lab origin cannot be formally discarded. We call for an
opening of peer-reviewed journals to a rational, evidence-based and
prejudice-free evaluation of all the reasonable hypotheses about the virus'
origin. We advocate that this debate should take place in the columns of
renowned scientific journals, rather than being left to social media and
newspapers.
| [
{
"created": "Thu, 13 May 2021 05:02:48 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"van Helden",
"Jacques",
""
],
[
"Butler",
"Colin D",
""
],
[
"Canard",
"Bruno",
""
],
[
"Achaz",
"Guillaume",
""
],
[
"Graner",
"François",
""
],
[
"Segreto",
"Rossana",
""
],
[
"Deigin",
"Yuri",
""
],
[
"Colombo",
"Fabien",
""
],
[
"Morand",
"Serge",
""
],
[
"Casane",
"Didier",
""
],
[
"Sirotkin",
"Dan",
""
],
[
"Sirotkin",
"Karl",
""
],
[
"Decroly",
"Etienne",
""
],
[
"Halloy",
"José",
""
]
] | One year after the onset of the COVID-19 pandemic, the origin of SARS-CoV-2 still eludes humanity. Early publications firmly stated that the virus was of natural origin, and the possibility that the virus might have escaped from a lab was discarded in most subsequent publications. However, based on a re-analysis of the initial arguments, highlighted by the current knowledge about the virus, we show that the natural origin is not supported by conclusive arguments, and that a lab origin cannot be formally discarded. We call for an opening of peer-reviewed journals to a rational, evidence-based and prejudice-free evaluation of all the reasonable hypotheses about the virus' origin. We advocate that this debate should take place in the columns of renowned scientific journals, rather than being left to social media and newspapers. |
2307.03934 | Florian Jug | Joran Deschamps, Damian Dalle Nogare, Florian Jug | Better Research Software Tools to Elevate the Rate of Scientific
Discovery -- or why we need to invest in research software engineering | 8 pages, 0 figures | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the past decade, enormous progress has been made in advancing the
state-of-the-art in bioimage analysis - a young computational field that works
in close collaboration with the life sciences on the quantitative analysis of
scientific image data. In many cases, tremendous effort has been spent to
package these new advances into usable software tools and, as a result, users
can nowadays routinely apply cutting-edge methods to their analysis problems
using software tools such as ilastik [1], cellprofiler [2], Fiji/ImageJ2 [3,4]
and its many modern plugins that build on the BigDataViewer ecosystem [5], and
many others. Such software tools have now become part of a critical
infrastructure for science [6]. Unfortunately, overshadowed by the few
exceptions that have had long-lasting impact, many other potentially useful
tools fail to find their way into the hands of users. While there are many
reasons for this, we believe that at least some of the underlying problems,
which we discuss in more detail below, can be mitigated. In this opinion piece,
we specifically argue that embedding teams of research software engineers
(RSEs) within imaging and image analysis core facilities would be a major step
towards sustainable bioimage analysis software.
| [
{
"created": "Sat, 8 Jul 2023 08:46:14 GMT",
"version": "v1"
}
] | 2023-07-11 | [
[
"Deschamps",
"Joran",
""
],
[
"Nogare",
"Damian Dalle",
""
],
[
"Jug",
"Florian",
""
]
] | In the past decade, enormous progress has been made in advancing the state-of-the-art in bioimage analysis - a young computational field that works in close collaboration with the life sciences on the quantitative analysis of scientific image data. In many cases, tremendous effort has been spent to package these new advances into usable software tools and, as a result, users can nowadays routinely apply cutting-edge methods to their analysis problems using software tools such as ilastik [1], cellprofiler [2], Fiji/ImageJ2 [3,4] and its many modern plugins that build on the BigDataViewer ecosystem [5], and many others. Such software tools have now become part of a critical infrastructure for science [6]. Unfortunately, overshadowed by the few exceptions that have had long-lasting impact, many other potentially useful tools fail to find their way into the hands of users. While there are many reasons for this, we believe that at least some of the underlying problems, which we discuss in more detail below, can be mitigated. In this opinion piece, we specifically argue that embedding teams of research software engineers (RSEs) within imaging and image analysis core facilities would be a major step towards sustainable bioimage analysis software. |
1209.2089 | Nicholas Eriksson | Amy K. Kiefer, Joyce Y. Tung, Chuong B. Do, David A. Hinds, Joanna L.
Mountain, Uta Francke and Nicholas Eriksson | Genome-wide analysis points to roles for extracellular matrix
remodeling, the visual cycle, and neuronal development in myopia | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Myopia, or nearsightedness, is the most common eye disorder, resulting
primarily from excess elongation of the eye. The etiology of myopia, although
known to be complex, is poorly understood. Here we report the largest ever
genome-wide association study (43,360 participants) on myopia in Europeans. We
performed a survival analysis on age of myopia onset and identified 19
significant associations (p < 5e-8), two of which are replications of earlier
associations with refractive error. These 19 associations in total explain 2.7%
of the variance in myopia age of onset, and point towards a number of different
mechanisms behind the development of myopia. One association is in the gene
PRSS56, which has previously been linked to abnormally small eyes; one is in a
gene that forms part of the extracellular matrix (LAMA2); two are in or near
genes involved in the regeneration of 11-cis-retinal (RGR and RDH5); two are
near genes known to be involved in the growth and guidance of retinal ganglion
cells (ZIC2, SFRP1); and five are in or near genes involved in neuronal
signaling or development. These novel findings point towards multiple genetic
factors involved in the development of myopia and suggest that complex
interactions between extracellular matrix remodeling, neuronal development, and
visual signals from the retina may underlie the development of myopia in
humans.
| [
{
"created": "Mon, 10 Sep 2012 18:44:17 GMT",
"version": "v1"
}
] | 2012-09-11 | [
[
"Kiefer",
"Amy K.",
""
],
[
"Tung",
"Joyce Y.",
""
],
[
"Do",
"Chuong B.",
""
],
[
"Hinds",
"David A.",
""
],
[
"Mountain",
"Joanna L.",
""
],
[
"Francke",
"Uta",
""
],
[
"Eriksson",
"Nicholas",
""
]
] | Myopia, or nearsightedness, is the most common eye disorder, resulting primarily from excess elongation of the eye. The etiology of myopia, although known to be complex, is poorly understood. Here we report the largest ever genome-wide association study (43,360 participants) on myopia in Europeans. We performed a survival analysis on age of myopia onset and identified 19 significant associations (p < 5e-8), two of which are replications of earlier associations with refractive error. These 19 associations in total explain 2.7% of the variance in myopia age of onset, and point towards a number of different mechanisms behind the development of myopia. One association is in the gene PRSS56, which has previously been linked to abnormally small eyes; one is in a gene that forms part of the extracellular matrix (LAMA2); two are in or near genes involved in the regeneration of 11-cis-retinal (RGR and RDH5); two are near genes known to be involved in the growth and guidance of retinal ganglion cells (ZIC2, SFRP1); and five are in or near genes involved in neuronal signaling or development. These novel findings point towards multiple genetic factors involved in the development of myopia and suggest that complex interactions between extracellular matrix remodeling, neuronal development, and visual signals from the retina may underlie the development of myopia in humans. |
1810.04707 | Houssam Nassif | Houssam Nassif, Hassan Al-Ali, Sawsan Khuri, Walid Keirouz, and David
Page | An Inductive Logic Programming Approach to Validate Hexose Binding
Biochemical Knowledge | null | International Conference on Inductive Logic Programming (ILP'09),
Leuven, Belgium, pp. 149-165, 2009 | 10.1007/978-3-642-13840-9_14 | null | q-bio.OT cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hexoses are simple sugars that play a key role in many cellular pathways, and
in the regulation of development and disease mechanisms. Current protein-sugar
computational models are based, at least partially, on prior biochemical
findings and knowledge. They incorporate different parts of these findings in
predictive black-box models. We investigate the empirical support for
biochemical findings by comparing Inductive Logic Programming (ILP) induced
rules to actual biochemical results. We mine the Protein Data Bank for a
representative data set of hexose binding sites, non-hexose binding sites and
surface grooves. We build an ILP model of hexose-binding sites and evaluate our
results against several baseline machine learning classifiers. Our method
achieves an accuracy similar to that of other black-box classifiers while
providing insight into the discriminating process. In addition, it confirms
wet-lab findings and reveals a previously unreported Trp-Glu amino acids
dependency.
| [
{
"created": "Tue, 2 Oct 2018 19:59:18 GMT",
"version": "v1"
}
] | 2018-10-12 | [
[
"Nassif",
"Houssam",
""
],
[
"Al-Ali",
"Hassan",
""
],
[
"Khuri",
"Sawsan",
""
],
[
"Keirouz",
"Walid",
""
],
[
"Page",
"David",
""
]
] | Hexoses are simple sugars that play a key role in many cellular pathways, and in the regulation of development and disease mechanisms. Current protein-sugar computational models are based, at least partially, on prior biochemical findings and knowledge. They incorporate different parts of these findings in predictive black-box models. We investigate the empirical support for biochemical findings by comparing Inductive Logic Programming (ILP) induced rules to actual biochemical results. We mine the Protein Data Bank for a representative data set of hexose binding sites, non-hexose binding sites and surface grooves. We build an ILP model of hexose-binding sites and evaluate our results against several baseline machine learning classifiers. Our method achieves an accuracy similar to that of other black-box classifiers while providing insight into the discriminating process. In addition, it confirms wet-lab findings and reveals a previously unreported Trp-Glu amino acids dependency. |
2103.07772 | Sathish Ande | Sathish Ande, Jayanth R Regatti, Neha Pandey, Ajith Karunarathne,
Lopamudra Giri, Soumya Jana | Heterogeneity in Neuronal Calcium Spike Trains based on Empirical
Distance | null | null | null | null | q-bio.NC eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Statistical similarities between neuronal spike trains could reveal
significant information on complex underlying processing. In general, the
similarity between synchronous spike trains is somewhat easy to identify.
However, similar patterns can also appear in an asynchronous manner, and
existing methods for their identification tend to converge slowly and cannot
be applied to short sequences. In response, we propose a Hellinger distance
measure based on empirical probabilities, which we show to
be as accurate as existing techniques, yet faster to converge for synthetic as
well as experimental spike trains. Further, we cluster pairs of neuronal spike
trains based on statistical similarities and find two non-overlapping classes,
which could indicate functional similarities in neurons. Significantly, our
technique detected functional heterogeneity in pairs of neuronal responses with
the same performance as existing techniques, while exhibiting faster
convergence. We expect the proposed method to facilitate large-scale studies of
functional clustering, especially involving short sequences, which would in
turn identify signatures of various diseases in terms of clustering patterns.
| [
{
"created": "Sat, 13 Mar 2021 19:09:11 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Ande",
"Sathish",
""
],
[
"Regatti",
"Jayanth R",
""
],
[
"Pandey",
"Neha",
""
],
[
"Karunarathne",
"Ajith",
""
],
[
"Giri",
"Lopamudra",
""
],
[
"Jana",
"Soumya",
""
]
] | Statistical similarities between neuronal spike trains could reveal significant information on complex underlying processing. In general, the similarity between synchronous spike trains is somewhat easy to identify. However, similar patterns can also appear in an asynchronous manner, and existing methods for their identification tend to converge slowly and cannot be applied to short sequences. In response, we propose a Hellinger distance measure based on empirical probabilities, which we show to be as accurate as existing techniques, yet faster to converge for synthetic as well as experimental spike trains. Further, we cluster pairs of neuronal spike trains based on statistical similarities and find two non-overlapping classes, which could indicate functional similarities in neurons. Significantly, our technique detected functional heterogeneity in pairs of neuronal responses with the same performance as existing techniques, while exhibiting faster convergence. We expect the proposed method to facilitate large-scale studies of functional clustering, especially involving short sequences, which would in turn identify signatures of various diseases in terms of clustering patterns. |
2306.08096 | Einar Bjarki Gunnarsson | Einar Bjarki Gunnarsson and Jasmine Foo and Kevin Leder | Statistical inference of the rates of cell proliferation and phenotypic
switching in cancer | 45 pages, 11 figures, accepted for publication in Journal of
Theoretical Biology | Journal of Theoretical Biology, 568 (2023), 111497 | 10.1016/j.jtbi.2023.111497 | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent evidence suggests that nongenetic (epigenetic) mechanisms play an
important role at all stages of cancer evolution. In many cancers, these
mechanisms have been observed to induce dynamic switching between two or more
cell states, which commonly show differential responses to drug treatments. To
understand how these cancers evolve over time, and how they respond to
treatment, we need to understand the state-dependent rates of cell
proliferation and phenotypic switching. In this work, we propose a rigorous
statistical framework for estimating these parameters, using data from commonly
performed cell line experiments, where phenotypes are sorted and expanded in
culture. The framework explicitly models the stochastic dynamics of cell
division, cell death and phenotypic switching, and it provides likelihood-based
confidence intervals for the model parameters. The input data can be either the
fraction of cells or the number of cells in each state at one or more time
points. Through a combination of theoretical analysis and numerical
simulations, we show that when cell fraction data is used, the rates of
switching may be the only parameters that can be estimated accurately. On the
other hand, using cell number data enables accurate estimation of the net
division rate for each phenotype, and it can even enable estimation of the
state-dependent rates of cell division and cell death. We conclude by applying
our framework to a publicly available dataset.
| [
{
"created": "Tue, 13 Jun 2023 19:29:37 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Gunnarsson",
"Einar Bjarki",
""
],
[
"Foo",
"Jasmine",
""
],
[
"Leder",
"Kevin",
""
]
] | Recent evidence suggests that nongenetic (epigenetic) mechanisms play an important role at all stages of cancer evolution. In many cancers, these mechanisms have been observed to induce dynamic switching between two or more cell states, which commonly show differential responses to drug treatments. To understand how these cancers evolve over time, and how they respond to treatment, we need to understand the state-dependent rates of cell proliferation and phenotypic switching. In this work, we propose a rigorous statistical framework for estimating these parameters, using data from commonly performed cell line experiments, where phenotypes are sorted and expanded in culture. The framework explicitly models the stochastic dynamics of cell division, cell death and phenotypic switching, and it provides likelihood-based confidence intervals for the model parameters. The input data can be either the fraction of cells or the number of cells in each state at one or more time points. Through a combination of theoretical analysis and numerical simulations, we show that when cell fraction data is used, the rates of switching may be the only parameters that can be estimated accurately. On the other hand, using cell number data enables accurate estimation of the net division rate for each phenotype, and it can even enable estimation of the state-dependent rates of cell division and cell death. We conclude by applying our framework to a publicly available dataset. |
2311.14717 | Mattia Sensi | Lukas Eigentler, Mattia Sensi | Delayed loss of stability of periodic travelling waves: insights from
the analysis of essential spectra | 25 pages, 12 figures | null | null | null | q-bio.PE math.DS nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Periodic travelling waves (PTW) are a common solution type of partial
differential equations. Such models exhibit multistability of PTWs, typically
visualised through the Busse balloon, and parameter changes typically lead to a
cascade of wavelength changes through the Busse balloon. In the past, the
stability boundaries of the Busse balloon have been used to predict such
wavelength changes. Here, motivated by anecdotal evidence from previous work,
we provide compelling evidence that the Busse balloon provides insufficient
information to predict wavelength changes due to a delayed loss of stability
phenomenon. Using two different reaction-advection-diffusion systems, we relate
the delay that occurs between the crossing of a stability boundary in the Busse
balloon and the occurrence of a wavelength change to features of the essential
spectrum of the destabilised PTW. This leads to a predictive framework that can
estimate the order of magnitude of such a time delay, which provides a novel
``early warning sign'' for pattern destabilization. We illustrate the
implementation of the predictive framework to predict under what conditions a
wavelength change of a PTW occurs.
| [
{
"created": "Fri, 17 Nov 2023 19:17:55 GMT",
"version": "v1"
}
] | 2023-11-28 | [
[
"Eigentler",
"Lukas",
""
],
[
"Sensi",
"Mattia",
""
]
] | Periodic travelling waves (PTW) are a common solution type of partial differential equations. Such models exhibit multistability of PTWs, typically visualised through the Busse balloon, and parameter changes typically lead to a cascade of wavelength changes through the Busse balloon. In the past, the stability boundaries of the Busse balloon have been used to predict such wavelength changes. Here, motivated by anecdotal evidence from previous work, we provide compelling evidence that the Busse balloon provides insufficient information to predict wavelength changes due to a delayed loss of stability phenomenon. Using two different reaction-advection-diffusion systems, we relate the delay that occurs between the crossing of a stability boundary in the Busse balloon and the occurrence of a wavelength change to features of the essential spectrum of the destabilised PTW. This leads to a predictive framework that can estimate the order of magnitude of such a time delay, which provides a novel ``early warning sign'' for pattern destabilization. We illustrate the implementation of the predictive framework to predict under what conditions a wavelength change of a PTW occurs. |
2012.13593 | Hongyi Li Dr. | Li Hongyi, Yin Yajun, Hu Jun, Li Hua, Wang Fang, Ji Fusui, Ma Chao | An insight into acupoints and meridians in human body based on
interstitial fluid circulation | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | The atlas of human acupoints and meridians has been utilized in clinical
practice for almost a millennium although the anatomical structures and
functions remain to be clarified. It has recently been reported that a
long-distance interstitial fluid (ISF) circulatory pathway may originate from
the acupoints in the extremities. As observed in living human subjects,
cadavers and animals using magnetic resonance imaging and fluorescent tracers,
the ISF flow pathways include at least 4 types of anatomical structures: the
cutaneous-, perivenous-, periarterial-, and neural-pathways. Unlike the blood
or lymphatic vessels, these ISF flow pathways are composed of highly ordered
and topologically connected interstitial fibrous connective tissues that may
work as guiderails for the ISF to flow actively over long distance under
certain driving forces. Our experimental results demonstrated that most
acupoints in the extremity endings connect with one or more ISF flow pathways
and comprise a complex network of acupoint-ISF-pathways. We also found that
this acupoint-ISF-pathway network can connect to visceral organs or tissues
such as the pericardium and epicardium, even though the topographical geometry
in human extremities does not totally match the meridian lines on the atlas
that is currently used in traditional Chinese medicine. Based on our
experimental data, the following working hypotheses are proposed. A
comprehensive atlas will be constructed to systemically reveal the detailed
anatomical structures of the acupoints-originated ISF circulation. Such an
atlas may shed light on the mysteries shrouding the visceral correlations of
acupoints and meridians, and inaugurate a new frontier for innovative medical
applications.
| [
{
"created": "Fri, 25 Dec 2020 15:24:57 GMT",
"version": "v1"
}
] | 2020-12-29 | [
[
"Hongyi",
"Li",
""
],
[
"Yajun",
"Yin",
""
],
[
"Jun",
"Hu",
""
],
[
"Hua",
"Li",
""
],
[
"Fang",
"Wang",
""
],
[
"Fusui",
"Ji",
""
],
[
"Chao",
"Ma",
""
]
] | The atlas of human acupoints and meridians has been utilized in clinical practice for almost a millennium although the anatomical structures and functions remain to be clarified. It has recently been reported that a long-distance interstitial fluid (ISF) circulatory pathway may originate from the acupoints in the extremities. As observed in living human subjects, cadavers and animals using magnetic resonance imaging and fluorescent tracers, the ISF flow pathways include at least 4 types of anatomical structures: the cutaneous-, perivenous-, periarterial-, and neural-pathways. Unlike the blood or lymphatic vessels, these ISF flow pathways are composed of highly ordered and topologically connected interstitial fibrous connective tissues that may work as guiderails for the ISF to flow actively over long distance under certain driving forces. Our experimental results demonstrated that most acupoints in the extremity endings connect with one or more ISF flow pathways and comprise a complex network of acupoint-ISF-pathways. We also found that this acupoint-ISF-pathway network can connect to visceral organs or tissues such as the pericardium and epicardium, even though the topographical geometry in human extremities does not totally match the meridian lines on the atlas that is currently used in traditional Chinese medicine. Based on our experimental data, the following working hypotheses are proposed. A comprehensive atlas will be constructed to systemically reveal the detailed anatomical structures of the acupoints-originated ISF circulation. Such an atlas may shed light on the mysteries shrouding the visceral correlations of acupoints and meridians, and inaugurate a new frontier for innovative medical applications. |
2210.14954 | Christophe Bastien | C. Bastien, A. Scattina, C. Neal-Sturgess, R. Panno, V. Shrinivas | A Numerical Method to Compute Brain Injury Associated to Concussion | 12 pages | null | null | null | q-bio.QM physics.bio-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research proposes a new numerical method to compute brain injury
associated with concussion using the Peak Virtual Power method with the THUMS
4.02 head model. The results indicate that mild and severe concussions could be
prevented for lateral collisions and frontal impacts with PVP values lower than
0.928mW and 9.405mW, respectively, and no concussion would happen in the head
vertical direction for a PVP value less than 1.184mW. This innovative method
proposes a new paradigm to improve helmet designs, assess sports injuries and
improve people's wellbeing.
| [
{
"created": "Tue, 25 Oct 2022 14:34:50 GMT",
"version": "v1"
}
] | 2022-10-28 | [
[
"Bastien",
"C.",
""
],
[
"Scattina",
"A.",
""
],
[
"Neal-Sturgess",
"C.",
""
],
[
"Panno",
"R.",
""
],
[
"Shrinivas",
"V.",
""
]
] | This research proposes a new numerical method to compute brain injury associated with concussion using the Peak Virtual Power method with the THUMS 4.02 head model. The results indicate that mild and severe concussions could be prevented for lateral collisions and frontal impacts with PVP values lower than 0.928mW and 9.405mW, respectively, and no concussion would happen in the head vertical direction for a PVP value less than 1.184mW. This innovative method proposes a new paradigm to improve helmet designs, assess sports injuries and improve people's wellbeing. |
2002.06330 | Peng Cao | Peng Cao and Susan M. Noworolski and Olga Starobinets and Natalie Korn
and Sage P. Kramer and Antonio C. Westphalen and Andrew P. Leynes and
Valentina Pedoia and Peder Larson | Development of Conditional Random Field Insert for UNet-based Zonal
Prostate Segmentation on T2-Weighted MRI | null | null | null | null | q-bio.QM eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: A conventional 2D UNet convolutional neural network (CNN)
architecture may result in ill-defined boundaries in segmentation output.
Several studies imposed stronger constraints on each level of UNet to improve
the performance of 2D UNet, such as SegNet. In this study, we investigated 2D
SegNet and a proposed conditional random field insert (CRFI) for zonal prostate
segmentation from clinical T2-weighted MRI data.
Methods: We introduced a new methodology that combines SegNet and CRFI to
improve the accuracy and robustness of the segmentation. CRFI has feedback
connections that encourage the data consistency at multiple levels of the
feature pyramid. On the encoder side of the SegNet, the CRFI combines the input
feature maps and convolution block output based on their spatial local
similarity, like a trainable bilateral filter. For all networks, 725 2D images
(i.e., 29 MRI cases) were used in training, while 174 2D images (i.e., 6
cases) were used in testing.
Results: The SegNet with CRFI achieved the relatively high Dice coefficients
(0.76, 0.84, and 0.89) for the peripheral zone, central zone, and whole gland,
respectively. Compared with UNet, the SegNet+CRFI segmentation has a generally
higher Dice score and showed the robustness in determining the boundaries of
anatomical structures compared with the SegNet or UNet segmentation. The SegNet
with a CRFI at the end showed that the CRFI can correct the segmentation errors from
SegNet output, generating smooth and consistent segmentation for the prostate.
Conclusion: UNet based deep neural networks demonstrated in this study can
perform zonal prostate segmentation, achieving high Dice coefficients compared
with those in the literature. The proposed CRFI method can reduce the fuzzy
boundaries that affected the segmentation performance of baseline UNet and
SegNet models.
| [
{
"created": "Sat, 15 Feb 2020 06:27:12 GMT",
"version": "v1"
}
] | 2020-02-18 | [
[
"Cao",
"Peng",
""
],
[
"Noworolski",
"Susan M.",
""
],
[
"Starobinets",
"Olga",
""
],
[
"Korn",
"Natalie",
""
],
[
"Kramer",
"Sage P.",
""
],
[
"Westphalen",
"Antonio C.",
""
],
[
"Leynes",
"Andrew P.",
""
],
[
"Pedoia",
"Valentina",
""
],
[
"Larson",
"Peder",
""
]
] | Purpose: A conventional 2D UNet convolutional neural network (CNN) architecture may result in ill-defined boundaries in segmentation output. Several studies imposed stronger constraints on each level of UNet to improve the performance of 2D UNet, such as SegNet. In this study, we investigated 2D SegNet and a proposed conditional random field insert (CRFI) for zonal prostate segmentation from clinical T2-weighted MRI data. Methods: We introduced a new methodology that combines SegNet and CRFI to improve the accuracy and robustness of the segmentation. CRFI has feedback connections that encourage the data consistency at multiple levels of the feature pyramid. On the encoder side of the SegNet, the CRFI combines the input feature maps and convolution block output based on their spatial local similarity, like a trainable bilateral filter. For all networks, 725 2D images (i.e., 29 MRI cases) were used in training, while 174 2D images (i.e., 6 cases) were used in testing. Results: The SegNet with CRFI achieved the relatively high Dice coefficients (0.76, 0.84, and 0.89) for the peripheral zone, central zone, and whole gland, respectively. Compared with UNet, the SegNet+CRFI segmentation has a generally higher Dice score and showed the robustness in determining the boundaries of anatomical structures compared with the SegNet or UNet segmentation. The SegNet with a CRFI at the end showed that the CRFI can correct the segmentation errors from SegNet output, generating smooth and consistent segmentation for the prostate. Conclusion: UNet based deep neural networks demonstrated in this study can perform zonal prostate segmentation, achieving high Dice coefficients compared with those in the literature. The proposed CRFI method can reduce the fuzzy boundaries that affected the segmentation performance of baseline UNet and SegNet models. |
2002.01467 | Lilianne Mujica-Parodi | LR Mujica-Parodi and HH Strey | Making Sense of Computational Psychiatry | 16 pages, 5 figures. Int J Neuropsychopharmacol. 2020 Mar 27 | null | 10.1093/ijnp/pyaa013 | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In psychiatry, we often speak of constructing "models." Here we try to make
sense of what such a claim might mean, starting with the most fundamental
question: "What is (and isn't) a model?". We then discuss, in a concrete
measurable sense, what it means for a model to be useful. In so doing, we first
identify the added value that a computational model can provide, in the context
of accuracy and power. We then present the limitations of standard statistical
methods and provide suggestions for how we can expand the explanatory power of
our analyses by reconceptualizing statistical models as dynamical systems.
Finally, we address the problem of model building, suggesting ways in which
computational psychiatry can escape the potential for cognitive biases imposed
by classical hypothesis-driven research, exploiting deep systems-level
information contained within neuroimaging data to advance our understanding of
psychiatric neuroscience.
| [
{
"created": "Tue, 4 Feb 2020 18:46:58 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Apr 2020 15:30:17 GMT",
"version": "v2"
}
] | 2020-04-15 | [
[
"Mujica-Parodi",
"LR",
""
],
[
"Strey",
"HH",
""
]
] | In psychiatry, we often speak of constructing "models." Here we try to make sense of what such a claim might mean, starting with the most fundamental question: "What is (and isn't) a model?". We then discuss, in a concrete measurable sense, what it means for a model to be useful. In so doing, we first identify the added value that a computational model can provide, in the context of accuracy and power. We then present the limitations of standard statistical methods and provide suggestions for how we can expand the explanatory power of our analyses by reconceptualizing statistical models as dynamical systems. Finally, we address the problem of model building, suggesting ways in which computational psychiatry can escape the potential for cognitive biases imposed by classical hypothesis-driven research, exploiting deep systems-level information contained within neuroimaging data to advance our understanding of psychiatric neuroscience. |
2304.00727 | Linn\'ea Gyllingberg | Linn\'ea Gyllingberg, Alex Szorkovszky, and David J.T. Sumpter | Using neuronal models to capture burst and glide motion and leadership
in fish | null | null | null | null | q-bio.NC physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While mathematical models, in particular self-propelled particle (SPP)
models, capture many of the observed properties of large fish schools, they do
not always capture the interactions of smaller shoals. Nor do these models tend
to account for the observation that, when swimming alone or in smaller groups,
many species of fish use intermittent locomotion, often referred to as burst
and coast or burst and glide. Recent empirical studies have suggested that
burst and glide movement is indeed pivotal to the social interactions of
individual fish. In this paper, we propose a model of social burst and glide
motion by combining a well-studied model of neuronal dynamics, the
FitzHugh-Nagumo model, with a model of fish motion. We begin by showing that
the model can capture the motion of a single fish swimming down a channel. By
then extending to a two fish model, where visual stimuli of the position of the
other fish affect the internal burst or glide state of the fish, we find that
our model captures a rich set of swimming dynamics found in many species of
fish. These include: leader-follower behaviour; periodic changes in leadership;
apparently random (i.e. chaotic) leadership change; and pendulum-like
tit-for-tat turn taking. Unlike SPP models, which assume that fish move at a
constant speed, the model produces realistic motion of individual fish.
Moreover, unlike previous studies where a random component is used for
leadership switching to occur, we show that leadership switching, both periodic
and chaotic, can be the result of a deterministic interaction. We give several
empirically testable predictions on how fish interact and discuss our results
in light of recently established correlations between fish locomotion and brain
activity.
| [
{
"created": "Mon, 3 Apr 2023 05:41:06 GMT",
"version": "v1"
}
] | 2023-04-04 | [
[
"Gyllingberg",
"Linnéa",
""
],
[
"Szorkovszky",
"Alex",
""
],
[
"Sumpter",
"David J. T.",
""
]
] | While mathematical models, in particular self-propelled particle (SPP) models, capture many of the observed properties of large fish schools, they do not always capture the interactions of smaller shoals. Nor do these models tend to account for the observation that, when swimming alone or in smaller groups, many species of fish use intermittent locomotion, often referred to as burst and coast or burst and glide. Recent empirical studies have suggested that burst and glide movement is indeed pivotal to the social interactions of individual fish. In this paper, we propose a model of social burst and glide motion by combining a well-studied model of neuronal dynamics, the FitzHugh-Nagumo model, with a model of fish motion. We begin by showing that the model can capture the motion of a single fish swimming down a channel. By then extending to a two fish model, where visual stimuli of the position of the other fish affect the internal burst or glide state of the fish, we find that our model captures a rich set of swimming dynamics found in many species of fish. These include: leader-follower behaviour; periodic changes in leadership; apparently random (i.e. chaotic) leadership change; and pendulum-like tit-for-tat turn taking. Unlike SPP models, which assume that fish move at a constant speed, the model produces realistic motion of individual fish. Moreover, unlike previous studies where a random component is used for leadership switching to occur, we show that leadership switching, both periodic and chaotic, can be the result of a deterministic interaction. We give several empirically testable predictions on how fish interact and discuss our results in light of recently established correlations between fish locomotion and brain activity. |
1309.3724 | Daniel Soudry | Daniel Soudry, Suraj Keshri, Patrick Stinson, Min-hwan Oh, Garud
Iyengar, Liam Paninski | A shotgun sampling solution for the common input problem in neural
connectivity inference | null | null | null | null | q-bio.NC q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring connectivity in neuronal networks remains a key challenge in
statistical neuroscience. The `common input' problem presents the major
roadblock: it is difficult to reliably distinguish causal connections between
pairs of observed neurons from correlations induced by common input from
unobserved neurons. Since available recording techniques allow us to sample
from only a small fraction of large networks simultaneously with sufficient
temporal resolution, naive connectivity estimators that neglect these common
input effects are highly biased. This work proposes a `shotgun' experimental
design, in which we observe multiple sub-networks briefly, in a serial manner.
Thus, while the full network cannot be observed simultaneously at any given
time, we may be able to observe most of it during the entire experiment. Using
a generalized linear model for a spiking recurrent neural network, we develop
scalable approximate Bayesian methods to perform network inference given this
type of data, in which only a small fraction of the network is observed in each
time bin. We demonstrate in simulation that, using this method: (1) The shotgun
experimental design can eliminate the biases induced by common input effects.
(2) Networks with thousands of neurons, in which only a small fraction of the
neurons is observed in each time bin, could be quickly and accurately
estimated. (3) Performance can be improved if we exploit prior information
about the probability of having a connection between two neurons, its
dependence on neuronal cell types (e.g., Dale's law), or its dependence on the
distance between neurons.
| [
{
"created": "Sun, 15 Sep 2013 05:29:53 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Dec 2014 01:12:14 GMT",
"version": "v2"
}
] | 2014-12-19 | [
[
"Soudry",
"Daniel",
""
],
[
"Keshri",
"Suraj",
""
],
[
"Stinson",
"Patrick",
""
],
[
"Oh",
"Min-hwan",
""
],
[
"Iyengar",
"Garud",
""
],
[
"Paninski",
"Liam",
""
]
] | Inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience. The `common input' problem presents the major roadblock: it is difficult to reliably distinguish causal connections between pairs of observed neurons from correlations induced by common input from unobserved neurons. Since available recording techniques allow us to sample from only a small fraction of large networks simultaneously with sufficient temporal resolution, naive connectivity estimators that neglect these common input effects are highly biased. This work proposes a `shotgun' experimental design, in which we observe multiple sub-networks briefly, in a serial manner. Thus, while the full network cannot be observed simultaneously at any given time, we may be able to observe most of it during the entire experiment. Using a generalized linear model for a spiking recurrent neural network, we develop scalable approximate Bayesian methods to perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin. We demonstrate in simulation that, using this method: (1) The shotgun experimental design can eliminate the biases induced by common input effects. (2) Networks with thousands of neurons, in which only a small fraction of the neurons is observed in each time bin, could be quickly and accurately estimated. (3) Performance can be improved if we exploit prior information about the probability of having a connection between two neurons, its dependence on neuronal cell types (e.g., Dale's law), or its dependence on the distance between neurons. |
1504.06091 | Chuan Xue | Chuan Xue, Blerta Shtylla and Anthony Brown | A Stochastic Multiscale Model that Explains the Segregation of Axonal
Microtubules and Neurofilaments in Neurological Diseases | null | null | 10.1371/journal.pcbi.1004406 | null | q-bio.CB physics.bio-ph q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The organization of the axonal cytoskeleton is a key determinant of the
normal function of an axon, which is a long thin projection away from a neuron.
Under normal conditions two axonal cytoskeletal polymers microtubules and
neurofilaments align longitudinally in axons and are interspersed in axonal
cross-sections. However, in many neurotoxic and neurodegenerative disorders,
microtubules and neurofilaments segregate apart from each other, with
microtubules and membranous organelles clustered centrally and neurofilaments
displaced to the periphery. This striking segregation precedes abnormal and
excessive neurofilament accumulation in these diseases, which in turn leads to
focal axonal swellings. While neurofilament accumulation suggests the
impairment of neurofilament transport along axons, the underlying mechanism of
their segregation from microtubules remains poorly understood for over 30
years. To address this question, we developed a stochastic multiscale model for
the cross-sectional distribution of microtubules and neurofilaments in axons.
The model describes microtubules, neurofilaments and organelles as interacting
particles in a 2D cross-section, and incorporates the stochastic interactions
of these particles through molecular motors. Simulations of the model demonstrate
that organelles can pull nearby microtubules together, and in the absence of
neurofilament transport, this mechanism gradually segregates microtubules from
neurofilaments on a time scale of hours, similar to that observed in toxic
neuropathies. This suggests that the microtubule-neurofilament segregation is
simply a consequence of the selective impairment of neurofilament transport.
The model generates the experimentally testable prediction that the rate and
extent of segregation will be dependent on the sizes of the moving organelles
as well as the density of their traffic.
| [
{
"created": "Thu, 23 Apr 2015 09:04:50 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Xue",
"Chuan",
""
],
[
"Shtylla",
"Blerta",
""
],
[
"Brown",
"Anthony",
""
]
] | The organization of the axonal cytoskeleton is a key determinant of the normal function of an axon, which is a long thin projection away from a neuron. Under normal conditions two axonal cytoskeletal polymers microtubules and neurofilaments align longitudinally in axons and are interspersed in axonal cross-sections. However, in many neurotoxic and neurodegenerative disorders, microtubules and neurofilaments segregate apart from each other, with microtubules and membranous organelles clustered centrally and neurofilaments displaced to the periphery. This striking segregation precedes abnormal and excessive neurofilament accumulation in these diseases, which in turn leads to focal axonal swellings. While neurofilament accumulation suggests the impairment of neurofilament transport along axons, the underlying mechanism of their segregation from microtubules remains poorly understood for over 30 years. To address this question, we developed a stochastic multiscale model for the cross-sectional distribution of microtubules and neurofilaments in axons. The model describes microtubules, neurofilaments and organelles as interacting particles in a 2D cross-section, and incorporates the stochastic interactions of these particles through molecular motors. Simulations of the model demonstrate that organelles can pull nearby microtubules together, and in the absence of neurofilament transport, this mechanism gradually segregates microtubules from neurofilaments on a time scale of hours, similar to that observed in toxic neuropathies. This suggests that the microtubule-neurofilament segregation is simply a consequence of the selective impairment of neurofilament transport. The model generates the experimentally testable prediction that the rate and extent of segregation will be dependent on the sizes of the moving organelles as well as the density of their traffic.
2206.02789 | Shuo Zhang | Shuo Zhang, Yang Liu, Lei Xie | Efficient and Accurate Physics-aware Multiplex Graph Neural Networks for
3D Small Molecules and Macromolecule Complexes | An enhanced version of this preprint has been published in Scientific
Reports (DOI: 10.1038/s41598-023-46382-8) | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in applying Graph Neural Networks (GNNs) to molecular science
have showcased the power of learning three-dimensional (3D) structure
representations with GNNs. However, most existing GNNs suffer from the
limitations of insufficient modeling of diverse interactions, computational
expensive operations, and ignorance of vectorial values. Here, we tackle these
limitations by proposing a novel GNN model, Physics-aware Multiplex Graph
Neural Network (PaxNet), to efficiently and accurately learn the
representations of 3D molecules for both small organic compounds and
macromolecule complexes. PaxNet separates the modeling of local and non-local
interactions inspired by molecular mechanics, and reduces the expensive
angle-related computations. Besides scalar properties, PaxNet can also predict
vectorial properties by learning an associated vector for each atom. To
evaluate the performance of PaxNet, we compare it with state-of-the-art
baselines in two tasks. On small molecule dataset for predicting quantum
chemical properties, PaxNet reduces the prediction error by 15% and uses 73%
less memory than the best baseline. On macromolecule dataset for predicting
protein-ligand binding affinities, PaxNet outperforms the best baseline while
reducing the memory consumption by 33% and the inference time by 85%. Thus,
PaxNet provides a universal, robust and accurate method for large-scale machine
learning of molecules. Our code is available at
https://github.com/zetayue/Physics-aware-Multiplex-GNN.
| [
{
"created": "Mon, 6 Jun 2022 00:28:37 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2023 01:23:54 GMT",
"version": "v2"
},
{
"created": "Sun, 19 Nov 2023 04:38:51 GMT",
"version": "v3"
}
] | 2023-11-21 | [
[
"Zhang",
"Shuo",
""
],
[
"Liu",
"Yang",
""
],
[
"Xie",
"Lei",
""
]
] | Recent advances in applying Graph Neural Networks (GNNs) to molecular science have showcased the power of learning three-dimensional (3D) structure representations with GNNs. However, most existing GNNs suffer from the limitations of insufficient modeling of diverse interactions, computational expensive operations, and ignorance of vectorial values. Here, we tackle these limitations by proposing a novel GNN model, Physics-aware Multiplex Graph Neural Network (PaxNet), to efficiently and accurately learn the representations of 3D molecules for both small organic compounds and macromolecule complexes. PaxNet separates the modeling of local and non-local interactions inspired by molecular mechanics, and reduces the expensive angle-related computations. Besides scalar properties, PaxNet can also predict vectorial properties by learning an associated vector for each atom. To evaluate the performance of PaxNet, we compare it with state-of-the-art baselines in two tasks. On small molecule dataset for predicting quantum chemical properties, PaxNet reduces the prediction error by 15% and uses 73% less memory than the best baseline. On macromolecule dataset for predicting protein-ligand binding affinities, PaxNet outperforms the best baseline while reducing the memory consumption by 33% and the inference time by 85%. Thus, PaxNet provides a universal, robust and accurate method for large-scale machine learning of molecules. Our code is available at https://github.com/zetayue/Physics-aware-Multiplex-GNN. |
2312.10085 | Ray Han | Ray Han | Phylogeny of Twenty-One Mammals | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Phylogeny can be inferred using two sources of data from an organism:
morphological data and molecular data. Historically, phylogenies were usually
inferred using morphological characters, but some morphological features may
not necessarily indicate shared heritage. With the introduction of molecular
phylogenies, the base sequence of genes, or amino acid sequence of proteins can
be compared to find the number of similarities or differences to ascertain
levels of relatedness between species. These two types of phylogenies are to be
taken as a data-driven hypothesis about the evolutionary history of the studied
organisms, and a consensus is drawn from the comparison between the different
phylogenies built from the two sources of data, utilizing different methods.
| [
{
"created": "Tue, 12 Dec 2023 03:34:00 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Han",
"Ray",
""
]
] | Phylogeny can be inferred using two sources of data from an organism: morphological data and molecular data. Historically, phylogenies were usually inferred using morphological characters, but some morphological features may not necessarily indicate shared heritage. With the introduction of molecular phylogenies, the base sequence of genes, or amino acid sequence of proteins can be compared to find the number of similarities or differences to ascertain levels of relatedness between species. These two types of phylogenies are to be taken as a data-driven hypothesis about the evolutionary history of the studied organisms, and a consensus is drawn from the comparison between the different phylogenies built from the two sources of data, utilizing different methods. |
1407.4380 | Fabrizio De Vico Fallani | Fabrizio De Vico Fallani, Martina Corazzol, Jenna R. Sternberg, Claire
Wyart, Mario Chavez | Hierarchy of neural organization in the embryonic spinal cord:
Granger-causality graph analysis of in vivo calcium imaging data | null | null | 10.1109/TNSRE.2014.2341632 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent development of genetically encoded calcium indicators enables
monitoring in vivo the activity of neuronal populations. Most analysis of these
calcium transients relies on linear regression analysis based on the sensory
stimulus applied or the behavior observed. To estimate the basic properties of
the functional neural circuitry, we propose a network-based approach based on
calcium imaging recorded at single cell resolution. Differently from previous
analysis based on cross-correlation, we used Granger-causality estimates to
infer activity propagation between the activities of different neurons. The
resulting functional networks were then modeled as directed graphs and
characterized in terms of connectivity and node centralities. We applied our
approach to calcium transients recorded at low frequency (4 Hz) in ventral
neurons of the zebrafish spinal cord at the embryonic stage when spontaneous
coiling of the tail occurs. Our analysis on population calcium imaging data
revealed a strong ipsilateral connectivity and a characteristic hierarchical
organization of the network hubs that supported established propagation of
activity from rostral to caudal spinal cord. Our method could be used for
detecting functional defects in neuronal circuitry during development and
pathological conditions.
| [
{
"created": "Wed, 16 Jul 2014 16:43:06 GMT",
"version": "v1"
}
] | 2014-09-10 | [
[
"Fallani",
"Fabrizio De Vico",
""
],
[
"Corazzol",
"Martina",
""
],
[
"Sternberg",
"Jenna R.",
""
],
[
"Wyart",
"Claire",
""
],
[
"Chavez",
"Mario",
""
]
] | The recent development of genetically encoded calcium indicators enables monitoring in vivo the activity of neuronal populations. Most analysis of these calcium transients relies on linear regression analysis based on the sensory stimulus applied or the behavior observed. To estimate the basic properties of the functional neural circuitry, we propose a network-based approach based on calcium imaging recorded at single cell resolution. Differently from previous analysis based on cross-correlation, we used Granger-causality estimates to infer activity propagation between the activities of different neurons. The resulting functional networks were then modeled as directed graphs and characterized in terms of connectivity and node centralities. We applied our approach to calcium transients recorded at low frequency (4 Hz) in ventral neurons of the zebrafish spinal cord at the embryonic stage when spontaneous coiling of the tail occurs. Our analysis on population calcium imaging data revealed a strong ipsilateral connectivity and a characteristic hierarchical organization of the network hubs that supported established propagation of activity from rostral to caudal spinal cord. Our method could be used for detecting functional defects in neuronal circuitry during development and pathological conditions. |