id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1308.5430 | Toni Gossmann | Toni I. Gossmann, David Waxman and Adam Eyre-Walker | Fluctuating selection models and McDonald-Kreitman type analyses | null | null | 10.1371/journal.pone.0084540 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is likely that the strength of selection acting upon a mutation varies
through time due to changes in the environment. However, most population
genetic theory assumes that the strength of selection remains constant. Here we
investigate the consequences of fluctuating selection pressures on the
quantification of adaptive evolution using McDonald-Kreitman (MK) style
approaches. In agreement with previous work, we show that fluctuating selection
can generate evidence of adaptive evolution even when the expected strength of
selection on a mutation is zero. However, we also find that mutations which
contribute to both polymorphism and divergence tend, on average, to be
positively selected during their lifetime under fluctuating selection models.
This is because mutations that fluctuate, by chance, to positively selected
values tend to reach higher frequencies in the population than those that
fluctuate towards negative values. Hence the evidence of positive adaptive
evolution detected under a fluctuating selection model by MK type approaches is
genuine since fixed mutations tend to be advantageous on average during their
lifetime. Nevertheless, we show that methods tend to underestimate the rate of
adaptive evolution when selection fluctuates.
| [
{
"created": "Sun, 25 Aug 2013 18:00:15 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Gossmann",
"Toni I.",
""
],
[
"Waxman",
"David",
""
],
[
"Eyre-Walker",
"Adam",
""
]
] | It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected under a fluctuating selection model by MK type approaches is genuine since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that methods tend to underestimate the rate of adaptive evolution when selection fluctuates. |
1803.05840 | Hans-Christian Ruiz Dipl-Phys | H.C. Ruiz-Euler, H.J. Kappen | Effective Connectivity from Single Trial fMRI Data by Sampling
Biologically Plausible Models | null | null | null | null | q-bio.NC physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The estimation of causal network architectures in the brain is fundamental
for understanding cognitive information processes. However, access to the
dynamic processes underlying cognition is limited to indirect measurements of
the hidden neuronal activity, for instance through fMRI data. Thus, estimating
the network structure of the underlying process is challenging. In this
article, we embed an adaptive importance sampler called Adaptive Path Integral
Smoother (APIS) into the Expectation-Maximization algorithm to obtain point
estimates of causal connectivity. We demonstrate on synthetic data that this
procedure finds not only the correct network structure but also the direction
of effective connections from random initializations of the connectivity
matrix. In addition--motivated by contradictory claims in the literature--we
examine the effect of the neuronal timescale on the sensitivity of the BOLD
signal to changes in the connectivity and on the maximum likelihood solutions
of the connectivity. We conclude with two warnings: First, the connectivity
estimates under the assumption of slow dynamics can be extremely biased if the
data was generated by fast neuronal processes. Second, the faster the time
scale, the less sensitive the BOLD signal is to changes in the incoming
connections to a node. Hence, connectivity estimation using realistic neural
dynamics timescales requires extremely high-quality data and seems infeasible in
many practical data sets.
| [
{
"created": "Thu, 15 Mar 2018 16:25:47 GMT",
"version": "v1"
}
] | 2020-08-17 | [
[
"Ruiz-Euler",
"H. C.",
""
],
[
"Kappen",
"H. J.",
""
]
] | The estimation of causal network architectures in the brain is fundamental for understanding cognitive information processes. However, access to the dynamic processes underlying cognition is limited to indirect measurements of the hidden neuronal activity, for instance through fMRI data. Thus, estimating the network structure of the underlying process is challenging. In this article, we embed an adaptive importance sampler called Adaptive Path Integral Smoother (APIS) into the Expectation-Maximization algorithm to obtain point estimates of causal connectivity. We demonstrate on synthetic data that this procedure finds not only the correct network structure but also the direction of effective connections from random initializations of the connectivity matrix. In addition--motivated by contradictory claims in the literature--we examine the effect of the neuronal timescale on the sensitivity of the BOLD signal to changes in the connectivity and on the maximum likelihood solutions of the connectivity. We conclude with two warnings: First, the connectivity estimates under the assumption of slow dynamics can be extremely biased if the data was generated by fast neuronal processes. Second, the faster the time scale, the less sensitive the BOLD signal is to changes in the incoming connections to a node. Hence, connectivity estimation using realistic neural dynamics timescales requires extremely high-quality data and seems infeasible in many practical data sets. |
2406.16977 | Gerardo F. Goya | Eva Martin Solana, Laura Casado Zueras, Teobaldo E. Torres, Gerardo F.
Goya, Maria Rosario Fernandez Fernandez and Jose Jesus Fernandez | Disruption of the mitochondrial network in a mouse model of Huntington's
disease visualized by in tissue multiscale 3D electron microscopy | 31 pages 5 figures | Acta neuropathol commun 12, 88 (2024) | 10.1186/s40478-024-01802-2 | null | q-bio.SC cond-mat.soft | http://creativecommons.org/licenses/by/4.0/ | Huntington's disease (HD) is an inherited neurodegenerative disorder caused
by an expanded CAG repeat in the coding sequence of the huntingtin protein.
Initially, it predominantly affects medium-sized spiny neurons (MSSNs) of the
corpus striatum. No effective treatment is available, thus urging the
identification of potential therapeutic targets. While evidence of
mitochondrial structural alterations in HD exists, previous studies mainly
employed 2D approaches and were performed outside the strictly native brain
context. In this study, we adopted a novel multiscale approach to conduct a
comprehensive 3D in situ structural analysis of mitochondrial disturbances in a
mouse model of HD.
We investigated MSSNs within brain tissue under optimal structural conditions
utilizing state-of-the-art 3D imaging technologies, specifically FIB/SEM for
the complete imaging of neuronal somas and Electron Tomography for detailed
morphological examination and image processing-based quantitative analysis. Our
findings suggest a disruption of the mitochondrial network towards
fragmentation in HD. The network of interlaced, slim, and long mitochondria
observed in healthy conditions transforms into isolated, swollen, and short
entities, with internal cristae disorganization, cavities, and abnormally large
matrix granules.
| [
{
"created": "Sun, 23 Jun 2024 07:51:57 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Solana",
"Eva Martin",
""
],
[
"Zueras",
"Laura Casado",
""
],
[
"Torres",
"Teobaldo E.",
""
],
[
"Goya",
"Gerardo F.",
""
],
[
"Fernandez",
"Maria Rosario Fernandez",
""
],
[
"Fernandez",
"Jose Jesus",
""
]
] | Huntington's disease (HD) is an inherited neurodegenerative disorder caused by an expanded CAG repeat in the coding sequence of the huntingtin protein. Initially, it predominantly affects medium-sized spiny neurons (MSSNs) of the corpus striatum. No effective treatment is available, thus urging the identification of potential therapeutic targets. While evidence of mitochondrial structural alterations in HD exists, previous studies mainly employed 2D approaches and were performed outside the strictly native brain context. In this study, we adopted a novel multiscale approach to conduct a comprehensive 3D in situ structural analysis of mitochondrial disturbances in a mouse model of HD. We investigated MSSNs within brain tissue under optimal structural conditions utilizing state-of-the-art 3D imaging technologies, specifically FIB/SEM for the complete imaging of neuronal somas and Electron Tomography for detailed morphological examination and image processing-based quantitative analysis. Our findings suggest a disruption of the mitochondrial network towards fragmentation in HD. The network of interlaced, slim, and long mitochondria observed in healthy conditions transforms into isolated, swollen, and short entities, with internal cristae disorganization, cavities, and abnormally large matrix granules. |
2306.09239 | Soumyabrata Dey | Soumyabrata Dey, Ravishankar Rao, Mubarak Shah | Exploiting the Brain's Network Structure for Automatic Identification of
ADHD Subjects | null | null | null | null | q-bio.NC cs.LG eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Attention Deficit Hyperactivity Disorder (ADHD) is a common behavioral problem
affecting children. In this work, we investigate the automatic classification
of ADHD subjects using the resting state Functional Magnetic Resonance Imaging
(fMRI) sequences of the brain. We show that the brain can be modeled as a
functional network, and certain properties of the networks differ in ADHD
subjects from control subjects. We compute the pairwise correlation of brain
voxels' activity over the time frame of the experimental protocol, which helps
to model the function of the brain as a network. Different network features are
computed for each of the voxels constructing the network. The concatenation of
the network features of all the voxels in a brain serves as the feature vector.
Feature vectors from a set of subjects are then used to train a PCA-LDA
(principal component analysis-linear discriminant analysis) based classifier.
We hypothesized that ADHD-related differences lie in some specific regions of
the brain and using features only from those regions is sufficient to
discriminate ADHD and control subjects. We propose a method to create a brain
mask that includes the useful regions only and demonstrate that using the
features from the masked regions improves classification accuracy on the test
data set. We train our classifier with 776 subjects and test on 171 subjects
provided by The Neuro Bureau for the ADHD-200 challenge. We demonstrate the
utility of graph-motif features, specifically the maps that represent the
frequency of participation of voxels in network cycles of length 3. The best
classification performance (69.59%) is achieved using 3-cycle map features with
masking. Our proposed approach holds promise in being able to diagnose and
understand the disorder.
| [
{
"created": "Thu, 15 Jun 2023 16:22:57 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Dey",
"Soumyabrata",
""
],
[
"Rao",
"Ravishankar",
""
],
[
"Shah",
"Mubarak",
""
]
] | Attention Deficit Hyperactivity Disorder (ADHD) is a common behavioral problem affecting children. In this work, we investigate the automatic classification of ADHD subjects using the resting state Functional Magnetic Resonance Imaging (fMRI) sequences of the brain. We show that the brain can be modeled as a functional network, and certain properties of the networks differ in ADHD subjects from control subjects. We compute the pairwise correlation of brain voxels' activity over the time frame of the experimental protocol, which helps to model the function of the brain as a network. Different network features are computed for each of the voxels constructing the network. The concatenation of the network features of all the voxels in a brain serves as the feature vector. Feature vectors from a set of subjects are then used to train a PCA-LDA (principal component analysis-linear discriminant analysis) based classifier. We hypothesized that ADHD-related differences lie in some specific regions of the brain and using features only from those regions is sufficient to discriminate ADHD and control subjects. We propose a method to create a brain mask that includes the useful regions only and demonstrate that using the features from the masked regions improves classification accuracy on the test data set. We train our classifier with 776 subjects and test on 171 subjects provided by The Neuro Bureau for the ADHD-200 challenge. We demonstrate the utility of graph-motif features, specifically the maps that represent the frequency of participation of voxels in network cycles of length 3. The best classification performance (69.59%) is achieved using 3-cycle map features with masking. Our proposed approach holds promise in being able to diagnose and understand the disorder. |
q-bio/0609047 | Liu Quanxing | Quan-Xing Liu and Zhen Jin | Formation of regular spatial patterns in ratio-dependent predator-prey
model driven by spatial colored-noise | 4 pages and 3 figures | null | 10.1088/1742-5468/2007/05/P05002 | null | q-bio.PE | null | Results are reported concerning the formation of spatial patterns in the
two-species ratio-dependent predator-prey model driven by spatial
colored-noise. The results show that there is a critical value with respect to
the intensity of spatial noise for this system when the parameters are in the
Turing space, above which regular spatial patterns appear in two
dimensions but below which no regular spatial patterns are produced. In
particular, we investigate in two-dimensional space the formation of regular
spatial patterns with the spatial noise added at the side and at the center of the
simulation domain, respectively.
| [
{
"created": "Wed, 27 Sep 2006 00:38:24 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Liu",
"Quan-Xing",
""
],
[
"Jin",
"Zhen",
""
]
] | Results are reported concerning the formation of spatial patterns in the two-species ratio-dependent predator-prey model driven by spatial colored-noise. The results show that there is a critical value with respect to the intensity of spatial noise for this system when the parameters are in the Turing space, above which regular spatial patterns appear in two dimensions but below which no regular spatial patterns are produced. In particular, we investigate in two-dimensional space the formation of regular spatial patterns with the spatial noise added at the side and at the center of the simulation domain, respectively. |
2003.11013 | Pietro Astolfi | Pietro Astolfi, Ruben Verhagen, Laurent Petit, Emanuele Olivetti,
Jonathan Masci, Davide Boscaini, Paolo Avesani | Tractogram filtering of anatomically non-plausible fibers with geometric
deep learning | Accepted at MICCAI2020 | null | null | null | q-bio.NC cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Tractograms are virtual representations of the white matter fibers of the
brain. They are of primary interest for tasks like presurgical planning, and
investigation of neuroplasticity or brain disorders. Each tractogram is
composed of millions of fibers encoded as 3D polylines. Unfortunately, a large
portion of those fibers are not anatomically plausible and can be considered
artifacts of the tracking algorithms. Common methods for tractogram filtering
are based on signal reconstruction, a principled approach, but one unable to
incorporate knowledge of brain anatomy. In this work, we address the problem
of tractogram filtering as a supervised learning problem by exploiting the
ground truth annotations obtained with a recent heuristic method, which labels
fibers as either anatomically plausible or non-plausible according to
well-established anatomical properties. The intuitive idea is to model a fiber
as a point cloud and the goal is to investigate whether and how a geometric
deep learning model might capture its anatomical properties. Our contribution
is an extension of the Dynamic Edge Convolution model that exploits the
sequential relations of points in a fiber and discriminates plausible from
non-plausible fibers with high accuracy.
| [
{
"created": "Tue, 24 Mar 2020 17:56:44 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jul 2020 13:57:11 GMT",
"version": "v2"
}
] | 2020-07-10 | [
[
"Astolfi",
"Pietro",
""
],
[
"Verhagen",
"Ruben",
""
],
[
"Petit",
"Laurent",
""
],
[
"Olivetti",
"Emanuele",
""
],
[
"Masci",
"Jonathan",
""
],
[
"Boscaini",
"Davide",
""
],
[
"Avesani",
"Paolo",
""
]
] | Tractograms are virtual representations of the white matter fibers of the brain. They are of primary interest for tasks like presurgical planning, and investigation of neuroplasticity or brain disorders. Each tractogram is composed of millions of fibers encoded as 3D polylines. Unfortunately, a large portion of those fibers are not anatomically plausible and can be considered artifacts of the tracking algorithms. Common methods for tractogram filtering are based on signal reconstruction, a principled approach, but one unable to incorporate knowledge of brain anatomy. In this work, we address the problem of tractogram filtering as a supervised learning problem by exploiting the ground truth annotations obtained with a recent heuristic method, which labels fibers as either anatomically plausible or non-plausible according to well-established anatomical properties. The intuitive idea is to model a fiber as a point cloud and the goal is to investigate whether and how a geometric deep learning model might capture its anatomical properties. Our contribution is an extension of the Dynamic Edge Convolution model that exploits the sequential relations of points in a fiber and discriminates plausible from non-plausible fibers with high accuracy. |
1211.1281 | Magnus Ekeberg | Magnus Ekeberg, Cecilia L\"ovkvist, Yueheng Lan, Martin Weigt, Erik
Aurell | Improved contact prediction in proteins: Using pseudolikelihoods to
infer Potts models | 19 pages, 16 figures, published version | M. Ekeberg, C. L\"ovkvist, Y. Lan, M. Weigt, E. Aurell, Improved
contact prediction in proteins: Using pseudolikelihoods to infer Potts
models, Phys. Rev. E 87, 012707 (2013) | 10.1103/PhysRevE.87.012707 | null | q-bio.QM cond-mat.dis-nn cond-mat.stat-mech physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatially proximate amino acids in a protein tend to coevolve. A protein's
three-dimensional (3D) structure hence leaves an echo of correlations in the
evolutionary record. Reverse engineering 3D structures from such correlations
is an open problem in structural biology, pursued with increasing vigor as more
and more protein sequences continue to fill the data banks. Within this task
lies a statistical inference problem, rooted in the following: correlation
between two sites in a protein sequence can arise from firsthand interaction
but can also be network-propagated via intermediate sites; observed correlation
is not enough to guarantee proximity. To separate direct from indirect
interactions is an instance of the general problem of inverse statistical
mechanics, where the task is to learn model parameters (fields, couplings) from
observables (magnetizations, correlations, samples) in large systems. In the
context of protein sequences, the approach has been referred to as
direct-coupling analysis. Here we show that the pseudolikelihood method,
applied to 21-state Potts models describing the statistical properties of
families of evolutionarily related proteins, significantly outperforms existing
approaches to the direct-coupling analysis, the latter being based on standard
mean-field techniques. This improved performance also relies on a modified
score for the coupling strength. The results are verified using known crystal
structures of specific sequence instances of various protein families. Code
implementing the new method can be found at http://plmdca.csc.kth.se/.
| [
{
"created": "Tue, 6 Nov 2012 16:02:27 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Jan 2013 11:17:29 GMT",
"version": "v2"
}
] | 2013-01-15 | [
[
"Ekeberg",
"Magnus",
""
],
[
"Lövkvist",
"Cecilia",
""
],
[
"Lan",
"Yueheng",
""
],
[
"Weigt",
"Martin",
""
],
[
"Aurell",
"Erik",
""
]
] | Spatially proximate amino acids in a protein tend to coevolve. A protein's three-dimensional (3D) structure hence leaves an echo of correlations in the evolutionary record. Reverse engineering 3D structures from such correlations is an open problem in structural biology, pursued with increasing vigor as more and more protein sequences continue to fill the data banks. Within this task lies a statistical inference problem, rooted in the following: correlation between two sites in a protein sequence can arise from firsthand interaction but can also be network-propagated via intermediate sites; observed correlation is not enough to guarantee proximity. To separate direct from indirect interactions is an instance of the general problem of inverse statistical mechanics, where the task is to learn model parameters (fields, couplings) from observables (magnetizations, correlations, samples) in large systems. In the context of protein sequences, the approach has been referred to as direct-coupling analysis. Here we show that the pseudolikelihood method, applied to 21-state Potts models describing the statistical properties of families of evolutionarily related proteins, significantly outperforms existing approaches to the direct-coupling analysis, the latter being based on standard mean-field techniques. This improved performance also relies on a modified score for the coupling strength. The results are verified using known crystal structures of specific sequence instances of various protein families. Code implementing the new method can be found at http://plmdca.csc.kth.se/. |
1412.5040 | Vincent Niviere | Gergely Katona (IBS - UMR 5075), Philippe Carpentier (IBS - UMR 5075),
Vincent Nivi\`ere (LCBM - UMR 5249), Patricia Amara (IBS - UMR 5075), Virgile
Adam (ESRF), J\'er\'emy Ohana (IBS - UMR 5075), Nikolay Tsanov (IBS - UMR
5075), Dominique Bourgeois (ESRF, IBS - UMR 5075) | Raman-assisted crystallography reveals end-on peroxide intermediates in
a nonheme iron enzyme | null | Science, American Association for the Advancement of Science,
2007, pp.449-53 | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Iron-peroxide intermediates are central in the reaction cycle of many
iron-containing biomolecules. We trapped iron(III)-(hydro)peroxo species in
crystals of superoxide reductase (SOR), a nonheme mononuclear iron enzyme that
scavenges superoxide radicals. X-ray diffraction data at 1.95 angstrom
resolution and Raman spectra recorded in crystallo revealed iron-(hydro)peroxo
intermediates with the (hydro)peroxo group bound end-on. The dynamic SOR active
site promotes the formation of transient hydrogen bond networks, which
presumably assist the cleavage of the iron-oxygen bond in order to release the
reaction product, hydrogen peroxide.
| [
{
"created": "Tue, 16 Dec 2014 15:21:50 GMT",
"version": "v1"
}
] | 2014-12-17 | [
[
"Katona",
"Gergely",
"",
"IBS - UMR 5075"
],
[
"Carpentier",
"Philippe",
"",
"IBS - UMR 5075"
],
[
"Nivière",
"Vincent",
"",
"LCBM - UMR 5249"
],
[
"Amara",
"Patricia",
"",
"IBS - UMR 5075"
],
[
"Adam",
"Virgile",
"",
... | Iron-peroxide intermediates are central in the reaction cycle of many iron-containing biomolecules. We trapped iron(III)-(hydro)peroxo species in crystals of superoxide reductase (SOR), a nonheme mononuclear iron enzyme that scavenges superoxide radicals. X-ray diffraction data at 1.95 angstrom resolution and Raman spectra recorded in crystallo revealed iron-(hydro)peroxo intermediates with the (hydro)peroxo group bound end-on. The dynamic SOR active site promotes the formation of transient hydrogen bond networks, which presumably assist the cleavage of the iron-oxygen bond in order to release the reaction product, hydrogen peroxide. |
q-bio/0512023 | Philippe Marcq | M. Leonetti, P. Marcq, J. Nuebler, F. Homble | Co-transport-induced instability of membrane voltage in tip-growing
cells | null | Phys. Rev. Lett. 95 (2005) 208105 | 10.1103/PhysRevLett.95.208105 | null | q-bio.QM | null | A salient feature of stationary patterns in tip-growing cells is the key role
played by the symports and antiports, membrane proteins that translocate two
ionic species at the same time. It is shown that these co-transporters
destabilize generically the membrane voltage if the two translocated ions
diffuse differently and carry a charge of opposite (same) sign for symports
(antiports). Orders of magnitude obtained for the time and lengthscale are in
agreement with experiments. A weakly nonlinear analysis characterizes the
bifurcation.
| [
{
"created": "Fri, 9 Dec 2005 21:14:28 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Leonetti",
"M.",
""
],
[
"Marcq",
"P.",
""
],
[
"Nuebler",
"J.",
""
],
[
"Homble",
"F.",
""
]
] | A salient feature of stationary patterns in tip-growing cells is the key role played by the symports and antiports, membrane proteins that translocate two ionic species at the same time. It is shown that these co-transporters destabilize generically the membrane voltage if the two translocated ions diffuse differently and carry a charge of opposite (same) sign for symports (antiports). Orders of magnitude obtained for the time and lengthscale are in agreement with experiments. A weakly nonlinear analysis characterizes the bifurcation. |
1212.0790 | Pleuni Pennings | Tobias Pamminger, Susanne Foitzik, Dirk Metzler, Pleuni S. Pennings | Oh sister, where art thou? Spatial population structure and the
evolution of an altruistic defence trait | This manuscript was reviewed at Peerage of Science. This version is
the version that is accepted for publication in Journal of Evolutionary
Biology | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of parasite virulence and host defences is affected by
population structure. This effect has been confirmed in studies focusing on
large spatial scales, whereas the importance of local structure is not well
understood. Slavemaking ants are social parasites that exploit workers of
another species to rear their offspring. Enslaved workers of the host species
Temnothorax longispinosus have been found to exhibit an effective
post-enslavement defence behaviour: enslaved workers were observed killing a
large proportion of the parasites' offspring. Since enslaved workers do not
reproduce, they gain no direct fitness benefit from this 'rebellion' behaviour.
However, there may be an indirect benefit: neighbouring host nests that are
related to 'rebel' nests can benefit from a reduced raiding pressure, as a
result of the reduction in parasite nest size due to the enslaved workers'
killing behaviour. We use a simple mathematical model to examine whether the
small-scale population structure of the host species could explain the
evolution of this potentially altruistic defence trait against slavemaking
ants. We find that this is the case if enslaved host workers are related to
nearby host nests. In a population genetic study we confirm that enslaved
workers are, indeed, more closely related to host nests within the raiding
range of their resident slavemaker nest, than to host nests outside the raiding
range. This small-scale population structure seems to be a result of polydomy
(e.g. the occupation of several nests in close proximity by a single colony)
and could have enabled the evolution of 'rebellion' by kin selection.
| [
{
"created": "Tue, 4 Dec 2012 17:08:06 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Sep 2014 23:40:47 GMT",
"version": "v2"
}
] | 2014-09-17 | [
[
"Pamminger",
"Tobias",
""
],
[
"Foitzik",
"Susanne",
""
],
[
"Metzler",
"Dirk",
""
],
[
"Pennings",
"Pleuni S.",
""
]
] | The evolution of parasite virulence and host defences is affected by population structure. This effect has been confirmed in studies focusing on large spatial scales, whereas the importance of local structure is not well understood. Slavemaking ants are social parasites that exploit workers of another species to rear their offspring. Enslaved workers of the host species Temnothorax longispinosus have been found to exhibit an effective post-enslavement defence behaviour: enslaved workers were observed killing a large proportion of the parasites' offspring. Since enslaved workers do not reproduce, they gain no direct fitness benefit from this 'rebellion' behaviour. However, there may be an indirect benefit: neighbouring host nests that are related to 'rebel' nests can benefit from a reduced raiding pressure, as a result of the reduction in parasite nest size due to the enslaved workers' killing behaviour. We use a simple mathematical model to examine whether the small-scale population structure of the host species could explain the evolution of this potentially altruistic defence trait against slavemaking ants. We find that this is the case if enslaved host workers are related to nearby host nests. In a population genetic study we confirm that enslaved workers are, indeed, more closely related to host nests within the raiding range of their resident slavemaker nest, than to host nests outside the raiding range. This small-scale population structure seems to be a result of polydomy (e.g. the occupation of several nests in close proximity by a single colony) and could have enabled the evolution of 'rebellion' by kin selection. |
2305.03893 | Helen Shang | Helen Shang, Yi Ding, Vidhya Venkateswaran, Kristin Boulier, Nikhita
Kathuria-Prakash, Parisa Boodaghi Malidarreh, Jacob M. Luber, Bogdan Pasaniuc | Generalizability of PRS313 for breast cancer risk amongst non-Europeans
in a Los Angeles biobank | 27 pages, 2 figures | null | null | null | q-bio.GN stat.AP | http://creativecommons.org/licenses/by/4.0/ | Polygenic risk scores (PRS) summarize the combined effect of common risk
variants and are associated with breast cancer risk in patients without
identifiable monogenic risk factors. One of the most well-validated PRSs in
breast cancer to date is PRS313, which was developed from a Northern European
biobank but has shown attenuated performance in non-European ancestries. We
further investigate the generalizability of the PRS313 for American women of
European (EA), African (AFR), Asian (EAA), and Latinx (HL) ancestry within one
institution with a singular EHR system, genotyping platform, and quality
control process. We found that the PRS313 achieved overlapping Areas under the
ROC Curve (AUCs) in females of Latinx (AUC, 0.68; 95% CI, 0.65-0.71) and
European ancestry (AUC, 0.70; 95% CI, 0.69-0.71) but lower AUCs for the AFR and
EAA populations (AFR: AUC, 0.61; 95% CI, 0.56-0.65; EAA: AUC, 0.64; 95% CI,
0.60-0.68). While PRS313 is associated with Hormone Receptor-positive (HR+)
disease in European Americans (OR, 1.42; 95% CI, 1.16-1.64), for Latinx
females, it may instead be associated with Human Epidermal Growth Factor
Receptor 2 (HER2+) disease (OR, 2.52; 95% CI, 1.35-4.70), although due to
small numbers, additional
studies are needed. In summary, we found that PRS313 was significantly
associated with breast cancer but with attenuated accuracy in women of African
and Asian descent within a singular health system in Los Angeles. Our work
further highlights the need for additional validation in diverse cohorts prior
to clinical implementation of polygenic risk scores.
| [
{
"created": "Sat, 6 May 2023 01:47:38 GMT",
"version": "v1"
}
] | 2023-05-09 | [
[
"Shang",
"Helen",
""
],
[
"Ding",
"Yi",
""
],
[
"Venkateswaran",
"Vidhya",
""
],
[
"Boulier",
"Kristin",
""
],
[
"Kathuria-Prakash",
"Nikhita",
""
],
[
"Malidarreh",
"Parisa Boodaghi",
""
],
[
"Luber",
"Jacob M... | Polygenic risk scores (PRS) summarize the combined effect of common risk variants and are associated with breast cancer risk in patients without identifiable monogenic risk factors. One of the most well-validated PRSs in breast cancer to date is PRS313, which was developed from a Northern European biobank but has shown attenuated performance in non-European ancestries. We further investigate the generalizability of the PRS313 for American women of European (EA), African (AFR), Asian (EAA), and Latinx (HL) ancestry within one institution with a singular EHR system, genotyping platform, and quality control process. We found that the PRS313 achieved overlapping Areas under the ROC Curve (AUCs) in females of Latinx (AUC, 0.68; 95% CI, 0.65-0.71) and European ancestry (AUC, 0.70; 95% CI, 0.69-0.71) but lower AUCs for the AFR and EAA populations (AFR: AUC, 0.61; 95% CI, 0.56-0.65; EAA: AUC, 0.64; 95% CI, 0.60-0.68). While PRS313 is associated with Hormone Receptor-positive (HR+) disease in European Americans (OR, 1.42; 95% CI, 1.16-1.64), for Latinx females, it may instead be associated with Human Epidermal Growth Factor Receptor 2 (HER2+) disease (OR, 2.52; 95% CI, 1.35-4.70), although due to small numbers, additional studies are needed. In summary, we found that PRS313 was significantly associated with breast cancer but with attenuated accuracy in women of African and Asian descent within a singular health system in Los Angeles. Our work further highlights the need for additional validation in diverse cohorts prior to clinical implementation of polygenic risk scores. |
1505.03242 | Bo Deng | Bo Deng | Mechanistic Model to Replace Hodgkin-Huxley Equations | 23 pages, 10 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we construct a mathematical model for excitable membranes by
introducing circuit characteristics for ion pump, ion current activation, and
voltage-gating. The model is capable of reestablishing the Nernst resting
potentials, all-or-nothing action potentials, absolute refraction, anode break
excitation, and spike bursts. We propose to replace the Hodgkin-Huxley model by
our model as the basis template for neurons and excitable membranes.
| [
{
"created": "Wed, 13 May 2015 04:52:43 GMT",
"version": "v1"
}
] | 2015-05-14 | [
[
"Deng",
"Bo",
""
]
] | In this paper we construct a mathematical model for excitable membranes by introducing circuit characteristics for ion pump, ion current activation, and voltage-gating. The model is capable of reestablishing the Nernst resting potentials, all-or-nothing action potentials, absolute refraction, anode break excitation, and spike bursts. We propose to replace the Hodgkin-Huxley model by our model as the basis template for neurons and excitable membranes. |
1610.00820 | Michael Hinczewski | David Hathcock, James Sheehy, Casey Weisenberger, Efe Ilker, Michael
Hinczewski | Noise Filtering and Prediction in Biological Signaling Networks | 15 pages, 6 figures | null | 10.1109/TMBMC.2016.2633269 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information transmission in biological signaling circuits has often been
described using the metaphor of a noise filter. Cellular systems need accurate,
real-time data about their environmental conditions, but the biochemical
reaction networks that propagate, amplify, and process signals work with noisy
representations of that data. Biology must implement strategies that not only
filter the noise, but also predict the current state of the environment based
on information delayed due to the finite speed of chemical signaling. The idea
of a biochemical noise filter is actually more than just a metaphor: we
describe recent work that has made an explicit mathematical connection between
signaling fidelity in cellular circuits and the classic theories of optimal
noise filtering and prediction that began with Wiener, Kolmogorov, Shannon, and
Bode. This theoretical framework provides a versatile tool, allowing us to
derive analytical bounds on the maximum mutual information between the
environmental signal and the real-time estimate constructed by the system. It
helps us understand how the structure of a biological network, and the response
times of its components, influences the accuracy of that estimate. The theory
also provides insights into how evolution may have tuned enzyme kinetic
parameters and populations to optimize information transfer.
| [
{
"created": "Tue, 4 Oct 2016 02:05:36 GMT",
"version": "v1"
}
] | 2019-02-27 | [
[
"Hathcock",
"David",
""
],
[
"Sheehy",
"James",
""
],
[
"Weisenberger",
"Casey",
""
],
[
"Ilker",
"Efe",
""
],
[
"Hinczewski",
"Michael",
""
]
] | Information transmission in biological signaling circuits has often been described using the metaphor of a noise filter. Cellular systems need accurate, real-time data about their environmental conditions, but the biochemical reaction networks that propagate, amplify, and process signals work with noisy representations of that data. Biology must implement strategies that not only filter the noise, but also predict the current state of the environment based on information delayed due to the finite speed of chemical signaling. The idea of a biochemical noise filter is actually more than just a metaphor: we describe recent work that has made an explicit mathematical connection between signaling fidelity in cellular circuits and the classic theories of optimal noise filtering and prediction that began with Wiener, Kolmogorov, Shannon, and Bode. This theoretical framework provides a versatile tool, allowing us to derive analytical bounds on the maximum mutual information between the environmental signal and the real-time estimate constructed by the system. It helps us understand how the structure of a biological network, and the response times of its components, influences the accuracy of that estimate. The theory also provides insights into how evolution may have tuned enzyme kinetic parameters and populations to optimize information transfer. |
0806.4845 | Avni Pllana | Avni Pllana, Herbert Bauer | Localization of Simultaneous Multiple Sources using SMS-LORETA | 6 pages, 1 figure. From a talk available at
http://psychologie.univie.ac.at/index.php?id=184&projektid=7 | null | null | FWF: P19830-B02 | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a new localization method SMS-LORETA (Simultaneous
Multiple Sources - Low Resolution Brain Electromagnetic Tomography), capable of
efficiently locating multiple simultaneous sources. The new method overcomes some
of the drawbacks of sLORETA (standardized Low Resolution Brain Electromagnetic
Tomography). The key idea of the new method is the iterative search for current
dipoles, harnessing the low error single source localization performance of
sLORETA. An evaluation of the new method by simulation is included.
| [
{
"created": "Mon, 30 Jun 2008 10:24:29 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Mar 2010 09:12:46 GMT",
"version": "v2"
},
{
"created": "Thu, 6 Jan 2011 14:33:15 GMT",
"version": "v3"
}
] | 2011-01-07 | [
[
"Pllana",
"Avni",
""
],
[
"Bauer",
"Herbert",
""
]
] | In this paper we present a new localization method SMS-LORETA (Simultaneous Multiple Sources - Low Resolution Brain Electromagnetic Tomography), capable of efficiently locating multiple simultaneous sources. The new method overcomes some of the drawbacks of sLORETA (standardized Low Resolution Brain Electromagnetic Tomography). The key idea of the new method is the iterative search for current dipoles, harnessing the low error single source localization performance of sLORETA. An evaluation of the new method by simulation is included. |
1801.10356 | Pablo Villegas G\'ongora | Serena di Santo, Pablo Villegas, Raffaella Burioni and Miguel A.
Mu\~noz | Landau-Ginzburg theory of cortex dynamics: Scale-free avalanches emerge
at the edge of synchronization | Pre-print version of the paper published in Proc. Natl. Acad. Sci.
USA | null | 10.1073/pnas.1712989115 | null | q-bio.NC cond-mat.stat-mech nlin.AO physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the origin, nature, and functional significance of complex
patterns of neural activity, as recorded by diverse electrophysiological and
neuroimaging techniques, is a central challenge in neuroscience. Such patterns
include collective oscillations emerging out of neural synchronization as well
as highly heterogeneous outbursts of activity interspersed by periods of
quiescence, called "neuronal avalanches." Much debate has been generated about
the possible scale invariance or criticality of such avalanches and its
relevance for brain function. Aimed at shedding light onto this, here we
analyze the large-scale collective properties of the cortex by using a
mesoscopic approach following the principle of parsimony of Landau-Ginzburg.
Our model is similar to that of Wilson-Cowan for neural dynamics but,
crucially, includes stochasticity and space; synaptic plasticity and inhibition are
considered as possible regulatory mechanisms. Detailed analyses uncover a phase
diagram including down-state, synchronous, asynchronous, and up-state phases
and reveal that empirical findings for neuronal avalanches are consistently
reproduced by tuning our model to the edge of synchronization. This reveals
that the putative criticality of cortical dynamics does not correspond to a
quiescent-to-active phase transition as usually assumed in theoretical
approaches but to a synchronization phase transition, at which incipient
oscillations and scale-free avalanches coexist. Furthermore, our model also
accounts for up and down states as they occur (e.g., during deep sleep). This
approach constitutes a framework to rationalize the possible collective phases
and phase transitions of cortical networks in simple terms, thus helping to
shed light on basic aspects of brain functioning from a very broad perspective.
| [
{
"created": "Wed, 31 Jan 2018 08:57:39 GMT",
"version": "v1"
}
] | 2018-02-01 | [
[
"di Santo",
"Serena",
""
],
[
"Villegas",
"Pablo",
""
],
[
"Burioni",
"Raffaella",
""
],
[
"Muñoz",
"Miguel A.",
""
]
] | Understanding the origin, nature, and functional significance of complex patterns of neural activity, as recorded by diverse electrophysiological and neuroimaging techniques, is a central challenge in neuroscience. Such patterns include collective oscillations emerging out of neural synchronization as well as highly heterogeneous outbursts of activity interspersed by periods of quiescence, called "neuronal avalanches." Much debate has been generated about the possible scale invariance or criticality of such avalanches and its relevance for brain function. Aimed at shedding light onto this, here we analyze the large-scale collective properties of the cortex by using a mesoscopic approach following the principle of parsimony of Landau-Ginzburg. Our model is similar to that of Wilson-Cowan for neural dynamics but crucially, includes stochasticity and space; synaptic plasticity and inhibition are considered as possible regulatory mechanisms. Detailed analyses uncover a phase diagram including down-state, synchronous, asynchronous, and up-state phases and reveal that empirical findings for neuronal avalanches are consistently reproduced by tuning our model to the edge of synchronization. This reveals that the putative criticality of cortical dynamics does not correspond to a quiescent-to-active phase transition as usually assumed in theoretical approaches but to a synchronization phase transition, at which incipient oscillations and scale-free avalanches coexist. Furthermore, our model also accounts for up and down states as they occur (e.g., during deep sleep). This approach constitutes a framework to rationalize the possible collective phases and phase transitions of cortical networks in simple terms, thus helping to shed light on basic aspects of brain functioning from a very broad perspective. |
1405.5795 | Thomas Schmidt | Thomas Schmidt | Behavioral criteria of feedforward processing in rapid-chase theory:
Some formal considerations | 9 pages, 2 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid-chase theory of response priming defines a set of behavioral
criteria that indicate feedforward processing of visual stimulus features
rather than recurrent processing. These feedforward criteria are strong
predictions from a feedforward model that argues that sequentially presented
prime and target stimuli lead to strictly sequential waves of visuomotor
processing that can still be traced in continuous motor output, for instance,
in pointing movements, muscle forces, or EEG readiness potentials. The
feedforward criteria make it possible to evaluate whether some continuous motor
output is consistent with feedforward processing, even though the neuronal
processes themselves are not readily observable. This paper is intended as an
auxiliary resource that states the criteria with some degree of formal
precision.
| [
{
"created": "Thu, 22 May 2014 15:30:00 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Oct 2014 15:26:24 GMT",
"version": "v2"
}
] | 2014-10-16 | [
[
"Schmidt",
"Thomas",
""
]
] | The rapid-chase theory of response priming defines a set of behavioral criteria that indicate feedforward processing of visual stimulus features rather than recurrent processing. These feedforward criteria are strong predictions from a feedforward model that argues that sequentially presented prime and target stimuli lead to strictly sequential waves of visuomotor processing that can still be traced in continuous motor output, for instance, in pointing movements, muscle forces, or EEG readiness potentials. The feedforward criteria make it possible to evaluate whether some continuous motor output is consistent with feedforward processing, even though the neuronal processes themselves are not readily observable. This paper is intended as an auxiliary resource that states the criteria with some degree of formal precision. |
1903.07429 | Larry Bull | Larry Bull | Sex and Coevolution | 15 pages. arXiv admin note: substantial text overlap with
arXiv:1811.04073, arXiv:1808.03471 | null | null | null | q-bio.PE cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been suggested that the fundamental haploid-diploid cycle of
eukaryotic sex exploits a rudimentary form of the Baldwin effect. This paper
uses the well-known NKCS model to explore the effects of coevolution upon the
behaviour of eukaryotes. It is shown how varying fitness landscape size,
ruggedness and connectedness can vary the conditions under which eukaryotic sex
proves beneficial over asexual reproduction in haploids in a coevolutionary
context. Moreover, eukaryotic sex is shown to be more sensitive to the relative
rate of evolution exhibited by its partnering species than asexual haploids.
| [
{
"created": "Fri, 15 Mar 2019 15:12:43 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Bull",
"Larry",
""
]
] | It has been suggested that the fundamental haploid-diploid cycle of eukaryotic sex exploits a rudimentary form of the Baldwin effect. This paper uses the well-known NKCS model to explore the effects of coevolution upon the behaviour of eukaryotes. It is shown how varying fitness landscape size, ruggedness and connectedness can vary the conditions under which eukaryotic sex proves beneficial over asexual reproduction in haploids in a coevolutionary context. Moreover, eukaryotic sex is shown to be more sensitive to the relative rate of evolution exhibited by its partnering species than asexual haploids. |
2105.01410 | Danko Georgiev | Danko D. Georgiev | Quantum information in neural systems | 31 pages, 7 figures | Symmetry 2021; 13(5): 773 | 10.3390/sym13050773 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Identifying the physiological processes in the central nervous system that
underlie our conscious experiences has been at the forefront of cognitive
neuroscience. While the principles of classical physics were long found to be
unaccommodating for a causally effective consciousness, the inherent
indeterminism of quantum physics, together with its characteristic dichotomy
between quantum states and quantum observables, provides a fertile ground for
the physical modeling of consciousness. Here, we utilize the Schr\"odinger
equation, together with the Planck--Einstein relation between energy and
frequency, in order to determine the appropriate quantum dynamical timescale of
conscious processes. Furthermore, with the help of a simple two-qubit toy model
we illustrate the importance of a non-zero interaction Hamiltonian for the
generation of quantum entanglement and manifestation of observable correlations
between different measurement outcomes. Employing a quantitative measure of
entanglement based on Schmidt decomposition, we show that quantum evolution
governed only by internal Hamiltonians for the individual quantum subsystems
preserves quantum coherence of separable initial quantum states, but eliminates
the possibility of any interaction and quantum entanglement. The presence of
a non-zero interaction Hamiltonian, however, allows for decoherence of the
individual quantum subsystems along with their mutual interaction and quantum
entanglement. The presented results show that quantum coherence of individual
subsystems cannot be used for cognitive binding because it is a physical
mechanism that leads to separability and non-interaction. In contrast, quantum
interactions with their associated decoherence of individual subsystems are
instrumental for dynamical changes in the quantum entanglement of the composite
quantum state vector and manifested correlations of different observable
outcomes.
| [
{
"created": "Tue, 4 May 2021 10:43:17 GMT",
"version": "v1"
}
] | 2021-05-05 | [
[
"Georgiev",
"Danko D.",
""
]
] | Identifying the physiological processes in the central nervous system that underlie our conscious experiences has been at the forefront of cognitive neuroscience. While the principles of classical physics were long found to be unaccommodating for a causally effective consciousness, the inherent indeterminism of quantum physics, together with its characteristic dichotomy between quantum states and quantum observables, provides a fertile ground for the physical modeling of consciousness. Here, we utilize the Schr\"odinger equation, together with the Planck--Einstein relation between energy and frequency, in order to determine the appropriate quantum dynamical timescale of conscious processes. Furthermore, with the help of a simple two-qubit toy model we illustrate the importance of non-zero interaction Hamiltonian for the generation of quantum entanglement and manifestation of observable correlations between different measurement outcomes. Employing a quantitative measure of entanglement based on Schmidt decomposition, we show that quantum evolution governed only by internal Hamiltonians for the individual quantum subsystems preserves quantum coherence of separable initial quantum states, but eliminates the possibility of any interaction and quantum entanglement. The presence of non-zero interaction Hamiltonian, however, allows for decoherence of the individual quantum subsystems along with their mutual interaction and quantum entanglement. The presented results show that quantum coherence of individual subsystems cannot be used for cognitive binding because it is a physical mechanism that leads to separability and non-interaction. In contrast, quantum interactions with their associated decoherence of individual subsystems are instrumental for dynamical changes in the quantum entanglement of the composite quantum state vector and manifested correlations of different observable outcomes. |
q-bio/0607037 | Edward Lyman Ph.D. | Edward Lyman and Daniel M. Zuckerman | The structural de-correlation time: A robust statistical measure of
convergence of biomolecular simulations | null | null | null | null | q-bio.QM q-bio.BM | null | Although atomistic simulations of proteins and other biological systems are
approaching microsecond timescales, the quality of trajectories has remained
difficult to assess. Such assessment is critical not only for establishing the
relevance of any individual simulation but also in the extremely active field
of developing computational methods. Here we map the trajectory assessment
problem onto a simple statistical calculation of the ``effective sample size''
- i.e., the number of statistically independent configurations. The mapping is
achieved by asking the question, ``How much time must elapse between snapshots
included in a sample for that sample to exhibit the statistical properties
expected for independent and identically distributed configurations?'' The
resulting ``structural de-correlation time'' is robustly calculated using exact
properties deduced from our previously developed ``structural histograms,''
without any fitting parameters. We show the method is equally and directly
applicable to toy models, peptides, and a 72-residue protein model. Variants of
our approach can readily be applied to a wide range of physical and chemical
systems.
| [
{
"created": "Fri, 21 Jul 2006 19:16:53 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Feb 2007 14:47:36 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Lyman",
"Edward",
""
],
[
"Zuckerman",
"Daniel M.",
""
]
] | Although atomistic simulations of proteins and other biological systems are approaching microsecond timescales, the quality of trajectories has remained difficult to assess. Such assessment is critical not only for establishing the relevance of any individual simulation but also in the extremely active field of developing computational methods. Here we map the trajectory assessment problem onto a simple statistical calculation of the ``effective sample size'' - i.e., the number of statistically independent configurations. The mapping is achieved by asking the question, ``How much time must elapse between snapshots included in a sample for that sample to exhibit the statistical properties expected for independent and identically distributed configurations?'' The resulting ``structural de-correlation time'' is robustly calculated using exact properties deduced from our previously developed ``structural histograms,'' without any fitting parameters. We show the method is equally and directly applicable to toy models, peptides, and a 72-residue protein model. Variants of our approach can readily be applied to a wide range of physical and chemical systems. |
2305.00063 | Luzhou Zhang | Luzhou Zhang | Applications of Computer Vision in Analysis of the Clock-Drawing Test as
a Metric of Cognitive Impairment | 5 pages, 3 figures | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The Clock-Drawing test is a well-known and widely used neuropsychological
metric to assess basic cognitive function. My objective is to combine methods
of machine learning in computer vision and image analysis to predict a
subject's level of cognitive impairment.
| [
{
"created": "Wed, 26 Apr 2023 18:04:39 GMT",
"version": "v1"
}
] | 2023-05-02 | [
[
"Zhang",
"Luzhou",
""
]
] | The Clock-Drawing test is a well-known and widely used neuropsychological metric to assess basic cognitive function. My objective is to combine methods of machine learning in computer vision and image analysis to predict a subject's level of cognitive impairment. |
1203.1064 | Sandip Ghosal | Sandip Ghosal and Zhen Chen | Nonlinear waves in capillary electrophoresis | 20 pages, 4 figures | Bulletin of Mathematical Biology (2010) Volume 72, Number 8, page
2047-2066 | 10.1007/s11538-010-9527-2 | null | q-bio.QM physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electrophoretic separation of a mixture of chemical species is a fundamental
technique of great usefulness in biology, health care and forensics. In
capillary electrophoresis the sample migrates in a microcapillary in the
presence of a background electrolyte. When the ionic concentration of the
sample is sufficiently high, the signal is known to exhibit features
reminiscent of nonlinear waves including sharp concentration `shocks'. In this
paper we consider a simplified model consisting of a single sample ion and a
background electrolyte consisting of a single co-ion and a counterion in the
absence of any processes that might change the ionization states of the
constituents. If the ionic diffusivities are assumed to be the same for all
constituents the concentration of sample ion is shown to obey a one dimensional
advection diffusion equation with a concentration dependent advection
velocity. If the analyte concentration is sufficiently low in a suitable
non-dimensional sense, Burgers' equation is recovered, and thus, the time
dependent problem is exactly solvable with arbitrary initial conditions. In the
case of small diffusivity either a leading edge or trailing edge shock is
formed depending on the electrophoretic mobility of the sample ion relative to
the background ions. Analytical formulas are presented for the shape, width and
migration velocity of the sample peak and it is shown that axial dispersion at
long times may be characterized by an effective diffusivity that is exactly
calculated. These results are consistent with known observations from physical
and numerical simulation experiments.
| [
{
"created": "Mon, 5 Mar 2012 22:07:04 GMT",
"version": "v1"
}
] | 2012-03-07 | [
[
"Ghosal",
"Sandip",
""
],
[
"Chen",
"Zhen",
""
]
] | Electrophoretic separation of a mixture of chemical species is a fundamental technique of great usefulness in biology, health care and forensics. In capillary electrophoresis the sample migrates in a microcapillary in the presence of a background electrolyte. When the ionic concentration of the sample is sufficiently high, the signal is known to exhibit features reminiscent of nonlinear waves including sharp concentration `shocks'. In this paper we consider a simplified model consisting of a single sample ion and a background electrolyte consisting of a single co-ion and a counterion in the absence of any processes that might change the ionization states of the constituents. If the ionic diffusivities are assumed to be the same for all constituents the concentration of sample ion is shown to obey a one dimensional advection diffusion equation with a concentration dependent advection velocity. If the analyte concentration is sufficiently low in a suitable non-dimensional sense, Burgers' equation is recovered, and thus, the time dependent problem is exactly solvable with arbitrary initial conditions. In the case of small diffusivity either a leading edge or trailing edge shock is formed depending on the electrophoretic mobility of the sample ion relative to the background ions. Analytical formulas are presented for the shape, width and migration velocity of the sample peak and it is shown that axial dispersion at long times may be characterized by an effective diffusivity that is exactly calculated. These results are consistent with known observations from physical and numerical simulation experiments. |
1211.4606 | Gergely J Sz\"oll\H{o}si | Gergely J Sz\"oll\"osi and Eric Tannier and Nicolas Lartillot and
Vincent Daubin | Lateral Gene Transfer from the Dead | published in Systematic Biology
http://sysbio.oxfordjournals.org/content/early/2013/01/25/sysbio.syt003.abstract.html | Systematic Biology 62(3):386-397, 2013 | 10.1093/sysbio/syt003 | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In phylogenetic studies, the evolution of molecular sequences is assumed to
have taken place along the phylogeny traced by the ancestors of extant species.
In the presence of lateral gene transfer (LGT), however, this may not be the
case, because the species lineage from which a gene was transferred may have
gone extinct or not have been sampled. Because it is not feasible to specify or
reconstruct the complete phylogeny of all species, we must describe the
evolution of genes outside the represented phylogeny by modelling the
speciation dynamics that gave rise to the complete phylogeny. We demonstrate
that if the number of sampled species is small compared to the total number of
existing species, the overwhelming majority of gene transfers involve
speciation to, and evolution along extinct or unsampled lineages. We show that
the evolution of genes along extinct or unsampled lineages can to good
approximation be treated as those of independently evolving lineages described
by a few global parameters. Using this result, we derive an algorithm to
calculate the probability of a gene tree and recover the maximum likelihood
reconciliation given the phylogeny of the sampled species. Examining 473 near
universal gene families from 36 cyanobacteria, we find that nearly a third of
transfer events -- 28% -- appear to have topological signatures of evolution
along extinct species, but only approximately 6% of transfers trace their
ancestry to before the common ancestor of the sampled cyanobacteria.
| [
{
"created": "Mon, 19 Nov 2012 22:06:06 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Nov 2012 16:55:43 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Nov 2012 05:20:00 GMT",
"version": "v3"
},
{
"created": "Sat, 26 Jan 2013 17:03:13 GMT",
"version": "v4"
},
{
"c... | 2013-07-01 | [
[
"Szöllösi",
"Gergely J",
""
],
[
"Tannier",
"Eric",
""
],
[
"Lartillot",
"Nicolas",
""
],
[
"Daubin",
"Vincent",
""
]
] | In phylogenetic studies, the evolution of molecular sequences is assumed to have taken place along the phylogeny traced by the ancestors of extant species. In the presence of lateral gene transfer (LGT), however, this may not be the case, because the species lineage from which a gene was transferred may have gone extinct or not have been sampled. Because it is not feasible to specify or reconstruct the complete phylogeny of all species, we must describe the evolution of genes outside the represented phylogeny by modelling the speciation dynamics that gave rise to the complete phylogeny. We demonstrate that if the number of sampled species is small compared to the total number of existing species, the overwhelming majority of gene transfers involve speciation to, and evolution along extinct or unsampled lineages. We show that the evolution of genes along extinct or unsampled lineages can to good approximation be treated as those of independently evolving lineages described by a few global parameters. Using this result, we derive an algorithm to calculate the probability of a gene tree and recover the maximum likelihood reconciliation given the phylogeny of the sampled species. Examining 473 near universal gene families from 36 cyanobacteria, we find that nearly a third of transfer events -- 28% -- appear to have topological signatures of evolution along extinct species, but only approximately 6% of transfers trace their ancestry to before the common ancestor of the sampled cyanobacteria. |
1201.0052 | Pan-Jun Kim | Pan-Jun Kim, Nathan D. Price | Genetic Co-Occurrence Network across Sequenced Microbes | Supporting information is available at PLoS Computational Biology | PLoS Comput. Biol. 7, e1002340 (2011) | 10.1371/journal.pcbi.1002340 | null | q-bio.MN physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The phenotype of any organism on earth is, in large part, the consequence of
interplay between numerous gene products encoded in the genome, and such
interplay between gene products affects the evolutionary fate of the genome
itself through the resulting phenotype. In this regard, contemporary genomes
can be used as molecular records that reveal associations of various genes
working in their natural lifestyles. By analyzing thousands of orthologs across
~600 bacterial species, we constructed a map of gene-gene co-occurrence across
much of the sequenced biome. If genes preferentially co-occur in the same
organisms, they were called herein correlogs; in the opposite case, called
anti-correlogs. To quantify correlogy and anti-correlogy, we alleviated the
contribution of indirect correlations between genes by adapting ideas developed
for reverse engineering of transcriptional regulatory networks. Resultant
correlogous associations are highly enriched for physically interacting
proteins and for co-expressed transcripts, clearly differentiating a subgroup
of functionally-obligatory protein interactions from conditional or transient
interactions. Other biochemical and phylogenetic properties were also found to
be reflected in correlogous and anti-correlogous relationships. Additionally,
our study elucidates the global organization of the gene association map, in
which various modules of correlogous genes are strikingly interconnected by
anti-correlogous crosstalk between the modules. We then demonstrate the
effectiveness of such associations along different domains of life and
environmental microbial communities. These phylogenetic profiling approaches
infer functional coupling of genes regardless of mechanistic details, and may
be useful to guide exogenous gene import in synthetic biology.
| [
{
"created": "Fri, 30 Dec 2011 02:26:12 GMT",
"version": "v1"
}
] | 2012-01-04 | [
[
"Kim",
"Pan-Jun",
""
],
[
"Price",
"Nathan D.",
""
]
] | The phenotype of any organism on earth is, in large part, the consequence of interplay between numerous gene products encoded in the genome, and such interplay between gene products affects the evolutionary fate of the genome itself through the resulting phenotype. In this regard, contemporary genomes can be used as molecular records that reveal associations of various genes working in their natural lifestyles. By analyzing thousands of orthologs across ~600 bacterial species, we constructed a map of gene-gene co-occurrence across much of the sequenced biome. If genes preferentially co-occur in the same organisms, they are termed correlogs herein; in the opposite case, anti-correlogs. To quantify correlogy and anti-correlogy, we alleviated the contribution of indirect correlations between genes by adapting ideas developed for reverse engineering of transcriptional regulatory networks. Resultant correlogous associations are highly enriched for physically interacting proteins and for co-expressed transcripts, clearly differentiating a subgroup of functionally-obligatory protein interactions from conditional or transient interactions. Other biochemical and phylogenetic properties were also found to be reflected in correlogous and anti-correlogous relationships. Additionally, our study elucidates the global organization of the gene association map, in which various modules of correlogous genes are strikingly interconnected by anti-correlogous crosstalk between the modules. We then demonstrate the effectiveness of such associations along different domains of life and environmental microbial communities. These phylogenetic profiling approaches infer functional coupling of genes regardless of mechanistic details, and may be useful to guide exogenous gene import in synthetic biology. |
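As a toy illustration of the phylogenetic-profiling idea behind this record's correlog/anti-correlog map, the sketch below scores the co-occurrence of two genes across genomes with a plain Pearson correlation. All names and data are invented, and the paper's actual method additionally suppresses indirect correlations between genes, which this sketch does not attempt.

```python
import numpy as np

def cooccurrence_score(profile_a, profile_b):
    """Pearson correlation between two gene presence/absence profiles.

    Positive scores suggest correlogs (genes that tend to co-occur in
    the same genomes); negative scores suggest anti-correlogs.
    """
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Toy presence/absence profiles over 8 genomes (1 = present, 0 = absent).
gene1 = [1, 1, 1, 0, 0, 1, 1, 0]
gene2 = [1, 1, 1, 0, 0, 1, 1, 0]   # identical profile: perfect correlog
gene3 = [0, 0, 0, 1, 1, 0, 0, 1]   # complementary profile: anti-correlog

print(cooccurrence_score(gene1, gene2))  # 1.0
print(cooccurrence_score(gene1, gene3))  # -1.0
```

In the real analysis each profile would span the ~600 sequenced bacterial species rather than eight toy genomes.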
1812.07494 | Parisa Teimoori Baghaee | Parisa Teimoori Baghaee and Atena Donya | Review of studies in field of the effects of nanotechnology on breast
cancer | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cancer is the leading cause of mortality in developed countries and the
second leading cause in developing countries; breast cancer is the most
prevalent malignancy and the leading cause of cancer mortality among women
worldwide. In the US, breast cancer accounted for 28% of new cancer cases and
15% of cancer deaths in 2010. Moreover, breast cancer is the most common
cancer among Iranian women, accounting for 24.4% of all malignancies and
causing 3.3% of mortalities per 1,000 people. The disease is most prevalent in
Tehran, where it accounts for 25.5% of all cancers. Accordingly, this study
reviews the role of nanotechnology in the diagnosis and treatment of breast
cancer. The study applies a descriptive-analytical method, with a
library-based approach to data collection: 45 articles on the structure of
different types of nanoparticles and their uses in diagnosis, imaging,
drug-delivery systems and breast cancer treatment were reviewed. Although
there are still challenges and limitations to the use of nanoparticles in
medicine, it is hoped that nanoparticles will bring a remarkable revolution,
not only in oncology but in medicine generally, in the near future. The main
topics of the present study include mineral nanoparticles, antibodies and
tumor imaging methods. Keywords: magnetic nanoparticles, breast cancer,
nanobody.
| [
{
"created": "Tue, 20 Nov 2018 19:55:09 GMT",
"version": "v1"
}
] | 2018-12-19 | [
[
"Baghaee",
"Parisa Teimoori",
""
],
[
"Donya",
"Atena",
""
]
] | Cancer is the leading cause of mortality in developed countries and the second leading cause in developing countries; breast cancer is the most prevalent malignancy and the leading cause of cancer mortality among women worldwide. In the US, breast cancer accounted for 28% of new cancer cases and 15% of cancer deaths in 2010. Moreover, breast cancer is the most common cancer among Iranian women, accounting for 24.4% of all malignancies and causing 3.3% of mortalities per 1,000 people. The disease is most prevalent in Tehran, where it accounts for 25.5% of all cancers. Accordingly, this study reviews the role of nanotechnology in the diagnosis and treatment of breast cancer. The study applies a descriptive-analytical method, with a library-based approach to data collection: 45 articles on the structure of different types of nanoparticles and their uses in diagnosis, imaging, drug-delivery systems and breast cancer treatment were reviewed. Although there are still challenges and limitations to the use of nanoparticles in medicine, it is hoped that nanoparticles will bring a remarkable revolution, not only in oncology but in medicine generally, in the near future. The main topics of the present study include mineral nanoparticles, antibodies and tumor imaging methods. Keywords: magnetic nanoparticles, breast cancer, nanobody. |
q-bio/0405008 | Hans-G\"unther D\"obereiner | H.-G. Doebereiner, B.J. Dubin-Thaler, G. Giannone, H.S. Xenias, and
M.P. Sheetz | Dynamic Phase Transitions in Cell Spreading | 4 pages, 5 figures | null | 10.1103/PhysRevLett.93.108105 | null | q-bio.CB cond-mat.soft physics.bio-ph | null | We monitored isotropic spreading of mouse embryonic fibroblasts on
fibronectin-coated substrates. Cell adhesion area versus time was measured via
total internal reflection fluorescence microscopy. Spreading proceeds in
well-defined phases. We found a power-law area growth with distinct exponents
a_i in three sequential phases, which we denote basal (a_1=0.4+-0.2), continuous
(a_2=1.6+-0.9) and contractile (a_3=0.3+-0.2) spreading. High resolution
differential interference contrast microscopy was used to characterize local
membrane dynamics at the spreading front. Fourier power spectra of membrane
velocity reveal the sudden development of periodic membrane retractions at the
transition from continuous to contractile spreading. We propose that the
classification of cell spreading into phases with distinct functional
characteristics and protein activity patterns serves as a paradigm for a
general program of a phase classification of cellular phenotype. Biological
variability is drastically reduced when only the corresponding phases are used
for comparison across species/different cell lines.
| [
{
"created": "Fri, 7 May 2004 22:44:55 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Doebereiner",
"H. -G.",
""
],
[
"Dubin-Thaler",
"B. J.",
""
],
[
"Giannone",
"G.",
""
],
[
"Xenias",
"H. S.",
""
],
[
"Sheetz",
"M. P.",
""
]
] | We monitored isotropic spreading of mouse embryonic fibroblasts on fibronectin-coated substrates. Cell adhesion area versus time was measured via total internal reflection fluorescence microscopy. Spreading proceeds in well-defined phases. We found a power-law area growth with distinct exponents a_i in three sequential phases, which we denote basal (a_1=0.4+-0.2), continuous (a_2=1.6+-0.9) and contractile (a_3=0.3+-0.2) spreading. High resolution differential interference contrast microscopy was used to characterize local membrane dynamics at the spreading front. Fourier power spectra of membrane velocity reveal the sudden development of periodic membrane retractions at the transition from continuous to contractile spreading. We propose that the classification of cell spreading into phases with distinct functional characteristics and protein activity patterns serves as a paradigm for a general program of a phase classification of cellular phenotype. Biological variability is drastically reduced when only the corresponding phases are used for comparison across species/different cell lines. |
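The power-law area growth A(t) ~ t**a_i reported in this record can be illustrated by estimating an exponent from a least-squares fit in log-log coordinates. A minimal sketch with synthetic, noise-free data (the function name and the toy values are invented, not from the paper):

```python
import numpy as np

def powerlaw_exponent(t, area):
    """Estimate the exponent a in A(t) ~ t**a from a least-squares fit
    of log(area) against log(t)."""
    slope, _intercept = np.polyfit(np.log(t), np.log(area), 1)
    return slope

# Synthetic spreading data A(t) = 2 * t**1.6, mimicking the fast
# 'continuous' spreading phase (a_2 ~ 1.6) from the abstract.
t = np.linspace(1.0, 10.0, 50)
area = 2.0 * t**1.6
print(round(powerlaw_exponent(t, area), 3))  # 1.6
```

On real adhesion-area traces, the fit would be done separately within each phase, with the phase boundaries located first.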
2309.06707 | Lingyu Li | Lingyu Li, Chunbo Li | Return to Lacan: an approach to digital twin mind with free energy
principle | 19 pages, 4 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Free energy principle (FEP) is a burgeoning theory in theoretical
neuroscience that provides a universal law for modelling living systems of any
scale. Expecting a digital twin mind from this first principle, we propose a
macro-level interpretation that bridges neuroscience and psychoanalysis through
the lens of computational Lacanian psychoanalysis. In this article, we claim
three fundamental parallels between FEP and Lacanian psychoanalysis, and
suggest a FEP approach to formalizing Lacan's theory. Sharing a non-linear
temporal structure that combines prediction and retrospection (logical time),
both theories focus on the epistemological question of how systems represent
themselves and the external world, and on how the elements that fail to be
represented (lacks and free energy) significantly influence the systems'
subsequent states. Additionally, the fundamental hypothesis of FEP, that the
precise state of the environment is always concealed, accounts for objet petit
a, the core concept in Lacan's theory. With a neuropsychoanalytic mapping from
the three orders (the Real, the Symbolic, and the Imaginary, RSI) onto brain
regions, we propose a brain-wide FEP model for a minimal definition of the
Lacanian mind - a composite state of RSI perturbed by desire running over
logical time. The FEP-RSI model involves three FEP units connected by their
respective free energies, naturally complying with logical time and mimicking
the core dynamics of the Lacanian mind. The biological plausibility of the
current model is considered from the perspective of cognitive neuroscience. In
conclusion, the FEP-RSI model encapsulates a unified framework for digital
twin modeling at the macro level.
| [
{
"created": "Wed, 13 Sep 2023 04:12:53 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2024 07:39:53 GMT",
"version": "v2"
}
] | 2024-07-08 | [
[
"Li",
"Lingyu",
""
],
[
"Li",
"Chunbo",
""
]
] | Free energy principle (FEP) is a burgeoning theory in theoretical neuroscience that provides a universal law for modelling living systems of any scale. Expecting a digital twin mind from this first principle, we propose a macro-level interpretation that bridges neuroscience and psychoanalysis through the lens of computational Lacanian psychoanalysis. In this article, we claim three fundamental parallels between FEP and Lacanian psychoanalysis, and suggest a FEP approach to formalizing Lacan's theory. Sharing a non-linear temporal structure that combines prediction and retrospection (logical time), both theories focus on the epistemological question of how systems represent themselves and the external world, and on how the elements that fail to be represented (lacks and free energy) significantly influence the systems' subsequent states. Additionally, the fundamental hypothesis of FEP, that the precise state of the environment is always concealed, accounts for objet petit a, the core concept in Lacan's theory. With a neuropsychoanalytic mapping from the three orders (the Real, the Symbolic, and the Imaginary, RSI) onto brain regions, we propose a brain-wide FEP model for a minimal definition of the Lacanian mind - a composite state of RSI perturbed by desire running over logical time. The FEP-RSI model involves three FEP units connected by their respective free energies, naturally complying with logical time and mimicking the core dynamics of the Lacanian mind. The biological plausibility of the current model is considered from the perspective of cognitive neuroscience. In conclusion, the FEP-RSI model encapsulates a unified framework for digital twin modeling at the macro level. |
0807.1375 | Cameron Mura | Cameron Mura and J. Andrew McCammon | Molecular Dynamics of a kB DNA Element: Base Flipping via Cross-strand
Intercalative Stacking in a Microsecond-scale Simulation | 21 pages, 9 figures; revised version has been updated to include
figures | Nucleic Acids Research (2008), 36(15), 4941-4955 | 10.1093/nar/gkn473 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sequence-dependent structural variability and conformational dynamics of
DNA play pivotal roles in many biological milieus, such as in the site-specific
binding of transcription factors to target regulatory elements. To better
understand DNA structure, function, and dynamics in general, and protein-DNA
recognition in the 'kB' family of genetic regulatory elements in particular, we
performed molecular dynamics simulations of a 20-base pair DNA encompassing a
cognate kB site recognized by the proto-oncogenic 'c-Rel' subfamily of NF-kB
transcription factors. Simulations of the kB DNA in explicit water were
extended to microsecond duration, providing a broad, atomically-detailed
glimpse into the structural and dynamical behavior of double helical DNA over
many timescales. Of particular note, novel (and structurally plausible)
conformations of DNA developed only at the long times sampled in this
simulation -- including a peculiar state arising at ~ 0.7 us and characterized
by cross-strand intercalative stacking of nucleotides within a
longitudinally-sheared base pair, followed (at ~ 1 us) by spontaneous base
flipping of a neighboring thymine within the A-rich duplex. Results and
predictions from the us-scale simulation include implications for a dynamical
NF-kB recognition motif, and are amenable to testing and further exploration
via specific experimental approaches that are suggested herein.
| [
{
"created": "Wed, 9 Jul 2008 05:25:29 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jul 2008 04:18:44 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Jul 2014 20:25:46 GMT",
"version": "v3"
},
{
"created": "Fri, 18 Jul 2014 03:10:30 GMT",
"version": "v4"
}
] | 2014-07-22 | [
[
"Mura",
"Cameron",
""
],
[
"McCammon",
"J. Andrew",
""
]
] | The sequence-dependent structural variability and conformational dynamics of DNA play pivotal roles in many biological milieus, such as in the site-specific binding of transcription factors to target regulatory elements. To better understand DNA structure, function, and dynamics in general, and protein-DNA recognition in the 'kB' family of genetic regulatory elements in particular, we performed molecular dynamics simulations of a 20-base pair DNA encompassing a cognate kB site recognized by the proto-oncogenic 'c-Rel' subfamily of NF-kB transcription factors. Simulations of the kB DNA in explicit water were extended to microsecond duration, providing a broad, atomically-detailed glimpse into the structural and dynamical behavior of double helical DNA over many timescales. Of particular note, novel (and structurally plausible) conformations of DNA developed only at the long times sampled in this simulation -- including a peculiar state arising at ~ 0.7 us and characterized by cross-strand intercalative stacking of nucleotides within a longitudinally-sheared base pair, followed (at ~ 1 us) by spontaneous base flipping of a neighboring thymine within the A-rich duplex. Results and predictions from the us-scale simulation include implications for a dynamical NF-kB recognition motif, and are amenable to testing and further exploration via specific experimental approaches that are suggested herein. |
1701.03883 | Jared Michael Field | Jared M. Field, Michael B. Bonsall | Evolutionary stability and the rarity of grandmothering | 4 pages, 1 figure. Submitted | null | 10.1002/ece3.2958 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The provision of intergenerational care, via the Grandmother Hypothesis, has
been implicated in the evolution of post-fertile longevity, particularly in
humans. However, if grandmothering does provide fitness benefits, a key
question is why has it evolved so infrequently? We investigate this question
with a combination of life-history and evolutionary game theory. We derive
simple eligibility and stability thresholds, both of which must be satisfied if
intergenerational care is first to evolve and then to persist in a population.
As one threshold becomes easier to fulfill, the other becomes more difficult,
revealing a conflict between the two. As such, we suggest that, in fact, we
should expect the evolution of grandmothering to be rare.
| [
{
"created": "Sat, 14 Jan 2017 06:22:01 GMT",
"version": "v1"
}
] | 2017-05-01 | [
[
"Field",
"Jared M.",
""
],
[
"Bonsall",
"Michael B.",
""
]
] | The provision of intergenerational care, via the Grandmother Hypothesis, has been implicated in the evolution of post-fertile longevity, particularly in humans. However, if grandmothering does provide fitness benefits, a key question is why has it evolved so infrequently? We investigate this question with a combination of life-history and evolutionary game theory. We derive simple eligibility and stability thresholds, both of which must be satisfied if intergenerational care is first to evolve and then to persist in a population. As one threshold becomes easier to fulfill, the other becomes more difficult, revealing a conflict between the two. As such, we suggest that, in fact, we should expect the evolution of grandmothering to be rare. |
2408.06391 | Dingyi Rong | Dingyi Rong, Wenzhuo Zheng, Bozitao Zhong, Zhouhan Lin, Liang Hong,
Ning Liu | Autoregressive Enzyme Function Prediction with Multi-scale
Multi-modality Fusion | null | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurate prediction of enzyme function is crucial for elucidating biological
mechanisms and driving innovation across various sectors. Existing deep
learning methods tend to rely solely on either sequence data or structural data
and predict the EC number as a whole, neglecting the intrinsic hierarchical
structure of EC numbers. To address these limitations, we introduce MAPred, a
novel multi-modality and multi-scale model designed to autoregressively predict
the EC number of proteins. MAPred integrates both the primary amino acid
sequence and the 3D tokens of proteins, employing a dual-pathway approach to
capture comprehensive protein characteristics and essential local functional
sites. Additionally, MAPred utilizes an autoregressive prediction network to
sequentially predict the digits of the EC number, leveraging the hierarchical
organization of EC classifications. Evaluations on benchmark datasets,
including New-392, Price, and New-815, demonstrate that our method outperforms
existing models, marking a significant advance in the reliability and
granularity of protein function prediction within bioinformatics.
| [
{
"created": "Sun, 11 Aug 2024 08:28:43 GMT",
"version": "v1"
}
] | 2024-08-14 | [
[
"Rong",
"Dingyi",
""
],
[
"Zheng",
"Wenzhuo",
""
],
[
"Zhong",
"Bozitao",
""
],
[
"Lin",
"Zhouhan",
""
],
[
"Hong",
"Liang",
""
],
[
"Liu",
"Ning",
""
]
] | Accurate prediction of enzyme function is crucial for elucidating biological mechanisms and driving innovation across various sectors. Existing deep learning methods tend to rely solely on either sequence data or structural data and predict the EC number as a whole, neglecting the intrinsic hierarchical structure of EC numbers. To address these limitations, we introduce MAPred, a novel multi-modality and multi-scale model designed to autoregressively predict the EC number of proteins. MAPred integrates both the primary amino acid sequence and the 3D tokens of proteins, employing a dual-pathway approach to capture comprehensive protein characteristics and essential local functional sites. Additionally, MAPred utilizes an autoregressive prediction network to sequentially predict the digits of the EC number, leveraging the hierarchical organization of EC classifications. Evaluations on benchmark datasets, including New-392, Price, and New-815, demonstrate that our method outperforms existing models, marking a significant advance in the reliability and granularity of protein function prediction within bioinformatics. |
0812.3011 | Petter Holme | Petter Holme | Model validation of simple-graph representations of metabolism | to appear in J. Roy. Soc. Inteface | J. R. Soc. Interface 6, 1027-1034 (2009) | 10.1098/rsif.2008.0489 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The large-scale properties of chemical reaction systems, such as the
metabolism, can be studied with graph-based methods. To do this, one needs to
reduce the information -- lists of chemical reactions -- available in
databases. Even for the simplest type of graph representation, this reduction
can be done in several ways. We investigate different simple network
representations by testing how well they encode information about one
biologically important network structure -- network modularity (the propensity
for edges to cluster into dense groups that are sparsely connected to each
other). To reach this goal, we design a model of reaction systems where
network modularity can be controlled and measure how well the reduction to
simple graphs captures the modular structure of the model reaction system. We
find that the network types that best capture the modular structure of the
reaction system are substrate-product networks (where substrates are linked to
products of a reaction) and substance networks (with edges between all
substances participating in a reaction). Furthermore, we argue that the
proposed model for reaction systems with tunable clustering is a general
framework for studies of how reaction systems are affected by modularity. To
this end, we investigate statistical properties of the model and find, among
other things, that it recreates correlations between degree and mass of the
molecules.
| [
{
"created": "Tue, 16 Dec 2008 09:50:51 GMT",
"version": "v1"
}
] | 2009-09-25 | [
[
"Holme",
"Petter",
""
]
] | The large-scale properties of chemical reaction systems, such as the metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information -- lists of chemical reactions -- available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure -- network modularity (the propensity for edges to cluster into dense groups that are sparsely connected to each other). To reach this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate-product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. |
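The two graph representations this record singles out, substrate-product networks and substance networks, can be sketched directly from a reaction list. The function names and the single toy reaction below are invented for illustration:

```python
from itertools import combinations, product

def substrate_product_edges(reactions):
    """Substrate-product representation: a directed edge from each
    substrate of a reaction to each of its products."""
    edges = set()
    for substrates, products in reactions:
        for s, p in product(substrates, products):
            edges.add((s, p))
    return edges

def substance_edges(reactions):
    """Substance representation: an undirected edge between every pair
    of substances participating in the same reaction."""
    edges = set()
    for substrates, products in reactions:
        for a, b in combinations(sorted(set(substrates) | set(products)), 2):
            edges.add((a, b))
    return edges

# Toy reaction list with one reaction: glucose + ATP -> G6P + ADP.
rxns = [(("glucose", "ATP"), ("G6P", "ADP"))]
print(sorted(substrate_product_edges(rxns)))  # 4 directed substrate->product edges
print(len(substance_edges(rxns)))             # 6: complete graph on 4 substances
```

The contrast is visible even on one reaction: the substrate-product graph links 2x2 substrate/product pairs, while the substance graph joins all participants pairwise.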
2404.04983 | Nora Ouzir | Aur\'elie Beaufr\`ere, Nora Ouzir, Paul Emile Zafar, Astrid
Laurent-Bellue, Miguel Albuquerque, Gwladys Lubuela, Jules Gr\'egory,
Catherine Guettier, K\'evin Mondet, Jean-Christophe Pesquet, Val\'erie
Paradis | Primary liver cancer classification from routine tumour biopsy using
weakly supervised deep learning | https://www.sciencedirect.com/science/article/pii/S2589555924000090 | JHEP Reports, Volume 6, Issue 3, 2024 | 10.1016/j.jhepr.2024.101008 | null | q-bio.TO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | The diagnosis of primary liver cancers (PLCs) can be challenging, especially
on biopsies and for combined hepatocellular-cholangiocarcinoma (cHCC-CCA). We
automatically classified PLCs on routine-stained biopsies using a weakly
supervised learning method. Weak tumour/non-tumour annotations served as labels
for training a Resnet18 neural network, and the network's last convolutional
layer was used to extract new tumour tile features. Without knowledge of the
precise labels of the malignancies, we then applied an unsupervised clustering
algorithm. Our model identified specific features of hepatocellular carcinoma
(HCC) and intrahepatic cholangiocarcinoma (iCCA). Despite no specific features
of cHCC-CCA being recognized, the identification of HCC and iCCA tiles within a
slide could facilitate the diagnosis of primary liver cancers, particularly
cHCC-CCA.
Method and results: 166 PLC biopsies were divided into training, internal and
external validation sets: 90, 29 and 47 samples. Two liver pathologists
reviewed each whole-slide hematein eosin saffron (HES)-stained image (WSI).
After annotating the tumour/non-tumour areas, 256x256 pixel tiles were
extracted from the WSIs and used to train a ResNet18. The network was used to
extract new tile features. An unsupervised clustering algorithm was then
applied to the new tile features. In a two-cluster model, Clusters 0 and 1
contained mainly HCC and iCCA histological features. The diagnostic agreement
between the pathological diagnosis and the model predictions in the internal
and external validation sets was 100% (11/11) and 96% (25/26) for HCC and 78%
(7/9) and 87% (13/15) for iCCA, respectively. For cHCC-CCA, we observed a
highly variable proportion of tiles from each cluster (Cluster 0: 5-97%;
Cluster 1: 2-94%).
| [
{
"created": "Sun, 7 Apr 2024 15:03:46 GMT",
"version": "v1"
}
] | 2024-04-09 | [
[
"Beaufrère",
"Aurélie",
""
],
[
"Ouzir",
"Nora",
""
],
[
"Zafar",
"Paul Emile",
""
],
[
"Laurent-Bellue",
"Astrid",
""
],
[
"Albuquerque",
"Miguel",
""
],
[
"Lubuela",
"Gwladys",
""
],
[
"Grégory",
"Jules",
... | The diagnosis of primary liver cancers (PLCs) can be challenging, especially on biopsies and for combined hepatocellular-cholangiocarcinoma (cHCC-CCA). We automatically classified PLCs on routine-stained biopsies using a weakly supervised learning method. Weak tumour/non-tumour annotations served as labels for training a Resnet18 neural network, and the network's last convolutional layer was used to extract new tumour tile features. Without knowledge of the precise labels of the malignancies, we then applied an unsupervised clustering algorithm. Our model identified specific features of hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (iCCA). Despite no specific features of cHCC-CCA being recognized, the identification of HCC and iCCA tiles within a slide could facilitate the diagnosis of primary liver cancers, particularly cHCC-CCA. Method and results: 166 PLC biopsies were divided into training, internal and external validation sets: 90, 29 and 47 samples. Two liver pathologists reviewed each whole-slide hematein eosin saffron (HES)-stained image (WSI). After annotating the tumour/non-tumour areas, 256x256 pixel tiles were extracted from the WSIs and used to train a ResNet18. The network was used to extract new tile features. An unsupervised clustering algorithm was then applied to the new tile features. In a two-cluster model, Clusters 0 and 1 contained mainly HCC and iCCA histological features. The diagnostic agreement between the pathological diagnosis and the model predictions in the internal and external validation sets was 100% (11/11) and 96% (25/26) for HCC and 78% (7/9) and 87% (13/15) for iCCA, respectively. For cHCC-CCA, we observed a highly variable proportion of tiles from each cluster (Cluster 0: 5-97%; Cluster 1: 2-94%). |
0707.3642 | Joshua E. S. Socolar | Andre S. Ribeiro, Stuart A. Kauffman, Jason Lloyd-Price, Bj\"orn
Samuelsson and Joshua E. S. Socolar | Mutual information in random Boolean models of regulatory networks | 11 pages, 6 figures; Minor revisions for clarity and figure format,
one reference added | null | 10.1103/PhysRevE.77.011901 | null | q-bio.OT q-bio.QM | null | The amount of mutual information contained in time series of two elements
gives a measure of how well their activities are coordinated. In a large,
complex network of interacting elements, such as a genetic regulatory network
within a cell, the average of the mutual information over all pairs <I> is a
global measure of how well the system can coordinate its internal dynamics. We
study this average pairwise mutual information in random Boolean networks
(RBNs) as a function of the distribution of Boolean rules implemented at each
element, assuming that the links in the network are randomly placed. Efficient
numerical methods for calculating <I> show that as the number of network nodes
N approaches infinity, the quantity N<I> exhibits a discontinuity at parameter
values corresponding to critical RBNs. For finite systems it peaks near the
critical value, but slightly in the disordered regime for typical parameter
variations. The source of high values of N<I> is the indirect correlations
between pairs of elements from different long chains with a common starting
point. The contribution from pairs that are directly linked approaches zero for
critical networks and peaks deep in the disordered regime.
| [
{
"created": "Tue, 24 Jul 2007 21:51:15 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Nov 2007 20:43:11 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Ribeiro",
"Andre S.",
""
],
[
"Kauffman",
"Stuart A.",
""
],
[
"Lloyd-Price",
"Jason",
""
],
[
"Samuelsson",
"Björn",
""
],
[
"Socolar",
"Joshua E. S.",
""
]
] | The amount of mutual information contained in time series of two elements gives a measure of how well their activities are coordinated. In a large, complex network of interacting elements, such as a genetic regulatory network within a cell, the average of the mutual information over all pairs <I> is a global measure of how well the system can coordinate its internal dynamics. We study this average pairwise mutual information in random Boolean networks (RBNs) as a function of the distribution of Boolean rules implemented at each element, assuming that the links in the network are randomly placed. Efficient numerical methods for calculating <I> show that as the number of network nodes N approaches infinity, the quantity N<I> exhibits a discontinuity at parameter values corresponding to critical RBNs. For finite systems it peaks near the critical value, but slightly in the disordered regime for typical parameter variations. The source of high values of N<I> is the indirect correlations between pairs of elements from different long chains with a common starting point. The contribution from pairs that are directly linked approaches zero for critical networks and peaks deep in the disordered regime. |
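The pairwise mutual information underlying <I> in this record can be estimated for two Boolean time series from their empirical joint distribution. A minimal sketch with invented names and toy series (the paper uses more efficient numerical methods over all node pairs):

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Mutual information (in bits) between two Boolean time series,
    estimated from the empirical joint distribution."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), count in pxy.items():
        p_ab = count / n
        mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

x = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(x, x))        # 1.0: perfectly coordinated elements
print(mutual_information(x, [0] * 8))  # 0.0: a frozen element shares no information
```

Averaging this quantity over all node pairs of a network gives the <I> whose scaled version N<I> the abstract analyzes near criticality.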
1210.1544 | Tao Hu | Tao Hu and Dmitri B. Chklovskii | Reconstruction of Sparse Circuits Using Multi-neuronal Excitation
(RESCUME) | 9 pages, 6 figures. Advances in Neural Information Processing Systems
(NIPS) 22, 790 (2009) | null | null | null | q-bio.NC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the central problems in neuroscience is reconstructing synaptic
connectivity in neural circuits. Synapses onto a neuron can be probed by
sequentially stimulating potentially pre-synaptic neurons while monitoring the
membrane voltage of the post-synaptic neuron. Reconstructing a large neural
circuit using such a "brute force" approach is rather time-consuming and
inefficient because the connectivity in neural circuits is sparse. Instead, we
propose to measure a post-synaptic neuron's voltage while stimulating
sequentially random subsets of multiple potentially pre-synaptic neurons. To
reconstruct these synaptic connections from the recorded voltage we apply a
decoding algorithm recently developed for compressive sensing. Compared to the
brute force approach, our method promises significant time savings that grow
with the size of the circuit. We use computer simulations to find optimal
stimulation parameters and explore the feasibility of our reconstruction method
under realistic experimental conditions including noise and non-linear synaptic
integration. Multineuronal stimulation allows reconstructing synaptic
connectivity just from the spiking activity of post-synaptic neurons, even when
sub-threshold voltage is unavailable. By using calcium indicators,
voltage-sensitive dyes, or multi-electrode arrays one could monitor activity of
multiple postsynaptic neurons simultaneously, thus mapping their synaptic
inputs in parallel, potentially reconstructing a complete neural circuit.
| [
{
"created": "Thu, 4 Oct 2012 19:03:19 GMT",
"version": "v1"
}
] | 2012-10-05 | [
[
"Hu",
"Tao",
""
],
[
"Chklovskii",
"Dmitri B.",
""
]
] | One of the central problems in neuroscience is reconstructing synaptic connectivity in neural circuits. Synapses onto a neuron can be probed by sequentially stimulating potentially pre-synaptic neurons while monitoring the membrane voltage of the post-synaptic neuron. Reconstructing a large neural circuit using such a "brute force" approach is rather time-consuming and inefficient because the connectivity in neural circuits is sparse. Instead, we propose to measure a post-synaptic neuron's voltage while stimulating sequentially random subsets of multiple potentially pre-synaptic neurons. To reconstruct these synaptic connections from the recorded voltage we apply a decoding algorithm recently developed for compressive sensing. Compared to the brute force approach, our method promises significant time savings that grow with the size of the circuit. We use computer simulations to find optimal stimulation parameters and explore the feasibility of our reconstruction method under realistic experimental conditions including noise and non-linear synaptic integration. Multineuronal stimulation allows reconstructing synaptic connectivity just from the spiking activity of post-synaptic neurons, even when sub-threshold voltage is unavailable. By using calcium indicators, voltage-sensitive dyes, or multi-electrode arrays one could monitor activity of multiple postsynaptic neurons simultaneously, thus mapping their synaptic inputs in parallel, potentially reconstructing a complete neural circuit. |
1909.02136 | John Halloran | John T. Halloran and David M. Rocke | Learning Concave Conditional Likelihood Models for Improved Analysis of
Tandem Mass Spectra | 16 pages. A partitioned version of this appeared in NeurIPS 2018 | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most widely used technology to identify the proteins present in a complex
biological sample is tandem mass spectrometry, which quickly produces a large
collection of spectra representative of the peptides (i.e., protein
subsequences) present in the original sample. In this work, we greatly expand
the parameter learning capabilities of a dynamic Bayesian network (DBN)
peptide-scoring algorithm, Didea, by deriving emission distributions for which
its conditional log-likelihood scoring function remains concave. We show that
this class of emission distributions, called Convex Virtual Emissions (CVEs),
naturally generalizes the log-sum-exp function while rendering both maximum
likelihood estimation and conditional maximum likelihood estimation concave for
a wide range of Bayesian networks. Utilizing CVEs in Didea allows efficient
learning of a large number of parameters while ensuring global convergence, in
stark contrast to Didea's previous parameter learning framework (which could
only learn a single parameter using a costly grid search) and other trainable
models (which only ensure convergence to local optima). The newly trained
scoring function substantially outperforms the state-of-the-art in both scoring
function accuracy and downstream Fisher kernel analysis. Furthermore, we
significantly improve Didea's runtime performance through successive
optimizations to its message passing schedule and derive explicit connections
between Didea's new concave score and related MS/MS scoring functions.
| [
{
"created": "Wed, 4 Sep 2019 22:27:10 GMT",
"version": "v1"
}
] | 2019-09-06 | [
[
"Halloran",
"John T.",
""
],
[
"Rocke",
"David M.",
""
]
] | The most widely used technology to identify the proteins present in a complex biological sample is tandem mass spectrometry, which quickly produces a large collection of spectra representative of the peptides (i.e., protein subsequences) present in the original sample. In this work, we greatly expand the parameter learning capabilities of a dynamic Bayesian network (DBN) peptide-scoring algorithm, Didea, by deriving emission distributions for which its conditional log-likelihood scoring function remains concave. We show that this class of emission distributions, called Convex Virtual Emissions (CVEs), naturally generalizes the log-sum-exp function while rendering both maximum likelihood estimation and conditional maximum likelihood estimation concave for a wide range of Bayesian networks. Utilizing CVEs in Didea allows efficient learning of a large number of parameters while ensuring global convergence, in stark contrast to Didea's previous parameter learning framework (which could only learn a single parameter using a costly grid search) and other trainable models (which only ensure convergence to local optima). The newly trained scoring function substantially outperforms the state-of-the-art in both scoring function accuracy and downstream Fisher kernel analysis. Furthermore, we significantly improve Didea's runtime performance through successive optimizations to its message passing schedule and derive explicit connections between Didea's new concave score and related MS/MS scoring functions. |
1503.08033 | Jonathan Touboul | Jonathan Touboul and Alain Destexhe | Power-law statistics and universal scaling in the absence of criticality | in press in Phys. Rev. E | Phys. Rev. E 95, 012413 (2017) | 10.1103/PhysRevE.95.012413 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Critical states are sometimes identified experimentally through power-law
statistics or universal scaling functions. We show here that such features
naturally emerge from networks in self-sustained irregular regimes away from
criticality. In these regimes, the statistical physics theory of large
interacting systems predicts a regime where the nodes have independent and
identically distributed dynamics. We thus investigated the statistics of a
system in which units are replaced by independent stochastic surrogates, and
found the same power-law statistics, indicating that these are not sufficient
to establish criticality. We rather suggest that these are universal features
of large-scale networks when considered macroscopically. These results urge
caution in the interpretation of scaling laws found in nature.
| [
{
"created": "Fri, 27 Mar 2015 11:46:57 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Apr 2015 14:10:26 GMT",
"version": "v2"
},
{
"created": "Sun, 27 Dec 2015 13:48:10 GMT",
"version": "v3"
},
{
"created": "Fri, 1 Jul 2016 00:15:46 GMT",
"version": "v4"
},
{
"cr... | 2017-02-08 | [
[
"Touboul",
"Jonathan",
""
],
[
"Destexhe",
"Alain",
""
]
] ] | Critical states are sometimes identified experimentally through power-law statistics or universal scaling functions. We show here that such features naturally emerge from networks in self-sustained irregular regimes away from criticality. In these regimes, the statistical physics theory of large interacting systems predicts a regime where the nodes have independent and identically distributed dynamics. We thus investigated the statistics of a system in which units are replaced by independent stochastic surrogates, and found the same power-law statistics, indicating that these are not sufficient to establish criticality. We rather suggest that these are universal features of large-scale networks when considered macroscopically. These results urge caution in the interpretation of scaling laws found in nature.
2304.06253 | Masahito Ohue | Apakorn Kengkanna, Masahito Ohue | Enhancing Model Learning and Interpretation Using Multiple Molecular
Graph Representations for Compound Property and Activity Prediction | null | null | 10.1109/CIBCB56990.2023.10264879 | null | q-bio.BM cs.LG q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) demonstrate great performance in compound
property and activity prediction due to their capability to efficiently learn
complex molecular graph structures. However, two main limitations persist
including compound representation and model interpretability. While atom-level
molecular graph representations are commonly used because of their ability to
capture natural topology, they may not fully express important substructures or
functional groups which significantly influence molecular properties.
Consequently, recent research proposes alternative representations employing
reduction techniques to integrate higher-level information and leverages both
representations for model learning. However, the effects of different
molecular graph representations on model learning and interpretation remain
understudied. Interpretability is also crucial for drug discovery as it can
offer chemical insights and inspiration for optimization. Numerous studies
attempt to include model interpretation to explain the rationale behind
predictions, but most of them focus solely on individual prediction with little
analysis of the interpretation on different molecular graph representations.
This research introduces multiple molecular graph representations that
incorporate higher-level information and investigates their effects on model
learning and interpretation from diverse perspectives. The results indicate
that combining atom graph representation with reduced molecular graph
representation can yield promising model performance. Furthermore, the
interpretation results can provide significant features and potential
substructures consistently aligning with background knowledge. These multiple
molecular graph representations and interpretation analysis can bolster model
comprehension and facilitate relevant applications in drug discovery.
| [
{
"created": "Thu, 13 Apr 2023 04:20:30 GMT",
"version": "v1"
}
] | 2023-10-10 | [
[
"Kengkanna",
"Apakorn",
""
],
[
"Ohue",
"Masahito",
""
]
] ] | Graph neural networks (GNNs) demonstrate great performance in compound property and activity prediction due to their capability to efficiently learn complex molecular graph structures. However, two main limitations persist including compound representation and model interpretability. While atom-level molecular graph representations are commonly used because of their ability to capture natural topology, they may not fully express important substructures or functional groups which significantly influence molecular properties. Consequently, recent research proposes alternative representations employing reduction techniques to integrate higher-level information and leverages both representations for model learning. However, the effects of different molecular graph representations on model learning and interpretation remain understudied. Interpretability is also crucial for drug discovery as it can offer chemical insights and inspiration for optimization. Numerous studies attempt to include model interpretation to explain the rationale behind predictions, but most of them focus solely on individual prediction with little analysis of the interpretation on different molecular graph representations. This research introduces multiple molecular graph representations that incorporate higher-level information and investigates their effects on model learning and interpretation from diverse perspectives. The results indicate that combining atom graph representation with reduced molecular graph representation can yield promising model performance. Furthermore, the interpretation results can provide significant features and potential substructures consistently aligning with background knowledge. These multiple molecular graph representations and interpretation analysis can bolster model comprehension and facilitate relevant applications in drug discovery.
1410.7998 | Liane Gabora | Liane Gabora and Stefan Leijnen | The Relationship between Creativity, Imitation, and Cultural Diversity | 13 pages. arXiv admin note: substantial text overlap with
arXiv:0911.2390, arXiv:1005.1516, arXiv:1310.3781, arXiv:1310.0522,
arXiv:0811.2551 | International Journal of Software and Informatics, 7(4), 615-627
2013 | null | null | q-bio.NC cs.CY cs.MA cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are both benefits and drawbacks to cultural diversity. It can lead to
friction and exacerbate differences. However, as with biological diversity,
cultural diversity is valuable in times of upheaval; if a previously effective
solution no longer works, it is good to have alternatives available. What
factors give rise to cultural diversity? This paper describes a preliminary
investigation of this question using a computational model of cultural
evolution. The model is composed of neural network based agents that evolve
fitter ideas for actions by (1) inventing new ideas through modification of
existing ones, and (2) imitating neighbors' ideas. Numerical simulations
indicate that the diversity of ideas in a population is positively correlated
with both the proportion of creators to imitators in the population, and the
rate at which creators create. This is the case for both minimum and peak
diversity of actions over the duration of a run.
| [
{
"created": "Sat, 18 Oct 2014 02:57:14 GMT",
"version": "v1"
}
] | 2019-07-19 | [
[
"Gabora",
"Liane",
""
],
[
"Leijnen",
"Stefan",
""
]
] | There are both benefits and drawbacks to cultural diversity. It can lead to friction and exacerbate differences. However, as with biological diversity, cultural diversity is valuable in times of upheaval; if a previously effective solution no longer works, it is good to have alternatives available. What factors give rise to cultural diversity? This paper describes a preliminary investigation of this question using a computational model of cultural evolution. The model is composed of neural network based agents that evolve fitter ideas for actions by (1) inventing new ideas through modification of existing ones, and (2) imitating neighbors' ideas. Numerical simulations indicate that the diversity of ideas in a population is positively correlated with both the proportion of creators to imitators in the population, and the rate at which creators create. This is the case for both minimum and peak diversity of actions over the duration of a run. |
1607.06835 | J. C. Phillips | J. C. Phillips | Minimal Immunogenic Epitopes Have Nine Amino Acids | 7 pages, 2 figures | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To be cost-effective, biomedical proteins must be optimized with regard to
many factors. Road maps are customary for large-scale projects, and here
descriptive methods based on bioinformatic fractal thermodynamic scales are
tested against an important example, HPV vaccine. Older scales from before 2000
are found to yield inconclusive results, but modern bioinformatic scales are
amazingly accurate, with a high level of internal consistency, and little
ambiguity.
| [
{
"created": "Fri, 22 Jul 2016 20:36:15 GMT",
"version": "v1"
}
] | 2016-07-26 | [
[
"Phillips",
"J. C.",
""
]
] | To be cost-effective, biomedical proteins must be optimized with regard to many factors. Road maps are customary for large-scale projects, and here descriptive methods based on bioinformatic fractal thermodynamic scales are tested against an important example, HPV vaccine. Older scales from before 2000 are found to yield inconclusive results, but modern bioinformatic scales are amazingly accurate, with a high level of internal consistency, and little ambiguity. |
1302.5964 | Il Memming Park | Il Memming Park, Sohan Seth, Antonio R. C. Paiva, Lin Li, Jose C.
Principe | Kernel methods on spike train space for neuroscience: a tutorial | 12 pages, 8 figures, accepted in IEEE Signal Processing Magazine | IEEE Signal Processing Magazine 30(4), 2013 | 10.1109/MSP.2013.2251072 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last decade several positive definite kernels have been proposed to
treat spike trains as objects in Hilbert space. However, for the most part,
such attempts still remain a mere curiosity for both computational
neuroscientists and signal processing experts. This tutorial illustrates why
kernel methods can, and have already started to, change the way spike trains
are analyzed and processed. The presentation incorporates simple mathematical
analogies and convincing practical examples in an attempt to show the yet
unexplored potential of positive definite functions to quantify point
processes. It also provides a detailed overview of the current state of the art
and future challenges with the hope of engaging the readers in active
participation.
| [
{
"created": "Sun, 24 Feb 2013 22:44:20 GMT",
"version": "v1"
}
] | 2013-10-16 | [
[
"Park",
"Il Memming",
""
],
[
"Seth",
"Sohan",
""
],
[
"Paiva",
"Antonio R. C.",
""
],
[
"Li",
"Lin",
""
],
[
"Principe",
"Jose C.",
""
]
] | Over the last decade several positive definite kernels have been proposed to treat spike trains as objects in Hilbert space. However, for the most part, such attempts still remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the yet unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges with the hope of engaging the readers in active participation. |
0908.3610 | Thimo Rohlf | Thimo Rohlf and Christopher R. Winkler | Emergent Network Structure, evolvable Robustness and non-linear Effects
of Point Mutations in an Artificial Genome Model | null | Advances in Complex Systems, Vol. 12, pp. 293 - 310 (2009) | null | null | q-bio.MN cond-mat.dis-nn cs.NE q-bio.GN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic regulation is a key component in development, but a clear
understanding of the structure and dynamics of genetic networks is not yet at
hand. In this paper we investigate these properties within an artificial genome
model originally introduced by Reil (1999). We analyze statistical properties
of randomly generated genomes both on the sequence- and network level, and show
that this model correctly predicts the frequency of genes in genomes as found
in experimental data. Using an evolutionary algorithm based on stabilizing
selection for a phenotype, we show that dynamical robustness against single
base mutations, as well as against random changes in initial states of
regulatory dynamics that mimic stochastic fluctuations in environmental
conditions, can emerge in parallel. Point mutations at the sequence level have
strongly non-linear effects on network wiring, including structurally neutral
mutations and simultaneous rewiring of multiple connections, which
occasionally lead to strong reorganization of the attractor landscape and
metastability of evolutionary dynamics. Evolved genomes exhibit characteristic
patterns on both sequence and network level.
| [
{
"created": "Tue, 25 Aug 2009 12:44:43 GMT",
"version": "v1"
}
] | 2009-08-26 | [
[
"Rohlf",
"Thimo",
""
],
[
"Winkler",
"Christopher R.",
""
]
] ] | Genetic regulation is a key component in development, but a clear understanding of the structure and dynamics of genetic networks is not yet at hand. In this paper we investigate these properties within an artificial genome model originally introduced by Reil (1999). We analyze statistical properties of randomly generated genomes both on the sequence- and network level, and show that this model correctly predicts the frequency of genes in genomes as found in experimental data. Using an evolutionary algorithm based on stabilizing selection for a phenotype, we show that dynamical robustness against single base mutations, as well as against random changes in initial states of regulatory dynamics that mimic stochastic fluctuations in environmental conditions, can emerge in parallel. Point mutations at the sequence level have strongly non-linear effects on network wiring, including structurally neutral mutations and simultaneous rewiring of multiple connections, which occasionally lead to strong reorganization of the attractor landscape and metastability of evolutionary dynamics. Evolved genomes exhibit characteristic patterns on both sequence and network level.
2201.07091 | Friedhelm Serwane | Katja A. Salbaum, Elijah R. Shelton, Friedhelm Serwane | Retina organoids: Window into the biophysics of neuronal systems | 41 pages, 2 Figures, This article may be downloaded for personal use
only. Any other use requires prior permission of the author and AIP
Publishing. This article appeared in Biophysics Rev. 3, 011302 (2022) and may
be found at https://aip.scitation.org/doi/10.1063/5.0077014 | Biophysics Rev. 3, 011302 (2022) | 10.1063/5.0077014 | null | q-bio.TO cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With a kind of magnetism, the human retina draws the eye of neuroscientist
and physicist alike. It is attractive as a self-organizing system, which forms
as a part of the central nervous system via biochemical and mechanical cues.
The retina is also intriguing as an electro-optical device, converting photons
into voltages to perform on-the-fly filtering before the signals are sent to
our brain. Here, we consider how the advent of stem cell derived in vitro
analogs of the retina, termed retina organoids, opens up an exploration of the
interplay between optics, electrics, and mechanics in a complex neuronal
network, all in a Petri dish. This review presents state-of-the-art retina
organoid protocols by emphasizing links to the biochemical and mechanical
signals of in vivo retinogenesis. Electrophysiological recording of active
signal processing becomes possible as retina organoids generate light sensitive
and synaptically connected photoreceptors. Experimental biophysical tools
provide data to steer the development of mathematical models operating at
different levels of coarse-graining. In concert, they provide a means to study
how mechanical factors guide retina self-assembly. In turn, this understanding
informs the engineering of mechanical signals required to tailor the growth of
neuronal network morphology. Tackling the complex developmental and
computational processes in the retina requires an interdisciplinary endeavor
combining experiment and theory, physics, and biology. The reward is enticing:
in the next few years, retina organoids could offer a glimpse inside the
machinery of simultaneous cellular self-assembly and signal processing, all in
an in vitro setting.
| [
{
"created": "Tue, 18 Jan 2022 16:13:04 GMT",
"version": "v1"
}
] | 2022-01-19 | [
[
"Salbaum",
"Katja A.",
""
],
[
"Shelton",
"Elijah R.",
""
],
[
"Serwane",
"Friedhelm",
""
]
] | With a kind of magnetism, the human retina draws the eye of neuroscientist and physicist alike. It is attractive as a self-organizing system, which forms as a part of the central nervous system via biochemical and mechanical cues. The retina is also intriguing as an electro-optical device, converting photons into voltages to perform on-the-fly filtering before the signals are sent to our brain. Here, we consider how the advent of stem cell derived in vitro analogs of the retina, termed retina organoids, opens up an exploration of the interplay between optics, electrics, and mechanics in a complex neuronal network, all in a Petri dish. This review presents state-of-the-art retina organoid protocols by emphasizing links to the biochemical and mechanical signals of in vivo retinogenesis. Electrophysiological recording of active signal processing becomes possible as retina organoids generate light sensitive and synaptically connected photoreceptors. Experimental biophysical tools provide data to steer the development of mathematical models operating at different levels of coarse-graining. In concert, they provide a means to study how mechanical factors guide retina self-assembly. In turn, this understanding informs the engineering of mechanical signals required to tailor the growth of neuronal network morphology. Tackling the complex developmental and computational processes in the retina requires an interdisciplinary endeavor combining experiment and theory, physics, and biology. The reward is enticing: in the next few years, retina organoids could offer a glimpse inside the machinery of simultaneous cellular self-assembly and signal processing, all in an in vitro setting. |
1905.12958 | Edmund Crampin | Peter J. Gawthrop, Peter Cudmore and Edmund J. Crampin | Physically-Plausible Modelling of Biomolecular Systems: A Simplified,
Energy-Based Model of the Mitochondrial Electron Transport Chain | null | Journal of Theoretical Biology, Vol. 493, May 2020 | 10.1016/j.jtbi.2020.110223 | null | q-bio.QM q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Systems biology and whole-cell modelling are demanding increasingly
comprehensive mathematical models of cellular biochemistry. These models
require the development of simplified models of specific processes which
capture essential biophysical features without unnecessary complexity.
Recently there has been renewed interest in thermodynamically-based modelling
of cellular processes. Here we present an approach to developing simplified
yet thermodynamically consistent (hence physically plausible) models which can
readily be incorporated into large scale biochemical descriptions but which do
not require full mechanistic detail of the underlying processes. We illustrate
the approach through development of a simplified, physically plausible model of
the mitochondrial electron transport chain and show that the simplified model
behaves like the full system.
| [
{
"created": "Thu, 30 May 2019 10:56:01 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jan 2020 03:07:37 GMT",
"version": "v2"
}
] | 2020-08-03 | [
[
"Gawthrop",
"Peter J.",
""
],
[
"Cudmore",
"Peter",
""
],
[
"Crampin",
"Edmund J.",
""
]
] ] | Systems biology and whole-cell modelling are demanding increasingly comprehensive mathematical models of cellular biochemistry. These models require the development of simplified models of specific processes which capture essential biophysical features without unnecessary complexity. Recently there has been renewed interest in thermodynamically-based modelling of cellular processes. Here we present an approach to developing simplified yet thermodynamically consistent (hence physically plausible) models which can readily be incorporated into large scale biochemical descriptions but which do not require full mechanistic detail of the underlying processes. We illustrate the approach through development of a simplified, physically plausible model of the mitochondrial electron transport chain and show that the simplified model behaves like the full system.
1711.01662 | Hassaan Majeed | Hassaan Majeed, Tan Huu Nguyen, Mikhail Eugene Kandel, Andre
Kajdacsy-Balla and Gabriel Popescu | Label-free quantitative screening of breast tissue using Spatial Light
Interference Microscopy (SLIM) | 5 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Breast cancer is the most common type of cancer among women worldwide. The
standard histopathology of breast tissue, the primary means of disease
diagnosis, involves manual microscopic examination of stained tissue by a
pathologist. Because this method relies on qualitative information, it can
result in inter-observer variation. Furthermore, for difficult cases the
pathologist often needs additional markers of malignancy to help in making a
diagnosis. We present a quantitative method for label-free tissue screening
using Spatial Light Interference Microscopy (SLIM). By extracting tissue
markers of malignancy based on the nanostructure revealed by the optical
path-length, our method provides an objective and potentially automatable
method for rapidly flagging suspicious tissue. We demonstrated our method by
imaging a tissue microarray comprising 68 different subjects - 34 with
malignant and 34 with benign tissues. Three-fold cross validation results
showed a sensitivity of 94% and specificity of 85% for detecting cancer. The
quantitative biomarkers we extract provide a repeatable and objective basis for
determining malignancy. Thus, these disease signatures can be automatically
classified through machine learning packages, since our images do not vary from
scan to scan or instrument to instrument, i.e., they represent intrinsic
physical attributes of the sample, independent of staining quality.
| [
{
"created": "Sun, 5 Nov 2017 21:20:44 GMT",
"version": "v1"
}
] | 2017-11-07 | [
[
"Majeed",
"Hassaan",
""
],
[
"Nguyen",
"Tan Huu",
""
],
[
"Kandel",
"Mikhail Eugene",
""
],
[
"Kajdacsy-Balla",
"Andre",
""
],
[
"Popescu",
"Gabriel",
""
]
] | Breast cancer is the most common type of cancer among women worldwide. The standard histopathology of breast tissue, the primary means of disease diagnosis, involves manual microscopic examination of stained tissue by a pathologist. Because this method relies on qualitative information, it can result in inter-observer variation. Furthermore, for difficult cases the pathologist often needs additional markers of malignancy to help in making a diagnosis. We present a quantitative method for label-free tissue screening using Spatial Light Interference Microscopy (SLIM). By extracting tissue markers of malignancy based on the nanostructure revealed by the optical path-length, our method provides an objective and potentially automatable method for rapidly flagging suspicious tissue. We demonstrated our method by imaging a tissue microarray comprising 68 different subjects - 34 with malignant and 34 with benign tissues. Three-fold cross validation results showed a sensitivity of 94% and specificity of 85% for detecting cancer. The quantitative biomarkers we extract provide a repeatable and objective basis for determining malignancy. Thus, these disease signatures can be automatically classified through machine learning packages, since our images do not vary from scan to scan or instrument to instrument, i.e., they represent intrinsic physical attributes of the sample, independent of staining quality. |
2009.11168 | Hongyi Li Dr. | Hongyi Li (1, 2), You Lv (2), Xiaoliang Chen (4), Bei Li (2), Qi Hua
(1), Fusui Ji (2), Yajun Yin (5), Hua Li (6) ((1) Cardiology Department,
Xuanwu Hospital, Capital Medical University, Xicheng, Beijing, China, (2)
Cardiology Department, Radiology Department, Beijing Hospital, National
Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of
Medical Science, Beijing, China (4) Radiology Department, China-Japan
Friendship Hospital, Beijing Hospital, China, (5) Department of Engineering
Mechanics, Tsinghua University, Beijing, China (6) Institute of Computing
Technology, Chinese Academy of Sciences, Beijing, China) | A multilayer interstitial fluid flow along vascular adventitia | 28 pages, 8 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Interstitial fluid flow through vascular adventitia has been
disclosed recently. However, its kinetic pattern was unclear. Methods and
Results: We used histological and topographical identifications to observe ISF
flow along venous vessels in rabbits. By MRI in live subjects, the inherent
ISF flow pathways in legs, abdomen and thorax were enhanced by paramagnetic
contrast from ankle dermis. By fluorescence stereomicroscopy and layer-by-layer
dissection after the rabbits were sacrificed, the perivascular and adventitial
connective tissues (PACT) along the saphenous veins and inferior vena cava were
found to be stained by sodium fluorescein from ankle dermis, which coincided
with the findings by MRI. By confocal microscopy and histological analysis, the
stained PACT pathways were verified to be the fibrous connective tissues and
consisted of longitudinally assembled fibers. By the use of nanoparticles and
surfactants, a PACT pathway was found to be accessible to nanoparticles under
100 nm and to contain two parts: a tunica channel part and an absorptive part. In
real-time observations, the calculated velocity of a continuous ISF flow along
fibers of a PACT pathway was 3.6-15.6 mm/sec. Conclusion: These data further
revealed the kinetic features of a continuous ISF flow along vascular vessels.
A multiscale, multilayer, and multiform interstitial/interfacial fluid flow
throughout perivascular and adventitial connective tissues was suggested as one
of the kinetic and dynamic mechanisms for ISF flow, which might be another
principal fluid dynamic pattern besides convective/vascular and diffusive
transport in biological systems.
| [
{
"created": "Wed, 23 Sep 2020 14:30:03 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Sep 2020 03:31:07 GMT",
"version": "v2"
}
] | 2020-09-25 | [
[
"Li",
"Hongyi",
""
],
[
"Lv",
"You",
""
],
[
"Chen",
"Xiaoliang",
""
],
[
"Li",
"Bei",
""
],
[
"Hua",
"Qi",
""
],
[
"Ji",
"Fusui",
""
],
[
"Yin",
"Yajun",
""
],
[
"Li",
"Hua",
""
]
] | Objective: Interstitial fluid flow through vascular adventitia has been disclosed recently. However, its kinetic pattern was unclear. Methods and Results: We used histological and topographical identifications to observe ISF flow along venous vessels in rabbits. By MRI in live subjects, the inherent ISF flow pathways in legs, abdomen and thorax were enhanced by paramagnetic contrast from ankle dermis. By fluorescence stereomicroscopy and layer-by-layer dissection after the rabbits were sacrificed, the perivascular and adventitial connective tissues (PACT) along the saphenous veins and inferior vena cava were found to be stained by sodium fluorescein from ankle dermis, which coincided with the findings by MRI. By confocal microscopy and histological analysis, the stained PACT pathways were verified to be the fibrous connective tissues and consisted of longitudinally assembled fibers. By the use of nanoparticles and surfactants, a PACT pathway was found to be accessible to nanoparticles under 100 nm and to contain two parts: a tunica channel part and an absorptive part. In real-time observations, the calculated velocity of a continuous ISF flow along fibers of a PACT pathway was 3.6-15.6 mm/sec. Conclusion: These data further revealed the kinetic features of a continuous ISF flow along vascular vessels. A multiscale, multilayer, and multiform interstitial/interfacial fluid flow throughout perivascular and adventitial connective tissues was suggested as one of the kinetic and dynamic mechanisms for ISF flow, which might be another principal fluid dynamic pattern besides convective/vascular and diffusive transport in biological systems. |
2408.04768 | Mattia Manica | Francesca Rovida, Marino Faccini, Carla Molina Gran\'e, Irene
Cassaniti, Sabrina Senatore, Eva Rossetti, Giuditta Scardina, Manuela Piazza,
Giulia Campanini, Daniele Lilleri, Stefania Paolucci, Guglielmo Ferrari,
Antonio Piralla, Francesco Defilippo, Davide Lelli, Ana Moreno, Luigi
Vezzosi, Federica Attanasi, Soresini Marzia, Barozzi Manuela, Lorenzo
Cerutti, Stefano Paglia, Angelo Regazzetti, Maurilia Marcacci, Guido Di
Donato, Marco Farioli, Mattia Manica, Piero Poletti, Antonio Lavazza, Maira
Bonini, Stefano Merler, Fausto Baldanti, Danilo Cereda, Lombardy Dengue
network | The 2023 Dengue Outbreak in Lombardy, Italy: A One-Health Perspective | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Introduction. Here we report the virological, entomological and
epidemiological characteristics of the large autochthonous outbreak of dengue
(DENV) that occurred in a small village of the Lombardy region (Northern Italy)
during the summer of 2023.
Methods. After the diagnosis of the first autochthonous case on 18 August
2023, public health measures, including epidemiological investigation and
vector control measures, were carried out. A serological screening for the
detection of DENV antibodies was offered to the population. In the case of a
positive DENV IgM result, a second sample was collected to detect DENV RNA and verify
seroconversion. Entomological and epidemiological investigations were also
performed. A modeling analysis was conducted to estimate the dengue generation
time, transmission potential, distance of transmission, and assess diagnostic
delays.
Results. Overall, 416 subjects participated in the screening program and 20
were identified as DENV-1 cases (15 confirmed and 5 probable). In addition,
DENV-1 infection was diagnosed in 24 symptomatic subjects referred to the local
Emergency Room Department for suggestive symptoms and 1 case was identified
through blood donation screening. The average generation time was estimated to
be 18.3 days (95% CI: 13.1-23.5 days). R0 was estimated at 1.31 (95% CI:
0.76-1.98); 90% of transmission occurred within 500m. Entomological
investigations performed in 46 pools of mosquitoes revealed the presence of
only one positive pool for DENV-1.
Discussion. This report highlights the importance of synergistic surveillance,
including virological, entomological and public health measures to control the
spread of arboviral infections.
| [
{
"created": "Thu, 8 Aug 2024 21:49:39 GMT",
"version": "v1"
}
] | 2024-08-12 | [
[
"Rovida",
"Francesca",
""
],
[
"Faccini",
"Marino",
""
],
[
"Grané",
"Carla Molina",
""
],
[
"Cassaniti",
"Irene",
""
],
[
"Senatore",
"Sabrina",
""
],
[
"Rossetti",
"Eva",
""
],
[
"Scardina",
"Giuditta",
"... | Introduction. Here we report the virological, entomological and epidemiological characteristics of the large autochthonous outbreak of dengue (DENV) that occurred in a small village of the Lombardy region (Northern Italy) during the summer of 2023. Methods. After the diagnosis of the first autochthonous case on 18 August 2023, public health measures, including epidemiological investigation and vector control measures, were carried out. A serological screening for the detection of DENV antibodies was offered to the population. In the case of a positive DENV IgM result, a second sample was collected to detect DENV RNA and verify seroconversion. Entomological and epidemiological investigations were also performed. A modeling analysis was conducted to estimate the dengue generation time, transmission potential, distance of transmission, and assess diagnostic delays. Results. Overall, 416 subjects participated in the screening program and 20 were identified as DENV-1 cases (15 confirmed and 5 probable). In addition, DENV-1 infection was diagnosed in 24 symptomatic subjects referred to the local Emergency Room Department for suggestive symptoms and 1 case was identified through blood donation screening. The average generation time was estimated to be 18.3 days (95% CI: 13.1-23.5 days). R0 was estimated at 1.31 (95% CI: 0.76-1.98); 90% of transmission occurred within 500m. Entomological investigations performed in 46 pools of mosquitoes revealed the presence of only one positive pool for DENV-1. Discussion. This report highlights the importance of synergistic surveillance, including virological, entomological and public health measures to control the spread of arboviral infections. |
2011.12676 | Mohammadreza Edalati | Mohammadreza Edalati, Mahdi Mahmoudzadeh, Javad Safaie, Fabrice
Wallois, Sahar Moghimi | Great expectations in music: violation of rhythmic expectancies elicits
late frontal gamma activity nested in theta oscillations | null | null | 10.1111/psyp.13909 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rhythm processing involves building expectations according to the
hierarchical temporal structure of auditory events. Although rhythm processing
has been addressed in the context of predictive coding, the properties of the
oscillatory response in different cortical areas are still not clear. We
explored the oscillatory properties of the neural response to rhythmic
incongruence and explored the cross-frequency coupling between multiple
frequencies to provide links between the concepts of predictive coding and
rhythm perception. We designed an experiment to investigate the neural response
to rhythmic deviations in which the tone either arrived earlier than expected
or the tone in the same metrical position was omitted. These two manipulations
modulate the rhythmic structure differently, with the former creating a larger
violation of the general structure of the musical stimulus than the latter.
Both deviations resulted in an MMN response, whereas only the rhythmic deviant
resulted in a subsequent P3a. Rhythmic deviants due to the early occurrence of
a tone, but not omission deviants, elicited a late high gamma response (60-80
Hz) at the end of the P3a over the left frontal region, which, interestingly,
correlated with the P3a amplitude over the same region and was also nested in
theta oscillations. The timing of the elicited high-frequency gamma
oscillations related to rhythmic deviation suggests that it might be related to
the update of the predictive neural model, corresponding to the temporal
structure of the events in higher-level cortical areas.
| [
{
"created": "Wed, 25 Nov 2020 12:11:40 GMT",
"version": "v1"
}
] | 2021-10-19 | [
[
"Edalati",
"Mohammadreza",
""
],
[
"Mahmoudzadeh",
"Mahdi",
""
],
[
"Safaie",
"Javad",
""
],
[
"Wallois",
"Fabrice",
""
],
[
"Moghimi",
"Sahar",
""
]
] | Rhythm processing involves building expectations according to the hierarchical temporal structure of auditory events. Although rhythm processing has been addressed in the context of predictive coding, the properties of the oscillatory response in different cortical areas are still not clear. We explored the oscillatory properties of the neural response to rhythmic incongruence and explored the cross-frequency coupling between multiple frequencies to provide links between the concepts of predictive coding and rhythm perception. We designed an experiment to investigate the neural response to rhythmic deviations in which the tone either arrived earlier than expected or the tone in the same metrical position was omitted. These two manipulations modulate the rhythmic structure differently, with the former creating a larger violation of the general structure of the musical stimulus than the latter. Both deviations resulted in an MMN response, whereas only the rhythmic deviant resulted in a subsequent P3a. Rhythmic deviants due to the early occurrence of a tone, but not omission deviants, elicited a late high gamma response (60-80 Hz) at the end of the P3a over the left frontal region, which, interestingly, correlated with the P3a amplitude over the same region and was also nested in theta oscillations. The timing of the elicited high-frequency gamma oscillations related to rhythmic deviation suggests that it might be related to the update of the predictive neural model, corresponding to the temporal structure of the events in higher-level cortical areas. |
1002.3907 | Thierry Mora | Thierry Mora and Ned S. Wingreen | Limits of sensing temporal concentration changes by single cells | 11 pages, 2 figures | Phys. Rev. Lett. 104, 248101 (2010) | 10.1103/PhysRevLett.104.248101 | null | q-bio.MN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Berg and Purcell [Biophys. J. 20, 193 (1977)] calculated how the accuracy of
concentration sensing by single-celled organisms is limited by noise from the
small number of counted molecules. Here we generalize their results to the
sensing of concentration ramps, which is often the biologically relevant
situation (e.g. during bacterial chemotaxis). We calculate lower bounds on the
uncertainty of ramp sensing by three measurement devices: a single receptor, an
absorbing sphere, and a monitoring sphere. We contrast two strategies, simple
linear regression of the input signal versus maximum likelihood estimation, and
show that the latter can be twice as accurate as the former. Finally, we
consider biological implementations of these two strategies, and identify
possible signatures that maximum likelihood estimation is implemented by real
biological systems.
| [
{
"created": "Sat, 20 Feb 2010 16:54:08 GMT",
"version": "v1"
}
] | 2011-11-28 | [
[
"Mora",
"Thierry",
""
],
[
"Wingreen",
"Ned S.",
""
]
] | Berg and Purcell [Biophys. J. 20, 193 (1977)] calculated how the accuracy of concentration sensing by single-celled organisms is limited by noise from the small number of counted molecules. Here we generalize their results to the sensing of concentration ramps, which is often the biologically relevant situation (e.g. during bacterial chemotaxis). We calculate lower bounds on the uncertainty of ramp sensing by three measurement devices: a single receptor, an absorbing sphere, and a monitoring sphere. We contrast two strategies, simple linear regression of the input signal versus maximum likelihood estimation, and show that the latter can be twice as accurate as the former. Finally, we consider biological implementations of these two strategies, and identify possible signatures that maximum likelihood estimation is implemented by real biological systems. |
2403.15405 | Elodie Germani | Elodie Germani (EMPENN, LACODAM), Nikhil Baghwat, Mathieu Dugr\'e
(CSE), R\'emi Gau, Albert Montillo, Kevin Nguyen, Andrzej Sokolowski (CSE),
Madeleine Sharp, Jean-Baptiste Poline, Tristan Glatard (CSE) | Predicting Parkinson's disease trajectory using clinical and functional
MRI features: a reproduction and replication study | null | null | null | null | q-bio.NC cs.AI eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parkinson's disease (PD) is a common neurodegenerative disorder with a poorly
understood physiopathology and no established biomarkers for the diagnosis of
early stages and for prediction of disease progression. Several neuroimaging
biomarkers have been studied recently, but these are susceptible to several
sources of variability. In this context, an evaluation of the robustness of
such biomarkers is essential. This study is part of a larger project
investigating the replicability of potential neuroimaging biomarkers of PD.
Here, we attempt to reproduce (same data, same method) and replicate (different
data or method) the models described in Nguyen et al., 2021 to predict
individuals' current PD state and progression using demographic, clinical and
neuroimaging features (fALFF and ReHo extracted from resting-state fMRI). We
use the Parkinson's Progression Markers Initiative dataset (PPMI,
ppmi-info.org), as in Nguyen et al., 2021, and aim to reproduce the original
cohort, imaging features and machine learning models as closely as possible
using the information available in the paper and the code. We also investigated
methodological variations in cohort selection, feature extraction pipelines and
sets of input features. The success of the reproduction was assessed using
different criteria. Notably, we obtained significantly better than chance
performance using the analysis pipeline closest to that in the original study
(R2 > 0), which is consistent with its findings. The challenges encountered
while reproducing and replicating the original work are likely explained by the
complexity of neuroimaging studies, in particular in clinical settings. We
provide recommendations to further facilitate the reproducibility of such
studies in the future.
| [
{
"created": "Tue, 20 Feb 2024 13:42:50 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2024 11:33:01 GMT",
"version": "v2"
}
] | 2024-05-27 | [
[
"Germani",
"Elodie",
"",
"EMPENN, LACODAM"
],
[
"Baghwat",
"Nikhil",
"",
"CSE"
],
[
"Dugré",
"Mathieu",
"",
"CSE"
],
[
"Gau",
"Rémi",
"",
"CSE"
],
[
"Montillo",
"Albert",
"",
"CSE"
],
[
"Nguyen",
"Kevin",
... | Parkinson's disease (PD) is a common neurodegenerative disorder with a poorly understood physiopathology and no established biomarkers for the diagnosis of early stages and for prediction of disease progression. Several neuroimaging biomarkers have been studied recently, but these are susceptible to several sources of variability. In this context, an evaluation of the robustness of such biomarkers is essential. This study is part of a larger project investigating the replicability of potential neuroimaging biomarkers of PD. Here, we attempt to reproduce (same data, same method) and replicate (different data or method) the models described in Nguyen et al., 2021 to predict individuals' current PD state and progression using demographic, clinical and neuroimaging features (fALFF and ReHo extracted from resting-state fMRI). We use the Parkinson's Progression Markers Initiative dataset (PPMI, ppmi-info.org), as in Nguyen et al., 2021, and aim to reproduce the original cohort, imaging features and machine learning models as closely as possible using the information available in the paper and the code. We also investigated methodological variations in cohort selection, feature extraction pipelines and sets of input features. The success of the reproduction was assessed using different criteria. Notably, we obtained significantly better than chance performance using the analysis pipeline closest to that in the original study (R2 > 0), which is consistent with its findings. The challenges encountered while reproducing and replicating the original work are likely explained by the complexity of neuroimaging studies, in particular in clinical settings. We provide recommendations to further facilitate the reproducibility of such studies in the future. |
2012.04399 | William Streilein | Ted Londner, Jonathan Saunders, Dieter W. Schuldt, and Bill Streilein | SimAEN -- Simulated Automated Exposure Notification | 22 pages, 7 figures, appendix of parameters | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mitigation strategies that remove infectious individuals from the greater
population have to balance their efficacy with the economic effects associated
with quarantine and have to contend with the limited resources available to the
public health authorities. Prior strategies have relied on testing and contact
tracing to find individuals before they become infectious and to limit
their interactions with others until after their infectious period has passed.
Manual contact tracing is a public health intervention where individuals
testing positive are interviewed to identify other members of the community who
they may have come into contact with. These interviews can take a significant
amount of time that has to be tallied in the overall accounting of the outbreak
cost. The concept of contact tracing has been expanded recently into Automated
Exposure Notification whereby cellphones can be used as sensor platforms to log
close contacts and notify the owner in the event that one of their close
contacts tests positive. The intention is that this notification will prompt
the person to be tested and then restrict their interactions with others until
their status is determined. In this paper we describe our efforts to
investigate the effectiveness of contact tracing interventions on controlling
an outbreak. This is accomplished by creating a model of disease spread and
then observing the impact that simulated tracing and testing have on the number
of infected individuals. Model parameters are explored to identify critical
transition points where interventions become effective. We estimate the
benefits as well as costs in order to offer insight to public health officials
as they select courses of action.
| [
{
"created": "Tue, 8 Dec 2020 12:36:27 GMT",
"version": "v1"
}
] | 2020-12-09 | [
[
"Londner",
"Ted",
""
],
[
"Saunders",
"Jonathan",
""
],
[
"Schuldt",
"Dieter W.",
""
],
[
"Streilein",
"Bill",
""
]
] | Mitigation strategies that remove infectious individuals from the greater population have to balance their efficacy with the economic effects associated with quarantine and have to contend with the limited resources available to the public health authorities. Prior strategies have relied on testing and contact tracing to find individuals before they become infectious and in order to limit their interactions with others until after their infectious period has passed. Manual contact tracing is a public health intervention where individuals testing positive are interviewed to identify other members of the community who they may have come into contact with. These interviews can take a significant amount of time that has to be tallied in the overall accounting of the outbreak cost. The concept of contact tracing has been expanded recently into Automated Exposure Notification whereby cellphones can be used as sensor platforms to log close contacts and notify the owner in the event that one of their close contacts tests positive. The intention is that this notification will prompt the person to be tested and then restrict their interactions with others until their status is determined. In this paper we describe our efforts to investigate the effectiveness of contact tracing interventions on controlling an outbreak. This is accomplished by creating a model of disease spread and then observing the impact that simulated tracing and testing have on the number of infected individuals. Model parameters are explored to identify critical transition points where interventions become effective. We estimate the benefits as well as costs in order to offer insight to public health officials as they select courses of action. |
2209.12640 | Nicola Calonaci | Nicola Calonaci, Mattia Bernetti, Alisha Jones, Michael Sattler and
Giovanni Bussi | Molecular dynamics simulations with grand-canonical reweighting suggest
cooperativity effects in RNA structure probing experiments | null | null | null | null | q-bio.BM physics.app-ph physics.bio-ph physics.chem-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chemical probing experiments such as SHAPE are routinely used to probe RNA
molecules. In this work, we use atomistic molecular dynamics simulations to
test the hypothesis that binding of RNA with SHAPE reagents is affected by
cooperative effects leading to an observed reactivity that is dependent on the
reagent concentration. We develop a general technique that enables the
calculation of the affinity for arbitrary molecules as a function of their
concentration in the grand-canonical ensemble. Our simulations of an RNA
structural motif suggest that, at the concentration typically used in SHAPE
experiments, cooperative binding would lead to a measurable
concentration-dependent reactivity. We also provide a qualitative validation of
this statement by analyzing a new set of experiments collected at different
reagent concentrations.
| [
{
"created": "Mon, 26 Sep 2022 12:40:28 GMT",
"version": "v1"
}
] | 2022-09-27 | [
[
"Calonaci",
"Nicola",
""
],
[
"Bernetti",
"Mattia",
""
],
[
"Jones",
"Alisha",
""
],
[
"Sattler",
"Michael",
""
],
[
"Bussi",
"Giovanni",
""
]
] | Chemical probing experiments such as SHAPE are routinely used to probe RNA molecules. In this work, we use atomistic molecular dynamics simulations to test the hypothesis that binding of RNA with SHAPE reagents is affected by cooperative effects leading to an observed reactivity that is dependent on the reagent concentration. We develop a general technique that enables the calculation of the affinity for arbitrary molecules as a function of their concentration in the grand-canonical ensemble. Our simulations of an RNA structural motif suggest that, at the concentration typically used in SHAPE experiments, cooperative binding would lead to a measurable concentration-dependent reactivity. We also provide a qualitative validation of this statement by analyzing a new set of experiments collected at different reagent concentrations. |
2404.14353 | Pragatheiswar Giri | Praveen Sahu, Ignacio G. Camarillo, Pragatheiswar Giri and Raji
Sundararajan | Electroporation-mediated Metformin for effective anticancer treatment of
triple-negative breast cancer cells | null | null | null | null | q-bio.BM q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research, we investigated the efficacy of Metformin, the most
commonly administered type-2 diabetes drug, for triple-negative breast cancer
(TNBC) treatment, owing to its various anticancer properties. Metformin is a
plant-based biocompound, a biguanide (dimethyl biguanide). One of the ways it
operates is by hindering electron transport chain complex I in mitochondria,
which causes a drop in energy (ATP) generation. This eventually builds
energetic stress and a decline in energy, thereby obstructing natural cellular
processes and the proliferation of tumor cells. Here, we used electroporation,
in which MDA-MB-231 human TNBC cells were subjected to high-intensity,
short-duration electrical pulses (EP) in the presence of Metformin. The
results indicate a lower cell viability of 43.45%, compared to 85.20% with the
drug alone at 5 mM concentration. This indicates that Metformin, the most
common diabetes drug, could also be explored for cancer treatment.
| [
{
"created": "Mon, 22 Apr 2024 17:05:06 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Sahu",
"Praveen",
""
],
[
"Camarillo",
"Ignacio G.",
""
],
[
"Giri",
"Pragatheiswar",
""
],
[
"Sundararajan",
"Raji",
""
]
] | In this research, we investigated the efficacy of Metformin, the most commonly administered type-2 diabetes drug, for triple-negative breast cancer (TNBC) treatment, owing to its various anticancer properties. Metformin is a plant-based biocompound, a biguanide (dimethyl biguanide). One of the ways it operates is by hindering electron transport chain complex I in mitochondria, which causes a drop in energy (ATP) generation. This eventually builds energetic stress and a decline in energy, thereby obstructing natural cellular processes and the proliferation of tumor cells. Here, we used electroporation, in which MDA-MB-231 human TNBC cells were subjected to high-intensity, short-duration electrical pulses (EP) in the presence of Metformin. The results indicate a lower cell viability of 43.45%, compared to 85.20% with the drug alone at 5 mM concentration. This indicates that Metformin, the most common diabetes drug, could also be explored for cancer treatment. |
1904.07860 | Andrzej Z. Gorski | Andrzej Z. G\'orski and Monika Piwowar | Base spacing distribution analysis for human genome | LaTeX, 9 figures | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The distribution of base spacing in the human genome was investigated. An
analysis of the frequency of occurrence in the human genome of different
sequence lengths flanked by one type of nucleotide was carried out showing that
the distribution has no self-similar (fractal) structure. The results
nevertheless revealed several characteristic features: (i) the distribution for
short-range spacing is quite similar to the purely stochastic sequences; (ii)
the distribution for long-range spacing essentially deviates from the random
sequence distribution, showing strong long-range correlations; (iii) the
differences between (A, T) and (C, G) bases are quite significant; (iv) the
spacing distribution displays tiny oscillations.
| [
{
"created": "Tue, 16 Apr 2019 10:32:38 GMT",
"version": "v1"
}
] | 2019-04-18 | [
[
"Górski",
"Andrzej Z.",
""
],
[
"Piwowar",
"Monika",
""
]
] | The distribution of base spacing in the human genome was investigated. An analysis of the frequency of occurrence in the human genome of different sequence lengths flanked by one type of nucleotide was carried out showing that the distribution has no self-similar (fractal) structure. The results nevertheless revealed several characteristic features: (i) the distribution for short-range spacing is quite similar to the purely stochastic sequences; (ii) the distribution for long-range spacing essentially deviates from the random sequence distribution, showing strong long-range correlations; (iii) the differences between (A, T) and (C, G) bases are quite significant; (iv) the spacing distribution displays tiny oscillations. |
2104.00520 | Betul Guvenc Paltun | Bet\"ul G\"uven\c{c} Paltun, Samuel Kaski and Hiroshi Mamitsuka | DIVERSE: Bayesian Data IntegratiVE learning for precise drug ResponSE
prediction | null | Guvencpaltun, B., Kaski, S., & Mamitsuka, H. (2021). DIVERSE:
Bayesian Data IntegratiVE learning for precise drug ResponSE prediction.
IEEE/ACM Transactions on Computational Biology and Bioinformatics, (01), 1-1 | 10.1109/TCBB.2021.3065535 | null | q-bio.QM cs.LG q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Detecting predictive biomarkers from multi-omics data is important for
precision medicine, to improve diagnostics of complex diseases and for better
treatments. This needs substantial experimental efforts that are made difficult
by the heterogeneity of cell lines and the huge cost. An effective solution is to
build a computational model over the diverse omics data, including genomic,
molecular, and environmental information. However, choosing informative and
reliable data sources from among the different types of data is a challenging
problem. We propose DIVERSE, a framework of Bayesian importance-weighted tri-
and bi-matrix factorization (DIVERSE3 or DIVERSE2) to predict drug responses
from data of cell lines, drugs, and gene interactions. DIVERSE integrates the
data sources systematically, in a step-wise manner, examining the importance of
each added data set in turn. More specifically, we sequentially integrate five
different data sets, which have not all been combined in earlier bioinformatic
methods for predicting drug responses. Empirical experiments show that DIVERSE
clearly outperformed five other methods including three state-of-the-art
approaches, under cross-validation, particularly in out-of-matrix prediction,
which is closer to the setting of real use cases and more challenging than
simpler in-matrix prediction. Additionally, case studies for discovering new
drugs further confirmed the performance advantage of DIVERSE.
| [
{
"created": "Wed, 31 Mar 2021 12:40:00 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jun 2021 14:26:59 GMT",
"version": "v2"
}
] | 2021-06-08 | [
[
"Paltun",
"Betül Güvenç",
""
],
[
"Kaski",
"Samuel",
""
],
[
"Mamitsuka",
"Hiroshi",
""
]
] | Detecting predictive biomarkers from multi-omics data is important for precision medicine, to improve diagnostics of complex diseases and for better treatments. This needs substantial experimental efforts that are made difficult by the heterogeneity of cell lines and huge cost. An effective solution is to build a computational model over the diverse omics data, including genomic, molecular, and environmental information. However, choosing informative and reliable data sources from among the different types of data is a challenging problem. We propose DIVERSE, a framework of Bayesian importance-weighted tri- and bi-matrix factorization(DIVERSE3 or DIVERSE2) to predict drug responses from data of cell lines, drugs, and gene interactions. DIVERSE integrates the data sources systematically, in a step-wise manner, examining the importance of each added data set in turn. More specifically, we sequentially integrate five different data sets, which have not all been combined in earlier bioinformatic methods for predicting drug responses. Empirical experiments show that DIVERSE clearly outperformed five other methods including three state-of-the-art approaches, under cross-validation, particularly in out-of-matrix prediction, which is closer to the setting of real use cases and more challenging than simpler in-matrix prediction. Additionally, case studies for discovering new drugs further confirmed the performance advantage of DIVERSE. |
2011.01739 | Omar El Deeb | Omar El Deeb | Spatial autocorrelation and the dynamics of the mean center of COVID-19
infections in Lebanon | 15 pages, 6 figures, 1 table | Front. Appl. Math. Stat., 13 (2021) | 10.3389/fams.2020.620064 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study the spatial spread of the COVID-19 infection in
Lebanon. We inspect the spreading of the daily new infections across the 26
administrative districts of the country, and implement Moran's $I$ statistics
in order to analyze the tempo-spatial clustering of the infection in relation
to various variables parameterized by adjacency, proximity, population,
population density, poverty rate and poverty density. We find that, except for
the poverty rate, the spread of the infection is clustered and associated with
those parameters with varying magnitude for the time span between
July (geographic adjacency and proximity) or August (population, population
density and poverty density) through October. We also determine the temporal
dynamics of geographic location of the mean center of new and cumulative
infections since late March. The results obtained allow for regionally and
locally adjusted health policies and measures that would provide higher levels
of public health safety in the country.
| [
{
"created": "Mon, 2 Nov 2020 10:27:45 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Deeb",
"Omar El",
""
]
] | In this paper we study the spatial spread of the COVID-19 infection in Lebanon. We inspect the spreading of the daily new infections across the 26 administrative districts of the country, and implement Moran's $I$ statistics in order to analyze the tempo-spatial clustering of the infection in relation to various variables parameterized by adjacency, proximity, population, population density, poverty rate and poverty density, and we find out that except for the poverty rate, the spread of the infection is clustered and associated to those parameters with varying magnitude for the time span between July (geographic adjacency and proximity) or August (population, population density and poverty density) through October. We also determine the temporal dynamics of geographic location of the mean center of new and cumulative infections since late March. The results obtained allow for regionally and locally adjusted health policies and measures that would provide higher levels of public health safety in the country. |
1503.07469 | Subutai Ahmad | Subutai Ahmad and Jeff Hawkins | Properties of Sparse Distributed Representations and their Application
to Hierarchical Temporal Memory | null | null | null | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Empirical evidence demonstrates that every region of the neocortex represents
information using sparse activity patterns. This paper examines Sparse
Distributed Representations (SDRs), the primary information representation
strategy in Hierarchical Temporal Memory (HTM) systems and the neocortex. We
derive a number of properties that are core to scaling, robustness, and
generalization. We use the theory to provide practical guidelines and
illustrate the power of SDRs as the basis of HTM. Our goal is to help create a
unified mathematical and practical framework for SDRs as it relates to cortical
function.
| [
{
"created": "Wed, 25 Mar 2015 17:36:05 GMT",
"version": "v1"
}
] | 2015-03-26 | [
[
"Ahmad",
"Subutai",
""
],
[
"Hawkins",
"Jeff",
""
]
] | Empirical evidence demonstrates that every region of the neocortex represents information using sparse activity patterns. This paper examines Sparse Distributed Representations (SDRs), the primary information representation strategy in Hierarchical Temporal Memory (HTM) systems and the neocortex. We derive a number of properties that are core to scaling, robustness, and generalization. We use the theory to provide practical guidelines and illustrate the power of SDRs as the basis of HTM. Our goal is to help create a unified mathematical and practical framework for SDRs as it relates to cortical function. |
1507.03511 | Su-Chan Park | Su-Chan Park, Johannes Neidhart, Joachim Krug | Greedy adaptive walks on a correlated fitness landscape | minor changes | J. Theor. Biol. 397, 89 (2016) | 10.1016/j.jtbi.2016.02.035 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study adaptation of a haploid asexual population on a fitness landscape
defined over binary genotype sequences of length $L$. We consider greedy
adaptive walks in which the population moves to the fittest among all single
mutant neighbors of the current genotype until a local fitness maximum is
reached. The landscape is of the rough mount Fuji type, which means that the
fitness value assigned to a sequence is the sum of a random and a deterministic
component. The random components are independent and identically distributed
random variables, and the deterministic component varies linearly with the
distance to a reference sequence. The deterministic fitness gradient $c$ is a
parameter that interpolates between the limits of an uncorrelated random
landscape ($c = 0$) and an effectively additive landscape ($c \to \infty$).
When the random fitness component is chosen from the Gumbel distribution,
explicit expressions for the distribution of the number of steps taken by the
greedy walk are obtained, and it is shown that the walk length varies
non-monotonically with the strength of the fitness gradient when the starting
point is sufficiently close to the reference sequence. Asymptotic results for
general distributions of the random fitness component are obtained using
extreme value theory, and it is found that the walk length attains a
non-trivial limit for $L \to \infty$, different from its values for $c=0$ and
$c = \infty$, if $c$ is scaled with $L$ in an appropriate combination.
| [
{
"created": "Mon, 13 Jul 2015 16:22:12 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Mar 2016 06:29:19 GMT",
"version": "v2"
}
] | 2016-03-21 | [
[
"Park",
"Su-Chan",
""
],
[
"Neidhart",
"Johannes",
""
],
[
"Krug",
"Joachim",
""
]
] | We study adaptation of a haploid asexual population on a fitness landscape defined over binary genotype sequences of length $L$. We consider greedy adaptive walks in which the population moves to the fittest among all single mutant neighbors of the current genotype until a local fitness maximum is reached. The landscape is of the rough mount Fuji type, which means that the fitness value assigned to a sequence is the sum of a random and a deterministic component. The random components are independent and identically distributed random variables, and the deterministic component varies linearly with the distance to a reference sequence. The deterministic fitness gradient $c$ is a parameter that interpolates between the limits of an uncorrelated random landscape ($c = 0$) and an effectively additive landscape ($c \to \infty$). When the random fitness component is chosen from the Gumbel distribution, explicit expressions for the distribution of the number of steps taken by the greedy walk are obtained, and it is shown that the walk length varies non-monotonically with the strength of the fitness gradient when the starting point is sufficiently close to the reference sequence. Asymptotic results for general distributions of the random fitness component are obtained using extreme value theory, and it is found that the walk length attains a non-trivial limit for $L \to \infty$, different from its values for $c=0$ and $c = \infty$, if $c$ is scaled with $L$ in an appropriate combination. |
0903.3083 | Thomas Kreuz | T. Kreuz, D. Chicharro, R.G. Andrzejak, J.S. Haas, H.D.I. Abarbanel | Measuring multiple spike train synchrony | 15 pages, 17 figures, 30 references Changes: Abstract corrected, one
Figure and one Section in Appendix added, plus some minor corrections (Final
Version) | J Neurosci Methods 183, 287 (2009) | 10.1016/j.jneumeth.2009.06.039 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measures of multiple spike train synchrony are essential in order to study
issues such as spike timing reliability, network synchronization, and neuronal
coding. These measures can broadly be divided into multivariate measures and
averages over bivariate measures. One of the most recent bivariate approaches,
the ISI-distance, employs the ratio of instantaneous interspike intervals. In
this study we propose two extensions of the ISI-distance, the straightforward
averaged bivariate ISI-distance and the multivariate ISI-diversity based on the
coefficient of variation. Like the original measure these extensions combine
many properties desirable in applications to real data. In particular, they are
parameter free, time scale independent, and easy to visualize in a
time-resolved manner, as we illustrate with in vitro recordings from a cortical
neuron. Using a simulated network of Hindmarsh-Rose neurons as a controlled
configuration we compare the performance of our methods in distinguishing
different levels of multi-neuron spike train synchrony to the performance of
six other previously published measures. We show and explain why the averaged
bivariate measures perform better than the multivariate ones and why the
multivariate ISI-diversity is the best performer among the multivariate
methods. Finally, in a comparison against standard methods that rely on moving
window estimates, we use single-unit monkey data to demonstrate the advantages
of the instantaneous nature of our methods.
| [
{
"created": "Wed, 18 Mar 2009 03:59:05 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jul 2009 17:10:41 GMT",
"version": "v2"
}
] | 2012-12-11 | [
[
"Kreuz",
"T.",
""
],
[
"Chicharro",
"D.",
""
],
[
"Andrzejak",
"R. G.",
""
],
[
"Haas",
"J. S.",
""
],
[
"Abarbanel",
"H. D. I.",
""
]
] | Measures of multiple spike train synchrony are essential in order to study issues such as spike timing reliability, network synchronization, and neuronal coding. These measures can broadly be divided into multivariate measures and averages over bivariate measures. One of the most recent bivariate approaches, the ISI-distance, employs the ratio of instantaneous interspike intervals. In this study we propose two extensions of the ISI-distance, the straightforward averaged bivariate ISI-distance and the multivariate ISI-diversity based on the coefficient of variation. Like the original measure these extensions combine many properties desirable in applications to real data. In particular, they are parameter free, time scale independent, and easy to visualize in a time-resolved manner, as we illustrate with in vitro recordings from a cortical neuron. Using a simulated network of Hindmarsh-Rose neurons as a controlled configuration we compare the performance of our methods in distinguishing different levels of multi-neuron spike train synchrony to the performance of six other previously published measures. We show and explain why the averaged bivariate measures perform better than the multivariate ones and why the multivariate ISI-diversity is the best performer among the multivariate methods. Finally, in a comparison against standard methods that rely on moving window estimates, we use single-unit monkey data to demonstrate the advantages of the instantaneous nature of our methods.
2010.06468 | Ezequiel Alvarez | Ezequiel Alvarez, Daniela Obando, Sebastian Crespo, Enio Garcia,
Nicolas Kreplak and Franco Marsico | Estimating COVID-19 cases and outbreaks on-stream through phone-calls | 16 pages, 8 figs. Includes details on the Villa Azul outbreak in
Argentina | null | null | ICAS 054/20 | q-bio.PE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main problems in controlling COVID-19 epidemic spread is the delay
in confirming cases. Having information on changes in the epidemic evolution or
on rising outbreaks before lab confirmation is crucial in decision making for Public
Health policies. We present an algorithm to estimate on-stream the number of
COVID-19 cases using the data from telephone calls to a COVID-line. By modeling
the calls as background (proportional to population) plus signal (proportional
to infected), we fit the calls in Province of Buenos Aires (Argentina) with
coefficient of determination $R^2 > 0.85$. This result allows us to estimate
the number of cases given the number of calls from a specific district, days
before the lab results are available. We validate the algorithm with real data.
We show how to use the algorithm to track on-stream the epidemic, and present
the Early Outbreak Alarm to detect outbreaks in advance of lab results. One key
point in the developed algorithm is a detailed track of the uncertainties in
the estimations, since the alarm uses the significance of the observables as a
main indicator to detect an anomaly. We present the details of the explicit
example in Villa Azul (Quilmes) where this tool resulted crucial to control an
outbreak on time. The presented tools have been designed in urgency with the
available data at the time of the development, and therefore have their
limitations which we describe and discuss. We consider possible improvements on
the tools, many of which are currently under development.
| [
{
"created": "Sat, 10 Oct 2020 15:44:05 GMT",
"version": "v1"
}
] | 2020-10-14 | [
[
"Alvarez",
"Ezequiel",
""
],
[
"Obando",
"Daniela",
""
],
[
"Crespo",
"Sebastian",
""
],
[
"Garcia",
"Enio",
""
],
[
"Kreplak",
"Nicolas",
""
],
[
"Marsico",
"Franco",
""
]
] | One of the main problems in controlling COVID-19 epidemic spread is the delay in confirming cases. Having information on changes in the epidemic evolution or outbreaks rise before lab-confirmation is crucial in decision making for Public Health policies. We present an algorithm to estimate on-stream the number of COVID-19 cases using the data from telephone calls to a COVID-line. By modeling the calls as background (proportional to population) plus signal (proportional to infected), we fit the calls in Province of Buenos Aires (Argentina) with coefficient of determination $R^2 > 0.85$. This result allows us to estimate the number of cases given the number of calls from a specific district, days before the lab results are available. We validate the algorithm with real data. We show how to use the algorithm to track on-stream the epidemic, and present the Early Outbreak Alarm to detect outbreaks in advance to lab results. One key point in the developed algorithm is a detailed track of the uncertainties in the estimations, since the alarm uses the significance of the observables as a main indicator to detect an anomaly. We present the details of the explicit example in Villa Azul (Quilmes) where this tool resulted crucial to control an outbreak on time. The presented tools have been designed in urgency with the available data at the time of the development, and therefore have their limitations which we describe and discuss. We consider possible improvements on the tools, many of which are currently under development. |
2106.01836 | Xiao Luo | Yuhang Guo, Xiao Luo, Liang Chen and Minghua Deng | DNA-GCN: Graph convolutional networks for predicting DNA-protein binding | 10 pages, 3 figures | In ICIC 2021 | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting DNA-protein binding is an important and classic problem in
bioinformatics. Convolutional neural networks have outperformed conventional
methods in modeling the sequence specificity of DNA-protein binding. However,
none of the studies has utilized graph convolutional networks for motif
inference. In this work, we propose to use graph convolutional networks for
motif inference. We build a sequence k-mer graph for the whole dataset based on
k-mer co-occurrence and k-mer sequence relationship and then learn DNA Graph
Convolutional Network (DNA-GCN) for the whole dataset. Our DNA-GCN is
initialized with a one-hot representation for all nodes, and it then jointly
learns the embeddings for both k-mers and sequences, as supervised by the known
labels of sequences. We evaluate our model on 50 datasets from ENCODE. DNA-GCN
shows its competitive performance compared with the baseline model. In
addition, we analyze our model and design several different architectures to help fit
different datasets.
| [
{
"created": "Wed, 2 Jun 2021 07:36:11 GMT",
"version": "v1"
}
] | 2021-06-04 | [
[
"Guo",
"Yuhang",
""
],
[
"Luo",
"Xiao",
""
],
[
"Chen",
"Liang",
""
],
[
"Deng",
"Minghua",
""
]
] | Predicting DNA-protein binding is an important and classic problem in bioinformatics. Convolutional neural networks have outperformed conventional methods in modeling the sequence specificity of DNA-protein binding. However, none of the studies has utilized graph convolutional networks for motif inference. In this work, we propose to use graph convolutional networks for motif inference. We build a sequence k-mer graph for the whole dataset based on k-mer co-occurrence and k-mer sequence relationship and then learn DNA Graph Convolutional Network (DNA-GCN) for the whole dataset. Our DNA-GCN is initialized with a one-hot representation for all nodes, and it then jointly learns the embeddings for both k-mers and sequences, as supervised by the known labels of sequences. We evaluate our model on 50 datasets from ENCODE. DNA-GCN shows its competitive performance compared with the baseline model. Besides, we analyze our model and design several different architectures to help fit different datasets. |
1509.03642 | Allen Tannenbaum | Romeil Sandhu, Salah-Eddine Lamhamedi-Cherradi, Sarah Tannenbaum,
Joseph Ludwig, Allen Tannenbaum | An Analytical Approach for Insulin-like Growth Factor Receptor 1 and
Mammalian Target of Rapamycin Blockades in Ewing Sarcoma | 10 pages, 4 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present preliminary results that quantify network robustness and fragility
of Ewing sarcoma (ES), a rare pediatric bone cancer that often exhibits de novo
or acquired drug resistance. By identifying novel proteins or pathways
susceptible to drug targeting, this formalized approach promises to improve
preclinical drug development and may lead to better treatment outcomes. Toward
that end, our network modeling focused upon the IGF-1R-PI3K-Akt-mTOR pathway,
which is of proven importance in ES. The clinical response and proteomic
networks of drug-sensitive parental cell lines and their drug-resistant
counterparts were assessed using two small molecule inhibitors for IGF-1R
(OSI-906 and NVP-ADW-742) and an mTOR inhibitor (mTORi), MK8669, such that
protein-to-protein expression networks could be generated for each group. For
the first time, mathematical modeling proves that drug resistant ES samples
exhibit higher degrees of overall network robustness (i.e., the ability of a
system to withstand random perturbations to its network configuration) than
their untreated or short-term (72-hour) treated samples. This was done by
leveraging previous work, which suggests that Ricci curvature, a key geometric
feature of a given network, is positively correlated to increased network
robustness. More importantly, given that Ricci curvature is a local property of
the system, it is capable of resolving pathway fragility. In this note, we
offer some encouraging yet limited insights in terms of system-level robustness
of ES and lay the foundation for scope of future work in which a complete study
will be conducted.
| [
{
"created": "Fri, 11 Sep 2015 20:11:16 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Jun 2020 18:30:39 GMT",
"version": "v2"
}
] | 2020-06-30 | [
[
"Sandhu",
"Romeil",
""
],
[
"Lamhamedi-Cherradi",
"Salah-Eddine",
""
],
[
"Tannenbaum",
"Sarah",
""
],
[
"Ludwig",
"Joseph",
""
],
[
"Tannenbaum",
"Allen",
""
]
] | We present preliminary results that quantify network robustness and fragility of Ewing sarcoma (ES), a rare pediatric bone cancer that often exhibits de novo or acquired drug resistance. By identifying novel proteins or pathways susceptible to drug targeting, this formalized approach promises to improve preclinical drug development and may lead to better treatment outcomes. Toward that end, our network modeling focused upon the IGF-1R-PI3K-Akt-mTOR pathway, which is of proven importance in ES. The clinical response and proteomic networks of drug-sensitive parental cell lines and their drug-resistant counterparts were assessed using two small molecule inhibitors for IGF-1R (OSI-906 and NVP-ADW-742) and an mTOR inhibitor (mTORi), MK8669, such that protein-to-protein expression networks could be generated for each group. For the first time, mathematical modeling proves that drug resistant ES samples exhibit higher degrees of overall network robustness (e.g., the ability of a system to withstand random perturbations to its network configuration) to that of their untreated or short-term (72-hour) treated samples. This was done by leveraging previous work, which suggests that Ricci curvature, a key geometric feature of a given network, is positively correlated to increased network robustness. More importantly, given that Ricci curvature is a local property of the system, it is capable of resolving pathway fragility. In this note, we offer some encouraging yet limited insights in terms of system-level robustness of ES and lay the foundation for scope of future work in which a complete study will be conducted. |
q-bio/0611056 | Brigitte Gaillard | S. Fossette (DEPE-Iphc), J.Y. Georges (DEPE-Iphc), H. Tanaka, Y.
Ropert-Coudert, S. Ferraroli (DEPE-Iphc), N. Arai, K. Sato, Y. Naito, Y. Le
Maho (DEPE-Iphc) | Dispersal and dive patterns in gravid leatherback turtles during the
nesting season in French Guiana | null | null | null | null | q-bio.PE | null | We present the first combined analysis of diving behaviour and dispersal
patterns in gravid leatherback turtles during 3 consecutive nesting seasons in
French Guiana. In total 23 turtles were fitted with Argos satellite
transmitters and 16 individuals (including 6 concurrently satellite-tracked)
were equipped with an electronic time-depth recorder for single inter-nesting
intervals, i.e. between two consecutive ovipositions. The leatherbacks
dispersed over the continental shelf, ranging from the coastal zone to the
shelf break and moved over 546.2 $\pm$ 154.1 km (mean $\pm$ SD) in waters of
French Guiana and neighbouring Surinam. They mostly performed shallow (9.4
$\pm$ 9.2 m) and short (4.4 $\pm$ 3.4 min) dives with a slight diurnal pattern.
They dived deeper as they moved away from the coast suggesting that they were
predominantly following the seabed. Inter-nesting intervals could be divided
into two phases: during the first 75% of the time turtles spent at sea, they
dived on average 47 min h-1 before showing a lower and more variable diving
effort as they came back to the shore. The extended movements of leatherbacks
and the fine analysis of dive shapes suggest that in French Guiana leatherbacks
may feed during the inter-nesting interval, probably to compensate for the
energy costs associated with reproduction. This results in this endangered
species being exposed to high risks of interactions with local fisheries
throughout the continental shelf.
| [
{
"created": "Fri, 17 Nov 2006 15:27:40 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Fossette",
"S.",
"",
"DEPE-Iphc"
],
[
"Georges",
"J. Y.",
"",
"DEPE-Iphc"
],
[
"Tanaka",
"H.",
"",
"DEPE-Iphc"
],
[
"Ropert-Coudert",
"Y.",
"",
"DEPE-Iphc"
],
[
"Ferraroli",
"S.",
"",
"DEPE-Iphc"
],
[
"Arai",
... | We present the first combined analysis of diving behaviour and dispersal patterns in gravid leatherback turtles during 3 consecutive nesting seasons in French Guiana. In total 23 turtles were fitted with Argos satellite transmitters and 16 individuals (including 6 concurrently satellite-tracked) were equipped with an electronic time-depth recorder for single inter-nesting intervals, i.e. between two consecutive ovi-positions. The leatherbacks dispersed over the continental shelf, ranging from the coastal zone to the shelf break and moved over 546.2 $\pm$ 154.1 km (mean $\pm$ SD) in waters of French Guiana and neighbouring Surinam. They mostly performed shallow (9.4 $\pm$ 9.2 m) and short (4.4 $\pm$ 3.4 min) dives with a slight diurnal pattern. They dived deeper as they moved away from the coast suggesting that they were predominantly following the seabed. Inter-nesting intervals could be divided into two phases: during the first 75% of the time turtles spent at sea, they dived on average 47 min h-1 before showing a lower and more variable diving effort as they came back to the shore. The extended movements of leatherbacks and the fine analysis of dive shapes suggest that in French Guiana leatherbacks may feed during the inter-nesting interval, probably to compensate for the energy costs associated with reproduction. This results in this endangered species being exposed to high risks of interactions with local fisheries throughout the continental shelf. |
2109.06672 | Yujiang Wang | Gabrielle M. Schroeder, Fahmida A. Chowdhury, Mark J. Cook, Beate
Diehl, John S. Duncan, Philippa J. Karoly, Peter N. Taylor, Yujiang Wang | Seizure pathways and seizure durations can vary independently within
individual patients with focal epilepsy | null | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A seizure's electrographic dynamics are characterised by its spatiotemporal
evolution, also termed its dynamical "pathway", and the time it takes to complete
that pathway, which results in the seizure's duration. Both seizure pathways
and durations can vary within the same patient, producing seizures with
different dynamics, severity, and clinical implications. However, it is unclear
whether seizures following the same pathway will have the same duration or if
these features can vary independently. We compared within-subject variability
in these seizure features using 1) epilepsy monitoring unit intracranial EEG
(iEEG) recordings of 31 patients (mean 6.7 days, 16.5 seizures/subject), 2)
NeuroVista chronic iEEG recordings of 10 patients (mean 521.2 days, 252.6
seizures/subject), and 3) chronic iEEG recordings of 3 dogs with focal-onset
seizures (mean 324.4 days, 62.3 seizures/subject). While the strength of the
relationship between seizure pathways and durations was highly
subject-specific, in most subjects, changes in seizure pathways were only
weakly to moderately associated with differences in seizure durations. The
relationship between seizure pathways and durations was weakened by seizures
that 1) had a common pathway, but different durations ("elastic pathways"), or
2) had similar durations, but followed different pathways ("duplicate
durations"). Even in subjects with distinct populations of short and long
seizures, seizure durations were not a reliable indicator of different seizure
pathways. These findings suggest that seizure pathways and durations are
modulated by different processes. Uncovering such modulators may reveal novel
therapeutic targets for reducing seizure duration and severity.
| [
{
"created": "Tue, 14 Sep 2021 13:19:53 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Schroeder",
"Gabrielle M.",
""
],
[
"Chowdhury",
"Fahmida A.",
""
],
[
"Cook",
"Mark J.",
""
],
[
"Diehl",
"Beate",
""
],
[
"Duncan",
"John S.",
""
],
[
"Karoly",
"Philippa J.",
""
],
[
"Taylor",
"Peter N.",
... | A seizure's electrographic dynamics are characterised by its spatiotemporal evolution, also termed dynamical "pathway" and the time it takes to complete that pathway, which results in the seizure's duration. Both seizure pathways and durations can vary within the same patient, producing seizures with different dynamics, severity, and clinical implications. However, it is unclear whether seizures following the same pathway will have the same duration or if these features can vary independently. We compared within-subject variability in these seizure features using 1) epilepsy monitoring unit intracranial EEG (iEEG) recordings of 31 patients (mean 6.7 days, 16.5 seizures/subject), 2) NeuroVista chronic iEEG recordings of 10 patients (mean 521.2 days, 252.6 seizures/subject), and 3) chronic iEEG recordings of 3 dogs with focal-onset seizures (mean 324.4 days, 62.3 seizures/subject). While the strength of the relationship between seizure pathways and durations was highly subject-specific, in most subjects, changes in seizure pathways were only weakly to moderately associated with differences in seizure durations. The relationship between seizure pathways and durations was weakened by seizures that 1) had a common pathway, but different durations ("elastic pathways"), or 2) had similar durations, but followed different pathways ("duplicate durations"). Even in subjects with distinct populations of short and long seizures, seizure durations were not a reliable indicator of different seizure pathways. These findings suggest that seizure pathways and durations are modulated by different processes. Uncovering such modulators may reveal novel therapeutic targets for reducing seizure duration and severity. |
2005.00573 | Adilson Silva | Adilson Silva | Modeling COVID-19 in Cape Verde Islands -- An application of SIR model | 23 pages, 8 figures and 6 tables | null | null | null | q-bio.PE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid and surprising emergence of COVID-19, which infected three million
and killed two hundred thousand people worldwide in less than five months, has
led many experts to focus on simulating its propagation dynamics in order to
obtain an estimated outlook for the not-too-distant future and thus support
local and national governments in making decisions. In this paper, we apply
the SIR model to simulate the propagation dynamics of COVID-19 on the Cape
Verde Islands, first for the Santiago and Boavista Islands, and then for Cape
Verde as a whole. Santiago was chosen because it is the largest island, with
more than 50% of the population of the country, whereas Boavista was chosen
because it is the island where the first case of COVID-19 in Cape Verde was
diagnosed. Observations made after the date on which the simulations were
carried out corroborate our projections.
| [
{
"created": "Fri, 1 May 2020 19:05:35 GMT",
"version": "v1"
},
{
"created": "Tue, 5 May 2020 14:39:43 GMT",
"version": "v2"
}
] | 2020-05-06 | [
[
"Silva",
"Adilson",
""
]
] | The rapid and surprising emergence of COVID-19, having infected three million and killed two hundred thousand people worldwide in less than five months, has led many experts to focus on simulating its propagation dynamics in order to have an estimated outlook for the not-too-distant future and thus support the local and national governments in making decisions. In this paper, we apply the SIR model to simulating the propagation dynamics of COVID-19 on the Cape Verde Islands. It will be done first for Santiago and Boavista Islands, and then for Cape Verde in general. The choice of Santiago rests on the fact that it is the largest island, with more than 50% of the population of the country, whereas Boavista was chosen because it is the island where the first case of COVID-19 in Cape Verde was diagnosed. Observations made after the date on which the simulations were carried out corroborate our projections. |
1706.07660 | Vaibhav Wasnik | Vaibhav H. Wasnik, Peter Lipp, and Karsten Kruse | Positional information readout in $Ca^{2+}$ signaling | null | Phys. Rev. Lett. 123, 058102 (2019) | 10.1103/PhysRevLett.123.058102 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living cells respond to spatial signals. Signal transmission to the cell
interior often involves the release of second messengers like $Ca^{2+}$. They
will eventually trigger a physiological response by activating kinases that in
turn activate target proteins through phosphorylation. Here, we investigate
theoretically how positional information can be accurately read out by protein
phosphorylation in spite of rapid second messenger diffusion. We find that
accuracy is increased by binding of the kinases to the cell membrane prior to
phosphorylation and by increasing the rate of $Ca^{2+}$ loss from the cell
interior. These findings could explain some salient features of conventional
protein kinases C.
| [
{
"created": "Fri, 23 Jun 2017 12:18:41 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jul 2019 04:35:21 GMT",
"version": "v2"
}
] | 2019-08-07 | [
[
"Wasnik",
"Vaibhav H.",
""
],
[
"Lipp",
"Peter",
""
],
[
"Kruse",
"Karsten",
""
]
] | Living cells respond to spatial signals. Signal transmission to the cell interior often involves the release of second messengers like $Ca^{2+}$. They will eventually trigger a physiological response by activating kinases that in turn activate target proteins through phosphorylation. Here, we investigate theoretically how positional information can be accurately read out by protein phosphorylation in spite of rapid second messenger diffusion. We find that accuracy is increased by binding of the kinases to the cell membrane prior to phosphorylation and by increasing the rate of $Ca^{2+}$ loss from the cell interior. These findings could explain some salient features of conventional protein kinases C. |
2405.16922 | Friedemann Zenke | Friedemann Zenke and Axel Laborieux | Theories of synaptic memory consolidation and intelligent plasticity for
continual learning | An introductory-level book chapter. 34 pages, 14 figures | null | null | null | q-bio.NC cs.AI cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Humans and animals learn throughout life. Such continual learning is crucial
for intelligence. In this chapter, we examine the pivotal role plasticity
mechanisms with complex internal synaptic dynamics could play in enabling this
ability in neural networks. By surveying theoretical research, we highlight two
fundamental enablers for continual learning. First, synaptic plasticity
mechanisms must maintain and evolve an internal state over several behaviorally
relevant timescales. Second, plasticity algorithms must leverage the internal
state to intelligently regulate plasticity at individual synapses to facilitate
the seamless integration of new memories while avoiding detrimental
interference with existing ones. Our chapter covers successful applications of
these principles to deep neural networks and underscores the significance of
synaptic metaplasticity in sustaining continual learning capabilities. Finally,
we outline avenues for further research to understand the brain's superb
continual learning abilities and harness similar mechanisms for artificial
intelligence systems.
| [
{
"created": "Mon, 27 May 2024 08:13:39 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Zenke",
"Friedemann",
""
],
[
"Laborieux",
"Axel",
""
]
] | Humans and animals learn throughout life. Such continual learning is crucial for intelligence. In this chapter, we examine the pivotal role plasticity mechanisms with complex internal synaptic dynamics could play in enabling this ability in neural networks. By surveying theoretical research, we highlight two fundamental enablers for continual learning. First, synaptic plasticity mechanisms must maintain and evolve an internal state over several behaviorally relevant timescales. Second, plasticity algorithms must leverage the internal state to intelligently regulate plasticity at individual synapses to facilitate the seamless integration of new memories while avoiding detrimental interference with existing ones. Our chapter covers successful applications of these principles to deep neural networks and underscores the significance of synaptic metaplasticity in sustaining continual learning capabilities. Finally, we outline avenues for further research to understand the brain's superb continual learning abilities and harness similar mechanisms for artificial intelligence systems. |
2002.10882 | James Tee | James Tee and Desmond P. Taylor | A Quantized Representation of Intertemporal Choice in the Brain | 9 pages, 19 figures. arXiv admin note: substantial text overlap with
arXiv:1805.01631 | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Value [4][5] is typically modeled using a continuous representation (i.e., a
Real number). A discrete representation of value has recently been postulated
[6]. A quantized representation of probability in the brain was also posited
and supported by experimental data [7]. Value and probability are inter-related
via Prospect Theory [4][5]. In this paper, we hypothesize that intertemporal
choices may also be quantized. For example, people may treat (or discount) 16
days indifferently to 17 days. To test this, we analyzed an intertemporal task
by using 2 novel models: quantized hyperbolic discounting, and quantized
exponential discounting. Our work here is a re-examination of the behavioral
data previously collected for an fMRI study [8]. Both quantized hyperbolic and
quantized exponential models were compared using AIC and BIC tests. We found
that 13/20 participants were best fit to the quantized exponential model, while
the remaining 7/20 were best fit to the quantized hyperbolic model. Overall,
15/20 participants were best fit to models with a 5-bit precision (i.e., 2^5 =
32 steps). In conclusion, regardless of hyperbolic or exponential, quantized
versions of these models are better fit to the experimental data than their
continuous forms. We finally outline some potential applications of our
findings.
| [
{
"created": "Mon, 24 Feb 2020 03:24:08 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Aug 2020 00:40:49 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Sep 2020 22:54:03 GMT",
"version": "v3"
}
] | 2020-09-17 | [
[
"Tee",
"James",
""
],
[
"Taylor",
"Desmond P.",
""
]
] | Value [4][5] is typically modeled using a continuous representation (i.e., a Real number). A discrete representation of value has recently been postulated [6]. A quantized representation of probability in the brain was also posited and supported by experimental data [7]. Value and probability are inter-related via Prospect Theory [4][5]. In this paper, we hypothesize that intertemporal choices may also be quantized. For example, people may treat (or discount) 16 days indifferently to 17 days. To test this, we analyzed an intertemporal task by using 2 novel models: quantized hyperbolic discounting, and quantized exponential discounting. Our work here is a re-examination of the behavioral data previously collected for an fMRI study [8]. Both quantized hyperbolic and quantized exponential models were compared using AIC and BIC tests. We found that 13/20 participants were best fit to the quantized exponential model, while the remaining 7/20 were best fit to the quantized hyperbolic model. Overall, 15/20 participants were best fit to models with a 5-bit precision (i.e., 2^5 = 32 steps). In conclusion, regardless of hyperbolic or exponential, quantized versions of these models are better fit to the experimental data than their continuous forms. We finally outline some potential applications of our findings. |
1210.5502 | Quentin Geissmann | Quentin Geissmann | OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and
Other Circular Objects | null | null | 10.1371/journal.pone.0054072 | null | q-bio.QM cs.CV | http://creativecommons.org/licenses/by/3.0/ | Counting circular objects such as cell colonies is an important source of
information for biologists. Although this task is often time-consuming and
subjective, it is still predominantly performed manually. The aim of the
present work is to provide a new tool to enumerate circular objects from
digital pictures and video streams. Here, I demonstrate that the created
program, OpenCFU, is very robust, accurate and fast. In addition, it provides
control over the processing parameters and is implemented in an intuitive and
modern interface. OpenCFU is a cross-platform and open-source software freely
available at http://opencfu.sourceforge.net.
| [
{
"created": "Thu, 18 Oct 2012 14:05:17 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Oct 2012 13:33:19 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Nov 2012 12:01:26 GMT",
"version": "v3"
}
] | 2012-12-14 | [
[
"Geissmann",
"Quentin",
""
]
] | Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. |
1604.03132 | Robert Patro | Nitish Gupta, Komal Sanjeev, Tim Wall, Carl Kingsford, Rob Patro | Efficient Index Maintenance Under Dynamic Genome Modification | paper accepted at the RECOMB-Seq 2016 | null | null | null | q-bio.GN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient text indexing data structures have enabled large-scale genomic
sequence analysis and are used to help solve problems ranging from assembly to
read mapping. However, these data structures typically assume that the
underlying reference text is static and will not change over the course of the
queries being made. Some progress has been made in exploring how certain text
indices, like the suffix array, may be updated, rather than rebuilt from
scratch, when the underlying reference changes. Yet, these update operations
can be complex in practice, difficult to implement, and give fairly pessimistic
worst-case bounds. We present a novel data structure, SkipPatch, for
maintaining a k-mer-based index over a dynamically changing genome. SkipPatch
pairs a hash-based k-mer index with an indexable skip list that is used to
efficiently maintain the set of edits that have been applied to the original
genome. SkipPatch is practically fast, significantly outperforming the dynamic
extended suffix array in terms of update and query speed.
| [
{
"created": "Mon, 11 Apr 2016 20:10:48 GMT",
"version": "v1"
}
] | 2016-04-13 | [
[
"Gupta",
"Nitish",
""
],
[
"Sanjeev",
"Komal",
""
],
[
"Wall",
"Tim",
""
],
[
"Kingsford",
"Carl",
""
],
[
"Patro",
"Rob",
""
]
] | Efficient text indexing data structures have enabled large-scale genomic sequence analysis and are used to help solve problems ranging from assembly to read mapping. However, these data structures typically assume that the underlying reference text is static and will not change over the course of the queries being made. Some progress has been made in exploring how certain text indices, like the suffix array, may be updated, rather than rebuilt from scratch, when the underlying reference changes. Yet, these update operations can be complex in practice, difficult to implement, and give fairly pessimistic worst-case bounds. We present a novel data structure, SkipPatch, for maintaining a k-mer-based index over a dynamically changing genome. SkipPatch pairs a hash-based k-mer index with an indexable skip list that is used to efficiently maintain the set of edits that have been applied to the original genome. SkipPatch is practically fast, significantly outperforming the dynamic extended suffix array in terms of update and query speed. |
1410.5723 | Manu Dubin | Manu J. Dubin, Pei Zhang, Dazhe Meng, Marie-Stanislas Remigereau,
Edward J. Osborne, Francesco Paolo Casale, Phillip Drewe, Andr\'e Kahles,
Bjarni Vilhj\'almsson, Joanna Jagoda, Selen Irez, Viktor Voronin, Qiang Song,
Quan Long, Gunnar R\"atsch, Oliver Stegle, Richard M. Clark and Magnus
Nordborg | DNA methylation variation in Arabidopsis has a genetic basis and shows
evidence of local adaptation | 38 pages 4 figures | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epigenome modulation in response to the environment potentially provides a
mechanism for organisms to adapt, both within and between generations. However,
neither the extent to which this occurs, nor the molecular mechanisms involved
are known. Here we investigate DNA methylation variation in Swedish Arabidopsis
thaliana accessions grown at two different temperatures. Environmental effects
on DNA methylation were limited to transposons, where CHH methylation was found
to increase with temperature. Genome-wide association mapping revealed that the
extensive CHH methylation variation was strongly associated with genetic
variants in both cis and trans, including a major trans-association close to
the DNA methyltransferase CMT2. Unlike CHH methylation, CpG gene body
methylation (GBM) on the coding region of genes was not affected by growth
temperature, but was instead strongly correlated with the latitude of origin.
Accessions from colder regions had higher levels of GBM for a significant
fraction of the genome, and this was correlated with elevated transcription
levels for the genes affected. Genome-wide association mapping revealed that
this effect was largely due to trans-acting loci, a significant fraction of
which showed evidence of local adaptation. These findings constitute the first
direct link between DNA methylation and adaptation to the environment, and
provide a basis for further dissecting how environmentally driven and
genetically determined epigenetic variation interact and influence organismal
fitness.
| [
{
"created": "Tue, 21 Oct 2014 16:06:28 GMT",
"version": "v1"
}
] | 2014-10-22 | [
[
"Dubin",
"Manu J.",
""
],
[
"Zhang",
"Pei",
""
],
[
"Meng",
"Dazhe",
""
],
[
"Remigereau",
"Marie-Stanislas",
""
],
[
"Osborne",
"Edward J.",
""
],
[
"Casale",
"Francesco Paolo",
""
],
[
"Drewe",
"Phillip",
... | Epigenome modulation in response to the environment potentially provides a mechanism for organisms to adapt, both within and between generations. However, neither the extent to which this occurs, nor the molecular mechanisms involved are known. Here we investigate DNA methylation variation in Swedish Arabidopsis thaliana accessions grown at two different temperatures. Environmental effects on DNA methylation were limited to transposons, where CHH methylation was found to increase with temperature. Genome-wide association mapping revealed that the extensive CHH methylation variation was strongly associated with genetic variants in both cis and trans, including a major trans-association close to the DNA methyltransferase CMT2. Unlike CHH methylation, CpG gene body methylation (GBM) on the coding region of genes was not affected by growth temperature, but was instead strongly correlated with the latitude of origin. Accessions from colder regions had higher levels of GBM for a significant fraction of the genome, and this was correlated with elevated transcription levels for the genes affected. Genome-wide association mapping revealed that this effect was largely due to trans-acting loci, a significant fraction of which showed evidence of local adaptation. These findings constitute the first direct link between DNA methylation and adaptation to the environment, and provide a basis for further dissecting how environmentally driven and genetically determined epigenetic variation interact and influence organismal fitness. |
2006.04480 | Kaspar Rufibach | Evgeny Degtyarev and Kaspar Rufibach and Yue Shentu and Godwin Yung
and Michelle Casey and Stefan Englert and Feng Liu and Yi Liu and Oliver
Sailer and Jonathan Siegel and Steven Sun and Rui Tang and Jiangxiu Zhou | Assessing the Impact of COVID-19 on the Objective and Analysis of
Oncology Clinical Trials -- Application of the Estimand Framework | Paper written on behalf of the industry working group on estimands in
oncology (www.oncoestimand.org). Accepted for publication in a special issue
of Statistics in Biopharmaceutical Research | Statistics in Biopharmaceutical Research, 2020, 12(4), 427-437 | 10.1080/19466315.2020.1785543 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 outbreak has rapidly evolved into a global pandemic. The impact of
COVID-19 on patient journeys in oncology represents a new risk to
interpretation of trial results and its broad applicability for future clinical
practice. We identify key intercurrent events that may occur due to COVID-19 in
oncology clinical trials with a focus on time-to-event endpoints and discuss
considerations pertaining to the other estimand attributes introduced in the
ICH E9 addendum. We propose strategies to handle COVID-19 related intercurrent
events, depending on their relationship with malignancy and treatment and the
interpretability of data after them. We argue that the clinical trial objective
from a world without COVID-19 pandemic remains valid. The estimand framework
provides a common language to discuss the impact of COVID-19 in a structured
and transparent manner. This demonstrates that the applicability of the
framework may even go beyond what it was initially intended for.
| [
{
"created": "Mon, 8 Jun 2020 11:17:42 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Jun 2020 11:26:09 GMT",
"version": "v2"
}
] | 2023-04-17 | [
[
"Degtyarev",
"Evgeny",
""
],
[
"Rufibach",
"Kaspar",
""
],
[
"Shentu",
"Yue",
""
],
[
"Yung",
"Godwin",
""
],
[
"Casey",
"Michelle",
""
],
[
"Englert",
"Stefan",
""
],
[
"Liu",
"Feng",
""
],
[
"Liu"... | COVID-19 outbreak has rapidly evolved into a global pandemic. The impact of COVID-19 on patient journeys in oncology represents a new risk to interpretation of trial results and its broad applicability for future clinical practice. We identify key intercurrent events that may occur due to COVID-19 in oncology clinical trials with a focus on time-to-event endpoints and discuss considerations pertaining to the other estimand attributes introduced in the ICH E9 addendum. We propose strategies to handle COVID-19 related intercurrent events, depending on their relationship with malignancy and treatment and the interpretability of data after them. We argue that the clinical trial objective from a world without COVID-19 pandemic remains valid. The estimand framework provides a common language to discuss the impact of COVID-19 in a structured and transparent manner. This demonstrates that the applicability of the framework may even go beyond what it was initially intended for. |
1810.00499 | Sanjana Gupta | Sanjana Gupta, Jacob Czech, Robert Kuczewski, Thomas M. Bartol,
Terrence J. Sejnowski, Robin E. C. Lee, and James R. Faeder | Spatial Stochastic Modeling with MCell and CellBlender | Munsky et al., Quantitative Biology: Theory, Computational Methods,
and Models, MIT Press, 2018 | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This chapter provides a brief introduction to the theory and practice of
spatial stochastic simulations. It begins with an overview of different methods
available for biochemical simulations highlighting their strengths and
limitations. Spatial stochastic modeling approaches are indicated when
diffusion is relatively slow and spatial inhomogeneities involve relatively
small numbers of particles. The popular software package MCell allows
particle-based stochastic simulations of biochemical systems in complex three
dimensional (3D) geometries, which are important for many cell biology
applications. Here, we provide an overview of the simulation algorithms used by
MCell and the underlying theory. We then give a tutorial on building and
simulating MCell models using the CellBlender graphical user interface, which is
built as a plug-in to Blender, a widely-used and freely available software
platform for 3D modeling. The tutorial starts with simple models that
demonstrate basic MCell functionality and then advances to a number of more
complex examples that demonstrate a range of features and provide examples of
important biophysical effects that require spatially-resolved stochastic
dynamics to capture.
| [
{
"created": "Mon, 1 Oct 2018 01:34:39 GMT",
"version": "v1"
}
] | 2018-10-02 | [
[
"Gupta",
"Sanjana",
""
],
[
"Czech",
"Jacob",
""
],
[
"Kuczewski",
"Robert",
""
],
[
"Bartol",
"Thomas M.",
""
],
[
"Sejnowski",
"Terrence J.",
""
],
[
"Lee",
"Robin E. C.",
""
],
[
"Faeder",
"James R.",
""... | This chapter provides a brief introduction to the theory and practice of spatial stochastic simulations. It begins with an overview of different methods available for biochemical simulations highlighting their strengths and limitations. Spatial stochastic modeling approaches are indicated when diffusion is relatively slow and spatial inhomogeneities involve relatively small numbers of particles. The popular software package MCell allows particle-based stochastic simulations of biochemical systems in complex three dimensional (3D) geometries, which are important for many cell biology applications. Here, we provide an overview of the simulation algorithms used by MCell and the underlying theory. We then give a tutorial on building and simulating MCell models using the CellBlender graphical user interface, that is built as a plug-in to Blender, a widely-used and freely available software platform for 3D modeling. The tutorial starts with simple models that demonstrate basic MCell functionality and then advances to a number of more complex examples that demonstrate a range of features and provide examples of important biophysical effects that require spatially-resolved stochastic dynamics to capture. |
1308.4421 | David Bortz | Sarthok Sircar, Elizabeth Aisenbrey, Stephanie J. Bryant and David M.
Bortz | Determining equilibrium osmolarity in Poly(ethylene glycol) / Chondroitin
sulfate gels mimicking articular cartilage | 19 pages,6 figures | null | null | null | q-bio.TO cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an experimentally guided, multi-phase, multi-species
polyelectrolyte gel model to make quantitative predictions on the
electro-chemical properties of articular cartilage. The mixture theory consists
of two different types of polymers: Poly(ethylene glycol) (PEG), Chondroitin
sulfate (ChS), water (acting as solvent) and several different ions: H$^+$,
Na$^+$, Cl$^-$. The polymer chains have covalent cross-links modeled using Doi
rubber elasticity theory. Numerical studies on polymer volume fraction and net
osmolarity (difference in the solute concentration across the gel) show the
interplay between ionic bath concentrations, pH, polymer mass in the solvent
and the average charge per monomer; governing the equilibrium swelled /
de-swelled state of the gel. We conclude that swelling is aided due to a higher
average charge per monomer (or a higher percentage of charged ChS component of
the polymer), low solute concentration in the bath, a high pH or a low
cross-link fraction. However, the swelling-deswelling transitions could be
continuous or discontinuous depending upon the relative influence of the
various competing forces.
| [
{
"created": "Tue, 20 Aug 2013 20:05:31 GMT",
"version": "v1"
}
] | 2013-08-22 | [
[
"Sircar",
"Sarthok",
""
],
[
"Aisenbrey",
"Elizabeth",
""
],
[
"Bryant",
"Stephanie J.",
""
],
[
"Bortz",
"David M.",
""
]
] | We present an experimentally guided, multi-phase, multi-species polyelectrolyte gel model to make quantitative predictions on the electro-chemical properties of articular cartilage. The mixture theory consists of two different types of polymers: Poly(ethylene glycol) (PEG), Chondroitin sulfate (ChS), water (acting as solvent) and several different ions: H$^+$, Na$^+$, Cl$^-$. The polymer chains have covalent cross-links modeled using Doi rubber elasticity theory. Numerical studies on polymer volume fraction and net osmolarity (difference in the solute concentration across the gel) show the interplay between ionic bath concentrations, pH, polymer mass in the solvent and the average charge per monomer; governing the equilibrium swelled / de-swelled state of the gel. We conclude that swelling is aided due to a higher average charge per monomer (or a higher percentage of charged ChS component of the polymer), low solute concentration in the bath, a high pH or a low cross-link fraction. However, the swelling-deswelling transitions could be continuous or discontinuous depending upon the relative influence of the various competing forces. |
1810.04274 | Mariano Cabezas | Mariano Cabezas, Sergi Valverde, Sandra Gonz\'alez-Vill\`a, Albert
Cl\'erigues, Mostafa Salem, Kaisar Kushibar, Jose Bernal, Arnau Oliver, and
Xavier Llad\'o | Survival prediction using ensemble tumor segmentation and transfer
learning | Submitted to the BRATS2018 MICCAI challenge | null | null | null | q-bio.QM cs.CV physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segmenting tumors and their subregions is a challenging task as demonstrated
by the annual BraTS challenge. Moreover, predicting the survival of the patient
using mainly imaging features, while a desirable outcome for evaluating the
patient's treatment, is also a difficult task. In this paper, we
present a cascaded pipeline to segment the tumor and its subregions and then we
use these results and other clinical features together with image features
coming from a pretrained VGG-16 network to predict the survival of the patient.
Preliminary results with the training and validation dataset show a promising
start in terms of segmentation, while the prediction values could be improved
with further testing on the feature extraction part of the network.
| [
{
"created": "Thu, 4 Oct 2018 09:55:09 GMT",
"version": "v1"
}
] | 2018-10-11 | [
[
"Cabezas",
"Mariano",
""
],
[
"Valverde",
"Sergi",
""
],
[
"González-Villà",
"Sandra",
""
],
[
"Clérigues",
"Albert",
""
],
[
"Salem",
"Mostafa",
""
],
[
"Kushibar",
"Kaisar",
""
],
[
"Bernal",
"Jose",
""
... | Segmenting tumors and their subregions is a challenging task as demonstrated by the annual BraTS challenge. Moreover, predicting the survival of the patient using mainly imaging features, while a desirable outcome for evaluating the patient's treatment, is also a difficult task. In this paper, we present a cascaded pipeline to segment the tumor and its subregions and then we use these results and other clinical features together with image features coming from a pretrained VGG-16 network to predict the survival of the patient. Preliminary results with the training and validation dataset show a promising start in terms of segmentation, while the prediction values could be improved with further testing on the feature extraction part of the network. |
2301.02538 | Pouria Yazdian | Pouria Yazdian Anari, Nathan Lay, Aditi Chaurasia, Nikhil Gopal, Safa
Samimi, Stephanie Harmon, Rabindra Gautam, Kevin Ma, Fatemeh Dehghani
Firouzabadi, Evrim Turkbey, Maria Merino, Elizabeth C. Jones, Mark W. Ball,
W. Marston Linehan, Baris Turkbey, Ashkan A. Malayeri | Automatic segmentation of clear cell renal cell tumors, kidney, and
cysts in patients with von Hippel-Lindau syndrome using U-net architecture on
magnetic resonance images | null | null | null | null | q-bio.QM physics.med-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We demonstrate automated segmentation of clear cell renal cell carcinomas
(ccRCC), cysts, and surrounding normal kidney parenchyma in patients with von
Hippel-Lindau (VHL) syndrome using convolutional neural networks (CNN) on
Magnetic Resonance Imaging (MRI). We queried 115 VHL patients and 117 scans (3
patients have two separate scans) with 504 ccRCCs and 1171 cysts from 2015 to
2021. Lesions were manually segmented on T1 excretory phase, co-registered on
all contrast-enhanced T1 sequences and used to train 2D and 3D U-Net. The U-Net
performance was evaluated on 10 randomized splits of the cohort. The models
were evaluated using the dice similarity coefficient (DSC). Our 2D U-Net
achieved an average ccRCC lesion detection Area under the curve (AUC) of 0.88
and DSC scores of 0.78, 0.40, and 0.46 for segmentation of the kidney, cysts,
and tumors, respectively. Our 3D U-Net achieved an average ccRCC lesion
detection AUC of 0.79 and DSC scores of 0.67, 0.32, and 0.34 for kidney, cysts,
and tumors, respectively. We demonstrated good detection and moderate
segmentation results using U-Net for ccRCC on MRI. Automatic detection and
segmentation of normal renal parenchyma, cysts, and masses may assist
radiologists in quantifying the burden of disease in patients with VHL.
| [
{
"created": "Fri, 6 Jan 2023 14:51:00 GMT",
"version": "v1"
}
] | 2023-01-09 | [
[
"Anari",
"Pouria Yazdian",
""
],
[
"Lay",
"Nathan",
""
],
[
"Chaurasia",
"Aditi",
""
],
[
"Gopal",
"Nikhil",
""
],
[
"Samimi",
"Safa",
""
],
[
"Harmon",
"Stephanie",
""
],
[
"Gautam",
"Rabindra",
""
],
... | We demonstrate automated segmentation of clear cell renal cell carcinomas (ccRCC), cysts, and surrounding normal kidney parenchyma in patients with von Hippel-Lindau (VHL) syndrome using convolutional neural networks (CNN) on Magnetic Resonance Imaging (MRI). We queried 115 VHL patients and 117 scans (3 patients have two separate scans) with 504 ccRCCs and 1171 cysts from 2015 to 2021. Lesions were manually segmented on T1 excretory phase, co-registered on all contrast-enhanced T1 sequences and used to train 2D and 3D U-Net. The U-Net performance was evaluated on 10 randomized splits of the cohort. The models were evaluated using the dice similarity coefficient (DSC). Our 2D U-Net achieved an average ccRCC lesion detection Area under the curve (AUC) of 0.88 and DSC scores of 0.78, 0.40, and 0.46 for segmentation of the kidney, cysts, and tumors, respectively. Our 3D U-Net achieved an average ccRCC lesion detection AUC of 0.79 and DSC scores of 0.67, 0.32, and 0.34 for kidney, cysts, and tumors, respectively. We demonstrated good detection and moderate segmentation results using U-Net for ccRCC on MRI. Automatic detection and segmentation of normal renal parenchyma, cysts, and masses may assist radiologists in quantifying the burden of disease in patients with VHL. |
2407.10308 | Romain Gosselin | Romain-Daniel Gosselin | AI Detectors are Poor Western Blot Classifiers: A Study of Accuracy and
Predictive Values | 25 pages, 5 figures, to be submitted in a peer-reviewed journal,
supporting data freely available at
https://doi.org/10.6084/m9.figshare.26300464 and
https://doi.org/10.6084/m9.figshare.26300515 | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The recent rise of generative artificial intelligence (GenAI) capable of
creating scientific images presents a challenge in the fight against academic
fraud. This study evaluates the efficacy of three free web-based AI detectors
in identifying AI-generated images of Western blots, which is a very common
technique in biology. We tested these detectors on a collection of artificial
Western blot images (n=48) that were created using ChatGPT 4 DALLE 3 and on
authentic Western blots (n=48) that were sampled from articles published within
four biology journals in 2015; this was before the rise of generative AI based
on large language models. The results reveal that the sensitivity (0.9583 for
Is It AI, 0.1875 for Hive Moderation, and 0.7083 for Illuminarty) and
specificity (0.5417 for Is It AI, 0.8750 for Hive Moderation, and 0.4167 for
Illuminarty) are very different. Positive predictive values (PPV) across
various AI prevalence levels were low, for example reaching 0.1885 for Is It AI,
0.1429 for Hive Moderation, and 0.1189 for Illuminarty at an AI prevalence of
0.1. This highlights the difficulty in confidently determining image
authenticity based on the output of a single detector. Reducing the size of
Western blots from four to two lanes reduced test sensitivities and increased
test specificities but did not markedly affect overall detector accuracies and
also only slightly improved the PPV of one detector (Is It AI). These findings
strongly argue against the use of free AI detectors to detect fake scientific
images, and they demonstrate the urgent need for more robust detection tools
that are specifically trained on scientific content such as Western blot
images.
| [
{
"created": "Sun, 14 Jul 2024 19:57:27 GMT",
"version": "v1"
}
] | 2024-07-16 | [
[
"Gosselin",
"Romain-Daniel",
""
]
] | The recent rise of generative artificial intelligence (GenAI) capable of creating scientific images presents a challenge in the fight against academic fraud. This study evaluates the efficacy of three free web-based AI detectors in identifying AI-generated images of Western blots, which is a very common technique in biology. We tested these detectors on a collection of artificial Western blot images (n=48) that were created using ChatGPT 4 DALLE 3 and on authentic Western blots (n=48) that were sampled from articles published within four biology journals in 2015; this was before the rise of generative AI based on large language models. The results reveal that the sensitivity (0.9583 for Is It AI, 0.1875 for Hive Moderation, and 0.7083 for Illuminarty) and specificity (0.5417 for Is It AI, 0.8750 for Hive Moderation, and 0.4167 for Illuminarty) are very different. Positive predictive values (PPV) across various AI prevalence were low, for example reaching 0.1885 for Is It AI, 0.1429 for Hive Moderation, and 0.1189 for Illuminarty at an AI prevalence of 0.1. This highlights the difficulty in confidently determining image authenticity based on the output of a single detector. Reducing the size of Western blots from four to two lanes reduced test sensitivities and increased test specificities but did not markedly affect overall detector accuracies and also only slightly improved the PPV of one detector (Is It AI). These findings strongly argue against the use of free AI detectors to detect fake scientific images, and they demonstrate the urgent need for more robust detection tools that are specifically trained on scientific content such as Western blot images. |
2311.09107 | Ralph Brinks | Ralph Brinks | Illness-death model with renewal | 9 pages, 5 figures | null | null | null | q-bio.PE stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The illness-death model for chronic conditions is combined with a renewal
equation for the number of newborns taking into account possibly different
fertility rates in the healthy and diseased parts of the population. The
resulting boundary value problem consists of a system of partial differential
equations with an integral boundary condition. As an application, the boundary
value problem is applied to an example about type 2 diabetes.
| [
{
"created": "Wed, 15 Nov 2023 16:54:19 GMT",
"version": "v1"
}
] | 2023-11-16 | [
[
"Brinks",
"Ralph",
""
]
] | The illness-death model for chronic conditions is combined with a renewal equation for the number of newborns taking into account possibly different fertility rates in the healthy and diseased parts of the population. The resulting boundary value problem consists of a system of partial differential equations with an integral boundary condition. As an application, the boundary value problem is applied to an example about type 2 diabetes. |
1312.0075 | Changbong Hyeon | Jong-Chin Lin, Changbong Hyeon, D. Thirumalai | Sequence-dependent folding landscapes of adenine riboswitch aptamers | 22 pages, 4 figures | Phys. Chem. Chem. Phys. (2014) vol. 16, 6376 | 10.1039/C3CP53932F | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction of the functions of riboswitches requires a quantitative
description of the folding landscape so that the barriers and time scales for
the conformational change in the switching region in the aptamer can be
estimated. Using a combination of all atom molecular dynamics and
coarse-grained model simulations we studied the response of adenine (A) binding
add and pbuE A-riboswitches to mechanical force. The two riboswitches contain a
structurally similar three-way junction formed by three paired helices, P1, P2,
and P3, but carry out different functions. Using pulling simulations, with
structures generated in MD simulations, we show that after P1 rips, the dominant
unfolding pathway in the add A-riboswitch is the rupture of P2 followed by
unraveling of P3. In the pbuE A-riboswitch, after P1 unfolds, P3 ruptures ahead
of P2. The order of unfolding of the helices, which is in accord with single
molecule pulling experiments, is determined by the relative stabilities of the
individual helices. Our results show that the stability of isolated helices
determines the order of assembly and response to force in these non-coding
regions. We use the simulated free energy profile for pbuE A-riboswitch to
estimate the time scale for allosteric switching, which shows that this
riboswitch is under kinetic control lending additional support to the
conclusion based on single molecule pulling experiments. A consequence of the
stability hypothesis is that a single point mutation (U28C) in the P2 helix of
the add A-riboswitch, which increases the stability of P2, would make the
folding landscapes of the two riboswitches similar. This prediction can be
tested in single molecule pulling experiments.
| [
{
"created": "Sat, 30 Nov 2013 08:04:35 GMT",
"version": "v1"
}
] | 2018-03-14 | [
[
"Lin",
"Jong-Chin",
""
],
[
"Hyeon",
"Changbong",
""
],
[
"Thirumalai",
"D.",
""
]
] | Prediction of the functions of riboswitches requires a quantitative description of the folding landscape so that the barriers and time scales for the conformational change in the switching region in the aptamer can be estimated. Using a combination of all atom molecular dynamics and coarse-grained model simulations we studied the response of adenine (A) binding add and pbuE A-riboswitches to mechanical force. The two riboswitches contain a structurally similar three-way junction formed by three paired helices, P1, P2, and P3, but carry out different functions. Using pulling simulations, with structures generated in MD simulations, we show that after P1 rips the dominant unfolding pathway in add A-riboswitch is the rupture of P2 followed by unraveling of P3. In the pbuE A-riboswitch, after P1 unfolds P3 ruptures ahead of P2. The order of unfolding of the helices, which is in accord with single molecule pulling experiments, is determined by the relative stabilities of the individual helices. Our results show that the stability of isolated helices determines the order of assembly and response to force in these non-coding regions. We use the simulated free energy profile for pbuE A-riboswitch to estimate the time scale for allosteric switching, which shows that this riboswitch is under kinetic control lending additional support to the conclusion based on single molecule pulling experiments. A consequence of the stability hypothesis is that a single point mutation (U28C) in the P2 helix of the add A-riboswitch, which increases the stability of P2, would make the folding landscapes of the two riboswitches similar. This prediction can be tested in single molecule pulling experiments. |
1604.04796 | Henry Tuckwell | Henry C. Tuckwell and Susanne Ditlevsen | The space-clamped Hodgkin-Huxley system with random synaptic input:
inhibition of spiking by weak noise and analysis with moment equations | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a classical space-clamped Hodgkin-Huxley model neuron stimulated
by synaptic excitation and inhibition with conductances represented by
Ornstein-Uhlenbeck processes. Using numerical solutions of the stochastic model
system obtained by an Euler method, it is found that with excitation only there
is a critical value of the steady state excitatory conductance for repetitive
spiking without noise and for values of the conductance near the critical value
small noise has a powerfully inhibitory effect. For a given level of inhibition
there is also a critical value of the steady state excitatory conductance for
repetitive firing and it is demonstrated that noise either in the excitatory or
inhibitory processes or both can powerfully inhibit spiking. Furthermore, near
the critical value, inverse stochastic resonance was observed when noise was
present only in the inhibitory input process.
The system of 27 coupled deterministic differential equations for the
approximate first and second order moments of the 6-dimensional model is
derived. The moment differential equations are solved using Runge-Kutta methods
and the solutions are compared with the results obtained by simulation for
various sets of parameters including some with conductances obtained by
experiment on pyramidal cells of rat prefrontal cortex. The mean and variance
obtained from simulation are in good agreement when there is spiking induced by
strong stimulation and relatively small noise or when the voltage is
fluctuating at subthreshold levels. In the occasional spike mode sometimes
exhibited by spinal motoneurons and cortical pyramidal cells, the assumptions
underlying the moment equation approach are not satisfied.
| [
{
"created": "Sat, 16 Apr 2016 20:58:42 GMT",
"version": "v1"
}
] | 2016-04-19 | [
[
"Tuckwell",
"Henry C.",
""
],
[
"Ditlevsen",
"Susanne",
""
]
] | We consider a classical space-clamped Hodgkin-Huxley model neuron stimulated by synaptic excitation and inhibition with conductances represented by Ornstein-Uhlenbeck processes. Using numerical solutions of the stochastic model system obtained by an Euler method, it is found that with excitation only there is a critical value of the steady state excitatory conductance for repetitive spiking without noise and for values of the conductance near the critical value small noise has a powerfully inhibitory effect. For a given level of inhibition there is also a critical value of the steady state excitatory conductance for repetitive firing and it is demonstrated that noise either in the excitatory or inhibitory processes or both can powerfully inhibit spiking. Furthermore, near the critical value, inverse stochastic resonance was observed when noise was present only in the inhibitory input process. The system of 27 coupled deterministic differential equations for the approximate first and second order moments of the 6-dimensional model is derived. The moment differential equations are solved using Runge-Kutta methods and the solutions are compared with the results obtained by simulation for various sets of parameters including some with conductances obtained by experiment on pyramidal cells of rat prefrontal cortex. The mean and variance obtained from simulation are in good agreement when there is spiking induced by strong stimulation and relatively small noise or when the voltage is fluctuating at subthreshold levels. In the occasional spike mode sometimes exhibited by spinal motoneurons and cortical pyramidal cells, the assumptions underlying the moment equation approach are not satisfied. |
2005.08353 | Corina Drapaca | Corina S. Drapaca, Sahin Ozdemir, Elizabeth A. Proctor | A space-fractional cable equation for the propagation of action
potentials in myelinated neurons | 20 pages, 14 figures; added reference, updated formulas, added new
formulas, corrected typos, added 4 figures | Emerging Science Journal, Vol. 4, No. 3: 148-164, 2020 | 10.28991/esj-2020-01219 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Myelinated neurons are characterized by the presence of myelin, a
multilaminated wrapping around the axons formed by specialized neuroglial
cells. Myelin acts as an electrical insulator and therefore, in myelinated
neurons, the action potentials do not propagate within the axons but happen
only at the nodes of Ranvier, which are gaps in the axonal myelination. Recent
advancements in brain science have shown that the shapes, timings, and
propagation speeds of these so-called saltatory action potentials are
controlled by various biochemical interactions among neurons, glial cells, and
the extracellular space. Given the complexity of the brain's structure and
processes, the working hypothesis made in this paper is that non-local effects
are involved in the optimal propagation of action potentials. A space-fractional
cable equation for the propagation of action potentials in myelinated neurons is
proposed that involves spatial derivatives of fractional order. The effects of
non-locality on the distribution of the membrane potential are investigated
using numerical simulations.
| [
{
"created": "Sun, 17 May 2020 19:48:54 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Aug 2020 16:37:12 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Sep 2020 21:22:29 GMT",
"version": "v3"
}
] | 2020-09-17 | [
[
"Drapaca",
"Corina S.",
""
],
[
"Ozdemir",
"Sahin",
""
],
[
"Proctor",
"Elizabeth A.",
""
]
] | Myelinated neurons are characterized by the presence of myelin, a multilaminated wrapping around the axons formed by specialized neuroglial cells. Myelin acts as an electrical insulator and therefore, in myelinated neurons, the action potentials do not propagate within the axons but happen only at the nodes of Ranvier, which are gaps in the axonal myelination. Recent advancements in brain science have shown that the shapes, timings, and propagation speeds of these so-called saltatory action potentials are controlled by various biochemical interactions among neurons, glial cells, and the extracellular space. Given the complexity of the brain's structure and processes, the working hypothesis made in this paper is that non-local effects are involved in the optimal propagation of action potentials. A space-fractional cable equation for the propagation of action potentials in myelinated neurons is proposed that involves spatial derivatives of fractional order. The effects of non-locality on the distribution of the membrane potential are investigated using numerical simulations. |
1103.5917 | Yaniv Brandvain | Yaniv Brandvain and Graham Coop | Scrambling eggs: Meiotic drive and the evolution of female recombination
rates | In press in Genetics | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Theories to explain the prevalence of sex and recombination have long been a
central theme of evolutionary biology. Yet despite decades of attention
dedicated to the evolution of sex and recombination, the widespread pattern of
sex-differences in the recombination rate is not well understood and has
received relatively little theoretical attention. Here, we argue that female
meiotic drivers - alleles that increase in frequency by exploiting the
asymmetric cell division of oogenesis - present a potent selective pressure
favoring the modification of the female recombination rate. Because
recombination plays a central role in shaping patterns of variation within and
among dyads, modifiers of the female recombination rate can function as potent
suppressors or enhancers of female meiotic drive. We show that when female
recombination modifiers are unlinked to female drivers, recombination modifiers
that suppress harmful female drive can spread. By contrast, a recombination
modifier tightly linked to a driver can increase in frequency by enhancing
female drive. Our results predict that rapidly evolving female recombination
rates, particularly around centromeres, should be a common outcome of meiotic
drive. We discuss how selection to modify the efficacy of meiotic drive may
contribute to commonly observed patterns of sex-differences in recombination.
| [
{
"created": "Mon, 28 Mar 2011 16:27:01 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Dec 2011 00:26:13 GMT",
"version": "v2"
}
] | 2011-12-12 | [
[
"Brandvain",
"Yaniv",
""
],
[
"Coop",
"Graham",
""
]
] | Theories to explain the prevalence of sex and recombination have long been a central theme of evolutionary biology. Yet despite decades of attention dedicated to the evolution of sex and recombination, the widespread pattern of sex-differences in the recombination rate is not well understood and has received relatively little theoretical attention. Here, we argue that female meiotic drivers - alleles that increase in frequency by exploiting the asymmetric cell division of oogenesis - present a potent selective pressure favoring the modification of the female recombination rate. Because recombination plays a central role in shaping patterns of variation within and among dyads, modifiers of the female recombination rate can function as potent suppressors or enhancers of female meiotic drive. We show that when female recombination modifiers are unlinked to female drivers, recombination modifiers that suppress harmful female drive can spread. By contrast, a recombination modifier tightly linked to a driver can increase in frequency by enhancing female drive. Our results predict that rapidly evolving female recombination rates, particularly around centromeres, should be a common outcome of meiotic drive. We discuss how selection to modify the efficacy of meiotic drive may contribute to commonly observed patterns of sex-differences in recombination. |
1004.2101 | Ruriko Yoshida | Elissaveta Arnaoudova and David Haws and Peter Huggins and Jerzy W.
Jaromczyk and Neil Moore and Chris Schardl and Ruriko Yoshida | Statistical Phylogenetic Tree Analysis Using Differences of Means | 17 pages, 6 figures | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a statistical method to test whether two phylogenetic trees with
given alignments are significantly incongruent. Our method compares the two
distributions of phylogenetic trees given by the input alignments, instead of
comparing point estimations of trees. This statistical approach can be applied
to gene tree analysis for example, detecting unusual events in genome evolution
such as horizontal gene transfer and reshuffling. Our method uses difference of
means to compare two distributions of trees, after embedding trees in a vector
space. Bootstrapping alignment columns can then be applied to obtain p-values.
To compute distances between means, we employ a "kernel trick" which speeds up
distance calculations when trees are embedded in a high-dimensional feature
space, e.g. splits or quartets feature space. In this pilot study, first we
test our statistical method's ability to distinguish between sets of gene trees
generated under coalescence models with species trees of varying dissimilarity.
We follow our simulation results with applications to various data sets of
gophers and lice, grasses and their endophytes, and different fungal genes from
the same genome. A companion toolkit, {\tt Phylotree}, is provided to
facilitate computational experiments.
| [
{
"created": "Tue, 13 Apr 2010 03:15:37 GMT",
"version": "v1"
}
] | 2010-04-14 | [
[
"Arnaoudova",
"Elissaveta",
""
],
[
"Haws",
"David",
""
],
[
"Huggins",
"Peter",
""
],
[
"Jaromczyk",
"Jerzy W.",
""
],
[
"Moore",
"Neil",
""
],
[
"Schardl",
"Chris",
""
],
[
"Yoshida",
"Ruriko",
""
]
] | We propose a statistical method to test whether two phylogenetic trees with given alignments are significantly incongruent. Our method compares the two distributions of phylogenetic trees given by the input alignments, instead of comparing point estimations of trees. This statistical approach can be applied to gene tree analysis for example, detecting unusual events in genome evolution such as horizontal gene transfer and reshuffling. Our method uses difference of means to compare two distributions of trees, after embedding trees in a vector space. Bootstrapping alignment columns can then be applied to obtain p-values. To compute distances between means, we employ a "kernel trick" which speeds up distance calculations when trees are embedded in a high-dimensional feature space, e.g. splits or quartets feature space. In this pilot study, first we test our statistical method's ability to distinguish between sets of gene trees generated under coalescence models with species trees of varying dissimilarity. We follow our simulation results with applications to various data sets of gophers and lice, grasses and their endophytes, and different fungal genes from the same genome. A companion toolkit, {\tt Phylotree}, is provided to facilitate computational experiments. |
q-bio/0503015 | Dietrich Stauffer | Lotfi Zekri and Dietrich Stauffer | Sociophysics Simulations III: Retirement Demography | For 8th Granada seminar (AIP Conf. Proc.); 8 pages including 3
figures | null | 10.1063/1.2008592 | null | q-bio.PE | null | This third part of the lecture series deals with the question: Who will pay
for your retirement? For Western Europe the answer may be ``nobody'', but for
Algeria the demography looks more promising.
| [
{
"created": "Fri, 11 Mar 2005 13:36:05 GMT",
"version": "v1"
}
] | 2016-09-08 | [
[
"Zekri",
"Lotfi",
""
],
[
"Stauffer",
"Dietrich",
""
]
] | This third part of the lecture series deals with the question: Who will pay for your retirement? For Western Europe the answer may be ``nobody'', but for Algeria the demography looks more promising. |
0707.0114 | Nicholas Eriksson | Nicholas Eriksson, Lior Pachter, Yumi Mitsuya, Soo-Yon Rhee, Chunlin
Wang, Baback Gharizadeh, Mostafa Ronaghi, Robert W. Shafer, Niko Beerenwinkel | Viral population estimation using pyrosequencing | 23 pages, 13 figures | null | 10.1371/journal.pcbi.1000074 | null | q-bio.PE | null | The diversity of virus populations within single infected hosts presents a
major difficulty for the natural immune response as well as for vaccine design
and antiviral drug therapy. Recently developed pyrophosphate based sequencing
technologies (pyrosequencing) can be used for quantifying this diversity by
ultra-deep sequencing of virus samples. We present computational methods for
the analysis of such sequence data and apply these techniques to pyrosequencing
data obtained from HIV populations within patients harboring drug resistant
virus strains. Our main result is the estimation of the population structure of
the sample from the pyrosequencing reads. This inference is based on a
statistical approach to error correction, followed by a combinatorial algorithm
for constructing a minimal set of haplotypes that explain the data. Using this
set of explaining haplotypes, we apply a statistical model to infer the
frequencies of the haplotypes in the population via an EM algorithm. We
demonstrate that pyrosequencing reads allow for effective population
reconstruction by extensive simulations and by comparison to 165 sequences
obtained directly from clonal sequencing of four independent, diverse HIV
populations. Thus, pyrosequencing can be used for cost-effective estimation of
the structure of virus populations, promising new insights into viral
evolutionary dynamics and disease control strategies.
| [
{
"created": "Sun, 1 Jul 2007 15:36:32 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Jan 2008 19:11:08 GMT",
"version": "v2"
}
] | 2015-05-13 | [
[
"Eriksson",
"Nicholas",
""
],
[
"Pachter",
"Lior",
""
],
[
"Mitsuya",
"Yumi",
""
],
[
"Rhee",
"Soo-Yon",
""
],
[
"Wang",
"Chunlin",
""
],
[
"Gharizadeh",
"Baback",
""
],
[
"Ronaghi",
"Mostafa",
""
],
[
... | The diversity of virus populations within single infected hosts presents a major difficulty for the natural immune response as well as for vaccine design and antiviral drug therapy. Recently developed pyrophosphate based sequencing technologies (pyrosequencing) can be used for quantifying this diversity by ultra-deep sequencing of virus samples. We present computational methods for the analysis of such sequence data and apply these techniques to pyrosequencing data obtained from HIV populations within patients harboring drug resistant virus strains. Our main result is the estimation of the population structure of the sample from the pyrosequencing reads. This inference is based on a statistical approach to error correction, followed by a combinatorial algorithm for constructing a minimal set of haplotypes that explain the data. Using this set of explaining haplotypes, we apply a statistical model to infer the frequencies of the haplotypes in the population via an EM algorithm. We demonstrate that pyrosequencing reads allow for effective population reconstruction by extensive simulations and by comparison to 165 sequences obtained directly from clonal sequencing of four independent, diverse HIV populations. Thus, pyrosequencing can be used for cost-effective estimation of the structure of virus populations, promising new insights into viral evolutionary dynamics and disease control strategies. |
1902.06333 | Susan Khor | Susan Khor | Forming native shortcut networks to simulate protein folding | null | null | null | null | q-bio.MN q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Native shortcut networks (SCN0) are sub-graphs of native protein residue
networks (PRN0). In this paper, we propose the Network Dynamics (ND) model,
which reconstructs a PRN0 by adding back its edges according to some recipe,
while its nodes are fixed at native (PDB) locations. A PRN0 reconstruction will
eventually reconstruct its SCN0, but only after exploring several non-native
shortcut network (SCN) configurations. It is these other SCN configurations
that are of interest to us, as they produce the statistics to evaluate the
different recipes. The recipes vary from each other slightly to investigate the
effect of different edge orderings on protein folding. An edge ordering is
deemed more successful if it produces a stronger correlation with experimental
folding rates. Over proteins of different chain fold types, this basic
requirement is best satisfied when a recipe favours earlier restoration of
edges with smaller sequence separation, and earlier restoration of edges
incident on nodes with larger remaining degree within the 1-hop neighborhood of
a previously selected node. Further, this recipe generated more route-like
trajectories over SCN space and a wider range of simulated folding rates, both
of which signal cooperative two-state folding behavior. It also produced better
correspondence between ND calculated phi-values and experimental phi-values
obtained from a set of ten transition state ensembles (TSE). Also introduced is
the local centrality measure, which uses the centrality of initial fold
substructures to yield calculated phi-values from PRN0s with even better
overall correspondence than ND calculated phi-values. Preliminary investigation
with mixed ND recipes by chain fold type suggests that concocting further
context-sensitive recipes by chain fold type and combining them could improve
ND. This search task may be better undertaken by machine learning.
| [
{
"created": "Sun, 17 Feb 2019 21:45:24 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2019 17:32:05 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Dec 2020 21:28:50 GMT",
"version": "v3"
},
{
"created": "Wed, 31 Mar 2021 03:56:29 GMT",
"version": "v4"
},
{
"c... | 2021-08-31 | [
[
"Khor",
"Susan",
""
]
] | Native shortcut networks (SCN0) are sub-graphs of native protein residue networks (PRN0). In this paper, we propose the Network Dynamics (ND) model, which reconstructs a PRN0 by adding back its edges according to some recipe, while its nodes are fixed at native (PDB) locations. A PRN0 reconstruction will eventually reconstruct its SCN0, but only after exploring several non-native shortcut network (SCN) configurations. It is these other SCN configurations that are of interest to us, as they produce the statistics to evaluate the different recipes. The recipes vary from each other slightly to investigate the effect of different edge orderings on protein folding. An edge ordering is deemed more successful if it produces a stronger correlation with experimental folding rates. Over proteins of different chain fold types, this basic requirement is best satisfied when a recipe favours earlier restoration of edges with smaller sequence separation, and earlier restoration of edges incident on nodes with larger remaining degree within the 1-hop neighborhood of a previously selected node. Further, this recipe generated more route-like trajectories over SCN space and a wider range of simulated folding rates, both of which signal cooperative two-state folding behavior. It also produced better correspondence between ND calculated phi-values and experimental phi-values obtained from a set of ten transition state ensembles (TSE). Also introduced is the local centrality measure which uses centrality of initial fold substructures to yield calculated phi-values from PRN0s with even better overall correspondence than ND calculated phi-values. Preliminary investigation with mixed ND recipes by chain fold type suggests that concocting further context-sensitive recipes by chain fold type and combining them could improve ND. This search task may be better undertaken by machine learning. |
2204.04845 | Rahul Biswas | Rahul Biswas and Eli Shlizerman | Statistical Perspective on Functional and Causal Neural Connectomics:
The Time-Aware PC Algorithm | null | PLOS Computational Biology (2022) | 10.1371/journal.pcbi.1010653 | null | q-bio.NC q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The representation of the flow of information between neurons in the brain
based on their activity is termed the causal functional connectome. Such a
representation incorporates the dynamic nature of neuronal activity and causal
interactions between them. In contrast to the connectome, the causal functional
connectome is not directly observed and needs to be inferred from neural time
series. A popular statistical framework for inferring causal connectivity from
observations is the directed probabilistic graphical modeling. Its common
formulation is not suitable for neural time series since it was developed for
variables with independent and identically distributed static samples. In this
work, we propose to model and estimate the causal functional connectivity from
neural time series using a novel approach that adapts directed probabilistic
graphical modeling to the time series scenario. In particular, we develop the
Time-Aware PC (TPC) algorithm for estimating the causal functional
connectivity, which adapts the PC algorithm, a state-of-the-art method for
statistical causal inference. We show that the model outcome of TPC has the
properties of reflecting causality of neural interactions such as being
non-parametric, exhibiting the directed Markov property in a time-series setting,
and being predictive of the consequence of counterfactual interventions on the
time series. We demonstrate the utility of the methodology to obtain the causal
functional connectome for several datasets including simulations, benchmark
datasets, and recent multi-array electro-physiological recordings from the
mouse visual cortex.
| [
{
"created": "Mon, 11 Apr 2022 03:09:50 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Apr 2022 17:59:39 GMT",
"version": "v2"
}
] | 2022-11-16 | [
[
"Biswas",
"Rahul",
""
],
[
"Shlizerman",
"Eli",
""
]
] | The representation of the flow of information between neurons in the brain based on their activity is termed the causal functional connectome. Such representation incorporates the dynamic nature of neuronal activity and causal interactions between them. In contrast to connectome, the causal functional connectome is not directly observed and needs to be inferred from neural time series. A popular statistical framework for inferring causal connectivity from observations is the directed probabilistic graphical modeling. Its common formulation is not suitable for neural time series since it was developed for variables with independent and identically distributed static samples. In this work, we propose to model and estimate the causal functional connectivity from neural time series using a novel approach that adapts directed probabilistic graphical modeling to the time series scenario. In particular, we develop the Time-Aware PC (TPC) algorithm for estimating the causal functional connectivity, which adapts the PC algorithm, a state-of-the-art method for statistical causal inference. We show that the model outcome of TPC has the properties of reflecting causality of neural interactions such as being non-parametric, exhibiting the directed Markov property in a time-series setting, and being predictive of the consequence of counterfactual interventions on the time series. We demonstrate the utility of the methodology to obtain the causal functional connectome for several datasets including simulations, benchmark datasets, and recent multi-array electro-physiological recordings from the mouse visual cortex. |
q-bio/0512020 | Hiroo Kenzaki | Hiroo Kenzaki, Macoto Kikuchi | Diversity in Free Energy Landscape of Proteins with the Same Native
Topology | 4 pages, 3 figures | Chem. Phys. Lett. 427 (2006) 414-417 | 10.1016/j.cplett.2006.05.112 | null | q-bio.BM cond-mat.soft physics.bio-ph | null | In order to elucidate the role of the native state topology and the stability
of subdomains in protein folding, we investigate the free energy landscape of human
lysozyme, which is composed of two subdomains, by Monte Carlo simulations. A
realistic lattice model with Go-like interaction is used. We take the relative
interaction strength (stability, in other words) of two subdomains as a variable
parameter and study the folding process. A variety of folding processes is
observed, and we obtained a phase diagram of folding in terms of temperature and
the relative stability. Experimentally-observed diversity in the folding process
of c-type lysozymes is thus understood as a consequence of the difference in the
relative stability of subdomains.
| [
{
"created": "Fri, 9 Dec 2005 12:51:51 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Dec 2005 09:10:10 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Kenzaki",
"Hiroo",
""
],
[
"Kikuchi",
"Macoto",
""
]
] | In order to elucidate the role of the native state topology and the stability of subdomains in protein folding, we investigate the free energy landscape of human lysozyme, which is composed of two subdomains, by Monte Carlo simulations. A realistic lattice model with Go-like interaction is used. We take the relative interaction strength (stability, in other words) of two subdomains as a variable parameter and study the folding process. A variety of folding processes is observed, and we obtained a phase diagram of folding in terms of temperature and the relative stability. Experimentally-observed diversity in the folding process of c-type lysozymes is thus understood as a consequence of the difference in the relative stability of subdomains. |
1903.12009 | Dana Sherman | Dana Sherman and David Harel | Deciphering the underlying mechanisms of the pharyngeal motions in
Caenorhabditis elegans | 32 pages, 7 figures | null | null | null | q-bio.NC q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The pharynx of the nematode Caenorhabditis elegans is a neuromuscular pump
that exhibits two typical motions: pumping and peristalsis. While the dynamics
of these motions are well characterized, the underlying mechanisms generating
most of them are not known. In this paper, we propose comprehensive and
detailed mechanisms that can explain the various observed dynamics of the
different pharyngeal areas: the dynamics of the pumping muscles - corpus,
anterior isthmus, and terminal bulb - and the peristalsis dynamics of the
posterior isthmus muscles. While the suggested mechanisms are consistent with
all available relevant data, the assumptions on which they are based and the
open questions they raise could point at additional interesting research
directions on the C. elegans pharynx. We are hoping that appropriate
experiments on the nematode will eventually corroborate our results, and
improve our understanding of the functioning of the C. elegans pharynx, and
possibly of the mammalian digestive system.
| [
{
"created": "Thu, 28 Mar 2019 14:35:02 GMT",
"version": "v1"
}
] | 2019-03-29 | [
[
"Sherman",
"Dana",
""
],
[
"Harel",
"David",
""
]
] | The pharynx of the nematode Caenorhabditis elegans is a neuromuscular pump that exhibits two typical motions: pumping and peristalsis. While the dynamics of these motions are well characterized, the underlying mechanisms generating most of them are not known. In this paper, we propose comprehensive and detailed mechanisms that can explain the various observed dynamics of the different pharyngeal areas: the dynamics of the pumping muscles - corpus, anterior isthmus, and terminal bulb - and the peristalsis dynamics of the posterior isthmus muscles. While the suggested mechanisms are consistent with all available relevant data, the assumptions on which they are based and the open questions they raise could point at additional interesting research directions on the C. elegans pharynx. We are hoping that appropriate experiments on the nematode will eventually corroborate our results, and improve our understanding of the functioning of the C. elegans pharynx, and possibly of the mammalian digestive system. |
2007.11815 | Pablo Mata L. | Pablo Mata Almonacid and Carolina Medel | A structure-preserving numerical approach for simulating algae blooms in
marine water bodies of western Patagonia | 34 pages, 11 figures | null | null | null | q-bio.QM physics.ao-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patagonian fjords' area is one of the largest estuarine regions in the world.
Every one of its water bodies displays a unique hydrodynamic behavior with
enormous effects on the biogeochemical characteristics of the ecosystems. In
this context, algal blooms are ecological phenomena of major relevance.
Numerical simulation has proved to be a promising tool to understand their
impacts. It has not been used for studying algal blooms in this zone. This
article focuses on proposing a novel numerical model for simulating brief algal
blooms occurring in water bodies of western Patagonia. The proposed model
presents a trade-off between complexity and applicability since field-data
sparsity in the zone discourages using more sophisticated approaches. The model
is based on a two-layer description of the water column. The first layer
represents the euphotic zone where an embedded biogeochemical model of
NPZD-type is used to model a mass-conserving trophic web. High intensity wind
drives the water column mixing, introducing an upward flux of nutrients that
boosts high rates of primary production. A time-dependent Gaussian pulse is
used to describe this process. Mass losses due to detritus sinking are also
included. Then, the ecosystem's dynamics is represented by means of an
externally forced, non-autonomous system of ordinary differential equations
which is characterized by strictly positive trajectories but is no
longer mass-conserving. A structure-preserving time integrator based on a
splitting-composition technique is designed for solving the system's equations.
It is cast as a three-step algorithm and provides exact estimations of
biomass fluxes. Additionally, a genetic algorithm-based tool is used to
calibrate the model's parameters in realistic scenarios. The proposed model is
applied in a detailed study of a winter bloom in an austral fjord.
| [
{
"created": "Thu, 23 Jul 2020 06:33:33 GMT",
"version": "v1"
}
] | 2020-07-24 | [
[
"Almonacid",
"Pablo Mata",
""
],
[
"Medel",
"Carolina",
""
]
] | Patagonian fjords' area is one of the largest estuarine regions in the world. Every one of its water bodies displays a unique hydrodynamic behavior with enormous effects on the biogeochemical characteristics of the ecosystems. In this context, algal blooms are ecological phenomena of major relevance. Numerical simulation has proved to be a promising tool to understand their impacts. It has not been used for studying algal blooms in this zone. This article focuses on proposing a novel numerical model for simulating brief algal blooms occurring in water bodies of western Patagonia. The proposed model presents a trade-off between complexity and applicability since field-data sparsity in the zone discourages using more sophisticated approaches. The model is based on a two-layer description of the water column. The first layer represents the euphotic zone where an embedded biogeochemical model of NPZD-type is used to model a mass-conserving trophic web. High intensity wind drives the water column mixing, introducing an upward flux of nutrients that boosts high rates of primary production. A time-dependent Gaussian pulse is used to describe this process. Mass losses due to detritus sinking are also included. Then, the ecosystem's dynamics is represented by means of an externally forced, non-autonomous system of ordinary differential equations which is characterized by strictly positive trajectories but is no longer mass-conserving. A structure-preserving time integrator based on a splitting-composition technique is designed for solving the system's equations. It is cast as a three-step algorithm and provides exact estimations of biomass fluxes. Additionally, a genetic algorithm-based tool is used to calibrate the model's parameters in realistic scenarios. The proposed model is applied in a detailed study of a winter bloom in an austral fjord. |
1804.11011 | Mingwei Dai | Mingwei Dai, Xiang Wan, Hao Peng, Yao Wang, Yue Liu, Jin Liu, Zongben
Xu and Can Yang | Joint Analysis of Individual-level and Summary-level GWAS Data by
Leveraging Pleiotropy | 32 pages, 11 figures, 2 tables | null | null | null | q-bio.GN stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large number of recent genome-wide association studies (GWASs) for complex
phenotypes confirm the early conjecture for polygenicity, suggesting the
presence of a large number of variants with only tiny or moderate effects.
However, due to the limited sample size of a single GWAS, many associated
genetic variants are too weak to achieve the genome-wide significance. These
undiscovered variants further limit the prediction capability of GWAS.
Restricted access to the individual-level data and the increasing availability
of the published GWAS results motivate the development of methods integrating
both the individual-level and summary-level data. How to build the connection
between the individual-level and summary-level data determines the efficiency
of using the existing abundant summary-level resources with limited
individual-level data, and this issue inspires more efforts in the existing
area.
In this study, we propose a novel statistical approach, LEP, which provides a
novel way of modeling the connection between the individual-level data and
summary-level data. LEP integrates both types of data by \underline{LE}veraging
\underline{P}leiotropy to increase the statistical power of risk variants
identification and the accuracy of risk prediction. The algorithm for parameter
estimation is developed to handle genome-wide-scale data. Through comprehensive
simulation studies, we demonstrated the advantages of LEP over the existing
methods. We further applied LEP to perform integrative analysis of Crohn's
disease from WTCCC and summary statistics from GWAS of some other diseases,
such as Type 1 diabetes, Ulcerative colitis and Primary biliary cirrhosis. LEP
was able to significantly increase the statistical power of identifying risk
variants and improve the risk prediction accuracy from 63.39\% ($\pm$ 0.58\%)
to 68.33\% ($\pm$ 0.32\%) using about 195,000 variants.
| [
{
"created": "Mon, 30 Apr 2018 01:17:46 GMT",
"version": "v1"
}
] | 2018-05-01 | [
[
"Dai",
"Mingwei",
""
],
[
"Wan",
"Xiang",
""
],
[
"Peng",
"Hao",
""
],
[
"Wang",
"Yao",
""
],
[
"Liu",
"Yue",
""
],
[
"Liu",
"Jin",
""
],
[
"Xu",
"Zongben",
""
],
[
"Yang",
"Can",
""
]
] | A large number of recent genome-wide association studies (GWASs) for complex phenotypes confirm the early conjecture for polygenicity, suggesting the presence of a large number of variants with only tiny or moderate effects. However, due to the limited sample size of a single GWAS, many associated genetic variants are too weak to achieve the genome-wide significance. These undiscovered variants further limit the prediction capability of GWAS. Restricted access to the individual-level data and the increasing availability of the published GWAS results motivate the development of methods integrating both the individual-level and summary-level data. How to build the connection between the individual-level and summary-level data determines the efficiency of using the existing abundant summary-level resources with limited individual-level data, and this issue inspires more efforts in the existing area. In this study, we propose a novel statistical approach, LEP, which provides a novel way of modeling the connection between the individual-level data and summary-level data. LEP integrates both types of data by \underline{LE}veraging \underline{P}leiotropy to increase the statistical power of risk variants identification and the accuracy of risk prediction. The algorithm for parameter estimation is developed to handle genome-wide-scale data. Through comprehensive simulation studies, we demonstrated the advantages of LEP over the existing methods. We further applied LEP to perform integrative analysis of Crohn's disease from WTCCC and summary statistics from GWAS of some other diseases, such as Type 1 diabetes, Ulcerative colitis and Primary biliary cirrhosis. LEP was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.39\% ($\pm$ 0.58\%) to 68.33\% ($\pm$ 0.32\%) using about 195,000 variants. |
2112.02809 | Artemy Kolchinsky | Artemy Kolchinsky | Thermodynamics of Darwinian evolution in molecular replicators | null | null | null | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the relationship between thermodynamics, fitness, and Darwinian
evolution in autocatalytic molecular replicators. We uncover a thermodynamic
bound that relates fitness, replication rate, and the Gibbs free energy
dissipated per copy. This bound applies to a broad range of systems, including
elementary and non-elementary autocatalytic reactions, polymer-based
replicators, and certain kinds of autocatalytic sets. In addition, we show that
the critical selection coefficient (the minimal fitness difference visible to
selection) is bounded by the Gibbs free energy dissipated per replication. Our
results imply fundamental thermodynamic bounds on the strength of selection in
molecular evolution, complementary to other bounds that arise from finite
population sizes and error thresholds. These bounds may be relevant for
understanding thermodynamic constraints faced by early replicators at the
origin of life. We illustrate our approach on several examples, including a
classic model of replicators in a chemostat.
| [
{
"created": "Mon, 6 Dec 2021 06:40:29 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jan 2022 00:01:52 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Feb 2022 19:30:33 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Feb 2024 00:20:51 GMT",
"version": "v4"
}
] | 2024-02-05 | [
[
"Kolchinsky",
"Artemy",
""
]
] | We consider the relationship between thermodynamics, fitness, and Darwinian evolution in autocatalytic molecular replicators. We uncover a thermodynamic bound that relates fitness, replication rate, and the Gibbs free energy dissipated per copy. This bound applies to a broad range of systems, including elementary and non-elementary autocatalytic reactions, polymer-based replicators, and certain kinds of autocatalytic sets. In addition, we show that the critical selection coefficient (the minimal fitness difference visible to selection) is bounded by the Gibbs free energy dissipated per replication. Our results imply fundamental thermodynamic bounds on the strength of selection in molecular evolution, complementary to other bounds that arise from finite population sizes and error thresholds. These bounds may be relevant for understanding thermodynamic constraints faced by early replicators at the origin of life. We illustrate our approach on several examples, including a classic model of replicators in a chemostat. |
1411.0107 | Petr Karlovsky PhD | B. Venkatesh, U. Hettwer, B. Koopmann, P. Karlovsky | Conversion of cDNA differential display results (DDRT-PCR) into
quantitative transcription profiles | null | BMC Genomics 6/51 (2005) 1-12 | 10.1186/1471-2164-6-51 | null | q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | Background: Gene expression studies on non-model organisms require open-end
strategies for transcription profiling. Gel-based analysis of cDNA fragments
allows the detection of alterations in gene expression for genes which have neither
been sequenced yet nor are available in cDNA libraries. Commonly used protocols
are cDNA Differential Display (DDRT-PCR) and cDNA-AFLP. Both methods have been
used merely as qualitative gene discovery tools so far. Results: We developed
procedures for the conversion of DDRT-PCR data into quantitative transcription
profiles. Amplified cDNA fragments are separated on a DNA sequencer. Data
processing consists of four steps: (i) cDNA bands in lanes corresponding to
samples treated with the same primer combination are matched in order to
identify fragments originating from the same transcript, (ii) intensity of
bands is determined by densitometry, (iii) densitometric values are normalized,
and (iv) intensity ratio is calculated for each pair of corresponding bands.
Transcription profiles are represented by sets of intensity ratios (control vs.
treatment) for cDNA fragments defined by primer combination and DNA mobility.
We demonstrated the procedure by analyzing DDRT-PCR data on the effect of
secondary metabolites of oilseed rape Brassica napus on the transcriptome of
the pathogenic fungus Leptosphaeria maculans. Conclusion: We developed a data
processing procedure for quantitative analysis of amplified cDNA fragments. The
system utilizes common software and provides an open-end alternative to
microarray analysis. The processing is expected to work equally well with
DDRT-PCR and cDNA-AFLP data and be useful in research on organisms for which
microarray analysis is not available or economical.
| [
{
"created": "Sat, 1 Nov 2014 12:04:12 GMT",
"version": "v1"
}
] | 2014-11-04 | [
[
"Venkatesh",
"B.",
""
],
[
"Hettwer",
"U.",
""
],
[
"Koopmann",
"B.",
""
],
[
"Karlovsky",
"P.",
""
]
] | Background: Gene expression studies on non-model organisms require open-end strategies for transcription profiling. Gel-based analysis of cDNA fragments allows the detection of alterations in gene expression for genes which have neither been sequenced yet nor are available in cDNA libraries. Commonly used protocols are cDNA Differential Display (DDRT-PCR) and cDNA-AFLP. Both methods have been used merely as qualitative gene discovery tools so far. Results: We developed procedures for the conversion of DDRT-PCR data into quantitative transcription profiles. Amplified cDNA fragments are separated on a DNA sequencer. Data processing consists of four steps: (i) cDNA bands in lanes corresponding to samples treated with the same primer combination are matched in order to identify fragments originating from the same transcript, (ii) intensity of bands is determined by densitometry, (iii) densitometric values are normalized, and (iv) intensity ratio is calculated for each pair of corresponding bands. Transcription profiles are represented by sets of intensity ratios (control vs. treatment) for cDNA fragments defined by primer combination and DNA mobility. We demonstrated the procedure by analyzing DDRT-PCR data on the effect of secondary metabolites of oilseed rape Brassica napus on the transcriptome of the pathogenic fungus Leptosphaeria maculans. Conclusion: We developed a data processing procedure for quantitative analysis of amplified cDNA fragments. The system utilizes common software and provides an open-end alternative to microarray analysis. The processing is expected to work equally well with DDRT-PCR and cDNA-AFLP data and be useful in research on organisms for which microarray analysis is not available or economical. |
2208.02127 | Prajakta Bedekar | Prajakta Bedekar (1 and 2), Anthony J. Kearsley (1), Paul N. Patrone
(1) ((1) National Institute of Standards and Technology, (2) Johns Hopkins
University) | Prevalence Estimation and Optimal Classification Methods to Account for
Time Dependence in Antibody Levels | 29 pages, 11 figures | null | null | null | q-bio.QM math.OC math.PR stat.AP | http://creativecommons.org/licenses/by/4.0/ | Serology testing can identify past infection by quantifying the immune
response of an infected individual, providing important public health guidance.
Individual immune responses are time-dependent, which is reflected in antibody
measurements. Moreover, the probability of obtaining a particular measurement
changes due to prevalence as the disease progresses. Taking into account these
personal and population-level effects, we develop a mathematical model that
suggests a natural adaptive scheme for estimating prevalence as a function of
time. We then combine the estimated prevalence with optimal decision theory to
develop a time-dependent probabilistic classification scheme that minimizes
error. We validate this analysis by using a combination of real-world and
synthetic SARS-CoV-2 data and discuss the type of longitudinal studies needed
to execute this scheme in real-world settings.
| [
{
"created": "Wed, 3 Aug 2022 15:06:42 GMT",
"version": "v1"
}
] | 2022-08-04 | [
[
"Bedekar",
"Prajakta",
"",
"1 and 2"
],
[
"Kearsley",
"Anthony J.",
""
],
[
"Patrone",
"Paul N.",
""
]
] | Serology testing can identify past infection by quantifying the immune response of an infected individual, providing important public health guidance. Individual immune responses are time-dependent, which is reflected in antibody measurements. Moreover, the probability of obtaining a particular measurement changes due to prevalence as the disease progresses. Taking into account these personal and population-level effects, we develop a mathematical model that suggests a natural adaptive scheme for estimating prevalence as a function of time. We then combine the estimated prevalence with optimal decision theory to develop a time-dependent probabilistic classification scheme that minimizes error. We validate this analysis by using a combination of real-world and synthetic SARS-CoV-2 data and discuss the type of longitudinal studies needed to execute this scheme in real-world settings. |
2402.06960 | Mattia Miotto | Mattia Miotto, Lorenzo Di Rienzo, Leonardo Bo', Giancarlo Ruocco,
Edoardo Milanetti | Zepyros: A webserver to evaluate the shape complementarity of
protein-protein interfaces | 4 pages, 1 figure | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Shape complementarity of molecular surfaces at the interfaces is a well-known
characteristic of protein-protein binding regions, and it is critical in
influencing the stability of the complex. Measuring such complementarity is at
the basis of methods for both the prediction of possible interactions and for
the design/optimization of specific ones. However, only a limited number of
tools are currently available to efficiently and rapidly assess it. Here, we
introduce Zepyros, a webserver for fast measuring of the shape complementarity
between two molecular interfaces of a given protein-protein complex using
structural information. Zepyros is implemented as a publicly available tool
with a user-friendly interface. Our server can be found at the following link
(all major browsers supported): https://zepyros.bio-groups.com
| [
{
"created": "Sat, 10 Feb 2024 14:29:27 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Miotto",
"Mattia",
""
],
[
"Di Rienzo",
"Lorenzo",
""
],
[
"Bo'",
"Leonardo",
""
],
[
"Ruocco",
"Giancarlo",
""
],
[
"Milanetti",
"Edoardo",
""
]
] | Shape complementarity of molecular surfaces at the interfaces is a well-known characteristic of protein-protein binding regions, and it is critical in influencing the stability of the complex. Measuring such complementarity is at the basis of methods for both the prediction of possible interactions and for the design/optimization of specific ones. However, only a limited number of tools are currently available to efficiently and rapidly assess it. Here, we introduce Zepyros, a webserver for fast measuring of the shape complementarity between two molecular interfaces of a given protein-protein complex using structural information. Zepyros is implemented as a publicly available tool with a user-friendly interface. Our server can be found at the following link (all major browsers supported): https://zepyros.bio-groups.com |
2005.08134 | Joceline Lega | Joceline Lega | Parameter Estimation from ICC curves | null | Journal of Biological Dynamics 15, 195-212 (2021) | 10.1080/17513758.2021.1912419 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incidence vs Cumulative Cases (ICC) curves are introduced and shown to
provide a simple framework for parameter identification in the case of the most
elementary epidemiological model, consisting of susceptible, infected, and
removed compartments. This novel methodology is used to estimate the basic
reproduction ratio of recent outbreaks, including the ongoing COVID-19
epidemic.
| [
{
"created": "Sun, 17 May 2020 00:23:23 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 23:32:24 GMT",
"version": "v2"
}
] | 2021-04-14 | [
[
"Lega",
"Joceline",
""
]
] | Incidence vs Cumulative Cases (ICC) curves are introduced and shown to provide a simple framework for parameter identification in the case of the most elementary epidemiological model, consisting of susceptible, infected, and removed compartments. This novel methodology is used to estimate the basic reproduction ratio of recent outbreaks, including the ongoing COVID-19 epidemic. |
q-bio/0703030 | Hagai B. Perets | Ofer Biham, Nathalie Q. Balaban, Adiel Loinger, Azi Lipshtat, Hagai B.
Perets | Deterministic and Stochastic Simulations of Simple Genetic Circuits | 14 pages, 5 figures. To be published in the Banach Center
Publications, the Proceedings of the workshop on "Stochastic models in
biological sciences", Institute of Applied Mathematics Warsaw University,
Warsaw, Poland, 29/5/06-2/6/06 | null | null | null | q-bio.MN q-bio.QM | null | We analyze three simple genetic circuits which involve transcriptional
regulation and feedback: the autorepressor, the switch and the repressilator,
that consist of one, two and three genes, respectively. Such systems are
commonly simulated using rate equations, that account for the concentrations of
the mRNAs and proteins produced by these genes. Rate equations are suitable
when the concentrations of the relevant molecules in a cell are large and
fluctuations are negligible. However, when some of the proteins in the circuit
appear in low copy numbers, fluctuations become important and the rate
equations fail. In this case stochastic methods, such as direct numerical
integration of the master equation or Monte Carlo simulations are required.
Here we present deterministic and stochastic simulations of the autorepressor,
the switch and the repressilator. We show that fluctuations give rise to
quantitative and qualitative changes in the dynamics of these systems. In
particular, we demonstrate a fluctuations-induced bistability in a variant of
the genetic switch and noisy oscillations obtained in the repressilator
circuit.
| [
{
"created": "Tue, 13 Mar 2007 07:01:58 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Biham",
"Ofer",
""
],
[
"Balaban",
"Nathalie Q.",
""
],
[
"Loinger",
"Adiel",
""
],
[
"Lipshtat",
"Azi",
""
],
[
"Perets",
"Hagai B.",
""
]
] | We analyze three simple genetic circuits which involve transcriptional regulation and feedback: the autorepressor, the switch and the repressilator, that consist of one, two and three genes, respectively. Such systems are commonly simulated using rate equations, that account for the concentrations of the mRNAs and proteins produced by these genes. Rate equations are suitable when the concentrations of the relevant molecules in a cell are large and fluctuations are negligible. However, when some of the proteins in the circuit appear in low copy numbers, fluctuations become important and the rate equations fail. In this case stochastic methods, such as direct numerical integration of the master equation or Monte Carlo simulations are required. Here we present deterministic and stochastic simulations of the autorepressor, the switch and the repressilator. We show that fluctuations give rise to quantitative and qualitative changes in the dynamics of these systems. In particular, we demonstrate a fluctuations-induced bistability in a variant of the genetic switch and noisy oscillations obtained in the repressilator circuit. |
1712.03855 | Artem Novozhilov | Alexander S. Bratus, Artem S. Novozhilov, and Yuri S. Semenov | Rigorous mathematical analysis of the quasispecies model: From Manfred
Eigen to the recent developments | 22 pages, 6 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review the major progress in the rigorous analysis of the classical
quasispecies model that usually comes in two related but different forms: the
Eigen model and the Crow--Kimura model. The model itself was formulated almost
50 years ago, and in its stationary form represents an easy to formulate
eigenvalue problem. Notwithstanding the simplicity of the problem statement, we
still lack full understanding of the behavior of the mean population fitness
and the quasispecies distribution for an arbitrary fitness landscape. Our main
goal in this review is two-fold: First, to highlight a number of impressive
mathematical results, including some of the recent ones, which pertain to the
mathematical development of the quasispecies theory. Second, to emphasize that,
despite these 50 years of vigorous research, there are still very natural
questions, both biological and mathematical, that remain to be addressed within the
quasispecies framework. Our hope is that at least some of the approaches we
review in this text can be of help for anyone embarking on further analysis of
the quasispecies model.
| [
{
"created": "Mon, 11 Dec 2017 15:59:27 GMT",
"version": "v1"
}
] | 2017-12-12 | [
[
"Bratus",
"Alexander S.",
""
],
[
"Novozhilov",
"Artem S.",
""
],
[
"Semenov",
"Yuri S.",
""
]
] | We review the major progress in the rigorous analysis of the classical quasispecies model that usually comes in two related but different forms: the Eigen model and the Crow--Kimura model. The model itself was formulated almost 50 years ago, and in its stationary form represents an easy to formulate eigenvalue problem. Notwithstanding the simplicity of the problem statement, we still lack full understanding of the behavior of the mean population fitness and the quasispecies distribution for an arbitrary fitness landscape. Our main goal in this review is two-fold: First, to highlight a number of impressive mathematical results, including some of the recent ones, which pertain to the mathematical development of the quasispecies theory. Second, to emphasize that, despite these 50 years of vigorous research, there are still very natural questions, both biological and mathematical, that remain to be addressed within the quasispecies framework. Our hope is that at least some of the approaches we review in this text can be of help for anyone embarking on further analysis of the quasispecies model. |
2012.02019 | Anamaria Sanchez-Daza | Danton Freire-Flores, Nyna Llanovarced-Kawles, Anamaria Sanchez-Daza,
\'Alvaro Olivera-Nappa | On the heterogeneous spread of COVID-19 in Chile | null | null | 10.1016/j.chaos.2021.111156 | null | q-bio.PE physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-pharmaceutical interventions (NPIs) have played a crucial role in
controlling the spread of COVID-19. Nevertheless, NPI efficacy varies
enormously between and within countries, mainly because of population and
behavioural heterogeneity. In this work, we adapted a multi-group SEIRA model
to study the spreading dynamics of COVID-19 in Chile, representing
geographically separated regions of the country by different groups. We use
national mobilization statistics to estimate the connectivity between regions
and data from governmental repositories to obtain COVID-19 spreading and death
rates in each region. We then assessed the effectiveness of different NPIs by
studying the temporal evolution of the reproduction number Rt. Analyzing
data-driven and model-based estimates of Rt, we found a strong coupling of
different regions, highlighting the necessity of organized and coordinated
actions to control the spread of SARS-CoV-2. Finally, we evaluated different
scenarios to forecast the evolution of COVID-19 in the most densely populated
regions, finding that the early lifting of restrictions will probably lead to
novel outbreaks.
| [
{
"created": "Thu, 3 Dec 2020 16:03:39 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 16:28:00 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Jun 2021 17:48:15 GMT",
"version": "v3"
}
] | 2021-06-28 | [
[
"Freire-Flores",
"Danton",
""
],
[
"Llanovarced-Kawles",
"Nyna",
""
],
[
"Sanchez-Daza",
"Anamaria",
""
],
[
"Olivera-Nappa",
"Álvaro",
""
]
] | Non-pharmaceutical interventions (NPIs) have played a crucial role in controlling the spread of COVID-19. Nevertheless, NPI efficacy varies enormously between and within countries, mainly because of population and behavioural heterogeneity. In this work, we adapted a multi-group SEIRA model to study the spreading dynamics of COVID-19 in Chile, representing geographically separated regions of the country by different groups. We use national mobilization statistics to estimate the connectivity between regions and data from governmental repositories to obtain COVID-19 spreading and death rates in each region. We then assessed the effectiveness of different NPIs by studying the temporal evolution of the reproduction number Rt. Analyzing data-driven and model-based estimates of Rt, we found a strong coupling of different regions, highlighting the necessity of organized and coordinated actions to control the spread of SARS-CoV-2. Finally, we evaluated different scenarios to forecast the evolution of COVID-19 in the most densely populated regions, finding that the early lifting of restrictions will probably lead to novel outbreaks. |
0706.2024 | Eben Kenah | Eben Kenah, Marc Lipsitch, James M. Robins | Generation interval contraction and epidemic data analysis | 20 pages, 5 figures; to appear in Mathematical Biosciences | Mathematical Biosciences 213(1): 71-79, May 2008 | 10.1016/j.mbs.2008.02.007 | null | q-bio.QM math.PR stat.AP | null | The generation interval is the time between the infection time of an infected
person and the infection time of his or her infector. Probability density
functions for generation intervals have been an important input for epidemic
models and epidemic data analysis. In this paper, we specify a general
stochastic SIR epidemic model and prove that the mean generation interval
decreases when susceptible persons are at risk of infectious contact from
multiple sources. The intuition behind this is that when a susceptible person
has multiple potential infectors, there is a ``race'' to infect him or her in
which only the first infectious contact leads to infection. In an epidemic, the
mean generation interval contracts as the prevalence of infection increases. We
call this global competition among potential infectors. When there is rapid
transmission within clusters of contacts, generation interval contraction can
be caused by a high local prevalence of infection even when the global
prevalence is low. We call this local competition among potential infectors.
Using simulations, we illustrate both types of competition.
Finally, we show that hazards of infectious contact can be used instead of
generation intervals to estimate the time course of the effective reproductive
number in an epidemic. This approach leads naturally to partial likelihoods for
epidemic data that are very similar to those that arise in survival analysis,
opening a promising avenue of methodological research in infectious disease
epidemiology.
| [
{
"created": "Thu, 14 Jun 2007 02:00:03 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Dec 2007 02:02:05 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Feb 2008 00:44:47 GMT",
"version": "v3"
}
] | 2023-10-24 | [
[
"Kenah",
"Eben",
""
],
[
"Lipsitch",
"Marc",
""
],
[
"Robins",
"James M.",
""
]
] | The generation interval is the time between the infection time of an infected person and the infection time of his or her infector. Probability density functions for generation intervals have been an important input for epidemic models and epidemic data analysis. In this paper, we specify a general stochastic SIR epidemic model and prove that the mean generation interval decreases when susceptible persons are at risk of infectious contact from multiple sources. The intuition behind this is that when a susceptible person has multiple potential infectors, there is a ``race'' to infect him or her in which only the first infectious contact leads to infection. In an epidemic, the mean generation interval contracts as the prevalence of infection increases. We call this global competition among potential infectors. When there is rapid transmission within clusters of contacts, generation interval contraction can be caused by a high local prevalence of infection even when the global prevalence is low. We call this local competition among potential infectors. Using simulations, we illustrate both types of competition. Finally, we show that hazards of infectious contact can be used instead of generation intervals to estimate the time course of the effective reproductive number in an epidemic. This approach leads naturally to partial likelihoods for epidemic data that are very similar to those that arise in survival analysis, opening a promising avenue of methodological research in infectious disease epidemiology. |
2003.13221 | Andrew Warrington | Frank Wood, Andrew Warrington, Saeid Naderiparizi, Christian Weilbach,
Vaden Masrani, William Harvey, Adam Scibior, Boyan Beronov, John
Grefenstette, Duncan Campbell and Ali Nasseri | Planning as Inference in Epidemiological Models | Revisions | Front Artif Intell. 2021; 4: 550603 | 10.3389/frai.2021.550603 | null | q-bio.PE cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | In this work we demonstrate how to automate parts of the infectious
disease-control policy-making process via performing inference in existing
epidemiological models. The kind of inference tasks undertaken include
computing the posterior distribution over controllable, via direct
policy-making choices, simulation model parameters that give rise to acceptable
disease progression outcomes. Among other things, we illustrate the use of a
probabilistic programming language that automates inference in existing
simulators. Neither the full capabilities of this tool for automating inference
nor its utility for planning is widely disseminated at the current time. Timely
gains in understanding about how such simulation-based models and inference
automation tools applied in support of policymaking could lead to less
economically damaging policy prescriptions, particularly during the current
COVID-19 pandemic.
| [
{
"created": "Mon, 30 Mar 2020 05:10:26 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Apr 2020 02:17:11 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Sep 2021 19:26:40 GMT",
"version": "v3"
}
] | 2022-09-07 | [
[
"Wood",
"Frank",
""
],
[
"Warrington",
"Andrew",
""
],
[
"Naderiparizi",
"Saeid",
""
],
[
"Weilbach",
"Christian",
""
],
[
"Masrani",
"Vaden",
""
],
[
"Harvey",
"William",
""
],
[
"Scibior",
"Adam",
""
],... | In this work we demonstrate how to automate parts of the infectious disease-control policy-making process via performing inference in existing epidemiological models. The kind of inference tasks undertaken include computing the posterior distribution over controllable, via direct policy-making choices, simulation model parameters that give rise to acceptable disease progression outcomes. Among other things, we illustrate the use of a probabilistic programming language that automates inference in existing simulators. Neither the full capabilities of this tool for automating inference nor its utility for planning is widely disseminated at the current time. Timely gains in understanding about how such simulation-based models and inference automation tools applied in support of policymaking could lead to less economically damaging policy prescriptions, particularly during the current COVID-19 pandemic. |
0707.1224 | Michal Kol\'a\v{r} | Michal Kol\'a\v{r}, Michael L\"assig, Johannes Berg | From Protein Interactions to Functional Annotation: Graph Alignment in
Herpes | null | null | null | null | q-bio.MN q-bio.QM | null | Sequence alignment forms the basis of many methods for functional annotation
by phylogenetic comparison, but becomes unreliable in the `twilight' regions of
high sequence divergence and short gene length. Here we perform a cross-species
comparison of two herpesviruses, VZV and KSHV, with a hybrid method called
graph alignment. The method is based jointly on the similarity of protein
interaction networks and on sequence similarity. In our alignment, we find open
reading frames for which interaction similarity concurs with a low level of
sequence similarity, thus confirming the evolutionary relationship. In
addition, we find high levels of interaction similarity between open reading
frames without any detectable sequence similarity. The functional predictions
derived from this alignment are consistent with genomic position and gene
expression data.
| [
{
"created": "Mon, 9 Jul 2007 10:58:56 GMT",
"version": "v1"
}
] | 2007-07-10 | [
[
"Kolář",
"Michal",
""
],
[
"Lässig",
"Michael",
""
],
[
"Berg",
"Johannes",
""
]
] | Sequence alignment forms the basis of many methods for functional annotation by phylogenetic comparison, but becomes unreliable in the `twilight' regions of high sequence divergence and short gene length. Here we perform a cross-species comparison of two herpesviruses, VZV and KSHV, with a hybrid method called graph alignment. The method is based jointly on the similarity of protein interaction networks and on sequence similarity. In our alignment, we find open reading frames for which interaction similarity concurs with a low level of sequence similarity, thus confirming the evolutionary relationship. In addition, we find high levels of interaction similarity between open reading frames without any detectable sequence similarity. The functional predictions derived from this alignment are consistent with genomic position and gene expression data. |