| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0801.0691 | George Tsibidis | George D. Tsibidis | A FRAP model to investigate reaction-diffusion of proteins within a bounded domain: a theoretical approach | 25 pages. Abstracts Proceedings, The American Society for Cell Biology, 46th Annual Meeting, December 9-13, 2006, San Diego | Journal of Theoretical Biology 253 (2008) 755-768 | 10.1016/j.jtbi.2008.04.010 | null | q-bio.SC physics.bio-ph q-bio.BM q-bio.QM | null | Temporally and spatially resolved measurements of protein transport inside cells provide important clues to the functional architecture and dynamics of biological systems. Fluorescence Recovery After Photobleaching (FRAP) technique has been used over the past three decades to measure the mobility of macromolecules and protein transport and interaction with immobile structures inside the cell nucleus. A theoretical model is presented that aims to describe protein transport inside the nucleus, a process which is influenced by the presence of a boundary (i.e. membrane). A set of reaction-diffusion equations is employed to model both the diffusion of proteins and their interaction with immobile binding sites. The proposed model has been designed to be applied to biological samples with a Confocal Laser Scanning Microscope (CLSM) equipped with the feature to bleach regions characterised by a scanning beam that has a radially Gaussian distributed profile. The proposed model leads to FRAP curves that depend on the on- and off-rates. Semi-analytical expressions are used to define the boundaries of on- (off-) rate parameter space in simplified cases when molecules move within a bounded domain. The theoretical model can be used in conjunction to experimental data acquired by CLSM to investigate the biophysical properties of proteins in living cells. | [{"created": "Fri, 4 Jan 2008 15:33:28 GMT", "version": "v1"}] | 2009-03-04 | [["Tsibidis", "George D.", ""]] | Temporally and spatially resolved measurements of protein transport inside cells provide important clues to the functional architecture and dynamics of biological systems. The Fluorescence Recovery After Photobleaching (FRAP) technique has been used over the past three decades to measure the mobility of macromolecules and protein transport and interaction with immobile structures inside the cell nucleus. A theoretical model is presented that aims to describe protein transport inside the nucleus, a process which is influenced by the presence of a boundary (i.e. membrane). A set of reaction-diffusion equations is employed to model both the diffusion of proteins and their interaction with immobile binding sites. The proposed model has been designed to be applied to biological samples with a Confocal Laser Scanning Microscope (CLSM) equipped with the feature to bleach regions characterised by a scanning beam that has a radially Gaussian distributed profile. The proposed model leads to FRAP curves that depend on the on- and off-rates. Semi-analytical expressions are used to define the boundaries of on- (off-) rate parameter space in simplified cases when molecules move within a bounded domain. The theoretical model can be used in conjunction with experimental data acquired by CLSM to investigate the biophysical properties of proteins in living cells. |
| 1012.1281 | Daniel Hernan Barmak | Marcelo J Otero, Daniel H Barmak, Claudio O Dorso, Hernán G Solari, Mario A Natiello | Modeling Dengue Outbreaks | 18 pages, 6 figures, submitted to Mathematical Biosciences | null | null | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a dengue model (SEIR) where the human individuals are treated on an individual basis (IBM) while the mosquito population, produced by an independent model, is treated by compartments (SEI). We study the spread of epidemics by the sole action of the mosquito. Exponential, deterministic and experimental distributions for the (human) exposed period are considered in two weather scenarios, one corresponding to temperate climate and the other to tropical climate. Virus circulation, final epidemic size and duration of outbreaks are considered showing that the results present little sensitivity to the statistics followed by the exposed period provided the median of the distributions are in coincidence. Only the time between an introduced (imported) case and the appearance of the first symptomatic secondary case is sensitive to this distribution. We finally show that the IBM model introduced is precisely a realization of a compartmental model, and that at least in this case, the choice between compartmental models or IBM is only a matter of convenience. | [{"created": "Mon, 6 Dec 2010 18:55:28 GMT", "version": "v1"}] | 2015-03-17 | [["Otero", "Marcelo J", ""], ["Barmak", "Daniel H", ""], ["Dorso", "Claudio O", ""], ["Solari", "Hernán G", ""], ["Natiello", "Mario A", ""]] | We introduce a dengue model (SEIR) where the human individuals are treated on an individual basis (IBM) while the mosquito population, produced by an independent model, is treated by compartments (SEI). We study the spread of epidemics by the sole action of the mosquito. Exponential, deterministic and experimental distributions for the (human) exposed period are considered in two weather scenarios, one corresponding to temperate climate and the other to tropical climate. Virus circulation, final epidemic size and duration of outbreaks are considered, showing that the results present little sensitivity to the statistics followed by the exposed period provided the medians of the distributions coincide. Only the time between an introduced (imported) case and the appearance of the first symptomatic secondary case is sensitive to this distribution. We finally show that the IBM model introduced is precisely a realization of a compartmental model, and that at least in this case, the choice between compartmental models or IBM is only a matter of convenience. |
| 1801.09165 | Vadim Zotev | Vadim Zotev, Raquel Phillips, Masaya Misaki, Chung Ki Wong, Brent E. Wurfel, Frank Krueger, Matthew Feldner, Jerzy Bodurka | Real-time fMRI neurofeedback training of the amygdala activity with simultaneous EEG in veterans with combat-related PTSD | 26 pages, 16 figures, to appear in NeuroImage: Clinical | NeuroImage: Clinical 19 (2018) 106-121 | 10.1016/j.nicl.2018.04.010 | null | q-bio.NC physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Posttraumatic stress disorder (PTSD) is a chronic and disabling neuropsychiatric disorder characterized by insufficient top-down modulation of the amygdala activity by the prefrontal cortex. Real-time fMRI neurofeedback (rtfMRI-nf) is an emerging method with potential for modifying the amygdala-prefrontal interactions. We report the first controlled emotion self-regulation study in veterans with combat-related PTSD utilizing rtfMRI-nf of the amygdala activity. PTSD patients in the experimental group (EG, n=20) learned to upregulate BOLD activity of the left amygdala (LA) using rtfMRI-nf during a happy emotion induction task. PTSD patients in the control group (CG, n=11) were provided with a sham rtfMRI-nf. The study included three rtfMRI-nf training sessions, and EEG recordings were performed simultaneously with fMRI. PTSD severity was assessed using the Clinician-Administered PTSD Scale (CAPS). The EG participants showed a significant reduction in total CAPS ratings, including significant reductions in avoidance and hyperarousal symptoms. Overall, 80% of the EG participants demonstrated clinically meaningful reductions in CAPS ratings, compared to 38% in the CG. During the first session, fMRI connectivity of the LA with the orbitofrontal cortex and the dorsolateral prefrontal cortex (DLPFC) was progressively enhanced, and this enhancement significantly and positively correlated with initial CAPS ratings. Left-lateralized enhancement in upper alpha EEG coherence also exhibited a significant positive correlation with the initial CAPS. Reduction in PTSD severity between the first and last rtfMRI-nf sessions significantly correlated with enhancement in functional connectivity between the LA and the left DLPFC. Our results demonstrate that the rtfMRI-nf of the amygdala activity has the potential to correct the amygdala-prefrontal functional connectivity deficiencies specific to PTSD. | [{"created": "Sun, 28 Jan 2018 01:41:00 GMT", "version": "v1"}, {"created": "Tue, 10 Apr 2018 22:11:15 GMT", "version": "v2"}] | 2018-04-13 | [["Zotev", "Vadim", ""], ["Phillips", "Raquel", ""], ["Misaki", "Masaya", ""], ["Wong", "Chung Ki", ""], ["Wurfel", "Brent E.", ""], ["Krueger", "Frank", ""], ["Feldner", "Matthew", ""], ["Bodurka", "Jerzy", ""]] | Posttraumatic stress disorder (PTSD) is a chronic and disabling neuropsychiatric disorder characterized by insufficient top-down modulation of the amygdala activity by the prefrontal cortex. Real-time fMRI neurofeedback (rtfMRI-nf) is an emerging method with potential for modifying the amygdala-prefrontal interactions. We report the first controlled emotion self-regulation study in veterans with combat-related PTSD utilizing rtfMRI-nf of the amygdala activity. PTSD patients in the experimental group (EG, n=20) learned to upregulate BOLD activity of the left amygdala (LA) using rtfMRI-nf during a happy emotion induction task. PTSD patients in the control group (CG, n=11) were provided with a sham rtfMRI-nf. The study included three rtfMRI-nf training sessions, and EEG recordings were performed simultaneously with fMRI. PTSD severity was assessed using the Clinician-Administered PTSD Scale (CAPS). The EG participants showed a significant reduction in total CAPS ratings, including significant reductions in avoidance and hyperarousal symptoms. Overall, 80% of the EG participants demonstrated clinically meaningful reductions in CAPS ratings, compared to 38% in the CG. During the first session, fMRI connectivity of the LA with the orbitofrontal cortex and the dorsolateral prefrontal cortex (DLPFC) was progressively enhanced, and this enhancement significantly and positively correlated with initial CAPS ratings. Left-lateralized enhancement in upper alpha EEG coherence also exhibited a significant positive correlation with the initial CAPS. Reduction in PTSD severity between the first and last rtfMRI-nf sessions significantly correlated with enhancement in functional connectivity between the LA and the left DLPFC. Our results demonstrate that the rtfMRI-nf of the amygdala activity has the potential to correct the amygdala-prefrontal functional connectivity deficiencies specific to PTSD. |
| 1902.07386 | Spyridon Baltsavias | Spyridon Baltsavias, Will Van Treuren, Marcus J. Weber, Jayant Charthad, Sam Baker, Justin L. Sonnenburg, Amin Arbabian | In Vivo Wireless Sensors for Gut Microbiome Redox Monitoring | Minor revision. Supplementary information included at the end of the main paper | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A perturbed gut microbiome has recently been linked with multiple disease processes, yet researchers currently lack tools that can provide in vivo, quantitative, and real-time insight into these processes and associated host-microbe interactions. We propose an in vivo wireless implant for monitoring gastrointestinal tract redox states using oxidation-reduction potentials (ORP). The implant is powered and conveniently interrogated via ultrasonic waves. We engineer the sensor electronics, electrodes, and encapsulation materials for robustness in vivo, and integrate them into an implant that endures autoclave sterilization and measures ORP for 12 days implanted in the cecum of a live rat. The presented implant platform paves the way for long-term experimental testing of biological hypotheses, offering new opportunities for understanding gut redox pathophysiology mechanisms, and facilitating translation to disease diagnosis and treatment applications. | [{"created": "Wed, 20 Feb 2019 03:19:56 GMT", "version": "v1"}, {"created": "Sun, 28 Jul 2019 01:55:10 GMT", "version": "v2"}] | 2019-07-30 | [["Baltsavias", "Spyridon", ""], ["Van Treuren", "Will", ""], ["Weber", "Marcus J.", ""], ["Charthad", "Jayant", ""], ["Baker", "Sam", ""], ["Sonnenburg", "Justin L.", ""], ["Arbabian", "Amin", ""]] | A perturbed gut microbiome has recently been linked with multiple disease processes, yet researchers currently lack tools that can provide in vivo, quantitative, and real-time insight into these processes and associated host-microbe interactions. We propose an in vivo wireless implant for monitoring gastrointestinal tract redox states using oxidation-reduction potentials (ORP). The implant is powered and conveniently interrogated via ultrasonic waves. We engineer the sensor electronics, electrodes, and encapsulation materials for robustness in vivo, and integrate them into an implant that endures autoclave sterilization and measures ORP for 12 days implanted in the cecum of a live rat. The presented implant platform paves the way for long-term experimental testing of biological hypotheses, offering new opportunities for understanding gut redox pathophysiology mechanisms, and facilitating translation to disease diagnosis and treatment applications. |
| 2211.00581 | Tetsuhiro Hatakeyama | Tetsuhiro S. Hatakeyama and Ryudo Ohbayashi | Evolutionary Innovation by Polyploidy | 10 pages, 5 figures, 1 table | null | null | null | q-bio.PE physics.bio-ph | http://creativecommons.org/licenses/by-sa/4.0/ | The preferred conditions for evolutionary innovation is a fundamental question, but little is known, in part because the question involves rare events. We focused on the potential role of polyploidy in the evolution of novel traits. There are two hypotheses regarding the effects of polyploidy on evolution: Polyploidy reduces the effect of a single mutation and slows evolution. In contrast, the gene redundancy introduced by polyploidy will promote neofunctionalization and accelerate evolution. Does polyploidy speed up or slow down evolution? In this study, we proposed a simple model of polyploid cells and showed that the evolutionary rate of polyploids is similar to or much slower than that of haploids under neutral selection or during gradual evolution. However, on a fitness landscape where cells should jump over a lethal valley to increase their fitness, the probability of evolution in polyploidy could be drastically increased, and the optimal number of chromosomes was identified. We theoretically discussed the existence of this optimal chromosome number from the large deviation theory. Furthermore, we proposed that the optimization for achieving evolutionary innovation could determine the range of chromosome numbers in polyploid bacteria. | [{"created": "Tue, 1 Nov 2022 16:55:45 GMT", "version": "v1"}, {"created": "Mon, 15 Apr 2024 08:06:16 GMT", "version": "v2"}] | 2024-04-16 | [["Hatakeyama", "Tetsuhiro S.", ""], ["Ohbayashi", "Ryudo", ""]] | The preferred conditions for evolutionary innovation are a fundamental question, but little is known, in part because the question involves rare events. We focused on the potential role of polyploidy in the evolution of novel traits. There are two hypotheses regarding the effects of polyploidy on evolution: polyploidy reduces the effect of a single mutation and slows evolution; in contrast, the gene redundancy introduced by polyploidy will promote neofunctionalization and accelerate evolution. Does polyploidy speed up or slow down evolution? In this study, we proposed a simple model of polyploid cells and showed that the evolutionary rate of polyploids is similar to or much slower than that of haploids under neutral selection or during gradual evolution. However, on a fitness landscape where cells should jump over a lethal valley to increase their fitness, the probability of evolution in polyploidy could be drastically increased, and the optimal number of chromosomes was identified. We theoretically discussed the existence of this optimal chromosome number using large deviation theory. Furthermore, we proposed that the optimization for achieving evolutionary innovation could determine the range of chromosome numbers in polyploid bacteria. |
| 2407.12897 | Sai Spandana Chintapalli | Sai Spandana Chintapalli, Rongguang Wang, Zhijian Yang, Vasiliki Tassopoulou, Fanyang Yu, Vishnu Bashyam, Guray Erus, Pratik Chaudhari, Haochang Shou, Christos Davatzikos | NeuroSynth: MRI-Derived Neuroanatomical Generative Models and Associated Dataset of 18,000 Samples | null | null | null | null | q-bio.QM stat.ML | http://creativecommons.org/licenses/by/4.0/ | Availability of large and diverse medical datasets is often challenged by privacy and data sharing restrictions. For successful application of machine learning techniques for disease diagnosis, prognosis, and precision medicine, large amounts of data are necessary for model building and optimization. To help overcome such limitations in the context of brain MRI, we present NeuroSynth: a collection of generative models of normative regional volumetric features derived from structural brain imaging. NeuroSynth models are trained on real brain imaging regional volumetric measures from the iSTAGING consortium, which encompasses over 40,000 MRI scans across 13 studies, incorporating covariates such as age, sex, and race. Leveraging NeuroSynth, we produce and offer 18,000 synthetic samples spanning the adult lifespan (ages 22-90 years), alongside the model's capability to generate unlimited data. Experimental results indicate that samples generated from NeuroSynth agree with the distributions obtained from real data. Most importantly, the generated normative data significantly enhance the accuracy of downstream machine learning models on tasks such as disease classification. Data and models are available at: https://huggingface.co/spaces/rongguangw/neuro-synth. | [{"created": "Wed, 17 Jul 2024 15:33:10 GMT", "version": "v1"}] | 2024-07-19 | [["Chintapalli", "Sai Spandana", ""], ["Wang", "Rongguang", ""], ["Yang", "Zhijian", ""], ["Tassopoulou", "Vasiliki", ""], ["Yu", "Fanyang", ""], ["Bashyam", "Vishnu", ""], ["Erus", "Guray", ""], ["Chaudhari", "Pratik", ""], ["Shou", "Haochang", ""], ["Davatzikos", "Christos", ""]] | Availability of large and diverse medical datasets is often challenged by privacy and data sharing restrictions. For successful application of machine learning techniques for disease diagnosis, prognosis, and precision medicine, large amounts of data are necessary for model building and optimization. To help overcome such limitations in the context of brain MRI, we present NeuroSynth: a collection of generative models of normative regional volumetric features derived from structural brain imaging. NeuroSynth models are trained on real brain imaging regional volumetric measures from the iSTAGING consortium, which encompasses over 40,000 MRI scans across 13 studies, incorporating covariates such as age, sex, and race. Leveraging NeuroSynth, we produce and offer 18,000 synthetic samples spanning the adult lifespan (ages 22-90 years), alongside the model's capability to generate unlimited data. Experimental results indicate that samples generated from NeuroSynth agree with the distributions obtained from real data. Most importantly, the generated normative data significantly enhance the accuracy of downstream machine learning models on tasks such as disease classification. Data and models are available at: https://huggingface.co/spaces/rongguangw/neuro-synth. |
| 1501.00197 | Liane Gabora | Liane Gabora and Apara Ranjan | Nuancing the Neuron: A Review of 'The Memory Process: Neuroscientific and Humanistic Perspectives' by Suzanne Nalbantian, Paul M. Matthews, and James L. McClelland (Eds.) | null | PsycCritiques: Contemporary Psychology APA Review of Books, 56(39) (2011) | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Memory Process, edited by Suzanne Nalbantian, Paul M. Matthews, and James L. McClelland, is an intriguing and well-written book that provides a groundbreaking overview of diverse approaches to understanding memory that sets the agenda for an interdisciplinary approach to the topic. Memory has long been a focus of investigation and interest in both the sciences and the humanities. The way memory enriches and distorts lived experience has been widely explored in literature and the arts. Our fascination with the subject is increasingly evident in popular culture, with the widespread proliferation of novels and movies in which events play out in a nonlinear fashion that reflects how memories of them are woven together in the minds of the characters involved. Scientific approaches to memory have focused on the study of amnesiacs, neuroimaging studies, and cognitive studies of the formation, retrieval, and forgetting of memories. Until now, however, humanistic and scientific investigations of memory have been carried out independently. This book provides an exemplary illustration of how disciplinary boundaries can be transcended, showing how approaches to memory from the humanities and the sciences can revitalize one another. With 19 chapters written by academics from a range of disciplines including neuroscience, psychology, psychiatry, cognitive science, cultural studies, philosophy, bioethics, history of science, art, theatre, film, and literature, it is undoubtedly one of the most interdisciplinary books on library shelves. Together these diverse perspectives provide a broad and stimulating overview of the memory process. | [{"created": "Sun, 28 Dec 2014 06:05:36 GMT", "version": "v1"}] | 2015-01-05 | [["Gabora", "Liane", ""], ["Ranjan", "Apara", ""]] | The Memory Process, edited by Suzanne Nalbantian, Paul M. Matthews, and James L. McClelland, is an intriguing and well-written book that provides a groundbreaking overview of diverse approaches to understanding memory that sets the agenda for an interdisciplinary approach to the topic. Memory has long been a focus of investigation and interest in both the sciences and the humanities. The way memory enriches and distorts lived experience has been widely explored in literature and the arts. Our fascination with the subject is increasingly evident in popular culture, with the widespread proliferation of novels and movies in which events play out in a nonlinear fashion that reflects how memories of them are woven together in the minds of the characters involved. Scientific approaches to memory have focused on the study of amnesiacs, neuroimaging studies, and cognitive studies of the formation, retrieval, and forgetting of memories. Until now, however, humanistic and scientific investigations of memory have been carried out independently. This book provides an exemplary illustration of how disciplinary boundaries can be transcended, showing how approaches to memory from the humanities and the sciences can revitalize one another. With 19 chapters written by academics from a range of disciplines including neuroscience, psychology, psychiatry, cognitive science, cultural studies, philosophy, bioethics, history of science, art, theatre, film, and literature, it is undoubtedly one of the most interdisciplinary books on library shelves. Together these diverse perspectives provide a broad and stimulating overview of the memory process. |
| 1703.07996 | Sarwan Kumar | Jagdev Singh Kular and Sarwan Kumar | Quantification of avoidable yield losses in oilseed Brassica caused by insect pests | null | Journal of Plant Protection Research, 51(1): 38-43 (2011) | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A six year field study was conducted from 2001 2002 to 2006 2007 at Punjab Agricultural University, Ludhiana, India to study the losses in seed yield of different Brassica species (B. juncea, B. napus, B. carinata, B. rapa and Eruca sativa) by the infestation of insect pests. The experiment was conducted in two different sets viz. protected/sprayed and unprotected, in a randomized block design, with three replications. Data on the infestation of insect pests, and seed yield were recorded at weekly intervals and at harvest, respectively. The loss in seed yield, due to mustard aphid and cabbage caterpillar, varied from 6.5 to 26.4 per cent. E. sativa suffered the least loss in seed yield and harboured the minimum population of mustard aphid (2.1 aphids/plant) and cabbage caterpillar (2.4 larvae/plant). On the other hand, B. carinata was highly susceptible to the cabbage caterpillar (26.2 larvae/plant) and suffered the maximum yield loss (26.4%). | [{"created": "Thu, 23 Mar 2017 10:45:56 GMT", "version": "v1"}] | 2017-03-24 | [["Kular", "Jagdev Singh", ""], ["Kumar", "Sarwan", ""]] | A six-year field study was conducted from 2001-2002 to 2006-2007 at Punjab Agricultural University, Ludhiana, India to study the losses in seed yield of different Brassica species (B. juncea, B. napus, B. carinata, B. rapa and Eruca sativa) caused by the infestation of insect pests. The experiment was conducted in two different sets, viz. protected/sprayed and unprotected, in a randomized block design, with three replications. Data on the infestation of insect pests and seed yield were recorded at weekly intervals and at harvest, respectively. The loss in seed yield, due to mustard aphid and cabbage caterpillar, varied from 6.5 to 26.4 per cent. E. sativa suffered the least loss in seed yield and harboured the minimum population of mustard aphid (2.1 aphids/plant) and cabbage caterpillar (2.4 larvae/plant). On the other hand, B. carinata was highly susceptible to the cabbage caterpillar (26.2 larvae/plant) and suffered the maximum yield loss (26.4%). |
| 1309.3075 | Filip Bielejec | Filip Bielejec, Philippe Lemey, Guy Baele, Andrew Rambaut, Marc A Suchard | Inferring Heterogeneous Evolutionary Processes Through Time: from sequence substitution to phylogeography | 30 pages, 6 figures, 3 tables | null | null | null | q-bio.PE stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular phylogenetic and phylogeographic reconstructions generally assume time-homogeneous substitution processes. Motivated by computational convenience, this assumption sacrifices biological realism and offers little opportunity to uncover the temporal dynamics in evolutionary histories. Here, we extend and generalize an evolutionary approach that relaxes the time-homogeneous process assumption by allowing the specification of different infinitesimal substitution rate matrices across different time intervals, called epochs, along the evolutionary history. We focus on an epoch model implementation in a Bayesian inference framework that offers great modeling flexibility in drawing inference about any discrete data type characterized as a continuous-time Markov chain, including phylogeographic traits. To alleviate the computational burden that the additional temporal heterogeneity imposes, we adopt a massively parallel approach that achieves both fine- and coarse-grain parallelization of the computations across branches that accommodate epoch transitions, making extensive use of graphics processing units. Through synthetic examples, we assess model performance in recovering evolutionary parameters from data generated according to different evolutionary scenarios that comprise different numbers of epochs for both nucleotide and codon substitution processes. We illustrate the usefulness of our inference framework in two different applications to empirical data sets: the selection dynamics on within-host HIV populations throughout infection and the seasonality of global influenza circulation. In both cases, our epoch model captures key features of temporal heterogeneity that remained difficult to test using ad hoc procedures. | [{"created": "Thu, 12 Sep 2013 09:38:43 GMT", "version": "v1"}] | 2013-09-13 | [["Bielejec", "Filip", ""], ["Lemey", "Philippe", ""], ["Baele", "Guy", ""], ["Rambaut", "Andrew", ""], ["Suchard", "Marc A", ""]] | Molecular phylogenetic and phylogeographic reconstructions generally assume time-homogeneous substitution processes. Motivated by computational convenience, this assumption sacrifices biological realism and offers little opportunity to uncover the temporal dynamics in evolutionary histories. Here, we extend and generalize an evolutionary approach that relaxes the time-homogeneous process assumption by allowing the specification of different infinitesimal substitution rate matrices across different time intervals, called epochs, along the evolutionary history. We focus on an epoch model implementation in a Bayesian inference framework that offers great modeling flexibility in drawing inference about any discrete data type characterized as a continuous-time Markov chain, including phylogeographic traits. To alleviate the computational burden that the additional temporal heterogeneity imposes, we adopt a massively parallel approach that achieves both fine- and coarse-grain parallelization of the computations across branches that accommodate epoch transitions, making extensive use of graphics processing units. Through synthetic examples, we assess model performance in recovering evolutionary parameters from data generated according to different evolutionary scenarios that comprise different numbers of epochs for both nucleotide and codon substitution processes. We illustrate the usefulness of our inference framework in two different applications to empirical data sets: the selection dynamics on within-host HIV populations throughout infection and the seasonality of global influenza circulation. In both cases, our epoch model captures key features of temporal heterogeneity that remained difficult to test using ad hoc procedures. |
| 2308.08864 | Dipam Das | Debasish Bhattacharjee, Nabajit Ray, Dipam Das, Hemanta Kumar Sarmah | A discrete-time dynamical model of prey and stage-structured predator with juvenile hunting incorporating negative effects of prey refuge | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | This paper examines a discrete predator-prey model that incorporates prey refuge and its detrimental impact on the growth of the prey population. Age structure is taken into account for predator species. Furthermore, juvenile hunting as well as prey counter-attack are also considered. This paper provides a comprehensive analysis of the existence and stability conditions pertaining to all possible fixed points. The analytical and numerical investigation into the occurrence of different bifurcations, such as the Neimark-Sacker bifurcation and period-doubling bifurcation, in relation to various parameters is discussed. The impact of the parameters reflecting prey growth and prey refuge is thoroughly addressed. Numerous numerical simulations are presented in order to validate the theoretical findings. | [{"created": "Thu, 17 Aug 2023 08:47:03 GMT", "version": "v1"}] | 2023-08-21 | [["Bhattacharjee", "Debasish", ""], ["Ray", "Nabajit", ""], ["Das", "Dipam", ""], ["Sarmah", "Hemanta Kumar", ""]] | This paper examines a discrete predator-prey model that incorporates prey refuge and its detrimental impact on the growth of the prey population. Age structure is taken into account for predator species. Furthermore, juvenile hunting as well as prey counter-attack are also considered. This paper provides a comprehensive analysis of the existence and stability conditions pertaining to all possible fixed points. The analytical and numerical investigation into the occurrence of different bifurcations, such as the Neimark-Sacker bifurcation and period-doubling bifurcation, in relation to various parameters is discussed. The impact of the parameters reflecting prey growth and prey refuge is thoroughly addressed. Numerous numerical simulations are presented in order to validate the theoretical findings. |
1308.2107
|
Peter Wills
|
Peter R. Wills
|
Genetic information, physical interpreters and thermodynamics; the
material-informatic basis of biosemiosis
|
27 pages, 1 figure, oral presentation at 12th Annual Gatherings in
Biosemiotics, Tartu, Estonia, 2012
| null | null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sequence of nucleotide bases occurring in an organism's DNA is often
regarded as a codescript for its construction. However, information in a DNA
sequence can only be regarded as a codescript relative to an operational
biochemical machine, which the information constrains in such a way as to
direct the process of construction. In reality, any biochemical machine for
which a DNA codescript is efficacious is itself produced through the mechanical
interpretation of an identical or very similar codescript. In these terms the
origin of life can be described as a bootstrap process involving the
simultaneous accumulation of genetic information and the generation of a
machine that interprets it as instructions for its own construction. This
problem is discussed within the theoretical frameworks of thermodynamics,
informatics and self-reproducing automata, paying special attention to the
physico-chemical origin of genetic coding and the conditions, both
thermodynamic and informatic, which a system must fulfil in order for it to
sustain semiosis. The origin of life is equated with biosemiosis.
|
[
{
"created": "Fri, 9 Aug 2013 12:33:36 GMT",
"version": "v1"
}
] |
2013-08-12
|
[
[
"Wills",
"Peter R.",
""
]
] |
The sequence of nucleotide bases occurring in an organism's DNA is often regarded as a codescript for its construction. However, information in a DNA sequence can only be regarded as a codescript relative to an operational biochemical machine, which the information constrains in such a way as to direct the process of construction. In reality, any biochemical machine for which a DNA codescript is efficacious is itself produced through the mechanical interpretation of an identical or very similar codescript. In these terms the origin of life can be described as a bootstrap process involving the simultaneous accumulation of genetic information and the generation of a machine that interprets it as instructions for its own construction. This problem is discussed within the theoretical frameworks of thermodynamics, informatics and self-reproducing automata, paying special attention to the physico-chemical origin of genetic coding and the conditions, both thermodynamic and informatic, which a system must fulfil in order for it to sustain semiosis. The origin of life is equated with biosemiosis.
|
2005.14496
|
Biman Bagchi
|
Sayantan Mondal, Saumyak Mukherjee and Biman Bagchi
|
Attainment of Herd Immunity: Mathematical Modelling of Survival Rate
|
9 pages, 9 figures, 2 tables
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the influence of the rate of the attainment of herd immunity (HI),
in the absence of an approved vaccine, on the vulnerable population. We
essentially ask the question: how hard the evolution towards the desired herd
immunity could be on the life of the vulnerables? We employ mathematical
modelling (chemical network theory) and cellular automata based computer
simulations to study the human cost of an epidemic spread and an effective
strategy to introduce HI. Implementation of different strategies to counter the
spread of the disease requires a certain degree of quantitative understanding
of the time dependence of the outcome. In this paper, our main objective is to
gather understanding of the dependence of outcome on the rate of progress of
HI. We generalize the celebrated SIR model (Susceptible-Infected-Removed) by
compartmentalizing the susceptible population into two categories: (i)
vulnerables and (ii) resilients, and study the dynamical evolution of the disease
progression. We achieve such a classification by employing different rates of
recovery of vulnerables vis-a-vis resilients. We obtain the relative fatality
of these two sub-categories as a function of the percentages of the vulnerable
and resilient population, and the complex dependence on the rate of attainment
of herd immunity. Our results quantify the adverse effects on the recovery
rates of vulnerables in the course of attaining the herd immunity. We find the
important result that a slower attainment of the HI is relatively less fatal.
However, a slower progress towards HI could be complicated by many intervening
factors.
|
[
{
"created": "Fri, 29 May 2020 10:49:06 GMT",
"version": "v1"
}
] |
2020-06-01
|
[
[
"Mondal",
"Sayantan",
""
],
[
"Mukherjee",
"Saumyak",
""
],
[
"Bagchi",
"Biman",
""
]
] |
We study the influence of the rate of the attainment of herd immunity (HI), in the absence of an approved vaccine, on the vulnerable population. We essentially ask: how hard could the evolution towards the desired herd immunity be on the lives of the vulnerable? We employ mathematical modelling (chemical network theory) and cellular automata based computer simulations to study the human cost of an epidemic spread and an effective strategy to introduce HI. Implementation of different strategies to counter the spread of the disease requires a certain degree of quantitative understanding of the time dependence of the outcome. In this paper, our main objective is to gather understanding of the dependence of outcome on the rate of progress of HI. We generalize the celebrated SIR model (Susceptible-Infected-Removed) by compartmentalizing the susceptible population into two categories: (i) vulnerables and (ii) resilients, and study the dynamical evolution of the disease progression. We achieve such a classification by employing different rates of recovery of vulnerables vis-a-vis resilients. We obtain the relative fatality of these two sub-categories as a function of the percentages of the vulnerable and resilient population, and the complex dependence on the rate of attainment of herd immunity. Our results quantify the adverse effects on the recovery rates of vulnerables in the course of attaining the herd immunity. We find the important result that a slower attainment of the HI is relatively less fatal. However, a slower progress towards HI could be complicated by many intervening factors.
|
0807.3089
|
James Fowler
|
James H. Fowler, Christopher T. Dawes, Nicholas A. Christakis
|
Model of Genetic Variation in Human Social Networks
|
Additional materials related to the paper are available at
http://jhfowler.ucsd.edu
|
PNAS 106 (6): 1720-1724 (10 February 2009)
|
10.1073/pnas.0806746106
| null |
q-bio.GN physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social networks exhibit strikingly systematic patterns across a wide range of
human contexts. While genetic variation accounts for a significant portion of
the variation in many complex social behaviors, the heritability of egocentric
social network attributes is unknown. Here we show that three of these
attributes (in-degree, transitivity, and centrality) are heritable. We then
develop a "mirror network" method to test extant network models and show that
none accounts for observed genetic variation in human social networks. We
propose an alternative "Attract and Introduce" model with two simple forms of
heterogeneity that generates significant heritability as well as other
important network features. We show that the model is well suited to real
social networks in humans. These results suggest that natural selection may
have played a role in the evolution of social networks. They also suggest that
modeling intrinsic variation in network attributes may be important for
understanding the way genes affect human behaviors and the way these behaviors
spread from person to person.
|
[
{
"created": "Sat, 19 Jul 2008 13:20:00 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Oct 2008 06:42:27 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Nov 2008 23:07:27 GMT",
"version": "v3"
},
{
"created": "Wed, 11 Feb 2009 16:31:14 GMT",
"version": "v4"
}
] |
2009-02-11
|
[
[
"Fowler",
"James H.",
""
],
[
"Dawes",
"Christopher T.",
""
],
[
"Christakis",
"Nicholas A.",
""
]
] |
Social networks exhibit strikingly systematic patterns across a wide range of human contexts. While genetic variation accounts for a significant portion of the variation in many complex social behaviors, the heritability of egocentric social network attributes is unknown. Here we show that three of these attributes (in-degree, transitivity, and centrality) are heritable. We then develop a "mirror network" method to test extant network models and show that none accounts for observed genetic variation in human social networks. We propose an alternative "Attract and Introduce" model with two simple forms of heterogeneity that generates significant heritability as well as other important network features. We show that the model is well suited to real social networks in humans. These results suggest that natural selection may have played a role in the evolution of social networks. They also suggest that modeling intrinsic variation in network attributes may be important for understanding the way genes affect human behaviors and the way these behaviors spread from person to person.
|
2009.00388
|
Michel Destrade
|
Ilaria Cinelli, Michel Destrade, Peter McHugh, Antonia Trotta, Michael
Gilchrist, Maeve Duffy
|
Head-to-nerve analysis of electromechanical impairments of diffuse
axonal injury
| null |
Biomechanics and Modeling in Mechanobiology volume 18, pages
361-374(2019)
|
10.1007/s10237-018-1086-8
| null |
q-bio.TO cond-mat.soft
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim was to investigate mechanical and functional failure of diffuse
axonal injury (DAI) in nerve bundles following frontal head impacts, by finite
element simulations. Anatomical changes following traumatic brain injury are
simulated at the macroscale by using a 3D head model. Frontal head impacts at
speeds of 2.5-7.5 m/s induce mild-to-moderate DAI in the white matter of the
brain. Investigation of the changes in induced electromechanical responses at
the cellular level is carried out in two scaled nerve bundle models, one with
myelinated nerve fibres, the other with unmyelinated nerve fibres. DAI
occurrence is simulated by using a real-time fully coupled electromechanical
framework, which combines a modulated threshold for spiking activation and
independent alteration of the electrical properties for each three-layer fibre
in the nerve bundle models. The magnitudes of simulated strains in the white
matter of the brain model are used to determine the displacement boundary
conditions in elongation simulations using the 3D nerve bundle models. At high
impact speed, mechanical failure occurs at lower strain values in large
unmyelinated bundles than in myelinated bundles or small unmyelinated bundles;
signal propagation continues in large myelinated bundles during and after
loading, although there is a large shift in baseline voltage during loading; a
linear relationship is observed between the generated plastic strain in the
nerve bundle models and the impact speed and nominal strains of the head model.
The myelin layer protects the fibre from mechanical damage, preserving its
functionalities.
|
[
{
"created": "Mon, 31 Aug 2020 15:03:18 GMT",
"version": "v1"
}
] |
2020-09-02
|
[
[
"Cinelli",
"Ilaria",
""
],
[
"Destrade",
"Michel",
""
],
[
"McHugh",
"Peter",
""
],
[
"Trotta",
"Antonia",
""
],
[
"Gilchrist",
"Michael",
""
],
[
"Duffy",
"Maeve",
""
]
] |
The aim was to investigate mechanical and functional failure of diffuse axonal injury (DAI) in nerve bundles following frontal head impacts, by finite element simulations. Anatomical changes following traumatic brain injury are simulated at the macroscale by using a 3D head model. Frontal head impacts at speeds of 2.5-7.5 m/s induce mild-to-moderate DAI in the white matter of the brain. Investigation of the changes in induced electromechanical responses at the cellular level is carried out in two scaled nerve bundle models, one with myelinated nerve fibres, the other with unmyelinated nerve fibres. DAI occurrence is simulated by using a real-time fully coupled electromechanical framework, which combines a modulated threshold for spiking activation and independent alteration of the electrical properties for each three-layer fibre in the nerve bundle models. The magnitudes of simulated strains in the white matter of the brain model are used to determine the displacement boundary conditions in elongation simulations using the 3D nerve bundle models. At high impact speed, mechanical failure occurs at lower strain values in large unmyelinated bundles than in myelinated bundles or small unmyelinated bundles; signal propagation continues in large myelinated bundles during and after loading, although there is a large shift in baseline voltage during loading; a linear relationship is observed between the generated plastic strain in the nerve bundle models and the impact speed and nominal strains of the head model. The myelin layer protects the fibre from mechanical damage, preserving its functionalities.
|
1610.07306
|
Anand Bhaskar
|
Anand Bhaskar, Adel Javanmard, Thomas A. Courtade, David Tse
|
Novel probabilistic models of spatial genetic ancestry with applications
to stratification correction in genome-wide association studies
|
Supplementary information included to the main text
| null | null | null |
q-bio.PE stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genetic variation in human populations is influenced by geographic ancestry
due to spatial locality in historical mating and migration patterns. Spatial
population structure in genetic datasets has been traditionally analyzed using
either model-free algorithms, such as principal components analysis (PCA) and
multidimensional scaling, or using explicit spatial probabilistic models of
allele frequency evolution. We develop a general probabilistic model and an
associated inference algorithm that unify the model-based and data-driven
approaches to visualizing and inferring population structure. Our algorithm,
Geographic Ancestry Positioning (GAP), relates local genetic distances between
samples to their spatial distances, and can be used for visually discerning
population structure as well as accurately inferring the spatial origin of
individuals on a two-dimensional continuum. On both simulated and several real
datasets from diverse human populations, GAP exhibits substantially lower error
in reconstructing spatial ancestry coordinates compared to PCA.
Our spatial inference algorithm can also be effectively applied to the
problem of population stratification in genome-wide association studies (GWAS),
where hidden population structure can create fictitious associations when
population ancestry is correlated with both the genotype and the trait. We
develop an association test that uses the ancestry coordinates inferred by GAP
to accurately account for ancestry-induced correlations in GWAS. Based on
simulations and analysis of a dataset of 10 metabolic traits measured in a
Northern Finland cohort, which is known to exhibit significant population
structure, we find that our method has superior power to current approaches.
|
[
{
"created": "Mon, 24 Oct 2016 07:22:43 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Oct 2016 04:55:27 GMT",
"version": "v2"
}
] |
2016-10-26
|
[
[
"Bhaskar",
"Anand",
""
],
[
"Javanmard",
"Adel",
""
],
[
"Courtade",
"Thomas A.",
""
],
[
"Tse",
"David",
""
]
] |
Genetic variation in human populations is influenced by geographic ancestry due to spatial locality in historical mating and migration patterns. Spatial population structure in genetic datasets has been traditionally analyzed using either model-free algorithms, such as principal components analysis (PCA) and multidimensional scaling, or using explicit spatial probabilistic models of allele frequency evolution. We develop a general probabilistic model and an associated inference algorithm that unify the model-based and data-driven approaches to visualizing and inferring population structure. Our algorithm, Geographic Ancestry Positioning (GAP), relates local genetic distances between samples to their spatial distances, and can be used for visually discerning population structure as well as accurately inferring the spatial origin of individuals on a two-dimensional continuum. On both simulated and several real datasets from diverse human populations, GAP exhibits substantially lower error in reconstructing spatial ancestry coordinates compared to PCA. Our spatial inference algorithm can also be effectively applied to the problem of population stratification in genome-wide association studies (GWAS), where hidden population structure can create fictitious associations when population ancestry is correlated with both the genotype and the trait. We develop an association test that uses the ancestry coordinates inferred by GAP to accurately account for ancestry-induced correlations in GWAS. Based on simulations and analysis of a dataset of 10 metabolic traits measured in a Northern Finland cohort, which is known to exhibit significant population structure, we find that our method has superior power to current approaches.
|
1005.0899
|
Thierry Rabilloud
|
Thierry Rabilloud (BBSI)
|
Variations on a theme: Changes to electrophoretic separations that can
make a difference
| null |
Journal of proteomics (2010) epub ahead of print
|
10.1016/j.jprot.2010.04.001
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electrophoretic separations of proteins are widely used in proteomic
analyses, and rely heavily on SDS electrophoresis. This mode of separation is
almost exclusively used when a single dimension separation is performed, and
generally represents the second dimension of two-dimensional separations.
Electrophoretic separations for proteomics use robust, well-established
protocols. However, many variations in almost all possible parameters have been
described in the literature over the years, and they may bring a decisive
advantage when the limits of the classical protocols are reached. The purpose
of this article is to review the most important of these variations, so that
the readers can be aware of how they can improve or tune protein separations
according to their needs. The chemical variations reviewed in this paper
encompass gel structure, buffer systems and detergents for SDS electrophoresis,
two-dimensional electrophoresis based on isoelectric focusing and
two-dimensional electrophoresis based on cationic zone electrophoresis.
|
[
{
"created": "Thu, 6 May 2010 06:53:31 GMT",
"version": "v1"
}
] |
2010-05-07
|
[
[
"Rabilloud",
"Thierry",
"",
"BBSI"
]
] |
Electrophoretic separations of proteins are widely used in proteomic analyses, and rely heavily on SDS electrophoresis. This mode of separation is almost exclusively used when a single dimension separation is performed, and generally represents the second dimension of two-dimensional separations. Electrophoretic separations for proteomics use robust, well-established protocols. However, many variations in almost all possible parameters have been described in the literature over the years, and they may bring a decisive advantage when the limits of the classical protocols are reached. The purpose of this article is to review the most important of these variations, so that the readers can be aware of how they can improve or tune protein separations according to their needs. The chemical variations reviewed in this paper encompass gel structure, buffer systems and detergents for SDS electrophoresis, two-dimensional electrophoresis based on isoelectric focusing and two-dimensional electrophoresis based on cationic zone electrophoresis.
|
2007.04370
|
Gloria Cecchini
|
Lorenzo Chicchi, Gloria Cecchini, Ihusan Adam, Giuseppe de Vito,
Roberto Livi, Francesco Saverio Pavone, Ludovico Silvestri, Lapo Turrini,
Francesco Vanzi, Duccio Fanelli
|
Reconstruction scheme for excitatory and inhibitory dynamics with
quenched disorder: application to zebrafish imaging
| null |
Journal of Computational Neuroscience 49, 159-174 (2021)
|
10.1007/s10827-020-00774-1
| null |
q-bio.NC cond-mat.dis-nn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An inverse procedure is developed and tested to recover functional and
structural information from global signals of brain activity. The method
assumes a leaky integrate-and-fire model with excitatory and inhibitory
neurons, coupled via a directed network. Neurons are endowed with a
heterogeneous current value, which sets their associated dynamical regime. By
making use of a heterogeneous mean-field approximation, the method seeks to
reconstruct from global activity patterns the distribution of incoming
degrees, for both excitatory and inhibitory neurons, as well as the
distribution of the assigned currents. The proposed inverse scheme is first
validated against synthetic data. Then, time-lapse acquisitions of a zebrafish
larva recorded with a two-photon light-sheet microscope are used as an input to
the reconstruction algorithm. A power-law distribution of the incoming
connectivity of the excitatory neurons is found. Local degree distributions are
also computed by segmenting the whole brain into sub-regions traced from an
annotated atlas.
|
[
{
"created": "Wed, 8 Jul 2020 18:58:23 GMT",
"version": "v1"
}
] |
2021-05-14
|
[
[
"Chicchi",
"Lorenzo",
""
],
[
"Cecchini",
"Gloria",
""
],
[
"Adam",
"Ihusan",
""
],
[
"de Vito",
"Giuseppe",
""
],
[
"Livi",
"Roberto",
""
],
[
"Pavone",
"Francesco Saverio",
""
],
[
"Silvestri",
"Ludovico",
""
],
[
"Turrini",
"Lapo",
""
],
[
"Vanzi",
"Francesco",
""
],
[
"Fanelli",
"Duccio",
""
]
] |
An inverse procedure is developed and tested to recover functional and structural information from global signals of brain activity. The method assumes a leaky integrate-and-fire model with excitatory and inhibitory neurons, coupled via a directed network. Neurons are endowed with a heterogeneous current value, which sets their associated dynamical regime. By making use of a heterogeneous mean-field approximation, the method seeks to reconstruct from global activity patterns the distribution of incoming degrees, for both excitatory and inhibitory neurons, as well as the distribution of the assigned currents. The proposed inverse scheme is first validated against synthetic data. Then, time-lapse acquisitions of a zebrafish larva recorded with a two-photon light-sheet microscope are used as an input to the reconstruction algorithm. A power-law distribution of the incoming connectivity of the excitatory neurons is found. Local degree distributions are also computed by segmenting the whole brain into sub-regions traced from an annotated atlas.
|
2006.15128
|
John Antrobus Ph.D.
|
John S. Antrobus, Yusuke Shono, Wolfgang M. Pauli, and Bala Sundaram
|
All Recognition is Accomplished By Interacting Bottom-Up Sensory and
Top-Down Context Bias in Occipital to Frontal Cortex Neural Networks
|
39 pages, 1 figure
| null | null | null |
q-bio.NC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Recognition of every word is accomplished by close collaboration of bottom-up
sub-word and word recognition neural networks with top-down cognitive word
context expectations. The utility of this context appropriate collaboration is
substantial savings in recognition time, accuracy and cortical neural
processing resources. Repetition priming, the simplest form of context
facilitation, has been studied extensively, but behavioral and cognitive
neuroscience research has failed to produce a common shared model. Facilitation
is attributed to temporary lowered word recognition thresholds. Recent fMRI
evidence identifies frontal, prefrontal, left temporal cortex interactions as
the source of this priming bias. Five experiments presented here clearly
demonstrate that word recognition facilitation is a bias effect. Context-Biased
Fast Accurate Recognition, a recurrent neural network model, shows how this
anticipatory bias is accomplished by interactions among top-down conceptual
cognitive networks and bottom-up lexical word recognition networks. Signal
detection theory says that this facilitation bias is offset by the cost of
misrecognizing similar but different words. However, the prime typically
creates a temporary time-space recognition window within which the probability
of prime recurrence is substantially raised, paradoxically transforming bias
into sensitivity.
|
[
{
"created": "Fri, 26 Jun 2020 17:36:58 GMT",
"version": "v1"
}
] |
2020-06-29
|
[
[
"Antrobus",
"John S.",
""
],
[
"Shono",
"Yusuke",
""
],
[
"Pauli",
"Wolfgang M.",
""
],
[
"Sundaram",
"Bala",
""
]
] |
Recognition of every word is accomplished by close collaboration of bottom-up sub-word and word recognition neural networks with top-down cognitive word context expectations. The utility of this context appropriate collaboration is substantial savings in recognition time, accuracy and cortical neural processing resources. Repetition priming, the simplest form of context facilitation, has been studied extensively, but behavioral and cognitive neuroscience research has failed to produce a common shared model. Facilitation is attributed to temporary lowered word recognition thresholds. Recent fMRI evidence identifies frontal, prefrontal, left temporal cortex interactions as the source of this priming bias. Five experiments presented here clearly demonstrate that word recognition facilitation is a bias effect. Context-Biased Fast Accurate Recognition, a recurrent neural network model, shows how this anticipatory bias is accomplished by interactions among top-down conceptual cognitive networks and bottom-up lexical word recognition networks. Signal detection theory says that this facilitation bias is offset by the cost of misrecognizing similar but different words. However, the prime typically creates a temporary time-space recognition window within which the probability of prime recurrence is substantially raised, paradoxically transforming bias into sensitivity.
|
1708.04103
|
Vince Grolmusz
|
Balazs Szalkai and Vince Grolmusz
|
SECLAF: A Webserver and Deep Neural Network Design Tool for Biological
Sequence Classification
| null | null | null | null |
q-bio.BM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial intelligence (AI) tools are gaining more and more ground each year
in bioinformatics. Learning algorithms can be taught easily by using the
existing enormous biological databases, and the resulting models can be used
for the high-quality classification of novel, un-categorized data in numerous
areas, including biological sequence analysis. Here we introduce SECLAF, an
artificial neural-net based biological sequence classifier framework, which
uses the TensorFlow library of Google, Inc. By applying SECLAF for
residue-sequences, we have reported (Methods (2017),
https://doi.org/10.1016/j.ymeth.2017.06.034) the most accurate multi-label
protein classifier to date (UniProt --into 698 classes-- AUC 99.99\%; Gene
Ontology --into 983 classes-- AUC 99.45\%). Our framework SECLAF can be applied
for other sequence classification tasks, as we describe in the present
contribution.
Availability and implementation: The program SECLAF is implemented in Python,
and is available for download, with example datasets at the website
https://pitgroup.org/seclaf/. For Gene Ontology and UniProt based
classifications a webserver is also available at the address above.
|
[
{
"created": "Mon, 14 Aug 2017 13:00:03 GMT",
"version": "v1"
}
] |
2017-08-15
|
[
[
"Szalkai",
"Balazs",
""
],
[
"Grolmusz",
"Vince",
""
]
] |
Artificial intelligence (AI) tools are gaining more and more ground each year in bioinformatics. Learning algorithms can be taught easily by using the existing enormous biological databases, and the resulting models can be used for the high-quality classification of novel, un-categorized data in numerous areas, including biological sequence analysis. Here we introduce SECLAF, an artificial neural-net based biological sequence classifier framework, which uses the TensorFlow library of Google, Inc. By applying SECLAF for residue-sequences, we have reported (Methods (2017), https://doi.org/10.1016/j.ymeth.2017.06.034) the most accurate multi-label protein classifier to date (UniProt --into 698 classes-- AUC 99.99\%; Gene Ontology --into 983 classes-- AUC 99.45\%). Our framework SECLAF can be applied for other sequence classification tasks, as we describe in the present contribution. Availability and implementation: The program SECLAF is implemented in Python, and is available for download, with example datasets at the website https://pitgroup.org/seclaf/. For Gene Ontology and UniProt based classifications a webserver is also available at the address above.
|
2201.00663
|
Quentin Thommen
|
Quentin Thommen, Julien Hurbain, and Benjamin Pfeuty
|
Stochastic simulation algorithm for isotope-based dynamic flux analysis
|
9 pages, 4 figures
| null | null | null |
q-bio.MN
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The carbon isotope labeling method is a standard metabolic engineering tool for
flux quantification in living cells. To cope with the high dimensionality of
isotope labeling systems, diverse algorithms have been developed to reduce the
number of variables or operations in metabolic flux analysis (MFA), but these
lack generalizability to non-stationary metabolic conditions. In this study, we
present a stochastic simulation algorithm (SSA) derived from the chemical
master equation of the isotope labeling system. This algorithm makes it
possible to compute the time evolution of isotopomer concentrations in
non-stationary conditions, with the valuable property that computational time
does not scale with the number of isotopomers. The efficiency and limitations
of the algorithm are benchmarked for the forward and inverse problems of
13C-DMFA in the pentose phosphate pathways. Overall, SSAs constitute an
alternative class to deterministic approaches for metabolic flux analysis that
is well adapted to comprehensive datasets including parallel labeling
experiments, and whose limitations associated with the sampling size can be
overcome by using Monte Carlo sampling approaches.
|
[
{
"created": "Mon, 3 Jan 2022 13:56:54 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Feb 2022 09:40:25 GMT",
"version": "v2"
}
] |
2022-02-11
|
[
[
"Thommen",
"Quentin",
""
],
[
"Hurbain",
"Julien",
""
],
[
"Pfeuty",
"Benjamin",
""
]
] |
The carbon isotope labeling method is a standard metabolic engineering tool for flux quantification in living cells. To cope with the high dimensionality of isotope labeling systems, diverse algorithms have been developed to reduce the number of variables or operations in metabolic flux analysis (MFA), but these lack generalizability to non-stationary metabolic conditions. In this study, we present a stochastic simulation algorithm (SSA) derived from the chemical master equation of the isotope labeling system. This algorithm makes it possible to compute the time evolution of isotopomer concentrations in non-stationary conditions, with the valuable property that computational time does not scale with the number of isotopomers. The efficiency and limitations of the algorithm are benchmarked for the forward and inverse problems of 13C-DMFA in the pentose phosphate pathways. Overall, SSAs constitute an alternative class to deterministic approaches for metabolic flux analysis that is well adapted to comprehensive datasets including parallel labeling experiments, and whose limitations associated with the sampling size can be overcome by using Monte Carlo sampling approaches.
|
1903.00197
|
Eryu Xia
|
Eryu Xia, Xin Du, Jing Mei, Wen Sun, Suijun Tong, Zhiqing Kang, Jian
Sheng, Jian Li, Changsheng Ma, Jianzeng Dong, Shaochun Li
|
Outcome-Driven Clustering of Acute Coronary Syndrome Patients using
Multi-Task Neural Network with Attention
| null | null | null | null |
q-bio.QM cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cluster analysis aims at separating patients into phenotypically heterogeneous
groups and defining therapeutically homogeneous patient subclasses. It is an
important approach in data-driven disease classification and subtyping. Acute
coronary syndrome (ACS) is a syndrome caused by a sudden decrease in coronary
artery blood flow, for which disease classification would help to inform
therapeutic strategies and provide prognostic insights. Here we conducted an
outcome-driven cluster analysis of ACS patients, which jointly considers
treatment and patient outcome as indicators of patient state. A multi-task
neural network with attention was used as the modeling framework, including
learning of the patient state, cluster analysis, and feature importance
profiling. Seven patient clusters were discovered. The clusters have different
characteristics, as well as different risk profiles for the outcome of
in-hospital major adverse cardiac events. The results demonstrate that cluster
analysis using an outcome-driven multi-task neural network is promising for
patient classification and subtyping.
|
[
{
"created": "Fri, 1 Mar 2019 08:20:28 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Mar 2019 07:08:04 GMT",
"version": "v2"
}
] |
2019-03-28
|
[
[
"Xia",
"Eryu",
""
],
[
"Du",
"Xin",
""
],
[
"Mei",
"Jing",
""
],
[
"Sun",
"Wen",
""
],
[
"Tong",
"Suijun",
""
],
[
"Kang",
"Zhiqing",
""
],
[
"Sheng",
"Jian",
""
],
[
"Li",
"Jian",
""
],
[
"Ma",
"Changsheng",
""
],
[
"Dong",
"Jianzeng",
""
],
[
"Li",
"Shaochun",
""
]
] |
Cluster analysis aims at separating patients into phenotypically heterogeneous groups and defining therapeutically homogeneous patient subclasses. It is an important approach in data-driven disease classification and subtyping. Acute coronary syndrome (ACS) is a syndrome caused by a sudden decrease in coronary artery blood flow, for which disease classification would help to inform therapeutic strategies and provide prognostic insights. Here we conducted an outcome-driven cluster analysis of ACS patients, which jointly considers treatment and patient outcome as indicators of patient state. A multi-task neural network with attention was used as the modeling framework, including learning of the patient state, cluster analysis, and feature importance profiling. Seven patient clusters were discovered. The clusters have different characteristics, as well as different risk profiles for the outcome of in-hospital major adverse cardiac events. The results demonstrate that cluster analysis using an outcome-driven multi-task neural network is promising for patient classification and subtyping.
|
2003.06349
|
Georgy Karev
|
Georgiy Karev
|
Dynamics of Strategy Distribution in a One-Dimensional Continuous Trait
Space with a Bi-linear and Quadratic Payoff Functions
|
28 pages,13 Figures; it is an extended version of the paper published
in "Games", 2020
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The evolution of strategy distributions in game theory is an interesting
question that has been studied only for specific cases. Here I develop a
general method to extend the analysis of the evolution of continuous strategy
distributions, given bi-linear and quadratic payoff functions, to any initial
distribution, in order to answer the following question: given the initial
distribution of strategies in a game, how will it evolve over time? I look at
several specific examples, including the normal distribution on the entire
line, the truncated normal distribution, as well as the exponential, uniform
and Gamma distributions. I show that the class of exponential distributions is
invariant with respect to replicator dynamics in games with bi-linear payoff
functions. I also show that the class of normal distributions is invariant with
respect to replicator dynamics in games with quadratic payoff functions. The
developed method can now be applied to a broad class of questions pertaining to
the evolution of strategies in games with different payoff functions and
different initial distributions.
|
[
{
"created": "Fri, 13 Mar 2020 15:46:40 GMT",
"version": "v1"
}
] |
2020-03-16
|
[
[
"Karev",
"Georgiy",
""
]
] |
The evolution of strategy distributions in game theory is an interesting question that has been studied only for specific cases. Here I develop a general method to extend the analysis of the evolution of continuous strategy distributions, given bi-linear and quadratic payoff functions, to any initial distribution, in order to answer the following question: given the initial distribution of strategies in a game, how will it evolve over time? I look at several specific examples, including the normal distribution on the entire line, the truncated normal distribution, as well as the exponential, uniform and Gamma distributions. I show that the class of exponential distributions is invariant with respect to replicator dynamics in games with bi-linear payoff functions. I also show that the class of normal distributions is invariant with respect to replicator dynamics in games with quadratic payoff functions. The developed method can now be applied to a broad class of questions pertaining to the evolution of strategies in games with different payoff functions and different initial distributions.
|
2209.02365
|
Alan Smeaton
|
Victoria Rhodes and Maureen Maguire and Meghana Shetty and Conor
McAloon and Alan F. Smeaton
|
Periodicity Intensity of the 24 h Circadian Rhythm in Newborn Calves
Show Indicators of Herd Welfare
| null |
Sensors 2022, 22(15), 5843
|
10.3390/s22155843
| null |
q-bio.QM stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Circadian rhythms are a process of the sleep-wake cycle that regulates the
physical, mental and behavioural changes in all living beings with a period of
roughly 24 h. Wearable accelerometers are typically used in livestock
applications to record animal movement from which we can estimate the activity
type. Here, we use the overall movement recorded by accelerometers worn on the
necks of newborn calves for a period of 8 weeks. From the movement data, we
calculate 24 h periodicity intensities corresponding to circadian rhythms, from
a 7-day window that slides through up to 8 weeks of data logging. The strength
or intensity of the 24 h periodicity is computed at intervals as the calves
become older, which is an indicator of individual calf welfare. We observe that
the intensities of these 24 h periodicities for individual calves, derived from
movement data, increase and decrease synchronously in a herd of 19 calves. Our
results show that external factors affecting the welfare of the herd can be
observed by processing and visualising movement data in this way and our method
reveals insights that are not observable from movement data alone.
|
[
{
"created": "Mon, 8 Aug 2022 08:52:20 GMT",
"version": "v1"
}
] |
2022-09-07
|
[
[
"Rhodes",
"Victoria",
""
],
[
"Maguire",
"Maureen",
""
],
[
"Shetty",
"Meghana",
""
],
[
"McAloon",
"Conor",
""
],
[
"Smeaton",
"Alan F.",
""
]
] |
Circadian rhythms are a process of the sleep-wake cycle that regulates the physical, mental and behavioural changes in all living beings with a period of roughly 24 h. Wearable accelerometers are typically used in livestock applications to record animal movement from which we can estimate the activity type. Here, we use the overall movement recorded by accelerometers worn on the necks of newborn calves for a period of 8 weeks. From the movement data, we calculate 24 h periodicity intensities corresponding to circadian rhythms, from a 7-day window that slides through up to 8 weeks of data logging. The strength or intensity of the 24 h periodicity is computed at intervals as the calves become older, which is an indicator of individual calf welfare. We observe that the intensities of these 24 h periodicities for individual calves, derived from movement data, increase and decrease synchronously in a herd of 19 calves. Our results show that external factors affecting the welfare of the herd can be observed by processing and visualising movement data in this way and our method reveals insights that are not observable from movement data alone.
|
2407.00002
|
Peter M{\o}rch Groth
|
Peter M{\o}rch Groth and Mads Herbert Kerrn and Lars Olsen and Jesper
Salomon and Wouter Boomsma
|
Kermut: Composite kernel regression for protein variant effects
|
10 pages (36 in total with appendix), 4 figures (26 figures in total
with appendix)
| null | null | null |
q-bio.BM cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Reliable prediction of protein variant effects is crucial for both protein
optimization and for advancing biological understanding. For practical use in
protein engineering, it is important that we can also provide reliable
uncertainty estimates for our predictions, and while prediction accuracy has
seen much progress in recent years, uncertainty metrics are rarely reported. We
here provide a Gaussian process regression model, Kermut, with a novel
composite kernel for modelling mutation similarity, which obtains
state-of-the-art performance for protein variant effect prediction while also
offering estimates of uncertainty through its posterior. An analysis of the
quality of the uncertainty estimates demonstrates that our model provides
meaningful levels of overall calibration, but that instance-specific
uncertainty calibration remains more challenging. We hope that this will
encourage future work in this promising direction.
|
[
{
"created": "Tue, 9 Apr 2024 14:08:06 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jul 2024 08:28:57 GMT",
"version": "v2"
}
] |
2024-07-10
|
[
[
"Groth",
"Peter Mørch",
""
],
[
"Kerrn",
"Mads Herbert",
""
],
[
"Olsen",
"Lars",
""
],
[
"Salomon",
"Jesper",
""
],
[
"Boomsma",
"Wouter",
""
]
] |
Reliable prediction of protein variant effects is crucial for both protein optimization and for advancing biological understanding. For practical use in protein engineering, it is important that we can also provide reliable uncertainty estimates for our predictions, and while prediction accuracy has seen much progress in recent years, uncertainty metrics are rarely reported. We here provide a Gaussian process regression model, Kermut, with a novel composite kernel for modelling mutation similarity, which obtains state-of-the-art performance for protein variant effect prediction while also offering estimates of uncertainty through its posterior. An analysis of the quality of the uncertainty estimates demonstrates that our model provides meaningful levels of overall calibration, but that instance-specific uncertainty calibration remains more challenging. We hope that this will encourage future work in this promising direction.
|
1610.04579
|
Andrew Leifer
|
Jeffrey P. Nguyen, Ashley N. Linder, George S. Plummer, Joshua W.
Shaevitz, and Andrew M. Leifer
|
Automatically tracking neurons in a moving and deforming brain
|
33 pages, 7 figures, code available
|
PLoS Comput Biol 13(5): e1005517 (2017)
|
10.1371/journal.pcbi.1005517
| null |
q-bio.NC cs.CV physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in optical neuroimaging techniques now allow neural activity to be
recorded with cellular resolution in awake and behaving animals. Brain motion
in these recordings poses a unique challenge. The location of individual neurons
must be tracked in 3D over time to accurately extract single neuron activity
traces. Recordings from small invertebrates like C. elegans are especially
challenging because they undergo very large brain motion and deformation during
animal movement. Here we present an automated computer vision pipeline to
reliably track populations of neurons with single neuron resolution in the
brain of a freely moving C. elegans undergoing large motion and deformation. 3D
volumetric fluorescent images of the animal's brain are straightened, aligned
and registered, and the locations of neurons in the images are found via
segmentation. Each neuron is then assigned an identity using a new
time-independent machine-learning approach we call Neuron Registration Vector
Encoding. In this approach, non-rigid point-set registration is used to match
each segmented neuron in each volume with a set of reference volumes taken from
throughout the recording. The way each neuron matches with the references
defines a feature vector which is clustered to assign an identity to each
neuron in each volume. Finally, thin-plate spline interpolation is used to
correct errors in segmentation and check consistency of assigned identities.
The Neuron Registration Vector Encoding approach proposed here is uniquely well
suited for tracking neurons in brains undergoing large deformations. When
applied to whole-brain calcium imaging recordings in freely moving C. elegans,
this analysis pipeline located 150 neurons for the duration of an 8-minute
recording and consistently found more neurons more quickly than manual or
semi-automated approaches.
|
[
{
"created": "Fri, 14 Oct 2016 18:51:30 GMT",
"version": "v1"
}
] |
2017-05-22
|
[
[
"Nguyen",
"Jeffrey P.",
""
],
[
"Linder",
"Ashley N.",
""
],
[
"Plummer",
"George S.",
""
],
[
"Shaevitz",
"Joshua W.",
""
],
[
"Leifer",
"Andrew M.",
""
]
] |
Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge. The location of individual neurons must be tracked in 3D over time to accurately extract single neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal's brain are straightened, aligned and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording. The way each neuron matches with the references defines a feature vector which is clustered to assign an identity to each neuron in each volume. Finally, thin-plate spline interpolation is used to correct errors in segmentation and check consistency of assigned identities. The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations. When applied to whole-brain calcium imaging recordings in freely moving C. elegans, this analysis pipeline located 150 neurons for the duration of an 8-minute recording and consistently found more neurons more quickly than manual or semi-automated approaches.
|
1407.2548
|
Katja Reichel
|
Katja Reichel, Valentin Bahier, C\'edric Midoux, Jean-Pierre Masson,
Solenn Stoeckel
|
Interpretation and approximation tools for big, dense Markov chain
transition matrices in ecology and evolution
|
8 pages, 4 figures, supplement: 2 figures, visual abstract,
highlights, source code
| null | null | null |
q-bio.QM q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Markov chains are a common framework for individual-based state and time
discrete models in ecology and evolution. Their use, however, is largely
limited to systems with a low number of states, since the transition matrices
involved pose considerable challenges as their size and their density increase.
Big, dense transition matrices may easily defy both the computer's memory and
the scientists' ability to interpret them, due to the very high amount of
information they contain; yet approximations using other types of models are
not always the best solution.
We propose a set of methods to overcome the difficulties associated with big,
dense Markov chain transition matrices. Using a population genetic model as an
example, we demonstrate how big matrices can be transformed into clear and
easily interpretable graphs with the help of network analysis. Moreover, we
describe an algorithm to save computer memory by substituting the original
matrix with a sparse approximation while preserving all its mathematically
important properties. In the same model example, we manage to store about 90%
less data while keeping more than 99% of the information contained in the
matrix and a closely corresponding dominant eigenvector.
Our approach is an example of how numerical limitations on the number of states
in a Markov chain can be overcome. By facilitating the use of state-rich Markov
chain models, these methods may become a valuable supplement to the diversity
of models currently employed in biology.
|
[
{
"created": "Wed, 9 Jul 2014 16:12:39 GMT",
"version": "v1"
}
] |
2014-07-10
|
[
[
"Reichel",
"Katja",
""
],
[
"Bahier",
"Valentin",
""
],
[
"Midoux",
"Cédric",
""
],
[
"Masson",
"Jean-Pierre",
""
],
[
"Stoeckel",
"Solenn",
""
]
] |
Markov chains are a common framework for individual-based state and time discrete models in ecology and evolution. Their use, however, is largely limited to systems with a low number of states, since the transition matrices involved pose considerable challenges as their size and their density increase. Big, dense transition matrices may easily defy both the computer's memory and the scientists' ability to interpret them, due to the very high amount of information they contain; yet approximations using other types of models are not always the best solution. We propose a set of methods to overcome the difficulties associated with big, dense Markov chain transition matrices. Using a population genetic model as an example, we demonstrate how big matrices can be transformed into clear and easily interpretable graphs with the help of network analysis. Moreover, we describe an algorithm to save computer memory by substituting the original matrix with a sparse approximation while preserving all its mathematically important properties. In the same model example, we manage to store about 90% less data while keeping more than 99% of the information contained in the matrix and a closely corresponding dominant eigenvector. Our approach is an example of how numerical limitations on the number of states in a Markov chain can be overcome. By facilitating the use of state-rich Markov chain models, these methods may become a valuable supplement to the diversity of models currently employed in biology.
|
1201.3022
|
Sachin Talathi
|
Roxana A. Stephanescu, R.G. Shivakeshavan, Sachin S. Talathi
|
Computational Models For Epilepsy
|
4 figures
|
Seizure: European Journal of Epilepsy, 2012 Dec;21(10):748-59
|
10.1016/j.seizure.2012.08.012
| null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Epilepsy is a neurological disease characterized by recurrent and spontaneous
seizures. It affects approximately 50 million people worldwide. In the majority
of cases, an accurate diagnosis of the disease can be made without using any
technologically advanced techniques, and seizures are controlled using standard
treatment in the form of regular use of anti-epileptic drugs. However,
approximately 30% of the patients suffer from medically refractory epilepsy,
wherein seizures are not controlled by the use of anti-epileptic drugs.
Understanding the mechanisms underlying these forms of drug resistant epileptic
seizures and the development of alternative effective treatment strategies is a
fundamental challenge in modern epilepsy research. In this context, the need
for integrative approaches combining various modalities of treatment strategies
is high. Computational modeling has gained prominence in recent years as an
important tool for tackling the complexity of the epileptic phenomenon. In this
review article we present a survey of different computational models for
epilepsy and discuss how computer models can aid in our understanding of brain
mechanisms in epilepsy and the development of new epilepsy treatment protocols.
|
[
{
"created": "Sat, 14 Jan 2012 14:49:09 GMT",
"version": "v1"
}
] |
2013-04-05
|
[
[
"Stephanescu",
"Roxana A.",
""
],
[
"Shivakeshavan",
"R. G.",
""
],
[
"Talathi",
"Sachin S.",
""
]
] |
Epilepsy is a neurological disease characterized by recurrent and spontaneous seizures. It affects approximately 50 million people worldwide. In the majority of cases, an accurate diagnosis of the disease can be made without using any technologically advanced techniques, and seizures are controlled using standard treatment in the form of regular use of anti-epileptic drugs. However, approximately 30% of the patients suffer from medically refractory epilepsy, wherein seizures are not controlled by the use of anti-epileptic drugs. Understanding the mechanisms underlying these forms of drug resistant epileptic seizures and the development of alternative effective treatment strategies is a fundamental challenge in modern epilepsy research. In this context, the need for integrative approaches combining various modalities of treatment strategies is high. Computational modeling has gained prominence in recent years as an important tool for tackling the complexity of the epileptic phenomenon. In this review article we present a survey of different computational models for epilepsy and discuss how computer models can aid in our understanding of brain mechanisms in epilepsy and the development of new epilepsy treatment protocols.
|
1510.00198
|
Charles Fisher
|
Charles K. Fisher, Thierry Mora, and Aleksandra M. Walczak
|
Habitat Fluctuations Drive Species Covariation in the Human Microbiota
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two species with similar resource requirements respond in a characteristic
way to variations in their habitat -- their abundances rise and fall in
concert. We use this idea to learn how bacterial populations in the microbiota
respond to habitat conditions that vary from person-to-person across the human
population. Our mathematical framework shows that habitat fluctuations are
sufficient for explaining intra-bodysite correlations in relative species
abundances from the Human Microbiome Project. We explicitly show that the
relative abundances of phylogenetically related species are positively
correlated and can be predicted from taxonomic relationships. We identify a
small set of functional pathways related to metabolism and maintenance of the
cell wall that form the basis of a common resource sharing niche space of the
human microbiota.
|
[
{
"created": "Thu, 1 Oct 2015 12:12:00 GMT",
"version": "v1"
}
] |
2015-10-02
|
[
[
"Fisher",
"Charles K.",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
]
] |
Two species with similar resource requirements respond in a characteristic way to variations in their habitat -- their abundances rise and fall in concert. We use this idea to learn how bacterial populations in the microbiota respond to habitat conditions that vary from person-to-person across the human population. Our mathematical framework shows that habitat fluctuations are sufficient for explaining intra-bodysite correlations in relative species abundances from the Human Microbiome Project. We explicitly show that the relative abundances of phylogenetically related species are positively correlated and can be predicted from taxonomic relationships. We identify a small set of functional pathways related to metabolism and maintenance of the cell wall that form the basis of a common resource sharing niche space of the human microbiota.
|
q-bio/0505005
|
Paolo Provero
|
Davide Cora', Carl Herrmann, Christoph Dieterich, Ferdinando Di Cunto,
Paolo Provero and Michele Caselle
|
Ab initio identification of putative human transcription factor binding
sites by comparative genomics
|
22 pages, 2 figures. Supplementary material available from the
authors
|
BMC Bioinformatics 2005, 6:110
| null | null |
q-bio.GN
| null |
We discuss a simple and powerful approach for the ab initio identification of
cis-regulatory motifs involved in transcriptional regulation. The method we
present integrates several elements: human-mouse comparison, statistical
analysis of genomic sequences and the concept of coregulation. We apply it to a
complete scan of the human genome. By using the catalogue of conserved upstream
sequences collected in the CORG database we construct sets of genes sharing the
same overrepresented motif (short DNA sequence) in their upstream regions both
in human and in mouse. We perform this construction for all possible motifs
from 5 to 8 nucleotides in length and then filter the resulting sets looking
for two types of evidence of coregulation: first, we analyze the Gene Ontology
annotation of the genes in the set, searching for statistically significant
common annotations; second, we analyze the expression profiles of the genes in
the set as measured by microarray experiments, searching for evidence of
coexpression. The sets which pass one or both filters are conjectured to
contain a significant fraction of coregulated genes, and the upstream motifs
characterizing the sets are thus good candidates to be the binding sites of the
TFs involved in such regulation. In this way we find various known motifs and
also some new candidate binding sites.
|
[
{
"created": "Tue, 3 May 2005 10:37:54 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Cora'",
"Davide",
""
],
[
"Herrmann",
"Carl",
""
],
[
"Dieterich",
"Christoph",
""
],
[
"Di Cunto",
"Ferdinando",
""
],
[
"Provero",
"Paolo",
""
],
[
"Caselle",
"Michele",
""
]
] |
We discuss a simple and powerful approach for the ab initio identification of cis-regulatory motifs involved in transcriptional regulation. The method we present integrates several elements: human-mouse comparison, statistical analysis of genomic sequences and the concept of coregulation. We apply it to a complete scan of the human genome. By using the catalogue of conserved upstream sequences collected in the CORG database we construct sets of genes sharing the same overrepresented motif (short DNA sequence) in their upstream regions both in human and in mouse. We perform this construction for all possible motifs from 5 to 8 nucleotides in length and then filter the resulting sets looking for two types of evidence of coregulation: first, we analyze the Gene Ontology annotation of the genes in the set, searching for statistically significant common annotations; second, we analyze the expression profiles of the genes in the set as measured by microarray experiments, searching for evidence of coexpression. The sets which pass one or both filters are conjectured to contain a significant fraction of coregulated genes, and the upstream motifs characterizing the sets are thus good candidates to be the binding sites of the TFs involved in such regulation. In this way we find various known motifs and also some new candidate binding sites.
|
1511.04944
|
Sunyoung Kwon
|
Sunyoung Kwon, Gyuwan Kim, Byunghan Lee, Jongsik Chun, Sungroh Yoon,
and Young-Han Kim
|
NASCUP: Nucleic Acid Sequence Classification by Universal Probability
| null | null | null | null |
q-bio.GN cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the need for fast and accurate classification of unlabeled
nucleotide sequences on a large scale, we developed NASCUP, a new
classification method that captures statistical structures of nucleotide
sequences by compact context-tree models and universal probability from
information theory. NASCUP consistently achieved BLAST-like classification
accuracy for several large-scale databases with orders-of-magnitude reduced
runtime, and was applied to other bioinformatics tasks such as outlier
detection and synthetic sequence generation.
|
[
{
"created": "Mon, 16 Nov 2015 13:04:03 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Nov 2018 14:49:23 GMT",
"version": "v2"
}
] |
2018-11-30
|
[
[
"Kwon",
"Sunyoung",
""
],
[
"Kim",
"Gyuwan",
""
],
[
"Lee",
"Byunghan",
""
],
[
"Chun",
"Jongsik",
""
],
[
"Yoon",
"Sungroh",
""
],
[
"Kim",
"Young-Han",
""
]
] |
Motivated by the need for fast and accurate classification of unlabeled nucleotide sequences on a large scale, we developed NASCUP, a new classification method that captures statistical structures of nucleotide sequences by compact context-tree models and universal probability from information theory. NASCUP consistently achieved BLAST-like classification accuracy for several large-scale databases with orders-of-magnitude reduced runtime, and was applied to other bioinformatics tasks such as outlier detection and synthetic sequence generation.
|
1610.04224
|
Indika Rajapakse
|
Indika Rajapakse, Steve Smale
|
Emergence of Function
| null | null | null | null |
q-bio.OT math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work gives a mathematical study of tissue dynamics. We combine
within-cell genome dynamics and diffusion between cells, where the synthesis of
the two gives rise to the emergence of function. We introduce a concept of
monotonicity and prove that monotonicity together with hardwiring, defined as
all cells of the same tissue having the same genome dynamics, is sufficient for
the global convergence of the tissue dynamics.
|
[
{
"created": "Wed, 12 Oct 2016 23:56:21 GMT",
"version": "v1"
}
] |
2016-10-17
|
[
[
"Rajapakse",
"Indika",
""
],
[
"Smale",
"Steve",
""
]
] |
This work gives a mathematical study of tissue dynamics. We combine within-cell genome dynamics and diffusion between cells, where the synthesis of the two gives rise to the emergence of function. We introduce a concept of monotonicity and prove that monotonicity together with hardwiring, defined as all cells of the same tissue having the same genome dynamics, is sufficient for the global convergence of the tissue dynamics.
|
0905.2932
|
Thomas Butler
|
Thomas Butler and Nigel Goldenfeld
|
Optimality Properties of a Proposed Precursor to the Genetic Code
| null | null |
10.1103/PhysRevE.80.032901
| null |
q-bio.OT q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We calculate the optimality of a doublet precursor to the canonical genetic
code with respect to mitigating the effects of point mutations and compare our
results to corresponding ones for the canonical genetic code. We find that the
proposed precursor has much less optimality than that of the canonical code.
Our results render unlikely the notion that the doublet precursor was an
intermediate state in the evolution of the canonical genetic code. These
findings support the notion that code optimality reflects evolutionary
dynamics, and that if such a doublet code originally had a biochemical
significance, it arose before the emergence of translation.
|
[
{
"created": "Mon, 18 May 2009 16:32:10 GMT",
"version": "v1"
}
] |
2013-05-29
|
[
[
"Butler",
"Thomas",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] |
We calculate the optimality of a doublet precursor to the canonical genetic code with respect to mitigating the effects of point mutations and compare our results to corresponding ones for the canonical genetic code. We find that the proposed precursor has much less optimality than that of the canonical code. Our results render unlikely the notion that the doublet precursor was an intermediate state in the evolution of the canonical genetic code. These findings support the notion that code optimality reflects evolutionary dynamics, and that if such a doublet code originally had a biochemical significance, it arose before the emergence of translation.
|
0810.1833
|
Jose Emilio Jimenez
|
Stephen A. Wells, J. Emilio Jimenez-Roldan, Rudolf A. R\"omer
|
Sensitivity of protein rigidity analysis to small structural variations:
a large-scale comparative analysis
|
10 Figures, 3 Tables, 16 pages, submitted to Physical Biology
|
Physical Biology 6, 046005-11 (2009)
|
10.1088/1478-3975/6/4/046005
| null |
q-bio.MN q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rigidity analysis using the "pebble game" can usefully be applied to protein
crystal structures to obtain information on protein folding, assembly and the
structure-function relationship. However, previous work using this technique
has not made clear how sensitive rigidity analysis is to small structural
variations. We present a comparative study in which rigidity analysis is
applied to multiple structures, derived from different organisms and different
conditions of crystallisation, for each of several different proteins. We find
that rigidity analysis is best used as a comparative tool to highlight the
effects of structural variation. Our use of multiple protein structures brings
out a previously unnoticed peculiarity in the rigidity of trypsin.
|
[
{
"created": "Fri, 10 Oct 2008 10:02:54 GMT",
"version": "v1"
}
] |
2009-09-25
|
[
[
"Wells",
"Stephen A.",
""
],
[
"Jimenez-Roldan",
"J. Emilio",
""
],
[
"Römer",
"Rudolf A.",
""
]
] |
Rigidity analysis using the "pebble game" can usefully be applied to protein crystal structures to obtain information on protein folding, assembly and the structure-function relationship. However, previous work using this technique has not made clear how sensitive rigidity analysis is to small structural variations. We present a comparative study in which rigidity analysis is applied to multiple structures, derived from different organisms and different conditions of crystallisation, for each of several different proteins. We find that rigidity analysis is best used as a comparative tool to highlight the effects of structural variation. Our use of multiple protein structures brings out a previously unnoticed peculiarity in the rigidity of trypsin.
|
2210.04111
|
Michael Baker Ph.D.
|
Yoshinao Katsu (Hokkaido University), Xiaozhi Lin (Hokkaido
University), Ruigeng Ji (Hokkaido University), Ze Chen (Hokkaido University),
Yui Kamisaka (Hokkaido University), Koto Bamba (Hokkaido University), Michael
E. Baker (University of California, San Diego)
|
Corticosteroid Activation of Atlantic Sea Lamprey Corticoid Receptor:
Allosteric Regulation by the N-terminal Domain
|
27 pages, 6 figures
| null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Lampreys are jawless fish that evolved about 550 million years ago at the
base of the vertebrate line. Modern lampreys contain a corticoid receptor (CR),
the common ancestor of the glucocorticoid receptor (GR) and mineralocorticoid
receptor (MR), which first appear in cartilaginous fish, such as sharks. Until
recently, 344 amino acids at the amino terminus of adult lamprey CR were not
present in the lamprey CR sequence in GenBank. A search of the recently
sequenced lamprey germline genome identified two CR sequences, CR1 and CR2,
containing the 344 previously unidentified amino acids at the amino terminus.
CR1 also contains a novel four amino acid insertion in the DNA-binding domain
(DBD). We studied corticosteroid activation of CR1 and CR2 and found their
strongest response was to 11-deoxycorticosterone and 11-deoxycortisol, the two
circulating corticosteroids in lamprey. Based on steroid specificity, both CRs
are close to elephant shark MR and distant from elephant shark GR. HEK293 cells
transfected with full-length CR1 or CR2 and the MMTV promoter have about 3-fold
higher steroid-mediated activation compared to HEK293 cells transfected with
these CRs and the TAT3 promoter. Deletion of the amino-terminal domain (NTD) of
lamprey CR1 and CR2 to form truncated CRs decreased transcriptional activation
by about 70% in HEK293 cells transfected with MMTV, but increased transcription
by about 6-fold in cells transfected with TAT3, indicating that the promoter
has an important effect on NTD regulation of CR transcription by
corticosteroids.
|
[
{
"created": "Sat, 8 Oct 2022 21:54:36 GMT",
"version": "v1"
}
] |
2022-10-11
|
[
[
"Katsu",
"Yoshinao",
"",
"Hokkaido University"
],
[
"Lin",
"Xiaozhi",
"",
      "Hokkaido University"
],
[
"Ji",
"Ruigeng",
"",
"Hokkaido University"
],
[
"Chen",
"Ze",
"",
"Hokkaido University"
],
[
"Kamisaka",
"Yui",
"",
"Hokkaido University"
],
[
"Bamba",
"Koto",
"",
"Hokkaido University"
],
[
"Baker",
"Michael E.",
"",
"University of California, San Diego"
]
] |
Lampreys are jawless fish that evolved about 550 million years ago at the base of the vertebrate line. Modern lampreys contain a corticoid receptor (CR), the common ancestor of the glucocorticoid receptor (GR) and mineralocorticoid receptor (MR), which first appear in cartilaginous fish, such as sharks. Until recently, 344 amino acids at the amino terminus of adult lamprey CR were not present in the lamprey CR sequence in GenBank. A search of the recently sequenced lamprey germline genome identified two CR sequences, CR1 and CR2, containing the 344 previously unidentified amino acids at the amino terminus. CR1 also contains a novel four amino acid insertion in the DNA-binding domain (DBD). We studied corticosteroid activation of CR1 and CR2 and found their strongest response was to 11-deoxycorticosterone and 11-deoxycortisol, the two circulating corticosteroids in lamprey. Based on steroid specificity, both CRs are close to elephant shark MR and distant from elephant shark GR. HEK293 cells transfected with full-length CR1 or CR2 and the MMTV promoter have about 3-fold higher steroid-mediated activation compared to HEK293 cells transfected with these CRs and the TAT3 promoter. Deletion of the amino-terminal domain (NTD) of lamprey CR1 and CR2 to form truncated CRs decreased transcriptional activation by about 70% in HEK293 cells transfected with MMTV, but increased transcription by about 6-fold in cells transfected with TAT3, indicating that the promoter has an important effect on NTD regulation of CR transcription by corticosteroids.
|
1003.0836
|
Dominique Rousie DL
|
D. L. Rousie (1), J.P. Deroubaix (2), O. Joly (1), P. Salvetti (2), J.
Vasseur (2), A. Berthoz (1), ((1) Laboratoire de la perception et de l
action-College de France-Paris, (2) IFR 49 INSERM/CNRS-Orsay)
|
Semi-Circular Canals Anomalies//Idiopathic Scoliosis
|
15 pages, 8 figures. Name and address for correspondence: Docteur
Rousie, 3 rue Saint Louis, 59113 Seclin France. Fax: 0033 320 32 35 44, Tel.
0033 320 90 12 29, mrousie@nordnet.fr
| null | null | null |
q-bio.NC physics.med-ph q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thanks to a novel modelling programme to detect anomalies in the membranous
semi-circular canals (SCC) of idiopathic scoliosis (IS) patients, we found
severe anomalies mainly located in the lateral SCC, which is devoted to trunk
rotation and lateral deviations. We also found a specific communication
between the lateral and posterior canals involving the utricular chamber,
which is also highly suspected in scoliosis. Key points: - Membranous
semi-circular canal (SCC) modelling based on MRI revealed significant
anomalies in IS patients compared to normal subjects. - Frequent aplasias
located in the lateral canal were found in IS. - We also discovered a
never-described abnormal communication between the lateral and posterior
canals. - The lateral SCC is involved in trunk rotation and lateral
deviation; these movements are frequently abnormal in IS. Supports: Fondation
Yves Cotrel pour la recherche en pathologie rachidienne. Institut de France,
Paris. SHFJ/CEA Orsay in the frame of the cooperation through IFR 49
INSERM/CNRS France.
|
[
{
"created": "Wed, 3 Mar 2010 15:31:05 GMT",
"version": "v1"
}
] |
2016-09-08
|
[
[
"Rousie",
"D. L.",
""
],
[
"Deroubaix",
"J. P.",
""
],
[
"Joly",
"O.",
""
],
[
"Salvetti",
"P.",
""
],
[
"Vasseur",
"J.",
""
],
[
"Berthoz",
"A.",
""
]
] |
Thanks to a novel modelling programme to detect anomalies in the membranous semi-circular canals (SCC) of idiopathic scoliosis (IS) patients, we found severe anomalies mainly located in the lateral SCC, which is devoted to trunk rotation and lateral deviations. We also found a specific communication between the lateral and posterior canals involving the utricular chamber, which is also highly suspected in scoliosis. Key points: - Membranous semi-circular canal (SCC) modelling based on MRI revealed significant anomalies in IS patients compared to normal subjects. - Frequent aplasias located in the lateral canal were found in IS. - We also discovered a never-described abnormal communication between the lateral and posterior canals. - The lateral SCC is involved in trunk rotation and lateral deviation; these movements are frequently abnormal in IS. Supports: Fondation Yves Cotrel pour la recherche en pathologie rachidienne. Institut de France, Paris. SHFJ/CEA Orsay in the frame of the cooperation through IFR 49 INSERM/CNRS France.
|
2311.17756
|
Serge Bouaziz
|
Steven R. Laplante (INRS-IAF), Pascale Coric (UPCG), Serge Bouaziz
(CiTCoM), Tanos Celmar Costa Fran\c{c}a (INRS-IAF)
|
NMR Spectroscopy Can Help Accelerate Antiviral Drug Discovery Programs
| null | null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Small molecule drugs have an important role to play in combating viral
infections, and biophysics support has been central for contributing to the
discovery and design of direct acting antivirals. Perhaps one of the most
successful biophysical tools for this purpose is NMR spectroscopy when utilized
strategically and pragmatically within team workflows and timelines. This
report describes some clear examples of how NMR applications contributed to the
design of antivirals when combined with medicinal chemistry, biochemistry,
X-ray crystallography and computational chemistry. Overall, these
multidisciplinary approaches allowed teams to reveal and expose compound
physical properties from which design ideas were spawned and tested to achieve
the desired successes. Examples are discussed for the discovery of antivirals
that target HCV, HIV and SARS-CoV-2.
|
[
{
"created": "Wed, 29 Nov 2023 15:58:25 GMT",
"version": "v1"
}
] |
2023-11-30
|
[
[
"Laplante",
"Steven R.",
"",
"INRS-IAF"
],
[
"Coric",
"Pascale",
"",
"UPCG"
],
[
"Bouaziz",
"Serge",
"",
"CiTCoM"
],
[
"França",
"Tanos Celmar Costa",
"",
"INRS-IAF"
]
] |
Small molecule drugs have an important role to play in combating viral infections, and biophysics support has been central for contributing to the discovery and design of direct acting antivirals. Perhaps one of the most successful biophysical tools for this purpose is NMR spectroscopy when utilized strategically and pragmatically within team workflows and timelines. This report describes some clear examples of how NMR applications contributed to the design of antivirals when combined with medicinal chemistry, biochemistry, X-ray crystallography and computational chemistry. Overall, these multidisciplinary approaches allowed teams to reveal and expose compound physical properties from which design ideas were spawned and tested to achieve the desired successes. Examples are discussed for the discovery of antivirals that target HCV, HIV and SARS-CoV-2.
|
2011.10955
|
Tyler Maltba
|
Tyler E. Maltba (1), Hongli Zhao (1), Daniel M. Tartakovsky (2) ((1)
UC Berkeley, (2) Stanford University)
|
Autonomous learning of nonlocal stochastic neuron dynamics
|
28 pages, 12 figures, First author: Tyler E. Maltba, Corresponding
author: Daniel M. Tartakovsky
|
Cogn Neurodyn 16, 683-705 (2022)
|
10.1007/s11571-021-09731-9
| null |
q-bio.NC cs.NA math.NA q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuronal dynamics is driven by externally imposed or internally generated
random excitations/noise, and is often described by systems of random or
stochastic ordinary differential equations. Such systems admit a distribution
of solutions, which is (partially) characterized by the single-time joint
probability density function (PDF) of system states. It can be used to
calculate such information-theoretic quantities as the mutual information
between the stochastic stimulus and various internal states of the neuron
(e.g., membrane potential), as well as various spiking statistics. When random
excitations are modeled as Gaussian white noise, the joint PDF of neuron states
satisfies exactly a Fokker-Planck equation. However, most biologically
plausible noise sources are correlated (colored). In this case, the resulting
PDF equations require a closure approximation. We propose two methods for
closing such equations: a modified nonlocal large-eddy-diffusivity closure and
a data-driven closure relying on sparse regression to learn relevant features.
The closures are tested for the stochastic non-spiking leaky integrate-and-fire
and FitzHugh-Nagumo (FHN) neurons driven by sine-Wiener noise. Mutual
information and total correlation between the random stimulus and the internal
states of the neuron are calculated for the FHN neuron.
|
[
{
"created": "Sun, 22 Nov 2020 06:47:18 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Sep 2021 19:13:34 GMT",
"version": "v2"
}
] |
2023-12-19
|
[
[
"Maltba",
"Tyler E.",
""
],
[
"Zhao",
"Hongli",
""
],
[
"Tartakovsky",
"Daniel M.",
""
]
] |
Neuronal dynamics is driven by externally imposed or internally generated random excitations/noise, and is often described by systems of random or stochastic ordinary differential equations. Such systems admit a distribution of solutions, which is (partially) characterized by the single-time joint probability density function (PDF) of system states. It can be used to calculate such information-theoretic quantities as the mutual information between the stochastic stimulus and various internal states of the neuron (e.g., membrane potential), as well as various spiking statistics. When random excitations are modeled as Gaussian white noise, the joint PDF of neuron states satisfies exactly a Fokker-Planck equation. However, most biologically plausible noise sources are correlated (colored). In this case, the resulting PDF equations require a closure approximation. We propose two methods for closing such equations: a modified nonlocal large-eddy-diffusivity closure and a data-driven closure relying on sparse regression to learn relevant features. The closures are tested for the stochastic non-spiking leaky integrate-and-fire and FitzHugh-Nagumo (FHN) neurons driven by sine-Wiener noise. Mutual information and total correlation between the random stimulus and the internal states of the neuron are calculated for the FHN neuron.
|
1010.5019
|
James P. Crutchfield
|
Steve T. Piantadosi and James P. Crutchfield
|
How the Dimension of Space Affects the Products of Pre-Biotic Evolution:
The Spatial Population Dynamics of Structural Complexity and The Emergence of
Membranes
|
9 pages, 7 figures, 2 tables;
http://cse.ucdavis.edu/~cmg/compmech/pubs/ss.htm
| null | null | null |
q-bio.PE nlin.AO q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that autocatalytic networks of epsilon-machines and their population
dynamics differ substantially between spatial (geographically distributed) and
nonspatial (panmixia) populations. Generally, regions of spacetime-invariant
autocatalytic networks---or domains---emerge in geographically distributed
populations. These are separated by functional membranes of complementary
epsilon-machines that actively translate between the domains and are
responsible for their growth and stability. We analyze both spatial and
nonspatial populations, determining the algebraic properties of the
autocatalytic networks that allow for space to affect the dynamics and so
generate autocatalytic domains and membranes. In addition, we analyze
populations of intermediate spatial architecture, delineating the thresholds at
which spatial memory (information storage) begins to determine the character of
the emergent autocatalytic organization.
|
[
{
"created": "Sun, 24 Oct 2010 22:43:32 GMT",
"version": "v1"
}
] |
2010-10-26
|
[
[
"Piantadosi",
"Steve T.",
""
],
[
"Crutchfield",
"James P.",
""
]
] |
We show that autocatalytic networks of epsilon-machines and their population dynamics differ substantially between spatial (geographically distributed) and nonspatial (panmixia) populations. Generally, regions of spacetime-invariant autocatalytic networks---or domains---emerge in geographically distributed populations. These are separated by functional membranes of complementary epsilon-machines that actively translate between the domains and are responsible for their growth and stability. We analyze both spatial and nonspatial populations, determining the algebraic properties of the autocatalytic networks that allow for space to affect the dynamics and so generate autocatalytic domains and membranes. In addition, we analyze populations of intermediate spatial architecture, delineating the thresholds at which spatial memory (information storage) begins to determine the character of the emergent autocatalytic organization.
|
2403.13005
|
Antonia Calvi
|
Antonia Calvi, Th\'eophile Gaudin, Dominik Miketa, Dominique Sydow,
Liam Wilbraham
|
Leap: molecular synthesisability scoring with intermediates
|
New Frontiers of AI for Drug Discovery and Development workshop paper
| null | null | null |
q-bio.BM cs.LG physics.chem-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Assessing whether a molecule can be synthesised is a primary task in drug
discovery. It enables computational chemists to filter for viable compounds or
bias molecular generative models. The notion of synthesisability is dynamic as
it evolves depending on the availability of key compounds. A common approach in
drug discovery involves exploring the chemical space surrounding
synthetically-accessible intermediates. This strategy improves the
synthesisability of the derived molecules due to the availability of key
intermediates. Existing synthesisability scoring methods such as SAScore,
SCScore and RAScore cannot condition on intermediates dynamically. Our
approach, Leap, is a GPT-2 model trained on the depth, or longest linear path,
of predicted synthesis routes that allows information on the availability of
key intermediates to be included at inference time. We show that Leap surpasses
all other scoring methods by at least 5% on AUC score when identifying
synthesisable molecules, and can successfully adapt predicted scores when
presented with a relevant intermediate compound.
|
[
{
"created": "Thu, 14 Mar 2024 11:53:35 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2024 16:26:04 GMT",
"version": "v2"
}
] |
2024-04-15
|
[
[
"Calvi",
"Antonia",
""
],
[
"Gaudin",
"Théophile",
""
],
[
"Miketa",
"Dominik",
""
],
[
"Sydow",
"Dominique",
""
],
[
"Wilbraham",
"Liam",
""
]
] |
Assessing whether a molecule can be synthesised is a primary task in drug discovery. It enables computational chemists to filter for viable compounds or bias molecular generative models. The notion of synthesisability is dynamic as it evolves depending on the availability of key compounds. A common approach in drug discovery involves exploring the chemical space surrounding synthetically-accessible intermediates. This strategy improves the synthesisability of the derived molecules due to the availability of key intermediates. Existing synthesisability scoring methods such as SAScore, SCScore and RAScore cannot condition on intermediates dynamically. Our approach, Leap, is a GPT-2 model trained on the depth, or longest linear path, of predicted synthesis routes that allows information on the availability of key intermediates to be included at inference time. We show that Leap surpasses all other scoring methods by at least 5% on AUC score when identifying synthesisable molecules, and can successfully adapt predicted scores when presented with a relevant intermediate compound.
|
2204.02130
|
Yikang Zhang
|
Yikang Zhang, Xiaomin Chu, Yelu Jiang, Hongjie Wu and Lijun Quan
|
SemanticCAP: Chromatin Accessibility Prediction Enhanced by Features
Learning from a Language Model
| null | null | null | null |
q-bio.GN cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A large number of inorganic and organic compounds are able to bind DNA and
form complexes, among which drug-related molecules are important. Chromatin
accessibility changes not only directly affect drug-DNA interactions, but
also promote or inhibit the expression of critical genes associated with drug
resistance by affecting the DNA-binding capacity of TFs and transcriptional
regulators. However, biological experimental techniques for measuring
chromatin accessibility are expensive and time-consuming. In recent years,
several kinds of computational methods have been proposed to identify
accessible regions of the genome, but existing computational models mostly
ignore the contextual information of bases in gene sequences. To address
these issues, we propose a new solution named SemanticCAP. It introduces a
gene language model that models the context of gene sequences and is thus
able to provide an effective representation of a given site in a gene
sequence. We merge the features provided by the gene language model into our
chromatin accessibility model, and during the process we designed some
methods to make feature fusion smoother. Compared with other systems under
public benchmarks, our model proved to have better performance.
|
[
{
"created": "Tue, 5 Apr 2022 11:47:58 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Apr 2022 10:25:29 GMT",
"version": "v2"
}
] |
2022-04-07
|
[
[
"Zhang",
"Yikang",
""
],
[
"Chu",
"Xiaomin",
""
],
[
"Jiang",
"Yelu",
""
],
[
"Wu",
"Hongjie",
""
],
[
"Quan",
"Lijun",
""
]
] |
A large number of inorganic and organic compounds are able to bind DNA and form complexes, among which drug-related molecules are important. Chromatin accessibility changes not only directly affect drug-DNA interactions, but also promote or inhibit the expression of critical genes associated with drug resistance by affecting the DNA-binding capacity of TFs and transcriptional regulators. However, biological experimental techniques for measuring chromatin accessibility are expensive and time-consuming. In recent years, several kinds of computational methods have been proposed to identify accessible regions of the genome, but existing computational models mostly ignore the contextual information of bases in gene sequences. To address these issues, we propose a new solution named SemanticCAP. It introduces a gene language model that models the context of gene sequences and is thus able to provide an effective representation of a given site in a gene sequence. We merge the features provided by the gene language model into our chromatin accessibility model, and during the process we designed some methods to make feature fusion smoother. Compared with other systems under public benchmarks, our model proved to have better performance.
|
q-bio/0502004
|
James P. Brody
|
Hermann B. Frieboes and James P. Brody
|
Telomere loss limits the rate of human epithelial tumor formation
|
15 pages, includes 7 figures
| null | null | null |
q-bio.TO
| null |
Most human carcinomas exhibit telomere abnormalities early in the
carcinogenesis process suggesting that crisis caused by telomere shortening may
be a necessary event leading to human carcinomas. Epidemiological records of
the age at which each patient in a population develops carcinoma are known as
age-incidence data; these provide a quantitative measure of human tumor
initiation and dynamics. If crisis brought on by telomere shortening is
necessary for most human carcinomas, it may also be the rate limiting step. To
test this, we compared a mathematical model in which telomere loss is the rate
limiting step during carcinogenesis with age-incidence data compiled by the
Surveillance, Epidemiology and End Results (SEER) program. We found that this
model adequately explains the age-incidence data. The model also implies that
two distinct paths exist for carcinoma to develop in prostate, breast, and
ovary tissues. We conclude that a single step, crisis brought on by telomere
shortening, limits the rate of formation of human carcinomas.
|
[
{
"created": "Sun, 6 Feb 2005 20:48:54 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Frieboes",
"Hermann B.",
""
],
[
"Brody",
"James P.",
""
]
] |
Most human carcinomas exhibit telomere abnormalities early in the carcinogenesis process suggesting that crisis caused by telomere shortening may be a necessary event leading to human carcinomas. Epidemiological records of the age at which each patient in a population develops carcinoma are known as age-incidence data; these provide a quantitative measure of human tumor initiation and dynamics. If crisis brought on by telomere shortening is necessary for most human carcinomas, it may also be the rate limiting step. To test this, we compared a mathematical model in which telomere loss is the rate limiting step during carcinogenesis with age-incidence data compiled by the Surveillance, Epidemiology and End Results (SEER) program. We found that this model adequately explains the age-incidence data. The model also implies that two distinct paths exist for carcinoma to develop in prostate, breast, and ovary tissues. We conclude that a single step, crisis brought on by telomere shortening, limits the rate of formation of human carcinomas.
|
q-bio/0507012
|
Johannes K\"astner
|
Johannes Kaestner (1 and 2), Sascha Hemmen (1), and Peter E. Bloechl
(1) ((1) Clausthal University of Technology, Germany, (2) Max-Planck-Institute
for Coal Research, Germany)
|
Activation and Protonation of Dinitrogen at the FeMo-Cofactor of
Nitrogenase
|
9 Pages, 6 figures, accepted in J. Chem. Phys. Typos corrected
according to the galley proofs of JCP
|
J. Chem. Phys. 123, 074306 (2005)
|
10.1063/1.2008227
| null |
q-bio.BM
| null |
The protonation of N2 bound to the active center of nitrogenase has been
investigated using state-of-the-art DFT calculations. Dinitrogen in the
bridging mode is activated by forming two bonds to Fe sites, which results in a
reduction of the energy for the first hydrogen transfer by 123 kJ/mol. The
axial binding mode with open sulfur bridge is less reactive by 30 kJ/mol and
the energetic ordering of the axial and bridged binding mode is reversed in
favor of the bridging dinitrogen during the first protonation. Protonation of
the central ligand is thermodynamically favorable but kinetically hindered. If
the central ligand is protonated, the proton is transferred to dinitrogen
following the second protonation. Protonation of dinitrogen at the Mo site does
not lead to low-energy intermediates.
|
[
{
"created": "Fri, 8 Jul 2005 10:07:44 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Aug 2005 07:27:00 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Kaestner",
"Johannes",
"",
"1 and 2"
],
[
"Hemmen",
"Sascha",
""
],
[
"Bloechl",
"Peter E.",
""
]
] |
The protonation of N2 bound to the active center of nitrogenase has been investigated using state-of-the-art DFT calculations. Dinitrogen in the bridging mode is activated by forming two bonds to Fe sites, which results in a reduction of the energy for the first hydrogen transfer by 123 kJ/mol. The axial binding mode with open sulfur bridge is less reactive by 30 kJ/mol and the energetic ordering of the axial and bridged binding mode is reversed in favor of the bridging dinitrogen during the first protonation. Protonation of the central ligand is thermodynamically favorable but kinetically hindered. If the central ligand is protonated, the proton is transferred to dinitrogen following the second protonation. Protonation of dinitrogen at the Mo site does not lead to low-energy intermediates.
|
2111.13992
|
Samson Petrosyan
|
Samson Petrosyan, Grigory Tikhomirov
|
NanoFrame: A web-based DNA wireframe design tool for 3D structures
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The rapid development of the DNA nanotechnology field has been facilitated by
advances in CAD software. However, as more complex concepts arose, a lag
between users' needs and software capabilities appeared. Further hampered by
manual installation, software incompatibility across different platforms, and
often tedious library-management issues, the software has become hard to use
for many.
Here we present NanoFrame, a web-based DNA wireframe design tool for making
3D nanostructures from a single scaffold. Within this software, we devised
algorithms for DNA routing, staple breaking, and wireframe cage opening which,
while modeled for a cuboid structure, can be generalized to a variety of Platonic
and Archimedean shapes. In addition, NanoFrame provides a platform for editing
auto-generated staple sequences and saving work online.
|
[
{
"created": "Sat, 27 Nov 2021 22:05:33 GMT",
"version": "v1"
}
] |
2021-11-30
|
[
[
"Petrosyan",
"Samson",
""
],
[
"Tikhomirov",
"Grigory",
""
]
] |
The rapid development of the DNA nanotechnology field has been facilitated by advances in CAD software. However, as more complex concepts arose, a lag between users' needs and software capabilities appeared. Further hampered by manual installation, software incompatibility across different platforms, and often tedious library-management issues, the software has become hard to use for many. Here we present NanoFrame, a web-based DNA wireframe design tool for making 3D nanostructures from a single scaffold. Within this software, we devised algorithms for DNA routing, staple breaking, and wireframe cage opening which, while modeled for a cuboid structure, can be generalized to a variety of Platonic and Archimedean shapes. In addition, NanoFrame provides a platform for editing auto-generated staple sequences and saving work online.
|
0806.3772
|
Eugene Shakhnovich
|
Konstantin B. Zeldovich and Eugene I. Shakhnovich
|
Emergence of mutationally robust proteins in a microscopic model of
evolution
| null | null | null | null |
q-bio.BM q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to absorb mutations while retaining structure and function, or
mutational robustness, is a remarkable property of natural proteins. In this
Letter, we use a computational model of organismic evolution [Zeldovich et al,
PLOS Comp Biol 3(7):e139 (2007)], which explicitly couples protein physics and
population dynamics, to study mutational robustness of evolved model proteins.
We find that dominant protein structures which evolved in the simulations are
highly designable ones, in accord with some of the earlier observations. Next,
we compare evolved sequences with the ones designed to fold into the same
dominant structures and having the same thermodynamic stability, and find that
evolved sequences are more robust against point mutations, being less likely to
be destabilized upon them. These results point to sequence evolution as an
important method of protein engineering if mutational robustness of the
artificially developed proteins is desired. On the biological side, mutational
robustness of proteins appears to be a natural consequence of the
mutation-selection evolutionary process.
|
[
{
"created": "Mon, 23 Jun 2008 20:59:00 GMT",
"version": "v1"
}
] |
2008-06-25
|
[
[
"Zeldovich",
"Konstantin B.",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] |
The ability to absorb mutations while retaining structure and function, or mutational robustness, is a remarkable property of natural proteins. In this Letter, we use a computational model of organismic evolution [Zeldovich et al, PLOS Comp Biol 3(7):e139 (2007)], which explicitly couples protein physics and population dynamics, to study mutational robustness of evolved model proteins. We find that dominant protein structures which evolved in the simulations are highly designable ones, in accord with some of the earlier observations. Next, we compare evolved sequences with the ones designed to fold into the same dominant structures and having the same thermodynamic stability, and find that evolved sequences are more robust against point mutations, being less likely to be destabilized upon them. These results point to sequence evolution as an important method of protein engineering if mutational robustness of the artificially developed proteins is desired. On the biological side, mutational robustness of proteins appears to be a natural consequence of the mutation-selection evolutionary process.
|
1912.09154
|
Eugene M. Terentjev
|
Jiri Kucera and Eugene M. Terentjev
|
FliI6-FliJ molecular motor assists with unfolding in the type III
secretion export apparatus
| null |
Scientific Reports (2020) 10:7127
|
10.1038/s41598-020-63330-y
| null |
q-bio.SC cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The role of rotational molecular motors of the ATP synthase class is integral
to the metabolism of cells. Yet the function of the FliI6-FliJ complex,
homologous to the F1 ATPase motor, within the flagellar export apparatus
remains unclear.
We use a simple two-state model adapted from studies of linear molecular motors
to identify key features of this motor. The two states are the 'locked' ground
state where the FliJ coiled coil filament experiences fluctuations in an
asymmetric torsional potential, and a 'free' excited state in which FliJ
undergoes rotational diffusion. Michaelis-Menten kinetics was used to treat
transitions between these two states and to obtain the average angular velocity
of the FliJ filament within the FliI6 stator: ~9.0 rps. The motor was then
studied under external counter torque conditions in order to ascertain its
maximal power output: Pmax ~42 kBT/s, and the stall torque: ~3 kBT/rad. Two
modes of action within the flagellar export apparatus are proposed, in which
the motor performs useful work either by continuously 'grinding' through the
resistive environment, or by exerting equal and opposite stall force on it. In
both cases, the resistance is provided by flagellin subunits entering the
flagellar export channel prior to their unfolding. We therefore propose that
the function of the FliI6-FliJ complex is to lower the energy barrier and
therefore assist in unfolding of the flagellar proteins before feeding them
into the transport channel.
|
[
{
"created": "Thu, 19 Dec 2019 12:16:26 GMT",
"version": "v1"
}
] |
2020-05-12
|
[
[
"Kucera",
"Jiri",
""
],
[
"Terentjev",
"Eugene M.",
""
]
] |
The role of rotational molecular motors of the ATP synthase class is integral to the metabolism of cells. Yet the function of the FliI6-FliJ complex, homologous to the F1 ATPase motor, within the flagellar export apparatus remains unclear. We use a simple two-state model adapted from studies of linear molecular motors to identify key features of this motor. The two states are the 'locked' ground state where the FliJ coiled coil filament experiences fluctuations in an asymmetric torsional potential, and a 'free' excited state in which FliJ undergoes rotational diffusion. Michaelis-Menten kinetics was used to treat transitions between these two states and to obtain the average angular velocity of the FliJ filament within the FliI6 stator: ~9.0 rps. The motor was then studied under external counter torque conditions in order to ascertain its maximal power output: Pmax ~42 kBT/s, and the stall torque: ~3 kBT/rad. Two modes of action within the flagellar export apparatus are proposed, in which the motor performs useful work either by continuously 'grinding' through the resistive environment, or by exerting equal and opposite stall force on it. In both cases, the resistance is provided by flagellin subunits entering the flagellar export channel prior to their unfolding. We therefore propose that the function of the FliI6-FliJ complex is to lower the energy barrier and therefore assist in unfolding of the flagellar proteins before feeding them into the transport channel.
|
1903.07541
|
Yuri Eisaki
|
Yuri Eisaki, Ikkyu Aihara, Isamu Hikosaka and Tohru Kawabe
|
Landing Dynamics of a Seagull Examined by Field Observation and
Mathematical Modeling
|
11 pages, 11 figures
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A seagull ({\it Larus crassirostris}) is highly capable of safe, accurate and
smooth landing. We examined how a seagull controls its angle of
attack when landing on a specific target. First, we recorded the landing
behavior of an actual seagull by multiple video cameras and quantified the
flight trajectory and the angle of attack as time series data. Second, we
introduced a mathematical model that describes how a seagull controls its speed
by changing its angle of attack. Based on the numerical simulation combining
the mathematical model and empirical data, we succeeded in qualitatively
explaining the landing behavior of an actual seagull, which demonstrates that
the control of the angle of attack is important for landing behavior.
|
[
{
"created": "Fri, 15 Mar 2019 15:41:49 GMT",
"version": "v1"
}
] |
2019-03-19
|
[
[
"Eisaki",
"Yuri",
""
],
[
"Aihara",
"Ikkyu",
""
],
[
"Hikosaka",
"Isamu",
""
],
[
"Kawabe",
"Tohru",
""
]
] |
A seagull ({\it Larus crassirostris}) is highly capable of safe, accurate and smooth landing. We examined how a seagull controls its angle of attack when landing on a specific target. First, we recorded the landing behavior of an actual seagull by multiple video cameras and quantified the flight trajectory and the angle of attack as time series data. Second, we introduced a mathematical model that describes how a seagull controls its speed by changing its angle of attack. Based on the numerical simulation combining the mathematical model and empirical data, we succeeded in qualitatively explaining the landing behavior of an actual seagull, which demonstrates that the control of the angle of attack is important for landing behavior.
|
1310.7277
|
Emmanuelle Tognoli
|
Emmanuelle Tognoli and J. A. Scott Kelso
|
Enlarging the scope: grasping brain complexity
|
8 pages, 2 figures
|
Frontiers in Systems Neuroscience, 2014 8:122
|
10.3389/fnsys.2014.00122
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To further advance our understanding of the brain, new concepts and theories
are needed. In particular, the ability of the brain to create information flows
must be reconciled with its propensity for synchronization and mass action. The
framework of Coordination Dynamics and the theory of metastability are
presented as a starting point to study the interplay of integrative and
segregative tendencies that are expressed in space and time during the normal
course of brain function. Some recent shifts in perspective are emphasized,
which may ultimately lead to a better understanding of brain complexity.
|
[
{
"created": "Sun, 27 Oct 2013 23:49:37 GMT",
"version": "v1"
}
] |
2014-07-04
|
[
[
"Tognoli",
"Emmanuelle",
""
],
[
"Kelso",
"J. A. Scott",
""
]
] |
To further advance our understanding of the brain, new concepts and theories are needed. In particular, the ability of the brain to create information flows must be reconciled with its propensity for synchronization and mass action. The framework of Coordination Dynamics and the theory of metastability are presented as a starting point to study the interplay of integrative and segregative tendencies that are expressed in space and time during the normal course of brain function. Some recent shifts in perspective are emphasized, which may ultimately lead to a better understanding of brain complexity.
|
2007.14926
|
Mai He
|
Yu-Whuei Hu (1), Li-Shan Huang (2), Eric J. Yeh (3), Mai He (4) ((1)
National Dong Hwa University, Hualian, Taiwan (2) National Tsing Hua
University, Hsinchu, Taiwan (3) Amgen Inc. Thousand Oaks, CA, USA (4)
Department of Pathology & Immunology, Washington University School of
Medicine, St. Louis, MO, USA)
|
Healthcare Utilization and Perceived Health Status from Falun Gong
Practitioners in Taiwan: A Pilot SF-36 Survey
|
Five tables
| null | null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objective: Falun Gong (FLG) is a practice of mind and body focusing on moral
character improvement along with meditative exercises. This 2002 pilot study
explored perceived health status, medical resource utilization and related
factors among Taiwanese FLG practitioners, compared to the general Taiwanese
norm estimated by the 2001 National Health Interview Survey (NHIS). Methods:
This cross-sectional, observational study was based on a voluntary, paper-based
survey conducted from October 2002 to February 2003 using the same Taiwanese
SF-36 instrument employed by the NHIS. Primary outcomes included eight SF-36
domain scores and the number of medical visits. One-sample t-tests, one-way
ANOVA and multivariate linear regression analyses were performed. Results: The
response rate was 75.6% (1,210/1,600). Compared to the norm, the study cohort
had significantly higher scores in six of eight SF-36 domains across gender and
age (p<0.05). Among those with chronic diseases, 70% to 89% reported their
conditions either improved or cured. 74.2%, 79.2%, 83.3%, and 85.6% quit
alcohol drinking, smoking, betel nut chewing, and gambling, respectively.
62.7% reported a
reduced number of medical visits (mean=13.53 before; mean=5.87 after).
Conclusions: In this subject cohort, practicing FLG led to higher perceived
health scores and reduced health resource utilization compared to the norm.
|
[
{
"created": "Wed, 29 Jul 2020 16:00:28 GMT",
"version": "v1"
}
] |
2020-07-30
|
[
[
"Hu",
"Yu-Whuei",
""
],
[
"Huang",
"Li-Shan",
""
],
[
"Yeh",
"Eric J.",
""
],
[
"He",
"Mai",
""
]
] |
Objective: Falun Gong (FLG) is a practice of mind and body focusing on moral character improvement along with meditative exercises. This 2002 pilot study explored perceived health status, medical resource utilization and related factors among Taiwanese FLG practitioners, compared to the general Taiwanese norm estimated by the 2001 National Health Interview Survey (NHIS). Methods: This cross-sectional, observational study was based on a voluntary, paper-based survey conducted from October 2002 to February 2003 using the same Taiwanese SF-36 instrument employed by the NHIS. Primary outcomes included eight SF-36 domain scores and the number of medical visits. One-sample t-tests, one-way ANOVA and multivariate linear regression analyses were performed. Results: The response rate was 75.6% (1,210/1,600). Compared to the norm, the study cohort had significantly higher scores in six of eight SF-36 domains across gender and age (p<0.05). Among those with chronic diseases, 70% to 89% reported their conditions either improved or cured. 74.2%, 79.2%, 83.3%, and 85.6% quit alcohol drinking, smoking, betel nut chewing, and gambling, respectively. 62.7% reported a reduced number of medical visits (mean=13.53 before; mean=5.87 after). Conclusions: In this subject cohort, practicing FLG led to higher perceived health scores and reduced health resource utilization compared to the norm.
|
1005.1465
|
Steffen Waldherr
|
Steffen Waldherr, Jingbo Wu, Frank Allg\"ower
|
Bridging Time Scales in Cellular Decision Making with a Stochastic
Bistable Switch
|
14 pages, 4 figures
| null | null | null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cellular transformations which involve a significant phenotypical change of
the cell's state use bistable biochemical switches as underlying decision
systems. In this work, we aim at linking cellular decisions taking place on a
time scale of years to decades with the biochemical dynamics in signal
transduction and gene regulation, occurring on a time scale of minutes to hours.
We show that a stochastic bistable switch forms a viable biochemical mechanism
to implement decision processes on long time scales. As a case study, the
mechanism is applied to model the initiation of follicle growth in mammalian
ovaries, where the physiological time scale of follicle pool depletion is on
the order of the organism's lifespan. We construct a simple mathematical model
for this process based on experimental evidence for the involved genetic
mechanisms. Despite the underlying stochasticity, the proposed mechanism turns
out to yield reliable behavior in large populations of cells subject to the
considered decision process. Our model explains how the physiological time
constant may emerge from the intrinsic stochasticity of the underlying gene
regulatory network. Apart from ovarian follicles, the proposed mechanism may
also be of relevance for other physiological systems where cells take binary
decisions over a long time scale.
|
[
{
"created": "Mon, 10 May 2010 07:42:22 GMT",
"version": "v1"
}
] |
2010-05-11
|
[
[
"Waldherr",
"Steffen",
""
],
[
"Wu",
"Jingbo",
""
],
[
"Allgöwer",
"Frank",
""
]
] |
Cellular transformations which involve a significant phenotypical change of the cell's state use bistable biochemical switches as underlying decision systems. In this work, we aim at linking cellular decisions taking place on a time scale of years to decades with the biochemical dynamics in signal transduction and gene regulation, occurring on a time scale of minutes to hours. We show that a stochastic bistable switch forms a viable biochemical mechanism to implement decision processes on long time scales. As a case study, the mechanism is applied to model the initiation of follicle growth in mammalian ovaries, where the physiological time scale of follicle pool depletion is on the order of the organism's lifespan. We construct a simple mathematical model for this process based on experimental evidence for the involved genetic mechanisms. Despite the underlying stochasticity, the proposed mechanism turns out to yield reliable behavior in large populations of cells subject to the considered decision process. Our model explains how the physiological time constant may emerge from the intrinsic stochasticity of the underlying gene regulatory network. Apart from ovarian follicles, the proposed mechanism may also be of relevance for other physiological systems where cells take binary decisions over a long time scale.
|
0811.1081
|
Mircea Andrecut Dr
|
M. Andrecut
|
Parallel GPU Implementation of Iterative PCA Algorithms
|
45 pages, 1 figure, source code included
| null | null | null |
q-bio.QM cs.MS physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Principal component analysis (PCA) is a key statistical technique for
multivariate data analysis. For large data sets the common approach to PCA
computation is based on the standard NIPALS-PCA algorithm, which unfortunately
suffers from loss of orthogonality, and therefore its applicability is usually
limited to the estimation of the first few components. Here we present an
algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which
eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics
Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA
algorithms. The numerical results show that the GPU parallel optimized
versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times)
than the CPU optimized versions based on CBLAS (GNU Scientific Library).
|
[
{
"created": "Fri, 7 Nov 2008 04:34:01 GMT",
"version": "v1"
}
] |
2008-11-10
|
[
[
"Andrecut",
"M.",
""
]
] |
Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
|
2306.15074
|
Ilya Shirokov
|
D. V. Alekseevsky, I.M. Shirokov
|
Geometry of saccades and saccadic cycles
|
9 pages, 3 figures
| null | null | null |
q-bio.NC math.DG
|
http://creativecommons.org/licenses/by/4.0/
|
The paper is devoted to the development of the differential geometry of
saccades and saccadic cycles. We recall an interpretation of Donders' and
Listing's laws in terms of the Hopf fibration of the $3$-sphere over the
$2$-sphere. In particular, the configuration space of the eye ball (when the
head is fixed) is the 2-dimensional hemisphere $S^+_L$, which is called
Listing's hemisphere. We give three characterizations of saccades: as a geodesic
segment $ab$ in Listing's hemisphere, as the gaze curve, and as a piecewise
geodesic curve of the orthogonal group. We study the geometry of the saccadic
cycle, which is represented by a geodesic polygon in the Listing hemisphere,
and give necessary and sufficient conditions for when a system of lines through
the center of the eye ball is the system of axes of rotation for saccades of the
saccadic cycle, described in terms of world coordinates and retinotopic
coordinates. This gives an approach to the study of the visual stability problem.
|
[
{
"created": "Mon, 26 Jun 2023 21:22:07 GMT",
"version": "v1"
}
] |
2023-06-28
|
[
[
"Alekseevsky",
"D. V.",
""
],
[
"Shirokov",
"I. M.",
""
]
] |
The paper is devoted to the development of the differential geometry of saccades and saccadic cycles. We recall an interpretation of Donders' and Listing's laws in terms of the Hopf fibration of the $3$-sphere over the $2$-sphere. In particular, the configuration space of the eye ball (when the head is fixed) is the 2-dimensional hemisphere $S^+_L$, which is called Listing's hemisphere. We give three characterizations of saccades: as a geodesic segment $ab$ in Listing's hemisphere, as the gaze curve, and as a piecewise geodesic curve of the orthogonal group. We study the geometry of the saccadic cycle, which is represented by a geodesic polygon in the Listing hemisphere, and give necessary and sufficient conditions for when a system of lines through the center of the eye ball is the system of axes of rotation for saccades of the saccadic cycle, described in terms of world coordinates and retinotopic coordinates. This gives an approach to the study of the visual stability problem.
|
1208.3847
|
Miroslaw Rewekant PhD MD
|
S. Piekarski, M. Rewekant
|
On applications of conservation laws in pharmacokinetics
|
6 pages
| null | null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been certain criticism raised by A. Rescigno [2,8,9,12-15] against
the standard formulation of pharmacokinetics. In 2011 it was suggested
that inconsistencies in pharmacokinetics should be eliminated by deriving
"pharmacokinetic parameters" from conservation laws [3]. In the following text
a simple system of conservation laws for extravascular administration of a
drug is explicitly given, and a preliminary discussion of this issue is
included.
|
[
{
"created": "Sun, 19 Aug 2012 15:46:23 GMT",
"version": "v1"
}
] |
2012-08-21
|
[
[
"Piekarski",
"S.",
""
],
[
"Rewekant",
"M.",
""
]
] |
There has been certain criticism raised by A. Rescigno [2,8,9,12-15] against the standard formulation of pharmacokinetics. In 2011 it was suggested that inconsistencies in pharmacokinetics should be eliminated by deriving "pharmacokinetic parameters" from conservation laws [3]. In the following text a simple system of conservation laws for extravascular administration of a drug is explicitly given, and a preliminary discussion of this issue is included.
|
1605.09697
|
Augusto Gonzalez
|
Dario Leon and Augusto Gonzalez
|
Mutations as Levy flights
| null |
Scientific Reports volume 11, Article number: 9889 (2021)
|
10.1038/s41598-021-88012-1
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data from a long-term evolution experiment with Escherichia coli and from a
large study on copy number variations in subjects with European ancestry are
analyzed in order to argue that mutations can be described as Levy flights in
the mutation space. These Levy flights have at least two components: random
single-base substitutions and large DNA rearrangements. From the data, we get
estimates of the time rates of both events and the size distribution
function of large rearrangements.
|
[
{
"created": "Tue, 31 May 2016 16:13:10 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Jul 2016 19:50:08 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Jun 2017 20:27:52 GMT",
"version": "v3"
},
{
"created": "Mon, 16 Apr 2018 13:54:46 GMT",
"version": "v4"
},
{
"created": "Thu, 7 May 2020 02:28:10 GMT",
"version": "v5"
},
{
"created": "Sat, 23 May 2020 18:07:06 GMT",
"version": "v6"
},
{
"created": "Wed, 27 Jan 2021 21:26:24 GMT",
"version": "v7"
}
] |
2022-05-20
|
[
[
"Leon",
"Dario",
""
],
[
"Gonzalez",
"Augusto",
""
]
] |
Data from a long-term evolution experiment with Escherichia coli and from a large study on copy number variations in subjects with European ancestry are analyzed in order to argue that mutations can be described as Levy flights in the mutation space. These Levy flights have at least two components: random single-base substitutions and large DNA rearrangements. From the data, we get estimates of the time rates of both events and the size distribution function of large rearrangements.
|
1610.03443
|
Axel Wedemeyer
|
Axel Wedemeyer, Lasse Kliemann, Anand Srivastav, Christian Schielke,
Thorsten B. Reusch, Philip Rosenstiel
|
An Improved Filtering Algorithm for Big Read Datasets
| null | null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For single-cell or metagenomic sequencing projects, it is necessary to
sequence with a very high mean coverage in order to make sure that all parts of
the sample DNA get covered by the reads produced. This leads to huge datasets
with lots of redundant data. Filtering this data prior to assembly is
advisable. Titus Brown et al. (2012) presented the algorithm Diginorm for this
purpose, which filters reads based on the abundance of their $k$-mers. We
present Bignorm, a faster and quality-conscious read filtering algorithm. An
important new feature is the use of phred quality scores together with a
detailed analysis of the $k$-mer counts to decide which reads to keep. With
recommended parameters, we remove a median of 97.15% of the reads while
keeping the mean phred score of the filtered dataset high. Using the SPAdes
assembler, we produce assemblies of high quality from these filtered datasets
in a fraction of the time needed for an assembly from the datasets filtered
with Diginorm. We conclude that read filtering is a practical method for
reducing read data and for speeding up the assembly process. Our Bignorm
algorithm allows assemblies of competitive quality in comparison to Diginorm,
while being much faster. Bignorm is available for download at
https://git.informatik.uni-kiel.de/axw/Bignorm.git
|
[
{
"created": "Tue, 11 Oct 2016 17:49:26 GMT",
"version": "v1"
}
] |
2016-10-12
|
[
[
"Wedemeyer",
"Axel",
""
],
[
"Kliemann",
"Lasse",
""
],
[
"Srivastav",
"Anand",
""
],
[
"Schielke",
"Christian",
""
],
[
"Reusch",
"Thorsten B.",
""
],
[
"Rosenstiel",
"Philip",
""
]
] |
For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to make sure that all parts of the sample DNA get covered by the reads produced. This leads to huge datasets with lots of redundant data. Filtering this data prior to assembly is advisable. Titus Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their $k$-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new feature is the use of phred quality scores together with a detailed analysis of the $k$-mer counts to decide which reads to keep. With recommended parameters, we remove a median of 97.15% of the reads while keeping the mean phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical method for reducing read data and for speeding up the assembly process. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm.git
|
1305.5411
|
Javier Galeano
|
Javier Garcia-Algarra, Javier Galeano, Juan Manuel Pastor, Jose Maria
Iriondo, and Jose J. Ramasco
|
Rethinking the logistic approach for population dynamics of mutualistic
interactions
|
13 pages, 7 figures
|
Journal of Theoretical Biology 363, 332 (2014)
|
10.1016/j.jtbi.2014.08.039
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mutualistic communities have an internal structure that makes them resilient
to external perturbations. Recent research has focused on their stability and
the topology of the relations between the different organisms to explain the
reasons for the system's robustness. Much less attention has been devoted to
analyzing the system dynamics. The main population models in use are
modifications of the logistic equation with additional terms to account for the
benefits produced by the interspecific interactions. These models have
shortcomings, as the so-called r-K formulation of the logistic equation diverges
under some conditions. In this work, we introduce a model for population
dynamics under mutualism inspired by the logistic equation but avoiding
singularities. The model is mathematically simpler than the widely used type II
models, although it shows similar complexity in terms of fixed points and
stability of the dynamics. Furthermore, each term of our model has a more
direct ecological interpretation, which can facilitate the measurement of the
rates involved in field campaigns. We perform an analytical stability analysis
and numerical simulations to study the model behavior in more general
interaction scenarios including tests of the resilience of its dynamics under
external perturbations. Despite its simplicity, our results indicate that the
model dynamics shows an important richness that can be used to gain further
insights in the dynamics of mutualistic communities.
|
[
{
"created": "Thu, 23 May 2013 13:22:17 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Feb 2014 16:50:00 GMT",
"version": "v2"
}
] |
2014-09-19
|
[
[
"Garcia-Algarra",
"Javier",
""
],
[
"Galeano",
"Javier",
""
],
[
"Pastor",
"Juan Manuel",
""
],
[
"Iriondo",
"Jose Maria",
""
],
[
"Ramasco",
"Jose J.",
""
]
] |
Mutualistic communities have an internal structure that makes them resilient to external perturbations. Recent research has focused on their stability and the topology of the relations between the different organisms to explain the reasons for the system's robustness. Much less attention has been devoted to analyzing the system dynamics. The main population models in use are modifications of the logistic equation with additional terms to account for the benefits produced by the interspecific interactions. These models have shortcomings, as the so-called r-K formulation of the logistic equation diverges under some conditions. In this work, we introduce a model for population dynamics under mutualism inspired by the logistic equation but avoiding singularities. The model is mathematically simpler than the widely used type II models, although it shows similar complexity in terms of fixed points and stability of the dynamics. Furthermore, each term of our model has a more direct ecological interpretation, which can facilitate the measurement of the rates involved in field campaigns. We perform an analytical stability analysis and numerical simulations to study the model behavior in more general interaction scenarios including tests of the resilience of its dynamics under external perturbations. Despite its simplicity, our results indicate that the model dynamics shows an important richness that can be used to gain further insights in the dynamics of mutualistic communities.
|
q-bio/0607042
|
Miodrag Krmar
|
Vladan Pankovic, Banjac Dejan, Rade Glavatovic, Milan Predojevic
|
A Simple Solution of the Lotka-Volterra Equations
|
6 pages, no figures
| null | null |
NS-PH-19/06
|
q-bio.QM
| null |
In this work we consider a simple approximate solution, tending toward the
exact one, of the system of two standard Lotka-Volterra differential equations.
The solution is obtained by an iterative method. In any finite approximation
order, the exponents of the corresponding Lotka-Volterra variables have a
simple, time-polynomial form. When the approximation order tends to infinity,
the approximate solution converges toward the exact solution in some finite
time interval.
|
[
{
"created": "Mon, 24 Jul 2006 08:47:17 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Pankovic",
"Vladan",
""
],
[
"Dejan",
"Banjac",
""
],
[
"Glavatovic",
"Rade",
""
],
[
"Predojevic",
"Milan",
""
]
] |
In this work we consider a simple approximate solution, tending toward the exact one, of the system of two standard Lotka-Volterra differential equations. The solution is obtained by an iterative method. In any finite approximation order, the exponents of the corresponding Lotka-Volterra variables have a simple, time-polynomial form. When the approximation order tends to infinity, the approximate solution converges toward the exact solution in some finite time interval.
|
2101.11712
|
Adrian Buganza Tepole
|
Yue Leng, Vahidullah Tac, Sarah Calve, Adrian Buganza Tepole
|
Predicting the Mechanical Properties of Biopolymer Gels Using Neural
Networks Trained on Discrete Fiber Network Data
|
20 pages, 9 figures, for associated files please see
https://bitbucket.org/buganzalab/nn_rve/src/master/
| null |
10.1016/j.cma.2021.114160
| null |
q-bio.QM cs.CE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biopolymer gels, such as those made out of fibrin or collagen, are widely
used in tissue engineering applications and biomedical research. Moreover,
fibrin naturally assembles into gels in vivo during wound healing and thrombus
formation. Macroscale biopolymer gel mechanics are dictated by the microscale
fiber network. Hence, accurate description of biopolymer gels can be achieved
using representative volume elements (RVE) that explicitly model the discrete
fiber networks of the microscale. These RVE models, however, cannot be
efficiently used to model the macroscale due to the challenges and
computational demands of multiscale coupling. Here, we propose the use of an
artificial, fully connected neural network (FCNN) to efficiently capture the
behavior of the RVE models. The FCNN was trained on 1100 fiber networks
subjected to 121 biaxial deformations. The stress data from the RVE, together
with the total energy and the condition of incompressibility of the surrounding
matrix, were used to determine the derivatives of an unknown strain energy
function with respect to the deformation invariants. During training, the loss
function was modified to ensure convexity of the strain energy function and
symmetry of its Hessian. A general FCNN model was coded into a user material
subroutine (UMAT) in the software Abaqus. In this work, the FCNN trained on the
discrete fiber network data was used in finite element simulations of fibrin
gels using our UMAT. We anticipate that this work will enable further
integration of machine learning tools with computational mechanics. It will
also improve computational modeling of biological materials characterized by a
multiscale structure.
|
[
{
"created": "Sat, 23 Jan 2021 23:52:33 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Aug 2021 15:04:29 GMT",
"version": "v2"
}
] |
2021-11-03
|
[
[
"Leng",
"Yue",
""
],
[
"Tac",
"Vahidullah",
""
],
[
"Calve",
"Sarah",
""
],
[
"Tepole",
"Adrian Buganza",
""
]
] |
Biopolymer gels, such as those made out of fibrin or collagen, are widely used in tissue engineering applications and biomedical research. Moreover, fibrin naturally assembles into gels in vivo during wound healing and thrombus formation. Macroscale biopolymer gel mechanics are dictated by the microscale fiber network. Hence, accurate description of biopolymer gels can be achieved using representative volume elements (RVE) that explicitly model the discrete fiber networks of the microscale. These RVE models, however, cannot be efficiently used to model the macroscale due to the challenges and computational demands of multiscale coupling. Here, we propose the use of an artificial, fully connected neural network (FCNN) to efficiently capture the behavior of the RVE models. The FCNN was trained on 1100 fiber networks subjected to 121 biaxial deformations. The stress data from the RVE, together with the total energy and the condition of incompressibility of the surrounding matrix, were used to determine the derivatives of an unknown strain energy function with respect to the deformation invariants. During training, the loss function was modified to ensure convexity of the strain energy function and symmetry of its Hessian. A general FCNN model was coded into a user material subroutine (UMAT) in the software Abaqus. In this work, the FCNN trained on the discrete fiber network data was used in finite element simulations of fibrin gels using our UMAT. We anticipate that this work will enable further integration of machine learning tools with computational mechanics. It will also improve computational modeling of biological materials characterized by a multiscale structure.
|
2005.04424
|
Avijit Maji
|
Avijit Maji, Tushar Choudhari, M.B. Sushma
|
Implication of Repatriating Migrant Workers on COVID-19 Spread and
Transportation Requirements
|
22 pages, 8 figures, 8 tables
| null | null | null |
q-bio.PE physics.soc-ph
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Nationwide lockdown for COVID-19 created an urgent demand for public
transportation among migrant workers stranded at different parts of India to
return to their native places. Arranging transportation could spike the number
of COVID-19 infected cases. Hence, this paper investigates the potential surge
in confirmed and active cases of COVID-19 infection and assesses the train and
bus fleet size required for the repatriating migrant workers. The migrant
worker population expected to repatriate was obtained by forecasting the 2011
census data and comparing it with the information reported in the news media. A
modified susceptible-exposed-infected-removed (SEIR) model was proposed to
estimate the surge in confirmed and active cases of COVID-19 patients in
India's selected states with high outflux of migrants. The developed model
considered combinations of different levels of the daily arrival rate of
migrant workers, total migrant workers in need of transportation, and the
origin of the trip dependent symptomatic cases on arrival. Reducing the daily
arrival rate of migrant workers for states with very high outflux of migrants
(i.e., Uttar Pradesh and Bihar) can help to lower the surge in confirmed and
active cases. Nevertheless, it could create a disparity in the number of days
needed to transport all repatriating migrant workers to the home states. Hence,
travel arrangements for about 100,000 migrant workers per day to Uttar Pradesh
and Bihar, about 50,000 per day to Rajasthan and Madhya Pradesh, 20,000 per day
to Maharashtra and less than 20,000 per day to other states of India were
recommended.
|
[
{
"created": "Sat, 9 May 2020 12:24:01 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jul 2020 18:01:12 GMT",
"version": "v2"
}
] |
2020-08-03
|
[
[
"Maji",
"Avijit",
""
],
[
"Choudhari",
"Tushar",
""
],
[
"Sushma",
"M. B.",
""
]
] |
Nationwide lockdown for COVID-19 created an urgent demand for public transportation among migrant workers stranded at different parts of India to return to their native places. Arranging transportation could spike the number of COVID-19 infected cases. Hence, this paper investigates the potential surge in confirmed and active cases of COVID-19 infection and assesses the train and bus fleet size required for the repatriating migrant workers. The migrant worker population expected to repatriate was obtained by forecasting the 2011 census data and comparing it with the information reported in the news media. A modified susceptible-exposed-infected-removed (SEIR) model was proposed to estimate the surge in confirmed and active cases of COVID-19 patients in India's selected states with high outflux of migrants. The developed model considered combinations of different levels of the daily arrival rate of migrant workers, total migrant workers in need of transportation, and the origin of the trip dependent symptomatic cases on arrival. Reducing the daily arrival rate of migrant workers for states with very high outflux of migrants (i.e., Uttar Pradesh and Bihar) can help to lower the surge in confirmed and active cases. Nevertheless, it could create a disparity in the number of days needed to transport all repatriating migrant workers to the home states. Hence, travel arrangements for about 100,000 migrant workers per day to Uttar Pradesh and Bihar, about 50,000 per day to Rajasthan and Madhya Pradesh, 20,000 per day to Maharashtra and less than 20,000 per day to other states of India were recommended.
|
q-bio/0411032
|
Michael LaMar
|
M. Drew LaMar, J. Xin, Y. Qi
|
Signal processing of acoustic signals in the time domain with an active
nonlinear nonlocal cochlear model
|
19 pages, 9 figures
| null | null | null |
q-bio.QM q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A two space dimensional active nonlinear nonlocal cochlear model is
formulated in the time domain to capture nonlinear hearing effects such as
compression, multi-tone suppression and difference tones. The micromechanics of
the basilar membrane (BM) are incorporated to model active cochlear properties.
An active gain parameter is constructed in the form of a nonlinear nonlocal
functional of BM displacement. The model is discretized with a boundary
integral method and numerically solved using an iterative second order accurate
finite difference scheme. A block matrix structure of the discrete system is
exploited to simplify the numerics with no loss of accuracy. Model responses to
multiple frequency stimuli are shown in agreement with hearing experiments. A
nonlinear spectrum is computed from the model, and compared with FFT spectrum
for noisy tonal inputs. The discretized model is efficient and accurate, and
can serve as a useful auditory signal processing tool.
|
[
{
"created": "Mon, 15 Nov 2004 19:06:19 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jul 2010 15:41:51 GMT",
"version": "v2"
}
] |
2010-07-07
|
[
[
"LaMar",
"M. Drew",
""
],
[
"Xin",
"J.",
""
],
[
"Qi",
"Y.",
""
]
] |
A two space dimensional active nonlinear nonlocal cochlear model is formulated in the time domain to capture nonlinear hearing effects such as compression, multi-tone suppression and difference tones. The micromechanics of the basilar membrane (BM) are incorporated to model active cochlear properties. An active gain parameter is constructed in the form of a nonlinear nonlocal functional of BM displacement. The model is discretized with a boundary integral method and numerically solved using an iterative second order accurate finite difference scheme. A block matrix structure of the discrete system is exploited to simplify the numerics with no loss of accuracy. Model responses to multiple frequency stimuli are shown in agreement with hearing experiments. A nonlinear spectrum is computed from the model, and compared with FFT spectrum for noisy tonal inputs. The discretized model is efficient and accurate, and can serve as a useful auditory signal processing tool.
|
1807.04665
|
Dirson Jian Li
|
Dirson Jian Li
|
Observations and perspectives on the prebiotic sequence evolution
|
53 pages, 9 figures
| null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The post-genomic era has brought opportunities to bridge traditionally
separate fields of early history of life and brought new insight into origin
and evolution of biodiversity. According to distributions of codons in genome
sequences, I found a relationship between the genetic code and the tree of
life. This remote and profound relationship involves the origin and evolution
of the genetic code and the diversification and expansion of genomes. Here, a
prebiotic picture of the triplex nucleic acid evolution is proposed to explain
the origin of the genetic code, where the transition from disorder to order in
the origin of life might be due to the increasing stabilities of triplex base
pairs. The codon degeneracy can be obtained in detail based on the coevolution
of the genetic code with amino acids, or equivalently, the coevolution of tRNAs
with aaRSs. This theory is based on experimental data such as the stability of
triplex base pairs and the statistical features of genomic codon distributions.
Several experimentally testable proposals have been developed. This study
should be regarded as an exploratory attempt to reveal the early evolution of
life based on sequence information in a statistical manner.
|
[
{
"created": "Thu, 12 Jul 2018 15:13:12 GMT",
"version": "v1"
}
] |
2018-07-13
|
[
[
"Li",
"Dirson Jian",
""
]
] |
The post-genomic era has brought opportunities to bridge traditionally separate fields of early history of life and brought new insight into origin and evolution of biodiversity. According to distributions of codons in genome sequences, I found a relationship between the genetic code and the tree of life. This remote and profound relationship involves the origin and evolution of the genetic code and the diversification and expansion of genomes. Here, a prebiotic picture of the triplex nucleic acid evolution is proposed to explain the origin of the genetic code, where the transition from disorder to order in the origin of life might be due to the increasing stabilities of triplex base pairs. The codon degeneracy can be obtained in detail based on the coevolution of the genetic code with amino acids, or equivalently, the coevolution of tRNAs with aaRSs. This theory is based on experimental data such as the stability of triplex base pairs and the statistical features of genomic codon distributions. Several experimentally testable proposals have been developed. This study should be regarded as an exploratory attempt to reveal the early evolution of life based on sequence information in a statistical manner.
|
1302.7301
|
Krzysztof Argasinski
|
Krzysztof Argasinski and Mark Broom
|
Background fitness, eco-evolutionary feedbacks and the Hawk-Dove game
|
This paper is the research report from the grant and soon will be
replaced by two papers containing generalized extensions of the results
introduced in this paper
| null | null | null |
q-bio.PE math.CA nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper further develops work from a previous article which introduced a
new way of modelling evolutionary game models with an emphasis on ecological
realism, concerned with how ecological factors determine payoffs in
evolutionary games. The current paper is focused on the impact of selectively
neutral factors (i.e. those that are the same for all strategies), such as
background fitness and a neutral density dependence on game dynamics. In the
previous paper background mortality was modelled by post-reproductive mortality
describing aggregated mortality between game interactions. In this paper a
novel approach to the modelling of background fitness, based on different
occurrence rates (intensities) of game interactions and background events is
presented. It is shown that the new method is more realistic and natural for
dynamic models than the method used in the previous paper. The previous method
in effect produces biased trajectories, but it is compatible with the new
approach at the restpoints. Also in this paper a rigorous stability analysis of
the restpoints of the dynamics is presented. It is shown that the stability or
instability of the restpoints can be explained by a mechanism of
eco-evolutionary feedback. The results obtained are illustrated by a Hawk-Dove
game example.
|
[
{
"created": "Thu, 28 Feb 2013 19:56:16 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Mar 2014 23:03:23 GMT",
"version": "v2"
}
] |
2014-03-05
|
[
[
"Argasinski",
"Krzysztof",
""
],
[
"Broom",
"Mark",
""
]
] |
This paper further develops work from a previous article which introduced a new way of modelling evolutionary game models with an emphasis on ecological realism, concerned with how ecological factors determine payoffs in evolutionary games. The current paper is focused on the impact of selectively neutral factors (i.e. those that are the same for all strategies), such as background fitness and a neutral density dependence on game dynamics. In the previous paper background mortality was modelled by post-reproductive mortality describing aggregated mortality between game interactions. In this paper a novel approach to the modelling of background fitness, based on different occurrence rates (intensities) of game interactions and background events is presented. It is shown that the new method is more realistic and natural for dynamic models than the method used in the previous paper. The previous method in effect produces biased trajectories, but it is compatible with the new approach at the restpoints. Also in this paper a rigorous stability analysis of the restpoints of the dynamics is presented. It is shown that the stability or instability of the restpoints can be explained by a mechanism of eco-evolutionary feedback. The results obtained are illustrated by a Hawk-Dove game example.
|
2202.05761
|
Zachary Kilpatrick PhD
|
Subekshya Bidari, Ahmed El Hady, Jacob Davidson, and Zachary P
Kilpatrick
|
Stochastic dynamics of social patch foraging decisions
|
24 pages, 7 figures
| null | null | null |
q-bio.PE q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Animals typically forage in groups. Social foraging can help animals avoid
predation and decrease their uncertainty about the richness of food resources.
Despite this, theoretical mechanistic models of patch foraging have
overwhelmingly focused on the behavior of single foragers. In this study, we
develop a mechanistic model describing the behavior of individuals foraging
together and departing food patches following an evidence accumulation process.
Each individual's belief about patch quality is represented by a stochastically
accumulating variable coupled to others' belief, representing the transfer of
information. We consider a cohesive group, and model information sharing as
either intermittent pulsatile coupling (communicate decision to leave) or
continuous diffusive coupling (communicate online belief). Foraging efficiency
under pulsatile coupling has a stronger dependence on the coupling strength
parameter compared to diffusive. Despite employing minimal information
transfer, pulsatile coupling can still provide similar or higher foraging
efficiency compared to diffusive coupling. Conversely, diffusive coupling is
more robust to parameter detuning and performs better when individuals have
heterogeneous departure criteria and social information weighting. Efficiency
is measured by a reward rate function that balances the amount of energy
accumulated against the time spent in a patch, computed by solving an ordered
first passage time problem for the patch departures of each individual. Using
synthetic data we show that we can distinguish between the two modes of
communication and identify the model parameters. Our model establishes a social
patch foraging framework to parse and identify deliberative decision
strategies, to distinguish different forms of social communication, and to
allow model fitting to real world animal behavior data.
|
[
{
"created": "Fri, 11 Feb 2022 16:52:13 GMT",
"version": "v1"
}
] |
2022-02-14
|
[
[
"Bidari",
"Subekshya",
""
],
[
"Hady",
"Ahmed El",
""
],
[
"Davidson",
"Jacob",
""
],
[
"Kilpatrick",
"Zachary P",
""
]
] |
Animals typically forage in groups. Social foraging can help animals avoid predation and decrease their uncertainty about the richness of food resources. Despite this, theoretical mechanistic models of patch foraging have overwhelmingly focused on the behavior of single foragers. In this study, we develop a mechanistic model describing the behavior of individuals foraging together and departing food patches following an evidence accumulation process. Each individual's belief about patch quality is represented by a stochastically accumulating variable coupled to others' belief, representing the transfer of information. We consider a cohesive group, and model information sharing as either intermittent pulsatile coupling (communicate decision to leave) or continuous diffusive coupling (communicate online belief). Foraging efficiency under pulsatile coupling has a stronger dependence on the coupling strength parameter compared to diffusive. Despite employing minimal information transfer, pulsatile coupling can still provide similar or higher foraging efficiency compared to diffusive coupling. Conversely, diffusive coupling is more robust to parameter detuning and performs better when individuals have heterogeneous departure criteria and social information weighting. Efficiency is measured by a reward rate function that balances the amount of energy accumulated against the time spent in a patch, computed by solving an ordered first passage time problem for the patch departures of each individual. Using synthetic data we show that we can distinguish between the two modes of communication and identify the model parameters. Our model establishes a social patch foraging framework to parse and identify deliberative decision strategies, to distinguish different forms of social communication, and to allow model fitting to real world animal behavior data.
|
1301.1017
|
Andre Brown
|
Andr\'e E.X. Brown and William R. Schafer
|
Automated behavioural fingerprinting of C. elegans mutants
|
Accepted for publication in "Linking Phenotypes and Genotypes - How
to Infer Genetic Architecture from Perturbation Studies", edited by Florian
Markowetz (Cambridge) and Michael Boutros (Heidelberg), Cambridge University
Press. 28 pages, 6 figures
| null | null | null |
q-bio.QM physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rapid advances in genetics, genomics, and imaging have given insight into the
molecular and cellular basis of behaviour in a variety of model organisms with
unprecedented detail and scope. It is increasingly routine to isolate
behavioural mutants, clone and characterise mutant genes, and discern the
molecular and neural basis for a behavioural phenotype. Conversely, reverse
genetic approaches have made it possible to straightforwardly identify genes of
interest in whole-genome sequences and generate mutants that can be subjected
to phenotypic analysis. In this latter approach, it is the phenotyping that
presents the major bottleneck; when it comes to connecting phenotype to
genotype in freely behaving animals, analysis of behaviour itself remains
superficial and time consuming. However, many proof-of-principle studies of
automated behavioural analysis over the last decade have poised the field on
the verge of exciting developments that promise to begin closing this gap.
In the broadest sense, our goal in this chapter is to explore what we can
learn about the genes involved in neural function by carefully observing
behaviour. This approach is rooted in model organism genetics but shares ideas
with ethology and neuroscience, as well as computer vision and bioinformatics.
After introducing C. elegans as a model, we will survey the research that has
led to the current state of the art in worm behavioural phenotyping and present
current research that is transforming our approach to behavioural genetics.
|
[
{
"created": "Sun, 6 Jan 2013 16:01:24 GMT",
"version": "v1"
}
] |
2013-01-08
|
[
[
"Brown",
"André E. X.",
""
],
[
"Schafer",
"William R.",
""
]
] |
Rapid advances in genetics, genomics, and imaging have given insight into the molecular and cellular basis of behaviour in a variety of model organisms with unprecedented detail and scope. It is increasingly routine to isolate behavioural mutants, clone and characterise mutant genes, and discern the molecular and neural basis for a behavioural phenotype. Conversely, reverse genetic approaches have made it possible to straightforwardly identify genes of interest in whole-genome sequences and generate mutants that can be subjected to phenotypic analysis. In this latter approach, it is the phenotyping that presents the major bottleneck; when it comes to connecting phenotype to genotype in freely behaving animals, analysis of behaviour itself remains superficial and time consuming. However, many proof-of-principle studies of automated behavioural analysis over the last decade have poised the field on the verge of exciting developments that promise to begin closing this gap. In the broadest sense, our goal in this chapter is to explore what we can learn about the genes involved in neural function by carefully observing behaviour. This approach is rooted in model organism genetics but shares ideas with ethology and neuroscience, as well as computer vision and bioinformatics. After introducing C. elegans as a model, we will survey the research that has led to the current state of the art in worm behavioural phenotyping and present current research that is transforming our approach to behavioural genetics.
|
1212.0119
|
Justin Fay
|
Elizabeth K. Engle and Justin C. Fay
|
ZRT1 harbors an excess of nonsynonymous polymorphism and shows evidence
of balancing selection in Saccharomyces cerevisiae
| null | null | null | null |
q-bio.PE q-bio.GN
|
http://creativecommons.org/licenses/by/3.0/
|
Estimates of the fraction of nucleotide substitutions driven by positive
selection vary widely across different species. Accounting for different
estimates of positive selection has been difficult, in part because selection
on polymorphism within a species is known to obscure a signal of positive
selection between species. While methods have been developed to control for the
confounding effects of negative selection against deleterious polymorphism, the
impact of balancing selection on estimates of positive selection has not been
assessed. In Saccharomyces cerevisiae, there is no signal of positive selection
within protein coding sequences as the ratio of nonsynonymous to synonymous
polymorphism is higher than that of divergence. To investigate the impact of
balancing selection on estimates of positive selection we examined five genes
with high rates of nonsynonymous polymorphism in S. cerevisiae relative to
divergence from S. paradoxus. One of the genes, a high affinity zinc
transporter ZRT1, shows an elevated rate of synonymous polymorphism indicative
of balancing selection. The high rate of synonymous polymorphism coincides with
nonsynonymous divergence between three haplotype groups, which we find to be
functionally indistinguishable. We conclude that balancing selection is not
likely to be a common cause of genes harboring a large excess of nonsynonymous
polymorphism in yeast.
|
[
{
"created": "Sat, 1 Dec 2012 14:49:35 GMT",
"version": "v1"
}
] |
2012-12-04
|
[
[
"Engle",
"Elizabeth K.",
""
],
[
"Fay",
"Justin C.",
""
]
] |
Estimates of the fraction of nucleotide substitutions driven by positive selection vary widely across different species. Accounting for different estimates of positive selection has been difficult, in part because selection on polymorphism within a species is known to obscure a signal of positive selection between species. While methods have been developed to control for the confounding effects of negative selection against deleterious polymorphism, the impact of balancing selection on estimates of positive selection has not been assessed. In Saccharomyces cerevisiae, there is no signal of positive selection within protein coding sequences as the ratio of nonsynonymous to synonymous polymorphism is higher than that of divergence. To investigate the impact of balancing selection on estimates of positive selection we examined five genes with high rates of nonsynonymous polymorphism in S. cerevisiae relative to divergence from S. paradoxus. One of the genes, a high affinity zinc transporter ZRT1, shows an elevated rate of synonymous polymorphism indicative of balancing selection. The high rate of synonymous polymorphism coincides with nonsynonymous divergence between three haplotype groups, which we find to be functionally indistinguishable. We conclude that balancing selection is not likely to be a common cause of genes harboring a large excess of nonsynonymous polymorphism in yeast.
|
1506.04437
|
Tim Peterson
|
Timothy R. Peterson, Mathieu Laplante, Ed Van Veen, Marcel Van Vugt,
Carson C. Thoreen, David M. Sabatini
|
mTORC1 regulates cytokinesis through activation of Rho-ROCK signaling
|
https://onarbor.com/work/3/mtorc1-regulates-cytokinesis-through-activation-of-rho-rock-signaling
| null | null | null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the mechanisms by which cells coordinate their size with their
ability to divide has long attracted the interest of biologists. The Target of
Rapamycin (TOR) pathway is becoming increasingly recognized as a master
regulator of cell size; however, less is known about how TOR activity might be coupled
with the cell cycle. Here, we establish that mTOR complex 1 (mTORC1) promotes
cytokinesis through activation of a Rho GTPase-Rho Kinase (ROCK) signaling
cascade. Hyperactivation of mTORC1 signaling by depletion of any of its
negative regulators: TSC1, TSC2, PTEN, or DEPTOR, induces polyploidy in a
rapamycin-sensitive manner. mTORC1 hyperactivation-mediated polyploidization
occurs by a prolonged, but ultimately failed attempt at abscission followed by
re-fusion. Similar to the effects of ROCK2 overexpression, these mTORC1-driven
aberrant cytokinesis events are accompanied by increased Rho-GTP loading,
extensive plasma membrane blebbing, and increased actin-myosin contractility,
all of which can be rescued by either mTORC1 or ROCK inhibition. These results
provide evidence for the existence of a novel mTORC1-Rho-ROCK pathway during
cytokinesis and suggest that mTORC1 might play a critical role in setting the
size at which a mammalian cell divides.
|
[
{
"created": "Sun, 14 Jun 2015 20:47:56 GMT",
"version": "v1"
}
] |
2015-06-16
|
[
[
"Peterson",
"Timothy R.",
""
],
[
"Laplante",
"Mathieu",
""
],
[
"Van Veen",
"Ed",
""
],
[
"Van Vugt",
"Marcel",
""
],
[
"Thoreen",
"Carson C.",
""
],
[
"Sabatini",
"David M.",
""
]
] |
Understanding the mechanisms by which cells coordinate their size with their ability to divide has long attracted the interest of biologists. The Target of Rapamycin (TOR) pathway is becoming increasingly recognized as a master regulator of cell size; however, less is known about how TOR activity might be coupled with the cell cycle. Here, we establish that mTOR complex 1 (mTORC1) promotes cytokinesis through activation of a Rho GTPase-Rho Kinase (ROCK) signaling cascade. Hyperactivation of mTORC1 signaling by depletion of any of its negative regulators: TSC1, TSC2, PTEN, or DEPTOR, induces polyploidy in a rapamycin-sensitive manner. mTORC1 hyperactivation-mediated polyploidization occurs by a prolonged, but ultimately failed attempt at abscission followed by re-fusion. Similar to the effects of ROCK2 overexpression, these mTORC1-driven aberrant cytokinesis events are accompanied by increased Rho-GTP loading, extensive plasma membrane blebbing, and increased actin-myosin contractility, all of which can be rescued by either mTORC1 or ROCK inhibition. These results provide evidence for the existence of a novel mTORC1-Rho-ROCK pathway during cytokinesis and suggest that mTORC1 might play a critical role in setting the size at which a mammalian cell divides.
|
2201.11849
|
Francesca Ballarini
|
J. Schuemann, A. McNamara, J. W. Warmenhoven, N. T. Henthorn, K.
Kirkby, M. J. Merchant, S. Ingram, H. Paganetti, KD. Held, J. Ramos-Mendez,
B. Faddegon, J. Perl, D. Goodhead, I. Plante, H. Rabus, H. Nettelbeck, W.
Friedland, P. Kundrat, A. Ottolenghi, G. Baiocco, S. Barbieri, M. Dingfelder,
S. Incerti, C. Villagrasa, M. Bueno, M. A. Bernal, S. Guatelli, D. Sakata, J.
M. C. Brown, Z. Francis, I. Kyriakou, N. Lampe, F. Ballarini, M. P. Carante,
M. Davidkova, V. \v{S}tep\'an, X. Jia, F. A. Cucinotta, R. Schulte, R.
Stewart, D. Carlson, S. Galer, Z. Kuncic, S. LaCombe, J. Milligan, S. H. Cho,
T. Inaniwa, T. Sato, M Durante, K Prise, and S. J. McMahon
|
A new Standard DNA damage (SDD) data format
| null |
Radiation Research, 191(1): 76-92
|
10.1667/RR15209.1
| null |
q-bio.OT physics.med-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Our understanding of radiation induced cellular damage has greatly improved
over the past decades. Despite this progress, there are still many obstacles to
fully understanding how radiation interacts with biologically relevant cellular
components to form observable endpoints. One hurdle is the difficulty faced by
members of different research groups in directly comparing results. Multiple
Monte Carlo codes have been developed to simulate damage induction at the DNA
scale, while at the same time various groups have developed models that
describe DNA repair processes with varying levels of detail. These repair
models are intrinsically linked to the damage model employed in their
development, making it difficult to disentangle systematic effects in either
part of the modelling chain. The modelling chain typically consists of track
structure Monte Carlo simulations of the physics interactions creating direct
damages to the DNA; followed by simulations of the production and initial
reactions of chemical species causing indirect damages. After the DNA damage
induction, DNA repair models combine the simulated damage patterns with
biological models to determine the biological consequences of the damage. We
propose a new Standard data format for DNA Damage to unify the interface
between the simulation of damage induction and the biological modelling of cell
repair processes. Such a standard greatly facilitates inter model comparisons,
providing an ideal environment to tease out model assumptions and identify
persistent, underlying mechanisms. Through inter model comparisons, this
unified standard has the potential to greatly advance our understanding of the
underlying mechanisms of radiation induced DNA damage and the resulting
observable biological effects.
|
[
{
"created": "Tue, 11 Jan 2022 11:43:13 GMT",
"version": "v1"
}
] |
2022-01-31
|
[
[
"Schuemann",
"J.",
""
],
[
"McNamara",
"A.",
""
],
[
"Warmenhoven",
"J. W.",
""
],
[
"Henthorn",
"N. T.",
""
],
[
"Kirkby",
"K.",
""
],
[
"Merchant",
"M. J.",
""
],
[
"Ingram",
"S.",
""
],
[
"Paganetti",
"H.",
""
],
[
"Held",
"KD.",
""
],
[
"Ramos-Mendez",
"J.",
""
],
[
"Faddegon",
"B.",
""
],
[
"Perl",
"J.",
""
],
[
"Goodhead",
"D.",
""
],
[
"Plante",
"I.",
""
],
[
"Rabus",
"H.",
""
],
[
"Nettelbeck",
"H.",
""
],
[
"Friedland",
"W.",
""
],
[
"Kundrat",
"P.",
""
],
[
"Ottolenghi",
"A.",
""
],
[
"Baiocco",
"G.",
""
],
[
"Barbieri",
"S.",
""
],
[
"Dingfelder",
"M.",
""
],
[
"Incerti",
"S.",
""
],
[
"Villagrasa",
"C.",
""
],
[
"Bueno",
"M.",
""
],
[
"Bernal",
"M. A.",
""
],
[
"Guatelli",
"S.",
""
],
[
"Sakata",
"D.",
""
],
[
"Brown",
"J. M. C.",
""
],
[
"Francis",
"Z.",
""
],
[
"Kyriakou",
"I.",
""
],
[
"Lampe",
"N.",
""
],
[
"Ballarini",
"F.",
""
],
[
"Carante",
"M. P.",
""
],
[
"Davidkova",
"M.",
""
],
[
"Štepán",
"V.",
""
],
[
"Jia",
"X.",
""
],
[
"Cucinotta",
"F. A.",
""
],
[
"Schulte",
"R.",
""
],
[
"Stewart",
"R.",
""
],
[
"Carlson",
"D.",
""
],
[
"Galer",
"S.",
""
],
[
"Kuncic",
"Z.",
""
],
[
"LaCombe",
"S.",
""
],
[
"Milligan",
"J.",
""
],
[
"Cho",
"S. H.",
""
],
[
"Inaniwa",
"T.",
""
],
[
"Sato",
"T.",
""
],
[
"Durante",
"M",
""
],
[
"Prise",
"K",
""
],
[
"McMahon",
"S. J.",
""
]
] |
Our understanding of radiation induced cellular damage has greatly improved over the past decades. Despite this progress, there are still many obstacles to fully understanding how radiation interacts with biologically relevant cellular components to form observable endpoints. One hurdle is the difficulty faced by members of different research groups in directly comparing results. Multiple Monte Carlo codes have been developed to simulate damage induction at the DNA scale, while at the same time various groups have developed models that describe DNA repair processes with varying levels of detail. These repair models are intrinsically linked to the damage model employed in their development, making it difficult to disentangle systematic effects in either part of the modelling chain. The modelling chain typically consists of track structure Monte Carlo simulations of the physics interactions creating direct damages to the DNA; followed by simulations of the production and initial reactions of chemical species causing indirect damages. After the DNA damage induction, DNA repair models combine the simulated damage patterns with biological models to determine the biological consequences of the damage. We propose a new Standard data format for DNA Damage to unify the interface between the simulation of damage induction and the biological modelling of cell repair processes. Such a standard greatly facilitates inter model comparisons, providing an ideal environment to tease out model assumptions and identify persistent, underlying mechanisms. Through inter model comparisons, this unified standard has the potential to greatly advance our understanding of the underlying mechanisms of radiation induced DNA damage and the resulting observable biological effects.
|
1211.6662
|
Abdelmalik Moujahid
|
Abdelmalik Moujahid and Alicia D'Anjou
|
Metabolic efficiency with fast spiking in the squid axon
| null |
Front. Comput. Neurosci. (2012) 6:95
|
10.3389/fncom.2012.00095
| null |
q-bio.NC math.DS physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fundamentally, action potentials in the squid axon are a consequence of the
entrance of sodium ions during the depolarization of the rising phase of the
spike mediated by the outflow of potassium ions during the hyperpolarization of
the falling phase. Perfect metabolic efficiency with a minimum charge needed
for the change in voltage during the action potential would confine sodium
entry to the rising phase and potassium efflux to the falling phase. However,
because sodium channels remain open to a significant extent during the falling
phase, a certain overlap of inward and outward currents is observed. In this
work we investigate the impact of ion overlap on the number of the adenosine
triphosphate (ATP) molecules and energy cost required per action potential as a
function of the temperature in a Hodgkin-Huxley model. Based on a recent
approach to computing the energy cost of neuronal AP generation not based on
ion counting, we show that increased firing frequencies induced by higher
temperatures imply more efficient use of sodium entry, and then a decrease in
the metabolic energy cost required to restore the concentration gradients after
an action potential. Also, we determine values of sodium conductance at which
the hydrolysis efficiency presents a clear minimum.
|
[
{
"created": "Wed, 28 Nov 2012 17:09:14 GMT",
"version": "v1"
}
] |
2012-11-29
|
[
[
"Moujahid",
"Abdelmalik",
""
],
[
"D'Anjou",
"Alicia",
""
]
] |
Fundamentally, action potentials in the squid axon are a consequence of the entrance of sodium ions during the depolarization of the rising phase of the spike mediated by the outflow of potassium ions during the hyperpolarization of the falling phase. Perfect metabolic efficiency with a minimum charge needed for the change in voltage during the action potential would confine sodium entry to the rising phase and potassium efflux to the falling phase. However, because sodium channels remain open to a significant extent during the falling phase, a certain overlap of inward and outward currents is observed. In this work we investigate the impact of ion overlap on the number of the adenosine triphosphate (ATP) molecules and energy cost required per action potential as a function of the temperature in a Hodgkin-Huxley model. Based on a recent approach to computing the energy cost of neuronal AP generation not based on ion counting, we show that increased firing frequencies induced by higher temperatures imply more efficient use of sodium entry, and then a decrease in the metabolic energy cost required to restore the concentration gradients after an action potential. Also, we determine values of sodium conductance at which the hydrolysis efficiency presents a clear minimum.
|
1805.00548
|
Mike Steel Prof.
|
Lina Herbst, Thomas Li, Mike Steel
|
Quantifying the accuracy of ancestral state prediction in a phylogenetic
tree under maximum parsimony
|
26 pages, 4 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In phylogenetic studies, biologists often wish to estimate the ancestral
discrete character state at an interior vertex $v$ of an evolutionary tree $T$
from the states that are observed at the leaves of the tree. A simple and fast
estimation method --- maximum parsimony --- takes the ancestral state at $v$ to
be any state that minimises the number of state changes in $T$ required to
explain its evolution on $T$. In this paper, we investigate the reconstruction
accuracy of this estimation method further, under a simple symmetric model of
state change, and obtain a number of new results, both for 2-state characters,
and $r$--state characters ($r>2$). Our results rely on establishing new
identities and inequalities, based on a coupling argument that involves a
simpler `coin toss' approach to ancestral state reconstruction.
|
[
{
"created": "Tue, 1 May 2018 20:50:52 GMT",
"version": "v1"
}
] |
2018-05-03
|
[
[
"Herbst",
"Lina",
""
],
[
"Li",
"Thomas",
""
],
[
"Steel",
"Mike",
""
]
] |
In phylogenetic studies, biologists often wish to estimate the ancestral discrete character state at an interior vertex $v$ of an evolutionary tree $T$ from the states that are observed at the leaves of the tree. A simple and fast estimation method --- maximum parsimony --- takes the ancestral state at $v$ to be any state that minimises the number of state changes in $T$ required to explain its evolution on $T$. In this paper, we investigate the reconstruction accuracy of this estimation method further, under a simple symmetric model of state change, and obtain a number of new results, both for 2-state characters, and $r$--state characters ($r>2$). Our results rely on establishing new identities and inequalities, based on a coupling argument that involves a simpler `coin toss' approach to ancestral state reconstruction.
|
q-bio/0410001
|
Ken Kiyono
|
Zbigniew R. Struzik, Junichiro Hayano, Seiichiro Sakata, Shin Kwak,
Yoshiharu Yamamoto
|
1/f Scaling in Heart Rate Requires Antagonistic Autonomic Control
|
4 pages, 3 figures, to appear in Phys. Rev. E (2004)
| null |
10.1103/PhysRevE.70.050901
| null |
q-bio.OT
| null |
We present the first systematic evidence for the origins of 1/f-type temporal
scaling in human heart rate. The heart rate is regulated by the activity of two
branches of the autonomic nervous system: the parasympathetic (PNS) and the
sympathetic (SNS) nervous systems. We examine alterations in the scaling
property when the balance between PNS and SNS activity is modified, and find
that the relative PNS suppression by congestive heart failure results in a
substantial increase in the Hurst exponent H towards random walk scaling
$1/f^{2}$ and a similar breakdown is observed with relative SNS suppression by
primary autonomic failure. These results suggest that 1/f scaling in heart rate
requires the intricate balance between the antagonistic activity of PNS and
SNS.
|
[
{
"created": "Fri, 1 Oct 2004 09:57:24 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Struzik",
"Zbigniew R.",
""
],
[
"Hayano",
"Junichiro",
""
],
[
"Sakata",
"Seiichiro",
""
],
[
"Kwak",
"Shin",
""
],
[
"Yamamoto",
"Yoshiharu",
""
]
] |
We present the first systematic evidence for the origins of 1/f-type temporal scaling in human heart rate. The heart rate is regulated by the activity of two branches of the autonomic nervous system: the parasympathetic (PNS) and the sympathetic (SNS) nervous systems. We examine alterations in the scaling property when the balance between PNS and SNS activity is modified, and find that the relative PNS suppression by congestive heart failure results in a substantial increase in the Hurst exponent H towards random walk scaling $1/f^{2}$ and a similar breakdown is observed with relative SNS suppression by primary autonomic failure. These results suggest that 1/f scaling in heart rate requires the intricate balance between the antagonistic activity of PNS and SNS.
|
2406.12064
|
Yun William Yu
|
Xiaolei Brian Zhang, Grace Oualline, Jim Shaw, Yun William Yu
|
skandiver: a divergence-based analysis tool for identifying
intercellular mobile genetic elements
|
9 pages, 6 figures
| null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile genetic elements (MGEs) are as ubiquitous in nature as they are varied
in type, ranging from viral insertions to transposons to incorporated plasmids.
Horizontal transfer of MGEs across bacterial species may also pose a
significant threat to global health due to their capability to harbour
antibiotic resistance genes. However, despite cheap and rapid whole genome
sequencing, the varied nature of MGEs makes it difficult to fully characterize
them, and existing methods for detecting MGEs often don't agree on what should
count. In this manuscript, we first define and argue in favor of a
divergence-based characterization of mobile-genetic elements. Using that
paradigm, we present skandiver, a tool designed to efficiently detect MGEs from
whole genome assemblies without the need for gene annotation or markers.
skandiver determines mobile elements via genome fragmentation, average
nucleotide identity (ANI), and divergence time. By building on the scalable
skani software for ANI computation, skandiver can query hundreds of complete
assemblies against $>$65,000 representative genomes in a few minutes and 19 GB
memory, providing a scalable and efficient method for elucidating mobile element
profiles in incomplete, uncharacterized genomic sequences. For isolated and
integrated large plasmids (>10kbp), skandiver's recall was 48\% and 47\%,
MobileElementFinder was 59\% and 17\%, and geNomad was 86\% and 32\%,
respectively. For isolated large plasmids, skandiver's recall (48\%) is lower
than state-of-the-art reference-based methods geNomad (86\%) and
MobileElementFinder (59\%). However, skandiver achieves higher recall on
integrated plasmids and, unlike other methods, without comparing against a
curated database, making skandiver suitable for discovery of novel MGEs.
Availability: https://github.com/YoukaiFromAccounting/skandiver
|
[
{
"created": "Mon, 17 Jun 2024 20:06:22 GMT",
"version": "v1"
}
] |
2024-06-19
|
[
[
"Zhang",
"Xiaolei Brian",
""
],
[
"Oualline",
"Grace",
""
],
[
"Shaw",
"Jim",
""
],
[
"Yu",
"Yun William",
""
]
] |
Mobile genetic elements (MGEs) are as ubiquitous in nature as they are varied in type, ranging from viral insertions to transposons to incorporated plasmids. Horizontal transfer of MGEs across bacterial species may also pose a significant threat to global health due to their capability to harbour antibiotic resistance genes. However, despite cheap and rapid whole genome sequencing, the varied nature of MGEs makes it difficult to fully characterize them, and existing methods for detecting MGEs often don't agree on what should count. In this manuscript, we first define and argue in favor of a divergence-based characterization of mobile-genetic elements. Using that paradigm, we present skandiver, a tool designed to efficiently detect MGEs from whole genome assemblies without the need for gene annotation or markers. skandiver determines mobile elements via genome fragmentation, average nucleotide identity (ANI), and divergence time. By building on the scalable skani software for ANI computation, skandiver can query hundreds of complete assemblies against $>$65,000 representative genomes in a few minutes and 19 GB memory, providing a scalable and efficient method for elucidating mobile element profiles in incomplete, uncharacterized genomic sequences. For isolated and integrated large plasmids (>10kbp), skandiver's recall was 48\% and 47\%, MobileElementFinder was 59\% and 17\%, and geNomad was 86\% and 32\%, respectively. For isolated large plasmids, skandiver's recall (48\%) is lower than state-of-the-art reference-based methods geNomad (86\%) and MobileElementFinder (59\%). However, skandiver achieves higher recall on integrated plasmids and, unlike other methods, without comparing against a curated database, making skandiver suitable for discovery of novel MGEs. Availability: https://github.com/YoukaiFromAccounting/skandiver
|
2006.03017
|
Matthew Mizuhara
|
Nicolas Bolle, Matthew S. Mizuhara
|
Dynamics of a cell motility model near the sharp interface limit
|
18 pages, 5 figures
| null |
10.1016/j.jtbi.2020.110420
| null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phase-field models have recently had great success in describing the dynamic
morphologies and motility of eukaryotic cells. In this work we investigate the
minimal phase-field model introduced in [Berlyand, Potomkin, Rybalko (2017)].
Rigorous analysis of its sharp interface limit dynamics was completed in
[Mizuhara, Berlyand, Rybalko, Zhang (2016); Mizuhara, Zhang (2019)], where it
was observed that persistent cell motion was not stable. In this work we
numerically study the pre-limiting phase-field model near the sharp interface
limit, to better understand this lack of persistent motion. We find that
immobile, persistent, and rotating states are all exhibited in this minimal
model, and investigate the loss of persistent motion in the sharp interface
limit. In addition we study cell speed as a function of biophysical parameters.
|
[
{
"created": "Thu, 4 Jun 2020 17:09:58 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jul 2020 15:11:11 GMT",
"version": "v2"
}
] |
2020-08-03
|
[
[
"Bolle",
"Nicolas",
""
],
[
"Mizuhara",
"Matthew S.",
""
]
] |
Phase-field models have recently had great success in describing the dynamic morphologies and motility of eukaryotic cells. In this work we investigate the minimal phase-field model introduced in [Berlyand, Potomkin, Rybalko (2017)]. Rigorous analysis of its sharp interface limit dynamics was completed in [Mizuhara, Berlyand, Rybalko, Zhang (2016); Mizuhara, Zhang (2019)], where it was observed that persistent cell motion was not stable. In this work we numerically study the pre-limiting phase-field model near the sharp interface limit, to better understand this lack of persistent motion. We find that immobile, persistent, and rotating states are all exhibited in this minimal model, and investigate the loss of persistent motion in the sharp interface limit. In addition we study cell speed as a function of biophysical parameters.
|
1901.04143
|
Sheryl Chang
|
Sheryl L. Chang and Mahendra Piraveenan and Philippa Pattison and
Mikhail Prokopenko
|
Game theoretic modelling of infectious disease dynamics and intervention
methods: a mini-review
|
24 pages, 10 figures
|
Journal of Biological Dynamics 14(1), 2020, pp. 57-89
|
10.1080/17513758.2020.1720322
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We review research papers which use game theory to model the decision making
of individuals during an epidemic, attempting to classify the literature and
identify the emerging trends in this field. We show that the literature can be
classified based on (i) type of population modelling (compartmental or
network-based), (ii) frequency of the game (non-iterative or iterative), and
(iii) type of strategy adoption (self-evaluation or imitation). We highlight
that the choice of model depends on many factors such as the type of immunity
the disease confers, the type of immunity the vaccine confers, and size of
population and level of mixing therein. We show that while early studies used
compartmental modelling with self-evaluation based strategy adoption, the
recent trend is to use network-based modelling with imitation-based strategy
adoption. Our review indicates that game theory continues to be an effective
tool to model intervention (vaccination or social distancing) decision-making
by individuals.
|
[
{
"created": "Mon, 14 Jan 2019 06:15:55 GMT",
"version": "v1"
}
] |
2020-02-13
|
[
[
"Chang",
"Sheryl L.",
""
],
[
"Piraveenan",
"Mahendra",
""
],
[
"Pattison",
"Philippa",
""
],
[
"Prokopenko",
"Mikhail",
""
]
] |
We review research papers which use game theory to model the decision making of individuals during an epidemic, attempting to classify the literature and identify the emerging trends in this field. We show that the literature can be classified based on (i) type of population modelling (compartmental or network-based), (ii) frequency of the game (non-iterative or iterative), and (iii) type of strategy adoption (self-evaluation or imitation). We highlight that the choice of model depends on many factors such as the type of immunity the disease confers, the type of immunity the vaccine confers, and size of population and level of mixing therein. We show that while early studies used compartmental modelling with self-evaluation based strategy adoption, the recent trend is to use network-based modelling with imitation-based strategy adoption. Our review indicates that game theory continues to be an effective tool to model intervention (vaccination or social distancing) decision-making by individuals.
|
0808.4016
|
Nikolai Sinitsyn
|
N. A. Sinitsyn, Nicolas Hengartner, Ilya Nemenman
|
Coarse-graining stochastic biochemical networks: quasi-stationary
approximation and fast simulations using a stochastic path integral technique
|
29 pages, 8 figures
|
Proceed. Natl. Acad. Sci. U. S. A., 2009 106:10546-10551
| null |
Technical Report: LA-UR 08-0025
|
q-bio.MN cond-mat.stat-mech physics.bio-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a universal approach for analysis and fast simulations of stiff
stochastic biochemical kinetics networks, which rests on elimination of fast
chemical species without a loss of information about mesoscopic, non-Poissonian
fluctuations of the slow ones. Our approach, which is similar to the
Born-Oppenheimer approximation in quantum mechanics, follows from the
stochastic path integral representation of the full counting statistics of
reaction events (also known as the cumulant generating function). In
applications with a small number of chemical reactions, this approach produces
analytical expressions for moments of chemical fluxes between slow variables.
This allows for a low-dimensional, interpretable representation of the
biochemical system, that can be used for coarse-grained numerical simulation
schemes with a small computational complexity and yet high accuracy. As an
example, we consider a chain of biochemical reactions, derive its
coarse-grained description, and show that the Gillespie simulations of the
original stiff system, the coarse-grained simulations, and the full analytical
treatment are in an agreement, but the coarse-grained simulations are three
orders of magnitude faster than the Gillespie analogue.
|
[
{
"created": "Fri, 29 Aug 2008 02:18:21 GMT",
"version": "v1"
}
] |
2009-07-07
|
[
[
"Sinitsyn",
"N. A.",
""
],
[
"Hengartner",
"Nicolas",
""
],
[
"Nemenman",
"Ilya",
""
]
] |
We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the full counting statistics of reaction events (also known as the cumulant generating function). In applications with a small number of chemical reactions, this approach produces analytical expressions for moments of chemical fluxes between slow variables. This allows for a low-dimensional, interpretable representation of the biochemical system, that can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we consider a chain of biochemical reactions, derive its coarse-grained description, and show that the Gillespie simulations of the original stiff system, the coarse-grained simulations, and the full analytical treatment are in an agreement, but the coarse-grained simulations are three orders of magnitude faster than the Gillespie analogue.
|
q-bio/0605019
|
Javier Buldu
|
Pablo Balenzuela, Javier M. Buldu, Marcos Casanova and Jordi
Garcia-Ojalvo
|
Episodic synchronization in dynamically driven neurons
|
7 pages, 8 figures
| null |
10.1103/PhysRevE.74.061910
| null |
q-bio.NC
| null |
We examine the response of type II excitable neurons to trains of synaptic
pulses, as a function of the pulse frequency and amplitude. We show that the
resonant behavior characteristic of type II excitability, already described for
harmonic inputs, is also present for pulsed inputs. With this in mind, we study
the response of neurons to pulsed input trains whose frequency varies
continuously in time, and observe that the receiving neuron synchronizes
episodically to the input pulses, whenever the pulse frequency lies within the
neuron's locking range. We propose this behavior as a mechanism of rate-code
detection in neuronal populations. The results are obtained both in numerical
simulations of the Morris-Lecar model and in an electronic implementation of
the FitzHugh-Nagumo system, evidencing the robustness of the phenomenon.
|
[
{
"created": "Fri, 12 May 2006 08:07:42 GMT",
"version": "v1"
}
] |
2015-06-26
|
[
[
"Balenzuela",
"Pablo",
""
],
[
"Buldu",
"Javier M.",
""
],
[
"Casanova",
"Marcos",
""
],
[
"Garcia-Ojalvo",
"Jordi",
""
]
] |
We examine the response of type II excitable neurons to trains of synaptic pulses, as a function of the pulse frequency and amplitude. We show that the resonant behavior characteristic of type II excitability, already described for harmonic inputs, is also present for pulsed inputs. With this in mind, we study the response of neurons to pulsed input trains whose frequency varies continuously in time, and observe that the receiving neuron synchronizes episodically to the input pulses, whenever the pulse frequency lies within the neuron's locking range. We propose this behavior as a mechanism of rate-code detection in neuronal populations. The results are obtained both in numerical simulations of the Morris-Lecar model and in an electronic implementation of the FitzHugh-Nagumo system, evidencing the robustness of the phenomenon.
|
2406.05170
|
Danyi Huang
|
Danyi Huang, Ziang Liu and Yizhou Li
|
Research on Tumors Segmentation based on Image Enhancement Method
| null | null | null | null |
q-bio.OT cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
One of the most effective ways to treat liver cancer is to perform precise
liver resection surgery, the key step of which includes precise digital image
segmentation of the liver and its tumor. However, traditional liver parenchymal
segmentation techniques often face several challenges in performing liver
segmentation: lack of precision, slow processing speed, and computational
burden. These shortcomings limit the efficiency of surgical planning and
execution. In this work, the model initially describes in detail a new image
enhancement algorithm that enhances the key features of an image by adaptively
adjusting the contrast and brightness of the image. Then, a deep learning-based
segmentation network was introduced, which was specially trained on the
enhanced images to optimize the detection accuracy of tumor regions. In
addition, multi-scale analysis techniques have been incorporated into the
study, allowing the model to analyze images at different resolutions to capture
more nuanced tumor features. In the presentation of the experimental results,
the study used the 3Dircadb dataset to test the effectiveness of the proposed
method. The experimental results show that compared with the traditional image
segmentation method, the new method using image enhancement technology has
significantly improved the accuracy and recall rate of tumor identification.
|
[
{
"created": "Fri, 7 Jun 2024 12:25:04 GMT",
"version": "v1"
}
] |
2024-06-11
|
[
[
"Huang",
"Danyi",
""
],
[
"Liu",
"Ziang",
""
],
[
"Li",
"Yizhou",
""
]
] |
One of the most effective ways to treat liver cancer is to perform precise liver resection surgery, the key step of which includes precise digital image segmentation of the liver and its tumor. However, traditional liver parenchymal segmentation techniques often face several challenges in performing liver segmentation: lack of precision, slow processing speed, and computational burden. These shortcomings limit the efficiency of surgical planning and execution. In this work, the model initially describes in detail a new image enhancement algorithm that enhances the key features of an image by adaptively adjusting the contrast and brightness of the image. Then, a deep learning-based segmentation network was introduced, which was specially trained on the enhanced images to optimize the detection accuracy of tumor regions. In addition, multi-scale analysis techniques have been incorporated into the study, allowing the model to analyze images at different resolutions to capture more nuanced tumor features. In the presentation of the experimental results, the study used the 3Dircadb dataset to test the effectiveness of the proposed method. The experimental results show that compared with the traditional image segmentation method, the new method using image enhancement technology has significantly improved the accuracy and recall rate of tumor identification.
|
2402.04241
|
Alexey Ovchinnikov
|
Helen Byrne, Heather Harrington, Alexey Ovchinnikov, Gleb Pogudin,
Hamid Rahkooy, and Pedro Soto
|
Algebraic identifiability of partial differential equation models
| null | null | null | null |
q-bio.QM cs.SC cs.SY eess.SY math.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Differential equation models are crucial to scientific processes. The values
of model parameters are important for analyzing the behaviour of solutions. A
parameter is called globally identifiable if its value can be uniquely
determined from the input and output functions. To determine if a parameter
estimation problem is well-posed for a given model, one must check if the model
parameters are globally identifiable. This problem has been intensively studied
for ordinary differential equation models, with theory and several efficient
algorithms and software packages developed. A comprehensive theory of algebraic
identifiability for PDEs has hitherto not been developed due to the complexity
of initial and boundary conditions. Here, we provide theory and algorithms,
based on differential algebra, for testing identifiability of polynomial PDE
models. We showcase this approach on PDE models arising in the sciences.
|
[
{
"created": "Tue, 6 Feb 2024 18:49:51 GMT",
"version": "v1"
}
] |
2024-02-07
|
[
[
"Byrne",
"Helen",
""
],
[
"Harrington",
"Heather",
""
],
[
"Ovchinnikov",
"Alexey",
""
],
[
"Pogudin",
"Gleb",
""
],
[
"Rahkooy",
"Hamid",
""
],
[
"Soto",
"Pedro",
""
]
] |
Differential equation models are crucial to scientific processes. The values of model parameters are important for analyzing the behaviour of solutions. A parameter is called globally identifiable if its value can be uniquely determined from the input and output functions. To determine if a parameter estimation problem is well-posed for a given model, one must check if the model parameters are globally identifiable. This problem has been intensively studied for ordinary differential equation models, with theory and several efficient algorithms and software packages developed. A comprehensive theory of algebraic identifiability for PDEs has hitherto not been developed due to the complexity of initial and boundary conditions. Here, we provide theory and algorithms, based on differential algebra, for testing identifiability of polynomial PDE models. We showcase this approach on PDE models arising in the sciences.
|
2202.01104
|
Charles Puelz
|
Zan Ahmad, Lynn H. Jin, Daniel J. Penny, Craig G. Rusin, Charles S.
Peskin, Charles Puelz
|
Optimal fenestration of the Fontan circulation
| null | null | null | null |
q-bio.TO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we develop a pulsatile compartmental model of the Fontan
circulation and use it to explore the effects of a fenestration added to this
physiology. A fenestration is a shunt between the systemic and pulmonary veins
that is added either at the time of Fontan conversion or at a later time for
the treatment of complications. This shunt increases cardiac output and
decreases systemic venous pressure. However, these hemodynamic benefits are
achieved at the expense of a decrease in the arterial oxygen saturation. The
model developed in this paper incorporates fenestration size as a parameter and
describes both blood flow and oxygen transport. It is calibrated to clinical
data from Fontan patients, and we use it to study the impact of a fenestration
on several hemodynamic variables. In certain scenarios corresponding to
high-risk Fontan physiology, we demonstrate the existence of an optimal
fenestration size that maximizes oxygen delivery to the systemic tissues.
|
[
{
"created": "Wed, 2 Feb 2022 16:04:12 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jun 2022 19:43:01 GMT",
"version": "v2"
}
] |
2022-06-23
|
[
[
"Ahmad",
"Zan",
""
],
[
"Jin",
"Lynn H.",
""
],
[
"Penny",
"Daniel J.",
""
],
[
"Rusin",
"Craig G.",
""
],
[
"Peskin",
"Charles S.",
""
],
[
"Puelz",
"Charles",
""
]
] |
In this paper, we develop a pulsatile compartmental model of the Fontan circulation and use it to explore the effects of a fenestration added to this physiology. A fenestration is a shunt between the systemic and pulmonary veins that is added either at the time of Fontan conversion or at a later time for the treatment of complications. This shunt increases cardiac output and decreases systemic venous pressure. However, these hemodynamic benefits are achieved at the expense of a decrease in the arterial oxygen saturation. The model developed in this paper incorporates fenestration size as a parameter and describes both blood flow and oxygen transport. It is calibrated to clinical data from Fontan patients, and we use it to study the impact of a fenestration on several hemodynamic variables. In certain scenarios corresponding to high-risk Fontan physiology, we demonstrate the existence of an optimal fenestration size that maximizes oxygen delivery to the systemic tissues.
|
1608.06723
|
Bo Sun
|
Amani A. Alobaidi and Bo Sun
|
Probing Three-dimensional Collective Cancer Invasion with DIGME
| null | null | null | null |
q-bio.TO q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multicellular migration and pattern formation play important roles in
developmental biology, cancer metastasis and wound healing. To understand the
collective cell dynamics in three dimensional extracellular matrix (ECM), we
have developed a simple and mechanical-based strategy, Diskoid In Geometrically
Micropatterned ECM (DIGME). DIGME allows easy engineering of the shape of 3-D
tissue organoid, the mesoscale ECM heterogeneity, and the fiber alignment of
collagen-based ECM all at the same time. We have employed DIGME to study the
collective cancer invasion and find that DIGME provides a powerful tool to
probe three dimensional dynamics of tumor organoid in patterned
microenvironment.
|
[
{
"created": "Wed, 24 Aug 2016 06:02:12 GMT",
"version": "v1"
}
] |
2016-08-25
|
[
[
"Alobaidi",
"Amani A.",
""
],
[
"Sun",
"Bo",
""
]
] |
Multicellular migration and pattern formation play important roles in developmental biology, cancer metastasis and wound healing. To understand the collective cell dynamics in three dimensional extracellular matrix (ECM), we have developed a simple and mechanical-based strategy, Diskoid In Geometrically Micropatterned ECM (DIGME). DIGME allows easy engineering of the shape of 3-D tissue organoid, the mesoscale ECM heterogeneity, and the fiber alignment of collagen-based ECM all at the same time. We have employed DIGME to study the collective cancer invasion and find that DIGME provides a powerful tool to probe three dimensional dynamics of tumor organoid in patterned microenvironment.
|
q-bio/0606032
|
Maria A. Avi\~no-Diaz
|
Maria A. Avino-Diaz and Oscar Moreno
|
Probabilistic Regulatory Networks: Modeling Genetic Networks
|
8 pages, 2 figures, International Congress of Mathematicians, to be
published in Proceedings of the Fifth International Conference on Engineering
Computational Technology, 2006, Las Palmas, Gran Canaria
| null | null | null |
q-bio.GN math.DS
| null |
We describe here the new concept of $\epsilon$-Homomorphisms of Probabilistic
Regulatory Gene Networks (PRN). The $\epsilon$-homomorphisms are special
mappings between two probabilistic networks, that consider the algebraic action
of the iteration of functions and the probabilistic dynamic of the two
networks. It is proved here that the class of PRN, together with the
homomorphisms, form a category with products and coproducts. Projections are
special homomorphisms, induced by invariant subnetworks. Here, it is proved
that an $\epsilon$-homomorphism for 0 <$\epsilon$< 1 produces simultaneous
Markov Chains in both networks, that permit to introduce the concepts of
$\epsilon$-isomorphism of Markov Chains, and similar networks.
|
[
{
"created": "Thu, 22 Jun 2006 18:39:47 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Avino-Diaz",
"Maria A.",
""
],
[
"Moreno",
"Oscar",
""
]
] |
We describe here the new concept of $\epsilon$-homomorphisms of Probabilistic Regulatory Gene Networks (PRN). The $\epsilon$-homomorphisms are special mappings between two probabilistic networks that consider the algebraic action of the iteration of functions and the probabilistic dynamics of the two networks. It is proved here that the class of PRN, together with the homomorphisms, forms a category with products and coproducts. Projections are special homomorphisms, induced by invariant subnetworks. Here, it is proved that an $\epsilon$-homomorphism for $0 < \epsilon < 1$ produces simultaneous Markov chains in both networks, which permit us to introduce the concepts of $\epsilon$-isomorphism of Markov chains, and similar networks.
|
0811.3514
|
Noa Sela
|
Galit Lev-Maor, Amir Goren, Noa Sela, Eddo Kim, Hadas Keren, Adi
Doron-Faigenboim, Shelly Leibman-Barak, Tal Pupko, Gil Ast
|
The Alternative Choice of Constitutive Exons throughout Evolution
| null |
PLoS Genet 2007 3(11): e203
| null | null |
q-bio.GN q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Alternative cassette exons are known to originate from two processes:
exonization of intronic sequences and exon shuffling. Herein, we suggest an
additional mechanism by which constitutively spliced exons become alternative
cassette exons during evolution. We compiled a dataset of orthologous exons
from human and mouse that are constitutively spliced in one species but
alternatively spliced in the other. Examination of these exons suggests that
the common ancestors were constitutively spliced. We show that relaxation of
the 5' splice site during evolution is one of the molecular mechanisms by which
exons shift from constitutive to alternative splicing. This shift is associated
with the fixation of exonic splicing regulatory sequences (ESRs) that are
essential for exon definition and control the inclusion level only after the
transition to alternative splicing. The effect of each ESR on splicing and the
combinatorial effects between two ESRs are conserved from fish to human. Our
results uncover an evolutionary pathway that increases transcriptome diversity
by shifting exons from constitutive to alternative splicing.
|
[
{
"created": "Fri, 21 Nov 2008 11:05:21 GMT",
"version": "v1"
}
] |
2008-11-24
|
[
[
"Lev-Maor",
"Galit",
""
],
[
"Goren",
"Amir",
""
],
[
"Sela",
"Noa",
""
],
[
"Kim",
"Eddo",
""
],
[
"Keren",
"Hadas",
""
],
[
"Doron-Faigenboim",
"Adi",
""
],
[
"Leibman-Barak",
"Shelly",
""
],
[
"Pupko",
"Tal",
""
],
[
"Ast",
"Gil",
""
]
] |
Alternative cassette exons are known to originate from two processes: exonization of intronic sequences and exon shuffling. Herein, we suggest an additional mechanism by which constitutively spliced exons become alternative cassette exons during evolution. We compiled a dataset of orthologous exons from human and mouse that are constitutively spliced in one species but alternatively spliced in the other. Examination of these exons suggests that the common ancestors were constitutively spliced. We show that relaxation of the 5' splice site during evolution is one of the molecular mechanisms by which exons shift from constitutive to alternative splicing. This shift is associated with the fixation of exonic splicing regulatory sequences (ESRs) that are essential for exon definition and control the inclusion level only after the transition to alternative splicing. The effect of each ESR on splicing and the combinatorial effects between two ESRs are conserved from fish to human. Our results uncover an evolutionary pathway that increases transcriptome diversity by shifting exons from constitutive to alternative splicing.
|
2305.01475
|
Teddy Lazebnik Dr.
|
Teddy Lazebnik, Liron Simon-Keren
|
Cancer-inspired Genomics Mapper Model for the Generation of Synthetic
DNA Sequences with Desired Genomics Signatures
| null | null |
10.1016/j.compbiomed.2023.107221
| null |
q-bio.GN cs.LG cs.NE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Genome data are crucial in modern medicine, offering significant potential
for diagnosis and treatment. Thanks to technological advancements, many
millions of healthy and diseased genomes have already been sequenced; however,
obtaining the most suitable data for a specific study, and specifically for
validation studies, remains challenging with respect to scale and access.
Therefore, in silico genomics sequence generators have been proposed as a
possible solution. However, the current generators produce inferior data using
mostly shallow (stochastic) connections, detected with limited computational
complexity in the training data. This means they do not take into consideration
the appropriate biological relations and constraints that originally caused the
observed connections. To address this issue, we propose the cancer-inspired
genomics mapper model (CGMM), which combines a genetic algorithm (GA) and deep
learning (DL) methods to tackle this challenge. CGMM mimics
processes that generate genetic variations and mutations to transform readily
available control genomes into genomes with the desired phenotypes. We
demonstrate that CGMM can generate synthetic genomes of selected phenotypes
such as ancestry and cancer that are indistinguishable from real genomes of
such phenotypes, based on unsupervised clustering. Our results show that CGMM
outperforms four current state-of-the-art genomics generators on two different
tasks, suggesting that CGMM will be suitable for a wide range of purposes in
genomic medicine, especially for much-needed validation studies.
|
[
{
"created": "Mon, 1 May 2023 07:16:40 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Lazebnik",
"Teddy",
""
],
[
"Simon-Keren",
"Liron",
""
]
] |
Genome data are crucial in modern medicine, offering significant potential for diagnosis and treatment. Thanks to technological advancements, many millions of healthy and diseased genomes have already been sequenced; however, obtaining the most suitable data for a specific study, and specifically for validation studies, remains challenging with respect to scale and access. Therefore, in silico genomics sequence generators have been proposed as a possible solution. However, the current generators produce inferior data using mostly shallow (stochastic) connections, detected with limited computational complexity in the training data. This means they do not take into consideration the appropriate biological relations and constraints that originally caused the observed connections. To address this issue, we propose the cancer-inspired genomics mapper model (CGMM), which combines a genetic algorithm (GA) and deep learning (DL) methods to tackle this challenge. CGMM mimics processes that generate genetic variations and mutations to transform readily available control genomes into genomes with the desired phenotypes. We demonstrate that CGMM can generate synthetic genomes of selected phenotypes such as ancestry and cancer that are indistinguishable from real genomes of such phenotypes, based on unsupervised clustering. Our results show that CGMM outperforms four current state-of-the-art genomics generators on two different tasks, suggesting that CGMM will be suitable for a wide range of purposes in genomic medicine, especially for much-needed validation studies.
|
1410.7346
|
Paolo Zuliani
|
Bing Liu, Soonho Kong, Sicun Gao, Paolo Zuliani, Edmund M. Clarke
|
Towards Personalized Prostate Cancer Therapy Using Delta-Reachability
Analysis
|
HSCC 2015
| null |
10.1145/2728606.2728634
| null |
q-bio.QM cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent clinical studies suggest that the efficacy of hormone therapy for
prostate cancer depends on the characteristics of individual patients. In this
paper, we develop a computational framework for identifying patient-specific
androgen ablation therapy schedules for postponing the potential cancer
relapse. We model the population dynamics of heterogeneous prostate cancer
cells in response to androgen suppression as a nonlinear hybrid automaton. We
estimate personalized kinetic parameters to characterize patients and employ
$\delta$-reachability analysis to predict patient-specific therapeutic
strategies. The results show that our methods are promising and may lead to a
prognostic tool for personalized cancer therapy.
|
[
{
"created": "Mon, 27 Oct 2014 18:36:40 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Mar 2015 09:33:47 GMT",
"version": "v2"
},
{
"created": "Tue, 19 May 2015 13:17:29 GMT",
"version": "v3"
}
] |
2015-05-20
|
[
[
"Liu",
"Bing",
""
],
[
"Kong",
"Soonho",
""
],
[
"Gao",
"Sicun",
""
],
[
"Zuliani",
"Paolo",
""
],
[
"Clarke",
"Edmund M.",
""
]
] |
Recent clinical studies suggest that the efficacy of hormone therapy for prostate cancer depends on the characteristics of individual patients. In this paper, we develop a computational framework for identifying patient-specific androgen ablation therapy schedules for postponing the potential cancer relapse. We model the population dynamics of heterogeneous prostate cancer cells in response to androgen suppression as a nonlinear hybrid automaton. We estimate personalized kinetic parameters to characterize patients and employ $\delta$-reachability analysis to predict patient-specific therapeutic strategies. The results show that our methods are promising and may lead to a prognostic tool for personalized cancer therapy.
|
2112.07295
|
Alexey Zharinov
|
A. I. Zharinov, Y. A. Tsybina, S. Y. Gordleeva
|
Review: CPG as a controller for biomimetic floating robots
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
All vertebrates are capable of performing various types of physical activity.
Locomotor patterns are created by the cyclical coordinated work of the skeletal
muscles. In living organisms, the organization of such a system is carried out
by networks of interconnected populations of neurons capable of forming
rhythmic activity - CPGs. As biologists have established, the CPG is a key mechanism
causing adaptive and versatile locomotion in animals. Using a system based on
biological principles as a controller for robotic devices will avoid many of
the problems that are present in standard control loops. In particular, it
becomes possible not only to plausibly model and study the peculiarities of
animal locomotion (in simple cases), but also to switch smoothly between
different types of activity depending on environmental conditions, i.e. to
create autonomous adaptive systems. In this review, a description of the
work of recent years is given, in which the CPG is used as a controller for
various robotic animals. We focused on species showing undulating movements,
i.e. swimming animals and amphibians. Each separate section deals with a certain species.
At the same time, the works are arranged in the chronological order of their
publication. At the end of each section, there is a summary that categorizes
the articles described by the type of CPG used.
|
[
{
"created": "Tue, 14 Dec 2021 11:03:40 GMT",
"version": "v1"
}
] |
2021-12-15
|
[
[
"Zharinov",
"A. I.",
""
],
[
"Tsybina",
"Y. A.",
""
],
[
"Gordleeva",
"S. Y.",
""
]
] |
All vertebrates are capable of performing various types of physical activity. Locomotor patterns are created by the cyclical coordinated work of the skeletal muscles. In living organisms, the organization of such a system is carried out by networks of interconnected populations of neurons capable of forming rhythmic activity - CPGs. As biologists have established, the CPG is a key mechanism causing adaptive and versatile locomotion in animals. Using a system based on biological principles as a controller for robotic devices will avoid many of the problems that are present in standard control loops. In particular, it becomes possible not only to plausibly model and study the peculiarities of animal locomotion (in simple cases), but also to switch smoothly between different types of activity depending on environmental conditions, i.e. to create autonomous adaptive systems. In this review, a description of the work of recent years is given, in which the CPG is used as a controller for various robotic animals. We focused on species showing undulating movements, i.e. swimming animals and amphibians. Each separate section deals with a certain species. At the same time, the works are arranged in the chronological order of their publication. At the end of each section, there is a summary that categorizes the articles described by the type of CPG used.
|
1307.7820
|
Aaron Darling
|
Balaji Venkatachalam, Dan Gusfield, and Yelena Frid
|
Faster Algorithms for RNA-folding using the Four-Russians method
|
Peer-reviewed and presented as part of the 13th Workshop on
Algorithms in Bioinformatics (WABI2013). Editor's note: abstract was
shortened to comply with arxiv requirements. Full abstract in PDF
| null | null | null |
q-bio.QM cs.CE cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The secondary structure that maximizes the number of non-crossing matchings
between complementary bases of an RNA sequence of length n can be computed in
O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians
method is a technique that will reduce the running time for certain dynamic
programming algorithms by a multiplicative factor after a preprocessing step
where solutions to all smaller subproblems of a fixed size are exhaustively
enumerated and solved. Frid and Gusfield designed an O(\frac{n^3}{\log n})
algorithm for RNA folding using the Four-Russians technique. In their algorithm
the preprocessing is interleaved with the algorithm computation. (Algo. Mol.
Biol., 2010).
We simplify the algorithm and the analysis by doing the preprocessing once
prior to the algorithm computation. We call this the two-vector method. We also
show variants where instead of exhaustive preprocessing, we only solve the
subproblems encountered in the main algorithm once and memoize the results. We
give a simple proof of correctness and explore the practical advantages over
the earlier method. The Nussinov algorithm admits an O(n^2) time parallel
algorithm. We show a parallel algorithm using the two-vector idea that improves
the time bound to O(\frac{n^2}{\log n}).
We discuss the organization of the data structures to exploit coalesced
memory access for fast running times. The ideas to organize the data structures
also help in improving the running time of the serial algorithms. For sequences
of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds
and the two-vector serial method takes about 57 seconds on a desktop and 15
seconds on a server. Among the serial algorithms, the two-vector and memoized
versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are
faster than Nussinov by up to a factor of 20.
|
[
{
"created": "Tue, 30 Jul 2013 05:13:11 GMT",
"version": "v1"
}
] |
2013-08-02
|
[
[
"Venkatachalam",
"Balaji",
""
],
[
"Gusfield",
"Dan",
""
],
[
"Frid",
"Yelena",
""
]
] |
The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that will reduce the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(\frac{n^3}{\log n}) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. (Algo. Mol. Biol., 2010). We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(\frac{n^2}{\log n}). We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20.
|
1210.6998
|
Jong-Chin Lin
|
Jong-Chin Lin and D. Thirumalai
|
Gene Regulation by Riboswitches with and without Negative Feedback Loop
|
12 pages, 8 figures
| null |
10.1016/j.bpj.2012.10.026
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Riboswitches, structured elements in the untranslated regions of messenger
RNAs, regulate gene expression by binding specific metabolites. We introduce a
kinetic network model that describes the functions of riboswitches at the
systems level. Using experimental data for flavin mono nucleotide riboswitch as
a guide we show that efficient function, implying a large dynamic range without
compromising the requirement to suppress transcription, is determined by a
balance between the transcription speed, the folding and unfolding rates of the
aptamer, and the binding rates of the metabolite. We also investigated the
effect of negative feedback accounting for binding to metabolites, which are
themselves the products of genes that are being regulated. For a range of
transcription rates negative feedback suppresses gene expression by nearly 10
fold. Negative feedback speeds the gene expression response time, and
suppresses the change of steady state protein concentration by half relative to
that without feedback, when there is a modest spike in DNA concentration. A
dynamic phase diagram expressed in terms of transcription speed, folding rates,
and metabolite binding rates predicts different scenarios in
riboswitch-mediated transcription regulation.
|
[
{
"created": "Thu, 25 Oct 2012 20:55:48 GMT",
"version": "v1"
}
] |
2015-06-11
|
[
[
"Lin",
"Jong-Chin",
""
],
[
"Thirumalai",
"D.",
""
]
] |
Riboswitches, structured elements in the untranslated regions of messenger RNAs, regulate gene expression by binding specific metabolites. We introduce a kinetic network model that describes the functions of riboswitches at the systems level. Using experimental data for the flavin mononucleotide riboswitch as a guide, we show that efficient function, implying a large dynamic range without compromising the requirement to suppress transcription, is determined by a balance between the transcription speed, the folding and unfolding rates of the aptamer, and the binding rates of the metabolite. We also investigated the effect of negative feedback accounting for binding to metabolites, which are themselves the products of genes that are being regulated. For a range of transcription rates negative feedback suppresses gene expression by nearly 10 fold. Negative feedback speeds the gene expression response time, and suppresses the change of steady state protein concentration by half relative to that without feedback, when there is a modest spike in DNA concentration. A dynamic phase diagram expressed in terms of transcription speed, folding rates, and metabolite binding rates predicts different scenarios in riboswitch-mediated transcription regulation.
|
2211.13231
|
Kishan K C
|
Kishan KC, Rui Li, Paribesh Regmi, Anne R. Haake
|
Predicting Biomedical Interactions with Probabilistic Model Selection
for Graph Neural Networks
| null | null | null | null |
q-bio.QM cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A biological system is a complex network of heterogeneous molecular entities
and their interactions contributing to various biological characteristics of
the system. However, current biological networks are noisy, sparse, and
incomplete, limiting our ability to create a holistic view of the biological
system and understand the biological phenomena. Experimental identification of
such interactions is both time-consuming and expensive. With the recent
advancements in high-throughput data generation and significant improvement in
computational power, various computational methods have been developed to
predict novel interactions in the noisy network. Recently, deep learning
methods such as graph neural networks have shown their effectiveness in
modeling graph-structured data and achieved good performance in biomedical
interaction prediction. However, graph neural network-based methods require
human expertise and experimentation to design the appropriate model complexity,
which significantly impacts the model's performance. Furthermore, deep
graph neural networks face overfitting problems and tend to be poorly
calibrated with high confidence on incorrect predictions. To address these
challenges, we propose Bayesian model selection for graph convolutional
networks to jointly infer the most plausible number of graph convolution layers
(depth) warranted by data and perform dropout regularization simultaneously.
Experiments on four interaction datasets show that our proposed method achieves
accurate and calibrated predictions. Our proposed method enables the graph
convolutional networks to dynamically adapt their depths to accommodate an
increasing number of interactions.
|
[
{
"created": "Tue, 22 Nov 2022 20:44:28 GMT",
"version": "v1"
},
{
"created": "Sat, 3 Dec 2022 20:02:35 GMT",
"version": "v2"
}
] |
2022-12-06
|
[
[
"KC",
"Kishan",
""
],
[
"Li",
"Rui",
""
],
[
"Regmi",
"Paribesh",
""
],
[
"Haake",
"Anne R.",
""
]
] |
A biological system is a complex network of heterogeneous molecular entities and their interactions contributing to various biological characteristics of the system. However, current biological networks are noisy, sparse, and incomplete, limiting our ability to create a holistic view of the biological system and understand the biological phenomena. Experimental identification of such interactions is both time-consuming and expensive. With the recent advancements in high-throughput data generation and significant improvement in computational power, various computational methods have been developed to predict novel interactions in the noisy network. Recently, deep learning methods such as graph neural networks have shown their effectiveness in modeling graph-structured data and achieved good performance in biomedical interaction prediction. However, graph neural network-based methods require human expertise and experimentation to design the appropriate model complexity, which significantly impacts the model's performance. Furthermore, deep graph neural networks face overfitting problems and tend to be poorly calibrated with high confidence on incorrect predictions. To address these challenges, we propose Bayesian model selection for graph convolutional networks to jointly infer the most plausible number of graph convolution layers (depth) warranted by data and perform dropout regularization simultaneously. Experiments on four interaction datasets show that our proposed method achieves accurate and calibrated predictions. Our proposed method enables the graph convolutional networks to dynamically adapt their depths to accommodate an increasing number of interactions.
|
2406.05347
|
Bo Chen
|
Bo Chen, Zhilei Bei, Xingyi Cheng, Pan Li, Jie Tang, Le Song
|
MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative
Pre-Training
| null | null | null | null |
q-bio.BM cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple Sequence Alignment (MSA) plays a pivotal role in unveiling the
evolutionary trajectories of protein families. The accuracy of protein
structure predictions is often compromised for protein sequences that lack
sufficient homologous information to construct high quality MSA. Although
various methods have been proposed to generate virtual MSA under these
conditions, they fall short in comprehensively capturing the intricate
coevolutionary patterns within MSA or require guidance from external oracle
models. Here we introduce MSAGPT, a novel approach to prompt protein structure
predictions via MSA generative pretraining in the low MSA regime. MSAGPT
employs a simple yet effective 2D evolutionary positional encoding scheme to
model complex evolutionary patterns. Endowed by this, its flexible 1D MSA
decoding framework facilitates zero or few shot learning. Moreover, we
demonstrate that leveraging the feedback from AlphaFold2 can further enhance
the model capacity via Rejective Fine tuning (RFT) and Reinforcement Learning
from AF2 Feedback (RLAF). Extensive experiments confirm the efficacy of MSAGPT
in generating faithful virtual MSA to enhance the structure prediction
accuracy. The transfer learning capabilities also highlight its great potential
for facilitating other protein tasks.
|
[
{
"created": "Sat, 8 Jun 2024 04:23:57 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 02:42:17 GMT",
"version": "v2"
}
] |
2024-06-12
|
[
[
"Chen",
"Bo",
""
],
[
"Bei",
"Zhilei",
""
],
[
"Cheng",
"Xingyi",
""
],
[
"Li",
"Pan",
""
],
[
"Tang",
"Jie",
""
],
[
"Song",
"Le",
""
]
] |
Multiple Sequence Alignment (MSA) plays a pivotal role in unveiling the evolutionary trajectories of protein families. The accuracy of protein structure predictions is often compromised for protein sequences that lack sufficient homologous information to construct high quality MSA. Although various methods have been proposed to generate virtual MSA under these conditions, they fall short in comprehensively capturing the intricate coevolutionary patterns within MSA or require guidance from external oracle models. Here we introduce MSAGPT, a novel approach to prompt protein structure predictions via MSA generative pretraining in the low MSA regime. MSAGPT employs a simple yet effective 2D evolutionary positional encoding scheme to model complex evolutionary patterns. Endowed by this, its flexible 1D MSA decoding framework facilitates zero or few shot learning. Moreover, we demonstrate that leveraging the feedback from AlphaFold2 can further enhance the model capacity via Rejective Fine tuning (RFT) and Reinforcement Learning from AF2 Feedback (RLAF). Extensive experiments confirm the efficacy of MSAGPT in generating faithful virtual MSA to enhance the structure prediction accuracy. The transfer learning capabilities also highlight its great potential for facilitating other protein tasks.
|
2206.14566
|
Rong Ma
|
Tal Einav and Rong Ma
|
Using Interpretable Machine Learning to Massively Increase the Number of
Antibody-Virus Interactions Across Studies
| null |
Cell Reports Methods, 2023
|
10.1016/j.crmeth.2023.100540
| null |
q-bio.QM cs.LG q-bio.BM q-bio.PE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A central challenge in every field of biology is to use existing measurements
to predict the outcomes of future experiments. In this work, we consider the
wealth of antibody inhibition data against variants of the influenza virus. Due
to this virus's genetic diversity and evolvability, the variants examined in one
study will often have little-to-no overlap with other studies, making it
difficult to discern common patterns or unify datasets for further analysis. To
that end, we develop a computational framework that predicts how an antibody or
serum would inhibit any variant from any other study. We use this framework to
greatly expand seven influenza datasets utilizing hemagglutination inhibition,
validating our method upon 200,000 existing measurements and predicting
2,000,000 new values along with their uncertainties. With these new values, we
quantify the transferability between seven vaccination and infection studies in
humans and ferrets, show that the serum potency is negatively correlated with
breadth, and present a tool for pandemic preparedness. This data-driven
approach does not require any information beyond each virus's name and
measurements, and even datasets with as few as 5 viruses can be expanded,
making this approach widely applicable. Future influenza studies using
hemagglutination inhibition can directly utilize our curated datasets to
predict newly measured antibody responses against ~80 H3N2 influenza viruses
from 1968-2011, whereas immunological studies utilizing other viruses or a
different assay only need a single partially-overlapping dataset to extend
their work. In essence, this approach enables a shift in perspective when
analyzing data from "what you see is what you get" into "what anyone sees is
what everyone gets."
|
[
{
"created": "Fri, 10 Jun 2022 20:14:01 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Oct 2022 12:35:32 GMT",
"version": "v2"
}
] |
2023-07-27
|
[
[
"Einav",
"Tal",
""
],
[
"Ma",
"Rong",
""
]
] |
A central challenge in every field of biology is to use existing measurements to predict the outcomes of future experiments. In this work, we consider the wealth of antibody inhibition data against variants of the influenza virus. Due to this virus's genetic diversity and evolvability, the variants examined in one study will often have little-to-no overlap with other studies, making it difficult to discern common patterns or unify datasets for further analysis. To that end, we develop a computational framework that predicts how an antibody or serum would inhibit any variant from any other study. We use this framework to greatly expand seven influenza datasets utilizing hemagglutination inhibition, validating our method upon 200,000 existing measurements and predicting 2,000,000 new values along with their uncertainties. With these new values, we quantify the transferability between seven vaccination and infection studies in humans and ferrets, show that the serum potency is negatively correlated with breadth, and present a tool for pandemic preparedness. This data-driven approach does not require any information beyond each virus's name and measurements, and even datasets with as few as 5 viruses can be expanded, making this approach widely applicable. Future influenza studies using hemagglutination inhibition can directly utilize our curated datasets to predict newly measured antibody responses against ~80 H3N2 influenza viruses from 1968-2011, whereas immunological studies utilizing other viruses or a different assay only need a single partially-overlapping dataset to extend their work. In essence, this approach enables a shift in perspective when analyzing data from "what you see is what you get" into "what anyone sees is what everyone gets."
|
1512.04982
|
Pascal Fieth
|
Pascal Fieth and Alexander K. Hartmann
|
Score distributions of gapped multiple sequence alignments down to the
low-probability tail
| null |
Phys. Rev. E 94, 022127 (2016)
|
10.1103/PhysRevE.94.022127
| null |
q-bio.QM cond-mat.dis-nn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assessing the significance of alignment scores of optimally aligned DNA or
amino acid sequences can be achieved via the knowledge of the score
distribution of random sequences. But this requires obtaining the distribution
in the biologically relevant high-scoring region, where the probabilities are
exponentially small. For gapless local alignments of infinitely long sequences
this distribution is known analytically to follow a Gumbel distribution.
Distributions for gapped local alignments and global alignments of finite
lengths can only be obtained numerically. To obtain results for the
small-probability region, specific statistical mechanics-based rare-event
algorithms can be applied. In previous studies, this was achieved for pairwise
alignments. They showed that, contrary to results from previous simple sampling
studies, strong deviations from the Gumbel distribution occur in the case of
finite sequence lengths. Here we extend these studies to multiple sequence
alignments with gaps, a case much more relevant for practical applications in
molecular biology. We study the distributions of scores over a large range of the
support, reaching probabilities as small as 10^-160, for global and local
(sum-of-pair scores) multiple alignments. We find that even after suitable
rescaling, eliminating the sequence-length dependence, the distributions for
multiple alignments differ from the pairwise alignment case. Furthermore, we
also show that the previously discussed Gaussian correction to the Gumbel
distribution needs to be refined, also for the case of pairwise alignments.
|
[
{
"created": "Mon, 7 Dec 2015 16:22:15 GMT",
"version": "v1"
}
] |
2016-08-24
|
[
[
"Fieth",
"Pascal",
""
],
[
"Hartmann",
"Alexander K.",
""
]
] |
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via the knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend these studies to multiple sequence alignments with gaps, a case much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignments differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
|
2004.09644
|
Carlos Pajares
|
J. E. Ram\'irez and C. Pajares and M. I. Mart\'inez and R. Rodr\'iguez
Fern\'andez and E. Molina-Gayosso and J. Lozada-Lechuga and A. Fern\'andez
T\'ellez
|
Site-bond percolation solution to preventing the propagation of
\textit{Phytophthora} zoospores on plantations
|
13 pages, 7 figures
|
Phys. Rev. E, 101, 032301, 2020
|
10.1103/PhysRevE.101.032301
| null |
q-bio.PE cond-mat.stat-mech physics.soc-ph
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We propose a strategy based on site-bond percolation to minimize the
propagation of \textit{Phytophthora} zoospores on plantations, consisting of
introducing physical barriers between neighboring plants. Two clustering
processes are distinguished: i) one of cells with the presence of the pathogen,
detected on soil analysis; and ii) that of diseased plants, revealed from a
visual inspection of the plantation. The former is well described by the
standard site-bond percolation. In the latter, the percolation threshold is
fitted by a Tsallis distribution when no barriers are introduced. We provide,
for both cases, the formulae for the minimal barrier density to prevent the
emergence of the spanning cluster. Though this work is focused on a specific
pathogen, the model presented here can also be applied to prevent the spreading
of other pathogens that disseminate, by other means, from one plant to the
neighboring ones. Finally, the application of this strategy to three types of
commercially important Mexican chili plants is also shown.
|
[
{
"created": "Mon, 20 Apr 2020 21:27:15 GMT",
"version": "v1"
}
] |
2020-04-22
|
[
[
"Ramírez",
"J. E.",
""
],
[
"Pajares",
"C.",
""
],
[
"Martínez",
"M. I.",
""
],
[
"Fernández",
"R. Rodríguez",
""
],
[
"Molina-Gayosso",
"E.",
""
],
[
"Lozada-Lechuga",
"J.",
""
],
[
"Téllez",
"A. Fernández",
""
]
] |
We propose a strategy based on site-bond percolation to minimize the propagation of \textit{Phytophthora} zoospores on plantations, consisting of introducing physical barriers between neighboring plants. Two clustering processes are distinguished: i) one of cells with the presence of the pathogen, detected on soil analysis; and ii) that of diseased plants, revealed from a visual inspection of the plantation. The former is well described by the standard site-bond percolation. In the latter, the percolation threshold is fitted by a Tsallis distribution when no barriers are introduced. We provide, for both cases, the formulae for the minimal barrier density to prevent the emergence of the spanning cluster. Though this work is focused on a specific pathogen, the model presented here can also be applied to prevent the spreading of other pathogens that disseminate, by other means, from one plant to the neighboring ones. Finally, the application of this strategy to three types of commercially important Mexican chili plants is also shown.
|
1311.4384
|
Wan-Chung Hu Dr.
|
Wan-Chung Hu
|
Acute Respiratory Distress Syndrome is a TH17-like and Treg immune
disease
| null | null | null | null |
q-bio.TO q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Acute Respiratory Distress Syndrome (ARDS) is a very severe syndrome leading
to respiratory failure and subsequent mortality. Sepsis is one of the leading
causes of ARDS. Thus, extracellular bacteria play an important role in the
pathophysiology of ARDS. Overactivated neutrophils are the major effector cells
in ARDS. Thus, extracellular bacteria-triggered TH17-like innate immunity with
neutrophil activation might account for the etiology of ARDS. Here, microarray
analysis was employed to describe TH17-like innate immunity-related cytokines
including TGF-{\beta} and IL-6 up-regulation in whole blood of ARDS patients.
It was found that the innate TH17-related TLR1,2,4,5,8, HSP70, G-CSF, GM-CSF,
complements, defensin, PMN chemokines, cathepsins, Fc receptors, NCFs, FOS,
JunB, CEBPs, NFkB, and leukotriene B4 are all up-regulated. TGF-{\beta}
secreting Treg cells play important roles in lung fibrosis. Up-regulation of
Treg associated STAT5B and TGF-{\beta} with down-regulation of MHC genes, TCR
genes, and co-stimulation molecule CD86 are noted. Key TH17 transcription
factors, STAT3 and ROR{\alpha}, are down-regulated. Thus, the full adaptive
TH17 helper CD4 T cells may not be successfully triggered. Many fibrosis
promoting genes are also up-regulated including MMP8, MMP9, FGF13, TIMP1,
TIMP2, PLOD1, P4HB, P4HA1, PDGFC, HMMR, HS2ST1, CHSY1, and CSGALNACT. Failure
to induce successful adaptive immunity could also contribute to ARDS
pathogenesis. Thus, ARDS is actually a TH17-like and Treg immune disorder.
|
[
{
"created": "Mon, 18 Nov 2013 14:11:05 GMT",
"version": "v1"
}
] |
2013-11-19
|
[
[
"Hu",
"Wan-Chung",
""
]
] |
Acute Respiratory Distress Syndrome (ARDS) is a very severe syndrome leading to respiratory failure and subsequent mortality. Sepsis is one of the leading causes of ARDS. Thus, extracellular bacteria play an important role in the pathophysiology of ARDS. Overactivated neutrophils are the major effector cells in ARDS. Thus, extracellular bacteria-triggered TH17-like innate immunity with neutrophil activation might account for the etiology of ARDS. Here, microarray analysis was employed to describe TH17-like innate immunity-related cytokines including TGF-{\beta} and IL-6 up-regulation in whole blood of ARDS patients. It was found that the innate TH17-related TLR1,2,4,5,8, HSP70, G-CSF, GM-CSF, complements, defensin, PMN chemokines, cathepsins, Fc receptors, NCFs, FOS, JunB, CEBPs, NFkB, and leukotriene B4 are all up-regulated. TGF-{\beta} secreting Treg cells play important roles in lung fibrosis. Up-regulation of Treg associated STAT5B and TGF-{\beta} with down-regulation of MHC genes, TCR genes, and co-stimulation molecule CD86 are noted. Key TH17 transcription factors, STAT3 and ROR{\alpha}, are down-regulated. Thus, the full adaptive TH17 helper CD4 T cells may not be successfully triggered. Many fibrosis promoting genes are also up-regulated including MMP8, MMP9, FGF13, TIMP1, TIMP2, PLOD1, P4HB, P4HA1, PDGFC, HMMR, HS2ST1, CHSY1, and CSGALNACT. Failure to induce successful adaptive immunity could also contribute to ARDS pathogenesis. Thus, ARDS is actually a TH17-like and Treg immune disorder.
|
2012.04968
|
Luis Aniello La Rocca
|
Luis A. La Rocca, Julia Frank, Heidi Beate Bentzen, Jean-Tori Pantel,
Konrad Gerischer, Anton Bovier and Peter M. Krawitz
|
A lower prevalence for recessive disorders in a random mating population
is a transient phenomenon during and after a growth phase
|
A video clip of our simulations can be found at
https://youtu.be/5hOgLyRqWPg
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite increasing data from population-wide sequencing studies, the risk for
recessive disorders in consanguineous partnerships is still heavily debated. An
important aspect that has not been sufficiently investigated theoretically is
the influence of inbreeding on mutation load and incidence rates when the
population sizes change. We therefore developed a model to study these dynamics
for a wide range of growth and mating conditions. In the phase of population
expansion and shortly afterwards, our simulations show that there is a drop in
diseased individuals at the expense of an increasing mutation load for random
mating, while both parameters remain almost constant in highly consanguineous
partnerships. This explains the empirical observation in present times that a
high degree of consanguinity is associated with an increased risk of autosomal
recessive disorders. However, it also implies that the higher frequency of
severe recessive disorders with developmental delay in inbred populations is a
transient phenomenon before a mutation-selection balance is reached again.
|
[
{
"created": "Wed, 9 Dec 2020 10:45:25 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Dec 2020 12:49:48 GMT",
"version": "v2"
}
] |
2020-12-23
|
[
[
"La Rocca",
"Luis A.",
""
],
[
"Frank",
"Julia",
""
],
[
"Bentzen",
"Heidi Beate",
""
],
[
"Pantel",
"Jean-Tori",
""
],
[
"Gerischer",
"Konrad",
""
],
[
"Bovier",
"Anton",
""
],
[
"Krawitz",
"Peter M.",
""
]
] |
Despite increasing data from population-wide sequencing studies, the risk for recessive disorders in consanguineous partnerships is still heavily debated. An important aspect that has not been sufficiently investigated theoretically is the influence of inbreeding on mutation load and incidence rates when the population sizes change. We therefore developed a model to study these dynamics for a wide range of growth and mating conditions. In the phase of population expansion and shortly afterwards, our simulations show that there is a drop in diseased individuals at the expense of an increasing mutation load for random mating, while both parameters remain almost constant in highly consanguineous partnerships. This explains the empirical observation in present times that a high degree of consanguinity is associated with an increased risk of autosomal recessive disorders. However, it also implies that the higher frequency of severe recessive disorders with developmental delay in inbred populations is a transient phenomenon before a mutation-selection balance is reached again.
|
1304.7905
|
Sergej Mironov
|
S. L. Mironov
|
Theory and experiment reveal unexpected calcium profiles in
one-dimensional systems
| null | null | null | null |
q-bio.SC q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Calcium is a ubiquitous second messenger that triggers a plethora of key
physiological responses. The events are initiated in micro- or nano-sized
compartments and determined by the complex interactions with calcium-binding
proteins and mechanisms of calcium clearance. Local calcium increases in the
vicinity of single channels represent an essentially non-linear
reaction-diffusion problem that has been analysed previously using various
linearized approximations. I revisited the problem of stationary patterns that
can be generated by the point calcium source in the presence of buffer and
obtained new explicit solutions. Main results of the analysis of the calcium
buffering are supplemented with pertinent derivations and discussion of
respective mathematical problems in Appendices. I show that for small calcium
influx the calcium gradients established around the channel lumen have a
quasi-exponential form. For bigger fluxes, when the buffer is saturated, the
model predicts periodic patterns. The transition between the two regimes depends
on the capacity of the buffer and its mobility. Theoretical predictions were
examined using a model one-dimensional system. For sufficiently large fluxes,
oscillatory calcium patterns were observed. Theoretical and experimental
results are discussed in terms of their possible physiological implications.
|
[
{
"created": "Tue, 30 Apr 2013 07:51:19 GMT",
"version": "v1"
}
] |
2013-05-01
|
[
[
"Mironov",
"S. L.",
""
]
] |
Calcium is a ubiquitous second messenger that triggers a plethora of key physiological responses. The events are initiated in micro- or nano-sized compartments and determined by the complex interactions with calcium-binding proteins and mechanisms of calcium clearance. Local calcium increases in the vicinity of single channels represent an essentially non-linear reaction-diffusion problem that has been analysed previously using various linearized approximations. I revisited the problem of stationary patterns that can be generated by the point calcium source in the presence of buffer and obtained new explicit solutions. Main results of the analysis of the calcium buffering are supplemented with pertinent derivations and discussion of respective mathematical problems in Appendices. I show that for small calcium influx the calcium gradients established around the channel lumen have a quasi-exponential form. For bigger fluxes, when the buffer is saturated, the model predicts periodic patterns. The transition between the two regimes depends on the capacity of the buffer and its mobility. Theoretical predictions were examined using a model one-dimensional system. For sufficiently large fluxes, oscillatory calcium patterns were observed. Theoretical and experimental results are discussed in terms of their possible physiological implications.
|
1401.4832
|
Magnus Ekeberg
|
Magnus Ekeberg, Tuomo Hartonen, Erik Aurell
|
Fast pseudolikelihood maximization for direct-coupling analysis of
protein structure from many homologous amino-acid sequences
|
33 pages, 4 figures; M. Ekeberg and T. Hartonen are joint first
authors; code and supplementary information on http://plmdca.csc.kth.se/
|
Journal of Computational Physics 276 (2014) 341-356
|
10.1016/j.jcp.2014.07.024
| null |
q-bio.QM physics.comp-ph physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct-Coupling Analysis is a group of methods to harvest information about
coevolving residues in a protein family by learning a generative model in an
exponential family from data. In protein families of realistic size, this
learning can only be done approximately, and there is a trade-off between
inference precision and computational speed. We here show that an earlier
introduced $l_2$-regularized pseudolikelihood maximization method called plmDCA
can be modified so as to be easily parallelizable, as well as inherently faster on
a single processor, at negligible difference in accuracy. We test the new
incarnation of the method on 148 protein families from the Protein Families
database (PFAM), one of the largest tests of this class of algorithms to date.
|
[
{
"created": "Mon, 20 Jan 2014 09:15:01 GMT",
"version": "v1"
}
] |
2014-09-16
|
[
[
"Ekeberg",
"Magnus",
""
],
[
"Hartonen",
"Tuomo",
""
],
[
"Aurell",
"Erik",
""
]
] |
Direct-Coupling Analysis is a group of methods to harvest information about coevolving residues in a protein family by learning a generative model in an exponential family from data. In protein families of realistic size, this learning can only be done approximately, and there is a trade-off between inference precision and computational speed. We here show that an earlier introduced $l_2$-regularized pseudolikelihood maximization method called plmDCA can be modified so as to be easily parallelizable, as well as inherently faster on a single processor, at negligible difference in accuracy. We test the new incarnation of the method on 148 protein families from the Protein Families database (PFAM), one of the largest tests of this class of algorithms to date.
|
2004.11244
|
Aurelie Nakamura
|
Aurelie Nakamura (iPLESP), Fabienne El-Khoury (iPLESP), Anne-Laure
Sutter-Dallay (BPH), Jeanna-Eve Franck (iPLESP), Xavier Thierry (ELFE), Maria
Melchior (iPLESP), Judith van der Waerden (iPLESP)
|
Partner support during pregnancy mediates social inequalities in
maternal postpartum depression for non-migrant and first generation migrant
women
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background An advantaged socioeconomic position (SEP) and satisfying social
support during pregnancy (SSP) have been found to be protective factors of
maternal postpartum depression (PPD). An advantaged SEP is also associated with
satisfying SSP, making SSP a potential mediator of social inequalities in PPD.
SEP, SSP and PPD are associated with migrant status. The aim of this study was
to quantify the mediating role of SSP in social inequalities in PPD regarding
mother's migrant status. Methods A sub-sample of 15,000 mothers from the French
nationally-representative ELFE cohort study was used for the present analyses.
SEP was constructed as a latent variable measured with educational attainment,
occupational grade, employment, financial difficulties and household income.
SSP was characterized as perceived support from partner (good relation,
satisfying support and paternal leave) and actual support from midwives
(psychosocial risk factors assessment and antenatal education). Mediation
analyses with multiple mediators, stratified by migrant status were conducted.
Results Study population included 76% of non-migrant women, 12% of second and
12% of first generation migrants. SEP was positively associated with support
from partner, regardless of migrant status. Satisfying partner support was
associated with an 8% (non-migrant women) to 11% (first generation migrant women)
reduction in PPD score. Limitations History of depression was not
reported. Conclusions Partner support could reduce social inequalities in PPD.
This work supports the need of interventions, longitudinal and qualitative
studies including fathers and adapted to women at risk of PPD to better
understand the role of SSP in social inequalities in PPD. Keywords social
support, postpartum depression, epidemiology, social inequalities, pregnancy,
mediation analysis
|
[
{
"created": "Wed, 22 Apr 2020 08:20:17 GMT",
"version": "v1"
}
] |
2020-04-24
|
[
[
"Nakamura",
"Aurelie",
"",
"iPLESP"
],
[
"El-Khoury",
"Fabienne",
"",
"iPLESP"
],
[
"Sutter-Dallay",
"Anne-Laure",
"",
"BPH"
],
[
"Franck",
"Jeanna-Eve",
"",
"iPLESP"
],
[
"Thierry",
"Xavier",
"",
"ELFE"
],
[
"Melchior",
"Maria",
"",
"iPLESP"
],
[
"van der Waerden",
"Judith",
"",
"iPLESP"
]
] |
Background An advantaged socioeconomic position (SEP) and satisfying social support during pregnancy (SSP) have been found to be protective factors of maternal postpartum depression (PPD). An advantaged SEP is also associated with satisfying SSP, making SSP a potential mediator of social inequalities in PPD. SEP, SSP and PPD are associated with migrant status. The aim of this study was to quantify the mediating role of SSP in social inequalities in PPD regarding mother's migrant status. Methods A sub-sample of 15,000 mothers from the French nationally-representative ELFE cohort study was used for the present analyses. SEP was constructed as a latent variable measured with educational attainment, occupational grade, employment, financial difficulties and household income. SSP was characterized as perceived support from partner (good relation, satisfying support and paternal leave) and actual support from midwives (psychosocial risk factors assessment and antenatal education). Mediation analyses with multiple mediators, stratified by migrant status were conducted. Results Study population included 76% of non-migrant women, 12% of second and 12% of first generation migrants. SEP was positively associated with support from partner, regardless of migrant status. Satisfying partner support was associated with an 8% (non-migrant women) to 11% (first generation migrant women) reduction in PPD score. Limitations History of depression was not reported. Conclusions Partner support could reduce social inequalities in PPD. This work supports the need of interventions, longitudinal and qualitative studies including fathers and adapted to women at risk of PPD to better understand the role of SSP in social inequalities in PPD. Keywords social support, postpartum depression, epidemiology, social inequalities, pregnancy, mediation analysis
|
0802.4010
|
Marcus Kaiser
|
Marcus Kaiser
|
Brain architecture: A design for natural computation
| null |
Philosophical Transactions of The Royal Society A, 365: 3033-3045,
2007
|
10.1098/rsta.2007.0007
| null |
q-bio.NC cs.AI cs.NE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fifty years ago, John von Neumann compared the architecture of the brain with
that of the computers he invented, an architecture still in use today. In those
days, the organisation of computers was based on concepts of brain
organisation. Here, we give an update on current results on the global
organisation of neural systems. For neural systems, we outline how the spatial
and topological architecture of neuronal and cortical networks facilitates
robustness against failures, fast processing, and balanced network activation.
Finally, we discuss mechanisms of self-organization for such architectures.
After all, the organization of the brain might again inspire computer
architecture.
|
[
{
"created": "Wed, 27 Feb 2008 13:00:38 GMT",
"version": "v1"
}
] |
2008-02-28
|
[
[
"Kaiser",
"Marcus",
""
]
] |
Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented, an architecture still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.
|
1804.04839
|
Emanuel Weitschek
|
Fabrizio Celli, Fabio Cumbo, and Emanuel Weitschek
|
Classification of large DNA methylation datasets for identifying cancer
drivers
| null |
F. Celli, F. Cumbo, E. Weitschek: Classification of Large DNA
Methylation Datasets for Identifying Cancer Drivers. Big Data Research,
10.1016/j.bdr.2018.02.005, 2018
|
10.1016/j.bdr.2018.02.005
| null |
q-bio.GN cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DNA methylation is a well-studied genetic modification crucial to regulate
the functioning of the genome. Its alterations play an important role in
tumorigenesis and tumor-suppression. Thus, studying DNA methylation data may
help biomarker discovery in cancer. Since public data on DNA methylation have become
abundant, and considering the high number of methylated sites (features)
present in the genome, it is important to have a method for efficiently
processing such large datasets. Relying on big data technologies, we propose
BIGBIOCL, an algorithm that can apply supervised classification methods to
datasets with hundreds of thousands of features. It is designed for the
extraction of alternative and equivalent classification models through
iterative deletion of selected features. We run experiments on DNA methylation
datasets extracted from The Cancer Genome Atlas, focusing on three tumor types:
breast, kidney, and thyroid carcinomas. We perform classifications extracting
several methylated sites and their associated genes with accurate performance.
Results suggest that BIGBIOCL can perform hundreds of classification iterations
on hundreds of thousands of features in a few hours. Moreover, we compare the
performance of our method with other state-of-the-art classifiers and with a
widespread DNA methylation analysis method based on network analysis. Finally,
we are able to efficiently compute multiple alternative classification models
and extract, from DNA-methylation large datasets, a set of candidate genes to
be further investigated to determine their active role in cancer. BIGBIOCL,
results of experiments, and a guide to carry on new experiments are freely
available on GitHub.
|
[
{
"created": "Fri, 13 Apr 2018 08:53:39 GMT",
"version": "v1"
}
] |
2018-04-16
|
[
[
"Celli",
"Fabrizio",
""
],
[
"Cumbo",
"Fabio",
""
],
[
"Weitschek",
"Emanuel",
""
]
] |
DNA methylation is a well-studied genetic modification crucial to regulate the functioning of the genome. Its alterations play an important role in tumorigenesis and tumor-suppression. Thus, studying DNA methylation data may help biomarker discovery in cancer. Since public data on DNA methylation have become abundant, and considering the high number of methylated sites (features) present in the genome, it is important to have a method for efficiently processing such large datasets. Relying on big data technologies, we propose BIGBIOCL, an algorithm that can apply supervised classification methods to datasets with hundreds of thousands of features. It is designed for the extraction of alternative and equivalent classification models through iterative deletion of selected features. We run experiments on DNA methylation datasets extracted from The Cancer Genome Atlas, focusing on three tumor types: breast, kidney, and thyroid carcinomas. We perform classifications extracting several methylated sites and their associated genes with accurate performance. Results suggest that BIGBIOCL can perform hundreds of classification iterations on hundreds of thousands of features in a few hours. Moreover, we compare the performance of our method with other state-of-the-art classifiers and with a widespread DNA methylation analysis method based on network analysis. Finally, we are able to efficiently compute multiple alternative classification models and extract, from DNA-methylation large datasets, a set of candidate genes to be further investigated to determine their active role in cancer. BIGBIOCL, results of experiments, and a guide to carry on new experiments are freely available on GitHub.
|
2206.06096
|
Vadim Weinstein
|
Vadim Weinstein, Basak Sakcak, Steven M. LaValle
|
An Enactivist-Inspired Mathematical Model of Cognition
| null | null | null | null |
q-bio.NC cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We formulate five basic tenets of enactivist cognitive science that we have
carefully identified in the relevant literature as the main underlying
principles of that philosophy. We then develop a mathematical framework to talk
about cognitive systems (both artificial and natural) which complies with these
enactivist tenets. In particular, we take care that our mathematical
modeling does not attribute contentful symbolic representations to the agents,
and that the agent's brain, body and environment are modeled in a way that
makes them an inseparable part of a greater totality. The purpose is to create
a mathematical foundation for cognition which is in line with enactivism. We
see two main benefits of doing so: (1) It enables enactivist ideas to be more
accessible for computer scientists, AI researchers, roboticists, cognitive
scientists, and psychologists, and (2) it gives the philosophers a mathematical
tool which can be used to clarify their notions and help with their debates.
Our main notion is that of a sensorimotor system which is a special case of a
well-studied notion of a transition system. We also consider related notions
such as labeled transition systems and deterministic automata. We analyze a
notion called sufficiency and show that it is a very good candidate for a
foundational notion in the "mathematics of cognition from an enactivist
perspective". We demonstrate its importance by proving a uniqueness theorem
about the minimal sufficient refinements (which correspond in some sense to an
optimal attunement of an organism to its environment) and by showing that
sufficiency corresponds to known notions such as sufficient history information
spaces. We then develop other related notions such as degree of insufficiency,
universal covers, hierarchies, and strategic sufficiency. In the end, we tie it all
back to the enactivist tenets.
|
[
{
"created": "Fri, 10 Jun 2022 13:03:47 GMT",
"version": "v1"
}
] |
2022-06-14
|
[
[
"Weinstein",
"Vadim",
""
],
[
"Sakcak",
"Basak",
""
],
[
"LaValle",
"Steven M.",
""
]
] |
We formulate five basic tenets of enactivist cognitive science that we have carefully identified in the relevant literature as the main underlying principles of that philosophy. We then develop a mathematical framework to talk about cognitive systems (both artificial and natural) which complies with these enactivist tenets. In particular, we take care that our mathematical modeling does not attribute contentful symbolic representations to the agents, and that the agent's brain, body and environment are modeled in a way that makes them an inseparable part of a greater totality. The purpose is to create a mathematical foundation for cognition which is in line with enactivism. We see two main benefits of doing so: (1) It enables enactivist ideas to be more accessible for computer scientists, AI researchers, roboticists, cognitive scientists, and psychologists, and (2) it gives the philosophers a mathematical tool which can be used to clarify their notions and help with their debates. Our main notion is that of a sensorimotor system which is a special case of a well-studied notion of a transition system. We also consider related notions such as labeled transition systems and deterministic automata. We analyze a notion called sufficiency and show that it is a very good candidate for a foundational notion in the "mathematics of cognition from an enactivist perspective". We demonstrate its importance by proving a uniqueness theorem about the minimal sufficient refinements (which correspond in some sense to an optimal attunement of an organism to its environment) and by showing that sufficiency corresponds to known notions such as sufficient history information spaces. We then develop other related notions such as degree of insufficiency, universal covers, hierarchies, and strategic sufficiency. In the end, we tie it all back to the enactivist tenets.
|
1412.2622
|
Alexander K. Vidybida
|
Alexander Vidybida
|
Output stream of binding neuron with delayed feedback
|
10 pages, 5 figures, 14-th International Congress of Cybernetics and
Systems of WOSC, Wroclaw, September 9-12, 2008, Proceedings, pages 292-302,
Oficyna Wydawnicza Politechniki Wroclawskiej, ISBN 978-83-7493-400-8
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A binding neuron (BN) with delayed feedback is considered. The neuron is fed
externally with a Poisson stream of intensity $\lambda$. The neuron's output
spikes are fed into its input with time delay $\Delta$. The resulting output
stream of the BN is not Poissonian, and we look for its interspike intervals
(ISI) distribution. For BN with threshold 2 an exact mathematical expression as
function of $\lambda$, $\Delta$ and BN's internal memory, $\tau$ is derived for
the ISI distribution, and for higher thresholds it is found numerically. The
distributions found are characterized with discontinuities of jump type, and
include singularity of Dirac's $\delta$-function type. It is concluded that
delayed feedback presence can radically alter neuronal output firing
statistics.
|
[
{
"created": "Mon, 8 Dec 2014 15:49:58 GMT",
"version": "v1"
}
] |
2014-12-09
|
[
[
"Vidybida",
"Alexander",
""
]
] |
A binding neuron (BN) with delayed feedback is considered. The neuron is fed externally with a Poisson stream of intensity $\lambda$. The neuron's output spikes are fed into its input with time delay $\Delta$. The resulting output stream of the BN is not Poissonian, and we look for its interspike intervals (ISI) distribution. For BN with threshold 2 an exact mathematical expression as function of $\lambda$, $\Delta$ and BN's internal memory, $\tau$ is derived for the ISI distribution, and for higher thresholds it is found numerically. The distributions found are characterized with discontinuities of jump type, and include singularity of Dirac's $\delta$-function type. It is concluded that delayed feedback presence can radically alter neuronal output firing statistics.
|
2207.07714
|
Ver\'onica Moreno Vero
|
Gabriel Pena, Ver\'onica Moreno, Nestor Barraza
|
Measuring COVID-19 spreading speed through the mean time between
infections indicator
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose to use the mean time between infections (MTBI) metric as obtained
from a recently introduced non-homogeneous Markov stochastic model. Different
types of parameter calibration are performed. We estimate the MTBI using data
from different time windows and from the whole stage history and compare the
results. In order to detect waves and stages in the input data, a preprocessing
filtering technique is applied. The results of applying this indicator to the
COVID-19 reported data of infections from Argentina, Germany and the United
States are shown. We find that the MTBI behaves similarly with respect to the
different data inputs, whereas the model parameters completely change their
behaviour. Evolution over time of the parameters and the MTBI indicator is also
shown. We show evidence to support the claim that the MTBI is a rather good
indicator in order to measure the spreading speed of an epidemic, having
similar values whatever the input data size.
|
[
{
"created": "Fri, 15 Jul 2022 19:12:53 GMT",
"version": "v1"
}
] |
2022-07-19
|
[
[
"Pena",
"Gabriel",
""
],
[
"Moreno",
"Verónica",
""
],
[
"Barraza",
"Nestor",
""
]
] |
We propose to use the mean time between infections (MTBI) metric as obtained from a recently introduced non-homogeneous Markov stochastic model. Different types of parameter calibration are performed. We estimate the MTBI using data from different time windows and from the whole stage history and compare the results. In order to detect waves and stages in the input data, a preprocessing filtering technique is applied. The results of applying this indicator to the COVID-19 reported data of infections from Argentina, Germany and the United States are shown. We find that the MTBI behaves similarly with respect to the different data inputs, whereas the model parameters completely change their behaviour. Evolution over time of the parameters and the MTBI indicator is also shown. We show evidence to support the claim that the MTBI is a rather good indicator in order to measure the spreading speed of an epidemic, having similar values whatever the input data size.
|