| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1511.07222 | Sarah Beul | Sarah F. Beul, Helen Barbas, Claus C. Hilgetag | A predictive structural model of the primate connectome | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anatomical connectivity imposes strong constraints on brain function, but there is no general agreement about the principles that govern its organization. Based on extensive quantitative data, we tested the power of three models to predict connections of the primate cerebral cortex: architectonic similarity (structural model), spatial proximity (distance model) and thickness similarity (thickness model). Architectonic similarity showed the strongest and most consistent influence on connection features. This parameter was strongly associated with the presence or absence of inter-areal connections, and when integrated with spatial distance, the model allowed predicting the existence of projections with very high accuracy. Moreover, architectonic similarity was strongly related to the laminar pattern of projection origins and the absolute number of cortical connections of an area. By contrast, cortical thickness similarity and distance were not systematically related to connection features. These findings suggest that cortical architecture provides a general organizing principle for connections in the primate brain. | [{"created": "Mon, 23 Nov 2015 13:48:23 GMT", "version": "v1"}, {"created": "Mon, 30 May 2016 12:03:05 GMT", "version": "v2"}] | 2016-05-31 | [["Beul", "Sarah F.", ""], ["Barbas", "Helen", ""], ["Hilgetag", "Claus C.", ""]] | Anatomical connectivity imposes strong constraints on brain function, but there is no general agreement about the principles that govern its organization. Based on extensive quantitative data, we tested the power of three models to predict connections of the primate cerebral cortex: architectonic similarity (structural model), spatial proximity (distance model) and thickness similarity (thickness model). Architectonic similarity showed the strongest and most consistent influence on connection features. This parameter was strongly associated with the presence or absence of inter-areal connections, and when integrated with spatial distance, the model allowed predicting the existence of projections with very high accuracy. Moreover, architectonic similarity was strongly related to the laminar pattern of projection origins and the absolute number of cortical connections of an area. By contrast, cortical thickness similarity and distance were not systematically related to connection features. These findings suggest that cortical architecture provides a general organizing principle for connections in the primate brain. |
| 2209.01034 | Alireza Modirshanechi | Alireza Modirshanechi, Johanni Brea, Wulfram Gerstner | A taxonomy of surprise definitions | To appear in the Journal of Mathematical Psychology | Journal of Mathematical Psychology Volume 110, September 2022, 102712 | 10.1016/j.jmp.2022.102712 | null | q-bio.NC stat.ML | http://creativecommons.org/licenses/by/4.0/ | Surprising events trigger measurable brain activity and influence human behavior by affecting learning, memory, and decision-making. Currently there is, however, no consensus on the definition of surprise. Here we identify 18 mathematical definitions of surprise in a unifying framework. We first propose a technical classification of these definitions into three groups based on their dependence on an agent's belief, show how they relate to each other, and prove under what conditions they are indistinguishable. Going beyond this technical analysis, we propose a taxonomy of surprise definitions and classify them into four conceptual categories based on the quantity they measure: (i) 'prediction surprise' measures a mismatch between a prediction and an observation; (ii) 'change-point detection surprise' measures the probability of a change in the environment; (iii) 'confidence-corrected surprise' explicitly accounts for the effect of confidence; and (iv) 'information gain surprise' measures the belief-update upon a new observation. The taxonomy lays the foundation for principled studies of the functional roles and physiological signatures of surprise in the brain. | [{"created": "Fri, 2 Sep 2022 13:07:15 GMT", "version": "v1"}] | 2022-09-26 | [["Modirshanechi", "Alireza", ""], ["Brea", "Johanni", ""], ["Gerstner", "Wulfram", ""]] | Surprising events trigger measurable brain activity and influence human behavior by affecting learning, memory, and decision-making. Currently there is, however, no consensus on the definition of surprise. Here we identify 18 mathematical definitions of surprise in a unifying framework. We first propose a technical classification of these definitions into three groups based on their dependence on an agent's belief, show how they relate to each other, and prove under what conditions they are indistinguishable. Going beyond this technical analysis, we propose a taxonomy of surprise definitions and classify them into four conceptual categories based on the quantity they measure: (i) 'prediction surprise' measures a mismatch between a prediction and an observation; (ii) 'change-point detection surprise' measures the probability of a change in the environment; (iii) 'confidence-corrected surprise' explicitly accounts for the effect of confidence; and (iv) 'information gain surprise' measures the belief-update upon a new observation. The taxonomy lays the foundation for principled studies of the functional roles and physiological signatures of surprise in the brain. |
| 1102.4026 | Pau Rué | Pau Rué and Jordi Garcia-Ojalvo | Gene circuit designs for noisy excitable dynamics | 9 pages, 10 figures | Mathematical Biosciences 231 (2011), pp. 90-97 | 10.1016/j.mbs.2011.02.013 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Certain cellular processes take the form of activity pulses that can be interpreted in terms of noise-driven excitable dynamics. Here we present an overview of different gene circuit architectures that exhibit excitable pulses of protein expression when subject to molecular noise. Different types of excitable dynamics can occur depending on the bifurcation structure leading to the specific excitable phase-space topology. The bifurcation structure is not, however, linked to a particular circuit architecture. Thus a given gene circuit design can sustain different classes of excitable dynamics depending on the system parameters. | [{"created": "Sat, 19 Feb 2011 21:12:48 GMT", "version": "v1"}] | 2011-04-22 | [["Rué", "Pau", ""], ["Garcia-Ojalvo", "Jordi", ""]] | Certain cellular processes take the form of activity pulses that can be interpreted in terms of noise-driven excitable dynamics. Here we present an overview of different gene circuit architectures that exhibit excitable pulses of protein expression when subject to molecular noise. Different types of excitable dynamics can occur depending on the bifurcation structure leading to the specific excitable phase-space topology. The bifurcation structure is not, however, linked to a particular circuit architecture. Thus a given gene circuit design can sustain different classes of excitable dynamics depending on the system parameters. |
| 1611.04812 | Krzysztof Gogolewski | K. Gogolewski, M. Startek, A. Gambin and A. Le Rouzic | Modelling the proliferation of transposable elements in populations under environmental stress | 18 pages (without references), 7 figures. The work was presented at the RECOMB 2015 conference. To be published in Mobile DNA or Theoretical Population Biology | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we investigate the evolution of sexual diploid populations that host active transposable element (TE) families. Our purpose is to explore the relationship between environmental change, which influences such populations, and the activity of the TEs present in their genomes. | [{"created": "Tue, 15 Nov 2016 12:59:01 GMT", "version": "v1"}] | 2016-11-16 | [["Gogolewski", "K.", ""], ["Startek", "M.", ""], ["Gambin", "A.", ""], ["Rouzic", "A. Le", ""]] | In this article, we investigate the evolution of sexual diploid populations that host active transposable element (TE) families. Our purpose is to explore the relationship between environmental change, which influences such populations, and the activity of the TEs present in their genomes. |
| 1903.11627 | Gabriel Birzu | Gabriel Birzu, Sakib Matin, Oskar Hallatschek and Kirill S. Korolev | Genetic drift in range expansions is very sensitive to density feedback in dispersal and growth | 36 pages, 5 figures, and 1 table | null | null | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Theory predicts rapid genetic drift during invasions, yet many expanding populations maintain high genetic diversity. We find that genetic drift is dramatically suppressed when dispersal rates increase with the population density because many more migrants from the diverse, high-density regions arrive at the expansion edge. When density-dependence is weak or negative, the effective population size of the front scales only logarithmically with the carrying capacity. The dependence, however, switches to a sublinear power law and then to a linear increase as the density-dependence becomes strongly positive. We develop a unified framework revealing that the transitions between different regimes of diversity loss are controlled by a single, universal parameter: the ratio of the expansion velocity to the geometric mean of dispersal and growth rates at the expansion edge. Our results suggest that positive density-dependence could dramatically alter evolution in expanding populations even when its contribution to the expansion velocity is small. | [{"created": "Wed, 27 Mar 2019 18:07:14 GMT", "version": "v1"}] | 2019-03-29 | [["Birzu", "Gabriel", ""], ["Matin", "Sakib", ""], ["Hallatschek", "Oskar", ""], ["Korolev", "Kirill S.", ""]] | Theory predicts rapid genetic drift during invasions, yet many expanding populations maintain high genetic diversity. We find that genetic drift is dramatically suppressed when dispersal rates increase with the population density because many more migrants from the diverse, high-density regions arrive at the expansion edge. When density-dependence is weak or negative, the effective population size of the front scales only logarithmically with the carrying capacity. The dependence, however, switches to a sublinear power law and then to a linear increase as the density-dependence becomes strongly positive. We develop a unified framework revealing that the transitions between different regimes of diversity loss are controlled by a single, universal parameter: the ratio of the expansion velocity to the geometric mean of dispersal and growth rates at the expansion edge. Our results suggest that positive density-dependence could dramatically alter evolution in expanding populations even when its contribution to the expansion velocity is small. |
| 1709.03813 | Muniba Faiza | Muniba Faiza, Tariq Abdullah, Prof. Yonghua Wang | Dithymoquinone as a novel inhibitor for 3-carboxy-4-methyl-5-propyl-2-furanpropanoic acid (CMPF) to prevent renal failure | 27 pages, 4 figures, and 2 tables | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3-carboxy-4-methyl-5-propyl-2-furanpropanoic acid (CMPF) is a major endogenous ligand found in the human serum albumin (HSA) of renal failure patients. It accumulates in HSA, and its concentration in patients' sera may reflect the chronicity of renal failure [1-4]. It is considered a uremic toxin due to its damaging effect on renal cells. High concentrations of CMPF inhibit the binding of other ligands to HSA. Removal of CMPF through conventional hemodialysis is difficult due to its strong binding affinity. We hypothesized that competitive inhibition may help prevent CMPF from binding to HSA. A compound with a higher HSA binding affinity than CMPF could keep CMPF from binding, so that CMPF could be excreted by the body through the urine. We studied an active compound, dihydrothymoquinone/dithymoquinone (DTQ), found in black cumin seed (Nigella sativa), which has a higher binding affinity for HSA. Molecular docking simulations were performed to determine the binding affinities of CMPF and DTQ with HSA. DTQ was found to have a higher binding affinity, possessing more interactions with the binding residues than CMPF. We studied the binding pocket flexibility of CMPF and DTQ to analyze the binding abilities of both compounds. We also predicted the ADME properties of DTQ, which show higher lipophilicity, higher gastrointestinal (GI) absorption, and blood-brain barrier (BBB) permeability. We conclude that DTQ has the potential to act as an inhibitor of CMPF and can be considered a candidate for the development of a therapeutic drug against CMPF. | [{"created": "Sun, 23 Jul 2017 10:53:26 GMT", "version": "v1"}] | 2017-09-13 | [["Faiza", "Muniba", ""], ["Abdullah", "Tariq", ""], ["Wang", "Prof. Yonghua", ""]] | 3-carboxy-4-methyl-5-propyl-2-furanpropanoic acid (CMPF) is a major endogenous ligand found in the human serum albumin (HSA) of renal failure patients. It accumulates in HSA, and its concentration in patients' sera may reflect the chronicity of renal failure [1-4]. It is considered a uremic toxin due to its damaging effect on renal cells. High concentrations of CMPF inhibit the binding of other ligands to HSA. Removal of CMPF through conventional hemodialysis is difficult due to its strong binding affinity. We hypothesized that competitive inhibition may help prevent CMPF from binding to HSA. A compound with a higher HSA binding affinity than CMPF could keep CMPF from binding, so that CMPF could be excreted by the body through the urine. We studied an active compound, dihydrothymoquinone/dithymoquinone (DTQ), found in black cumin seed (Nigella sativa), which has a higher binding affinity for HSA. Molecular docking simulations were performed to determine the binding affinities of CMPF and DTQ with HSA. DTQ was found to have a higher binding affinity, possessing more interactions with the binding residues than CMPF. We studied the binding pocket flexibility of CMPF and DTQ to analyze the binding abilities of both compounds. We also predicted the ADME properties of DTQ, which show higher lipophilicity, higher gastrointestinal (GI) absorption, and blood-brain barrier (BBB) permeability. We conclude that DTQ has the potential to act as an inhibitor of CMPF and can be considered a candidate for the development of a therapeutic drug against CMPF. |
| 1811.08578 | Bernard Marius 't Hart | Ahmed A. Mostafa, Bernard Marius 't Hart, Denise Y.P. Henriques | Motor Learning Without Moving: Hand Localization after Passive Training | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | An accurate estimate of limb position is necessary for movement. Where we localize our unseen hand after a reach depends on felt hand position, or proprioception, but often only predicted sensory consequences based on efference copies of motor commands are considered. Both signals should contribute, so here we use passive training with rotated visual feedback of hand position to prevent updates of predicted sensory consequences, but still recalibrate proprioception. After this training we measure participants' hand location estimates based on both efference-based predictions and afferent proprioceptive signals with self-generated hand movements, as well as based on proprioception only with robot-generated movements. The changes in hand localization are equally large after training with robot- and self-generated hand movements. Both motor and proprioceptive changes are only slightly smaller than those after training with self-generated movements, confirming that recalibrated proprioception contributes to motor learning. | [{"created": "Wed, 21 Nov 2018 02:58:11 GMT", "version": "v1"}] | 2018-11-22 | [["Mostafa", "Ahmed A.", ""], ["Hart", "Bernard Marius 't", ""], ["Henriques", "Denise Y. P.", ""]] | An accurate estimate of limb position is necessary for movement. Where we localize our unseen hand after a reach depends on felt hand position, or proprioception, but often only predicted sensory consequences based on efference copies of motor commands are considered. Both signals should contribute, so here we use passive training with rotated visual feedback of hand position to prevent updates of predicted sensory consequences, but still recalibrate proprioception. After this training we measure participants' hand location estimates based on both efference-based predictions and afferent proprioceptive signals with self-generated hand movements, as well as based on proprioception only with robot-generated movements. The changes in hand localization are equally large after training with robot- and self-generated hand movements. Both motor and proprioceptive changes are only slightly smaller than those after training with self-generated movements, confirming that recalibrated proprioception contributes to motor learning. |
| 1809.07550 | Jens Wilting | Jens Wilting, Jonas Dehning, Joao Pinheiro Neto, Lucas Rudelt, Michael Wibral, Johannes Zierenberg, Viola Priesemann | Dynamic Adaptive Computation: Tuning network states to task requirements | 6 pages + references, 2 figures | Frontiers in systems neuroscience 12 (2018) | 10.3389/fnsys.2018.00055 | null | q-bio.NC nlin.AO | http://creativecommons.org/licenses/by-sa/4.0/ | Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch, or even combine the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state by small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input depending on task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation", presents a central organization principle of cortical networks and discuss first experimental evidence. | [{"created": "Thu, 20 Sep 2018 10:00:18 GMT", "version": "v1"}] | 2019-10-22 | [["Wilting", "Jens", ""], ["Dehning", "Jonas", ""], ["Neto", "Joao Pinheiro", ""], ["Rudelt", "Lucas", ""], ["Wibral", "Michael", ""], ["Zierenberg", "Johannes", ""], ["Priesemann", "Viola", ""]] | Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch, or even combine the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state by small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input depending on task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation", presents a central organization principle of cortical networks and discuss first experimental evidence. |
| 2212.02229 | Dominic Masters | Dominic Masters, Josef Dean, Kerstin Klaser, Zhiyi Li, Sam Maddrell-Mander, Adam Sanders, Hatem Helal, Deniz Beker, Ladislav Rampášek and Dominique Beaini | GPS++: An Optimised Hybrid MPNN/Transformer for Molecular Property Prediction | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | This technical report presents GPS++, the first-place solution to the Open Graph Benchmark Large-Scale Challenge (OGB-LSC 2022) for the PCQM4Mv2 molecular property prediction task. Our approach implements several key principles from the prior literature. At its core our GPS++ method is a hybrid MPNN/Transformer model that incorporates 3D atom positions and an auxiliary denoising task. The effectiveness of GPS++ is demonstrated by achieving 0.0719 mean absolute error on the independent test-challenge PCQM4Mv2 split. Thanks to Graphcore IPU acceleration, GPS++ scales to deep architectures (16 layers), training at 3 minutes per epoch, and to a large ensemble (112 models), completing the final predictions in 1 hour 32 minutes, well under the allocated 4-hour inference budget. Our implementation is publicly available at: https://github.com/graphcore/ogb-lsc-pcqm4mv2. | [{"created": "Fri, 18 Nov 2022 18:11:27 GMT", "version": "v1"}, {"created": "Tue, 6 Dec 2022 16:53:52 GMT", "version": "v2"}] | 2022-12-07 | [["Masters", "Dominic", ""], ["Dean", "Josef", ""], ["Klaser", "Kerstin", ""], ["Li", "Zhiyi", ""], ["Maddrell-Mander", "Sam", ""], ["Sanders", "Adam", ""], ["Helal", "Hatem", ""], ["Beker", "Deniz", ""], ["Rampášek", "Ladislav", ""], ["Beaini", "Dominique", ""]] | This technical report presents GPS++, the first-place solution to the Open Graph Benchmark Large-Scale Challenge (OGB-LSC 2022) for the PCQM4Mv2 molecular property prediction task. Our approach implements several key principles from the prior literature. At its core our GPS++ method is a hybrid MPNN/Transformer model that incorporates 3D atom positions and an auxiliary denoising task. The effectiveness of GPS++ is demonstrated by achieving 0.0719 mean absolute error on the independent test-challenge PCQM4Mv2 split. Thanks to Graphcore IPU acceleration, GPS++ scales to deep architectures (16 layers), training at 3 minutes per epoch, and to a large ensemble (112 models), completing the final predictions in 1 hour 32 minutes, well under the allocated 4-hour inference budget. Our implementation is publicly available at: https://github.com/graphcore/ogb-lsc-pcqm4mv2. |
| 0706.0643 | Dietrich Stauffer | Dietrich Stauffer, Christian Schulze, Dieter W. Heermann | Superdiffusion in a Model for Diffusion in a Molecularly Crowded Environment | 8 pages including 4 figures | null | null | null | q-bio.SC | null | We present a model for diffusion in a molecularly crowded environment. The model consists of random barriers in a percolation network. Random walks in the presence of slowly moving barriers show normal diffusion for long times, but anomalous diffusion at intermediate times. The effective exponents for square distance versus time are usually below one at these intermediate times, but can also be larger than one for high barrier concentrations. Thus we observe sub- as well as super-diffusion in a crowded environment. | [{"created": "Tue, 5 Jun 2007 12:35:21 GMT", "version": "v1"}] | 2007-06-06 | [["Stauffer", "Dietrich", ""], ["Schulze", "Christian", ""], ["Heermann", "Dieter W.", ""]] | We present a model for diffusion in a molecularly crowded environment. The model consists of random barriers in a percolation network. Random walks in the presence of slowly moving barriers show normal diffusion for long times, but anomalous diffusion at intermediate times. The effective exponents for square distance versus time are usually below one at these intermediate times, but can also be larger than one for high barrier concentrations. Thus we observe sub- as well as super-diffusion in a crowded environment. |
| 2201.07283 | Ushasi Roy | Ushasi Roy, Tyler Collins, Mohit K. Jolly, and Parag Katira | Biophysical and Biochemical mechanisms underlying Collective Cell Migration in Cancer Metastasis | 27 pages, 2 figures, book chapter | null | null | null | q-bio.CB physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multicellular collective migration is a ubiquitous strategy of cells to translocate spatially in diverse tissue environments to accomplish a wide variety of biological phenomena, viz. embryonic development, wound healing, and tumor progression. Diverse cellular functions and behaviors, for instance, cell protrusions, active contractions, cell-cell adhesion, biochemical signaling, and remodeling of the tissue micro-environment, play their own roles concomitantly to produce the single concerted outcome of multicellular migration. Thus unveiling the driving principles, both biochemical and biophysical, of the inherently complex process of collective cell migration is a formidable task. Mathematical and computational models, in tandem with experimental data, help in shedding some light on it. Here we review different factors influencing collective cell migration and then focus on different mathematical and computational models - discrete, hybrid, and continuum - which help in revealing different aspects of multicellular migration. Finally, we discuss the applications of these modeling frameworks specific to cancer. | [{"created": "Tue, 18 Jan 2022 19:36:07 GMT", "version": "v1"}] | 2022-01-20 | [["Roy", "Ushasi", ""], ["Collins", "Tyler", ""], ["Jolly", "Mohit K.", ""], ["Katira", "Parag", ""]] | Multicellular collective migration is a ubiquitous strategy of cells to translocate spatially in diverse tissue environments to accomplish a wide variety of biological phenomena, viz. embryonic development, wound healing, and tumor progression. Diverse cellular functions and behaviors, for instance, cell protrusions, active contractions, cell-cell adhesion, biochemical signaling, and remodeling of the tissue micro-environment, play their own roles concomitantly to produce the single concerted outcome of multicellular migration. Thus unveiling the driving principles, both biochemical and biophysical, of the inherently complex process of collective cell migration is a formidable task. Mathematical and computational models, in tandem with experimental data, help in shedding some light on it. Here we review different factors influencing collective cell migration and then focus on different mathematical and computational models - discrete, hybrid, and continuum - which help in revealing different aspects of multicellular migration. Finally, we discuss the applications of these modeling frameworks specific to cancer. |
| 1903.07231 | Nils Gehlenborg | Michael P Snyder, Shin Lin, Amanda Posgai, Mark Atkinson, Aviv Regev, Jennifer Rood, Orit Rosen, Leslie Gaffney, Anna Hupalowska, Rahul Satija, Nils Gehlenborg, Jay Shendure, Julia Laskin, Pehr Harbury, Nicholas A Nystrom, Ziv Bar-Joseph, Kun Zhang, Katy Börner, Yiing Lin, Richard Conroy, Dena Procaccini, Ananda L Roy, Ajay Pillai, Marishka Brown, Zorina S Galis (for the HuBMAP Consortium) | Mapping the Human Body at Cellular Resolution -- The NIH Common Fund Human BioMolecular Atlas Program | 20 pages, 3 figures | null | 10.1038/s41586-019-1629-x | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Transformative technologies are enabling the construction of three dimensional (3D) maps of tissues with unprecedented spatial and molecular resolution. Over the next seven years, the NIH Common Fund Human Biomolecular Atlas Program (HuBMAP) intends to develop a widely accessible framework for comprehensively mapping the human body at single-cell resolution by supporting technology development, data acquisition, and detailed spatial mapping. HuBMAP will integrate its efforts with other funding agencies, programs, consortia, and the biomedical research community at large towards the shared vision of a comprehensive, accessible 3D molecular and cellular atlas of the human body, in health and various disease settings. | [{"created": "Mon, 18 Mar 2019 02:18:13 GMT", "version": "v1"}, {"created": "Fri, 7 Jun 2019 15:06:19 GMT", "version": "v2"}] | 2019-11-04 | [["Snyder", "Michael P", "", "for the HuBMAP Consortium"], ["Lin", "Shin", "", "for the HuBMAP Consortium"], ["Posgai", "Amanda", "", "for the HuBMAP Consortium"], ["Atkinson", "Mark", "", "for the HuBMAP Consortium"], ["Regev", "Aviv", "", "for the HuBMAP Consortium"], ["Rood", "Jennifer", "", "for the HuBMAP Consortium"], ["Rosen", "Orit", "", "for the HuBMAP Consortium"], ["Gaffney", "Leslie", "", "for the HuBMAP Consortium"], ["Hupalowska", "Anna", "", "for the HuBMAP Consortium"], ["Satija", "Rahul", "", "for the HuBMAP Consortium"], ["Gehlenborg", "Nils", "", "for the HuBMAP Consortium"], ["Shendure", "Jay", "", "for the HuBMAP Consortium"], ["Laskin", "Julia", "", "for the HuBMAP Consortium"], ["Harbury", "Pehr", "", "for the HuBMAP Consortium"], ["Nystrom", "Nicholas A", "", "for the HuBMAP Consortium"], ["Bar-Joseph", "Ziv", "", "for the HuBMAP Consortium"], ["Zhang", "Kun", "", "for the HuBMAP Consortium"], ["Börner", "Katy", "", "for the HuBMAP Consortium"], ["Lin", "Yiing", "", "for the HuBMAP Consortium"], ["Conroy", "Richard", "", "for the HuBMAP Consortium"], ["Procaccini", "Dena", "", "for the HuBMAP Consortium"], ["Roy", "Ananda L", "", "for the HuBMAP Consortium"], ["Pillai", "Ajay", "", "for the HuBMAP Consortium"], ["Brown", "Marishka", "", "for the HuBMAP Consortium"], ["Galis", "Zorina S", "", "for the HuBMAP Consortium"]] | Transformative technologies are enabling the construction of three dimensional (3D) maps of tissues with unprecedented spatial and molecular resolution. Over the next seven years, the NIH Common Fund Human Biomolecular Atlas Program (HuBMAP) intends to develop a widely accessible framework for comprehensively mapping the human body at single-cell resolution by supporting technology development, data acquisition, and detailed spatial mapping. HuBMAP will integrate its efforts with other funding agencies, programs, consortia, and the biomedical research community at large towards the shared vision of a comprehensive, accessible 3D molecular and cellular atlas of the human body, in health and various disease settings. |
1511.06904
|
Ralph Brinks
|
Ralph Brinks
|
An identifiability problem in a state model for partly undetected
chronic diseases
|
5 pages, 1 figure
| null | null | null |
q-bio.PE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, we proposed a state model (compartment model) to describe the
progression of a chronic disease with a pre-clinical (undiagnosed) state
before clinical diagnosis. It is an open question whether a sequence of
cross-sectional studies with mortality follow-up is sufficient to estimate the
true incidence rate of the disease, i.e., the incidence of the undiagnosed and
diagnosed disease. In this note, we construct a counterexample and show that
this cannot be achieved in general.
|
[
{
"created": "Sat, 21 Nov 2015 17:30:22 GMT",
"version": "v1"
}
] |
2015-11-24
|
[
[
"Brinks",
"Ralph",
""
]
] |
Recently, we proposed a state model (compartment model) to describe the progression of a chronic disease with a pre-clinical (undiagnosed) state before clinical diagnosis. It is an open question whether a sequence of cross-sectional studies with mortality follow-up is sufficient to estimate the true incidence rate of the disease, i.e., the incidence of the undiagnosed and diagnosed disease. In this note, we construct a counterexample and show that this cannot be achieved in general.
|
1212.0356
|
Christian Kuehn
|
Christian Kuehn
|
Warning signs for wave speed transitions of noisy Fisher-KPP invasion
fronts
|
14 pages, 8 figures; preprint - comments and suggestions welcome
|
Theoretical Ecology, Vol. 6, No. 3, pp. 295-308, 2013
|
10.1007/s12080-013-0189-1
| null |
q-bio.PE math.DS math.PR nlin.PS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Invasion waves are a fundamental building block of theoretical ecology. In
this study we aim to take the first steps to link propagation failure and fast
acceleration of traveling waves to critical transitions (or tipping points).
The approach is based upon a detailed numerical study of various versions of
the Fisher-Kolmogorov-Petrovskii-Piscounov (FKPP) equation. The main motivation
of this work is to contribute to the following question: how much information
do statistics, collected by a stationary observer, contain about the speed and
bifurcations of traveling waves? We suggest warning signs based upon closeness
to carrying capacity, second-order moments and transients of localized initial
invasions.
|
[
{
"created": "Mon, 3 Dec 2012 11:43:03 GMT",
"version": "v1"
}
] |
2015-03-06
|
[
[
"Kuehn",
"Christian",
""
]
] |
Invasion waves are a fundamental building block of theoretical ecology. In this study we aim to take the first steps to link propagation failure and fast acceleration of traveling waves to critical transitions (or tipping points). The approach is based upon a detailed numerical study of various versions of the Fisher-Kolmogorov-Petrovskii-Piscounov (FKPP) equation. The main motivation of this work is to contribute to the following question: how much information do statistics, collected by a stationary observer, contain about the speed and bifurcations of traveling waves? We suggest warning signs based upon closeness to carrying capacity, second-order moments and transients of localized initial invasions.
|
0807.1898
|
Marco Cosentino Lagomarsino
|
M. Cosentino Lagomarsino, A.L. Sellerio, P.D. Heijning, B. Bassetti
|
Universal Features in the Genome-level Evolution of Protein Domains
| null | null | null | null |
q-bio.GN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein domains are found on genomes with notable statistical distributions,
which bear a high degree of similarity. Previous work has shown how these
distributions can be accounted for by simple models, where the main ingredients
are probabilities of duplication, innovation, and loss of domains. However, no
one so far has addressed the issue that these distributions follow definite
trends depending on protein-coding genome size only. We present a stochastic
duplication/innovation model, falling in the class of so-called Chinese
Restaurant Processes, able to explain this feature of the data. Using only two
universal parameters, related to a minimal number of domains and to the
relative weight of innovation to duplication, the model reproduces two
important aspects: (a) the populations of domain classes (the sets, related to
homology classes, containing realizations of the same domain in different
proteins) follow common power-laws whose cutoff is dictated by genome size, and
(b) the number of domain families is universal and markedly sublinear in genome
size. An important ingredient of the model is that the innovation probability
decreases with genome size. We propose the possibility to interpret this as a
global constraint given by the cost of expanding an increasingly complex
interactome.
|
[
{
"created": "Fri, 11 Jul 2008 17:36:26 GMT",
"version": "v1"
}
] |
2008-07-14
|
[
[
"Lagomarsino",
"M. Cosentino",
""
],
[
"Sellerio",
"A. L.",
""
],
[
"Heijning",
"P. D.",
""
],
[
"Bassetti",
"B.",
""
]
] |
Protein domains are found on genomes with notable statistical distributions, which bear a high degree of similarity. Previous work has shown how these distributions can be accounted for by simple models, where the main ingredients are probabilities of duplication, innovation, and loss of domains. However, no one so far has addressed the issue that these distributions follow definite trends depending on protein-coding genome size only. We present a stochastic duplication/innovation model, falling in the class of so-called Chinese Restaurant Processes, able to explain this feature of the data. Using only two universal parameters, related to a minimal number of domains and to the relative weight of innovation to duplication, the model reproduces two important aspects: (a) the populations of domain classes (the sets, related to homology classes, containing realizations of the same domain in different proteins) follow common power-laws whose cutoff is dictated by genome size, and (b) the number of domain families is universal and markedly sublinear in genome size. An important ingredient of the model is that the innovation probability decreases with genome size. We propose the possibility to interpret this as a global constraint given by the cost of expanding an increasingly complex interactome.
|
1910.08352
|
Michaela Hamm
|
Michaela Hamm and Barbara Drossel
|
The concerted emergence of well-known spatial and temporal ecological
patterns in an evolutionary food web model in space
| null | null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Ecological systems show a variety of characteristic patterns of biodiversity
in space and time. It is a challenge for theory to find models that can
reproduce and explain the observed patterns. Since the advent of island
biogeography these models revolve around speciation, dispersal, and extinction,
but they usually neglect trophic structure. Here, we propose and study a
spatially extended evolutionary food web model that allows us to study large
spatial systems with several trophic layers. Our computer simulations show that
the model gives rise simultaneously to several biodiversity patterns in space
and time, from species abundance distributions to the waxing and waning of
geographic ranges. We find that trophic position in the network plays a crucial
role when it comes to the time evolution of range sizes, because the trophic
context restricts the occurrence and survival of species especially on higher
trophic levels.
|
[
{
"created": "Fri, 18 Oct 2019 11:50:16 GMT",
"version": "v1"
}
] |
2019-10-21
|
[
[
"Hamm",
"Michaela",
""
],
[
"Drossel",
"Barbara",
""
]
] |
Ecological systems show a variety of characteristic patterns of biodiversity in space and time. It is a challenge for theory to find models that can reproduce and explain the observed patterns. Since the advent of island biogeography these models revolve around speciation, dispersal, and extinction, but they usually neglect trophic structure. Here, we propose and study a spatially extended evolutionary food web model that allows us to study large spatial systems with several trophic layers. Our computer simulations show that the model gives rise simultaneously to several biodiversity patterns in space and time, from species abundance distributions to the waxing and waning of geographic ranges. We find that trophic position in the network plays a crucial role when it comes to the time evolution of range sizes, because the trophic context restricts the occurrence and survival of species especially on higher trophic levels.
|
2211.06692
|
Federico William Pasini
|
Federico W. Pasini, Alexandra N. Busch, J\'an Min\'a\v{c}, Krishnan
Padmanabhan, Lyle Muller
|
An algebraic approach to spike-time neural codes in the hippocampus
|
11 pages, 5 figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although temporal coding through spike-time patterns has long been of
interest in neuroscience, the specific structures that could be useful for
spike-time codes remain highly unclear. Here, we introduce a new analytical
approach, using techniques from discrete mathematics, to study spike-time
codes. We focus on the phenomenon of ``phase precession'' in the rodent
hippocampus. During navigation and learning on a physical track, specific cells
in a rodent's brain form a highly structured pattern relative to the
oscillation of local population activity. Studies of phase precession largely
focus on its well established role in synaptic plasticity and memory formation.
Comparatively less attention has been paid to the fact that phase precession
represents one of the best candidates for a spike-time neural code. The precise
nature of this code remains an open question. Here, we derive an analytical
expression for an operator mapping points in physical space, through individual
spike times, to complex numbers. The properties of this operator highlight a
specific relationship between past and future in hippocampal spike patterns.
Importantly, this approach generalizes beyond the specific phenomenon studied
here, providing a new technique to study the neural codes within spike-time
sequences found during sensory coding and motor behavior. We then introduce a
novel spike-based decoding algorithm, based on this operator, that successfully
decodes a simulated animal's trajectory using only the animal's initial
position and a pattern of spike times. This decoder is robust to noise in spike
times and works on a timescale almost an order of magnitude shorter than
typically used with decoders that work on average firing rate. These results
illustrate the utility of a discrete approach, based on the symmetries in spike
patterns, to provide insight into the structure and function of neural systems.
|
[
{
"created": "Sat, 12 Nov 2022 15:45:18 GMT",
"version": "v1"
}
] |
2022-11-15
|
[
[
"Pasini",
"Federico W.",
""
],
[
"Busch",
"Alexandra N.",
""
],
[
"Mináč",
"Ján",
""
],
[
"Padmanabhan",
"Krishnan",
""
],
[
"Muller",
"Lyle",
""
]
] |
Although temporal coding through spike-time patterns has long been of interest in neuroscience, the specific structures that could be useful for spike-time codes remain highly unclear. Here, we introduce a new analytical approach, using techniques from discrete mathematics, to study spike-time codes. We focus on the phenomenon of ``phase precession'' in the rodent hippocampus. During navigation and learning on a physical track, specific cells in a rodent's brain form a highly structured pattern relative to the oscillation of local population activity. Studies of phase precession largely focus on its well established role in synaptic plasticity and memory formation. Comparatively less attention has been paid to the fact that phase precession represents one of the best candidates for a spike-time neural code. The precise nature of this code remains an open question. Here, we derive an analytical expression for an operator mapping points in physical space, through individual spike times, to complex numbers. The properties of this operator highlight a specific relationship between past and future in hippocampal spike patterns. Importantly, this approach generalizes beyond the specific phenomenon studied here, providing a new technique to study the neural codes within spike-time sequences found during sensory coding and motor behavior. We then introduce a novel spike-based decoding algorithm, based on this operator, that successfully decodes a simulated animal's trajectory using only the animal's initial position and a pattern of spike times. This decoder is robust to noise in spike times and works on a timescale almost an order of magnitude shorter than typically used with decoders that work on average firing rate. These results illustrate the utility of a discrete approach, based on the symmetries in spike patterns, to provide insight into the structure and function of neural systems.
|
2302.12714
|
Maurice HT Ling
|
Zhu En Chay, Chin How Lee, Kun Cheng Lee, Jack SH Oon, Maurice HT Ling
|
Russel and Rao Coefficient is a Suitable Substitute for Dice Coefficient
in Studying Restriction Mapped Genetic Distances of Escherichia coli
| null |
iConcept Journal of Computational and Mathematical Biology 1:1
(2010)
| null | null |
q-bio.GN
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Escherichia coli is one of many bacterial inhabitants found in human
intestines and any adaptation as a result of mutations may affect its host. A
commonly used technique employed to study these mutations is Restriction
Fragment Length Polymorphism (RFLP), which is followed by a suitable distance
coefficient to quantify genetic differences between two samples. Dice is
considered a suitable distance coefficient in RFLP analyses, while the
suitability of others remains unstudied. Hence, this study aims to identify
substitutes for Dice. Experimental data were obtained by subculturing E. coli
for 72 passages in 8 different adaptation media, and RFLP profiles were
analyzed using 20 distance coefficients. Our results suggest that the Dennis,
Fossum, Matching, and Russel and Rao coefficients work as well as or better
than Dice. The Dennis, Matching and Fossum coefficients had the highest
discriminatory abilities but are limited by the lack of upper or lower
boundaries. The Russel and Rao coefficient is highly correlated with the Dice
coefficient (r2 = 0.998), with both upper and lower boundaries, suggesting
that the Russel and Rao coefficient can be used to substitute for the Dice
coefficient in studying genetic distances in E. coli.
|
[
{
"created": "Sun, 19 Feb 2023 02:39:00 GMT",
"version": "v1"
}
] |
2023-02-27
|
[
[
"Chay",
"Zhu En",
""
],
[
"Lee",
"Chin How",
""
],
[
"Lee",
"Kun Cheng",
""
],
[
"Oon",
"Jack SH",
""
],
[
"Ling",
"Maurice HT",
""
]
] |
Escherichia coli is one of many bacterial inhabitants found in human intestines and any adaptation as a result of mutations may affect its host. A commonly used technique employed to study these mutations is Restriction Fragment Length Polymorphism (RFLP), which is followed by a suitable distance coefficient to quantify genetic differences between two samples. Dice is considered a suitable distance coefficient in RFLP analyses, while the suitability of others remains unstudied. Hence, this study aims to identify substitutes for Dice. Experimental data were obtained by subculturing E. coli for 72 passages in 8 different adaptation media, and RFLP profiles were analyzed using 20 distance coefficients. Our results suggest that the Dennis, Fossum, Matching, and Russel and Rao coefficients work as well as or better than Dice. The Dennis, Matching and Fossum coefficients had the highest discriminatory abilities but are limited by the lack of upper or lower boundaries. The Russel and Rao coefficient is highly correlated with the Dice coefficient (r2 = 0.998), with both upper and lower boundaries, suggesting that the Russel and Rao coefficient can be used to substitute for the Dice coefficient in studying genetic distances in E. coli.
|
2307.02624
|
James Brunner
|
James D. Brunner and Laverne A. Gallegos-Graves and Marie E. Kroeger
|
Inferring microbial interactions with their environment from genomic and
metagenomic data
|
27 pages, 10 figure, 4 tables
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Microbial communities assemble through a complex set of interactions between
microbes and their environment, and the resulting metabolic impact on the host
ecosystem can be profound. Microbial activity is known to impact human health,
plant growth, water quality, and soil carbon storage, which has led to the
development of many approaches and products meant to manipulate the microbiome.
In order to understand, predict, and improve microbial community engineering,
genome-scale modeling techniques have been developed to translate genomic data
into inferred microbial dynamics. However, these techniques rely heavily on
simulation to draw conclusions which may vary with unknown parameters or
initial conditions, rather than more robust qualitative analysis. To better
understand microbial community dynamics using genome-scale modeling, we provide
a tool to investigate the network of interactions between microbes and
environmental metabolites over time.
Using our previously developed algorithm for simulating microbial communities
from genome-scale metabolic models (GSMs), we infer the set of
microbe-metabolite interactions within a microbial community in a particular
environment. Because these interactions depend on the available environmental
metabolites, we refer to the networks that we infer as \emph{metabolically
contextualized}, and so name our tool MetConSIN: \underline{Met}abolically
\underline{Con}textualized \underline{S}pecies \underline{I}nteraction
\underline{N}etworks.
|
[
{
"created": "Wed, 5 Jul 2023 19:54:30 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Nov 2023 15:27:04 GMT",
"version": "v2"
}
] |
2023-11-09
|
[
[
"Brunner",
"James D.",
""
],
[
"Gallegos-Graves",
"Laverne A.",
""
],
[
"Kroeger",
"Marie E.",
""
]
] |
Microbial communities assemble through a complex set of interactions between microbes and their environment, and the resulting metabolic impact on the host ecosystem can be profound. Microbial activity is known to impact human health, plant growth, water quality, and soil carbon storage, which has led to the development of many approaches and products meant to manipulate the microbiome. In order to understand, predict, and improve microbial community engineering, genome-scale modeling techniques have been developed to translate genomic data into inferred microbial dynamics. However, these techniques rely heavily on simulation to draw conclusions which may vary with unknown parameters or initial conditions, rather than more robust qualitative analysis. To better understand microbial community dynamics using genome-scale modeling, we provide a tool to investigate the network of interactions between microbes and environmental metabolites over time. Using our previously developed algorithm for simulating microbial communities from genome-scale metabolic models (GSMs), we infer the set of microbe-metabolite interactions within a microbial community in a particular environment. Because these interactions depend on the available environmental metabolites, we refer to the networks that we infer as \emph{metabolically contextualized}, and so name our tool MetConSIN: \underline{Met}abolically \underline{Con}textualized \underline{S}pecies \underline{I}nteraction \underline{N}etworks.
|
2403.15092
|
Rub\'en Calvo Ib\'a\~nez
|
Rub\'en Calvo, Carles Martorell, Guillermo B. Morales, Serena Di Santo
and Miguel A. Mu\~noz
|
Frequency-dependent covariance reveals critical spatio-temporal patterns
of synchronized activity in the human brain
| null | null | null | null |
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech physics.bio-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent analyses combining advanced theoretical techniques and high-quality
data from thousands of simultaneously recorded neurons provide strong support
for the hypothesis that neural dynamics operate near the edge of instability
across regions in the brain. However, these analyses, as well as related
studies, often fail to capture the intricate temporal structure of brain
activity as they primarily rely on time-integrated measurements across neurons.
In this study, we present a novel framework designed to explore signatures of
criticality across diverse frequency bands and construct a much more
comprehensive description of brain activity. Additionally, we introduce a
method for projecting brain activity onto a basis of spatio-temporal patterns,
facilitating time-dependent dimensionality reduction. Applying this framework
to a magnetoencephalography dataset, we observe significant differences in both
criticality signatures and spatio-temporal activity patterns between healthy
subjects and individuals with Parkinson's disease.
|
[
{
"created": "Fri, 22 Mar 2024 10:21:28 GMT",
"version": "v1"
}
] |
2024-03-25
|
[
[
"Calvo",
"Rubén",
""
],
[
"Martorell",
"Carles",
""
],
[
"Morales",
"Guillermo B.",
""
],
[
"Di Santo",
"Serena",
""
],
[
"Muñoz",
"Miguel A.",
""
]
] |
Recent analyses combining advanced theoretical techniques and high-quality data from thousands of simultaneously recorded neurons provide strong support for the hypothesis that neural dynamics operate near the edge of instability across regions in the brain. However, these analyses, as well as related studies, often fail to capture the intricate temporal structure of brain activity as they primarily rely on time-integrated measurements across neurons. In this study, we present a novel framework designed to explore signatures of criticality across diverse frequency bands and construct a much more comprehensive description of brain activity. Additionally, we introduce a method for projecting brain activity onto a basis of spatio-temporal patterns, facilitating time-dependent dimensionality reduction. Applying this framework to a magnetoencephalography dataset, we observe significant differences in both criticality signatures and spatio-temporal activity patterns between healthy subjects and individuals with Parkinson's disease.
|
2105.10344
|
Muhammad Usman Sanwal
|
Usman Sanwal, Thai Son Hoang, Luigia Petre and Ion Petre
|
Towards Scalable Modeling of Biology in Event-B
| null | null | null | null |
q-bio.MN cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Biology offers many examples of large-scale, complex, concurrent systems:
many processes take place in parallel, compete for resources and influence each
other's behavior. The scalable modeling of biological systems continues to be a
very active field of research. In this paper we introduce a new approach based
on Event-B, a state-based formal method with refinement as its central
ingredient, allowing us to check for model consistency step-by-step in an
automated way. Our approach based on functions leads to an elegant and concise
modeling method. We demonstrate this approach by constructing what is, to our
knowledge, the largest ever built Event-B model, describing the ErbB signaling
pathway, a key evolutionary pathway with a significant role in development and
in many types of cancer. The Event-B model for the ErbB pathway describes 1320
molecular reactions through 242 events.
|
[
{
"created": "Thu, 20 May 2021 13:37:06 GMT",
"version": "v1"
}
] |
2021-05-24
|
[
[
"Sanwal",
"Usman",
""
],
[
"Hoang",
"Thai Son",
""
],
[
"Petre",
"Luigia",
""
],
[
"Petre",
"Ion",
""
]
] |
Biology offers many examples of large-scale, complex, concurrent systems: many processes take place in parallel, compete for resources and influence each other's behavior. The scalable modeling of biological systems continues to be a very active field of research. In this paper we introduce a new approach based on Event-B, a state-based formal method with refinement as its central ingredient, allowing us to check for model consistency step-by-step in an automated way. Our approach based on functions leads to an elegant and concise modeling method. We demonstrate this approach by constructing what is, to our knowledge, the largest ever built Event-B model, describing the ErbB signaling pathway, a key evolutionary pathway with a significant role in development and in many types of cancer. The Event-B model for the ErbB pathway describes 1320 molecular reactions through 242 events.
|
2309.03911
|
Mamata Das
|
Mamata Das, Selvakumar K., P.J.A. Alphonse
|
Identifying Essential Hub Genes and Protein Complexes in Malaria GO Data
using Semantic Similarity Measures
|
23 pages, 15 figures
| null | null | null |
q-bio.MN
|
http://creativecommons.org/licenses/by/4.0/
|
Hub genes play an essential role in biological systems because of their
interaction with other genes. A vocabulary used in bioinformatics called Gene
Ontology (GO) describes how genes and proteins operate. This flexible ontology
illustrates the operation of molecular, biological, and cellular processes
(Pmol, Pbio, Pcel). There are various methodologies that can be analyzed to
determine semantic similarity. In this study, we employ the jack-knife method,
taking into account four widely used semantic similarity measures, namely
Jaccard similarity, cosine similarity, pairwise document similarity, and
Levenshtein distance. Based on these similarity values, the protein-protein
interaction network (PPI) of Malaria GO (Gene Ontology) data is built, which
causes clusters of identical or related protein complexes (Px) to form. The hub
nodes of the network are these necessary proteins. We use a variety of
centrality measures to establish clusters of these networks in order to
determine which node is the most important. The clusters' unique formation
makes it simple to determine which class of Px they are allied to.
|
[
{
"created": "Wed, 9 Aug 2023 05:10:41 GMT",
"version": "v1"
}
] |
2023-09-11
|
[
[
"Das",
"Mamata",
""
],
[
"K.",
"Selvakumar",
""
],
[
"Alphonse",
"P. J. A.",
""
]
] |
Hub genes play an essential role in biological systems because of their interaction with other genes. A vocabulary used in bioinformatics called Gene Ontology (GO) describes how genes and proteins operate. This flexible ontology illustrates the operation of molecular, biological, and cellular processes (Pmol, Pbio, Pcel). There are various methodologies that can be analyzed to determine semantic similarity. In this study, we employ the jack-knife method, taking into account four widely used semantic similarity measures, namely Jaccard similarity, cosine similarity, pairwise document similarity, and Levenshtein distance. Based on these similarity values, the protein-protein interaction network (PPI) of Malaria GO (Gene Ontology) data is built, which causes clusters of identical or related protein complexes (Px) to form. The hub nodes of the network are these necessary proteins. We use a variety of centrality measures to establish clusters of these networks in order to determine which node is the most important. The clusters' unique formation makes it simple to determine which class of Px they are allied to.
|
2104.03558
|
Govind Kaigala
|
Iago Pereiro, Anna Fomitcheva-Khartchenko, Govind V. Kaigala
|
Shake It or Shrink It: Mass Transport and Kinetics in Surface Bioassays
Using Agitation and Microfluidics
|
12 pages
| null | null | null |
q-bio.CB q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Surface assays, such as ELISA and immunofluorescence, are nothing short of
ubiquitous in biotechnology and medical diagnostics today. The development and
optimization of these assays generally focuses on three aspects: immobilization
chemistry, ligand-receptor interaction and concentrations of ligands, buffers
and sample. A fourth aspect, the transport of the analyte to the surface, is
more rarely delved into during assay design and analysis. Improving transport
is generally limited to the agitation of reagents, a mode of flow generation
inherently difficult to control, often resulting in inconsistent reaction
kinetics. However, with assay optimization reaching theoretical limits, the
role of transport becomes decisive. This perspective develops an intuitive and
practical understanding of transport in conventional agitation systems and in
microfluidics, the latter underpinning many new life science technologies. We
give rules of thumb to guide the user on system behavior, such as advection
regimes and shear stress, and derive estimates for relevant quantities that
delimit assay parameters. Illustrative cases with examples of experimental
results are used to clarify the role of fundamental concepts such as boundary
and depletion layers, mass diffusivity or surface tension.
|
[
{
"created": "Thu, 8 Apr 2021 07:19:43 GMT",
"version": "v1"
}
] |
2021-04-09
|
[
[
"Pereiro",
"Iago",
""
],
[
"Fomitcheva-Khartchenko",
"Anna",
""
],
[
"Kaigala",
"Govind V.",
""
]
] |
Surface assays, such as ELISA and immunofluorescence, are nothing short of ubiquitous in biotechnology and medical diagnostics today. The development and optimization of these assays generally focuses on three aspects: immobilization chemistry, ligand-receptor interaction and concentrations of ligands, buffers and sample. A fourth aspect, the transport of the analyte to the surface, is more rarely delved into during assay design and analysis. Improving transport is generally limited to the agitation of reagents, a mode of flow generation inherently difficult to control, often resulting in inconsistent reaction kinetics. However, with assay optimization reaching theoretical limits, the role of transport becomes decisive. This perspective develops an intuitive and practical understanding of transport in conventional agitation systems and in microfluidics, the latter underpinning many new life science technologies. We give rules of thumb to guide the user on system behavior, such as advection regimes and shear stress, and derive estimates for relevant quantities that delimit assay parameters. Illustrative cases with examples of experimental results are used to clarify the role of fundamental concepts such as boundary and depletion layers, mass diffusivity or surface tension.
|
1007.2070
|
Indrani Bose
|
Sayantari Ghosh, Kamakshi Sureka, Bhaswar Ghosh, Indrani Bose, Joyoti
Basu and Manikuntala Kundu
|
Phenotypic Heterogeneity in Mycobacterial Stringent Response
|
24 pages,8 figures, supplementary information and 5 supplementary
figures
|
BMC Systems Biology 2011, 5:18
|
10.1186/1752-0509-5-18
| null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A common survival strategy of microorganisms subjected to stress involves the
generation of phenotypic heterogeneity in the isogenic microbial population
enabling a subset of the population to survive under stress. In a recent study,
a mycobacterial population of M. smegmatis was shown to develop phenotypic
heterogeneity under nutrient depletion. The observed heterogeneity is in the
form of a bimodal distribution of the expression levels of the Green
Fluorescent Protein (GFP) as reporter with the gfp fused to the promoter of the
rel gene. The stringent response pathway is initiated in the subpopulation with
high rel activity. In the present study, we characterize quantitatively the
single cell promoter activity of the three key genes, namely, mprA, sigE and
rel, in the stringent response pathway with gfp as the reporter. The origin of
bimodality in the GFP distribution lies in two stable expression states, i.e.,
bistability. We develop a theoretical model to study the dynamics of the
stringent response pathway. The model incorporates a recently proposed
mechanism of bistability based on positive feedback and cell growth retardation
due to protein synthesis. Based on flow cytometry data, we establish that the
distribution of GFP levels in the mycobacterial population at any point of time
is a linear superposition of two invariant distributions, one Gaussian and the
other lognormal, with only the coefficients in the linear combination depending
on time. This allows us to use a binning algorithm and determine the time
variation of the mean protein level, the fraction of cells in a subpopulation
and also the coefficient of variation, a measure of gene expression noise. The
results of the theoretical model along with a comprehensive analysis of the
flow cytometry data provide definitive evidence for the coexistence of two
subpopulations with overlapping protein distributions.
|
[
{
"created": "Tue, 13 Jul 2010 10:18:02 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Feb 2011 09:13:18 GMT",
"version": "v2"
}
] |
2011-02-04
|
[
[
"Ghosh",
"Sayantari",
""
],
[
"Sureka",
"Kamakshi",
""
],
[
"Ghosh",
"Bhaswar",
""
],
[
"Bose",
"Indrani",
""
],
[
"Basu",
"Joyoti",
""
],
[
"Kundu",
"Manikuntala",
""
]
] |
A common survival strategy of microorganisms subjected to stress involves the generation of phenotypic heterogeneity in the isogenic microbial population enabling a subset of the population to survive under stress. In a recent study, a mycobacterial population of M. smegmatis was shown to develop phenotypic heterogeneity under nutrient depletion. The observed heterogeneity is in the form of a bimodal distribution of the expression levels of the Green Fluorescent Protein (GFP) as reporter with the gfp fused to the promoter of the rel gene. The stringent response pathway is initiated in the subpopulation with high rel activity. In the present study, we characterize quantitatively the single cell promoter activity of the three key genes, namely, mprA, sigE and rel, in the stringent response pathway with gfp as the reporter. The origin of bimodality in the GFP distribution lies in two stable expression states, i.e., bistability. We develop a theoretical model to study the dynamics of the stringent response pathway. The model incorporates a recently proposed mechanism of bistability based on positive feedback and cell growth retardation due to protein synthesis. Based on flow cytometry data, we establish that the distribution of GFP levels in the mycobacterial population at any point of time is a linear superposition of two invariant distributions, one Gaussian and the other lognormal, with only the coefficients in the linear combination depending on time. This allows us to use a binning algorithm and determine the time variation of the mean protein level, the fraction of cells in a subpopulation and also the coefficient of variation, a measure of gene expression noise. The results of the theoretical model along with a comprehensive analysis of the flow cytometry data provide definitive evidence for the coexistence of two subpopulations with overlapping protein distributions.
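The "linear superposition of two invariant distributions" described in the record above can be sketched numerically. This is a toy illustration assuming a Gaussian "low" state and a lognormal "high" state with invented parameters, not the authors' fitted flow-cytometry distributions.

```python
# Toy two-subpopulation model: pooled GFP levels are a mixture of an
# invariant Gaussian "low" state and an invariant lognormal "high"
# state, with only the mixing fraction varying (e.g. over time).
import numpy as np

rng = np.random.default_rng(1)

def pooled_sample(frac_high, n=100_000):
    """Draw n cells: frac_high from the lognormal 'high' state,
    the rest from the Gaussian 'low' state (parameters are invented)."""
    n_high = int(frac_high * n)
    low = rng.normal(loc=10.0, scale=2.0, size=n - n_high)
    high = rng.lognormal(mean=4.0, sigma=0.25, size=n_high)  # median ~55
    return np.concatenate([low, high])

def estimate_fraction(sample, cut=25.0):
    """Binning-style estimate: fraction of cells above a fixed cut
    separating the two well-separated modes."""
    return float(np.mean(sample > cut))

for true_frac in (0.2, 0.5, 0.8):       # e.g. successive time points
    est = estimate_fraction(pooled_sample(true_frac))
    print(true_frac, round(est, 3))
```

Because only the mixing fraction changes between time points, a simple cut between the two well-separated modes recovers the subpopulation fraction, mirroring the binning algorithm mentioned in the abstract.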
|
1104.2532
|
Sebastiano Stramaglia
|
Sebastiano Stramaglia, Daniele Marinazzo, Mario Pellicoro, and Marina
de Tommaso
|
Abnormal effective connectivity in migraine with aura under photic
stimulation
|
4 pages, 6 figures
| null | null | null |
q-bio.NC cond-mat.dis-nn physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Migraine patients with aura show a peculiar pattern of visual reactivity
compared with that of migraine patients without aura: an increased effective
connectivity, coupled with a reduced synchronization among EEG channels, for
frequencies in the beta band. The effective connectivity is evaluated in terms
of the Granger causality. This anomalous response to visual stimuli may play a
crucial role in the progression of spreading depression and the clinical
manifestation of aura symptoms.
|
[
{
"created": "Wed, 13 Apr 2011 15:51:15 GMT",
"version": "v1"
},
{
"created": "Wed, 11 May 2011 18:19:53 GMT",
"version": "v2"
}
] |
2011-05-12
|
[
[
"Stramaglia",
"Sebastiano",
""
],
[
"Marinazzo",
"Daniele",
""
],
[
"Pellicoro",
"Mario",
""
],
[
"de Tommaso",
"Marina",
""
]
] |
Migraine patients with aura show a peculiar pattern of visual reactivity compared with that of migraine patients without aura: an increased effective connectivity, coupled with a reduced synchronization among EEG channels, for frequencies in the beta band. The effective connectivity is evaluated in terms of the Granger causality. This anomalous response to visual stimuli may play a crucial role in the progression of spreading depression and the clinical manifestation of aura symptoms.
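Granger causality, the effective-connectivity measure named in the record above, reduces in the bivariate linear case to comparing prediction errors with and without the other channel's past. A minimal sketch on toy data (not the paper's EEG pipeline):

```python
# Minimal bivariate Granger-causality sketch: x "Granger-causes" y if
# past values of x improve the prediction of y beyond y's own past.
import numpy as np

def granger_stat(x, y, lag=2):
    """Log ratio of residual variances: y predicted from its own lags
    (restricted) vs. its own lags plus x's lags (full). Positive values
    mean x helps predict y."""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])
    ones = np.ones((n - lag, 1))
    X_r = np.hstack([ones, own])             # restricted model
    X_f = np.hstack([ones, own, cross])      # full model
    res_r = Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]
    res_f = Y - X_f @ np.linalg.lstsq(X_f, Y, rcond=None)[0]
    return float(np.log(np.var(res_r) / np.var(res_f)))

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                        # x drives y with one-step delay
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_stat(x, y), granger_stat(y, x))
```

On these toy data the x-to-y statistic is large while the y-to-x statistic is near zero; that asymmetry is what defines a direction of effective connectivity.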
|
q-bio/0608037
|
Yong Chen
|
Shao-Meng Qin, Yong Chen and Pan Zhang
|
Network growth approach to macroevolution
|
16 pages, 7 figures, published version
|
New J. Phys. 9 (2007) 220
|
10.1088/1367-2630/9/7/220
| null |
q-bio.PE
| null |
We propose a novel network growth model coupled with the competition
interaction to simulate macroevolution. Our work shows that the competition
plays an important role in macroevolution and it is more rational to describe
the interaction between species by network structures. Our model presents a
complete picture of the development of phyla and the splitting process. It is
found that periodic mass extinction occurred in our networks without any
extraterrestrial factors and the lifetime distribution of species is very close
to the fossil record. We also perturb networks with two scenarios of mass
extinctions on different hierarchic levels in order to study their recovery.
|
[
{
"created": "Fri, 25 Aug 2006 08:40:47 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Nov 2006 10:32:10 GMT",
"version": "v2"
},
{
"created": "Sat, 10 Mar 2007 03:41:36 GMT",
"version": "v3"
},
{
"created": "Wed, 11 Jul 2007 05:42:07 GMT",
"version": "v4"
}
] |
2007-07-11
|
[
[
"Qin",
"Shao-Meng",
""
],
[
"Chen",
"Yong",
""
],
[
"Zhang",
"Pan",
""
]
] |
We propose a novel network growth model coupled with the competition interaction to simulate macroevolution. Our work shows that the competition plays an important role in macroevolution and it is more rational to describe the interaction between species by network structures. Our model presents a complete picture of the development of phyla and the splitting process. It is found that periodic mass extinction occurred in our networks without any extraterrestrial factors and the lifetime distribution of species is very close to the fossil record. We also perturb networks with two scenarios of mass extinctions on different hierarchic levels in order to study their recovery.
|
1110.2727
|
Irina Manina
|
I.V. Manina, N.M. Peretolchina, N.S. Saprikina, A.M. Kozlov, I.N.
  Mikhaylova, K.I. Jordanya, A.Y. Barishnikov
|
Prospects of using Antagonist Histamine H2-Receptor (Cimetidinum) as
Adjuvant for Melanoma Biotherapy Treatment
|
14 pages, 5 figures; ISSN 0236-297X. International Journal of
Immunopathology, Allergology, Infectology. 2010, #4: P. 42-51
|
International Journal of Immunopathology, Allergology, Infectology
2010, #4: P. 42-51
| null | null |
q-bio.TO q-bio.CB
|
http://creativecommons.org/licenses/publicdomain/
|
  Improvement of anti-tumor biotherapy effectiveness by modification of the
immune response with the histamine H2-receptor antagonist Cimetidinum (CM) was
studied using the
experimental murine model of B16 F10 melanoma in vivo. It is shown that skin
melanoma biotherapy by antitumor whole-cell GM-CSF-producing vaccine with the
addition of CM (in dose of 25 mg/kg, daily for 5 days) increases preventive
effects of vaccination. 33 % of mice did not have tumor growth within 60 days
of observation. Average life span of animals exceeded those of the control
group up to 68 %. Using CM-combined bio-chemotherapy doesn't improve the
therapeutic effect; however, in the case of a monotherapeutic approach, a
tendency for increased average life-time and decreased metastatic processes in
mice with the developed tumors was noticed. Acquired data provides expediency
to study the
further application of CM as adjuvant for skin melanoma vaccinotherapy, and
also the necessity to verify the immune status of an organism against the
complex bio-chemotherapy.
|
[
{
"created": "Wed, 12 Oct 2011 18:17:47 GMT",
"version": "v1"
}
] |
2011-10-13
|
[
[
"Manina",
"I. V.",
""
],
[
"Peretolchina",
"N. M.",
""
],
[
"Saprikina",
"N. S.",
""
],
[
"Kozlov",
"A. M.",
""
],
[
"Mikhaylova",
"I. N.",
""
],
    [
      "Jordanya",
      "K. I.",
      ""
    ],
    [
      "Barishnikov",
      "A. Y.",
      ""
    ]
] |
Improvement of anti-tumor biotherapy effectiveness by modification of the immune response with the histamine H2-receptor antagonist Cimetidinum (CM) was studied using the experimental murine model of B16 F10 melanoma in vivo. It is shown that skin melanoma biotherapy by antitumor whole-cell GM-CSF-producing vaccine with the addition of CM (in dose of 25 mg/kg, daily for 5 days) increases preventive effects of vaccination. 33 % of mice did not have tumor growth within 60 days of observation. Average life span of animals exceeded those of the control group up to 68 %. Using CM-combined bio-chemotherapy doesn't improve the therapeutic effect; however, in the case of a monotherapeutic approach, a tendency for increased average life-time and decreased metastatic processes in mice with the developed tumors was noticed. Acquired data provides expediency to study the further application of CM as adjuvant for skin melanoma vaccinotherapy, and also the necessity to verify the immune status of an organism against the complex bio-chemotherapy.
|
2001.11582
|
Jeff Mohl
|
Jeff T. Mohl, Valeria C. Caruso, Surya T. Tokdar, Jennifer M. Groh
|
Sensitivity and specificity of a Bayesian single trial analysis for time
varying neural signals
|
Accepted for publication in Neurons, Behavior, Data analysis, and
Theory
|
Neurons, Behavior, Data Analysis, and Theory, 2020
|
10.1101/690958
| null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
We recently reported the existence of fluctuations in neural signals that may
permit neurons to code multiple simultaneous stimuli sequentially across time.
This required deploying a novel statistical approach to permit investigation of
neural activity at the scale of individual trials. Here we present tests using
synthetic data to assess the sensitivity and specificity of this analysis. We
fabricated datasets to match each of several potential response patterns
derived from single-stimulus response distributions. In particular, we
simulated dual stimulus trial spike counts that reflected fluctuating mixtures
of the single stimulus spike counts, stable intermediate averages, single
stimulus winner-take-all, or response distributions that were outside the range
defined by the single stimulus responses (such as summation or suppression). We
then assessed how well the analysis recovered the correct response pattern as a
function of the number of simulated trials and the difference between the
simulated responses to each "stimulus" alone. We found excellent recovery of
the mixture, intermediate, and outside categories (>97% correct), and
good recovery of the single/winner-take-all category (>90% correct) when the
number of trials was >20 and the single-stimulus response rates were 50Hz and
20Hz respectively. Both larger numbers of trials and greater separation between
the single stimulus firing rates improved categorization accuracy. These
results provide a benchmark, and guidelines for data collection, for use of
this method to investigate coding of multiple items at the individual-trial
time scale.
|
[
{
"created": "Thu, 30 Jan 2020 21:59:06 GMT",
"version": "v1"
}
] |
2020-02-03
|
[
[
"Mohl",
"Jeff T.",
""
],
[
"Caruso",
"Valeria C.",
""
],
[
"Tokdar",
"Surya T.",
""
],
[
"Groh",
"Jennifer M.",
""
]
] |
We recently reported the existence of fluctuations in neural signals that may permit neurons to code multiple simultaneous stimuli sequentially across time. This required deploying a novel statistical approach to permit investigation of neural activity at the scale of individual trials. Here we present tests using synthetic data to assess the sensitivity and specificity of this analysis. We fabricated datasets to match each of several potential response patterns derived from single-stimulus response distributions. In particular, we simulated dual stimulus trial spike counts that reflected fluctuating mixtures of the single stimulus spike counts, stable intermediate averages, single stimulus winner-take-all, or response distributions that were outside the range defined by the single stimulus responses (such as summation or suppression). We then assessed how well the analysis recovered the correct response pattern as a function of the number of simulated trials and the difference between the simulated responses to each "stimulus" alone. We found excellent recovery of the mixture, intermediate, and outside categories (>97% correct), and good recovery of the single/winner-take-all category (>90% correct) when the number of trials was >20 and the single-stimulus response rates were 50Hz and 20Hz respectively. Both larger numbers of trials and greater separation between the single stimulus firing rates improved categorization accuracy. These results provide a benchmark, and guidelines for data collection, for use of this method to investigate coding of multiple items at the individual-trial time scale.
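The contrast between a "fluctuating mixture" and a "stable intermediate average" response can be sketched with whole-trial Poisson spike counts. This is a toy version using the abstract's 50 Hz and 20 Hz single-stimulus rates, not the authors' Bayesian analysis.

```python
# Toy dual-stimulus trials: in the "mixture" pattern, each trial's
# whole-trial rate is either the A-alone or the B-alone rate; in the
# "intermediate" pattern, every trial uses the average rate. The two
# conditions share a mean but differ sharply in trial-to-trial spread.
import numpy as np

rng = np.random.default_rng(2)
n_trials, rate_a, rate_b = 1000, 50.0, 20.0   # 1-s trials, rates in Hz

pick = rng.random(n_trials) < 0.5             # which stimulus "wins" a trial
mixture = rng.poisson(np.where(pick, rate_a, rate_b))
intermediate = rng.poisson(0.5 * (rate_a + rate_b), size=n_trials)

def fano(counts):
    """Variance-to-mean ratio; equals 1 for a pure Poisson process."""
    return float(counts.var() / counts.mean())

print(f"mixture Fano ~ {fano(mixture):.1f}, "
      f"intermediate Fano ~ {fano(intermediate):.1f}")
```

Both conditions have the same mean rate; the mixture betrays itself through overdispersion (Fano factor well above 1), which is the trial-to-trial structure the single-trial analysis is designed to categorize.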
|
2401.04155
|
Jiajia Liu
|
Jiajia Liu, Mengyuan Yang, Yankai Yu, Haixia Xu, Kang Li and Xiaobo
Zhou
|
Large language models in bioinformatics: applications and perspectives
|
7 figures
| null | null | null |
q-bio.QM cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) are a class of artificial intelligence models
based on deep learning, which achieve excellent performance in various tasks,
especially in natural language processing (NLP). Large language models
typically consist of artificial neural networks with numerous parameters,
trained on large amounts of unlabeled input using self-supervised or
semi-supervised learning. However, their potential for solving bioinformatics
problems may even exceed their proficiency in modeling human language. In this
review, we will present a summary of the prominent large language models used
in natural language processing, such as BERT and GPT, and focus on exploring
the applications of large language models at different omics levels in
bioinformatics, mainly including applications of large language models in
genomics, transcriptomics, proteomics, drug discovery and single cell analysis.
Finally, this review summarizes the potential and prospects of large language
models in solving bioinformatic problems.
|
[
{
"created": "Mon, 8 Jan 2024 17:26:59 GMT",
"version": "v1"
}
] |
2024-01-10
|
[
[
"Liu",
"Jiajia",
""
],
[
"Yang",
"Mengyuan",
""
],
[
"Yu",
"Yankai",
""
],
[
"Xu",
"Haixia",
""
],
[
"Li",
"Kang",
""
],
[
"Zhou",
"Xiaobo",
""
]
] |
Large language models (LLMs) are a class of artificial intelligence models based on deep learning, which achieve excellent performance in various tasks, especially in natural language processing (NLP). Large language models typically consist of artificial neural networks with numerous parameters, trained on large amounts of unlabeled input using self-supervised or semi-supervised learning. However, their potential for solving bioinformatics problems may even exceed their proficiency in modeling human language. In this review, we will present a summary of the prominent large language models used in natural language processing, such as BERT and GPT, and focus on exploring the applications of large language models at different omics levels in bioinformatics, mainly including applications of large language models in genomics, transcriptomics, proteomics, drug discovery and single cell analysis. Finally, this review summarizes the potential and prospects of large language models in solving bioinformatic problems.
|
1110.3317
|
David Albers
|
DJ Albers, George Hripcsak, and Michael Schmidt
|
Population physiology: leveraging population scale (EHR) data to
understand human endocrine dynamics
| null | null |
10.1371/journal.pone.0048058
| null |
q-bio.QM nlin.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Studying physiology over a broad population for long periods of time is
difficult primarily because collecting human physiologic data is intrusive,
dangerous, and expensive. Electronic health record (EHR) data promise to
support the development and testing of mechanistic physiologic models on
diverse populations, but limitations in the data have thus far thwarted such
use. For instance, using uncontrolled population-scale EHR data to verify the
outcome of time-dependent behavior of mechanistic, constructive models can be
difficult because: (i) aggregation of the population can obscure or generate a
signal, (ii) there is often no control population, and (iii) diversity in how
the population is measured can make the data difficult to fit into conventional
analysis techniques. This paper shows that it is possible to use EHR data to
test a physiological model for a population and over long time scales.
Specifically, a methodology is developed and demonstrated for testing a
mechanistic, time-dependent, physiological model of serum glucose dynamics with
uncontrolled, population-scale, physiological patient data extracted from an
EHR repository. It is shown that there is no observable daily variation in the
normalized mean glucose for any EHR subpopulations. In contrast, a derived
value, daily variation in nonlinear correlation quantified by the time-delayed
mutual information (TDMI), did reveal the intuitively expected diurnal
variation in glucose levels amongst a wild population of humans. Moreover, in a
population of intravenously fed patients, there was no observable TDMI-based
diurnal signal. These TDMI-based signals, via a glucose insulin model, were
then connected with human feeding patterns. In particular, a constructive
physiological model was shown to correctly predict the difference between the
general uncontrolled population and a subpopulation whose feeding was
controlled.
|
[
{
"created": "Fri, 14 Oct 2011 17:52:50 GMT",
"version": "v1"
}
] |
2015-05-30
|
[
[
"Albers",
"DJ",
""
],
[
"Hripcsak",
"George",
""
],
[
"Schmidt",
"Michael",
""
]
] |
Studying physiology over a broad population for long periods of time is difficult primarily because collecting human physiologic data is intrusive, dangerous, and expensive. Electronic health record (EHR) data promise to support the development and testing of mechanistic physiologic models on diverse populations, but limitations in the data have thus far thwarted such use. For instance, using uncontrolled population-scale EHR data to verify the outcome of time-dependent behavior of mechanistic, constructive models can be difficult because: (i) aggregation of the population can obscure or generate a signal, (ii) there is often no control population, and (iii) diversity in how the population is measured can make the data difficult to fit into conventional analysis techniques. This paper shows that it is possible to use EHR data to test a physiological model for a population and over long time scales. Specifically, a methodology is developed and demonstrated for testing a mechanistic, time-dependent, physiological model of serum glucose dynamics with uncontrolled, population-scale, physiological patient data extracted from an EHR repository. It is shown that there is no observable daily variation in the normalized mean glucose for any EHR subpopulations. In contrast, a derived value, daily variation in nonlinear correlation quantified by the time-delayed mutual information (TDMI), did reveal the intuitively expected diurnal variation in glucose levels amongst a wild population of humans. Moreover, in a population of intravenously fed patients, there was no observable TDMI-based diurnal signal. These TDMI-based signals, via a glucose insulin model, were then connected with human feeding patterns. In particular, a constructive physiological model was shown to correctly predict the difference between the general uncontrolled population and a subpopulation whose feeding was controlled.
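Time-delayed mutual information, the nonlinear measure used in the record above, can be sketched with a plug-in histogram estimator. This is illustrative only: neither the estimator settings nor the data resemble the authors' glucose analysis.

```python
# Plug-in (histogram) mutual information and TDMI sketch. MI is a
# nonlinear dependence measure: it detects the y = x^2 relation that the
# Pearson correlation misses, and TDMI of an AR(1) series decays with lag.
import numpy as np

rng = np.random.default_rng(3)

def mutual_info(a, b, bins=16):
    """Plug-in MI estimate (nats) from a 2-D histogram."""
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab /= pab.sum()
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    nz = pab > 0
    return float(np.sum(pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])))

def tdmi(x, tau, bins=16):
    """Mutual information between the series and itself tau steps later."""
    return mutual_info(x[:-tau], x[tau:], bins)

x = rng.normal(size=50_000)
y = x**2 + 0.1 * rng.normal(size=x.size)   # dependent but uncorrelated
corr = float(np.corrcoef(x, y)[0, 1])

ar = np.zeros(50_000)                      # AR(1): dependence decays with lag
for t in range(1, ar.size):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()

print(round(corr, 3), round(mutual_info(x, y), 2),
      round(tdmi(ar, 1), 2), round(tdmi(ar, 50), 3))
```

The near-zero correlation alongside a large MI for the quadratic pair shows why a TDMI-based signal can reveal structure, such as a diurnal rhythm, that a linear mean-based statistic misses.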
|
2006.03735
|
Caroline Uhler
|
Anastasiya Belyaeva, Louis Cammarata, Adityanarayanan Radhakrishnan,
Chandler Squires, Karren Dai Yang, G.V. Shivashankar, Caroline Uhler
|
Causal Network Models of SARS-CoV-2 Expression and Aging to Identify
Candidates for Drug Repurposing
| null | null |
10.1038/s41467-021-21056-z
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the severity of the SARS-CoV-2 pandemic, a major challenge is to
rapidly repurpose existing approved drugs for clinical interventions. While a
number of data-driven and experimental approaches have been suggested in the
context of drug repurposing, a platform that systematically integrates
available transcriptomic, proteomic and structural data is missing. More
importantly, given that SARS-CoV-2 pathogenicity is highly age-dependent, it is
critical to integrate aging signatures into drug discovery platforms. We here
take advantage of large-scale transcriptional drug screens combined with
RNA-seq data of the lung epithelium with SARS-CoV-2 infection as well as the
aging lung. To identify robust druggable protein targets, we propose a
principled causal framework that makes use of multiple data modalities. Our
analysis highlights the importance of serine/threonine and tyrosine kinases as
potential targets that intersect the SARS-CoV-2 and aging pathways. By
integrating transcriptomic, proteomic and structural data that is available for
many diseases, our drug discovery platform is broadly applicable. Rigorous in
vitro experiments as well as clinical trials are needed to validate the
identified candidate drugs.
|
[
{
"created": "Fri, 5 Jun 2020 23:16:44 GMT",
"version": "v1"
}
] |
2021-04-28
|
[
[
"Belyaeva",
"Anastasiya",
""
],
[
"Cammarata",
"Louis",
""
],
[
"Radhakrishnan",
"Adityanarayanan",
""
],
[
"Squires",
"Chandler",
""
],
[
"Yang",
"Karren Dai",
""
],
[
"Shivashankar",
"G. V.",
""
],
[
"Uhler",
"Caroline",
""
]
] |
Given the severity of the SARS-CoV-2 pandemic, a major challenge is to rapidly repurpose existing approved drugs for clinical interventions. While a number of data-driven and experimental approaches have been suggested in the context of drug repurposing, a platform that systematically integrates available transcriptomic, proteomic and structural data is missing. More importantly, given that SARS-CoV-2 pathogenicity is highly age-dependent, it is critical to integrate aging signatures into drug discovery platforms. We here take advantage of large-scale transcriptional drug screens combined with RNA-seq data of the lung epithelium with SARS-CoV-2 infection as well as the aging lung. To identify robust druggable protein targets, we propose a principled causal framework that makes use of multiple data modalities. Our analysis highlights the importance of serine/threonine and tyrosine kinases as potential targets that intersect the SARS-CoV-2 and aging pathways. By integrating transcriptomic, proteomic and structural data that is available for many diseases, our drug discovery platform is broadly applicable. Rigorous in vitro experiments as well as clinical trials are needed to validate the identified candidate drugs.
|
2012.08580
|
Yuan Luo
|
Yuan Luo, Chengsheng Mao
|
PANTHER: Pathway Augmented Nonnegative Tensor factorization for
HighER-order feature learning
|
Accepted by 35th AAAI Conference on Artificial Intelligence (AAAI
2021)
| null | null | null |
q-bio.QM cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genetic pathways usually encode molecular mechanisms that can inform targeted
interventions. It is often challenging for existing machine learning approaches
to jointly model genetic pathways (higher-order features) and variants (atomic
features), and present to clinicians interpretable models. In order to build
more accurate and better interpretable machine learning models for genetic
medicine, we introduce Pathway Augmented Nonnegative Tensor factorization for
HighER-order feature learning (PANTHER). PANTHER selects informative genetic
pathways that directly encode molecular mechanisms. We apply genetically
motivated constrained tensor factorization to group pathways in a way that
reflects molecular mechanism interactions. We then train a softmax classifier
for disease types using the identified pathway groups. We evaluated PANTHER
against multiple state-of-the-art constrained tensor/matrix factorization
models, as well as group guided and Bayesian hierarchical models. PANTHER
outperforms all state-of-the-art comparison models significantly (p<0.05). Our
experiments on large scale Next Generation Sequencing (NGS) and whole-genome
genotyping datasets also demonstrated wide applicability of PANTHER. We
performed feature analysis in predicting disease types, which suggested
insights and benefits of the identified pathway groups.
|
[
{
"created": "Tue, 15 Dec 2020 19:39:55 GMT",
"version": "v1"
}
] |
2021-11-19
|
[
[
"Luo",
"Yuan",
""
],
[
"Mao",
"Chengsheng",
""
]
] |
Genetic pathways usually encode molecular mechanisms that can inform targeted interventions. It is often challenging for existing machine learning approaches to jointly model genetic pathways (higher-order features) and variants (atomic features), and present to clinicians interpretable models. In order to build more accurate and better interpretable machine learning models for genetic medicine, we introduce Pathway Augmented Nonnegative Tensor factorization for HighER-order feature learning (PANTHER). PANTHER selects informative genetic pathways that directly encode molecular mechanisms. We apply genetically motivated constrained tensor factorization to group pathways in a way that reflects molecular mechanism interactions. We then train a softmax classifier for disease types using the identified pathway groups. We evaluated PANTHER against multiple state-of-the-art constrained tensor/matrix factorization models, as well as group guided and Bayesian hierarchical models. PANTHER outperforms all state-of-the-art comparison models significantly (p<0.05). Our experiments on large scale Next Generation Sequencing (NGS) and whole-genome genotyping datasets also demonstrated wide applicability of PANTHER. We performed feature analysis in predicting disease types, which suggested insights and benefits of the identified pathway groups.
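The factorization machinery behind approaches like the one above can be illustrated with plain nonnegative matrix factorization, a 2-D special case of nonnegative tensor factorization. This is the classic Lee-Seung multiplicative update on synthetic data, not the authors' pathway-constrained model.

```python
# Nonnegative matrix factorization V ~= W @ H via multiplicative updates.
# The updates keep W and H nonnegative and do not increase the Frobenius
# reconstruction error.
import numpy as np

rng = np.random.default_rng(4)
eps = 1e-9

# Synthetic nonnegative data with an exact rank-3 structure
W_true = rng.random((20, 3))
H_true = rng.random((3, 30))
V = W_true @ H_true

W = rng.random((20, 3))
H = rng.random((3, 30))
err0 = float(np.linalg.norm(V - W @ H))

for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = float(np.linalg.norm(V - W @ H))
print(f"reconstruction error: {err0:.3f} -> {err:.5f}")
```

The nonnegativity constraint is what makes the learned factors readable as additive parts (here generic components; in the paper, pathway groups), in contrast to unconstrained factorizations whose signed factors are harder to interpret.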
|
1307.7658
|
Daniel Larremore
|
Daniel B. Larremore, Woodrow L. Shew, Edward Ott, Francesco
Sorrentino, Juan G. Restrepo
|
Inhibition causes ceaseless dynamics in networks of excitable nodes
|
11 pages, 6 figures
|
Phys. Rev. Lett. 112, 138103 (2014)
|
10.1103/PhysRevLett.112.138103
| null |
q-bio.NC cond-mat.dis-nn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The collective dynamics of a network of excitable nodes changes dramatically
when inhibitory nodes are introduced. We consider inhibitory nodes which may be
activated just like excitatory nodes but, upon activating, decrease the
probability of activation of network neighbors. We show that, although the
direct effect of inhibitory nodes is to decrease activity, the collective
dynamics becomes self-sustaining. We explain this counterintuitive result by
defining and analyzing a "branching function" which may be thought of as an
activity-dependent branching ratio. The shape of the branching function implies
that for a range of global coupling parameters dynamics are self-sustaining.
Within the self-sustaining region of parameter space lies a critical line along
which dynamics take the form of avalanches with universal scaling of size and
duration, embedded in ceaseless timeseries of activity. Our analyses, confirmed
by numerical simulation, suggest that inhibition may play a counterintuitive
role in excitable networks.
|
[
{
"created": "Mon, 29 Jul 2013 17:36:18 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Dec 2013 20:49:45 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Feb 2014 17:20:05 GMT",
"version": "v3"
}
] |
2014-04-03
|
[
[
"Larremore",
"Daniel B.",
""
],
[
"Shew",
"Woodrow L.",
""
],
[
"Ott",
"Edward",
""
],
[
"Sorrentino",
"Francesco",
""
],
[
"Restrepo",
"Juan G.",
""
]
] |
The collective dynamics of a network of excitable nodes changes dramatically when inhibitory nodes are introduced. We consider inhibitory nodes which may be activated just like excitatory nodes but, upon activating, decrease the probability of activation of network neighbors. We show that, although the direct effect of inhibitory nodes is to decrease activity, the collective dynamics becomes self-sustaining. We explain this counterintuitive result by defining and analyzing a "branching function" which may be thought of as an activity-dependent branching ratio. The shape of the branching function implies that for a range of global coupling parameters dynamics are self-sustaining. Within the self-sustaining region of parameter space lies a critical line along which dynamics take the form of avalanches with universal scaling of size and duration, embedded in ceaseless timeseries of activity. Our analyses, confirmed by numerical simulation, suggest that inhibition may play a counterintuitive role in excitable networks.
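The "branching function" idea in the record above can be caricatured with a scalar mean-field map. This is illustrative only: the authors analyze full networks with inhibitory nodes, not this map. Here a_t is the fraction of active nodes; each active node excites others at strength lam, and already-active nodes cannot be re-excited, which saturates the response.

```python
# Mean-field caricature of an activity-dependent branching ratio:
# expected next activity = (inactive fraction) * (activation probability).
import math

def step(a, lam):
    """One update of the mean-field activity map."""
    return (1.0 - a) * (1.0 - math.exp(-lam * a))

def run(lam, a0=0.5, steps=500):
    """Iterate the map and return the long-time activity level."""
    a = a0
    for _ in range(steps):
        a = step(a, lam)
    return a

# Near zero activity the effective branching ratio is lam, so ceaseless
# dynamics (a stable nonzero fixed point) require lam > 1.
sustained = run(lam=3.0)   # settles near a nonzero fixed point
dying = run(lam=0.5)       # activity decays toward extinction
print(f"lam=3.0: a -> {sustained:.3f}; lam=0.5: a -> {dying:.2e}")
```

The shape of this map, supercritical at low activity and saturating at high activity, is the scalar analogue of the branching function whose form the paper uses to explain self-sustained dynamics.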
|
2406.19041
|
Manuel Morante
|
Manuel Morante, Kristian Fr{\o}lich and Naveed ur Rehman
|
Multiscale Functional Connectivity: Exploring the brain functional
connectivity at different timescales
|
33 pages, 7 figures and 3 tables
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Human brains exhibit highly organized multiscale neurophysiological dynamics.
Understanding those dynamic changes and the neuronal networks involved is
critical for understanding how the brain functions in health and disease.
Functional Magnetic Resonance Imaging (fMRI) is a prevalent neuroimaging
technique for studying these complex interactions. However, analyzing fMRI data
poses several challenges. Furthermore, most approaches for analyzing Functional
Connectivity (FC) still rely on preprocessing or conventional methods, often
built upon oversimplified assumptions. On top of that, those approaches often
ignore frequency-related information despite evidence showing that fMRI data
contain rich information that spans multiple timescales. This study introduces
a novel methodology, Multiscale Functional Connectivity (MFC), to analyze fMRI
data by decomposing the fMRI into their intrinsic modes, allowing us to
separate the neurophysiological activation patterns at multiple timescales
while separating them from other interfering components. Additionally, the
proposed approach accounts for the natural nonlinear and nonstationary nature
of fMRI and the particularities of each individual in a data-driven way. We
evaluated the performance of our proposed methodology using three fMRI
experiments. Our results demonstrate that our novel approach effectively
separates the fMRI data into different timescales while identifying highly
reliable functional connectivity patterns across individuals. In addition, we
further extended our knowledge of how the FC for these three experiments spans
different timescales.
|
[
{
"created": "Thu, 27 Jun 2024 09:48:05 GMT",
"version": "v1"
}
] |
2024-06-28
|
[
[
"Morante",
"Manuel",
""
],
[
"Frølich",
"Kristian",
""
],
[
"Rehman",
"Naveed ur",
""
]
] |
Human brains exhibit highly organized multiscale neurophysiological dynamics. Understanding those dynamic changes and the neuronal networks involved is critical for understanding how the brain functions in health and disease. Functional Magnetic Resonance Imaging (fMRI) is a prevalent neuroimaging technique for studying these complex interactions. However, analyzing fMRI data poses several challenges. Furthermore, most approaches for analyzing Functional Connectivity (FC) still rely on preprocessing or conventional methods, often built upon oversimplified assumptions. On top of that, those approaches often ignore frequency-related information despite evidence showing that fMRI data contain rich information that spans multiple timescales. This study introduces a novel methodology, Multiscale Functional Connectivity (MFC), to analyze fMRI data by decomposing the fMRI into their intrinsic modes, allowing us to separate the neurophysiological activation patterns at multiple timescales while separating them from other interfering components. Additionally, the proposed approach accounts for the natural nonlinear and nonstationary nature of fMRI and the particularities of each individual in a data-driven way. We evaluated the performance of our proposed methodology using three fMRI experiments. Our results demonstrate that our novel approach effectively separates the fMRI data into different timescales while identifying highly reliable functional connectivity patterns across individuals. In addition, we further extended our knowledge of how the FC for these three experiments spans among different timescales.
|
1304.1262
|
Conrad Sanderson
|
Arnold Wiliem, Yongkang Wong, Conrad Sanderson, Peter Hobson, Shaokang
Chen, Brian C. Lovell
|
Classification of Human Epithelial Type 2 Cell Indirect
Immunofluoresence Images via Codebook Based Descriptors
| null |
IEEE Workshop on Applications of Computer Vision (WACV), pp.
95-102, 2013
|
10.1109/WACV.2013.6475005
| null |
q-bio.CB cs.CV q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Anti-Nuclear Antibody (ANA) clinical pathology test is commonly used to
identify the existence of various diseases. A hallmark method for identifying
the presence of ANAs is the Indirect Immunofluorescence method on Human
Epithelial (HEp-2) cells, due to its high sensitivity and the large range of
antigens that can be detected. However, the method suffers from numerous
shortcomings, such as being subjective as well as time and labour intensive.
Computer Aided Diagnostic (CAD) systems have been developed to address these
problems, which automatically classify a HEp-2 cell image into one of its known
patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use
handpicked features to represent a HEp-2 cell image, which may only work in
limited scenarios. In this paper, we propose a cell classification system
composed of a dual-region codebook-based descriptor, combined with the Nearest
Convex Hull Classifier. We evaluate the performance of several variants of the
descriptor on two publicly available datasets: ICPR HEp-2 cell classification
contest dataset and the new SNPHEp-2 dataset. To our knowledge, this is the
first time codebook-based descriptors are applied and studied in this domain.
Experiments show that the proposed system has consistently high performance and
is more robust than two recent CAD systems.
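The codebook idea can be sketched as follows: cluster patch features into codewords, then describe an image by its histogram of codeword assignments. This is a generic bag-of-words sketch, not the authors' dual-region descriptor or Nearest Convex Hull classifier; the tiny k-means and all shapes are illustrative:

```python
import numpy as np

def kmeans(feats, k=8, iters=20, seed=0):
    """Tiny k-means to build the codebook (k and iters are illustrative)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return centers

def bow_descriptor(patches, centers):
    """Normalized histogram of nearest-codeword assignments for one image."""
    d = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

A classifier (the paper's Nearest Convex Hull, or any off-the-shelf alternative) then operates on these fixed-length descriptors.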
|
[
{
"created": "Thu, 4 Apr 2013 07:51:32 GMT",
"version": "v1"
}
] |
2013-04-05
|
[
[
"Wiliem",
"Arnold",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Hobson",
"Peter",
""
],
[
"Chen",
"Shaokang",
""
],
[
"Lovell",
"Brian C.",
""
]
] |
The Anti-Nuclear Antibody (ANA) clinical pathology test is commonly used to identify the existence of various diseases. A hallmark method for identifying the presence of ANAs is the Indirect Immunofluorescence method on Human Epithelial (HEp-2) cells, due to its high sensitivity and the large range of antigens that can be detected. However, the method suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems, which automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. In this paper, we propose a cell classification system composed of a dual-region codebook-based descriptor, combined with the Nearest Convex Hull Classifier. We evaluate the performance of several variants of the descriptor on two publicly available datasets: ICPR HEp-2 cell classification contest dataset and the new SNPHEp-2 dataset. To our knowledge, this is the first time codebook-based descriptors are applied and studied in this domain. Experiments show that the proposed system has consistently high performance and is more robust than two recent CAD systems.
|
1704.02577
|
Myrl Marmarelis
|
Myrl G. Marmarelis
|
Efficient and Robust Polylinear Analysis of Noisy Time Series
|
6 pages, 9 figures
| null | null | null |
q-bio.QM stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A method is proposed to generate an optimal fit of a number of connected
linear trend segments onto time-series data. To be able to efficiently handle
many lines, the method employs a stochastic search procedure to determine
optimal transition point locations. Traditional methods use exhaustive grid
searches, which severely limit the scale of the problems for which they can be
utilized. The proposed approach is tried against time series with severe noise
to demonstrate its robustness, and then it is applied to real medical data as
an illustrative example.
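A minimal sketch of the approach (the knot-perturbation distribution, step size, and acceptance rule are assumptions, not the paper's exact procedure): fit connected segments by least squares over a hat-function basis, and randomly perturb the transition points, keeping improvements:

```python
import numpy as np

def piecewise_fit(x, y, knots):
    """Least-squares continuous piecewise-linear fit with given interior knots."""
    kx = np.concatenate(([x.min()], np.sort(knots), [x.max()]))
    # Hat-function basis: the curve is determined by its values at the knots.
    B = np.array([np.interp(x, kx, np.eye(len(kx))[i])
                  for i in range(len(kx))]).T
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    resid = float(((B @ coef - y) ** 2).sum())
    return kx, coef, resid

def stochastic_knot_search(x, y, n_knots=2, iters=300, seed=0):
    """Random-walk search over knot locations, keeping improvements."""
    rng = np.random.default_rng(seed)
    span = x.max() - x.min()
    lo, hi = x.min() + 1e-6 * span, x.max() - 1e-6 * span
    knots = rng.uniform(lo, hi, n_knots)
    best = piecewise_fit(x, y, knots)
    for _ in range(iters):
        cand = np.clip(knots + rng.normal(0, 0.05 * span, n_knots), lo, hi)
        fit = piecewise_fit(x, y, cand)
        if fit[2] < best[2]:
            knots, best = cand, fit
    return best
```

Because each candidate set of knots is evaluated in one least-squares solve, many knots can be handled without the exhaustive grid search the abstract criticizes.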
|
[
{
"created": "Sun, 9 Apr 2017 09:15:15 GMT",
"version": "v1"
}
] |
2017-04-11
|
[
[
"Marmarelis",
"Myrl G.",
""
]
] |
A method is proposed to generate an optimal fit of a number of connected linear trend segments onto time-series data. To be able to efficiently handle many lines, the method employs a stochastic search procedure to determine optimal transition point locations. Traditional methods use exhaustive grid searches, which severely limit the scale of the problems for which they can be utilized. The proposed approach is tried against time series with severe noise to demonstrate its robustness, and then it is applied to real medical data as an illustrative example.
|
1906.07794
|
Milad Mostavi
|
Milad Mostavi, Yu-Chiao Chiu, Yufei Huang, Yidong Chen
|
Convolutional neural network models for cancer type prediction based on
gene expression
|
34 pages, 5 figures. This paper was presented at ICIBM, June 2019, in
Columbus, Ohio, and will be published in the BMC Genomics journal. Keywords: Deep
Learning; Convolutional Neural Networks, The Cancer Genome Atlas; Cancer type
prediction; Cancer gene markers; Breast cancer subtype prediction
| null | null | null |
q-bio.GN cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Background: Precise prediction of cancer types is vital for cancer diagnosis
and therapy. Important cancer marker genes can be inferred through predictive
models. Several studies have attempted to build machine learning models for this
task; however, none has taken into consideration the effects of tissue of origin
that can potentially bias the identification of cancer markers. Results: In this
paper, we introduced several Convolutional Neural Network (CNN) models that
take unstructured gene expression inputs to classify tumor and non-tumor
samples into their designated cancer types or as normal. Based on different
designs of gene embeddings and convolution schemes, we implemented three CNN
models: 1D-CNN, 2D-Vanilla-CNN, and 2D-Hybrid-CNN. The models were trained and
tested on a combined set of 10,340 samples of 33 cancer types and 731 matched normal
tissues of The Cancer Genome Atlas (TCGA). Our models achieved excellent
prediction accuracies (93.9-95.0%) among 34 classes (33 cancers and normal).
Furthermore, we interpreted one of the models, the 1D-CNN model, with a
guided saliency technique and identified a total of 2,090 cancer markers (108
per class). The concordance of differential expression of these markers between
the cancer type they represent and others is confirmed. In breast cancer, for
instance, our model identified well-known markers, such as GATA3 and ESR1.
Finally, we extended the 1D-CNN model for prediction of breast cancer subtypes
and achieved an average accuracy of 88.42% among 5 subtypes. The codes can be
found at https://github.com/chenlabgccri/CancerTypePrediction.
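The 1D-CNN pipeline (convolution over the gene-expression vector, ReLU, pooling, softmax over classes) can be sketched as a bare numpy forward pass. The single conv layer, filter widths, and counts here are illustrative, not the architecture from the paper:

```python
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution: x (n_genes,), w (n_filters, width)."""
    f, k = w.shape
    return np.array([[w[j] @ x[i:i + k] for i in range(len(x) - k + 1)]
                     for j in range(f)])

def predict_proba(x, conv_w, dense_w):
    """Forward pass: conv -> ReLU -> global max-pool -> softmax over classes."""
    h = np.maximum(conv1d(x, conv_w), 0)   # ReLU feature maps
    pooled = h.max(axis=1)                 # global max-pool per filter
    logits = dense_w @ pooled
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()
```

Training (and the guided-saliency interpretation used to extract markers) would sit on top of this forward pass in a real deep-learning framework.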
|
[
{
"created": "Tue, 18 Jun 2019 20:27:35 GMT",
"version": "v1"
}
] |
2019-06-20
|
[
[
"Mostavi",
"Milad",
""
],
[
"Chiu",
"Yu-Chiao",
""
],
[
"Huang",
"Yufei",
""
],
[
"Chen",
"Yidong",
""
]
] |
Background: Precise prediction of cancer types is vital for cancer diagnosis and therapy. Important cancer marker genes can be inferred through predictive models. Several studies have attempted to build machine learning models for this task; however, none has taken into consideration the effects of tissue of origin that can potentially bias the identification of cancer markers. Results: In this paper, we introduced several Convolutional Neural Network (CNN) models that take unstructured gene expression inputs to classify tumor and non-tumor samples into their designated cancer types or as normal. Based on different designs of gene embeddings and convolution schemes, we implemented three CNN models: 1D-CNN, 2D-Vanilla-CNN, and 2D-Hybrid-CNN. The models were trained and tested on a combined set of 10,340 samples of 33 cancer types and 731 matched normal tissues of The Cancer Genome Atlas (TCGA). Our models achieved excellent prediction accuracies (93.9-95.0%) among 34 classes (33 cancers and normal). Furthermore, we interpreted one of the models, the 1D-CNN model, with a guided saliency technique and identified a total of 2,090 cancer markers (108 per class). The concordance of differential expression of these markers between the cancer type they represent and others is confirmed. In breast cancer, for instance, our model identified well-known markers, such as GATA3 and ESR1. Finally, we extended the 1D-CNN model for prediction of breast cancer subtypes and achieved an average accuracy of 88.42% among 5 subtypes. The codes can be found at https://github.com/chenlabgccri/CancerTypePrediction.
|
1501.02278
|
Jose H H Grisi-Filho Prof Dr
|
Jos\'e Henrique Hildebrand Grisi-Filho, Marcos Amaku
|
Caracteriza\c{c}\~ao de circuitos pecu\'arios com base em redes de
movimenta\c{c}\~ao de animais
|
46 pages, PhD Thesis, Portuguese
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/3.0/
|
A network is a set of nodes that are linked together by a set of edges.
Networks can represent any set of objects that have relations among themselves.
Communities are sets of nodes that are related in an important way, probably
sharing common properties and/or playing similar roles within a network. When
network analysis is applied to study the livestock movement patterns, the
epidemiological units of interest (farm premises, counties, states, countries,
etc.) are represented as nodes, and animal movements between the nodes are
represented as the edges of a network. Unraveling a network structure, and
hence the trade preferences and pathways, could be very useful to a researcher
or a decision-maker. We implemented a community detection algorithm to find
livestock communities that is consistent with the definition of a livestock
production zone, assuming that a community is a group of farm premises in which
an animal is more likely to stay during its lifetime than expected by chance.
We applied this algorithm to the network of animal movements made within
the State of Mato Grosso for the year 2007. This database holds information
about 87,899 premises and 521,431 movements throughout the year, totaling
15,844,779 animals moved. The community detection algorithm achieved a network
partition that shows a clear geographical and commercial pattern, two crucial
features to preventive veterinary medicine applications, and also has a
meaningful interpretation in trade networks where links emerge from the choice
of trader nodes.
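A weighted label-propagation pass gives the flavor of movement-based community detection, though it is only a generic stand-in for the algorithm developed in the thesis:

```python
import random
from collections import defaultdict

def label_propagation(edges, iters=10, seed=0):
    """Weighted label propagation on an undirected movement network.
    edges: iterable of (premise_a, premise_b, n_animals_moved)."""
    random.seed(seed)
    nbrs = defaultdict(list)
    for u, v, w in edges:
        nbrs[u].append((v, w))
        nbrs[v].append((u, w))
    label = {node: node for node in nbrs}   # every premise starts alone
    nodes = list(nbrs)
    for _ in range(iters):
        random.shuffle(nodes)
        for node in nodes:
            # Adopt the label carrying the most animal traffic to this node.
            score = defaultdict(float)
            for m, w in nbrs[node]:
                score[label[m]] += w
            label[node] = max(score, key=score.get)
    return label
```

Weighting the votes by animals moved encodes, roughly, the abstract's criterion that an animal should be more likely to stay inside its community than expected by chance.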
|
[
{
"created": "Fri, 9 Jan 2015 21:24:48 GMT",
"version": "v1"
}
] |
2015-01-13
|
[
[
"Grisi-Filho",
"José Henrique Hildebrand",
""
],
[
"Amaku",
"Marcos",
""
]
] |
A network is a set of nodes that are linked together by a set of edges. Networks can represent any set of objects that have relations among themselves. Communities are sets of nodes that are related in an important way, probably sharing common properties and/or playing similar roles within a network. When network analysis is applied to study the livestock movement patterns, the epidemiological units of interest (farm premises, counties, states, countries, etc.) are represented as nodes, and animal movements between the nodes are represented as the edges of a network. Unraveling a network structure, and hence the trade preferences and pathways, could be very useful to a researcher or a decision-maker. We implemented a community detection algorithm to find livestock communities that is consistent with the definition of a livestock production zone, assuming that a community is a group of farm premises in which an animal is more likely to stay during its lifetime than expected by chance. We applied this algorithm to the network of animal movements made within the State of Mato Grosso for the year 2007. This database holds information about 87,899 premises and 521,431 movements throughout the year, totaling 15,844,779 animals moved. The community detection algorithm achieved a network partition that shows a clear geographical and commercial pattern, two crucial features to preventive veterinary medicine applications, and also has a meaningful interpretation in trade networks where links emerge from the choice of trader nodes.
|
0909.4158
|
Thierry Rabilloud
|
Thierry Rabilloud (BBSI), Ali R Vaezzadeh, Noelle Potier, C\'ecile
Lelong (BBSI), Emmanuelle Leize-Wagner, Mireille Chevallet (BBSI)
|
Power and limitations of electrophoretic separations in proteomics
strategies
| null |
Mass Spectrometry Reviews 28, 5 (2009) 816-43
|
10.1002/mas.20204
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proteomics can be defined as the large-scale analysis of proteins. Due to the
complexity of biological systems, it is necessary to combine various
separation techniques prior to mass spectrometry. These techniques, dealing
with proteins or peptides, can rely on chromatography or electrophoresis. In
this review, the electrophoretic techniques are under scrutiny. Their
principles are recalled, and their applications for peptide and protein
separations are presented and critically discussed. In addition, the features
that are specific to gel electrophoresis and that interplay with mass
spectrometry (i.e., protein detection after electrophoresis, and the process
leading from a gel piece to a solution of peptides) are also discussed.
|
[
{
"created": "Wed, 23 Sep 2009 09:29:36 GMT",
"version": "v1"
}
] |
2009-09-24
|
[
[
"Rabilloud",
"Thierry",
"",
"BBSI"
],
[
"Vaezzadeh",
"Ali R",
"",
"BBSI"
],
[
"Potier",
"Noelle",
"",
"BBSI"
],
[
"Lelong",
"Cécile",
"",
"BBSI"
],
[
"Leize-Wagner",
"Emmanuelle",
"",
"BBSI"
],
[
"Chevallet",
"Mireille",
"",
"BBSI"
]
] |
Proteomics can be defined as the large-scale analysis of proteins. Due to the complexity of biological systems, it is necessary to combine various separation techniques prior to mass spectrometry. These techniques, dealing with proteins or peptides, can rely on chromatography or electrophoresis. In this review, the electrophoretic techniques are under scrutiny. Their principles are recalled, and their applications for peptide and protein separations are presented and critically discussed. In addition, the features that are specific to gel electrophoresis and that interplay with mass spectrometry (i.e., protein detection after electrophoresis, and the process leading from a gel piece to a solution of peptides) are also discussed.
|
1605.03076
|
Bernhard Mehlig
|
M. Rafajlovic, A. Emanuelsson, K. Johannesson, R. K. Butlin, B. Mehlig
|
A universal mechanism generating clusters of differentiated loci during
divergence-with-migration
|
32 pages, 4 figures, 1 table, supplementary material
|
Evolution 70 (2016) 1609
|
10.1111/evo.12957
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genome-wide patterns of genetic divergence reveal mechanisms of adaptation
under gene flow. Empirical data show that divergence is mostly concentrated in
narrow genomic regions. This pattern may arise because differentiated loci
protect nearby mutations from gene flow, but recent theory suggests this
mechanism is insufficient to explain the emergence of concentrated
differentiation during biologically realistic timescales. Critically, earlier
theory neglects an inevitable consequence of genetic drift: stochastic loss of
local genomic divergence. Here we demonstrate that the rate of stochastic loss
of weak local differentiation increases with recombination distance to a
strongly diverged locus and, above a critical recombination distance, local
loss is faster than local `gain' of new differentiation. Under high migration
and weak selection this critical recombination distance is much smaller than
the total recombination distance of the genomic region under selection.
Consequently, divergence between populations increases by net gain of new
differentiation within the critical recombination distance, resulting in
tightly-linked clusters of divergence. The mechanism responsible is the balance
between stochastic loss and gain of weak local differentiation, a mechanism
acting universally throughout the genome. Our results will help to explain
empirical observations and lead to novel predictions regarding changes in
genomic architectures during adaptive divergence.
|
[
{
"created": "Tue, 10 May 2016 16:01:59 GMT",
"version": "v1"
}
] |
2017-02-21
|
[
[
"Rafajlovic",
"M.",
""
],
[
"Emanuelsson",
"A.",
""
],
[
"Johannesson",
"K.",
""
],
[
"Butlin",
"R. K.",
""
],
[
"Mehlig",
"B.",
""
]
] |
Genome-wide patterns of genetic divergence reveal mechanisms of adaptation under gene flow. Empirical data show that divergence is mostly concentrated in narrow genomic regions. This pattern may arise because differentiated loci protect nearby mutations from gene flow, but recent theory suggests this mechanism is insufficient to explain the emergence of concentrated differentiation during biologically realistic timescales. Critically, earlier theory neglects an inevitable consequence of genetic drift: stochastic loss of local genomic divergence. Here we demonstrate that the rate of stochastic loss of weak local differentiation increases with recombination distance to a strongly diverged locus and, above a critical recombination distance, local loss is faster than local `gain' of new differentiation. Under high migration and weak selection this critical recombination distance is much smaller than the total recombination distance of the genomic region under selection. Consequently, divergence between populations increases by net gain of new differentiation within the critical recombination distance, resulting in tightly-linked clusters of divergence. The mechanism responsible is the balance between stochastic loss and gain of weak local differentiation, a mechanism acting universally throughout the genome. Our results will help to explain empirical observations and lead to novel predictions regarding changes in genomic architectures during adaptive divergence.
|
1807.00061
|
Elisenda Feliu
|
Meritxell S\'aez, Carsten Wiuf, Elisenda Feliu
|
Nonnegative linear elimination for chemical reaction networks
| null | null | null | null |
q-bio.MN math.DS physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider linear elimination of variables in steady state equations of a
chemical reaction network. Particular subsets of variables, corresponding to
sets of so-called reactant-noninteracting species, are introduced. The steady
state equations for the variables in such a set, taken together with potential
linear conservation laws in the variables, define a linear system of equations.
We give conditions that guarantee that the solution to this system is
nonnegative, provided it is unique. The results are framed in terms of spanning
forests of a particular multidigraph derived from the reaction network and
thereby conditions for uniqueness and nonnegativity of a solution are derived
by means of the multidigraph. Though our motivation comes from applications in
systems biology, the results have general applicability in applied sciences.
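A toy example of the phenomenon the abstract describes (not taken from the paper): for the reversible isomerization X1 <-> X2 with rate constants k1, k2 and total mass T, the steady-state equation together with the conservation law forms a linear system whose unique solution is automatically nonnegative:

```latex
k_1 x_1 - k_2 x_2 = 0, \qquad x_1 + x_2 = T
\;\Longrightarrow\;
x_1 = \frac{k_2\,T}{k_1 + k_2} \ge 0, \qquad
x_2 = \frac{k_1\,T}{k_1 + k_2} \ge 0 .
```

The paper's contribution is to give graph-theoretic (spanning-forest) conditions guaranteeing this kind of nonnegativity for much larger networks.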
|
[
{
"created": "Fri, 29 Jun 2018 20:34:59 GMT",
"version": "v1"
}
] |
2018-07-03
|
[
[
"Sáez",
"Meritxell",
""
],
[
"Wiuf",
"Carsten",
""
],
[
"Feliu",
"Elisenda",
""
]
] |
We consider linear elimination of variables in steady state equations of a chemical reaction network. Particular subsets of variables, corresponding to sets of so-called reactant-noninteracting species, are introduced. The steady state equations for the variables in such a set, taken together with potential linear conservation laws in the variables, define a linear system of equations. We give conditions that guarantee that the solution to this system is nonnegative, provided it is unique. The results are framed in terms of spanning forests of a particular multidigraph derived from the reaction network and thereby conditions for uniqueness and nonnegativity of a solution are derived by means of the multidigraph. Though our motivation comes from applications in systems biology, the results have general applicability in applied sciences.
|
1612.03760
|
Daniele Marinazzo
|
Javier Rasero, Mario Pellicoro, Leonardo Angelini, Jesus M. Cortes,
Daniele Marinazzo, and Sebastiano Stramaglia
|
Consensus clustering approach to group brain connectivity matrices
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel approach rooted in the notion of consensus clustering, a strategy
developed for community detection in complex networks, is proposed to cope with
the heterogeneity that characterizes connectivity matrices in health and
disease. The method can be summarized as follows:
(i) define, for each node, a distance matrix for the set of subjects by
comparing the connectivity pattern of that node in all pairs of subjects; (ii)
cluster the distance matrix for each node; (iii) build the consensus network
from the corresponding partitions; (iv) extract groups of subjects by finding
the communities of the consensus network thus obtained.
Unlike previous implementations of consensus clustering, we propose to use
the consensus strategy to combine the information arising
from the connectivity patterns of each node. The proposed approach may be seen
either as an exploratory technique or as an unsupervised pre-training step to
help the subsequent construction of a supervised classifier. Applications on a
toy model and two real data sets show the effectiveness of the proposed
methodology, which represents heterogeneity of a set of subjects in terms of a
weighted network, the consensus matrix.
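Steps (i)-(iv) can be prototyped in a few lines. The per-node clustering used here (connected components under a median distance threshold) and the 0.5 consensus cutoff are simplifying assumptions, not the paper's choices:

```python
import numpy as np

def components(adj):
    """Connected-component labels of a boolean adjacency matrix."""
    n = len(adj)
    label, c = -np.ones(n, int), 0
    for s in range(n):
        if label[s] >= 0:
            continue
        stack = [s]
        while stack:
            u = stack.pop()
            if label[u] >= 0:
                continue
            label[u] = c
            stack.extend(np.nonzero(adj[u])[0].tolist())
        c += 1
    return label

def consensus_groups(conn, thresh=0.5):
    """conn: (subjects, nodes, nodes) connectivity matrices."""
    S, N, _ = conn.shape
    consensus = np.zeros((S, S))
    for node in range(N):
        rows = conn[:, node, :]                       # (i) node's pattern per subject
        d = np.linalg.norm(rows[:, None] - rows[None], axis=2)
        part = components(d <= np.median(d))          # (ii) crude clustering
        consensus += (part[:, None] == part[None])    # (iii) co-assignment counts
    consensus /= N
    return components(consensus >= thresh)            # (iv) subject groups
```

The output is one group label per subject, obtained by combining the node-wise partitions into the consensus matrix exactly as the abstract outlines.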
|
[
{
"created": "Mon, 12 Dec 2016 16:10:05 GMT",
"version": "v1"
},
{
"created": "Mon, 8 May 2017 12:56:16 GMT",
"version": "v2"
}
] |
2017-05-09
|
[
[
"Rasero",
"Javier",
""
],
[
"Pellicoro",
"Mario",
""
],
[
"Angelini",
"Leonardo",
""
],
[
"Cortes",
"Jesus M.",
""
],
[
"Marinazzo",
"Daniele",
""
],
[
"Stramaglia",
"Sebastiano",
""
]
] |
A novel approach rooted in the notion of consensus clustering, a strategy developed for community detection in complex networks, is proposed to cope with the heterogeneity that characterizes connectivity matrices in health and disease. The method can be summarized as follows: (i) define, for each node, a distance matrix for the set of subjects by comparing the connectivity pattern of that node in all pairs of subjects; (ii) cluster the distance matrix for each node; (iii) build the consensus network from the corresponding partitions; (iv) extract groups of subjects by finding the communities of the consensus network thus obtained. Unlike previous implementations of consensus clustering, we propose to use the consensus strategy to combine the information arising from the connectivity patterns of each node. The proposed approach may be seen either as an exploratory technique or as an unsupervised pre-training step to help the subsequent construction of a supervised classifier. Applications on a toy model and two real data sets show the effectiveness of the proposed methodology, which represents heterogeneity of a set of subjects in terms of a weighted network, the consensus matrix.
|
2209.03324
|
Casey Barkan
|
Casey O. Barkan and Robijn F. Bruinsma
|
Geometric Signatures of Switching Behavior in Mechanobiology
|
6 pages, 3 figures
| null |
10.1016/j.bpj.2022.11.1173
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proteins involved in cells' mechanobiological processes have evolved
specialized and surprising responses to applied forces. Biochemical
transformations that show catch-to-slip switching and force-induced pathway
switching serve important functions in cell adhesion, mechano-sensing and
signaling, and protein folding. We show that these switching behaviors are
generated by singularities in the flow field that describes force-induced
deformation of bound and transition states. These singularities allow for a
complete characterization of switching mechanisms in 2-dimensional (2D) free
energy landscapes, and provide a path toward elucidating novel forms of
switching in higher dimensional models. Remarkably, the singularity that
generates a catch-slip switch occurs in almost every 2D free energy landscape,
implying that almost any bond admitting a 2D model will exhibit catch-slip
behavior under appropriate force. We apply our analysis to models of P-selectin
and antigen extraction to illustrate how these singularities provide an
intuitive framework for explaining known behaviors and predicting new
behaviors.
|
[
{
"created": "Wed, 7 Sep 2022 17:33:04 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Sep 2022 09:53:23 GMT",
"version": "v2"
},
{
"created": "Sun, 26 Feb 2023 18:48:01 GMT",
"version": "v3"
}
] |
2023-07-19
|
[
[
"Barkan",
"Casey O.",
""
],
[
"Bruinsma",
"Robijn F.",
""
]
] |
The proteins involved in cells' mechanobiological processes have evolved specialized and surprising responses to applied forces. Biochemical transformations that show catch-to-slip switching and force-induced pathway switching serve important functions in cell adhesion, mechano-sensing and signaling, and protein folding. We show that these switching behaviors are generated by singularities in the flow field that describes force-induced deformation of bound and transition states. These singularities allow for a complete characterization of switching mechanisms in 2-dimensional (2D) free energy landscapes, and provide a path toward elucidating novel forms of switching in higher dimensional models. Remarkably, the singularity that generates a catch-slip switch occurs in almost every 2D free energy landscape, implying that almost any bond admitting a 2D model will exhibit catch-slip behavior under appropriate force. We apply our analysis to models of P-selectin and antigen extraction to illustrate how these singularities provide an intuitive framework for explaining known behaviors and predicting new behaviors.
|
0909.2985
|
Jerome Vanclay
|
Jerome K. Vanclay, Peter J. Sands
|
Calibrating the self-thinning frontier
|
Typos corrected, missing reference added
|
Forest Ecology and Management 259 (2009) 81-85
|
10.1016/j.foreco.2009.09.045
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Calibration of the self-thinning frontier in even-aged monocultures is
hampered by scarce data and by subjective decisions about the proximity of data
to the frontier. We present a simple model that applies to observations of the
full trajectory of stand mean diameter across a range of densities not close to
the frontier. Development of the model is based on a consideration of the slope
s = ln(Nt/Nt-1)/ln(Dt/Dt-1) of a log-transformed plot of stocking Nt and mean
stem diameter Dt at time t. This avoids the need for subjective decisions about
limiting density and allows the use of abundant data further from the
self-thinning frontier. The model can be solved analytically and yields
equations for the stocking and the stand basal area as an explicit function of
stem diameter. It predicts that self-thinning may be regulated by the maximum
basal area with a slope of -2. The significance of other predictor variables
offers an effective test of competing self-thinning theories such as Yoda's -3/2
power rule and Reineke's stand density index.
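The predicted slope of -2 under basal-area regulation follows in two lines: if stand basal area $B = c\,N D^2$ (with $c$ a constant) is held at its maximum $B_{\max}$ along the frontier, then

```latex
\ln N = \ln\!\left(\frac{B_{\max}}{c}\right) - 2\ln D
\quad\Longrightarrow\quad
s = \frac{d\ln N}{d\ln D} = -2 .
```

This contrasts with Yoda's rule (slope -3/2 in the biomass-density plane) and makes the two theories empirically distinguishable from the slope estimate.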
|
[
{
"created": "Wed, 16 Sep 2009 11:33:42 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2009 10:19:54 GMT",
"version": "v2"
}
] |
2009-11-17
|
[
[
"Vanclay",
"Jerome K.",
""
],
[
"Sands",
"Peter J.",
""
]
] |
Calibration of the self-thinning frontier in even-aged monocultures is hampered by scarce data and by subjective decisions about the proximity of data to the frontier. We present a simple model that applies to observations of the full trajectory of stand mean diameter across a range of densities not close to the frontier. Development of the model is based on a consideration of the slope s = ln(Nt/Nt-1)/ln(Dt/Dt-1) of a log-transformed plot of stocking Nt and mean stem diameter Dt at time t. This avoids the need for subjective decisions about limiting density and allows the use of abundant data further from the self-thinning frontier. The model can be solved analytically and yields equations for the stocking and the stand basal area as an explicit function of stem diameter. It predicts that self-thinning may be regulated by the maximum basal area with a slope of -2. The significance of other predictor variables offers an effective test of competing self-thinning theories such as Yoda's -3/2 power rule and Reineke's stand density index.
|
1702.07649
|
Quinton Skilling
|
Quinton M Skilling, Daniel Maruyama, Nicolette Ognjanovski, Sara J
Aton, and Michal Zochowski
|
Criticality, stability, competition, and consolidation of new
representations in brain networks
|
6 Figures
| null | null | null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The brain forms and stores distributed representations from sparse external
input that compete for neuronal resources with already stored memory traces. It
is unclear what dynamical properties of neural systems allow formation and
subsequent consolidation of new, distributed memory representations under these
conditions. Here we use analytical, computational, and experimental approaches
to show that a dynamical regime near a phase-transition in neuronal network
activity (i.e. criticality) may play an important role in this process. Our
results reveal that near-critical dynamics are necessary to stabilize and store
new sparsely driven representations when they compete with native network
states.
|
[
{
"created": "Fri, 24 Feb 2017 16:30:54 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Feb 2018 17:47:23 GMT",
"version": "v2"
}
] |
2018-02-08
|
[
[
"Skilling",
"Quinton M",
""
],
[
"Maruyama",
"Daniel",
""
],
[
"Ognjanovski",
"Nicolette",
""
],
[
"Aton",
"Sara J",
""
],
[
"Zochowski",
"Michal",
""
]
] |
The brain forms and stores distributed representations from sparse external input that compete for neuronal resources with already stored memory traces. It is unclear what dynamical properties of neural systems allow formation and subsequent consolidation of new, distributed memory representations under these conditions. Here we use analytical, computational, and experimental approaches to show that a dynamical regime near a phase-transition in neuronal network activity (i.e. criticality) may play an important role in this process. Our results reveal that near-critical dynamics are necessary to stabilize and store new sparsely driven representations when they compete with native network states.
|
1008.5166
|
Saket Navlakha
|
Saket Navlakha and Carl Kingsford
|
Network Archaeology: Uncovering Ancient Networks from Present-day
Interactions
|
16 pages, 10 figures
| null |
10.1371/journal.pcbi.1001119
| null |
q-bio.MN cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Often questions arise about old or extinct networks. What proteins interacted
in a long-extinct ancestor species of yeast? Who were the central players in
the Last.fm social network 3 years ago? Our ability to answer such questions
has been limited by the unavailability of past versions of networks. To
overcome these limitations, we propose several algorithms for reconstructing a
network's history of growth given only the network as it exists today and a
generative model by which the network is believed to have evolved. Our
likelihood-based method finds a probable previous state of the network by
reversing the forward growth model. This approach retains node identities so
that the history of individual nodes can be tracked. We apply these algorithms
to uncover older, non-extant biological and social networks believed to have
grown via several models, including duplication-mutation with complementarity,
forest fire, and preferential attachment. Through experiments on both synthetic
and real-world data, we find that our algorithms can estimate node arrival
times, identify anchor nodes from which new nodes copy links, and can reveal
significant features of networks that have long since disappeared.
|
[
{
"created": "Mon, 30 Aug 2010 21:00:27 GMT",
"version": "v1"
}
] |
2015-05-19
|
[
[
"Navlakha",
"Saket",
""
],
[
"Kingsford",
"Carl",
""
]
] |
Often questions arise about old or extinct networks. What proteins interacted in a long-extinct ancestor species of yeast? Who were the central players in the Last.fm social network 3 years ago? Our ability to answer such questions has been limited by the unavailability of past versions of networks. To overcome these limitations, we propose several algorithms for reconstructing a network's history of growth given only the network as it exists today and a generative model by which the network is believed to have evolved. Our likelihood-based method finds a probable previous state of the network by reversing the forward growth model. This approach retains node identities so that the history of individual nodes can be tracked. We apply these algorithms to uncover older, non-extant biological and social networks believed to have grown via several models, including duplication-mutation with complementarity, forest fire, and preferential attachment. Through experiments on both synthetic and real-world data, we find that our algorithms can estimate node arrival times, identify anchor nodes from which new nodes copy links, and can reveal significant features of networks that have long since disappeared.
|
1403.2352
|
Benoit Gauzens
|
Benoit Gauzens, Elisa Th\'ebault, G\'erard Lacroix, St\'ephane
Legendre
|
Trophic groups and modules: two levels of group detection in food webs
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Within food webs, species can be partitioned into groups according to various
criteria. Two notions have received particular attention: trophic groups, which
have been used for decades in the ecological literature, and more recently,
modules. The relationship between these two group definitions remains unknown
in empirical food webs because they have so far been studied separately. While
recent developments in network theory have led to efficient methods for
detecting modules in food webs, the determination of trophic groups (sets of
species that are functionally similar) is based on subjective expert knowledge.
Here, we develop a novel algorithm for trophic group detection. We apply this
method to several well-resolved empirical food webs, and show that aggregation
into trophic groups allows the simplification of food webs while preserving
their information content. Furthermore, we reveal a 2-level hierarchical
structure where modules partition food webs into large bottom-top trophic
pathways whereas trophic groups further partition these pathways into sets of
species with similar trophic connections. Bringing together trophic groups and
modules provides new perspectives to the study of dynamical and functional
consequences of food-web structure, bridging topological analysis and dynamical
systems. Trophic groups have a clear ecological meaning in terms of trophic
similarity, and are found to provide a trade-off between network complexity and
information loss.
|
[
{
"created": "Fri, 21 Feb 2014 09:59:31 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Apr 2015 12:51:37 GMT",
"version": "v2"
}
] |
2015-04-14
|
[
[
"Gauzens",
"Benoit",
""
],
[
"Thébault",
"Elisa",
""
],
[
"Lacroix",
"Gérard",
""
],
[
"Legendre",
"Stéphane",
""
]
] |
Within food webs, species can be partitioned into groups according to various criteria. Two notions have received particular attention: trophic groups, which have been used for decades in the ecological literature, and more recently, modules. The relationship between these two group definitions remains unknown in empirical food webs because they have so far been studied separately. While recent developments in network theory have led to efficient methods for detecting modules in food webs, the determination of trophic groups (sets of species that are functionally similar) is based on subjective expert knowledge. Here, we develop a novel algorithm for trophic group detection. We apply this method to several well-resolved empirical food webs, and show that aggregation into trophic groups allows the simplification of food webs while preserving their information content. Furthermore, we reveal a 2-level hierarchical structure where modules partition food webs into large bottom-top trophic pathways whereas trophic groups further partition these pathways into sets of species with similar trophic connections. Bringing together trophic groups and modules provides new perspectives to the study of dynamical and functional consequences of food-web structure, bridging topological analysis and dynamical systems. Trophic groups have a clear ecological meaning in terms of trophic similarity, and are found to provide a trade-off between network complexity and information loss.
|
2309.14523
|
Christian Klos
|
Christian Klos, Raoul-Martin Memmesheimer
|
Smooth Exact Gradient Descent Learning in Spiking Neural Networks
| null | null | null | null |
q-bio.NC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial neural networks are highly successfully trained with
backpropagation. For spiking neural networks, however, a similar gradient
descent scheme seems prohibitive due to the sudden, disruptive (dis-)appearance
of spikes. Here, we demonstrate exact gradient descent learning based on
spiking dynamics that change only continuously. These are generated by neuron
models whose spikes vanish and appear at the end of a trial, where they do not
influence other neurons anymore. This also enables gradient-based spike
addition and removal. We apply our learning scheme to induce and continuously
move spikes to desired times, in single neurons and recurrent networks.
Further, it achieves competitive performance in a benchmark task using deep,
initially silent networks. Our results show how non-disruptive learning is
possible despite discrete spikes.
|
[
{
"created": "Mon, 25 Sep 2023 20:51:00 GMT",
"version": "v1"
}
] |
2023-09-27
|
[
[
"Klos",
"Christian",
""
],
[
"Memmesheimer",
"Raoul-Martin",
""
]
] |
Artificial neural networks are highly successfully trained with backpropagation. For spiking neural networks, however, a similar gradient descent scheme seems prohibitive due to the sudden, disruptive (dis-)appearance of spikes. Here, we demonstrate exact gradient descent learning based on spiking dynamics that change only continuously. These are generated by neuron models whose spikes vanish and appear at the end of a trial, where they do not influence other neurons anymore. This also enables gradient-based spike addition and removal. We apply our learning scheme to induce and continuously move spikes to desired times, in single neurons and recurrent networks. Further, it achieves competitive performance in a benchmark task using deep, initially silent networks. Our results show how non-disruptive learning is possible despite discrete spikes.
|
2006.08356
|
Luigi Brugnano
|
Luigi Brugnano, Felice Iavernaro, Paolo Zanzottera
|
The hidden side of COVID-19 spread in Italy
|
21 pages, 10 figures
|
Math Meth Appl Sci. (2020) 1-14
|
10.1002/mma.7039
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background. The paper concerns the SARS-CoV-2 (COVID-19) pandemic that,
starting from the end of February 2020, began spreading along the Italian
peninsula, first attacking small communities in the northern regions, and then
extending to the center and south of Italy, including the two main islands.
Objective. The creation of a forecast model that manages to alert the
decision-making bodies and, in particular, the healthcare system, to hinder the
emergence of any other pandemic outbreaks, or the arrival of subsequent
pandemic waves.
Methods. A new mathematical model to describe the pandemic is given. The
model includes the class of undiagnosed infected people, and has a multi-region
extension, to cope with the in-time and in-space heterogeneity of the epidemic.
Results. We obtain a robust and reliable tool for the forecast of the total
and active cases, which can be also used to simulate different scenarios.
Conclusions. We are able to address a number of issues, such as assessing the
adoption of the lockdown in Italy, which started on 11 March 2020, and how to
employ a rapid screening test campaign for containing the epidemic.
|
[
{
"created": "Thu, 11 Jun 2020 18:26:15 GMT",
"version": "v1"
}
] |
2020-11-25
|
[
[
"Brugnano",
"Luigi",
""
],
[
"Iavernaro",
"Felice",
""
],
[
"Zanzottera",
"Paolo",
""
]
] |
Background. The paper concerns the SARS-CoV-2 (COVID-19) pandemic that, starting from the end of February 2020, began spreading along the Italian peninsula, first attacking small communities in the northern regions, and then extending to the center and south of Italy, including the two main islands. Objective. The creation of a forecast model that manages to alert the decision-making bodies and, in particular, the healthcare system, to hinder the emergence of any other pandemic outbreaks, or the arrival of subsequent pandemic waves. Methods. A new mathematical model to describe the pandemic is given. The model includes the class of undiagnosed infected people, and has a multi-region extension, to cope with the in-time and in-space heterogeneity of the epidemic. Results. We obtain a robust and reliable tool for the forecast of the total and active cases, which can be also used to simulate different scenarios. Conclusions. We are able to address a number of issues, such as assessing the adoption of the lockdown in Italy, which started on 11 March 2020, and how to employ a rapid screening test campaign for containing the epidemic.
|
2304.01348
|
James Stone Dr
|
James V Stone
|
Methods for Estimating Neural Information
|
7 pages, 2 figures
| null | null | null |
q-bio.NC cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Estimating the Shannon information associated with individual neurons is a
non-trivial problem. Three key methods used to estimate the mutual information
between neuron inputs and outputs are described, and a list of further readings
is provided.
|
[
{
"created": "Tue, 20 Dec 2022 12:13:48 GMT",
"version": "v1"
}
] |
2023-04-05
|
[
[
"Stone",
"James V",
""
]
] |
Estimating the Shannon information associated with individual neurons is a non-trivial problem. Three key methods used to estimate the mutual information between neuron inputs and outputs are described, and a list of further readings is provided.
|
q-bio/0611074
|
Pitman Damien
|
Janko Gravner (1), Damien Pitman (1), Sergey Gavrilets (2), ((1)
Mathematics Department, University of California, Davis, (2) Department of
Ecology and Evolutionary Biology and Mathematics, University of Tennessee,
Knoxville)
|
Percolation on fitness landscapes: effects of correlation, phenotype,
and incompatibilities
|
31 pages, 4 figures, 1 table
| null | null | null |
q-bio.PE
| null |
We study how correlations in the random fitness assignment may affect the
structure of fitness landscapes. We consider three classes of fitness models.
The first is a continuous phenotype space in which individuals are
characterized by a large number of continuously varying traits such as size,
weight, color, or concentrations of gene products which directly affect
fitness. The second is a simple model that explicitly describes
genotype-to-phenotype and phenotype-to-fitness maps allowing for neutrality at
both phenotype and fitness levels and resulting in a fitness landscape with
tunable correlation length. The third is a class of models in which particular
combinations of alleles or values of phenotypic characters are "incompatible"
in the sense that the resulting genotypes or phenotypes have reduced (or zero)
fitness. This class of models can be viewed as a generalization of the
canonical Bateson-Dobzhansky-Muller model of speciation. We also demonstrate
that the discrete NK model shares some signature properties of models with high
correlations. Throughout the paper, our focus is on the percolation threshold,
on the number, size and structure of connected clusters, and on the number of
viable genotypes.
|
[
{
"created": "Thu, 23 Nov 2006 04:16:41 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Gravner",
"Janko",
""
],
[
"Pitman",
"Damien",
""
],
[
"Gavrilets",
"Sergey",
""
]
] |
We study how correlations in the random fitness assignment may affect the structure of fitness landscapes. We consider three classes of fitness models. The first is a continuous phenotype space in which individuals are characterized by a large number of continuously varying traits such as size, weight, color, or concentrations of gene products which directly affect fitness. The second is a simple model that explicitly describes genotype-to-phenotype and phenotype-to-fitness maps allowing for neutrality at both phenotype and fitness levels and resulting in a fitness landscape with tunable correlation length. The third is a class of models in which particular combinations of alleles or values of phenotypic characters are "incompatible" in the sense that the resulting genotypes or phenotypes have reduced (or zero) fitness. This class of models can be viewed as a generalization of the canonical Bateson-Dobzhansky-Muller model of speciation. We also demonstrate that the discrete NK model shares some signature properties of models with high correlations. Throughout the paper, our focus is on the percolation threshold, on the number, size and structure of connected clusters, and on the number of viable genotypes.
|
0912.1637
|
Joshua Vogelstein
|
Joshua T. Vogelstein, Adam M. Packer, Tim A. Machado, Tanya Sippy,
Baktash Babadi, Rafael Yuste, Liam Paninski
|
Fast non-negative deconvolution for spike train inference from
population calcium imaging
|
22 pages, 10 figures
| null |
10.1152/jn.01073.2009
| null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Calcium imaging for observing spiking activity from large populations of
neurons is quickly gaining popularity. While the raw data are fluorescence
movies, the underlying spike trains are of interest. This work presents a fast
non-negative deconvolution filter to infer the approximately most likely spike
train for each neuron, given the fluorescence observations. This algorithm
outperforms optimal linear deconvolution (Wiener filtering) on both simulated
and biological data. The performance gains come from restricting the inferred
spike trains to be positive (using an interior-point method), unlike the Wiener
filter. The algorithm is fast enough that even when imaging over 100 neurons,
inference can be performed on the set of all observed traces faster than
real-time. Performing optimal spatial filtering on the images further refines
the estimates. Importantly, all the parameters required to perform the
inference can be estimated using only the fluorescence data, obviating the need
to perform joint electrophysiological and imaging calibration experiments.
|
[
{
"created": "Wed, 9 Dec 2009 19:19:19 GMT",
"version": "v1"
}
] |
2011-02-21
|
[
[
"Vogelstein",
"Joshua T.",
""
],
[
"Packer",
"Adam M.",
""
],
[
"Machado",
"Tim A.",
""
],
[
"Sippy",
"Tanya",
""
],
[
"Babadi",
"Baktash",
""
],
[
"Yuste",
"Rafael",
""
],
[
"Paninski",
"Liam",
""
]
] |
Calcium imaging for observing spiking activity from large populations of neurons is quickly gaining popularity. While the raw data are fluorescence movies, the underlying spike trains are of interest. This work presents a fast non-negative deconvolution filter to infer the approximately most likely spike train for each neuron, given the fluorescence observations. This algorithm outperforms optimal linear deconvolution (Wiener filtering) on both simulated and biological data. The performance gains come from restricting the inferred spike trains to be positive (using an interior-point method), unlike the Wiener filter. The algorithm is fast enough that even when imaging over 100 neurons, inference can be performed on the set of all observed traces faster than real-time. Performing optimal spatial filtering on the images further refines the estimates. Importantly, all the parameters required to perform the inference can be estimated using only the fluorescence data, obviating the need to perform joint electrophysiological and imaging calibration experiments.
|
0810.2946
|
Aleksandra Walczak
|
Aleksandra M. Walczak and Peter G. Wolynes
|
Gene-gene cooperativity in small networks
|
22 pages, 10 figures
| null |
10.1016/j.bpj.2009.03.005
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show how to construct a reduced description of interacting genes in noisy,
small regulatory networks using coupled binary "spin" variables. Treating both
the protein number and gene expression state variables stochastically and on
equal footing we propose a mapping which connects the molecular level
description of networks to the binary representation. We construct a phase
diagram indicating when genes can be considered to be independent and when the
coupling between them cannot be neglected leading to synchrony or correlations.
We find that an appropriately mapped Boolean description reproduces the
probabilities of gene expression states of the full stochastic system very well
and can be transferred to examples of self-regulatory systems with a larger
number of gene copies.
|
[
{
"created": "Thu, 16 Oct 2008 15:28:58 GMT",
"version": "v1"
}
] |
2015-05-13
|
[
[
"Walczak",
"Aleksandra M.",
""
],
[
"Wolynes",
"Peter G.",
""
]
] |
We show how to construct a reduced description of interacting genes in noisy, small regulatory networks using coupled binary "spin" variables. Treating both the protein number and gene expression state variables stochastically and on equal footing we propose a mapping which connects the molecular level description of networks to the binary representation. We construct a phase diagram indicating when genes can be considered to be independent and when the coupling between them cannot be neglected leading to synchrony or correlations. We find that an appropriately mapped Boolean description reproduces the probabilities of gene expression states of the full stochastic system very well and can be transferred to examples of self-regulatory systems with a larger number of gene copies.
|
2201.04927
|
Alain Destexhe
|
Davide Forcella, Alberto Romagnoni, Alain Destexhe
|
Neuronal cable equations derived from the hydrodynamic motion of charged
particles
| null | null |
10.1007/s00033-023-01986-y
| null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Neuronal cable theory is usually derived from an electric analogue of the
membrane, which contrasts with the slow movement of ions in aqueous media. We
show here that it is possible to derive neuronal cable equations from a
different perspective, based on the laws of hydrodynamic motion of charged
particles (Navier-Stokes equations). This results in similar cable equations,
but with additional contributions arising from nonlinear interactions inherent
to fluid dynamics, and which may shape the integrative properties of the
neurons.
|
[
{
"created": "Thu, 13 Jan 2022 12:52:09 GMT",
"version": "v1"
}
] |
2023-04-26
|
[
[
"Forcella",
"Davide",
""
],
[
"Romagnoni",
"Alberto",
""
],
[
"Destexhe",
"Alain",
""
]
] |
Neuronal cable theory is usually derived from an electric analogue of the membrane, which contrasts with the slow movement of ions in aqueous media. We show here that it is possible to derive neuronal cable equations from a different perspective, based on the laws of hydrodynamic motion of charged particles (Navier-Stokes equations). This results in similar cable equations, but with additional contributions arising from nonlinear interactions inherent to fluid dynamics, and which may shape the integrative properties of the neurons.
|
2406.02659
|
Andrew Luo
|
Jacob Yeung, Andrew F. Luo, Gabriel Sarch, Margaret M. Henderson, Deva
Ramanan, Michael J. Tarr
|
Neural Representations of Dynamic Visual Stimuli
| null | null | null | null |
q-bio.NC cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Humans experience the world through constantly changing visual stimuli, where
scenes can shift and move, change in appearance, and vary in distance. The
dynamic nature of visual perception is a fundamental aspect of our daily lives,
yet the large majority of research on object and scene processing, particularly
using fMRI, has focused on static stimuli. While studies of static image
perception are attractive due to their computational simplicity, they impose a
strong non-naturalistic constraint on our investigation of human vision. In
contrast, dynamic visual stimuli offer a more ecologically-valid approach but
present new challenges due to the interplay between spatial and temporal
information, making it difficult to disentangle the representations of stable
image features and motion. To overcome this limitation -- given dynamic inputs,
we explicitly decouple the modeling of static image representations and motion
representations in the human brain. Three results demonstrate the feasibility
of this approach. First, we show that visual motion information as optical flow
can be predicted (or decoded) from brain activity as measured by fMRI. Second,
we show that this predicted motion can be used to realistically animate static
images using a motion-conditioned video diffusion model (where the motion is
driven by fMRI brain activity). Third, we show prediction in the reverse
direction: existing video encoders can be fine-tuned to predict fMRI brain
activity from video imagery, and can do so more effectively than image
encoders. This foundational work offers a novel, extensible framework for
interpreting how the human brain processes dynamic visual information.
|
[
{
"created": "Tue, 4 Jun 2024 17:59:49 GMT",
"version": "v1"
}
] |
2024-06-06
|
[
[
"Yeung",
"Jacob",
""
],
[
"Luo",
"Andrew F.",
""
],
[
"Sarch",
"Gabriel",
""
],
[
"Henderson",
"Margaret M.",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Tarr",
"Michael J.",
""
]
] |
Humans experience the world through constantly changing visual stimuli, where scenes can shift and move, change in appearance, and vary in distance. The dynamic nature of visual perception is a fundamental aspect of our daily lives, yet the large majority of research on object and scene processing, particularly using fMRI, has focused on static stimuli. While studies of static image perception are attractive due to their computational simplicity, they impose a strong non-naturalistic constraint on our investigation of human vision. In contrast, dynamic visual stimuli offer a more ecologically-valid approach but present new challenges due to the interplay between spatial and temporal information, making it difficult to disentangle the representations of stable image features and motion. To overcome this limitation -- given dynamic inputs, we explicitly decouple the modeling of static image representations and motion representations in the human brain. Three results demonstrate the feasibility of this approach. First, we show that visual motion information as optical flow can be predicted (or decoded) from brain activity as measured by fMRI. Second, we show that this predicted motion can be used to realistically animate static images using a motion-conditioned video diffusion model (where the motion is driven by fMRI brain activity). Third, we show prediction in the reverse direction: existing video encoders can be fine-tuned to predict fMRI brain activity from video imagery, and can do so more effectively than image encoders. This foundational work offers a novel, extensible framework for interpreting how the human brain processes dynamic visual information.
|
2210.10901
|
Alberto Coccarelli
|
Alberto Coccarelli, Michael D. Nelson
|
Modeling Reactive Hyperemia to better understand and assess
Microvascular Function: a review of techniques
|
n/a
|
Annals of Biomedical Engineering 2023
|
10.1007/s10439-022-03134-5
| null |
q-bio.TO
|
http://creativecommons.org/licenses/by/4.0/
|
Reactive hyperemia is a well-established technique for the non-invasive
evaluation of the peripheral microcirculatory function, measured as the
magnitude of limb re-perfusion after a brief period of ischemia. Despite
widespread adoption by researchers and clinicians alike, many uncertainties
remain surrounding interpretation, compounded by patient-specific confounding
factors (such as blood pressure or the metabolic rate of the ischemic limb).
Mathematical modeling can accelerate our understanding of the physiology
underlying the reactive hyperemia response and guide in the estimation of
quantities which are difficult to measure experimentally. In this work, we aim
to provide a comprehensive guide for mathematical modeling techniques that can
be used for describing the key phenomena involved in the reactive hyperemia
response, alongside their limitations and advantages. The reported
methodologies can be used for investigating specific reactive hyperemia aspects
alone, or can be combined into a computational framework to be used in
(pre-)clinical settings.
|
[
{
"created": "Wed, 19 Oct 2022 21:42:27 GMT",
"version": "v1"
}
] |
2023-02-03
|
[
[
"Coccarelli",
"Alberto",
""
],
[
"Nelson",
"Michael D.",
""
]
] |
Reactive hyperemia is a well-established technique for the non-invasive evaluation of the peripheral microcirculatory function, measured as the magnitude of limb re-perfusion after a brief period of ischemia. Despite widespread adoption by researchers and clinicians alike, many uncertainties remain surrounding interpretation, compounded by patient-specific confounding factors (such as blood pressure or the metabolic rate of the ischemic limb). Mathematical modeling can accelerate our understanding of the physiology underlying the reactive hyperemia response and guide in the estimation of quantities which are difficult to measure experimentally. In this work, we aim to provide a comprehensive guide for mathematical modeling techniques that can be used for describing the key phenomena involved in the reactive hyperemia response, alongside their limitations and advantages. The reported methodologies can be used for investigating specific reactive hyperemia aspects alone, or can be combined into a computational framework to be used in (pre-)clinical settings.
|
2005.10517
|
Marco Storace PhD
|
Valentina Baruzzi, Matteo Lodi, Marco Storace, Andrey Shilnikov
|
Generalized half-center oscillators with short-term synaptic plasticity
| null |
Phys. Rev. E 102, 032406 (2020)
|
10.1103/PhysRevE.102.032406
| null |
q-bio.NC math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we develop simple yet realistic models of the small neural circuits
known as central pattern generators (CPGs), which contribute to generating
complex multi-phase locomotion in living animals? In this paper we introduce a
new model (with design criteria) of a generalized half-center oscillator
(gHCO), (pools of) neurons reciprocally coupled by fast/slow inhibitory and
excitatory synapses, to produce either alternating bursting or synchronous
patterns depending on the sensory or other external input. We also show how to
calibrate its parameters, based on both physiological and functional criteria
and on bifurcation analysis. This model accounts for short-term neuromodulation
in a bio-physically plausible way and is a building block to develop more
realistic and functionally accurate CPG models. Examples and counterexamples
are used to point out the generality and effectiveness of our design approach.
|
[
{
"created": "Thu, 21 May 2020 08:43:16 GMT",
"version": "v1"
}
] |
2020-09-16
|
[
[
"Baruzzi",
"Valentina",
""
],
[
"Lodi",
"Matteo",
""
],
[
"Storace",
"Marco",
""
],
[
"Shilnikov",
"Andrey",
""
]
] |
How can we develop simple yet realistic models of the small neural circuits known as central pattern generators (CPGs), which contribute to generating complex multi-phase locomotion in living animals? In this paper we introduce a new model (with design criteria) of a generalized half-center oscillator (gHCO), (pools of) neurons reciprocally coupled by fast/slow inhibitory and excitatory synapses, to produce either alternating bursting or synchronous patterns depending on the sensory or other external input. We also show how to calibrate its parameters, based on both physiological and functional criteria and on bifurcation analysis. This model accounts for short-term neuromodulation in a bio-physically plausible way and is a building block to develop more realistic and functionally accurate CPG models. Examples and counterexamples are used to point out the generality and effectiveness of our design approach.
|
1910.06965
|
Samaneh Jozashoori
|
Samaneh Jozashoori, Amir Jozashoori, Heiko Schoof
|
AFDP: An Automated Function Description Prediction Approach to Improve
Accuracy of Protein Function Predictions
| null | null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid growth of high-throughput biological sequencing technologies
and, consequently, the amount of omics data produced, it is essential to develop
automated methods to annotate the functionality of unknown genes and proteins.
Existing tools such as AHRD apply known protein characterizations to annotate
unknown ones. Some other algorithms such as eggNOG apply
orthologous groups of proteins to detect the most probable function. However,
while the available tools focus on the detection of the most similar
characterization, they are not able to generalize and integrate information
from multiple homologs while maintaining accuracy. Here, we devise AFDP, an
integrated approach for protein function prediction which benefits from the
combination of two available tools, AHRD and eggNOG, to predict the
functionality of novel proteins and produce more precise human-readable
descriptions by applying our stCFExt algorithm. StCFExt creates function
descriptions using available manually curated descriptions in Swiss-Prot.
Using a benchmark dataset we show that the annotations predicted by our
approach are more accurate than eggNOG and AHRD annotations.
|
[
{
"created": "Tue, 15 Oct 2019 12:19:34 GMT",
"version": "v1"
}
] |
2019-10-17
|
[
[
"Jozashoori",
"Samaneh",
""
],
[
"Jozashoori",
"Amir",
""
],
[
"Schoof",
"Heiko",
""
]
] |
With the rapid growth in high-throughput biological sequencing technologies and, consequently, in the amount of produced omics data, it is essential to develop automated methods to annotate the functionality of unknown genes and proteins. Existing tools such as AHRD apply the characterization of known proteins to annotate unknown ones. Other algorithms such as eggNOG apply orthologous groups of proteins to detect the most probable function. However, while the available tools focus on the detection of the most similar characterization, they are not able to generalize and integrate information from multiple homologs while maintaining accuracy. Here, we devise AFDP, an integrated approach for protein function prediction which benefits from the combination of two available tools, AHRD and eggNOG, to predict the functionality of novel proteins and produce more precise human-readable descriptions by applying our stCFExt algorithm. StCFExt creates function descriptions using available manually curated descriptions in Swiss-Prot. Using a benchmark dataset we show that the annotations predicted by our approach are more accurate than eggNOG and AHRD annotations.
|
2210.01691
|
Kevin McKee
|
Kevin McKee, Ian Crandell, Rishidev Chaudhuri, Randall O'Reilly
|
Adaptive Synaptic Failure Enables Sampling from Posterior Predictive
Distributions in the Brain
|
23 pages, 5 figures. arXiv admin note: text overlap with
arXiv:2111.09780
| null | null | null |
q-bio.NC stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Bayesian interpretations of neural processing require that biological
mechanisms represent and operate upon probability distributions in accordance
with Bayes' theorem. Many have speculated that synaptic failure constitutes a
mechanism of variational, i.e., approximate, Bayesian inference in the brain.
Whereas models have previously used synaptic failure to sample over uncertainty
in model parameters, we demonstrate that by adapting transmission probabilities
to learned network weights, synaptic failure can sample not only over model
uncertainty, but complete posterior predictive distributions as well. Our
results potentially explain the brain's ability to perform probabilistic
searches and to approximate complex integrals. These operations are involved in
numerous calculations, including likelihood evaluation and state value
estimation for complex planning.
|
[
{
"created": "Tue, 4 Oct 2022 15:41:44 GMT",
"version": "v1"
}
] |
2022-10-05
|
[
[
"McKee",
"Kevin",
""
],
[
"Crandell",
"Ian",
""
],
[
"Chaudhuri",
"Rishidev",
""
],
[
"O'Reilly",
"Randall",
""
]
] |
Bayesian interpretations of neural processing require that biological mechanisms represent and operate upon probability distributions in accordance with Bayes' theorem. Many have speculated that synaptic failure constitutes a mechanism of variational, i.e., approximate, Bayesian inference in the brain. Whereas models have previously used synaptic failure to sample over uncertainty in model parameters, we demonstrate that by adapting transmission probabilities to learned network weights, synaptic failure can sample not only over model uncertainty, but complete posterior predictive distributions as well. Our results potentially explain the brain's ability to perform probabilistic searches and to approximate complex integrals. These operations are involved in numerous calculations, including likelihood evaluation and state value estimation for complex planning.
|
2405.18402
|
Athulya Ram
|
Leonid Bunimovich and Athulya Ram
|
Antigenic Cooperation in Viral Populations: Redistribution of Loads
Among Altruistic Viruses and Maximal Load per Altruist
|
37 pages, 9 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
The paper continues the study of the phenomenon of local immunodeficiency
(LI) in viral cross-immunoreactivity networks, with a focus on the roles and
interactions between altruistic and persistent viral variants. As always, only
the state of stable (i.e. observable) LI is analysed. First, we show that a
single altruistic viral variant has an upper limit for the number of persistent
viral variants that it can support. Our findings reveal that in viral
cross-immunoreactivity networks, altruistic viruses act essentially
autonomously from each other. Namely, connections between altruistic viruses
change neither their qualitative roles nor the quantitative strengths of their
connections in the CRNs. In other words, each altruistic virus performs exactly
the same actions, with the same strengths, with or without the presence of
other altruistic viruses. However, having more altruistic viruses allows the
populations of persistent viruses to be kept at higher levels.
Likewise, the strength of the immune response against any altruistic virus
remains at the same constant level regardless of how many persistent viruses
this altruistic virus supports, i.e. shields from the immune response of the
host's immune system. It is also shown that viruses strongly compete with each
other in order to become persistent in the state of stable LI. We also present
an example of a CRN with stable LI that consists only of persistent viral
variants.
|
[
{
"created": "Tue, 28 May 2024 17:44:02 GMT",
"version": "v1"
}
] |
2024-05-29
|
[
[
"Bunimovich",
"Leonid",
""
],
[
"Ram",
"Athulya",
""
]
] |
The paper continues the study of the phenomenon of local immunodeficiency (LI) in viral cross-immunoreactivity networks, with a focus on the roles and interactions between altruistic and persistent viral variants. As always, only the state of stable (i.e. observable) LI is analysed. First, we show that a single altruistic viral variant has an upper limit for the number of persistent viral variants that it can support. Our findings reveal that in viral cross-immunoreactivity networks, altruistic viruses act essentially autonomously from each other. Namely, connections between altruistic viruses change neither their qualitative roles nor the quantitative strengths of their connections in the CRNs. In other words, each altruistic virus performs exactly the same actions, with the same strengths, with or without the presence of other altruistic viruses. However, having more altruistic viruses allows the populations of persistent viruses to be kept at higher levels. Likewise, the strength of the immune response against any altruistic virus remains at the same constant level regardless of how many persistent viruses this altruistic virus supports, i.e. shields from the immune response of the host's immune system. It is also shown that viruses strongly compete with each other in order to become persistent in the state of stable LI. We also present an example of a CRN with stable LI that consists only of persistent viral variants.
|
1404.6448
|
Markus Pagitz Dr
|
Markus Pagitz, Manuel Pagitz and C. H\"uhne
|
A modular approach to adaptive structures
|
15 pages, 9 figures
| null |
10.1088/1748-3182/9/4/046005
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A remarkable property of nastic, shape changing plants is their complete
fusion between actuators and structure. This is achieved by combining a large
number of cells whose geometry, internal pressures and material properties are
optimized for a given set of target shapes and stiffness requirements. An
advantage of such a fusion is that cell walls are prestressed by cell
pressures, which increases the overall structural stiffness and decreases the
weight. Inspired
by the nastic movement of plants, Pagitz et al. 2012 Bioinspir. Biomim. 7
published a novel concept for pressure actuated cellular structures. This
article extends previous work by introducing a modular approach to adaptive
structures. An algorithm that breaks down any continuous target shape into a
small number of standardized modules is presented. Furthermore it is shown how
cytoskeletons within each cell enhance the properties of adaptive modules. An
adaptive passenger seat and an aircraft's leading and trailing edges are used
to demonstrate the potential of a modular approach.
|
[
{
"created": "Fri, 25 Apr 2014 15:03:15 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jul 2015 18:56:58 GMT",
"version": "v2"
}
] |
2015-07-20
|
[
[
"Pagitz",
"Markus",
""
],
[
"Pagitz",
"Manuel",
""
],
[
"Hühne",
"C.",
""
]
] |
A remarkable property of nastic, shape changing plants is their complete fusion between actuators and structure. This is achieved by combining a large number of cells whose geometry, internal pressures and material properties are optimized for a given set of target shapes and stiffness requirements. An advantage of such a fusion is that cell walls are prestressed by cell pressures, which increases the overall structural stiffness and decreases the weight. Inspired by the nastic movement of plants, Pagitz et al. 2012 Bioinspir. Biomim. 7 published a novel concept for pressure actuated cellular structures. This article extends previous work by introducing a modular approach to adaptive structures. An algorithm that breaks down any continuous target shape into a small number of standardized modules is presented. Furthermore it is shown how cytoskeletons within each cell enhance the properties of adaptive modules. An adaptive passenger seat and an aircraft's leading and trailing edges are used to demonstrate the potential of a modular approach.
|
1505.01642
|
Thomas Risler
|
Pedro Campinho, Martin Behrndt, Jonas Ranft, Thomas Risler, Nicolas
Minc and Carl-Philipp Heisenberg
|
Tension-oriented cell divisions limit anisotropic tissue tension in
epithelial spreading during zebrafish epiboly
|
Methods, supplementary information and associated references are
available in the published online version of the paper at
http://www.nature.com/doifinder/10.1038/ncb2869
|
Nature Cell Biology 15 (12), 1405-1414 (2013)
|
10.1038/ncb2869
| null |
q-bio.TO physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Epithelial spreading is a common and fundamental aspect of various
developmental and disease-related processes such as epithelial closure and
wound healing. A key challenge for epithelial tissues undergoing spreading is
to increase their surface area without disrupting epithelial integrity. Here we
show that orienting cell divisions by tension constitutes an efficient
mechanism by which the Enveloping Cell Layer (EVL) releases anisotropic tension
while undergoing spreading during zebrafish epiboly. The control of EVL
cell-division orientation by tension involves cell elongation and requires
myosin II activity to align the mitotic spindle with the main tension axis. We
also found that in the absence of tension-oriented cell divisions and in the
presence of increased tissue tension, EVL cells undergo ectopic fusions,
suggesting that the reduction of tension anisotropy by oriented cell divisions
is required to prevent EVL cells from fusing. We conclude that cell-division
orientation by tension constitutes a key mechanism for limiting tension
anisotropy and thus promoting tissue spreading during EVL epiboly.
|
[
{
"created": "Thu, 7 May 2015 09:55:08 GMT",
"version": "v1"
}
] |
2015-05-08
|
[
[
"Campinho",
"Pedro",
""
],
[
"Behrndt",
"Martin",
""
],
[
"Ranft",
"Jonas",
""
],
[
"Risler",
"Thomas",
""
],
[
"Minc",
"Nicolas",
""
],
[
"Heisenberg",
"Carl-Philipp",
""
]
] |
Epithelial spreading is a common and fundamental aspect of various developmental and disease-related processes such as epithelial closure and wound healing. A key challenge for epithelial tissues undergoing spreading is to increase their surface area without disrupting epithelial integrity. Here we show that orienting cell divisions by tension constitutes an efficient mechanism by which the Enveloping Cell Layer (EVL) releases anisotropic tension while undergoing spreading during zebrafish epiboly. The control of EVL cell-division orientation by tension involves cell elongation and requires myosin II activity to align the mitotic spindle with the main tension axis. We also found that in the absence of tension-oriented cell divisions and in the presence of increased tissue tension, EVL cells undergo ectopic fusions, suggesting that the reduction of tension anisotropy by oriented cell divisions is required to prevent EVL cells from fusing. We conclude that cell-division orientation by tension constitutes a key mechanism for limiting tension anisotropy and thus promoting tissue spreading during EVL epiboly.
|
1804.09256
|
Giuseppe Tronci
|
Giuseppe Tronci
|
The application of collagen in advanced wound dressings
|
37 pages, 6 figures, 1 table (to appear in "Advanced Textiles for
Wound Care, 2nd ed")
|
Advanced Textiles for Wound Care (2nd ed), 2018
| null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chronic wounds fail to proceed through an orderly and timely self healing
process, resulting in cutaneous damage with full thickness in depth and leading
to a major healthcare and economic burden worldwide. In the UK alone, 200,000
patients suffer from a chronic wound, whilst the global advanced wound care
market is expected to reach nearly $11 million in 2022. Despite extensive
research efforts so far, clinically-approved chronic wound therapies are still
time-consuming, economically unaffordable and present restricted customisation.
In this chapter, the role of collagen in the extracellular matrix of biological
tissues and wound healing will be discussed, together with its use as building
block for the manufacture of advanced wound dressings. Commercially-available
collagen dressings and respective clinical performance will be presented,
followed by an overview on the latest research advances in the context of
multifunctional collagen systems for advanced wound care.
|
[
{
"created": "Tue, 24 Apr 2018 20:54:36 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Apr 2018 11:39:08 GMT",
"version": "v2"
}
] |
2018-04-30
|
[
[
"Tronci",
"Giuseppe",
""
]
] |
Chronic wounds fail to proceed through an orderly and timely self healing process, resulting in cutaneous damage with full thickness in depth and leading to a major healthcare and economic burden worldwide. In the UK alone, 200,000 patients suffer from a chronic wound, whilst the global advanced wound care market is expected to reach nearly $11 million in 2022. Despite extensive research efforts so far, clinically-approved chronic wound therapies are still time-consuming, economically unaffordable and present restricted customisation. In this chapter, the role of collagen in the extracellular matrix of biological tissues and wound healing will be discussed, together with its use as building block for the manufacture of advanced wound dressings. Commercially-available collagen dressings and respective clinical performance will be presented, followed by an overview on the latest research advances in the context of multifunctional collagen systems for advanced wound care.
|
1903.05288
|
Sang-Yoon Kim
|
Sang-Yoon Kim and Woochang Lim
|
Effect of Interpopulation Spike-Timing-Dependent Plasticity on
Synchronized Rhythms in Neuronal Networks with Inhibitory and Excitatory
Populations
| null | null | null | null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider clustered small-world networks with both inhibitory (I) and
excitatory (E) populations. This I-E neuronal network has adaptive dynamic I to
E and E to I interpopulation synaptic strengths, governed by interpopulation
spike-timing-dependent plasticity (STDP). In previous works without STDPs, fast
sparsely synchronized rhythms, related to diverse cognitive functions, were
found to appear in a range of noise intensity $D$ for static synaptic
strengths. By varying $D$, we investigate the effect of interpopulation STDPs
on diverse population and individual properties of fast sparsely synchronized
rhythms that emerge in both the I- and the E-populations. Depending on values
of $D$, long-term potentiation (LTP) and long-term depression (LTD) for
population-averaged values of saturated interpopulation synaptic strengths are
found to occur, and they affect the degree of fast sparse
synchronization. In a broad region of intermediate $D$, the degree of good
synchronization (with higher spiking measure) becomes decreased, while in a
region of large $D$, the degree of bad synchronization (with lower spiking
measure) gets increased. Consequently, in each I- or E-population, the
synchronization degree becomes nearly the same in a wide range of $D$. This
kind of "equalization effect" is found to occur via cooperative interplay
between the average occupation and pacing degrees of fast sparsely synchronized
rhythms. We note that the equalization effect in interpopulation synaptic
plasticity is distinctly in contrast to the Matthew (bipolarization) effect in
intrapopulation (I to I and E to E) synaptic plasticity where good (bad)
synchronization gets better (worse). Finally, the emergence of LTP and LTD of
interpopulation synaptic strengths is intensively investigated via a
microscopic method based on the distributions of time delays between the pre-
and the post-synaptic spike times.
|
[
{
"created": "Wed, 13 Mar 2019 01:55:15 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2019 01:02:38 GMT",
"version": "v2"
},
{
"created": "Mon, 10 Feb 2020 08:26:23 GMT",
"version": "v3"
},
{
"created": "Tue, 11 Feb 2020 05:43:03 GMT",
"version": "v4"
}
] |
2020-02-12
|
[
[
"Kim",
"Sang-Yoon",
""
],
[
"Lim",
"Woochang",
""
]
] |
We consider clustered small-world networks with both inhibitory (I) and excitatory (E) populations. This I-E neuronal network has adaptive dynamic I to E and E to I interpopulation synaptic strengths, governed by interpopulation spike-timing-dependent plasticity (STDP). In previous works without STDPs, fast sparsely synchronized rhythms, related to diverse cognitive functions, were found to appear in a range of noise intensity $D$ for static synaptic strengths. By varying $D$, we investigate the effect of interpopulation STDPs on diverse population and individual properties of fast sparsely synchronized rhythms that emerge in both the I- and the E-populations. Depending on values of $D$, long-term potentiation (LTP) and long-term depression (LTD) for population-averaged values of saturated interpopulation synaptic strengths are found to occur, and they affect the degree of fast sparse synchronization. In a broad region of intermediate $D$, the degree of good synchronization (with higher spiking measure) becomes decreased, while in a region of large $D$, the degree of bad synchronization (with lower spiking measure) gets increased. Consequently, in each I- or E-population, the synchronization degree becomes nearly the same in a wide range of $D$. This kind of "equalization effect" is found to occur via cooperative interplay between the average occupation and pacing degrees of fast sparsely synchronized rhythms. We note that the equalization effect in interpopulation synaptic plasticity is distinctly in contrast to the Matthew (bipolarization) effect in intrapopulation (I to I and E to E) synaptic plasticity where good (bad) synchronization gets better (worse). Finally, the emergence of LTP and LTD of interpopulation synaptic strengths is intensively investigated via a microscopic method based on the distributions of time delays between the pre- and the post-synaptic spike times.
|
1308.4187
|
Tahir Yusufaly
|
Tahir I. Yusufaly, Yun Li and Wilma K. Olson
|
5-Methylation of Cytosine in CG:CG Base-Pair Steps: A Physicochemical
Mechanism for the Epigenetic Control of DNA Nanomechanics
|
Accepted to J. Phys. Chem. B. Supplemental Information available upon
request
| null |
10.1021/jp409887t
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Van der Waals density functional theory is integrated with analysis of a
non-redundant set of protein-DNA crystal structures from the Nucleic Acid
Database to study the stacking energetics of CG:CG base-pair steps,
specifically the role of cytosine 5-methylation. Principal component analysis
of the steps reveals the dominant collective motions to correspond to a tensile
'opening' mode and two shear 'sliding' and 'tearing' modes in the orthogonal
plane. The stacking interactions of the methyl groups globally inhibit CG:CG
step overtwisting while simultaneously softening the modes locally via
potential energy modulations that create metastable states. Additionally, the
indirect effects of the methyl groups on possible base-pair steps neighboring
CG:CG are observed to be of comparable importance to their direct effects on
CG:CG. The results have implications for the epigenetic control of DNA
mechanics.
|
[
{
"created": "Mon, 19 Aug 2013 22:59:08 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2013 21:39:31 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Dec 2013 19:41:44 GMT",
"version": "v3"
}
] |
2013-12-10
|
[
[
"Yusufaly",
"Tahir I.",
""
],
[
"Li",
"Yun",
""
],
[
"Olson",
"Wilma K.",
""
]
] |
Van der Waals density functional theory is integrated with analysis of a non-redundant set of protein-DNA crystal structures from the Nucleic Acid Database to study the stacking energetics of CG:CG base-pair steps, specifically the role of cytosine 5-methylation. Principal component analysis of the steps reveals the dominant collective motions to correspond to a tensile 'opening' mode and two shear 'sliding' and 'tearing' modes in the orthogonal plane. The stacking interactions of the methyl groups globally inhibit CG:CG step overtwisting while simultaneously softening the modes locally via potential energy modulations that create metastable states. Additionally, the indirect effects of the methyl groups on possible base-pair steps neighboring CG:CG are observed to be of comparable importance to their direct effects on CG:CG. The results have implications for the epigenetic control of DNA mechanics.
|
1312.3382
|
Xiaobei Zhou
|
Xiaobei Zhou and Helen Lindsay and Mark D. Robinson
|
Robustly detecting differential expression in RNA sequencing data using
observation weights
|
18 pages, 6 figures (v2)
| null | null | null |
q-bio.QM stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A popular approach for comparing gene expression levels between (replicated)
conditions of RNA sequencing data relies on counting reads that map to features
of interest. Within such count-based methods, many flexible and advanced
statistical approaches now exist and offer the ability to adjust for covariates
(e.g., batch effects). Often, these methods include some sort of sharing of
information across features to improve inferences in small samples. It is
important to achieve an appropriate tradeoff between statistical power and
protection against outliers. Here, we study the robustness of existing
approaches for count-based differential expression analysis and propose a new
strategy based on observation weights that can be used within existing
frameworks. The results suggest that outliers can have a global effect on
differential analyses. We demonstrate the effectiveness of our new approach
with real data and simulated data that reflects properties of real datasets
(e.g., dispersion-mean trend) and develop an extensible framework for
comprehensive testing of current and future methods. In addition, we explore
the origin of such outliers, in some cases highlighting additional biological
or technical factors within the experiment. Further details can be downloaded
from the project website:
http://imlspenticton.uzh.ch/robinson_lab/edgeR_robust/
|
[
{
"created": "Thu, 12 Dec 2013 02:01:15 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Mar 2014 16:15:39 GMT",
"version": "v2"
}
] |
2014-03-17
|
[
[
"Zhou",
"Xiaobei",
""
],
[
"Lindsay",
"Helen",
""
],
[
"Robinson",
"Mark D.",
""
]
] |
A popular approach for comparing gene expression levels between (replicated) conditions of RNA sequencing data relies on counting reads that map to features of interest. Within such count-based methods, many flexible and advanced statistical approaches now exist and offer the ability to adjust for covariates (e.g., batch effects). Often, these methods include some sort of sharing of information across features to improve inferences in small samples. It is important to achieve an appropriate tradeoff between statistical power and protection against outliers. Here, we study the robustness of existing approaches for count-based differential expression analysis and propose a new strategy based on observation weights that can be used within existing frameworks. The results suggest that outliers can have a global effect on differential analyses. We demonstrate the effectiveness of our new approach with real data and simulated data that reflects properties of real datasets (e.g., dispersion-mean trend) and develop an extensible framework for comprehensive testing of current and future methods. In addition, we explore the origin of such outliers, in some cases highlighting additional biological or technical factors within the experiment. Further details can be downloaded from the project website: http://imlspenticton.uzh.ch/robinson_lab/edgeR_robust/
|
2311.16126
|
Fang Wu
|
Fang Wu, Stan Z. Li
|
A Hierarchical Training Paradigm for Antibody Structure-sequence
Co-design
| null | null | null | null |
q-bio.BM cs.CE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Therapeutic antibodies are an essential and rapidly expanding drug modality.
The binding specificity between antibodies and antigens is decided by
complementarity-determining regions (CDRs) at the tips of these Y-shaped
proteins. In this paper, we propose a hierarchical training paradigm (HTP) for
the antibody sequence-structure co-design. HTP consists of four levels of
training stages, each corresponding to a specific protein modality within a
particular protein domain. Through carefully crafted tasks in different stages,
HTP seamlessly and effectively integrates geometric graph neural networks
(GNNs) with large-scale protein language models to excavate evolutionary
information from not only geometric structures but also vast antibody and
non-antibody sequence databases, which determines ligand binding pose and
strength. Empirical experiments show that HTP sets the new state-of-the-art
performance in the co-design problem as well as the fix-backbone design. Our
research offers a hopeful path to unleash the potential of deep generative
architectures and seeks to illuminate the way forward for the antibody sequence
and structure co-design challenge.
|
[
{
"created": "Mon, 30 Oct 2023 02:39:15 GMT",
"version": "v1"
}
] |
2023-11-29
|
[
[
"Wu",
"Fang",
""
],
[
"Li",
"Stan Z.",
""
]
] |
Therapeutic antibodies are an essential and rapidly expanding drug modality. The binding specificity between antibodies and antigens is decided by complementarity-determining regions (CDRs) at the tips of these Y-shaped proteins. In this paper, we propose a hierarchical training paradigm (HTP) for the antibody sequence-structure co-design. HTP consists of four levels of training stages, each corresponding to a specific protein modality within a particular protein domain. Through carefully crafted tasks in different stages, HTP seamlessly and effectively integrates geometric graph neural networks (GNNs) with large-scale protein language models to excavate evolutionary information from not only geometric structures but also vast antibody and non-antibody sequence databases, which determines ligand binding pose and strength. Empirical experiments show that HTP sets the new state-of-the-art performance in the co-design problem as well as the fix-backbone design. Our research offers a hopeful path to unleash the potential of deep generative architectures and seeks to illuminate the way forward for the antibody sequence and structure co-design challenge.
|
1309.4441
|
Astero Provata
|
A. Provata, C. Nicolis and G. Nicolis
|
DNA viewed as an out-of-equilibrium structure
| null | null |
10.1103/PhysRevE.89.052105
| null |
q-bio.GN cond-mat.stat-mech nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The complexity of the primary structure of human DNA is explored using
methods from nonequilibrium statistical mechanics, dynamical systems theory and
information theory. The use of chi-square tests shows that DNA cannot be
described as a low order Markov chain of order up to $r=6$. Although detailed
balance seems to hold at the level of purine-pyrimidine notation it fails when
all four basepairs are considered, suggesting spatial asymmetry and
irreversibility. Furthermore, the block entropy does not increase linearly with
the block size, reflecting the long range nature of the correlations in the
human genomic sequences. To probe locally the spatial structure of the chain we
study the exit distances from a specific symbol, the distribution of recurrence
distances and the Hurst exponent, all of which show power law tails and long
range characteristics. These results suggest that human DNA can be viewed as a
non-equilibrium structure maintained in its state through interactions with a
constantly changing environment. Based solely on the exit distance distribution
accounting for the nonequilibrium statistics and using the Monte Carlo
rejection sampling method we construct a model DNA sequence. This method
allows one to keep all long range and short range statistical characteristics
of the original sequence. The model sequence presents the same characteristic
exponents as the natural DNA but fails to capture point-to-point details.
|
[
{
"created": "Tue, 17 Sep 2013 08:01:24 GMT",
"version": "v1"
}
] |
2015-06-17
|
[
[
"Provata",
"A.",
""
],
[
"Nicolis",
"C.",
""
],
[
"Nicolis",
"G.",
""
]
] |
The complexity of the primary structure of human DNA is explored using methods from nonequilibrium statistical mechanics, dynamical systems theory and information theory. The use of chi-square tests shows that DNA cannot be described as a low order Markov chain of order up to $r=6$. Although detailed balance seems to hold at the level of purine-pyrimidine notation it fails when all four basepairs are considered, suggesting spatial asymmetry and irreversibility. Furthermore, the block entropy does not increase linearly with the block size, reflecting the long range nature of the correlations in the human genomic sequences. To probe locally the spatial structure of the chain we study the exit distances from a specific symbol, the distribution of recurrence distances and the Hurst exponent, all of which show power law tails and long range characteristics. These results suggest that human DNA can be viewed as a non-equilibrium structure maintained in its state through interactions with a constantly changing environment. Based solely on the exit distance distribution accounting for the nonequilibrium statistics and using the Monte Carlo rejection sampling method we construct a model DNA sequence. This method allows one to keep all long range and short range statistical characteristics of the original sequence. The model sequence presents the same characteristic exponents as the natural DNA but fails to capture point-to-point details.
|
2306.15912
|
Yufan Liu
|
Yufan Liu and Boxue Tian
|
Protein-DNA binding sites prediction based on pre-trained protein
language model and contrastive learning
| null | null | null | null |
q-bio.BM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein-DNA interaction is critical for life activities such as replication,
transcription, and splicing. Identifying protein-DNA binding residues is
essential for modeling their interaction and downstream studies. However,
developing accurate and efficient computational methods for this task remains
challenging. Improvements in this area have the potential to drive novel
applications in biotechnology and drug design. In this study, we propose a
novel approach called CLAPE, which combines a pre-trained protein language
model and the contrastive learning method to predict DNA binding residues. We
trained the CLAPE-DB model on the protein-DNA binding sites dataset and
evaluated the model performance and generalization ability through various
experiments. The results showed that the AUC values of the CLAPE-DB model on
the two benchmark datasets reached 0.871 and 0.881, respectively, indicating
superior performance compared to other existing models. CLAPE-DB showed better
generalization ability and was specific to DNA-binding sites. In addition, we
trained CLAPE on different protein-ligand binding sites datasets, demonstrating
that CLAPE is a general framework for binding sites prediction. To facilitate
the scientific community, the benchmark datasets and codes are freely available
at https://github.com/YAndrewL/clape.
|
[
{
"created": "Wed, 28 Jun 2023 04:27:27 GMT",
"version": "v1"
}
] |
2023-06-29
|
[
[
"Liu",
"Yufan",
""
],
[
"Tian",
"Boxue",
""
]
] |
Protein-DNA interaction is critical for life activities such as replication, transcription, and splicing. Identifying protein-DNA binding residues is essential for modeling their interaction and downstream studies. However, developing accurate and efficient computational methods for this task remains challenging. Improvements in this area have the potential to drive novel applications in biotechnology and drug design. In this study, we propose a novel approach called CLAPE, which combines a pre-trained protein language model and the contrastive learning method to predict DNA binding residues. We trained the CLAPE-DB model on the protein-DNA binding sites dataset and evaluated the model performance and generalization ability through various experiments. The results showed that the AUC values of the CLAPE-DB model on the two benchmark datasets reached 0.871 and 0.881, respectively, indicating superior performance compared to other existing models. CLAPE-DB showed better generalization ability and was specific to DNA-binding sites. In addition, we trained CLAPE on different protein-ligand binding sites datasets, demonstrating that CLAPE is a general framework for binding sites prediction. To facilitate the scientific community, the benchmark datasets and codes are freely available at https://github.com/YAndrewL/clape.
|
2408.05695
|
Zhaoyu Liu
|
Zhaoyu Liu, Jingxun Chen, Mingkun Xu, David H. Gracias, Ken-Tye Yong,
Yuanyuan Wei, Ho-Pui Ho
|
Advancements in Programmable Lipid Nanoparticles: A Comprehensive Review
of the Four-Domain Model for Precision Drug Delivery
|
53 pages, 9 figures
| null | null | null |
q-bio.BM physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Programmable lipid nanoparticles (LNPs) represent a critical advancement in
drug delivery. They offer precise spatiotemporal control over drug distribution
and release, which is essential for treating complex diseases such as cancer
and genetic disorders. However, the design and understanding of these
sophisticated systems necessitate a structured framework. This review
introduces a novel Four-Domain Model - the Architecture, Interface, Payload,
and Dispersal - providing a modular perspective that emphasizes the
programmability of LNPs. We delve into the dynamics between LNP components and
their environment throughout their lifecycle, focusing on thermodynamic
stability during synthesis, storage, delivery, and drug release. Through these
four distinct but interconnected domains, we introduce the concept of input
stimuli, functional components, and output responses. This modular approach
offers new perspectives for the rational design of programmable nanocarriers
for exquisite control over payload release while minimizing off-target effects.
Advances in bioinspired design principles could lead to LNPs that mimic natural
biological systems, enhancing their biocompatibility and functionality. This
review summarizes recent advancements, identifies challenges, and offers
outlooks for programmable LNPs, emphasizing their potential to evolve into more
intelligent, naturally integrated systems that enhance scalability and reduce
side effects. Additionally, exploring innovative anatomical routes such as
intranasal and intraocular delivery holds significant promise for enhancing
drug delivery efficacy and patient compliance, further expanding the clinical
potential of programmable LNPs.
|
[
{
"created": "Sun, 11 Aug 2024 04:50:35 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2024 02:11:49 GMT",
"version": "v2"
}
] |
2024-08-15
|
[
[
"Liu",
"Zhaoyu",
""
],
[
"Chen",
"Jingxun",
""
],
[
"Xu",
"Mingkun",
""
],
[
"Gracias",
"David H.",
""
],
[
"Yong",
"Ken-Tye",
""
],
[
"Wei",
"Yuanyuan",
""
],
[
"Ho",
"Ho-Pui",
""
]
] |
Programmable lipid nanoparticles (LNPs) represent a critical advancement in drug delivery. They offer precise spatiotemporal control over drug distribution and release, which is essential for treating complex diseases such as cancer and genetic disorders. However, the design and understanding of these sophisticated systems necessitate a structured framework. This review introduces a novel Four-Domain Model - the Architecture, Interface, Payload, and Dispersal - providing a modular perspective that emphasizes the programmability of LNPs. We delve into the dynamics between LNP components and their environment throughout their lifecycle, focusing on thermodynamic stability during synthesis, storage, delivery, and drug release. Through these four distinct but interconnected domains, we introduce the concept of input stimuli, functional components, and output responses. This modular approach offers new perspectives for the rational design of programmable nanocarriers for exquisite control over payload release while minimizing off-target effects. Advances in bioinspired design principles could lead to LNPs that mimic natural biological systems, enhancing their biocompatibility and functionality. This review summarizes recent advancements, identifies challenges, and offers outlooks for programmable LNPs, emphasizing their potential to evolve into more intelligent, naturally integrated systems that enhance scalability and reduce side effects. Additionally, exploring innovative anatomical routes such as intranasal and intraocular delivery holds significant promise for enhancing drug delivery efficacy and patient compliance, further expanding the clinical potential of programmable LNPs.
|
0707.3224
|
Rudolf A. Roemer
|
G. Cuniberti, E. Macia, A. Rodriguez, R. A. R\"omer
|
Tight-binding modeling of charge migration in DNA devices
|
24 PDF pages of Springer SVMult LaTeX (included), ISBN-10:
3540724931, ISBN-13: 978-3540724933
|
in "Charge Migration in DNA: Perspectives from Physics, Chemistry
and Biology" (T. Chakraborty, Ed.), Springer Verlag, Berlin, pp. 1-21 (2007),
ISBN: 978-3-540-72493-3
|
10.1007/978-3-540-72494-0_1
| null |
q-bio.GN cond-mat.soft q-bio.OT
| null |
Long range charge transfer experiments in DNA oligomers and the subsequently
measured -- and very diverse -- transport response of DNA wires in solid state
experiments exemplify the need for a thorough theoretical understanding of
charge migration in DNA-based natural and artificial materials. Here we present
a review of tight-binding models for DNA conduction which have the intrinsic
merit of containing more structural information than plain rate-equation models
while still retaining sufficient detail of the electronic properties. This
allows for simulations of transport properties to be more manageable with
respect to density functional theory methods or correlated first-principles
algorithms.
|
[
{
"created": "Sat, 21 Jul 2007 19:53:08 GMT",
"version": "v1"
}
] |
2015-05-13
|
[
[
"Cuniberti",
"G.",
""
],
[
"Macia",
"E.",
""
],
[
"Rodriguez",
"A.",
""
],
[
"Römer",
"R. A.",
""
]
] |
Long range charge transfer experiments in DNA oligomers and the subsequently measured -- and very diverse -- transport response of DNA wires in solid state experiments exemplify the need for a thorough theoretical understanding of charge migration in DNA-based natural and artificial materials. Here we present a review of tight-binding models for DNA conduction which have the intrinsic merit of containing more structural information than plain rate-equation models while still retaining sufficient detail of the electronic properties. This allows for simulations of transport properties to be more manageable with respect to density functional theory methods or correlated first-principles algorithms.
|
q-bio/0310015
|
Jan Karbowski
|
Jan Karbowski
|
How does connectivity between cortical areas depend on brain size?
Implications for efficient computation
|
brain, theoretical neuroanatomy, computational neuroanatomy, cortical
areas, scaling, connectivity
|
Journal of Computational Neuroscience 15, 347-356 (2003)
| null | null |
q-bio.NC q-bio.QM
| null |
A formula for an average connectivity between cortical areas in mammals is
derived. Based on comparative neuroanatomical data, it is found, surprisingly,
that this connectivity is either only weakly dependent or independent of brain
size. It is discussed how this formula can be used to estimate the average
length of axons in white matter. Other allometric relations, such as cortical
patches and area sizes vs. brain size, are also provided. Finally, some
functional implications, with an emphasis on efficient cortical computation,
are discussed as well.
|
[
{
"created": "Tue, 14 Oct 2003 01:58:12 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Karbowski",
"Jan",
""
]
] |
A formula for an average connectivity between cortical areas in mammals is derived. Based on comparative neuroanatomical data, it is found, surprisingly, that this connectivity is either only weakly dependent or independent of brain size. It is discussed how this formula can be used to estimate the average length of axons in white matter. Other allometric relations, such as cortical patches and area sizes vs. brain size, are also provided. Finally, some functional implications, with an emphasis on efficient cortical computation, are discussed as well.
|
1307.3358
|
Andrea Riba Mr
|
Andrea Riba, Carla Bosia, Mariama El Baroudi, Laura Ollino and Michele
Caselle
|
A combination of transcriptional and microRNA regulation improves the
stability of the relative concentrations of target genes
|
23 pages, 10 figures
| null |
10.1371/journal.pcbi.1003490
| null |
q-bio.MN q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well known that, under suitable conditions, microRNAs are able to fine
tune the relative concentration of their targets to any desired value. We show
that this function is particularly effective when one of the targets is a
Transcription Factor (TF) which regulates the other targets. This combination
defines a new class of feed-forward loops (FFLs) in which the microRNA plays
the role of master regulator. Using both deterministic and stochastic equations
we show that these FFLs are indeed able not only to fine-tune the TF/target
ratio to any desired value as a function of the miRNA concentration but also,
thanks to the peculiar topology of the circuit, to ensure the stability of
this ratio against stochastic fluctuations. These two effects are due to the
interplay between the direct transcriptional regulation and the indirect
TF/Target interaction due to competition of TF and target for miRNA binding
(the so-called "sponge effect"). We then perform a genome-wide search of these
FFLs in the human regulatory network and show that they are characterized by a
very peculiar enrichment pattern. In particular they are strongly enriched in
all the situations in which the TF and its target have to be precisely kept at
the same concentration notwithstanding the environmental noise. As an example
we discuss the FFL involving E2F1 as Transcription Factor, RB1 as target and
miR-17 family as master regulator. These FFLs ensure a tight control of the
E2F/RB ratio which in turn ensures the stability of the transition from the
G0/G1 to the S phase in quiescent cells.
|
[
{
"created": "Fri, 12 Jul 2013 07:57:17 GMT",
"version": "v1"
}
] |
2015-06-16
|
[
[
"Riba",
"Andrea",
""
],
[
"Bosia",
"Carla",
""
],
[
"Baroudi",
"Mariama El",
""
],
[
"Ollino",
"Laura",
""
],
[
"Caselle",
"Michele",
""
]
] |
It is well known that, under suitable conditions, microRNAs are able to fine tune the relative concentration of their targets to any desired value. We show that this function is particularly effective when one of the targets is a Transcription Factor (TF) which regulates the other targets. This combination defines a new class of feed-forward loops (FFLs) in which the microRNA plays the role of master regulator. Using both deterministic and stochastic equations we show that these FFLs are indeed able not only to fine-tune the TF/target ratio to any desired value as a function of the miRNA concentration but also, thanks to the peculiar topology of the circuit, to ensure the stability of this ratio against stochastic fluctuations. These two effects are due to the interplay between the direct transcriptional regulation and the indirect TF/Target interaction due to competition of TF and target for miRNA binding (the so-called "sponge effect"). We then perform a genome-wide search of these FFLs in the human regulatory network and show that they are characterized by a very peculiar enrichment pattern. In particular they are strongly enriched in all the situations in which the TF and its target have to be precisely kept at the same concentration notwithstanding the environmental noise. As an example we discuss the FFL involving E2F1 as Transcription Factor, RB1 as target and miR-17 family as master regulator. These FFLs ensure a tight control of the E2F/RB ratio which in turn ensures the stability of the transition from the G0/G1 to the S phase in quiescent cells.
|
1302.3261
|
Dominique Vuillaume
|
O. Bichler, W. Zhao, F. Alibart, S. Pleutin, S. Lenfant, D. Vuillaume,
C. Gamrat
|
Pavlov's dog associative learning demonstrated on synaptic-like organic
transistors
| null |
Neural Computation 25(2), 549-566 (2013)
|
10.1162/NECO_a_00377
| null |
q-bio.NC cond-mat.dis-nn cs.ET cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we present an original demonstration of an associative
learning neural network inspired by the famous Pavlov's dogs experiment. A
single nanoparticle organic memory field effect transistor (NOMFET) is used to
implement each synapse. We show how the physical properties of this dynamic
memristive device can be used to perform low power write operations for the
learning and implement short-term association using temporal coding and spike
timing dependent plasticity based learning. An electronic circuit was built to
validate the proposed learning scheme with packaged devices, with good
reproducibility despite the complex synaptic-like dynamics of the NOMFET in
the pulse regime.
|
[
{
"created": "Wed, 13 Feb 2013 22:18:49 GMT",
"version": "v1"
}
] |
2013-02-19
|
[
[
"Bichler",
"O.",
""
],
[
"Zhao",
"W.",
""
],
[
"Alibart",
"F.",
""
],
[
"Pleutin",
"S.",
""
],
[
"Lenfant",
"S.",
""
],
[
"Vuillaume",
"D.",
""
],
[
"Gamrat",
"C.",
""
]
] |
In this letter, we present an original demonstration of an associative learning neural network inspired by the famous Pavlov's dogs experiment. A single nanoparticle organic memory field effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low power write operations for the learning and implement short-term association using temporal coding and spike timing dependent plasticity based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamics of the NOMFET in the pulse regime.
|
q-bio/0610002
|
Sergey Gavrilets
|
Sergey Gavrilets and Aaron Vose
|
The dynamics of Machiavellian intelligence
|
A revised version has been published by PNAS
| null |
10.1073/pnas.0601428103
| null |
q-bio.PE nlin.AO
| null |
The "Machiavellian intelligence" hypothesis (or the "social brain"
hypothesis) posits that large brains and distinctive cognitive abilities of
humans have evolved via intense social competition in which social competitors
developed increasingly sophisticated "Machiavellian" strategies as a means to
achieve higher social and reproductive success. Here we build a mathematical
model aiming to explore this hypothesis. In the model, genes control brains
which invent and learn strategies (memes) which are used by males to gain
advantage in competition for mates. We show that the dynamics of intelligence
has three distinct phases. During the dormant phase only newly invented memes
are present in the population. During the cognitive explosion phase the
population's meme count and the learning ability, cerebral capacity
(controlling the number of different memes that the brain can learn and use),
and Machiavellian fitness of individuals increase in a runaway fashion. During
the saturation phase natural selection resulting from the costs of having large
brains checks further increases in cognitive abilities. Overall, our results
suggest that the mechanisms underlying the "Machiavellian intelligence"
hypothesis can indeed result in the evolution of significant cognitive
abilities on the time scale of 10 to 20 thousand generations. We show that
cerebral capacity evolves faster and to a larger degree than learning ability.
Our model suggests that there may be a tendency toward a reduction in cognitive
abilities (driven by the costs of having a large brain) as the reproductive
advantage of having a large brain decreases and the exposure to memes increases
in modern societies.
|
[
{
"created": "Sun, 1 Oct 2006 12:11:24 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Oct 2006 12:27:32 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Oct 2006 23:59:50 GMT",
"version": "v3"
}
] |
2009-11-13
|
[
[
"Gavrilets",
"Sergey",
""
],
[
"Vose",
"Aaron",
""
]
] |
The "Machiavellian intelligence" hypothesis (or the "social brain" hypothesis) posits that large brains and distinctive cognitive abilities of humans have evolved via intense social competition in which social competitors developed increasingly sophisticated "Machiavellian" strategies as a means to achieve higher social and reproductive success. Here we build a mathematical model aiming to explore this hypothesis. In the model, genes control brains which invent and learn strategies (memes) which are used by males to gain advantage in competition for mates. We show that the dynamics of intelligence has three distinct phases. During the dormant phase only newly invented memes are present in the population. During the cognitive explosion phase the population's meme count and the learning ability, cerebral capacity (controlling the number of different memes that the brain can learn and use), and Machiavellian fitness of individuals increase in a runaway fashion. During the saturation phase natural selection resulting from the costs of having large brains checks further increases in cognitive abilities. Overall, our results suggest that the mechanisms underlying the "Machiavellian intelligence" hypothesis can indeed result in the evolution of significant cognitive abilities on the time scale of 10 to 20 thousand generations. We show that cerebral capacity evolves faster and to a larger degree than learning ability. Our model suggests that there may be a tendency toward a reduction in cognitive abilities (driven by the costs of having a large brain) as the reproductive advantage of having a large brain decreases and the exposure to memes increases in modern societies.
|
2112.06140
|
Yashar Zeighami
|
Yashar Zeighami, Mahsa Dadar, Justine Daoust, Melissa Pelletier,
Laurent Biertho, Leonie Bouvet-Bouchard, Stephanie Fulton, Andre Tchernof,
Alain Dagher, Denis Richard, Alan Evans, Andreanne Michaud
|
Impact of Weight Loss on Brain Age: Improved Brain Health Following
Bariatric Surgery
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Overweight and obese individuals tend to have increased brain age, reflecting
poorer brain health likely due to grey and white matter atrophy related to
obesity. However, it is unclear if older brain age associated with obesity can
be reversed following weight loss and cardiometabolic health improvement. The
aim of this study was to assess the impact of weight loss and cardiometabolic
improvement following bariatric surgery on brain health, as measured by change
in brain age estimated based on voxel-based morphometry (VBM). We used three
datasets: 1) CamCAN to train the brain age prediction model, 2) Human
Connectome Project to investigate whether individuals with obesity have greater
brain age than individuals with normal weight, and 3) pre-surgery, as well as
4-, 12-, and 24-month post-surgery data from participants (n=87) who underwent
bariatric surgery to investigate whether weight loss and cardiometabolic
improvement as a result of bariatric surgery lowers the brain age. Our results
from the HCP dataset showed a higher brain age for individuals with obesity
compared to individuals with normal weight (p<0.0001). We also found
significant improvement in brain health, indicated by a decrease of 2.9 and 5.6
years in adjusted delta age at 12 and 24 months following bariatric surgery
compared to baseline (p<0.0005). While the overall effect seemed to be driven
by a global change across all brain regions and not from a specific region, our
exploratory analysis showed lower delta age in certain brain regions at 24
months. This reduced age was also associated with post-surgery improvements in
BMI, systolic/diastolic blood pressure, and HOMA-IR (p<0.05). In conclusion,
these results suggest that obesity-related brain health abnormalities might be
reversed by bariatric surgery-induced weight loss and widespread improvements
in cardiometabolic alterations.
|
[
{
"created": "Sun, 12 Dec 2021 03:46:18 GMT",
"version": "v1"
}
] |
2021-12-14
|
[
[
"Zeighami",
"Yashar",
""
],
[
"Dadar",
"Mahsa",
""
],
[
"Daoust",
"Justine",
""
],
[
"Pelletier",
"Melissa",
""
],
[
"Biertho",
"Laurent",
""
],
[
"Bouvet-Bouchard",
"Leonie",
""
],
[
"Fulton",
"Stephanie",
""
],
[
"Tchernof",
"Andre",
""
],
[
"Dagher",
"Alain",
""
],
[
"Richard",
"Denis",
""
],
[
"Evans",
"Alan",
""
],
[
"Michaud",
"Andreanne",
""
]
] |
Overweight and obese individuals tend to have increased brain age, reflecting poorer brain health likely due to grey and white matter atrophy related to obesity. However, it is unclear if older brain age associated with obesity can be reversed following weight loss and cardiometabolic health improvement. The aim of this study was to assess the impact of weight loss and cardiometabolic improvement following bariatric surgery on brain health, as measured by change in brain age estimated based on voxel-based morphometry (VBM). We used three datasets: 1) CamCAN to train the brain age prediction model, 2) Human Connectome Project to investigate whether individuals with obesity have greater brain age than individuals with normal weight, and 3) pre-surgery, as well as 4-, 12-, and 24-month post-surgery data from participants (n=87) who underwent bariatric surgery to investigate whether weight loss and cardiometabolic improvement as a result of bariatric surgery lowers the brain age. Our results from the HCP dataset showed a higher brain age for individuals with obesity compared to individuals with normal weight (p<0.0001). We also found significant improvement in brain health, indicated by a decrease of 2.9 and 5.6 years in adjusted delta age at 12 and 24 months following bariatric surgery compared to baseline (p<0.0005). While the overall effect seemed to be driven by a global change across all brain regions and not from a specific region, our exploratory analysis showed lower delta age in certain brain regions at 24 months. This reduced age was also associated with post-surgery improvements in BMI, systolic/diastolic blood pressure, and HOMA-IR (p<0.05). In conclusion, these results suggest that obesity-related brain health abnormalities might be reversed by bariatric surgery-induced weight loss and widespread improvements in cardiometabolic alterations.
|
0806.3130
|
Alfredo Iorio
|
Alfredo Iorio, Samik Sen, Siddhartha Sen
|
Do quantum effects hold together DNA condensates?
|
8 pages, 6 figures
| null | null | null |
q-bio.QM cond-mat.soft q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The classical electrostatic interaction between DNA molecules in water in the
presence of counterions is reconsidered and we propose it is governed by a
modified Poisson-Boltzmann equation. Quantum fluctuations are then studied and
shown to lead to a vacuum interaction that is numerically computed for several
configurations of many DNA strands and found to be strongly many-body. This
Casimir vacuum interaction can be the ``glue'' holding together DNA molecules
into aggregates.
|
[
{
"created": "Thu, 19 Jun 2008 06:34:27 GMT",
"version": "v1"
}
] |
2008-06-20
|
[
[
"Iorio",
"Alfredo",
""
],
[
"Sen",
"Samik",
""
],
[
"Sen",
"Siddhartha",
""
]
] |
The classical electrostatic interaction between DNA molecules in water in the presence of counterions is reconsidered and we propose it is governed by a modified Poisson-Boltzmann equation. Quantum fluctuations are then studied and shown to lead to a vacuum interaction that is numerically computed for several configurations of many DNA strands and found to be strongly many-body. This Casimir vacuum interaction can be the ``glue'' holding together DNA molecules into aggregates.
|
1912.12796
|
Liang Zhang
|
Liang Zhang, He Zhang, David H. Mathews, Liang Huang
|
ThreshKnot: Thresholded ProbKnot for Improved RNA Secondary Structure
Prediction
| null | null | null | null |
q-bio.BM physics.bio-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RNA structure prediction is a challenging problem, especially with
pseudoknots. Recently, there has been a shift from the classical minimum free
energy-based methods (MFE) to partition function-based ones that assemble
structures using base-pairing probabilities. Two examples of the latter group
are the popular maximum expected accuracy (MEA) method and the ProbKnot method.
ProbKnot is a fast heuristic that pairs nucleotides that are reciprocally most
probable pairing partners, and unlike MEA, can also predict structures with
pseudoknots. However, ProbKnot's full potential has been largely overlooked. In
particular, when introduced, it did not have an MEA-like hyperparameter that
can balance between positive predictive value (PPV) and sensitivity. We show
that a simple thresholded version of ProbKnot, which we call ThreshKnot, leads
to more accurate overall predictions by filtering out unlikely pairs whose
probabilities fall under a given threshold. We also show that on three
widely-used folding engines (RNAstructure, Vienna RNAfold, and CONTRAfold),
ThreshKnot always outperforms the much more involved MEA algorithm in (1) its
higher structure prediction accuracy, (2) its capability to predict
pseudoknots, and (3) its faster runtime and easier implementation. This
suggests that ThreshKnot should replace MEA as the default partition
function-based structure prediction algorithm. ThreshKnot is already available
in the widely used RNAstructure software package version 6.2 (released November
27, 2019): https://rna.urmc.rochester.edu/RNAstructure.html
|
[
{
"created": "Mon, 30 Dec 2019 03:13:07 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jan 2020 01:36:04 GMT",
"version": "v2"
}
] |
2020-01-10
|
[
[
"Zhang",
"Liang",
""
],
[
"Zhang",
"He",
""
],
[
"Mathews",
"David H.",
""
],
[
"Huang",
"Liang",
""
]
] |
RNA structure prediction is a challenging problem, especially with pseudoknots. Recently, there has been a shift from the classical minimum free energy-based methods (MFE) to partition function-based ones that assemble structures using base-pairing probabilities. Two examples of the latter group are the popular maximum expected accuracy (MEA) method and the ProbKnot method. ProbKnot is a fast heuristic that pairs nucleotides that are reciprocally most probable pairing partners, and unlike MEA, can also predict structures with pseudoknots. However, ProbKnot's full potential has been largely overlooked. In particular, when introduced, it did not have an MEA-like hyperparameter that can balance between positive predictive value (PPV) and sensitivity. We show that a simple thresholded version of ProbKnot, which we call ThreshKnot, leads to more accurate overall predictions by filtering out unlikely pairs whose probabilities fall under a given threshold. We also show that on three widely-used folding engines (RNAstructure, Vienna RNAfold, and CONTRAfold), ThreshKnot always outperforms the much more involved MEA algorithm in (1) its higher structure prediction accuracy, (2) its capability to predict pseudoknots, and (3) its faster runtime and easier implementation. This suggests that ThreshKnot should replace MEA as the default partition function-based structure prediction algorithm. ThreshKnot is already available in the widely used RNAstructure software package version 6.2 (released November 27, 2019): https://rna.urmc.rochester.edu/RNAstructure.html
|
q-bio/0611080
|
Maksim Kouza M
|
Mai Suan Li, Maksim Kouza, Chin-Kun Hu
|
Refolding upon force quench and pathways of mechanical and thermal
unfolding of ubiquitin
|
35 pages, 15 figures, 1 table
|
Biophysical Journal, 92, 547-561 (2007)
|
10.1529/biophysj.106.087684
| null |
q-bio.BM
| null |
The refolding from stretched initial conformations of ubiquitin (PDB ID:
1ubq) under the quenched force is studied using the Go model and the Langevin
dynamics. It is shown that the refolding decouples the collapse and folding
kinetics. The force quench refolding times scale as tau_F ~ exp(f_q*x_F/k_B*T),
where f_q is the quench force and x_F = 0.96 nm is the location of the average
transition state along the reaction coordinate given by the end-to-end
distance. This value is close to x_F = 0.8 nm obtained from the force-clamp
experiments. The mechanical and thermal unfolding pathways are studied and
compared with the experimental and all-atom simulation results in detail. The
sequencing of thermal unfolding was found to be markedly different from the
mechanical one. It is found that fixing the N-terminus of ubiquitin changes its
mechanical unfolding pathways much more drastically compared to the case when
the C-end is anchored. We obtained the distance between the native state and
the transition state x_UF=0.24 nm which is in reasonable agreement with the
experimental data.
|
[
{
"created": "Fri, 24 Nov 2006 07:33:19 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Li",
"Mai Suan",
""
],
[
"Kouza",
"Maksim",
""
],
[
"Hu",
"Chin-Kun",
""
]
] |
The refolding from stretched initial conformations of ubiquitin (PDB ID: 1ubq) under the quenched force is studied using the Go model and the Langevin dynamics. It is shown that the refolding decouples the collapse and folding kinetics. The force quench refolding times scale as tau_F ~ exp(f_q*x_F/k_B*T), where f_q is the quench force and x_F = 0.96 nm is the location of the average transition state along the reaction coordinate given by the end-to-end distance. This value is close to x_F = 0.8 nm obtained from the force-clamp experiments. The mechanical and thermal unfolding pathways are studied and compared with the experimental and all-atom simulation results in detail. The sequencing of thermal unfolding was found to be markedly different from the mechanical one. It is found that fixing the N-terminus of ubiquitin changes its mechanical unfolding pathways much more drastically compared to the case when the C-end is anchored. We obtained the distance between the native state and the transition state x_UF=0.24 nm which is in reasonable agreement with the experimental data.
|
1502.07075
|
Bartosz Rozycki
|
Bartosz Rozycki and Marek Cieplak
|
Citrate synthase proteins in extremophilic organisms: Studies within a
structure-based model
|
published in J. Chem. Phys
|
Journal of Chemical Physics 141, 235102 (2014)
|
10.1063/1.4903747
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study four citrate synthase homodimeric proteins within a structure-based
coarse-grained model. Two of these proteins come from thermophilic bacteria,
one from a cryophilic bacterium and one from a mesophilic organism; three are
in the closed and two in the open conformations. Even though the proteins
belong to the same fold, the model distinguishes the properties of these
proteins in a way which is consistent with experiments. For instance, the
thermophilic proteins are more stable thermodynamically than their mesophilic
and cryophilic homologues, which we observe both in the magnitude of thermal
fluctuations near the native state and in the kinetics of thermal unfolding.
The level of stability correlates with the average coordination number for
amino acid contacts and with the degree of structural compactness. The pattern
of positional fluctuations along the sequence in the closed conformation is
different from that in the open conformation, including within the active site. The
modes of correlated and anticorrelated movements of pairs of amino acids
forming the active site are very different in the open and closed
conformations. Taken together, our results show that the precise location of
amino acid contacts in the native structure appears to be a critical element in
explaining the similarities and differences in the thermodynamic properties,
local flexibility and collective motions of the different forms of the enzyme.
|
[
{
"created": "Wed, 25 Feb 2015 07:54:59 GMT",
"version": "v1"
}
] |
2015-02-26
|
[
[
"Rozycki",
"Bartosz",
""
],
[
"Cieplak",
"Marek",
""
]
] |
We study four citrate synthase homodimeric proteins within a structure-based coarse-grained model. Two of these proteins come from thermophilic bacteria, one from a cryophilic bacterium and one from a mesophilic organism; three are in the closed and two in the open conformations. Even though the proteins belong to the same fold, the model distinguishes the properties of these proteins in a way which is consistent with experiments. For instance, the thermophilic proteins are more stable thermodynamically than their mesophilic and cryophilic homologues, which we observe both in the magnitude of thermal fluctuations near the native state and in the kinetics of thermal unfolding. The level of stability correlates with the average coordination number for amino acid contacts and with the degree of structural compactness. The pattern of positional fluctuations along the sequence in the closed conformation is different from that in the open conformation, including within the active site. The modes of correlated and anticorrelated movements of pairs of amino acids forming the active site are very different in the open and closed conformations. Taken together, our results show that the precise location of amino acid contacts in the native structure appears to be a critical element in explaining the similarities and differences in the thermodynamic properties, local flexibility and collective motions of the different forms of the enzyme.
|
2111.00194
|
Yusuke Maeda
|
Yusuke T. Maeda
|
Negative autoregulation controls size scaling in confined gene
expression reactions
|
7 pages, 5 figures
|
Scientific Reports 12, 10516 (2022)
|
10.1038/s41598-022-14719-4
| null |
q-bio.MN physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gene expression via transcription-translation is the most fundamental
reaction to sustain biological systems, and complex reactions such as this one
occur in a small compartment of living cells. There is increasing evidence that
physical effects, such as molecular crowding or excluded volume effects of
transcriptional-translational machinery, affect the yield of reaction products.
On the other hand, transcriptional feedback that controls gene expression
during mRNA synthesis is also a vital mechanism that regulates protein
synthesis in cells. However, the excluded volume effect of spatial constraints
on feedback regulation is not well understood. Here, we study the confinement
effect on transcriptional autoregulatory feedbacks of gene expression reactions
using a theoretical model. The excluded volume effects between molecules and
the membrane interface suppress gene expression in a small cell-sized
compartment. We find that negative feedback regulation at the transcription
step mitigates this size-induced gene repression and alters the scaling
relation of gene expression level on compartment volume, approaching the
regular scaling relation without the steric effect. This recovery of regular
size-scaling of gene expression does not appear in positive feedback
regulation, suggesting that negative autoregulatory feedback is crucial for
maintaining reaction products constant regardless of compartment size in
heterogeneous cell populations.
|
[
{
"created": "Sat, 30 Oct 2021 07:05:40 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Apr 2022 15:57:28 GMT",
"version": "v2"
}
] |
2022-08-04
|
[
[
"Maeda",
"Yusuke T.",
""
]
] |
Gene expression via transcription-translation is the most fundamental reaction to sustain biological systems, and complex reactions such as this one occur in a small compartment of living cells. There is increasing evidence that physical effects, such as molecular crowding or excluded volume effects of transcriptional-translational machinery, affect the yield of reaction products. On the other hand, transcriptional feedback that controls gene expression during mRNA synthesis is also a vital mechanism that regulates protein synthesis in cells. However, the excluded volume effect of spatial constraints on feedback regulation is not well understood. Here, we study the confinement effect on transcriptional autoregulatory feedbacks of gene expression reactions using a theoretical model. The excluded volume effects between molecules and the membrane interface suppress gene expression in a small cell-sized compartment. We find that negative feedback regulation at the transcription step mitigates this size-induced gene repression and alters the scaling relation of gene expression level on compartment volume, approaching the regular scaling relation without the steric effect. This recovery of regular size-scaling of gene expression does not appear in positive feedback regulation, suggesting that negative autoregulatory feedback is crucial for maintaining reaction products constant regardless of compartment size in heterogeneous cell populations.
|
1011.1109
|
David Kleinhans
|
David Kleinhans, Per R. Jonsson
|
On the impact of dispersal asymmetry on metapopulation persistence
|
19 pages, 5 figures
|
Journal of Theoretical Biology, 290, pages 37-45 (2011)
|
10.1016/j.jtbi.2011.09.002
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Metapopulation theory has long assumed dispersal to be symmetric, i.e. patches
are connected through migrants dispersing bi-directionally without a preferred
direction. However, for natural populations symmetry is often broken, e.g. for
species in the marine environment dispersing through the transport of pelagic
larvae with ocean currents. The few recent studies of asymmetric dispersal
concluded that asymmetry has a distinct negative impact on the persistence of
metapopulations. Detailed analysis, however, revealed that these previous
studies might have been unable to properly disentangle the effect of symmetry
from other potentially confounding properties of dispersal patterns. We resolve
this issue by systematically investigating the symmetry of dispersal patterns
and its impact on metapopulation persistence. Our main analysis, based on a
metapopulation model equivalent to previous studies but now applied to regular
dispersal patterns, aims to isolate the effect of dispersal symmetry on
metapopulation persistence. Our results suggest that asymmetry in itself does
not imply negative effects on metapopulation persistence. For this reason we
recommend investigating it in connection with other properties of dispersal
instead of in isolation.
|
[
{
"created": "Thu, 4 Nov 2010 10:52:13 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Jun 2011 09:51:35 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Sep 2011 09:53:04 GMT",
"version": "v3"
}
] |
2011-10-07
|
[
[
"Kleinhans",
"David",
""
],
[
"Jonsson",
"Per R.",
""
]
] |
Metapopulation theory has long assumed dispersal to be symmetric, i.e. patches are connected through migrants dispersing bi-directionally without a preferred direction. However, for natural populations symmetry is often broken, e.g. for species in the marine environment dispersing through the transport of pelagic larvae with ocean currents. The few recent studies of asymmetric dispersal concluded that asymmetry has a distinct negative impact on the persistence of metapopulations. Detailed analysis, however, revealed that these previous studies might have been unable to properly disentangle the effect of symmetry from other potentially confounding properties of dispersal patterns. We resolve this issue by systematically investigating the symmetry of dispersal patterns and its impact on metapopulation persistence. Our main analysis, based on a metapopulation model equivalent to previous studies but now applied to regular dispersal patterns, aims to isolate the effect of dispersal symmetry on metapopulation persistence. Our results suggest that asymmetry in itself does not imply negative effects on metapopulation persistence. For this reason we recommend investigating it in connection with other properties of dispersal instead of in isolation.
|
1412.0780
|
Zahra Aminzare
|
Jana L. Gevertz, Zahra Aminzare, Kerri-Ann Norton, Judith
Perez-Velazquez, Alexandria Volkening, Katarzyna A. Rejniak
|
Emergence of Anti-Cancer Drug Resistance: Exploring the Importance of
the Microenvironmental Niche via a Spatial Model
| null | null | null | null |
q-bio.PE q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Practically all chemotherapeutic agents lead to drug resistance. Clinically,
it is a challenge to determine whether resistance arises prior to, or as a
result of, cancer therapy. Further, a number of different intracellular and
microenvironmental factors have been correlated with the emergence of drug
resistance. With the goal of better understanding drug resistance and its
connection with the tumor microenvironment, we have developed a hybrid
discrete-continuous mathematical model. In this model, cancer cells described
through a particle-spring approach respond to dynamically changing oxygen and
DNA damaging drug concentrations described through partial differential
equations. We thoroughly explored the behavior of our self-calibrated model
under the following common conditions: a fixed layout of the vasculature, an
identical initial configuration of cancer cells, the same mechanism of drug
action, and one mechanism of cellular response to the drug. We considered one
set of simulations in which drug resistance existed prior to the start of
treatment, and another set in which drug resistance is acquired in response to
treatment. This allows us to compare how both kinds of resistance influence the
spatial and temporal dynamics of the developing tumor, and its clonal
diversity. We show that both pre-existing and acquired resistance can give rise
to three biologically distinct parameter regimes: successful tumor eradication,
reduced effectiveness of drug during the course of treatment (resistance), and
complete treatment failure.
|
[
{
"created": "Tue, 2 Dec 2014 04:17:38 GMT",
"version": "v1"
}
] |
2014-12-03
|
[
[
"Gevertz",
"Jana L.",
""
],
[
"Aminzare",
"Zahra",
""
],
[
"Norton",
"Kerri-Ann",
""
],
[
"Perez-Velazquez",
"Judith",
""
],
[
"Volkening",
"Alexandria",
""
],
[
"Rejniak",
"Katarzyna A.",
""
]
] |
Practically all chemotherapeutic agents lead to drug resistance. Clinically, it is a challenge to determine whether resistance arises prior to, or as a result of, cancer therapy. Further, a number of different intracellular and microenvironmental factors have been correlated with the emergence of drug resistance. With the goal of better understanding drug resistance and its connection with the tumor microenvironment, we have developed a hybrid discrete-continuous mathematical model. In this model, cancer cells described through a particle-spring approach respond to dynamically changing oxygen and DNA damaging drug concentrations described through partial differential equations. We thoroughly explored the behavior of our self-calibrated model under the following common conditions: a fixed layout of the vasculature, an identical initial configuration of cancer cells, the same mechanism of drug action, and one mechanism of cellular response to the drug. We considered one set of simulations in which drug resistance existed prior to the start of treatment, and another set in which drug resistance is acquired in response to treatment. This allows us to compare how both kinds of resistance influence the spatial and temporal dynamics of the developing tumor, and its clonal diversity. We show that both pre-existing and acquired resistance can give rise to three biologically distinct parameter regimes: successful tumor eradication, reduced effectiveness of drug during the course of treatment (resistance), and complete treatment failure.
|
1409.4303
|
Arnab Ganguly
|
Arnab Ganguly, Derya Altintan, Heinz Koeppl
|
Jump-Diffusion Approximation of Stochastic Reaction Dynamics: Error
bounds and Algorithms
|
32 pages, 7 figures
| null | null | null |
q-bio.QM math.PR q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biochemical reactions can happen on different time scales and also the
abundance of species in these reactions can be very different from each other.
Classical approaches, such as the deterministic or the stochastic approach, fail to
account for or to exploit this multi-scale nature, respectively. In this paper,
we propose a jump-diffusion approximation for multi-scale Markov jump processes
that couples the two modeling approaches. An error bound of the proposed
approximation is derived and used to partition the reactions into fast and slow
sets, where the fast set is simulated by a stochastic differential equation and
the slow set is modeled by a discrete chain. The error bound leads to a very
efficient dynamic partitioning algorithm which has been implemented for several
multi-scale reaction systems. The gain in computational efficiency is
illustrated by a realistically sized model of a signal transduction cascade
coupled to gene expression dynamics.
|
[
{
"created": "Mon, 15 Sep 2014 15:52:55 GMT",
"version": "v1"
}
] |
2014-09-16
|
[
[
"Ganguly",
"Arnab",
""
],
[
"Altintan",
"Derya",
""
],
[
"Koeppl",
"Heinz",
""
]
] |
Biochemical reactions can happen on different time scales and also the abundance of species in these reactions can be very different from each other. Classical approaches, such as the deterministic or the stochastic approach, fail to account for or to exploit this multi-scale nature, respectively. In this paper, we propose a jump-diffusion approximation for multi-scale Markov jump processes that couples the two modeling approaches. An error bound of the proposed approximation is derived and used to partition the reactions into fast and slow sets, where the fast set is simulated by a stochastic differential equation and the slow set is modeled by a discrete chain. The error bound leads to a very efficient dynamic partitioning algorithm which has been implemented for several multi-scale reaction systems. The gain in computational efficiency is illustrated by a realistically sized model of a signal transduction cascade coupled to gene expression dynamics.
|
2001.03761
|
Marinho Lopes
|
Marinho A. Lopes, Jiaxiang Zhang, Dominik Krzemi\'nski, Khalid
Hamandi, Qi Chen, Lorenzo Livi, and Naoki Masuda
|
Recurrence Quantification Analysis of Dynamic Brain Networks
|
77 pages, 11 figures; note: the acknowledgments section is the most
complete in this arxiv version (compared to the published version in EJN)
| null |
10.1111/ejn.14960
| null |
q-bio.NC cond-mat.dis-nn
|
http://creativecommons.org/licenses/by/4.0/
|
Evidence suggests that brain network dynamics is a key determinant of brain
function and dysfunction. Here we propose a new framework to assess the
dynamics of brain networks based on recurrence analysis. Our framework uses
recurrence plots and recurrence quantification analysis to characterize dynamic
networks. For resting-state magnetoencephalographic dynamic functional networks
(dFNs), we have found that functional networks recur more quickly in people
with epilepsy than in healthy controls. This suggests that recurrence of dFNs may
be used as a biomarker of epilepsy. For stereo electroencephalography data, we
have found that dFNs involved in epileptic seizures emerge before seizure
onset, and recurrence analysis allows us to detect seizures. We further observe
distinct dFNs before and after seizures, which may inform neurostimulation
strategies to prevent seizures. Our framework can also be used for
understanding dFNs in healthy brain function and in other neurological
disorders besides epilepsy.
|
[
{
"created": "Sat, 11 Jan 2020 14:41:34 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Sep 2020 11:04:57 GMT",
"version": "v2"
}
] |
2020-09-17
|
[
[
"Lopes",
"Marinho A.",
""
],
[
"Zhang",
"Jiaxiang",
""
],
[
"Krzemiński",
"Dominik",
""
],
[
"Hamandi",
"Khalid",
""
],
[
"Chen",
"Qi",
""
],
[
"Livi",
"Lorenzo",
""
],
[
"Masuda",
"Naoki",
""
]
] |
Evidence suggests that brain network dynamics is a key determinant of brain function and dysfunction. Here we propose a new framework to assess the dynamics of brain networks based on recurrence analysis. Our framework uses recurrence plots and recurrence quantification analysis to characterize dynamic networks. For resting-state magnetoencephalographic dynamic functional networks (dFNs), we have found that functional networks recur more quickly in people with epilepsy than in healthy controls. This suggests that recurrence of dFNs may be used as a biomarker of epilepsy. For stereo electroencephalography data, we have found that dFNs involved in epileptic seizures emerge before seizure onset, and recurrence analysis allows us to detect seizures. We further observe distinct dFNs before and after seizures, which may inform neurostimulation strategies to prevent seizures. Our framework can also be used for understanding dFNs in healthy brain function and in other neurological disorders besides epilepsy.
|
2003.12089
|
Jun Chen
|
Komi Messan, Marisabel Rodriguez Messan, Jun Chen, Gloria
DeGrandi-Hoffman, Yun Kang
|
Population dynamics of Varroa mite and honeybee: Effects of parasitism
with age structure and seasonality
| null | null | null | null |
q-bio.PE math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Honeybees play an important role in the production of many agricultural crops
and in sustaining plant diversity in undisturbed ecosystems. The rapid decline
of honeybee populations has sparked great concern worldwide. Previous studies
have shown that the parasitic Varroa mite could be the main reason for colony
losses. In order to understand how mites affect population dynamics of
honeybees and colony health, we propose a brood-adult bee-mite model in which
the time lag from brood to adult is taken into account. Noting that the
dynamics of a honeybee colony varies with respect to season, we validate the
model and perform parameter estimations under both constant and fluctuating
seasonality scenarios. Our analytical and numerical studies reveal the
following: (a) In the presence of parasite mites, the large time lag from brood
to adult could destabilize population dynamics and drive the colony to
collapse; but the small natural mortality of the adult population can promote a
mite-free colony when time lag is small or at an intermediate level; (b) Small
brood infestation rates could stabilize all populations at the unique interior
equilibrium under constant seasonality but may drive the mite population to
die out when seasonality is taken into account; (c) High brood infestation
rates can destabilize the colony dynamics leading to population collapse
depending on initial population size under constant and seasonal conditions;
(d) Results from sensitivity analysis indicate the queen's egg-laying may have
the greatest effect on colony population size. The brood death rate and the
colony size at which brood survivability is half-maximal were also shown to
be highly sensitive with an inverse correlation to the colony population size.
Our results provide insights on the effects of seasonality on the dynamics.
|
[
{
"created": "Thu, 26 Mar 2020 18:13:14 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Aug 2020 00:04:48 GMT",
"version": "v2"
}
] |
2020-08-12
|
[
[
"Messan",
"Komi",
""
],
[
"Messan",
"Marisabel Rodriguez",
""
],
[
"Chen",
"Jun",
""
],
[
"DeGrandi-Hoffman",
"Gloria",
""
],
[
"Kang",
"Yun",
""
]
] |
Honeybees play an important role in the production of many agricultural crops and in sustaining plant diversity in undisturbed ecosystems. The rapid decline of honeybee populations has sparked great concern worldwide. Previous studies have shown that the parasitic Varroa mite could be the main reason for colony losses. In order to understand how mites affect population dynamics of honeybees and colony health, we propose a brood-adult bee-mite model in which the time lag from brood to adult is taken into account. Noting that the dynamics of a honeybee colony varies with respect to season, we validate the model and perform parameter estimations under both constant and fluctuating seasonality scenarios. Our analytical and numerical studies reveal the following: (a) In the presence of parasite mites, the large time lag from brood to adult could destabilize population dynamics and drive the colony to collapse; but the small natural mortality of the adult population can promote a mite-free colony when time lag is small or at an intermediate level; (b) Small brood infestation rates could stabilize all populations at the unique interior equilibrium under constant seasonality but may drive the mite population to die out when seasonality is taken into account; (c) High brood infestation rates can destabilize the colony dynamics leading to population collapse depending on initial population size under constant and seasonal conditions; (d) Results from sensitivity analysis indicate the queen's egg-laying may have the greatest effect on colony population size. The brood death rate and the colony size at which brood survivability is half-maximal were also shown to be highly sensitive with an inverse correlation to the colony population size. Our results provide insights on the effects of seasonality on the dynamics.
|
1804.06984
|
Seung Ki Baek
|
Yohsuke Murase and Seung Ki Baek
|
Seven rules to avoid the tragedy of the commons
|
11 pages, 4 figures
| null |
10.1016/j.jtbi.2018.04.027
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cooperation among self-interested players in a social dilemma is fragile and
easily interrupted by mistakes. In this work, we study the repeated $n$-person
public-goods game and search for a strategy that forms a cooperative Nash
equilibrium in the presence of implementation error with a guarantee that the
resulting payoff will be no less than any of the co-players'. By enumerating
strategic possibilities for $n=3$, we show that such a strategy indeed exists
when its memory length $m$ equals three. This means that a deterministic strategy
can be publicly employed to stabilize cooperation against error while avoiding
the risk of being exploited. We furthermore show that, for the general $n$-person
public-goods game, $m \geq n$ is necessary to satisfy the above criteria.
|
[
{
"created": "Thu, 19 Apr 2018 02:55:03 GMT",
"version": "v1"
}
] |
2018-04-20
|
[
[
"Murase",
"Yohsuke",
""
],
[
"Baek",
"Seung Ki",
""
]
] |
Cooperation among self-interested players in a social dilemma is fragile and easily interrupted by mistakes. In this work, we study the repeated $n$-person public-goods game and search for a strategy that forms a cooperative Nash equilibrium in the presence of implementation error with a guarantee that the resulting payoff will be no less than any of the co-players'. By enumerating strategic possibilities for $n=3$, we show that such a strategy indeed exists when its memory length $m$ equals three. This means that a deterministic strategy can be publicly employed to stabilize cooperation against error while avoiding the risk of being exploited. We furthermore show that, for the general $n$-person public-goods game, $m \geq n$ is necessary to satisfy the above criteria.
|
2103.07107
|
Maxime Lenormand
|
Nicolas Dubos, Cl\'ementine Pr\'eau, Maxime Lenormand, Guillaume
Papuga, Sophie Montsarrat, Pierre Denelle, Marine Le Louarn, Stien Heremans,
May Roel, Philip Roche and Sandra Luque
|
Assessing the effect of sample bias correction in species distribution
models
|
16 pages, 6 figures + Appendix
|
Ecological Indicators 145, 109487 (2022)
|
10.1016/j.ecolind.2022.109487
| null |
q-bio.PE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-source biodiversity databases contain a large amount of species
occurrence records, but these are often spatially biased, which affects the
reliability of species distribution models based on these records. Sample bias
correction techniques include data filtering at the cost of record numbers or
require considerable additional sampling effort. However, independent data are
rarely available and assessment of the correction technique must rely on
performance metrics computed with subsets of the only available (biased) data,
which may be misleading. Here we assess the extent to which an acknowledged
sample bias correction technique is likely to improve models' ability to
predict species distributions in the absence of independent data. We assessed
the variation in model predictions induced by the correction and model
stochasticity. We present an index of the effect of correction relative to
model stochasticity, the Relative Overlap Index (ROI). We tested whether the
ROI better represented the effect of correction than classic performance
metrics and absolute overlap metrics using 64 vertebrate species and 21 virtual
species with a generated sample bias. When based on absolute overlaps and
cross-validation performance metrics, we found no effect of correction, except
for cAUC. When considering its effect relative to model stochasticity, the
effect of correction depended on the site and the species. Virtual species
enabled us to verify that the correction actually improved distribution
predictions and the biological relevance of the selected variables at the sites
with a clear gradient of sample bias, and when species distribution predictors
are not correlated with sample bias patterns.
|
[
{
"created": "Fri, 12 Mar 2021 07:02:04 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Nov 2021 10:12:06 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Oct 2022 06:50:50 GMT",
"version": "v3"
}
] |
2022-10-28
|
[
[
"Dubos",
"Nicolas",
""
],
[
"Préau",
"Clémentine",
""
],
[
"Lenormand",
"Maxime",
""
],
[
"Papuga",
"Guillaume",
""
],
[
"Montsarrat",
"Sophie",
""
],
[
"Denelle",
"Pierre",
""
],
[
"Louarn",
"Marine Le",
""
],
[
"Heremans",
"Stien",
""
],
[
"Roel",
"May",
""
],
[
"Roche",
"Philip",
""
],
[
"Luque",
"Sandra",
""
]
] |
Open-source biodiversity databases contain a large amount of species occurrence records, but these are often spatially biased, which affects the reliability of species distribution models based on these records. Sample bias correction techniques include data filtering at the cost of record numbers or require considerable additional sampling effort. However, independent data are rarely available and assessment of the correction technique must rely on performance metrics computed with subsets of the only available (biased) data, which may be misleading. Here we assess the extent to which an acknowledged sample bias correction technique is likely to improve models' ability to predict species distributions in the absence of independent data. We assessed the variation in model predictions induced by the correction and model stochasticity. We present an index of the effect of correction relative to model stochasticity, the Relative Overlap Index (ROI). We tested whether the ROI better represented the effect of correction than classic performance metrics and absolute overlap metrics using 64 vertebrate species and 21 virtual species with a generated sample bias. When based on absolute overlaps and cross-validation performance metrics, we found no effect of correction, except for cAUC. When considering its effect relative to model stochasticity, the effect of correction depended on the site and the species. Virtual species enabled us to verify that the correction actually improved distribution predictions and the biological relevance of the selected variables at the sites with a clear gradient of sample bias, and when species distribution predictors are not correlated with sample bias patterns.
|
1909.13667
|
Matus Medo
|
Matus Medo, Daniel M. Aebersold, Michaela Medova
|
ProtRank: Bypassing the imputation of missing values in differential
expression analysis of proteomic data
|
18 pages, 7 figures
| null | null | null |
q-bio.QM q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data from discovery proteomic and phosphoproteomic experiments typically
include missing values that correspond to proteins that have not been
identified in the analyzed sample. Replacing the missing values with random
numbers, a process known as "imputation", avoids apparent infinite fold-change
values. However, the procedure comes at a cost: Imputing a large number of
missing values has the potential to significantly impact the results of the
subsequent differential expression analysis. We propose a method that
identifies differentially expressed proteins by ranking their observed changes
with respect to the changes observed for other proteins. Missing values are
taken into account by this method directly, without the need to impute them. We
illustrate the performance of the new method on two distinct datasets and show
that it is robust to missing values and, at the same time, provides results
that are otherwise similar to those obtained with edgeR, which is a
state-of-the-art differential expression analysis method. The new method for
the differential expression analysis of proteomic data is available as an
easy-to-use Python package.
|
[
{
"created": "Mon, 30 Sep 2019 13:05:31 GMT",
"version": "v1"
}
] |
2019-10-01
|
[
[
"Medo",
"Matus",
""
],
[
"Aebersold",
"Daniel M.",
""
],
[
"Medova",
"Michaela",
""
]
] |
Data from discovery proteomic and phosphoproteomic experiments typically include missing values that correspond to proteins that have not been identified in the analyzed sample. Replacing the missing values with random numbers, a process known as "imputation", avoids apparent infinite fold-change values. However, the procedure comes at a cost: Imputing a large number of missing values has the potential to significantly impact the results of the subsequent differential expression analysis. We propose a method that identifies differentially expressed proteins by ranking their observed changes with respect to the changes observed for other proteins. Missing values are taken into account by this method directly, without the need to impute them. We illustrate the performance of the new method on two distinct datasets and show that it is robust to missing values and, at the same time, provides results that are otherwise similar to those obtained with edgeR, which is a state-of-the-art differential expression analysis method. The new method for the differential expression analysis of proteomic data is available as an easy-to-use Python package.
|
2007.06623
|
Joe Greener
|
Joe G Greener, Nikita Desai, Shaun M Kandathil, David T Jones
|
Near-complete protein structural modelling of the minimal genome
|
JGG and ND contributed equally to the work
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein tertiary structure prediction has improved dramatically in recent
years. A considerable fraction of various proteomes can be modelled in the
absence of structural templates. We ask whether our DMPfold method can model
all the proteins without templates in the JCVI-syn3.0 minimal genome, which
contains 438 proteins. We find that a useful tertiary structure annotation can
be provided for all but 10 proteins. The models may help annotate function in
cases where it is unknown, and provide coverage for 29 predicted
protein-protein interactions which lacked monomer models. We also show that
DMPfold performs well on proteins with structures released since initial
publication. It is likely that the minimal genome will have complete structural
coverage within a few years.
|
[
{
"created": "Mon, 13 Jul 2020 18:53:03 GMT",
"version": "v1"
}
] |
2020-07-15
|
[
[
"Greener",
"Joe G",
""
],
[
"Desai",
"Nikita",
""
],
[
"Kandathil",
"Shaun M",
""
],
[
"Jones",
"David T",
""
]
] |
Protein tertiary structure prediction has improved dramatically in recent years. A considerable fraction of various proteomes can be modelled in the absence of structural templates. We ask whether our DMPfold method can model all the proteins without templates in the JCVI-syn3.0 minimal genome, which contains 438 proteins. We find that a useful tertiary structure annotation can be provided for all but 10 proteins. The models may help annotate function in cases where it is unknown, and provide coverage for 29 predicted protein-protein interactions which lacked monomer models. We also show that DMPfold performs well on proteins with structures released since initial publication. It is likely that the minimal genome will have complete structural coverage within a few years.
|
2005.13438
|
Hyeoncheol Cho
|
Hyeoncheol Cho, Eok Kyun Lee, Insung S. Choi
|
InteractionNet: Modeling and Explaining of Noncovalent Protein-Ligand
Interactions with Noncovalent Graph Neural Network and Layer-Wise Relevance
Propagation
| null | null | null | null |
q-bio.BM cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Expanding the scope of graph-based, deep-learning models to noncovalent
protein-ligand interactions has earned increasing attention in structure-based
drug design. Modeling the protein-ligand interactions with graph neural
networks (GNNs) has experienced difficulties in the conversion of
protein-ligand complex structures into the graph representation and left
questions regarding whether the trained models properly learn the appropriate
noncovalent interactions. Here, we proposed a GNN architecture, denoted as
InteractionNet, which learns two separated molecular graphs, being covalent and
noncovalent, through distinct convolution layers. We also analyzed the
InteractionNet model with an explainability technique, i.e., layer-wise
relevance propagation, for examination of the chemical relevance of the model's
predictions. Separation of the covalent and noncovalent convolutional steps
made it possible to evaluate the contribution of each step independently and
analyze the graph-building strategy for noncovalent interactions. We applied
InteractionNet to the prediction of protein-ligand binding affinity and showed
that our model successfully predicted the noncovalent interactions in both
performance and relevance in chemical interpretation.
|
[
{
"created": "Tue, 12 May 2020 12:46:44 GMT",
"version": "v1"
}
] |
2020-05-28
|
[
[
"Cho",
"Hyeoncheol",
""
],
[
"Lee",
"Eok Kyun",
""
],
[
"Choi",
"Insung S.",
""
]
] |
Expanding the scope of graph-based, deep-learning models to noncovalent protein-ligand interactions has earned increasing attention in structure-based drug design. Modeling the protein-ligand interactions with graph neural networks (GNNs) has experienced difficulties in the conversion of protein-ligand complex structures into the graph representation and left questions regarding whether the trained models properly learn the appropriate noncovalent interactions. Here, we proposed a GNN architecture, denoted as InteractionNet, which learns two separated molecular graphs, being covalent and noncovalent, through distinct convolution layers. We also analyzed the InteractionNet model with an explainability technique, i.e., layer-wise relevance propagation, for examination of the chemical relevance of the model's predictions. Separation of the covalent and noncovalent convolutional steps made it possible to evaluate the contribution of each step independently and analyze the graph-building strategy for noncovalent interactions. We applied InteractionNet to the prediction of protein-ligand binding affinity and showed that our model successfully predicted the noncovalent interactions in both performance and relevance in chemical interpretation.
|
0901.0287
|
Aleksandar Stojmirovi\'c
|
Aleksandar Stojmirovi\'c and Yi-Kuo Yu
|
Information flow in interaction networks II: channels, path lengths and
potentials
|
Minor changes from v3. 30 pages, 7 figures. Plain LaTeX format. This
version contains some additional material compared to the journal submission:
two figures, one appendix and a few paragraphs
|
J Comput Biol. 19(4):379-403, 2012
|
10.1089/cmb.2010.0228
| null |
q-bio.MN
|
http://creativecommons.org/licenses/publicdomain/
|
In our previous publication, a framework for information flow in interaction
networks based on random walks with damping was formulated with two fundamental
modes: emitting and absorbing. While many other network analysis methods based
on random walks or equivalent notions have been developed before and after our
earlier work, one can show that they can all be mapped to one of the two modes.
In addition to these two fundamental modes, a major strength of our earlier
formalism was its accommodation of context-specific directed information flow
that yielded plausible and meaningful biological interpretation of protein
functions and pathways. However, the directed flow from origins to destinations
was induced via a potential function that was heuristic. Here, with a
theoretically sound approach called the channel mode, we extend our earlier
work for directed information flow. This is achieved by constructing a
potential function facilitating a purely probabilistic interpretation of the
channel mode. For each network node, the channel mode combines the solutions of
emitting and absorbing modes in the same context, producing what we call a
channel tensor. The entries of the channel tensor at each node can be
interpreted as the amount of flow passing through that node from an origin to a
destination. Similarly to our earlier model, the channel mode encompasses
damping as a free parameter that controls the locality of information flow.
Through examples involving the yeast pheromone response pathway, we illustrate
the versatility and stability of our new framework.
|
[
{
"created": "Fri, 2 Jan 2009 21:45:34 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2010 00:11:10 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Dec 2011 23:04:35 GMT",
"version": "v3"
},
{
"created": "Thu, 19 Apr 2012 16:15:19 GMT",
"version": "v4"
}
] |
2012-04-20
|
[
[
"Stojmirović",
"Aleksandar",
""
],
[
"Yu",
"Yi-Kuo",
""
]
] |
In our previous publication, a framework for information flow in interaction networks based on random walks with damping was formulated with two fundamental modes: emitting and absorbing. While many other network analysis methods based on random walks or equivalent notions have been developed before and after our earlier work, one can show that they can all be mapped to one of the two modes. In addition to these two fundamental modes, a major strength of our earlier formalism was its accommodation of context-specific directed information flow that yielded plausible and meaningful biological interpretation of protein functions and pathways. However, the directed flow from origins to destinations was induced via a potential function that was heuristic. Here, with a theoretically sound approach called the channel mode, we extend our earlier work for directed information flow. This is achieved by constructing a potential function facilitating a purely probabilistic interpretation of the channel mode. For each network node, the channel mode combines the solutions of emitting and absorbing modes in the same context, producing what we call a channel tensor. The entries of the channel tensor at each node can be interpreted as the amount of flow passing through that node from an origin to a destination. Similarly to our earlier model, the channel mode encompasses damping as a free parameter that controls the locality of information flow. Through examples involving the yeast pheromone response pathway, we illustrate the versatility and stability of our new framework.
|
1111.6489
|
Taiki Takahashi
|
Taiki Takahashi (1), Mizuho Shinada (1), Keigo Inukai (1,2), Shigehito
Tanida (1), Chisato Takahashi (1), Nobuhiro Mifune (1,2), Haruto Takagishi
(1,2), Yutaka Horita (1,2), Hirofumi Hashimoto (1,2), Kunihiro Yokota (1),
Tatsuya Kameda (1), Toshio Yamagishi (1) ((1) Department of Behavioral
Science, Hokkaido University, (2) Japan Society for the Promotion of
Sciences)
|
Stress hormones predict hyperbolic time-discount rates six months later
in adults
| null |
Neuro Endocrinol Lett. 2010;31(5):616-621
| null | null |
q-bio.NC q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objectives: Stress hormones have been associated with temporal discounting.
Although time-discount rate is shown to be stable over a long term, no study to
date examines whether individual differences in stress hormones could predict
individuals' time-discount rates in the relatively distant future (e.g., six
months later), which is of interest in neuroeconomics of stress-addiction
association.
Methods: We assessed 87 participants' salivary stress hormone (cortisol,
cortisone, and alpha-amylase) levels and hyperbolic discounting of delayed
rewards consisting of three magnitudes, at the time-interval of six months. For
salivary steroid assays, we employed a liquid chromatography/mass spectrometry
(LC/MS) method. The correlations between the stress hormone levels and
time-discount rates were examined.
Results: We observed that salivary alpha-amylase (sAA) levels were negatively
associated with time-discount rates in never-smokers. Notably, salivary levels
of stress steroids (i.e., cortisol and cortisone) negatively and positively
related to time-discount rates in men and women, respectively, in
never-smokers. Ever-smokers' discount rates were not predicted from these
stress hormone levels.
Conclusions: Individual differences in stress hormone levels predict
impulsivity in temporal discounting in the future. There are sex differences in
the effect of stress steroids on temporal discounting, while there was no sex
difference in the relationship between sAA and temporal discounting.
|
[
{
"created": "Tue, 22 Nov 2011 15:01:14 GMT",
"version": "v1"
}
] |
2012-12-04
|
[
[
"Takahashi",
"Taiki",
""
],
[
"Shinada",
"Mizuho",
""
],
[
"Inukai",
"Keigo",
""
],
[
"Tanida",
"Shigehito",
""
],
[
"Takahashi",
"Chisato",
""
],
[
"Mifune",
"Nobuhiro",
""
],
[
"Takagishi",
"Haruto",
""
],
[
"Horita",
"Yutaka",
""
],
[
"Hashimoto",
"Hirofumi",
""
],
[
"Yokota",
"Kunihiro",
""
],
[
"Kameda",
"Tatsuya",
""
],
[
"Yamagishi",
"Toshio",
""
]
] |
Objectives: Stress hormones have been associated with temporal discounting. Although time-discount rate is shown to be stable over a long term, no study to date examines whether individual differences in stress hormones could predict individuals' time-discount rates in the relatively distant future (e.g., six months later), which is of interest in neuroeconomics of stress-addiction association. Methods: We assessed 87 participants' salivary stress hormone (cortisol, cortisone, and alpha-amylase) levels and hyperbolic discounting of delayed rewards consisting of three magnitudes, at the time-interval of six months. For salivary steroid assays, we employed a liquid chromatography/mass spectrometry (LC/MS) method. The correlations between the stress hormone levels and time-discount rates were examined. Results: We observed that salivary alpha-amylase (sAA) levels were negatively associated with time-discount rates in never-smokers. Notably, salivary levels of stress steroids (i.e., cortisol and cortisone) negatively and positively related to time-discount rates in men and women, respectively, in never-smokers. Ever-smokers' discount rates were not predicted from these stress hormone levels. Conclusions: Individual differences in stress hormone levels predict impulsivity in temporal discounting in the future. There are sex differences in the effect of stress steroids on temporal discounting, while there was no sex difference in the relationship between sAA and temporal discounting.
|
1901.01059
|
Jochen Einbeck
|
Daniel Bonetti, Alexandre Delbem, Dorival Le\~ao, Jochen Einbeck
|
Estimation of Distribution Algorithm for Protein Structure Prediction
|
45 pages, 13 figures
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proteins are essential for maintaining life. For example, knowing the
structure of a protein, cell regulatory mechanisms of organisms can be modeled,
supporting the development of disease treatments or the understanding of
relationships between protein structures and food attributes. However,
discovering the structure of a protein can be a difficult and expensive task,
since it is hard to explore the large search space even for a small protein.
Template-based methods (coarse-grained, homology, threading, etc.) depend on
Prior Knowledge (PK) of proteins determined using other methods such as X-Ray
Crystallography or Nuclear Magnetic Resonance. On the other hand, template-free
methods (full-atom and ab initio) rely on atoms' physical-chemical properties to
predict protein structures. In comparison with other approaches, the Estimation
of Distribution Algorithms (EDAs) can require significantly less PK, suggesting
that they could be adequate for proteins with a low level of PK. Finding an EDA able
to handle both prediction quality and computational time is a difficult task,
since the two are strongly inversely correlated. We developed an EDA specific for
the ab initio Protein Structure Prediction (PSP) problem using full-atom
representation. We developed one univariate and two bivariate probabilistic
models in order to design a proper EDA for PSP. The bivariate models capture
relationships between dihedral angles $\phi$ and $\psi$ within an amino acid.
Furthermore, we compared the proposed EDA with other approaches from the
literature. We noticed that even a relatively simple algorithm such as Random
Walk can find the correct solution, but it would require a large amount of
prior knowledge (biased prediction). On the other hand, our EDA was able to
correctly predict with no prior knowledge at all, characterizing such a
prediction as pure ab initio.
|
[
{
"created": "Fri, 4 Jan 2019 11:26:42 GMT",
"version": "v1"
}
] |
2019-01-07
|
[
[
"Bonetti",
"Daniel",
""
],
[
"Delbem",
"Alexandre",
""
],
[
"Leão",
"Dorival",
""
],
[
"Einbeck",
"Jochen",
""
]
] |
Proteins are essential for maintaining life. For example, knowing the structure of a protein, cell regulatory mechanisms of organisms can be modeled, supporting the development of disease treatments or the understanding of relationships between protein structures and food attributes. However, discovering the structure of a protein can be a difficult and expensive task, since it is hard to explore the large search space even for a small protein. Template-based methods (coarse-grained, homology, threading, etc.) depend on Prior Knowledge (PK) of proteins determined using other methods such as X-Ray Crystallography or Nuclear Magnetic Resonance. On the other hand, template-free methods (full-atom and ab initio) rely on atoms' physical-chemical properties to predict protein structures. In comparison with other approaches, the Estimation of Distribution Algorithms (EDAs) can require significantly less PK, suggesting that they could be adequate for proteins with a low level of PK. Finding an EDA able to handle both prediction quality and computational time is a difficult task, since the two are strongly inversely correlated. We developed an EDA specific for the ab initio Protein Structure Prediction (PSP) problem using full-atom representation. We developed one univariate and two bivariate probabilistic models in order to design a proper EDA for PSP. The bivariate models capture relationships between dihedral angles $\phi$ and $\psi$ within an amino acid. Furthermore, we compared the proposed EDA with other approaches from the literature. We noticed that even a relatively simple algorithm such as Random Walk can find the correct solution, but it would require a large amount of prior knowledge (biased prediction). On the other hand, our EDA was able to correctly predict with no prior knowledge at all, characterizing such a prediction as pure ab initio.
|
1312.5231
|
Xiaofeng Liu
|
Xiaofeng Liu, Ning Xu and Aimin Jiang
|
Tortuosity Entropy: a measure of spatial complexity of behavioral
changes in animal movement data
|
12 pages, 6 figures
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of animal movement analysis is to understand how organisms explore
and exploit the complex and varying environment. Animals usually exhibit varied
and complicated movements, from apparently deterministic behaviors to highly
random ones. This is critical for assessing movement efficiency and strategies
that are used to quantify and analyze movement trajectories. Here we introduce
a tortuosity entropy (TorEn) based on comparing parameters, e.g. heading,
bearing, and speed, of consecutive points in a movement trajectory, which is a
simple measure for quantifying behavioral change in animal movement data at a
fine scale. In our approach, the differences between pairwise successive track
points are transformed into symbolic sequences; then we map these symbols into
a group of pattern vectors and calculate the information entropy of the pattern
vectors. Tortuosity entropy can be easily applied to arbitrary real-world
data-deterministic or stochastic, stationary or non-stationary. We test the
algorithm on both simulated trajectories and real trajectories and show that
both mixed segments in synthetic data and different phases in real movement
data are identified accurately. The results show that the algorithm is
applicable to various situations, indicating that our approach is a promising
tool to reveal the behavioral pattern in movement data.
|
[
{
"created": "Wed, 18 Dec 2013 17:27:10 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jan 2014 12:24:51 GMT",
"version": "v2"
}
] |
2014-01-17
|
[
[
"Liu",
"Xiaofeng",
""
],
[
"Xu",
"Ning",
""
],
[
"Jiang",
"Aimin",
""
]
] |
The goal of animal movement analysis is to understand how organisms explore and exploit the complex and varying environment. Animals usually exhibit varied and complicated movements, from apparently deterministic behaviors to highly random ones. This is critical for assessing movement efficiency and strategies that are used to quantify and analyze movement trajectories. Here we introduce a tortuosity entropy (TorEn) based on comparing parameters, e.g. heading, bearing, and speed, of consecutive points in a movement trajectory, which is a simple measure for quantifying behavioral change in animal movement data at a fine scale. In our approach, the differences between pairwise successive track points are transformed into symbolic sequences; then we map these symbols into a group of pattern vectors and calculate the information entropy of the pattern vectors. Tortuosity entropy can be easily applied to arbitrary real-world data-deterministic or stochastic, stationary or non-stationary. We test the algorithm on both simulated trajectories and real trajectories and show that both mixed segments in synthetic data and different phases in real movement data are identified accurately. The results show that the algorithm is applicable to various situations, indicating that our approach is a promising tool to reveal the behavioral pattern in movement data.
|
2106.14362
|
Nan Zheng
|
Nan Zheng, Vincent Fitzpatrick, Ran Cheng, Linli Shi, David L. Kaplan,
Chen Yang
|
Photoacoustic Silk Scaffolds for Neural stimulation and Regeneration
| null | null | null | null |
q-bio.NC q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural interfaces using biocompatible scaffolds provide crucial properties
for the functional repair of nerve injuries and neurodegenerative diseases,
including cell adhesion, structural support, and mass transport. Neural
stimulation has also been found to be effective in promoting neural
regeneration. This work provides a new strategy to integrate photoacoustic (PA)
neural stimulation into hydrogel scaffolds using a nanocomposite hydrogel
approach. Specifically, polyethylene glycol (PEG)-functionalized carbon
nanotubes (CNT), highly efficient photoacoustic agents, are embedded into silk
fibroin to form biocompatible and soft photoacoustic materials. We show that
these photoacoustic functional scaffolds enable non-genetic activation of
neurons with a spatial precision defined by the area of light illumination,
promoting neuron regeneration. These CNT/silk scaffolds offered reliable and
repeatable photoacoustic neural stimulation. 94% of photoacoustic stimulated
neurons exhibit a fluorescence change larger than 10% in calcium imaging in the
light illuminated area. The on-demand photoacoustic stimulation increased
neurite outgrowth by 1.74-fold in a dorsal root ganglion model, when compared
to the unstimulated group. We also confirmed that photoacoustic neural
stimulation promoted neurite outgrowth by impacting the brain-derived
neurotrophic factor (BDNF) pathway. As a multifunctional neural scaffold,
CNT/silk scaffolds demonstrated non-genetic PA neural stimulation functions and
promoted neurite outgrowth, providing a new method for non-pharmacological
neural regeneration.
|
[
{
"created": "Mon, 28 Jun 2021 01:38:39 GMT",
"version": "v1"
}
] |
2021-06-29
|
[
[
"Zheng",
"Nan",
""
],
[
"Fitzpatrick",
"Vincent",
""
],
[
"Cheng",
"Ran",
""
],
[
"Shi",
"Linli",
""
],
[
"Kaplan",
"David L.",
""
],
[
"Yang",
"Chen",
""
]
] |
Neural interfaces using biocompatible scaffolds provide crucial properties for the functional repair of nerve injuries and neurodegenerative diseases, including cell adhesion, structural support, and mass transport. Neural stimulation has also been found to be effective in promoting neural regeneration. This work provides a new strategy to integrate photoacoustic (PA) neural stimulation into hydrogel scaffolds using a nanocomposite hydrogel approach. Specifically, polyethylene glycol (PEG)-functionalized carbon nanotubes (CNT), highly efficient photoacoustic agents, are embedded into silk fibroin to form biocompatible and soft photoacoustic materials. We show that these photoacoustic functional scaffolds enable non-genetic activation of neurons with a spatial precision defined by the area of light illumination, promoting neuron regeneration. These CNT/silk scaffolds offered reliable and repeatable photoacoustic neural stimulation. 94% of photoacoustic stimulated neurons exhibit a fluorescence change larger than 10% in calcium imaging in the light illuminated area. The on-demand photoacoustic stimulation increased neurite outgrowth by 1.74-fold in a dorsal root ganglion model, when compared to the unstimulated group. We also confirmed that photoacoustic neural stimulation promoted neurite outgrowth by impacting the brain-derived neurotrophic factor (BDNF) pathway. As a multifunctional neural scaffold, CNT/silk scaffolds demonstrated non-genetic PA neural stimulation functions and promoted neurite outgrowth, providing a new method for non-pharmacological neural regeneration.
|
2403.12987
|
Bowen Gao
|
Bowen Gao, Minsi Ren, Yuyan Ni, Yanwen Huang, Bo Qiang, Zhi-Ming Ma,
Wei-Ying Ma, Yanyan Lan
|
Rethinking Specificity in SBDD: Leveraging Delta Score and Energy-Guided
Diffusion
| null | null | null | null |
q-bio.BM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the field of Structure-based Drug Design (SBDD), deep learning-based
generative models have achieved outstanding performance in terms of docking
score. However, further study shows that the existing molecular generative
methods and docking scores both lack consideration of specificity,
which means that generated molecules bind to almost every protein
pocket with high affinity. To address this, we introduce the Delta Score, a new
metric for evaluating the specificity of molecular binding. To further
incorporate this insight for generation, we develop an innovative energy-guided
approach using contrastive learning, with active compounds as decoys, to direct
generative models toward creating molecules with high specificity. Our
empirical results show that this method not only enhances the delta score but
also maintains or improves traditional docking scores, successfully bridging
the gap between SBDD and real-world needs.
|
[
{
"created": "Mon, 4 Mar 2024 07:40:25 GMT",
"version": "v1"
}
] |
2024-03-21
|
[
[
"Gao",
"Bowen",
""
],
[
"Ren",
"Minsi",
""
],
[
"Ni",
"Yuyan",
""
],
[
"Huang",
"Yanwen",
""
],
[
"Qiang",
"Bo",
""
],
[
"Ma",
"Zhi-Ming",
""
],
[
"Ma",
"Wei-Ying",
""
],
[
"Lan",
"Yanyan",
""
]
] |
In the field of Structure-based Drug Design (SBDD), deep learning-based generative models have achieved outstanding performance in terms of docking score. However, further study shows that the existing molecular generative methods and docking scores both lack consideration of specificity, which means that generated molecules bind to almost every protein pocket with high affinity. To address this, we introduce the Delta Score, a new metric for evaluating the specificity of molecular binding. To further incorporate this insight for generation, we develop an innovative energy-guided approach using contrastive learning, with active compounds as decoys, to direct generative models toward creating molecules with high specificity. Our empirical results show that this method not only enhances the delta score but also maintains or improves traditional docking scores, successfully bridging the gap between SBDD and real-world needs.
|
2312.14489
|
Yann Sakref
|
Yann Sakref, Olivier Rivoire
|
On the exclusion of exponential autocatalysts by sub-exponential
autocatalysts
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Selection among autocatalytic species fundamentally depends on their growth
law: exponential species, whose number of copies grows exponentially, are
mutually exclusive, while sub-exponential ones, whose number of copies grows
polynomially, can coexist. Here we consider competitions between autocatalytic
species with different growth laws and make the simple yet counterintuitive
observation that sub-exponential species can exclude exponential ones while the
reverse is, in principle, impossible. This observation has implications for
scenarios pertaining to the emergence of natural selection.
|
[
{
"created": "Fri, 22 Dec 2023 07:35:41 GMT",
"version": "v1"
}
] |
2023-12-25
|
[
[
"Sakref",
"Yann",
""
],
[
"Rivoire",
"Olivier",
""
]
] |
Selection among autocatalytic species fundamentally depends on their growth law: exponential species, whose number of copies grows exponentially, are mutually exclusive, while sub-exponential ones, whose number of copies grows polynomially, can coexist. Here we consider competitions between autocatalytic species with different growth laws and make the simple yet counterintuitive observation that sub-exponential species can exclude exponential ones while the reverse is, in principle, impossible. This observation has implications for scenarios pertaining to the emergence of natural selection.
|
1705.03398
|
Anvar Shukurov
|
Anvar Shukurov and Mykhailo Videiko
|
The evolving system of Trypillian settlements
|
36 pages, 14 figures, submitted to Journal of Archaeological Science
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Archaeological settlement systems are usually analysed in terms of the
relation between the rank of a settlement and its size (Zipf's law). We argue
that this approach is unreliable, and can be misleading, in application to
archaeological data where the recovery rate of settlements can be low and their
size estimates are often approximate at best. An alternative framework for the
settlement data interpretation, better connected with the theoretical concepts
of the stochastic evolution of settlement systems, is applied to the evolving
system of settlements of the Late Neolithic-Bronze Age Trypillia cultural
complex (5,400-2,800 BC) in modern Ukraine. The stochastic evolution model
provides a consistent and accurate explanation of the frequency of occurrence
of Trypillian settlement areas in the range from 0.05 to 500 ha. Thus
validated, the model leads to reliable estimates of the typical size of a newly
formed settlement as well as the growth rates of the total number of
settlements and their areas. The parameters of the settlement system thus
revealed are consistent with palaeoeconomy reconstructions for the Trypillia
area.
|
[
{
"created": "Tue, 9 May 2017 15:47:54 GMT",
"version": "v1"
}
] |
2017-05-10
|
[
[
"Shukurov",
"Anvar",
""
],
[
"Videiko",
"Mykhailo",
""
]
] |
Archaeological settlement systems are usually analysed in terms of the relation between the rank of a settlement and its size (Zipf's law). We argue that this approach is unreliable, and can be misleading, in application to archaeological data where the recovery rate of settlements can be low and their size estimates are often approximate at best. An alternative framework for the settlement data interpretation, better connected with the theoretical concepts of the stochastic evolution of settlement systems, is applied to the evolving system of settlements of the Late Neolithic-Bronze Age Trypillia cultural complex (5,400-2,800 BC) in modern Ukraine. The stochastic evolution model provides a consistent and accurate explanation of the frequency of occurrence of Trypillian settlement areas in the range from 0.05 to 500 ha. Thus validated, the model leads to reliable estimates of the typical size of a newly formed settlement as well as the growth rates of the total number of settlements and their areas. The parameters of the settlement system thus revealed are consistent with palaeoeconomy reconstructions for the Trypillia area.
|
0910.4253
|
Andrea Angelini
|
A. Angelini, A. Amato, G. Bianconi, B. Bassetti, M. Cosentino
Lagomarsino
|
Mean-field methods in evolutionary duplication-innovation-loss models
for the genome-level repertoire of protein domains
|
10 Figures, 2 Tables
| null |
10.1103/PhysRevE.81.021919
| null |
q-bio.GN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a combined mean-field and simulation approach to different models
describing the dynamics of classes formed by elements that can appear,
disappear or copy themselves. These models, related to a paradigmatic
duplication-innovation model known as the Chinese Restaurant Process, are devised
to reproduce the scaling behavior observed in the genome-wide repertoire of
protein domains of all known species. In view of these data, we discuss the
qualitative and quantitative differences of the alternative model formulations,
focusing in particular on the roles of element loss and of the specificity of
empirical domain classes.
|
[
{
"created": "Thu, 22 Oct 2009 17:18:15 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jan 2010 14:47:46 GMT",
"version": "v2"
}
] |
2015-05-14
|
[
[
"Angelini",
"A.",
""
],
[
"Amato",
"A.",
""
],
[
"Bianconi",
"G.",
""
],
[
"Bassetti",
"B.",
""
],
[
"Lagomarsino",
"M. Cosentino",
""
]
] |
We present a combined mean-field and simulation approach to different models describing the dynamics of classes formed by elements that can appear, disappear or copy themselves. These models, related to a paradigmatic duplication-innovation model known as the Chinese Restaurant Process, are devised to reproduce the scaling behavior observed in the genome-wide repertoire of protein domains of all known species. In view of these data, we discuss the qualitative and quantitative differences of the alternative model formulations, focusing in particular on the roles of element loss and of the specificity of empirical domain classes.
|