id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1203.0875 | Jing Kang Dr. | Jing Kang, Bing Xu, Ye Yao, Wei Lin, Conor Hennessy, Peter Fraser,
Jianfeng Feng | A Dynamical Model Reveals Gene Co-Localizations in Nucleus | 16 pages, 7 figures; PloS Computational Biology 2011 | null | 10.1371/journal.pcbi.1002094 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Co-localization of networks of genes in the nucleus is thought to play an
important role in determining gene expression patterns. Based upon experimental
data, we built a dynamical model to test whether pure diffusion could account
for the observed co-localization of genes within a defined subnuclear region. A
simple standard Brownian motion model in two and three dimensions shows that
preferential co-localization is possible for co-regulated genes without any
direct interaction, and suggests the occurrence may be due to a limitation in
the number of available transcription factors. Experimental data of chromatin
movements demonstrates that fractional rather than standard Brownian motion is
more appropriate to model gene mobilizations, and we tested our dynamical model
against recent static experimental data, using a sub-diffusion process by which
the genes tend to colocalize more easily. Moreover, in order to compare our
model with recently obtained experimental data, we studied the association
level between genes and factors, and presented data supporting the validation
of this dynamic model. As further applications of our model, we applied it to
test against more biological observations. We found that increasing
transcription factor number, rather than factory number and nucleus size, might
be the reason for decreasing gene co-localization. In the scenario of
frequency- or amplitude-modulation of transcription factors, our model
predicted that frequency-modulation may increase the co-localization between
its targeted genes.
| [
{
"created": "Mon, 5 Mar 2012 11:57:06 GMT",
"version": "v1"
}
] | 2012-03-06 | [
[
"Kang",
"Jing",
""
],
[
"Xu",
"Bing",
""
],
[
"Yao",
"Ye",
""
],
[
"Lin",
"Wei",
""
],
[
"Hennessy",
"Conor",
""
],
[
"Fraser",
"Peter",
""
],
[
"Feng",
"Jianfeng",
""
]
] | Co-localization of networks of genes in the nucleus is thought to play an important role in determining gene expression patterns. Based upon experimental data, we built a dynamical model to test whether pure diffusion could account for the observed co-localization of genes within a defined subnuclear region. A simple standard Brownian motion model in two and three dimensions shows that preferential co-localization is possible for co-regulated genes without any direct interaction, and suggests the occurrence may be due to a limitation in the number of available transcription factors. Experimental data of chromatin movements demonstrates that fractional rather than standard Brownian motion is more appropriate to model gene mobilizations, and we tested our dynamical model against recent static experimental data, using a sub-diffusion process by which the genes tend to colocalize more easily. Moreover, in order to compare our model with recently obtained experimental data, we studied the association level between genes and factors, and presented data supporting the validation of this dynamic model. As further applications of our model, we applied it to test against more biological observations. We found that increasing transcription factor number, rather than factory number and nucleus size, might be the reason for decreasing gene co-localization. In the scenario of frequency- or amplitude-modulation of transcription factors, our model predicted that frequency-modulation may increase the co-localization between its targeted genes. |
0709.3606 | Baruch Meerson | Michael Assaf, Baruch Meerson | Noise enhanced persistence in a biochemical regulatory network with
feedback control | 4 pages, 4 figures, submitted for publication | Phys. Rev. Lett. 100, 058105 (2008) | 10.1103/PhysRevLett.100.058105 | null | q-bio.MN cond-mat.stat-mech | null | We find that discrete noise of inhibiting (signal) molecules can greatly
delay the extinction of plasmids in a plasmid replication system: a
prototypical biochemical regulatory network. We calculate the probability
distribution of the metastable state of the plasmids and show on this example
that the reaction rate equations may fail in predicting the average number of
regulated molecules even when this number is large, and the time is much
shorter than the mean extinction time.
| [
{
"created": "Sat, 22 Sep 2007 21:03:56 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Dec 2007 21:04:52 GMT",
"version": "v2"
}
] | 2016-10-06 | [
[
"Assaf",
"Michael",
""
],
[
"Meerson",
"Baruch",
""
]
] | We find that discrete noise of inhibiting (signal) molecules can greatly delay the extinction of plasmids in a plasmid replication system: a prototypical biochemical regulatory network. We calculate the probability distribution of the metastable state of the plasmids and show on this example that the reaction rate equations may fail in predicting the average number of regulated molecules even when this number is large, and the time is much shorter than the mean extinction time. |
1510.07810 | Caterina La Porta AM | Alessandro L. Sellerio, Emilio Ciusani, Noa Bossel Ben-Moshe, Stefania
Coco, Andrea Piccinini, Christopher R. Myers, James P. Sethna, Costanza
Giampietro, Stefano Zapperi, Caterina A. M. La Porta | Overshoot during phenotypic switching of cancer cell populations | null | Sci. Rep. 5, 15464, (2015) | 10.1038/srep15464 | null | q-bio.CB physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | The dynamics of tumor cell populations is hotly debated: do populations
derive hierarchically from a subpopulation of cancer stem cells (CSCs), or are
stochastic transitions that mutate differentiated cancer cells to CSCs
important? Here we argue that regulation must also be important. We sort human
melanoma cells using three distinct cancer stem cell (CSC) markers - CXCR6,
CD271 and ABCG2 - and observe that the fraction of non-CSC-marked cells first
overshoots to a higher level and then returns to the level of unsorted cells.
This clearly indicates that the CSC population is homeostatically regulated.
Combining experimental measurements with theoretical modeling and numerical
simulations, we show that the population dynamics of cancer cells is associated
with a complex miRNA network regulating the Wnt and PI3K pathways. Hence
phenotypic switching is not stochastic, but is tightly regulated by the balance
between positive and negative cells in the population. Reducing the fraction of
CSCs below a threshold triggers massive phenotypic switching, suggesting that a
therapeutic strategy based on CSC eradication is unlikely to succeed.
| [
{
"created": "Tue, 27 Oct 2015 08:40:46 GMT",
"version": "v1"
}
] | 2015-10-28 | [
[
"Sellerio",
"Alessandro L.",
""
],
[
"Ciusani",
"Emilio",
""
],
[
"Ben-Moshe",
"Noa Bossel",
""
],
[
"Coco",
"Stefania",
""
],
[
"Piccinini",
"Andrea",
""
],
[
"Myers",
"Christopher R.",
""
],
[
"Sethna",
"Jame... | The dynamics of tumor cell populations is hotly debated: do populations derive hierarchically from a subpopulation of cancer stem cells (CSCs), or are stochastic transitions that mutate differentiated cancer cells to CSCs important? Here we argue that regulation must also be important. We sort human melanoma cells using three distinct cancer stem cell (CSC) markers - CXCR6, CD271 and ABCG2 - and observe that the fraction of non-CSC-marked cells first overshoots to a higher level and then returns to the level of unsorted cells. This clearly indicates that the CSC population is homeostatically regulated. Combining experimental measurements with theoretical modeling and numerical simulations, we show that the population dynamics of cancer cells is associated with a complex miRNA network regulating the Wnt and PI3K pathways. Hence phenotypic switching is not stochastic, but is tightly regulated by the balance between positive and negative cells in the population. Reducing the fraction of CSCs below a threshold triggers massive phenotypic switching, suggesting that a therapeutic strategy based on CSC eradication is unlikely to succeed. |
2204.08608 | Shiqiu Yin | Zaiyun Lin (Beijing Stonewise Technology) and Shiqiu Yin (Beijing
Stonewise Technology) and Lei Shi (Beijing Stonewise Technology) and Wenbiao
Zhou (Beijing Stonewise Technology) and YingSheng Zhang (Beijing Stonewise
Technology) | G2GT: Retrosynthesis Prediction with Graph to Graph Attention Neural
Network and Self-Training | number of pages:12 and number of figures:5 | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrosynthesis prediction is one of the fundamental challenges in organic
chemistry and related fields. The goal is to find reactants molecules that can
synthesize product molecules. To solve this task, we propose a new
graph-to-graph transformation model, G2GT, in which the graph encoder and graph
decoder are built upon the standard transformer structure. We also show that
self-training, a powerful data augmentation method that utilizes unlabeled
molecule data, can significantly improve the model's performance. Inspired by
the reaction type label and ensemble learning, we proposed a novel weak
ensemble method to enhance diversity. We combined beam search, nucleus, and
top-k sampling methods to further improve inference diversity and proposed a
simple ranking algorithm to retrieve the final top-10 results. We achieved new
state-of-the-art results on both the USPTO-50K dataset, with top1 accuracy of
54%, and the larger data set USPTO-full, with top1 accuracy of 50%, and
competitive top-10 results.
| [
{
"created": "Tue, 19 Apr 2022 01:55:52 GMT",
"version": "v1"
}
] | 2022-04-20 | [
[
"Lin",
"Zaiyun",
"",
"Beijing Stonewise Technology"
],
[
"Yin",
"Shiqiu",
"",
"Beijing\n Stonewise Technology"
],
[
"Shi",
"Lei",
"",
"Beijing Stonewise Technology"
],
[
"Zhou",
"Wenbiao",
"",
"Beijing Stonewise Technology"
],
[
... | Retrosynthesis prediction is one of the fundamental challenges in organic chemistry and related fields. The goal is to find reactants molecules that can synthesize product molecules. To solve this task, we propose a new graph-to-graph transformation model, G2GT, in which the graph encoder and graph decoder are built upon the standard transformer structure. We also show that self-training, a powerful data augmentation method that utilizes unlabeled molecule data, can significantly improve the model's performance. Inspired by the reaction type label and ensemble learning, we proposed a novel weak ensemble method to enhance diversity. We combined beam search, nucleus, and top-k sampling methods to further improve inference diversity and proposed a simple ranking algorithm to retrieve the final top-10 results. We achieved new state-of-the-art results on both the USPTO-50K dataset, with top1 accuracy of 54%, and the larger data set USPTO-full, with top1 accuracy of 50%, and competitive top-10 results. |
1610.01493 | Gabriel Alvarez | G. Alvarez, L. Fernandez, R. Salinas | Construction of hazard maps of Hantavirus contagion using Remote
Sensing, logistic regression and Artificial Neural Networks: case Araucan\'ia
Region, Chile | 13 pages, 14 figures | null | null | null | q-bio.PE physics.bio-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research, methods and computational results based on statistical
analysis and mathematical modelling, data collection in situ in order to make a
hazard map of Hanta Virus infection in the region of Araucania, Chile are
presented. The development of this work involves several elements such as
Landsat satellite images, biological information regarding seropositivity of
Hanta Virus and information concerning positive cases of infection detected in
the region. All this information has been processed to find a function that
models the danger of contagion in the region, through logistic regression
analysis and Artificial Neural Networks
| [
{
"created": "Wed, 5 Oct 2016 15:58:20 GMT",
"version": "v1"
}
] | 2016-10-06 | [
[
"Alvarez",
"G.",
""
],
[
"Fernandez",
"L.",
""
],
[
"Salinas",
"R.",
""
]
] | In this research, methods and computational results based on statistical analysis and mathematical modelling, data collection in situ in order to make a hazard map of Hanta Virus infection in the region of Araucania, Chile are presented. The development of this work involves several elements such as Landsat satellite images, biological information regarding seropositivity of Hanta Virus and information concerning positive cases of infection detected in the region. All this information has been processed to find a function that models the danger of contagion in the region, through logistic regression analysis and Artificial Neural Networks |
2104.09307 | Yong Li | Zhenfeng Shao, Yong Li, Xiao Huang, Bowen Cai, Lin Ding, Wenkang Pan,
Ya Zhang | Monitoring urban ecosystem service value using dynamic multi-level grids | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Ecosystem services are the direct and indirect contributions of an ecosystem
to human well-being and survival. Ecosystem valuation is a method of assigning
a monetary value to an ecosystem with its goods and services,often referred to
as ecosystem service value (ESV). With the rapid expansion of cities, a
mismatch occurs between urban development and ecological development, and it is
increasingly urgent to establish a valid ecological assessment method. In this
study, we propose an ecological evaluation standard framework by designing an
ESV monitoring workflow based on the establishment of multi-level grids. The
proposed method is able to capture multi-scale features, facilitates
multi-level spatial expression, and can effectively reveal the spatial
heterogeneity of ESV. Taking Haian city in the Jiangsu province as the study
case, we implemented the proposed dynamic multi-level grids-based (DMLG) to
calculate its urban ESV in 2016 and 2019. We found that the ESV of Haian city
showed considerable growth (increased by 24.54 million RMB). Negative ESVs are
concentrated in the central city, which presented a rapid trend of outward
expansion. The results illustrated that the ongoing urban expanse does not
reduce the ecological value in the study area. The proposed unified grid
framework can be applied to other geographical regions and is expected to
benefit future studies in ecosystem service evaluation in terms of capture
multi-level spatial heterogeneity.
| [
{
"created": "Thu, 15 Apr 2021 10:57:43 GMT",
"version": "v1"
}
] | 2021-04-20 | [
[
"Shao",
"Zhenfeng",
""
],
[
"Li",
"Yong",
""
],
[
"Huang",
"Xiao",
""
],
[
"Cai",
"Bowen",
""
],
[
"Ding",
"Lin",
""
],
[
"Pan",
"Wenkang",
""
],
[
"Zhang",
"Ya",
""
]
] | Ecosystem services are the direct and indirect contributions of an ecosystem to human well-being and survival. Ecosystem valuation is a method of assigning a monetary value to an ecosystem with its goods and services,often referred to as ecosystem service value (ESV). With the rapid expansion of cities, a mismatch occurs between urban development and ecological development, and it is increasingly urgent to establish a valid ecological assessment method. In this study, we propose an ecological evaluation standard framework by designing an ESV monitoring workflow based on the establishment of multi-level grids. The proposed method is able to capture multi-scale features, facilitates multi-level spatial expression, and can effectively reveal the spatial heterogeneity of ESV. Taking Haian city in the Jiangsu province as the study case, we implemented the proposed dynamic multi-level grids-based (DMLG) to calculate its urban ESV in 2016 and 2019. We found that the ESV of Haian city showed considerable growth (increased by 24.54 million RMB). Negative ESVs are concentrated in the central city, which presented a rapid trend of outward expansion. The results illustrated that the ongoing urban expanse does not reduce the ecological value in the study area. The proposed unified grid framework can be applied to other geographical regions and is expected to benefit future studies in ecosystem service evaluation in terms of capture multi-level spatial heterogeneity. |
1508.03492 | Satoru Morita | Satoru Morita | Evolutionary game on networks with high clustering coefficient | 12 pages, 3 figures | Nonlinear Theory and Its Applications IEICE 7, 110-117 (2016) | 10.1587/nolta.7.110 | null | q-bio.PE nlin.AO physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates the influence of lattice structure in evolutionary
games. The snowdrift games is considered in networks with high clustering
coefficients, that use four different strategy-updating. Analytical conjectures
using pair approximation were compared with the numerical results. Results
indicate that general statements asserting that the lattice structure enhances
cooperation are misleading.
| [
{
"created": "Fri, 14 Aug 2015 13:26:15 GMT",
"version": "v1"
}
] | 2021-11-05 | [
[
"Morita",
"Satoru",
""
]
] | This study investigates the influence of lattice structure in evolutionary games. The snowdrift games is considered in networks with high clustering coefficients, that use four different strategy-updating. Analytical conjectures using pair approximation were compared with the numerical results. Results indicate that general statements asserting that the lattice structure enhances cooperation are misleading. |
2107.00578 | Rodrigo M\'endez Rojano | Rodrigo M\'endez Rojano, Mansur Zhussupbekov, James F. Antaki, Didier
Lucor | Uncertainty quantification of a thrombosis model considering the
clotting assay PFA-100 | 17 pages, 10 figures, 3 tables, original research article | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical models of thrombosis are currently used to study clinical
scenarios of pathological thrombus formation. Most of these models involve
inherent uncertainties that must be assessed to increase the confidence in
model predictions and identify avenues of improvement for both thrombosis
modeling and anti-platelet therapies. In this work, an uncertainty
quantification analysis of a multi-constituent thrombosis model is performed
considering a common assay for platelet function (PFA-100). The analysis is
performed using a polynomial chaos expansion as a parametric surrogate for the
thrombosis model. The polynomial approximation is validated and used to perform
a global sensitivity analysis via computation of Sobol' coefficients. Six out
of fifteen parameters were found to be influential in the simulation
variability considering only individual effects. Nonetheless, parameter
interactions are highlighted when considering the total Sobol' indices. In
addition to the sensitivity analysis, the surrogate model was used to compute
the PFA-100 closure times of 300,000 virtual cases that align well with
clinical data. The current methodology could be used including common
anti-platelet therapies to identify scenarios that preserve the hematological
balance.
| [
{
"created": "Tue, 29 Jun 2021 14:33:49 GMT",
"version": "v1"
}
] | 2021-07-02 | [
[
"Rojano",
"Rodrigo Méndez",
""
],
[
"Zhussupbekov",
"Mansur",
""
],
[
"Antaki",
"James F.",
""
],
[
"Lucor",
"Didier",
""
]
] | Mathematical models of thrombosis are currently used to study clinical scenarios of pathological thrombus formation. Most of these models involve inherent uncertainties that must be assessed to increase the confidence in model predictions and identify avenues of improvement for both thrombosis modeling and anti-platelet therapies. In this work, an uncertainty quantification analysis of a multi-constituent thrombosis model is performed considering a common assay for platelet function (PFA-100). The analysis is performed using a polynomial chaos expansion as a parametric surrogate for the thrombosis model. The polynomial approximation is validated and used to perform a global sensitivity analysis via computation of Sobol' coefficients. Six out of fifteen parameters were found to be influential in the simulation variability considering only individual effects. Nonetheless, parameter interactions are highlighted when considering the total Sobol' indices. In addition to the sensitivity analysis, the surrogate model was used to compute the PFA-100 closure times of 300,000 virtual cases that align well with clinical data. The current methodology could be used including common anti-platelet therapies to identify scenarios that preserve the hematological balance. |
2404.07390 | Brian Camley | Pedrom Zadeh and Brian A. Camley | Nonlinear dynamics of confined cell migration -- modeling and inference | null | null | null | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The motility of eukaryotic cells is strongly influenced by their environment,
with confined cells often developing qualitatively different motility patterns
from those migrating on simple two-dimensional substrates. Recent experiments,
coupled with data-driven methods to extract a cell's equation of motion, showed
that cancerous MDA-MB-231 cells persistently hop in a limit cycle when placed
on two-state adhesive micropatterns (two large squares connected by a narrow
bridge), while they remain stationary on average in rectangular confinements.
In contrast, healthy MCF10A cells migrating on the two-state micropattern are
bistable, i.e., they settle into either basin on average with only
noise-induced hops between the two states. We can capture all these behaviors
with a single computational phase field model of a crawling cell, under the
assumption that contact with non-adhesive substrate inhibits the cell front.
Our model predicts that larger and softer cells are more likely to persistently
hop, while smaller and stiffer cells are more likely to be bistable. Other key
factors controlling cell migration are the frequency of protrusions and their
magnitude of noise. Our results show that relatively simple assumptions about
how cells sense their geometry can explain a wide variety of different cell
behaviors, and show the power of data-driven approaches to characterize both
experiment and simulation.
| [
{
"created": "Wed, 10 Apr 2024 23:32:47 GMT",
"version": "v1"
}
] | 2024-04-12 | [
[
"Zadeh",
"Pedrom",
""
],
[
"Camley",
"Brian A.",
""
]
] | The motility of eukaryotic cells is strongly influenced by their environment, with confined cells often developing qualitatively different motility patterns from those migrating on simple two-dimensional substrates. Recent experiments, coupled with data-driven methods to extract a cell's equation of motion, showed that cancerous MDA-MB-231 cells persistently hop in a limit cycle when placed on two-state adhesive micropatterns (two large squares connected by a narrow bridge), while they remain stationary on average in rectangular confinements. In contrast, healthy MCF10A cells migrating on the two-state micropattern are bistable, i.e., they settle into either basin on average with only noise-induced hops between the two states. We can capture all these behaviors with a single computational phase field model of a crawling cell, under the assumption that contact with non-adhesive substrate inhibits the cell front. Our model predicts that larger and softer cells are more likely to persistently hop, while smaller and stiffer cells are more likely to be bistable. Other key factors controlling cell migration are the frequency of protrusions and their magnitude of noise. Our results show that relatively simple assumptions about how cells sense their geometry can explain a wide variety of different cell behaviors, and show the power of data-driven approaches to characterize both experiment and simulation. |
1307.7844 | Aaron Darling | Micha{\l} Modzelewski and Norbert Dojer | MSARC: Multiple Sequence Alignment by Residue Clustering | Peer-reviewed and presented as part of the 13th Workshop on
Algorithms in Bioinformatics (WABI2013) | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Progressive methods offer efficient and reasonably good solutions to the
multiple sequence alignment problem. However, resulting alignments are biased
by guide-trees, especially for relatively distant sequences.
We propose MSARC, a new graph-clustering based algorithm that aligns sequence
sets without guide-trees. Experiments on the BAliBASE dataset show that MSARC
achieves alignment quality similar to best progressive methods and
substantially higher than the quality of other non-progressive algorithms.
Furthermore, MSARC outperforms all other methods on sequence sets with the
similarity structure hardly represented by a phylogenetic tree. Furthermore,
MSARC outperforms all other methods on sequence sets whose evolutionary
distances are hardly representable by a phylogenetic tree. These datasets are
most exposed to the guide-tree bias of alignments.
MSARC is available at http://bioputer.mimuw.edu.pl/msarc
| [
{
"created": "Tue, 30 Jul 2013 07:05:01 GMT",
"version": "v1"
}
] | 2013-07-31 | [
[
"Modzelewski",
"Michał",
""
],
[
"Dojer",
"Norbert",
""
]
] | Progressive methods offer efficient and reasonably good solutions to the multiple sequence alignment problem. However, resulting alignments are biased by guide-trees, especially for relatively distant sequences. We propose MSARC, a new graph-clustering based algorithm that aligns sequence sets without guide-trees. Experiments on the BAliBASE dataset show that MSARC achieves alignment quality similar to best progressive methods and substantially higher than the quality of other non-progressive algorithms. Furthermore, MSARC outperforms all other methods on sequence sets with the similarity structure hardly represented by a phylogenetic tree. Furthermore, MSARC outperforms all other methods on sequence sets whose evolutionary distances are hardly representable by a phylogenetic tree. These datasets are most exposed to the guide-tree bias of alignments. MSARC is available at http://bioputer.mimuw.edu.pl/msarc |
1508.05929 | Matthew Fisher | Matthew P. A. Fisher | Quantum Cognition: The possibility of processing with nuclear spins in
the brain | 8 pages, 3 figures | Annals of Physics 362, 593-602 (2015) | null | null | q-bio.NC physics.bio-ph quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The possibility that quantum processing with nuclear spins might be operative
in the brain is proposed and then explored. Phosphorus is identified as the
unique biological element with a nuclear spin that can serve as a qubit for
such putative quantum processing - a neural qubit - while the phosphate ion is
the only possible qubit-transporter. We identify the "Posner molecule",
$\text{Ca}_9 (\text{PO}_4)_6$, as the unique molecule that can protect the
neural qubits on very long times and thereby serve as a (working)
quantum-memory. A central requirement for quantum-processing is quantum
entanglement. It is argued that the enzyme catalyzed chemical reaction which
breaks a pyrophosphate ion into two phosphate ions can quantum entangle pairs
of qubits. Posner molecules, formed by binding such phosphate pairs with
extracellular calcium ions, will inherit the nuclear spin entanglement. A
mechanism for transporting Posner molecules into presynaptic neurons during a
"kiss and run" exocytosis, which releases neurotransmitters into the synaptic
cleft, is proposed. Quantum measurements can occur when a pair of Posner
molecules chemically bind and subsequently melt, releasing a shower of
intra-cellular calcium ions that can trigger further neurotransmitter release
and enhance the probability of post-synaptic neuron firing. Multiple entangled
Posner molecules, triggering non-local quantum correlations of neuron firing
rates, would provide the key mechanism for neural quantum processing.
Implications, both in vitro and in vivo, are briefly mentioned.
| [
{
"created": "Wed, 19 Aug 2015 18:23:38 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Aug 2015 00:00:42 GMT",
"version": "v2"
}
] | 2015-11-10 | [
[
"Fisher",
"Matthew P. A.",
""
]
] | The possibility that quantum processing with nuclear spins might be operative in the brain is proposed and then explored. Phosphorus is identified as the unique biological element with a nuclear spin that can serve as a qubit for such putative quantum processing - a neural qubit - while the phosphate ion is the only possible qubit-transporter. We identify the "Posner molecule", $\text{Ca}_9 (\text{PO}_4)_6$, as the unique molecule that can protect the neural qubits on very long times and thereby serve as a (working) quantum-memory. A central requirement for quantum-processing is quantum entanglement. It is argued that the enzyme catalyzed chemical reaction which breaks a pyrophosphate ion into two phosphate ions can quantum entangle pairs of qubits. Posner molecules, formed by binding such phosphate pairs with extracellular calcium ions, will inherit the nuclear spin entanglement. A mechanism for transporting Posner molecules into presynaptic neurons during a "kiss and run" exocytosis, which releases neurotransmitters into the synaptic cleft, is proposed. Quantum measurements can occur when a pair of Posner molecules chemically bind and subsequently melt, releasing a shower of intra-cellular calcium ions that can trigger further neurotransmitter release and enhance the probability of post-synaptic neuron firing. Multiple entangled Posner molecules, triggering non-local quantum correlations of neuron firing rates, would provide the key mechanism for neural quantum processing. Implications, both in vitro and in vivo, are briefly mentioned. |
1010.1714 | Randall Beer | Randall D. Beer and Bryan Daniels | Saturation Probabilities of Continuous-Time Sigmoidal Networks | 53 pages, 9 Figures | null | null | null | q-bio.NC math.CO math.DS nlin.AO q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From genetic regulatory networks to nervous systems, the interactions between
elements in biological networks often take a sigmoidal or S-shaped form. This
paper develops a probabilistic characterization of the parameter space of
continuous-time sigmoidal networks (CTSNs), a simple but dynamically-universal
model of such interactions. We describe an efficient and accurate method for
calculating the probability of observing effectively M-dimensional dynamics in
an N-element CTSN, as well as a closed-form but approximate method. We then
study the dependence of this probability on N, M, and the parameter ranges over
which sampling occurs. This analysis provides insight into the overall
structure of CTSN parameter space.
| [
{
"created": "Fri, 8 Oct 2010 15:10:38 GMT",
"version": "v1"
}
] | 2010-10-11 | [
[
"Beer",
"Randall D.",
""
],
[
"Daniels",
"Bryan",
""
]
] | From genetic regulatory networks to nervous systems, the interactions between elements in biological networks often take a sigmoidal or S-shaped form. This paper develops a probabilistic characterization of the parameter space of continuous-time sigmoidal networks (CTSNs), a simple but dynamically-universal model of such interactions. We describe an efficient and accurate method for calculating the probability of observing effectively M-dimensional dynamics in an N-element CTSN, as well as a closed-form but approximate method. We then study the dependence of this probability on N, M, and the parameter ranges over which sampling occurs. This analysis provides insight into the overall structure of CTSN parameter space. |
q-bio/0508028 | Darren E. Segall | Darren E. Segall, Phillip C. Nelson and Rob Phillips | Excluded-Volume Effects in Tethered-Particle Experiments: Bead Size
Matters | 4 pages, 3 figures | Phys. Rev. Lett. Vol 96, no. 088306 (2006) | 10.1103/PhysRevLett.96.088306 | null | q-bio.BM | null | The tethered-particle method is a single-molecule technique that has been
used to explore the dynamics of a variety of macromolecules of biological
interest. We give a theoretical analysis of the particle motions in such
experiments. Our analysis reveals that the proximity of the tethered bead to a
nearby surface (the microscope slide) gives rise to a volume-exclusion effect,
resulting in an entropic force on the molecule. This force stretches the
molecule, changing its statistical properties. In particular, the proximity of
bead and surface brings about intriguing scaling relations between key
observables (statistical moments of the bead) and parameters such as the bead
size and contour length of the molecule. We present both approximate analytic
solutions and numerical results for these effects in both flexible and
semiflexible tethers. Finally, our results give a precise,
experimentally-testable prediction for the probability distribution of the
distance between the polymer attachment point and the center of the mobile
bead.
| [
{
"created": "Sat, 20 Aug 2005 17:58:10 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Aug 2005 01:27:56 GMT",
"version": "v2"
}
] | 2009-11-11 | [
[
"Segall",
"Darren E.",
""
],
[
"Nelson",
"Phillip C.",
""
],
[
"Phillips",
"Rob",
""
]
] | The tethered-particle method is a single-molecule technique that has been used to explore the dynamics of a variety of macromolecules of biological interest. We give a theoretical analysis of the particle motions in such experiments. Our analysis reveals that the proximity of the tethered bead to a nearby surface (the microscope slide) gives rise to a volume-exclusion effect, resulting in an entropic force on the molecule. This force stretches the molecule, changing its statistical properties. In particular, the proximity of bead and surface brings about intriguing scaling relations between key observables (statistical moments of the bead) and parameters such as the bead size and contour length of the molecule. We present both approximate analytic solutions and numerical results for these effects in both flexible and semiflexible tethers. Finally, our results give a precise, experimentally-testable prediction for the probability distribution of the distance between the polymer attachment point and the center of the mobile bead. |
1203.0180 | Ovidiu Radulescu | S.A. Vakulenko, O. Radulescu | Flexible and robust networks | Journal of Bioinformatics and Computational Biology, in press | null | null | null | q-bio.MN | http://creativecommons.org/licenses/publicdomain/ | We consider networks with two types of nodes. The v-nodes, called centers,
are hyper- connected and interact one to another via many u-nodes, called
satellites. This central- ized architecture, widespread in gene networks,
possesses two fundamental properties. Namely, this organization creates
feedback loops that are capable to generate practically any prescribed
patterning dynamics, chaotic or periodic, or having a number of equilib- rium
states. Moreover, this organization is robust with respect to random
perturbations of the system.
| [
{
"created": "Thu, 1 Mar 2012 13:33:39 GMT",
"version": "v1"
}
] | 2012-03-02 | [
[
"Vakulenko",
"S. A.",
""
],
[
"Radulescu",
"O.",
""
]
] | We consider networks with two types of nodes. The v-nodes, called centers, are hyper- connected and interact one to another via many u-nodes, called satellites. This central- ized architecture, widespread in gene networks, possesses two fundamental properties. Namely, this organization creates feedback loops that are capable to generate practically any prescribed patterning dynamics, chaotic or periodic, or having a number of equilib- rium states. Moreover, this organization is robust with respect to random perturbations of the system. |
q-bio/0411019 | Boris Shklovskii | B. I. Shklovskii | A simple derivation of the Gompertz law for human mortality | 2 pages, typos corrected | Theory in Biosciences, 123, 431 (2005) | null | null | q-bio.CB cond-mat.dis-nn cond-mat.other q-bio.OT | null | The Gompertz law of dependence of human mortality rate on age is derived from
a simple model of death as a result of the exponentially rare escape of
abnormal cells from immunological response.
| [
{
"created": "Thu, 4 Nov 2004 21:38:29 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Nov 2004 15:33:13 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Nov 2004 20:55:21 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Shklovskii",
"B. I.",
""
]
] | The Gompertz law of dependence of human mortality rate on age is derived from a simple model of death as a result of the exponentially rare escape of abnormal cells from immunological response. |
1503.03550 | Yunxin Zhang | Jingwei Li, Yunxin Zhang | Correlations between promoter activity and its nucleotide positions in
spacing region | null | null | null | null | q-bio.GN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcription is one of the essential processes for cells to read genetic
information encoded in genes, which is initiated by the binding of RNA
polymerase to related promoter. Experiments have found that the nucleotide
sequence of promoter has great influence on gene expression strength, or
promoter activity. In synthetic biology, one interesting question is how we can
synthesize a promoter with given activity, and which positions of promoter
sequence are important for determining its activity. In this study, based on
recent experimental data, correlations between promoter activity and its
sequence positions are analyzed by various methods. Our results show that,
except nucleotides in the two highly conserved regions, $-35$ box and $-10$
box, influences of nucleotides in other positions are also not neglectable. For
example, modifications of nucleotides around position $-19$ in spacing region
may change promoter activity in a large scale. The results of this study might
be helpful to our understanding of biophysical mechanism of gene transcription,
and may also be helpful to the design of synthetic cell factory.
| [
{
"created": "Thu, 12 Mar 2015 01:32:20 GMT",
"version": "v1"
}
] | 2015-03-13 | [
[
"Li",
"Jingwei",
""
],
[
"Zhang",
"Yunxin",
""
]
] | Transcription is one of the essential processes for cells to read genetic information encoded in genes, which is initiated by the binding of RNA polymerase to related promoter. Experiments have found that the nucleotide sequence of promoter has great influence on gene expression strength, or promoter activity. In synthetic biology, one interesting question is how we can synthesize a promoter with given activity, and which positions of promoter sequence are important for determining its activity. In this study, based on recent experimental data, correlations between promoter activity and its sequence positions are analyzed by various methods. Our results show that, except nucleotides in the two highly conserved regions, $-35$ box and $-10$ box, influences of nucleotides in other positions are also not neglectable. For example, modifications of nucleotides around position $-19$ in spacing region may change promoter activity in a large scale. The results of this study might be helpful to our understanding of biophysical mechanism of gene transcription, and may also be helpful to the design of synthetic cell factory. |
2405.12897 | James Holehouse | James Holehouse | Principles of bursty mRNA expression and irreversibility in single cells
and extrinsically varying populations | 15 pages, 6 figures | null | null | null | q-bio.SC cond-mat.stat-mech q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The canonical model of mRNA expression is the telegraph model, describing a
gene that switches on and off, subject to transcription and decay. It describes
steady-state mRNA distributions that subscribe to transcription in bursts with
first-order decay, referred to as super-Poissonian expression. Using a
telegraph-like model, I propose an answer to the question of why gene
expression is bursty in the first place, and what benefits it confers. Using
analytics for the entropy production rate, I find that entropy production is
maximal when the on and off switching rates between the gene states are
approximately equal. This is related to a lower bound on the free energy
necessary to keep the system out of equilibrium, meaning that bursty gene
expression may have evolved in part due to free energy efficiency. It is shown
that there are trade-offs between having slow nuclear export, which can reduce
cytoplasmic mRNA noise, and the energy required to keep the system out of
equilibrium -- nuclear compartmentalization comes with an associated free
energy cost. At the population level, I find that extrinsic variation,
manifested in cell-to-cell differences in kinetic parameters, can make the
system more or less reversible -- and potentially energy efficient -- depending
on where the noise is located. This highlights that there evolutionary
constraints on the suppression of extrinsic noise, whose origin is in cellular
heterogeneity, in addition to intrinsic randomness arising from molecular
collisions. Finally, I investigate the partially observed nature of most mRNA
expression data which seems to obey detailed balance, yet remains unavoidably
out-of-equilibrium.
| [
{
"created": "Tue, 21 May 2024 16:07:48 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Holehouse",
"James",
""
]
] | The canonical model of mRNA expression is the telegraph model, describing a gene that switches on and off, subject to transcription and decay. It describes steady-state mRNA distributions that subscribe to transcription in bursts with first-order decay, referred to as super-Poissonian expression. Using a telegraph-like model, I propose an answer to the question of why gene expression is bursty in the first place, and what benefits it confers. Using analytics for the entropy production rate, I find that entropy production is maximal when the on and off switching rates between the gene states are approximately equal. This is related to a lower bound on the free energy necessary to keep the system out of equilibrium, meaning that bursty gene expression may have evolved in part due to free energy efficiency. It is shown that there are trade-offs between having slow nuclear export, which can reduce cytoplasmic mRNA noise, and the energy required to keep the system out of equilibrium -- nuclear compartmentalization comes with an associated free energy cost. At the population level, I find that extrinsic variation, manifested in cell-to-cell differences in kinetic parameters, can make the system more or less reversible -- and potentially energy efficient -- depending on where the noise is located. This highlights that there evolutionary constraints on the suppression of extrinsic noise, whose origin is in cellular heterogeneity, in addition to intrinsic randomness arising from molecular collisions. Finally, I investigate the partially observed nature of most mRNA expression data which seems to obey detailed balance, yet remains unavoidably out-of-equilibrium. |
2210.02913 | Hideaki Yamamoto | Takuma Sumi, Hideaki Yamamoto, Yuichi Katori, Satoshi Moriya, Tomohiro
Konno, Shigeo Sato, Ayumi Hirano-Iwata | Biological neurons act as generalization filters in reservoir computing | 31 pages, 5 figures, 3 supplementary figures | Proc. Natl. Acad. Sci., U.S.A. 120, e2217008120 (2023) | 10.1073/pnas.2217008120 | null | q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reservoir computing is a machine learning paradigm that transforms the
transient dynamics of high-dimensional nonlinear systems for processing
time-series data. Although reservoir computing was initially proposed to model
information processing in the mammalian cortex, it remains unclear how the
non-random network architecture, such as the modular architecture, in the
cortex integrates with the biophysics of living neurons to characterize the
function of biological neuronal networks (BNNs). Here, we used optogenetics and
fluorescent calcium imaging to record the multicellular responses of cultured
BNNs and employed the reservoir computing framework to decode their
computational capabilities. Micropatterned substrates were used to embed the
modular architecture in the BNNs. We first show that modular BNNs can be used
to classify static input patterns with a linear decoder and that the modularity
of the BNNs positively correlates with the classification accuracy. We then
used a timer task to verify that BNNs possess a short-term memory of ~1 s and
finally show that this property can be exploited for spoken digit
classification. Interestingly, BNN-based reservoirs allow transfer learning,
wherein a network trained on one dataset can be used to classify separate
datasets of the same category. Such classification was not possible when the
input patterns were directly decoded by a linear decoder, suggesting that BNNs
act as a generalization filter to improve reservoir computing performance. Our
findings pave the way toward a mechanistic understanding of information
processing within BNNs and, simultaneously, build future expectations toward
the realization of physical reservoir computing systems based on BNNs.
| [
{
"created": "Thu, 6 Oct 2022 13:32:26 GMT",
"version": "v1"
}
] | 2023-06-14 | [
[
"Sumi",
"Takuma",
""
],
[
"Yamamoto",
"Hideaki",
""
],
[
"Katori",
"Yuichi",
""
],
[
"Moriya",
"Satoshi",
""
],
[
"Konno",
"Tomohiro",
""
],
[
"Sato",
"Shigeo",
""
],
[
"Hirano-Iwata",
"Ayumi",
""
]
] | Reservoir computing is a machine learning paradigm that transforms the transient dynamics of high-dimensional nonlinear systems for processing time-series data. Although reservoir computing was initially proposed to model information processing in the mammalian cortex, it remains unclear how the non-random network architecture, such as the modular architecture, in the cortex integrates with the biophysics of living neurons to characterize the function of biological neuronal networks (BNNs). Here, we used optogenetics and fluorescent calcium imaging to record the multicellular responses of cultured BNNs and employed the reservoir computing framework to decode their computational capabilities. Micropatterned substrates were used to embed the modular architecture in the BNNs. We first show that modular BNNs can be used to classify static input patterns with a linear decoder and that the modularity of the BNNs positively correlates with the classification accuracy. We then used a timer task to verify that BNNs possess a short-term memory of ~1 s and finally show that this property can be exploited for spoken digit classification. Interestingly, BNN-based reservoirs allow transfer learning, wherein a network trained on one dataset can be used to classify separate datasets of the same category. Such classification was not possible when the input patterns were directly decoded by a linear decoder, suggesting that BNNs act as a generalization filter to improve reservoir computing performance. Our findings pave the way toward a mechanistic understanding of information processing within BNNs and, simultaneously, build future expectations toward the realization of physical reservoir computing systems based on BNNs. |
1702.05620 | Juan Biondi | Gerardo Fern\'andez, Juan Biondi, Silvia Castro, Osvaldo Agamennoni | Pupil size behavior during on line processing of sentences | Journal of Integrative Neuroscience 2017 | null | 10.1142/S0219635216500266 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present work we analyzed the pupil size behavior of forty subjects
while they read well defined sentences with different contextual predictability
(i.e., regular sentences and proverbs). In general, pupil size increased when
reading regular sentences, but when readers realized that they were reading
proverbs their pupils strongly increase until finishing proverbs' reading. Our
results suggest that an increased pupil size is not limited to cognitive load
(i.e., relative difficulty in processing) because when participants accurately
recognized words during reading proverbs, theirs pupil size increased too. Our
results show that pupil size dynamics may be a reliable measure to investigate
the cognitive processes involved in sentence processing and memory functioning.
| [
{
"created": "Sat, 18 Feb 2017 15:03:58 GMT",
"version": "v1"
}
] | 2017-02-21 | [
[
"Fernández",
"Gerardo",
""
],
[
"Biondi",
"Juan",
""
],
[
"Castro",
"Silvia",
""
],
[
"Agamennoni",
"Osvaldo",
""
]
] | In the present work we analyzed the pupil size behavior of forty subjects while they read well defined sentences with different contextual predictability (i.e., regular sentences and proverbs). In general, pupil size increased when reading regular sentences, but when readers realized that they were reading proverbs their pupils strongly increase until finishing proverbs' reading. Our results suggest that an increased pupil size is not limited to cognitive load (i.e., relative difficulty in processing) because when participants accurately recognized words during reading proverbs, theirs pupil size increased too. Our results show that pupil size dynamics may be a reliable measure to investigate the cognitive processes involved in sentence processing and memory functioning. |
1608.08828 | Richard Betzel | Richard F. Betzel, Danielle S. Bassett | Multi-scale brain networks | 12 pages, 3 figures, review article | null | null | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The network architecture of the human brain has become a feature of
increasing interest to the neuroscientific community, largely because of its
potential to illuminate human cognition, its variation over development and
aging, and its alteration in disease or injury. Traditional tools and
approaches to study this architecture have largely focused on single scales --
of topology, time, and space. Expanding beyond this narrow view, we focus this
review on pertinent questions and novel methodological advances for the
multi-scale brain. We separate our exposition into content related to
multi-scale topological structure, multi-scale temporal structure, and
multi-scale spatial structure. In each case, we recount empirical evidence for
such structures, survey network-based methodological approaches to reveal these
structures, and outline current frontiers and open questions. Although
predominantly peppered with examples from human neuroimaging, we hope that this
account will offer an accessible guide to any neuroscientist aiming to measure,
characterize, and understand the full richness of the brain's multiscale
network structure -- irrespective of species, imaging modality, or spatial
resolution.
| [
{
"created": "Wed, 31 Aug 2016 12:43:05 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Nov 2016 15:33:27 GMT",
"version": "v2"
}
] | 2016-11-07 | [
[
"Betzel",
"Richard F.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales -- of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuroscientist aiming to measure, characterize, and understand the full richness of the brain's multiscale network structure -- irrespective of species, imaging modality, or spatial resolution. |
2404.17605 | Roy Kishony | Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay, Roy Kishony | Autonomous LLM-driven research from data to human-verifiable research
papers | null | null | null | null | q-bio.OT cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As AI promises to accelerate scientific discovery, it remains unclear whether
fully AI-driven research is possible and whether it can adhere to key
scientific values, such as transparency, traceability and verifiability.
Mimicking human scientific practices, we built data-to-paper, an automation
platform that guides interacting LLM agents through a complete stepwise
research process, while programmatically back-tracing information flow and
allowing human oversight and interactions. In autopilot mode, provided with
annotated data alone, data-to-paper raised hypotheses, designed research plans,
wrote and debugged analysis codes, generated and interpreted results, and
created complete and information-traceable research papers. Even though
research novelty was relatively limited, the process demonstrated autonomous
generation of de novo quantitative insights from data. For simple research
goals, a fully-autonomous cycle can create manuscripts which recapitulate
peer-reviewed publications without major errors in about 80-90%, yet as goal
complexity increases, human co-piloting becomes critical for assuring accuracy.
Beyond the process itself, created manuscripts too are inherently verifiable,
as information-tracing allows to programmatically chain results, methods and
data. Our work thereby demonstrates a potential for AI-driven acceleration of
scientific discovery while enhancing, rather than jeopardizing, traceability,
transparency and verifiability.
| [
{
"created": "Wed, 24 Apr 2024 23:15:49 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Ifargan",
"Tal",
""
],
[
"Hafner",
"Lukas",
""
],
[
"Kern",
"Maor",
""
],
[
"Alcalay",
"Ori",
""
],
[
"Kishony",
"Roy",
""
]
] | As AI promises to accelerate scientific discovery, it remains unclear whether fully AI-driven research is possible and whether it can adhere to key scientific values, such as transparency, traceability and verifiability. Mimicking human scientific practices, we built data-to-paper, an automation platform that guides interacting LLM agents through a complete stepwise research process, while programmatically back-tracing information flow and allowing human oversight and interactions. In autopilot mode, provided with annotated data alone, data-to-paper raised hypotheses, designed research plans, wrote and debugged analysis codes, generated and interpreted results, and created complete and information-traceable research papers. Even though research novelty was relatively limited, the process demonstrated autonomous generation of de novo quantitative insights from data. For simple research goals, a fully-autonomous cycle can create manuscripts which recapitulate peer-reviewed publications without major errors in about 80-90%, yet as goal complexity increases, human co-piloting becomes critical for assuring accuracy. Beyond the process itself, created manuscripts too are inherently verifiable, as information-tracing allows to programmatically chain results, methods and data. Our work thereby demonstrates a potential for AI-driven acceleration of scientific discovery while enhancing, rather than jeopardizing, traceability, transparency and verifiability. |
2203.11299 | Willy Wong | Willy Wong | A Fundamental Inequality Governing the Rate Coding Response of Sensory
Neurons | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | A fundamental inequality governing the spike activity of peripheral neurons
is derived and tested against auditory data. This inequality states that the
steady-state firing rate must lie between the arithmetic and geometric means of
the spontaneous and peak activities during adaptation. Implications towards the
development of auditory mechanistic models are explored.
| [
{
"created": "Mon, 21 Mar 2022 19:15:29 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Feb 2023 02:10:46 GMT",
"version": "v2"
},
{
"created": "Wed, 2 Aug 2023 00:52:21 GMT",
"version": "v3"
}
] | 2023-08-03 | [
[
"Wong",
"Willy",
""
]
] | A fundamental inequality governing the spike activity of peripheral neurons is derived and tested against auditory data. This inequality states that the steady-state firing rate must lie between the arithmetic and geometric means of the spontaneous and peak activities during adaptation. Implications towards the development of auditory mechanistic models are explored. |
2305.13338 | Marcin Joachimiak | Marcin P. Joachimiak, J. Harry Caufield, Nomi L. Harris, Hyeongsik
Kim, Christopher J. Mungall | Gene Set Summarization using Large Language Models | null | null | null | null | q-bio.GN cs.AI cs.CL q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Molecular biologists frequently interpret gene lists derived from
high-throughput experiments and computational analysis. This is typically done
as a statistical enrichment analysis that measures the over- or
under-representation of biological function terms associated with genes or
their properties, based on curated assertions from a knowledge base (KB) such
as the Gene Ontology (GO). Interpreting gene lists can also be framed as a
textual summarization task, enabling the use of Large Language Models (LLMs),
potentially utilizing scientific texts directly and avoiding reliance on a KB.
We developed SPINDOCTOR (Structured Prompt Interpolation of Natural Language
Descriptions of Controlled Terms for Ontology Reporting), a method that uses
GPT models to perform gene set function summarization as a complement to
standard enrichment analysis. This method can use different sources of gene
functional information: (1) structured text derived from curated ontological KB
annotations, (2) ontology-free narrative gene summaries, or (3) direct model
retrieval.
We demonstrate that these methods are able to generate plausible and
biologically valid summary GO term lists for gene sets. However, GPT-based
approaches are unable to deliver reliable scores or p-values and often return
terms that are not statistically significant. Crucially, these methods were
rarely able to recapitulate the most precise and informative term from standard
enrichment, likely due to an inability to generalize and reason using an
ontology. Results are highly nondeterministic, with minor variations in prompt
resulting in radically different term lists. Our results show that at this
point, LLM-based methods are unsuitable as a replacement for standard term
enrichment analysis and that manual curation of ontological assertions remains
necessary.
| [
{
"created": "Sun, 21 May 2023 02:06:33 GMT",
"version": "v1"
},
{
"created": "Thu, 25 May 2023 19:10:13 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Jul 2024 02:16:11 GMT",
"version": "v3"
}
] | 2024-07-08 | [
[
"Joachimiak",
"Marcin P.",
""
],
[
"Caufield",
"J. Harry",
""
],
[
"Harris",
"Nomi L.",
""
],
[
"Kim",
"Hyeongsik",
""
],
[
"Mungall",
"Christopher J.",
""
]
] | Molecular biologists frequently interpret gene lists derived from high-throughput experiments and computational analysis. This is typically done as a statistical enrichment analysis that measures the over- or under-representation of biological function terms associated with genes or their properties, based on curated assertions from a knowledge base (KB) such as the Gene Ontology (GO). Interpreting gene lists can also be framed as a textual summarization task, enabling the use of Large Language Models (LLMs), potentially utilizing scientific texts directly and avoiding reliance on a KB. We developed SPINDOCTOR (Structured Prompt Interpolation of Natural Language Descriptions of Controlled Terms for Ontology Reporting), a method that uses GPT models to perform gene set function summarization as a complement to standard enrichment analysis. This method can use different sources of gene functional information: (1) structured text derived from curated ontological KB annotations, (2) ontology-free narrative gene summaries, or (3) direct model retrieval. We demonstrate that these methods are able to generate plausible and biologically valid summary GO term lists for gene sets. However, GPT-based approaches are unable to deliver reliable scores or p-values and often return terms that are not statistically significant. Crucially, these methods were rarely able to recapitulate the most precise and informative term from standard enrichment, likely due to an inability to generalize and reason using an ontology. Results are highly nondeterministic, with minor variations in prompt resulting in radically different term lists. Our results show that at this point, LLM-based methods are unsuitable as a replacement for standard term enrichment analysis and that manual curation of ontological assertions remains necessary. |
1407.6125 | Nicolo Colombo | Nicol\`o Colombo and Nikos Vlassis | Spectral Sequence Motif Discovery | 20 pages, 3 figures, 1 table | null | null | null | q-bio.QM cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence discovery tools play a central role in several fields of
computational biology. In the framework of Transcription Factor binding
studies, motif finding algorithms of increasingly high performance are required
to process the big datasets produced by new high-throughput sequencing
technologies. Most existing algorithms are computationally demanding and often
cannot support the large size of new experimental data. We present a new motif
discovery algorithm that is built on a recent machine learning technique,
referred to as Method of Moments. Based on spectral decompositions, this method
is robust under model misspecification and is not prone to locally optimal
solutions. We obtain an algorithm that is extremely fast and designed for the
analysis of big sequencing data. In a few minutes, we can process datasets of
hundreds of thousand sequences and extract motif profiles that match those
computed by various state-of-the-art algorithms.
| [
{
"created": "Wed, 23 Jul 2014 08:07:50 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Aug 2014 18:33:45 GMT",
"version": "v2"
}
] | 2014-08-27 | [
[
"Colombo",
"Nicolò",
""
],
[
"Vlassis",
"Nikos",
""
]
] | Sequence discovery tools play a central role in several fields of computational biology. In the framework of Transcription Factor binding studies, motif finding algorithms of increasingly high performance are required to process the big datasets produced by new high-throughput sequencing technologies. Most existing algorithms are computationally demanding and often cannot support the large size of new experimental data. We present a new motif discovery algorithm that is built on a recent machine learning technique, referred to as Method of Moments. Based on spectral decompositions, this method is robust under model misspecification and is not prone to locally optimal solutions. We obtain an algorithm that is extremely fast and designed for the analysis of big sequencing data. In a few minutes, we can process datasets of hundreds of thousand sequences and extract motif profiles that match those computed by various state-of-the-art algorithms. |
1610.07426 | Nicholas Parker | L. E. Wadkin, L. F. Elliot, I. Neganova, N. G. Parker, V. Chichagova,
G. Swan, A. Laude, M. Lako, A. Shukurov | Dynamics of single human embryonic stem cells and their pairs: a
quantitative analysis | 29 pages, including 9 pages of Supplemental Information | Scientific Reports 7, 570 (2017) | 10.1038/s41598-017-00648-0 | null | q-bio.QM physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous biological approaches are available to characterise the mechanisms
which govern the formation of human embryonic stem cell (hESC) colonies. To
understand how the kinematics of single and pairs of hESCs impact colony
formation, we study their mobility characteristics using time-lapse imaging. We
perform a detailed statistical analysis of their speed, survival,
directionality, distance travelled and diffusivity. We confirm that single and
pairs of cells migrate as a diffusive random walk. Moreover, we show that the
presence of Cell Tracer significantly reduces hESC mobility. Our results open
the path to employ the theoretical framework of the diffusive random walk for
the prognostic modelling and optimisation of the growth of hESC colonies.
Indeed, we employ this random walk model to estimate the seeding density
required to minimise the occurrence of hESC colonies arising from more than one
founder cell and the minimal cell number needed for successful colony
formation. We believe that our prognostic model can be extended to investigate
the kinematic behaviour of somatic cells emerging from hESC differentiation and
to enable its wide application in phenotyping of pluripotent stem cells for
large scale stem cell culture expansion and differentiation platforms.
| [
{
"created": "Mon, 24 Oct 2016 14:26:06 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Mar 2017 19:08:50 GMT",
"version": "v2"
}
] | 2018-01-08 | [
[
"Wadkin",
"L. E.",
""
],
[
"Elliot",
"L. F.",
""
],
[
"Neganova",
"I.",
""
],
[
"Parker",
"N. G.",
""
],
[
"Chichagova",
"V.",
""
],
[
"Swan",
"G.",
""
],
[
"Laude",
"A.",
""
],
[
"Lako",
"M.",
... | Numerous biological approaches are available to characterise the mechanisms which govern the formation of human embryonic stem cell (hESC) colonies. To understand how the kinematics of single and pairs of hESCs impact colony formation, we study their mobility characteristics using time-lapse imaging. We perform a detailed statistical analysis of their speed, survival, directionality, distance travelled and diffusivity. We confirm that single and pairs of cells migrate as a diffusive random walk. Moreover, we show that the presence of Cell Tracer significantly reduces hESC mobility. Our results open the path to employ the theoretical framework of the diffusive random walk for the prognostic modelling and optimisation of the growth of hESC colonies. Indeed, we employ this random walk model to estimate the seeding density required to minimise the occurrence of hESC colonies arising from more than one founder cell and the minimal cell number needed for successful colony formation. We believe that our prognostic model can be extended to investigate the kinematic behaviour of somatic cells emerging from hESC differentiation and to enable its wide application in phenotyping of pluripotent stem cells for large scale stem cell culture expansion and differentiation platforms. |
0910.1178 | Alfredo Iorio | Alfredo Iorio | On (Schr\"oedinger's) quest for new physics for life | 9 pages, 5 pages, Fourth International Workshop DICE2008 | Journal of Physics: Conference Series 174 (2009) 012036 | 10.1088/1742-6596/174/1/012036 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two recent investigations are reviewed: quantum effects for DNA aggregates
and scars formation on virus capsids. The possibility that scars could explain
certain data recently obtained by Sundquist's group in electron cryotomography
of immature HIV-1 virions is also briefly addressed. Furthermore, a bottom-up
reflection is presented on the need to invent new physics to pave the way to a
rigorous physical theory of biological phenomena. Our experience in the two
researches presented here and our personal interpretation of Schroedinger's
vision are behind the latter request.
| [
{
"created": "Wed, 7 Oct 2009 08:00:12 GMT",
"version": "v1"
}
] | 2009-10-08 | [
[
"Iorio",
"Alfredo",
""
]
] | Two recent investigations are reviewed: quantum effects for DNA aggregates and scars formation on virus capsids. The possibility that scars could explain certain data recently obtained by Sundquist's group in electron cryotomography of immature HIV-1 virions is also briefly addressed. Furthermore, a bottom-up reflection is presented on the need to invent new physics to pave the way to a rigorous physical theory of biological phenomena. Our experience in the two researches presented here and our personal interpretation of Schroedinger's vision are behind the latter request. |
1507.05368 | Gennadi Glinsky | Gennadi Glinsky | Rapidly evolving in humans topologically associating domains | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genome-wide proximity placement analysis of 10,598 HSGRL within the context
of the principal regulatory structures of the interphase chromatin, namely
topologically-associating domains (TADs) and specific sub-TAD structures termed
super-enhancer domains (SEDs) revealed that 0.8%-10.3% of TADs contain more
than half of HSGRL. Of the 3,127 TADs in the hESC genome, 24 (0.8%); 53 (1.7%);
259 (8.3%); and 322 (10.3%) harbor 1,110 (52.4%); 1,936 (50.9%); 1,151 (59.6%);
and 1,601 (58.3%) HSGRL sequences from four distinct families, respectively.
TADs that are enriched for HSGRL and termed rapidly-evolving in humans TADs
(revTADs) manifest distinct correlation patterns between HSGRL placements and
recombination rates. There are significant enrichment within revTAD boundaries
of hESC-enhancers, primate-specific CTCF-binding sites, human-specific
RNAPII-binding sites, hCONDELs, and H3K4me3 peaks with human-specific
enrichment at TSS in prefrontal cortex neurons (p < 0.0001 in all instances).
In hESC genome, 331 of 504 (66%) of SE-harboring TADs contain HSGRL and 68% of
SEs co-localize with HSGRL, suggesting that HSGRL rewired SE-driven GRNs within
revTADs by inserting novel and/or erasing existing regulatory sequences.
Consequently, markedly distinct features of chromatin structures evolved in
hESC compared to mouse: the SE quantity is 3-fold higher and the median SE size
is significantly larger; concomitantly, the TAD number is increased by 42%
while the median TAD size is decreased (p=9.11E-37). Present analyses revealed
a global role for HSGRL in increasing both quantity and size of SEs and
increasing the number and size reduction of TADs, which may facilitate a
convergence of TAD and SED architectures of interphase chromatin and define a
trend of increasing regulatory complexity during evolution of GRNs.
| [
{
"created": "Mon, 20 Jul 2015 02:21:37 GMT",
"version": "v1"
}
] | 2015-07-21 | [
[
"Glinsky",
"Gennadi",
""
]
] | Genome-wide proximity placement analysis of 10,598 HSGRL within the context of the principal regulatory structures of the interphase chromatin, namely topologically-associating domains (TADs) and specific sub-TAD structures termed super-enhancer domains (SEDs) revealed that 0.8%-10.3% of TADs contain more than half of HSGRL. Of the 3,127 TADs in the hESC genome, 24 (0.8%); 53 (1.7%); 259 (8.3%); and 322 (10.3%) harbor 1,110 (52.4%); 1,936 (50.9%); 1,151 (59.6%); and 1,601 (58.3%) HSGRL sequences from four distinct families, respectively. TADs that are enriched for HSGRL and termed rapidly-evolving in humans TADs (revTADs) manifest distinct correlation patterns between HSGRL placements and recombination rates. There are significant enrichment within revTAD boundaries of hESC-enhancers, primate-specific CTCF-binding sites, human-specific RNAPII-binding sites, hCONDELs, and H3K4me3 peaks with human-specific enrichment at TSS in prefrontal cortex neurons (p < 0.0001 in all instances). In hESC genome, 331 of 504 (66%) of SE-harboring TADs contain HSGRL and 68% of SEs co-localize with HSGRL, suggesting that HSGRL rewired SE-driven GRNs within revTADs by inserting novel and/or erasing existing regulatory sequences. Consequently, markedly distinct features of chromatin structures evolved in hESC compared to mouse: the SE quantity is 3-fold higher and the median SE size is significantly larger; concomitantly, the TAD number is increased by 42% while the median TAD size is decreased (p=9.11E-37). Present analyses revealed a global role for HSGRL in increasing both quantity and size of SEs and increasing the number and size reduction of TADs, which may facilitate a convergence of TAD and SED architectures of interphase chromatin and define a trend of increasing regulatory complexity during evolution of GRNs. |
1203.5835 | Mike Steel Prof. | Mike Steel | Root location in random trees: A polarity property of all sampling
consistent phylogenetic models except one | 8 pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neutral macroevolutionary models, such as the Yule model, give rise to a
probability distribution on the set of discrete rooted binary trees over a
given leaf set. Such models can provide a signal as to the approximate location
of the root when only the unrooted phylogenetic tree is known, and this signal
becomes relatively more significant as the number of leaves grows. In this
short note, we show that among models that treat all taxa equally, and are
sampling consistent (i.e. the distribution on trees is not affected by taxa yet
to be included), all such models, except one, convey some information as to the
location of the ancestral root in an unrooted tree.
| [
{
"created": "Mon, 26 Mar 2012 22:48:57 GMT",
"version": "v1"
}
] | 2012-03-28 | [
[
"Steel",
"Mike",
""
]
] | Neutral macroevolutionary models, such as the Yule model, give rise to a probability distribution on the set of discrete rooted binary trees over a given leaf set. Such models can provide a signal as to the approximate location of the root when only the unrooted phylogenetic tree is known, and this signal becomes relatively more significant as the number of leaves grows. In this short note, we show that among models that treat all taxa equally, and are sampling consistent (i.e. the distribution on trees is not affected by taxa yet to be included), all such models, except one, convey some information as to the location of the ancestral root in an unrooted tree. |
1506.04461 | Tomasz Rutkowski | Daiki Aminaka, Shoji Makino, and Tomasz M. Rutkowski | Chromatic and High-frequency cVEP-based BCI Paradigm | 4 pages, 4 figures, accepted for EMBC 2015, IEEE copyright | null | 10.1109/EMBC.2015.7318755 | null | q-bio.NC cs.HC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present results of an approach to a code-modulated visual evoked potential
(cVEP) based brain-computer interface (BCI) paradigm using four high-frequency
flashing stimuli. To generate higher frequency stimulation compared to the
state-of-the-art cVEP-based BCIs, we propose to use the light-emitting diodes
(LEDs) driven from a small micro-controller board hardware generator designed
by our team. The high-frequency and green-blue chromatic flashing stimuli are
used in the study in order to minimize a danger of a photosensitive epilepsy
(PSE). We compare the the green-blue chromatic cVEP-based BCI accuracies with
the conventional white-black flicker based interface.
| [
{
"created": "Mon, 15 Jun 2015 02:43:39 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jun 2015 14:21:23 GMT",
"version": "v2"
}
] | 2016-11-17 | [
[
"Aminaka",
"Daiki",
""
],
[
"Makino",
"Shoji",
""
],
[
"Rutkowski",
"Tomasz M.",
""
]
] | We present results of an approach to a code-modulated visual evoked potential (cVEP) based brain-computer interface (BCI) paradigm using four high-frequency flashing stimuli. To generate higher frequency stimulation compared to the state-of-the-art cVEP-based BCIs, we propose to use the light-emitting diodes (LEDs) driven from a small micro-controller board hardware generator designed by our team. The high-frequency and green-blue chromatic flashing stimuli are used in the study in order to minimize a danger of a photosensitive epilepsy (PSE). We compare the the green-blue chromatic cVEP-based BCI accuracies with the conventional white-black flicker based interface. |
2008.00687 | Gennadi Glinsky | Gennadi Glinsky | Genomics-guided drawing of malignant regulatory signatures revealed a
pivotal role of human stem cell-associated retroviral sequences (SCARS) and
functionally-active hESC enhancers | 6 figues; 6 tables | null | null | null | q-bio.GN q-bio.MN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From patients and physicians perspectives, the clinical definition of a tumor
malignant phenotype could be restricted to the early diagnosis of sub-types of
malignancies with the increased risk of existing therapy failure and high
likelihood of death from cancer. It is the viewpoint from which the
understanding of malignant regulatory signatures is considered in this
contribution. Analyses from this perspective of experimental and clinical
observations revealed the pivotal role of human stem cell-associated retroviral
sequences (SCARS) in the origin and pathophysiology of clinically-lethal
malignancies. SCARS represent evolutionary- and biologically-related family of
genomic regulatory sequences, the principal physiological function of which is
to create and maintain the stemness phenotype during human preimplantation
embryogenesis. SCARS expression must be silenced during cellular
differentiation and SCARS activity remains silent in most
terminally-differentiated human cells performing specialized functions in the
human body. De-repression and sustained activation of SCARS result in
differentiation-defective phenotypes, tissue- and organ-specific clinical
manifestations of which are diagnosed as pathological conditions defined by a
consensus of pathomorphological, molecular, and genetic examinations as the
malignant growth. Contemporary evidence are presented that high-fidelity
molecular signals of continuing activities of SCARS in association with genomic
regulatory networks of thousands functionally-active enhancers triggering
engagements of down-stream genetic loci may serve as both reliable diagnostic
tools and druggable molecular targets readily amenable for diagnosis and
efficient therapeutic management of clinically-lethal malignancies.
| [
{
"created": "Mon, 3 Aug 2020 07:34:59 GMT",
"version": "v1"
}
] | 2020-08-04 | [
[
"Glinsky",
"Gennadi",
""
]
] | From patients and physicians perspectives, the clinical definition of a tumor malignant phenotype could be restricted to the early diagnosis of sub-types of malignancies with the increased risk of existing therapy failure and high likelihood of death from cancer. It is the viewpoint from which the understanding of malignant regulatory signatures is considered in this contribution. Analyses from this perspective of experimental and clinical observations revealed the pivotal role of human stem cell-associated retroviral sequences (SCARS) in the origin and pathophysiology of clinically-lethal malignancies. SCARS represent evolutionary- and biologically-related family of genomic regulatory sequences, the principal physiological function of which is to create and maintain the stemness phenotype during human preimplantation embryogenesis. SCARS expression must be silenced during cellular differentiation and SCARS activity remains silent in most terminally-differentiated human cells performing specialized functions in the human body. De-repression and sustained activation of SCARS result in differentiation-defective phenotypes, tissue- and organ-specific clinical manifestations of which are diagnosed as pathological conditions defined by a consensus of pathomorphological, molecular, and genetic examinations as the malignant growth. Contemporary evidence are presented that high-fidelity molecular signals of continuing activities of SCARS in association with genomic regulatory networks of thousands functionally-active enhancers triggering engagements of down-stream genetic loci may serve as both reliable diagnostic tools and druggable molecular targets readily amenable for diagnosis and efficient therapeutic management of clinically-lethal malignancies. |
1506.00344 | Duo-Fang Li | Duo-Fang Li, Tian-Guang Cao, Jin-Peng Geng, Li-Hua Qiao, Jian-Zhong
Gu, Yong Zhan | Error Threshold of Fully Random Eigen Model | 6 pages, 3 figures, 1 table | Chin. Phys. Lett., 2015,32(1):018702 | 10.1088/0256-307X/32/1/018702 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Species evolution is essentially a random process of interaction between
biological populations and their environments. As a result, some physical
parameters in evolution models are subject to statistical fluctuations. In this
paper, two important parameters in the Eigen model, the fitness and mutation
rate, are treated as Gaussian distributed random variables simultaneously to
examine the property of the error threshold. Numerical simulation results show
that the error threshold in the fully random model appears as a crossover
region instead of a phase transition point, and as the fluctuation strength
increases the crossover region becomes smoother and smoother. Furthermore, it
is shown that the randomization of the mutation rate plays a dominant role in
changing the error threshold in the fully random model, which is consistent
with the existing experimental data. The implication of the threshold change
due to the randomization for antiviral strategies is discussed.
| [
{
"created": "Mon, 1 Jun 2015 04:13:56 GMT",
"version": "v1"
}
] | 2015-06-02 | [
[
"Li",
"Duo-Fang",
""
],
[
"Cao",
"Tian-Guang",
""
],
[
"Geng",
"Jin-Peng",
""
],
[
"Qiao",
"Li-Hua",
""
],
[
"Gu",
"Jian-Zhong",
""
],
[
"Zhan",
"Yong",
""
]
] | Species evolution is essentially a random process of interaction between biological populations and their environments. As a result, some physical parameters in evolution models are subject to statistical fluctuations. In this paper, two important parameters in the Eigen model, the fitness and mutation rate, are treated as Gaussian distributed random variables simultaneously to examine the property of the error threshold. Numerical simulation results show that the error threshold in the fully random model appears as a crossover region instead of a phase transition point, and as the fluctuation strength increases the crossover region becomes smoother and smoother. Furthermore, it is shown that the randomization of the mutation rate plays a dominant role in changing the error threshold in the fully random model, which is consistent with the existing experimental data. The implication of the threshold change due to the randomization for antiviral strategies is discussed. |
1401.6602 | Daniel Rabosky | Daniel L. Rabosky | Automatic detection of key innovations, rate shifts, and
diversity-dependence on phylogenetic trees | null | null | 10.1371/journal.pone.0089543 | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of methods have been developed to infer differential rates of
species diversification through time and among clades using time-calibrated
phylogenetic trees. However, we lack a general framework that can delineate and
quantify heterogeneous mixtures of dynamic processes within single phylogenies.
I developed a method that can identify arbitrary numbers of time-varying
diversification processes on phylogenies without specifying their locations in
advance. The method uses reversible-jump Markov Chain Monte Carlo to move
between model subspaces that vary in the number of distinct diversification
regimes. The model assumes that changes in evolutionary regimes occur across
the branches of phylogenetic trees under a compound Poisson process and
explicitly accounts for rate variation through time and among lineages. Using
simulated datasets, I demonstrate that the method can be used to quantify
complex mixtures of time-dependent, diversity-dependent, and constant-rate
diversification processes. I compared the performance of the method to the
MEDUSA model of rate variation among lineages. As an empirical example, I
analyzed the history of speciation and extinction during the radiation of
modern whales. The method described here will greatly facilitate the
exploration of macroevolutionary dynamics across large phylogenetic trees,
which may have been shaped by heterogeneous mixtures of distinct evolutionary
processes.
| [
{
"created": "Sun, 26 Jan 2014 01:28:52 GMT",
"version": "v1"
}
] | 2015-06-18 | [
[
"Rabosky",
"Daniel L.",
""
]
] | A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes. |
2206.13818 | Benjamin Walker | Benjamin J. Walker, Giulia L. Celora, Alain Goriely, Derek E. Moulton,
Helen M. Byrne | Minimal Morphoelastic Models of Solid Tumour Spheroids: A Tutorial | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumour spheroids have been the focus of a variety of mathematical models,
ranging from Greenspan's classical study of the 1970s through to contemporary
agent-based models. Of the many factors that regulate spheroid growth,
mechanical effects are perhaps some of the least studied, both theoretically
and experimentally, though experimental enquiry has established their
significance to tumour growth dynamics. In this tutorial, we formulate a
hierarchy of mathematical models of increasing complexity to explore the role
of mechanics in spheroid growth, all the while seeking to retain desirable
simplicity and analytical tractability. Beginning with the theory of
morphoelasticity, which combines solid mechanics and growth, we successively
refine our assumptions to develop a somewhat minimal model of mechanically
regulated spheroid growth that is free from many unphysical and undesirable
behaviours. In doing so, we will see how iterating upon simple models can
provide rigorous guarantees of emergent behaviour, which are often precluded by
existing, more complex modelling approaches. Perhaps surprisingly, we also
demonstrate that the final model considered in this tutorial agrees favourably
with classical experimental results, highlighting the potential for simple
models to provide mechanistic insight whilst also serving as mathematical
examples.
| [
{
"created": "Tue, 28 Jun 2022 08:23:07 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 13:31:05 GMT",
"version": "v2"
}
] | 2022-12-22 | [
[
"Walker",
"Benjamin J.",
""
],
[
"Celora",
"Giulia L.",
""
],
[
"Goriely",
"Alain",
""
],
[
"Moulton",
"Derek E.",
""
],
[
"Byrne",
"Helen M.",
""
]
] | Tumour spheroids have been the focus of a variety of mathematical models, ranging from Greenspan's classical study of the 1970s through to contemporary agent-based models. Of the many factors that regulate spheroid growth, mechanical effects are perhaps some of the least studied, both theoretically and experimentally, though experimental enquiry has established their significance to tumour growth dynamics. In this tutorial, we formulate a hierarchy of mathematical models of increasing complexity to explore the role of mechanics in spheroid growth, all the while seeking to retain desirable simplicity and analytical tractability. Beginning with the theory of morphoelasticity, which combines solid mechanics and growth, we successively refine our assumptions to develop a somewhat minimal model of mechanically regulated spheroid growth that is free from many unphysical and undesirable behaviours. In doing so, we will see how iterating upon simple models can provide rigorous guarantees of emergent behaviour, which are often precluded by existing, more complex modelling approaches. Perhaps surprisingly, we also demonstrate that the final model considered in this tutorial agrees favourably with classical experimental results, highlighting the potential for simple models to provide mechanistic insight whilst also serving as mathematical examples. |
2302.02968 | Carles Falc\'o | Carles Falc\'o, Daniel J. Cohen, Jos\'e A. Carrillo, Ruth E. Baker | Quantifying tissue growth, shape and collision via continuum models and
Bayesian inference | null | null | null | null | q-bio.TO nlin.PS q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Although tissues are usually studied in isolation, this situation rarely
occurs in biology, as cells, tissues, and organs, coexist and interact across
scales to determine both shape and function. Here, we take a quantitative
approach combining data from recent experiments, mathematical modelling, and
Bayesian parameter inference, to describe the self-assembly of multiple
epithelial sheets by growth and collision. We use two simple and well-studied
continuum models, where cells move either randomly or following population
pressure gradients. After suitable calibration, both models prove to be
practically identifiable, and can reproduce the main features of single tissue
expansions. However, our findings reveal that whenever tissue-tissue
interactions become relevant, the random motion assumption can lead to
unrealistic behaviour. Under this setting, a model accounting for population
pressure from different cell populations is more appropriate and shows a better
agreement with experimental measurements. Finally, we discuss how tissue shape
and pressure affect multi-tissue collisions. Our work thus provides a
systematic approach to quantify and predict complex tissue configurations with
applications in the design of tissue composites and more generally in tissue
engineering.
| [
{
"created": "Mon, 6 Feb 2023 17:58:18 GMT",
"version": "v1"
}
] | 2023-02-07 | [
[
"Falcó",
"Carles",
""
],
[
"Cohen",
"Daniel J.",
""
],
[
"Carrillo",
"José A.",
""
],
[
"Baker",
"Ruth E.",
""
]
] | Although tissues are usually studied in isolation, this situation rarely occurs in biology, as cells, tissues, and organs, coexist and interact across scales to determine both shape and function. Here, we take a quantitative approach combining data from recent experiments, mathematical modelling, and Bayesian parameter inference, to describe the self-assembly of multiple epithelial sheets by growth and collision. We use two simple and well-studied continuum models, where cells move either randomly or following population pressure gradients. After suitable calibration, both models prove to be practically identifiable, and can reproduce the main features of single tissue expansions. However, our findings reveal that whenever tissue-tissue interactions become relevant, the random motion assumption can lead to unrealistic behaviour. Under this setting, a model accounting for population pressure from different cell populations is more appropriate and shows a better agreement with experimental measurements. Finally, we discuss how tissue shape and pressure affect multi-tissue collisions. Our work thus provides a systematic approach to quantify and predict complex tissue configurations with applications in the design of tissue composites and more generally in tissue engineering. |
1703.02828 | Masahiko Ueda | Masahiko Ueda, Nobuto Takeuchi, Kunihiko Kaneko | Stronger selection can slow down evolution driven by recombination on a
smooth fitness landscape | 12 pages, 2 figures | PLoS ONE 12(8): e0183120 (2017) | 10.1371/journal.pone.0183120 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stronger selection implies faster evolution---that is, the greater the force,
the faster the change. This apparently self-evident proposition, however, is
derived under the assumption that genetic variation within a population is
primarily supplied by mutation (i.e.\ mutation-driven evolution). Here, we show
that this proposition does not actually hold for recombination-driven
evolution, i.e.\ evolution in which genetic variation is primarily created by
recombination rather than mutation. By numerically investigating population
genetics models of recombination, migration and selection, we demonstrate that
stronger selection can slow down evolution on a perfectly smooth fitness
landscape. Through simple analytical calculation, this apparently
counter-intuitive result is shown to stem from two opposing effects of natural
selection on the rate of evolution. On the one hand, natural selection tends to
increase the rate of evolution by increasing the fixation probability of fitter
genotypes. On the other hand, natural selection tends to decrease the rate of
evolution by decreasing the chance of recombination between immigrants and
resident individuals. As a consequence of these opposing effects, there is a
finite selection pressure maximizing the rate of evolution. Hence, stronger
selection can imply slower evolution if genetic variation is primarily supplied
by recombination.
| [
{
"created": "Wed, 8 Mar 2017 13:19:12 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jun 2017 06:00:07 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Aug 2017 02:55:48 GMT",
"version": "v3"
}
] | 2017-08-18 | [
[
"Ueda",
"Masahiko",
""
],
[
"Takeuchi",
"Nobuto",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | Stronger selection implies faster evolution---that is, the greater the force, the faster the change. This apparently self-evident proposition, however, is derived under the assumption that genetic variation within a population is primarily supplied by mutation (i.e.\ mutation-driven evolution). Here, we show that this proposition does not actually hold for recombination-driven evolution, i.e.\ evolution in which genetic variation is primarily created by recombination rather than mutation. By numerically investigating population genetics models of recombination, migration and selection, we demonstrate that stronger selection can slow down evolution on a perfectly smooth fitness landscape. Through simple analytical calculation, this apparently counter-intuitive result is shown to stem from two opposing effects of natural selection on the rate of evolution. On the one hand, natural selection tends to increase the rate of evolution by increasing the fixation probability of fitter genotypes. On the other hand, natural selection tends to decrease the rate of evolution by decreasing the chance of recombination between immigrants and resident individuals. As a consequence of these opposing effects, there is a finite selection pressure maximizing the rate of evolution. Hence, stronger selection can imply slower evolution if genetic variation is primarily supplied by recombination. |
2012.15223 | Cameron Smith | Cameron A. Smith and Christian A. Yates | Incorporating domain growth into hybrid methods for reaction-diffusion
systems | Main text: 22 pages, 6 figures. Supplementary material: 8 pages | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Reaction--diffusion mechanism are a robust paradigm that can be used to
represent many biological and physical phenomena over multiple spatial scales.
Applications include intracellular dynamics, the migration of cells and the
patterns formed by vegetation in semi-arid landscapes. Moreover, domain growth
is an important process for embryonic growth and wound healing. There are many
numerical modelling frameworks capable of simulating such systems on growing
domains, however each of these may be well suited to different spatial scales
and particle numbers. Recently, spatially extended hybrid methods on static
domains have been produced in order to bridge the gap between these different
modelling paradigms in order to represent multiscale phenomena. However, such
methods have not been developed with domain growth in mind. In this paper, we
develop three hybrid methods on growing domains, extending three of the
prominent static domain hybrid methods. We also provide detailed algorithms to
allow others to employ them. We demonstrate that the methods are able to
accurately model three representative reaction-diffusion systems accurately and
without bias.
| [
{
"created": "Wed, 30 Dec 2020 16:32:54 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"Smith",
"Cameron A.",
""
],
[
"Yates",
"Christian A.",
""
]
] | Reaction--diffusion mechanism are a robust paradigm that can be used to represent many biological and physical phenomena over multiple spatial scales. Applications include intracellular dynamics, the migration of cells and the patterns formed by vegetation in semi-arid landscapes. Moreover, domain growth is an important process for embryonic growth and wound healing. There are many numerical modelling frameworks capable of simulating such systems on growing domains, however each of these may be well suited to different spatial scales and particle numbers. Recently, spatially extended hybrid methods on static domains have been produced in order to bridge the gap between these different modelling paradigms in order to represent multiscale phenomena. However, such methods have not been developed with domain growth in mind. In this paper, we develop three hybrid methods on growing domains, extending three of the prominent static domain hybrid methods. We also provide detailed algorithms to allow others to employ them. We demonstrate that the methods are able to accurately model three representative reaction-diffusion systems accurately and without bias. |
1812.00191 | Yuting Fang | Yuting Fang, Adam Noel, Andrew W. Eckford, and Nan Yang | Expected Density of Cooperative Bacteria in a 2D Quorum Sensing Based
Molecular Communication System | 7 pages, 7 figures; This work has been accepted by IEEE Globecom 2019 | null | null | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The exchange of small molecular signals within microbial populations is
generally referred to as quorum sensing (QS). QS is ubiquitous in nature and
enables microorganisms to respond to fluctuations in living environments by
working together. In this study, a QS-based molecular communication system
within a microbial population in a two-dimensional (2D) environment is
analytically modeled. Microorganisms are randomly distributed on a 2D circle
where each one releases molecules at random times. The number of molecules
observed at each randomly-distributed bacterium is first derived by
characterizing the diffusion and degradation of signaling molecules within the
population. Using the derived result and some approximation, the expected
density of cooperative bacteria is derived. Our model captures the basic
features of QS. The analytical results for noisy signal propagation agree with
simulation results where the Brownian motion of molecules is simulated by a
particle-based method. Therefore, we anticipate that our model can be used to
predict the density of cooperative bacteria in a variety of QS-coordinated
activities, e.g., biofilm formation and antibiotic resistance.
| [
{
"created": "Sat, 1 Dec 2018 11:38:23 GMT",
"version": "v1"
},
{
"created": "Sat, 4 May 2019 06:21:47 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Sep 2019 04:23:43 GMT",
"version": "v3"
}
] | 2019-09-16 | [
[
"Fang",
"Yuting",
""
],
[
"Noel",
"Adam",
""
],
[
"Eckford",
"Andrew W.",
""
],
[
"Yang",
"Nan",
""
]
] | The exchange of small molecular signals within microbial populations is generally referred to as quorum sensing (QS). QS is ubiquitous in nature and enables microorganisms to respond to fluctuations in living environments by working together. In this study, a QS-based molecular communication system within a microbial population in a two-dimensional (2D) environment is analytically modeled. Microorganisms are randomly distributed on a 2D circle where each one releases molecules at random times. The number of molecules observed at each randomly-distributed bacterium is first derived by characterizing the diffusion and degradation of signaling molecules within the population. Using the derived result and some approximation, the expected density of cooperative bacteria is derived. Our model captures the basic features of QS. The analytical results for noisy signal propagation agree with simulation results where the Brownian motion of molecules is simulated by a particle-based method. Therefore, we anticipate that our model can be used to predict the density of cooperative bacteria in a variety of QS-coordinated activities, e.g., biofilm formation and antibiotic resistance. |
1811.06717 | Oksana Gorobets Prof. | Svitlana Gorobets, Oksana Gorobets, Yuri Gorobets, Maryna Bulaievska | Ferrimagnetic organelles in multicellular organisms | 16 pages, 15 figures | null | null | null | q-bio.TO cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, it was revealed by means of methods of atomic force microscopy
and magnetic force microscopy that the biogenic magnetic nanoparticles are
localized in the form of chains in the walls of the capillaries of animals and
the walls of the conducting tissue of plants and fungi. The biogenic magnetic
nanoparticles are part of the transport system in multicellular organisms. In
this connection, a new idea of function of biogenic magnetic nanoparticles is
discussed in the paper that the chains of biogenic magnetic nanoparticles
represent a ferrimagnetic organelles of a specific purpose.
| [
{
"created": "Fri, 16 Nov 2018 09:18:32 GMT",
"version": "v1"
}
] | 2018-11-19 | [
[
"Gorobets",
"Svitlana",
""
],
[
"Gorobets",
"Oksana",
""
],
[
"Gorobets",
"Yuri",
""
],
[
"Bulaievska",
"Maryna",
""
]
] | In this paper, it was revealed by means of methods of atomic force microscopy and magnetic force microscopy that the biogenic magnetic nanoparticles are localized in the form of chains in the walls of the capillaries of animals and the walls of the conducting tissue of plants and fungi. The biogenic magnetic nanoparticles are part of the transport system in multicellular organisms. In this connection, a new idea of function of biogenic magnetic nanoparticles is discussed in the paper that the chains of biogenic magnetic nanoparticles represent a ferrimagnetic organelles of a specific purpose. |
1104.0543 | Vladimir Chechetkin R. | V.R. Chechetkin | Kinetics of binding and geometry of cells on molecular biochips | 10 pages, 1 figure | Physics Letters A, 2007, V. 366, N 4-5, pp. 460-465 | 10.1016/j.physleta.2007.01.079 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine how the shape of cells and the geometry of experiment affect the
reaction-diffusion kinetics at the binding between target and probe molecules
on molecular biochips. In particular, we compare the binding kinetics for the
probes immobilized on surface of the semispherical and flat circular cells, the
limit of thin slab of analyte solution over probe cell as well as hemispherical
gel pads and cells printed in gel slab over a substrate. It is shown that
hemispherical geometry provides significantly faster binding kinetics and
ensures more spatially homogeneous distribution of local (from a pixel) signals
over a cell in the transient regime. The advantage of using thin slabs with
small volume of analyte solution may be hampered by the much longer binding
kinetics needing the auxiliary mixing devices. Our analysis proves that the
shape of cells and the geometry of experiment should be included to the list of
essential factors at biochip designing.
| [
{
"created": "Mon, 4 Apr 2011 12:30:36 GMT",
"version": "v1"
}
] | 2011-04-05 | [
[
"Chechetkin",
"V. R.",
""
]
] | We examine how the shape of cells and the geometry of experiment affect the reaction-diffusion kinetics at the binding between target and probe molecules on molecular biochips. In particular, we compare the binding kinetics for the probes immobilized on surface of the semispherical and flat circular cells, the limit of thin slab of analyte solution over probe cell as well as hemispherical gel pads and cells printed in gel slab over a substrate. It is shown that hemispherical geometry provides significantly faster binding kinetics and ensures more spatially homogeneous distribution of local (from a pixel) signals over a cell in the transient regime. The advantage of using thin slabs with small volume of analyte solution may be hampered by the much longer binding kinetics needing the auxiliary mixing devices. Our analysis proves that the shape of cells and the geometry of experiment should be included to the list of essential factors at biochip designing. |
2111.14489 | Yuki Koyanagi | J{\o}rgen Ellegaard Andersen, Jens Ledet Jensen, Yuki Koyanagi, Jakob
Toudahl Nielsen and Rasmus Villemoes | Using Topology to Estimate Structural Similarities of Proteins | 11 pages, 11 figures | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | An effective model for protein structures is important for the study of
protein geometry, which, to a large extent, determine the functions of
proteins. There are a number of approaches for modelling; one might focus on
the conformation of the backbone or H-bonds, and the model may be based on the
geometry or the topology of the structure in focus. We focus on the topology of
H-bonds in proteins, and explore the link between the topology and the geometry
of protein structures. More specifically, we take inspiration from CASP
Evaluation of Model Accuracy and investigate the extent to which structural
similarities, via GDT_TS, can be estimated from the topology of H-bonds. We
report on two experiments; one where we attempt to mimic the computation of
GDT_TS based solely on the topology of H-bonds, and the other where we perform
linear regression where the independent variables are various scores computed
from the topology of H-bonds. We achieved an average $\Delta\text{GDT}$ of 6.45
with 54.5% of predictions inside 2 $\Delta\mathrm{GDT}$ for the first method,
and an average $\Delta\mathrm{GDT}$ of 4.41 with 72.7% of predictions inside 2
$\Delta\mathrm{GDT}$ for the second method.
| [
{
"created": "Mon, 29 Nov 2021 12:23:42 GMT",
"version": "v1"
}
] | 2021-11-30 | [
[
"Andersen",
"Jørgen Ellegaard",
""
],
[
"Jensen",
"Jens Ledet",
""
],
[
"Koyanagi",
"Yuki",
""
],
[
"Nielsen",
"Jakob Toudahl",
""
],
[
"Villemoes",
"Rasmus",
""
]
] | An effective model for protein structures is important for the study of protein geometry, which, to a large extent, determine the functions of proteins. There are a number of approaches for modelling; one might focus on the conformation of the backbone or H-bonds, and the model may be based on the geometry or the topology of the structure in focus. We focus on the topology of H-bonds in proteins, and explore the link between the topology and the geometry of protein structures. More specifically, we take inspiration from CASP Evaluation of Model Accuracy and investigate the extent to which structural similarities, via GDT_TS, can be estimated from the topology of H-bonds. We report on two experiments; one where we attempt to mimic the computation of GDT_TS based solely on the topology of H-bonds, and the other where we perform linear regression where the independent variables are various scores computed from the topology of H-bonds. We achieved an average $\Delta\text{GDT}$ of 6.45 with 54.5% of predictions inside 2 $\Delta\mathrm{GDT}$ for the first method, and an average $\Delta\mathrm{GDT}$ of 4.41 with 72.7% of predictions inside 2 $\Delta\mathrm{GDT}$ for the second method. |
1406.7250 | Shankar Vembu | Amit G. Deshwar, Shankar Vembu, Christina K. Yung, Gun Ho Jang,
Lincoln Stein, Quaid Morris | Reconstructing subclonal composition and evolution from whole genome
sequencing of tumors | null | null | null | null | q-bio.PE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumors often contain multiple subpopulations of cancerous cells defined by
distinct somatic mutations. We describe a new method, PhyloWGS, that can be
applied to WGS data from one or more tumor samples to reconstruct complete
genotypes of these subpopulations based on variant allele frequencies (VAFs) of
point mutations and population frequencies of structural variations. We
introduce a principled phylogenic correction for VAFs in loci affected by copy
number alterations and we show that this correction greatly improves subclonal
reconstruction compared to existing methods.
| [
{
"created": "Fri, 27 Jun 2014 18:01:20 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Oct 2014 19:24:52 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Jan 2015 22:05:57 GMT",
"version": "v3"
}
] | 2015-01-08 | [
[
"Deshwar",
"Amit G.",
""
],
[
"Vembu",
"Shankar",
""
],
[
"Yung",
"Christina K.",
""
],
[
"Jang",
"Gun Ho",
""
],
[
"Stein",
"Lincoln",
""
],
[
"Morris",
"Quaid",
""
]
] | Tumors often contain multiple subpopulations of cancerous cells defined by distinct somatic mutations. We describe a new method, PhyloWGS, that can be applied to WGS data from one or more tumor samples to reconstruct complete genotypes of these subpopulations based on variant allele frequencies (VAFs) of point mutations and population frequencies of structural variations. We introduce a principled phylogenic correction for VAFs in loci affected by copy number alterations and we show that this correction greatly improves subclonal reconstruction compared to existing methods. |
q-bio/0605008 | Enrico Carlon | T. Heim, L.-C. Tranchevent, E. Carlon and G. T. Barkema | Physics-based analysis of Affymetrix microarray data | 11 pages, 10 figures | J. Phys. Chem. B 110, 22786 (2006) | 10.1021/jp062889x | null | q-bio.BM cond-mat.stat-mech physics.chem-ph | null | We analyze publicly available data on Affymetrix microarrays spike-in
experiments on the human HGU133 chipset in which sequences are added in
solution at known concentrations. The spike-in set contains sequences of
bacterial, human and artificial origin. Our analysis is based on a recently
introduced molecular-based model [E. Carlon and T. Heim, Physica A 362, 433
(2006)] which takes into account both probe-target hybridization and
target-target partial hybridization in solution. The hybridization free
energies are obtained from the nearest-neighbor model with experimentally
determined parameters. The molecular-based model suggests a rescaling that
should result in a "collapse" of the data at different concentrations into a
single universal curve. We indeed find such a collapse, with the same
parameters as obtained before for the older HGU95 chip set. The quality of the
collapse varies according to the probe set considered. Artificial sequences,
chosen by Affymetrix to be as different as possible from any other human genome
sequence, generally show a much better collapse and thus a better agreement
with the model than all other sequences. This suggests that the observed
deviations from the predicted collapse are related to the choice of probes or
have a biological origin, rather than being a problem with the proposed model.
| [
{
"created": "Fri, 5 May 2006 09:11:56 GMT",
"version": "v1"
}
] | 2011-11-10 | [
[
"Heim",
"T.",
""
],
[
"Tranchevent",
"L. -C.",
""
],
[
"Carlon",
"E.",
""
],
[
"Barkema",
"G. T.",
""
]
] | We analyze publicly available data on Affymetrix microarrays spike-in experiments on the human HGU133 chipset in which sequences are added in solution at known concentrations. The spike-in set contains sequences of bacterial, human and artificial origin. Our analysis is based on a recently introduced molecular-based model [E. Carlon and T. Heim, Physica A 362, 433 (2006)] which takes into account both probe-target hybridization and target-target partial hybridization in solution. The hybridization free energies are obtained from the nearest-neighbor model with experimentally determined parameters. The molecular-based model suggests a rescaling that should result in a "collapse" of the data at different concentrations into a single universal curve. We indeed find such a collapse, with the same parameters as obtained before for the older HGU95 chip set. The quality of the collapse varies according to the probe set considered. Artificial sequences, chosen by Affymetrix to be as different as possible from any other human genome sequence, generally show a much better collapse and thus a better agreement with the model than all other sequences. This suggests that the observed deviations from the predicted collapse are related to the choice of probes or have a biological origin, rather than being a problem with the proposed model. |
1212.3120 | Evgenii Levites | Evgenii Vladimirovich Levites and Svetlana Sergeevna Kirikovich | Zygotic combinatorial process in plants | 14 pages, 1 table | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experimental data that prove the existence of the zygotic combinatorial
process occurring in an embryogenesis-entering zygote are presented in the
paper. The zygotic combinatorial process is found when analyzing F1 hybrid
plants obtained from crossing homozygous forms different, minimum, in two
marker enzymes, and it is found in that hybrid plant which, with one marker
enzyme heterozygous spectrum, has a homozygous spectrum of the other. The
zygotic combinatorial process leads to F1 hybrids uniformity aberration. The
zygotic combinatory process revealed in the study is supposed to be conditioned
by chromosome polyteny in mother plant cells and diminution of chromatin excess
from the embryogenesis-entering zygote. An obligatory condition for
combinatorial process is the presence of free exchange of cromatides among
homological chromosomes in an embryogenesis-entering cell, i.e. the presence of
crossing-over analogous to the one proceeding at meiosis. The found
combinatorial process and the earlier-obtained data confirm the hypothesis on
multi-dimensionality of inherited information coding. Differential polyteny of
certain chromosome regions can lead to differences among homozygous plants
having the same alleles in genes located in polytenized regions and controlling
morpho-physiological traits.
| [
{
"created": "Thu, 13 Dec 2012 10:50:18 GMT",
"version": "v1"
}
] | 2012-12-14 | [
[
"Levites",
"Evgenii Vladimirovich",
""
],
[
"Kirikovich",
"Svetlana Sergeevna",
""
]
] | Experimental data that prove the existence of the zygotic combinatorial process occurring in an embryogenesis-entering zygote are presented in the paper. The zygotic combinatorial process is found when analyzing F1 hybrid plants obtained from crossing homozygous forms different, minimum, in two marker enzymes, and it is found in that hybrid plant which, with one marker enzyme heterozygous spectrum, has a homozygous spectrum of the other. The zygotic combinatorial process leads to F1 hybrids uniformity aberration. The zygotic combinatory process revealed in the study is supposed to be conditioned by chromosome polyteny in mother plant cells and diminution of chromatin excess from the embryogenesis-entering zygote. An obligatory condition for combinatorial process is the presence of free exchange of cromatides among homological chromosomes in an embryogenesis-entering cell, i.e. the presence of crossing-over analogous to the one proceeding at meiosis. The found combinatorial process and the earlier-obtained data confirm the hypothesis on multi-dimensionality of inherited information coding. Differential polyteny of certain chromosome regions can lead to differences among homozygous plants having the same alleles in genes located in polytenized regions and controlling morpho-physiological traits. |
1908.07570 | Ryota Takaki | Ryota Takaki, Mauro L. Mugnai, Yonathan Goldtzvik, D. Thirumalai | How kinesin waits for ATP affects the nucleotide and load dependence of
the stepping kinetics | null | null | 10.1073/pnas.1913650116 | null | q-bio.SC cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dimeric molecular motors walk on polar tracks by binding and hydrolyzing one
ATP per step. Despite tremendous progress, the waiting state for ATP binding in
the well-studied kinesin that walks on microtubule (MT), remains controversial.
One experiment suggests that in the waiting state both heads are bound to the
MT, while the other shows that ATP binds to the leading head after the partner
head detaches. To discriminate between these two scenarios, we developed a
theory to calculate accurately several experimentally measurable quantities as
a function of ATP concentration and resistive force.
In particular, we predict that measurement of the randomness parameter could
discriminate between the two scenarios for the waiting state of kinesin,
thereby resolving this standing controversy.
| [
{
"created": "Tue, 20 Aug 2019 19:06:39 GMT",
"version": "v1"
}
] | 2022-06-08 | [
[
"Takaki",
"Ryota",
""
],
[
"Mugnai",
"Mauro L.",
""
],
[
"Goldtzvik",
"Yonathan",
""
],
[
"Thirumalai",
"D.",
""
]
] | Dimeric molecular motors walk on polar tracks by binding and hydrolyzing one ATP per step. Despite tremendous progress, the waiting state for ATP binding in the well-studied kinesin that walks on microtubule (MT), remains controversial. One experiment suggests that in the waiting state both heads are bound to the MT, while the other shows that ATP binds to the leading head after the partner head detaches. To discriminate between these two scenarios, we developed a theory to calculate accurately several experimentally measurable quantities as a function of ATP concentration and resistive force. In particular, we predict that measurement of the randomness parameter could discriminate between the two scenarios for the waiting state of kinesin, thereby resolving this standing controversy. |
2204.12598 | Halie Rando | Halie M. Rando, Christian Brueffer, Ronan Lordan, Anna Ada Dattoli,
David Manheim, Jesse G. Meyer, Ariel I. Mundo, Dimitri Perrin, David Mai,
Nils Wellhausen, COVID-19 Review Consortium, Anthony Gitter, Casey S. Greene | Molecular and Serologic Diagnostic Technologies for SARS-CoV-2 | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic has presented many challenges that have spurred
biotechnological research to address specific problems. Diagnostics is one area
where biotechnology has been critical. Diagnostic tests play a vital role in
managing a viral threat by facilitating the detection of infected and/or
recovered individuals. From the perspective of what information is provided,
these tests fall into two major categories, molecular and serological.
Molecular diagnostic techniques assay whether a virus is present in a
biological sample, thus making it possible to identify individuals who are
currently infected. Additionally, when the immune system is exposed to a virus,
it responds by producing antibodies specific to the virus. Serological tests
make it possible to identify individuals who have mounted an immune response to
a virus of interest and therefore facilitate the identification of individuals
who have previously encountered the virus. These two categories of tests
provide different perspectives valuable to understanding the spread of
SARS-CoV-2. Within these categories, different biotechnological approaches
offer specific advantages and disadvantages. Here we review the categories of
tests developed for the detection of the SARS-CoV-2 virus or antibodies against
SARS-CoV-2 and discuss the role of diagnostics in the COVID-19 pandemic.
| [
{
"created": "Tue, 26 Apr 2022 21:22:40 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Apr 2022 17:59:08 GMT",
"version": "v2"
}
] | 2022-04-29 | [
[
"Rando",
"Halie M.",
""
],
[
"Brueffer",
"Christian",
""
],
[
"Lordan",
"Ronan",
""
],
[
"Dattoli",
"Anna Ada",
""
],
[
"Manheim",
"David",
""
],
[
"Meyer",
"Jesse G.",
""
],
[
"Mundo",
"Ariel I.",
""
],
... | The COVID-19 pandemic has presented many challenges that have spurred biotechnological research to address specific problems. Diagnostics is one area where biotechnology has been critical. Diagnostic tests play a vital role in managing a viral threat by facilitating the detection of infected and/or recovered individuals. From the perspective of what information is provided, these tests fall into two major categories, molecular and serological. Molecular diagnostic techniques assay whether a virus is present in a biological sample, thus making it possible to identify individuals who are currently infected. Additionally, when the immune system is exposed to a virus, it responds by producing antibodies specific to the virus. Serological tests make it possible to identify individuals who have mounted an immune response to a virus of interest and therefore facilitate the identification of individuals who have previously encountered the virus. These two categories of tests provide different perspectives valuable to understanding the spread of SARS-CoV-2. Within these categories, different biotechnological approaches offer specific advantages and disadvantages. Here we review the categories of tests developed for the detection of the SARS-CoV-2 virus or antibodies against SARS-CoV-2 and discuss the role of diagnostics in the COVID-19 pandemic. |
1502.03481 | Leonardo Barbosa | Leonardo S. Barbosa and Nestor Caticha | Backward Renormalization Priors and the Cortical Source Localization
Problem with EEG or MEG | null | null | null | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study source localization from high dimensional M/EEG data by extending a
multiscale method based on Entropic inference devised to increase the spatial
resolution of inverse problems. This method is used to construct informative
prior distributions in a manner inspired in the context of fMRI (Amaral et al
2004). We construct a set of renormalized lattices that approximate the cortex
region where the source activity is located and address the related problem of
defining the relevant variables in a coarser scale representation of the
cortex. The priors can be used in conjunction with other Bayesian methods such
as the Variational Bayes method (VB, Sato et al 2004). The central point of the
algorithm is that it uses a posterior obtained at a coarse scale to induce a
prior at the next finer scale stage of the problem. We present results which
suggest, on simulated data, that this way of including prior information is a
useful aid for the source location problem. This is judged by the rate and
magnitude of errors in source localization. Better convergence times are also
achieved. We also present results on public data collected during a face
recognition task.
| [
{
"created": "Wed, 11 Feb 2015 23:09:59 GMT",
"version": "v1"
}
] | 2015-02-13 | [
[
"Barbosa",
"Leonardo S.",
""
],
[
"Caticha",
"Nestor",
""
]
] | We study source localization from high dimensional M/EEG data by extending a multiscale method based on Entropic inference devised to increase the spatial resolution of inverse problems. This method is used to construct informative prior distributions in a manner inspired in the context of fMRI (Amaral et al 2004). We construct a set of renormalized lattices that approximate the cortex region where the source activity is located and address the related problem of defining the relevant variables in a coarser scale representation of the cortex. The priors can be used in conjunction with other Bayesian methods such as the Variational Bayes method (VB, Sato et al 2004). The central point of the algorithm is that it uses a posterior obtained at a coarse scale to induce a prior at the next finer scale stage of the problem. We present results which suggest, on simulated data, that this way of including prior information is a useful aid for the source location problem. This is judged by the rate and magnitude of errors in source localization. Better convergence times are also achieved. We also present results on public data collected during a face recognition task. |
1709.06720 | Vladimir Privman | Sergii Domanskyi, Justin W. Nicholatos, Joshua E. Schilling, Vladimir
Privman, Sergiy Libert | SIRT6 Knockout Cells Resist Apoptosis Initiation but Not Progression: A
Computational Method to Evaluate the Progression of Apoptosis | null | Apoptosis 22 (11), 1336-1343 (2017) | 10.1007/s10495-017-1412-0 | VP-280 | q-bio.QM cond-mat.stat-mech q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Apoptosis is essential for numerous processes, such as development,
resistance to infections, and suppression of tumorigenesis. Here, we
investigate the influence of the nutrient sensing and longevity-assuring enzyme
SIRT6 on the dynamics of apoptosis triggered by serum starvation. Specifically,
we characterize the progression of apoptosis in wild type and SIRT6 deficient
mouse embryonic fibroblasts using time-lapse flow cytometry and computational
modelling based on rate-equations and cell distribution analysis. We find that
SIRT6 deficient cells resist apoptosis by delaying its initiation.
Interestingly, once apoptosis is initiated, the rate of its progression is
higher in SIRT6 null cells compared to identically cultured wild type cells.
However, SIRT6 null cells succumb to apoptosis more slowly, not only in
response to nutrient deprivation but also in response to other stresses. Our
data suggest that SIRT6 plays a role in several distinct steps of apoptosis.
Overall, we demonstrate the utility of our computational model to describe
stages of apoptosis progression and the integrity of the cellular membrane.
Such measurements will be useful in a broad range of biological applications.
We describe a computational method to evaluate the progression of apoptosis
through different stages. Using this method, we describe how cells devoid of
SIRT6 longevity gene respond to apoptosis stimuli, specifically, how they
respond to starvation. We find that SIRT6 cells resist apoptosis initiation;
however, once initiated, they progress through the apoptosis at a faster rate.
These data are first of the kind and suggest that SIRT6 activities might play
different roles at different stages of apoptosis. The model that we propose can
be used to quantitatively evaluate progression of apoptosis and will be useful
in studies of cancer treatments and other areas where apoptosis is involved.
| [
{
"created": "Wed, 20 Sep 2017 04:31:26 GMT",
"version": "v1"
}
] | 2017-11-22 | [
[
"Domanskyi",
"Sergii",
""
],
[
"Nicholatos",
"Justin W.",
""
],
[
"Schilling",
"Joshua E.",
""
],
[
"Privman",
"Vladimir",
""
],
[
"Libert",
"Sergiy",
""
]
] | Apoptosis is essential for numerous processes, such as development, resistance to infections, and suppression of tumorigenesis. Here, we investigate the influence of the nutrient sensing and longevity-assuring enzyme SIRT6 on the dynamics of apoptosis triggered by serum starvation. Specifically, we characterize the progression of apoptosis in wild type and SIRT6 deficient mouse embryonic fibroblasts using time-lapse flow cytometry and computational modelling based on rate-equations and cell distribution analysis. We find that SIRT6 deficient cells resist apoptosis by delaying its initiation. Interestingly, once apoptosis is initiated, the rate of its progression is higher in SIRT6 null cells compared to identically cultured wild type cells. However, SIRT6 null cells succumb to apoptosis more slowly, not only in response to nutrient deprivation but also in response to other stresses. Our data suggest that SIRT6 plays a role in several distinct steps of apoptosis. Overall, we demonstrate the utility of our computational model to describe stages of apoptosis progression and the integrity of the cellular membrane. Such measurements will be useful in a broad range of biological applications. We describe a computational method to evaluate the progression of apoptosis through different stages. Using this method, we describe how cells devoid of SIRT6 longevity gene respond to apoptosis stimuli, specifically, how they respond to starvation. We find that SIRT6 cells resist apoptosis initiation; however, once initiated, they progress through the apoptosis at a faster rate. These data are first of the kind and suggest that SIRT6 activities might play different roles at different stages of apoptosis. The model that we propose can be used to quantitatively evaluate progression of apoptosis and will be useful in studies of cancer treatments and other areas where apoptosis is involved. |
q-bio/0601027 | Jie Liang | Ronald Jackups, Jr. and Jie Liang | Interstrand pairing patterns in $\beta$-barrel membrane proteins: the
positive-outside rule, aromatic rescue, and strand registration prediction | 26 pages, 4 figures, and 4 tables | J. Mol. Biol. (2005) 354:979--993 | 10.1016/j.jmb.2005.09.094 | null | q-bio.BM | null | $\beta$-barrel membrane proteins are found in the outer membrane of
gram-negative bacteria, mitochondria, and chloroplasts. We have developed
probabilistic models to quantify propensities of residues for different spatial
locations and for interstrand pairwise contact interactions involving strong
H-bonds, side-chain interactions, and weak H-bonds. The propensity values and
p-values measuring statistical significance are calculated exactly by
analytical formulae we have developed. Contrary to the ``positive-inside'' rule
for helical membrane proteins, $\beta$-barrel membrane proteins follow a
significant albeit weaker ``positive-outside'' rule, in that the basic residues
Arg and Lys are disproportionately favored in the extracellular cap region and
disfavored in the periplasmic cap region. Different residue pairs prefer strong
backbone H-bonded interstrand pairings (e.g. Gly-Aromatic) or non-H-bonded
pairings (e.g. Aromatic-Aromatic). In addition, Tyr and Phe participate in
aromatic rescue by shielding Gly from polar environments. These propensities
can be used to predict the registration of strand pairs, an important task for
the structure prediction of $\beta$-barrel membrane proteins. Our accuracy of
44% is considerably better than random (7%) and other studies. Our results
imply several experiments that can help to elucidate the mechanisms of in vitro
and in vivo folding of $\beta$-barrel membrane proteins. See supplementary
material after the bibliography for detailed techniques.
| [
{
"created": "Thu, 19 Jan 2006 06:33:05 GMT",
"version": "v1"
}
] | 2012-08-27 | [
[
"Jackups,",
"Ronald",
"Jr."
],
[
"Liang",
"Jie",
""
]
] | $\beta$-barrel membrane proteins are found in the outer membrane of gram-negative bacteria, mitochondria, and chloroplasts. We have developed probabilistic models to quantify propensities of residues for different spatial locations and for interstrand pairwise contact interactions involving strong H-bonds, side-chain interactions, and weak H-bonds. The propensity values and p-values measuring statistical significance are calculated exactly by analytical formulae we have developed. Contrary to the ``positive-inside'' rule for helical membrane proteins, $\beta$-barrel membrane proteins follow a significant albeit weaker ``positive-outside'' rule, in that the basic residues Arg and Lys are disproportionately favored in the extracellular cap region and disfavored in the periplasmic cap region. Different residue pairs prefer strong backbone H-bonded interstrand pairings (e.g. Gly-Aromatic) or non-H-bonded pairings (e.g. Aromatic-Aromatic). In addition, Tyr and Phe participate in aromatic rescue by shielding Gly from polar environments. These propensities can be used to predict the registration of strand pairs, an important task for the structure prediction of $\beta$-barrel membrane proteins. Our accuracy of 44% is considerably better than random (7%) and other studies. Our results imply several experiments that can help to elucidate the mechanisms of in vitro and in vivo folding of $\beta$-barrel membrane proteins. See supplementary material after the bibliography for detailed techniques. |
1704.05534 | Dennis Thomas | Dennis G. Thomas and Nathan A. Baker | GIBS: A grand-canonical Monte Carlo simulation program for simulating
ion-biomolecule interactions | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ionic environment of biomolecules strongly influences their structure,
conformational stability, and inter-molecular interactions.This paper
introduces GIBS, a grand-canonical Monte Carlo (GCMC) simulation program for
computing the thermodynamic properties of ion solutions and their distributions
around biomolecules. This software implements algorithms that automate the
excess chemical potential calculations for a given target salt concentration.
GIBS uses a cavity-bias algorithm to achieve high sampling acceptance rates for
inserting ions and solvent hard spheres in simulating dense ionic systems. In
the current version, ion-ion interactions are described using Coulomb,
hard-sphere, or Lennard-Jones (L-J) potentials; solvent-ion interactions are
described using hard-sphere, L-J and attractive square-well potentials; and,
solvent-solvent interactions are described using hard-sphere repulsions. This
paper and the software package includes examples of using GIBS to compute the
ion excess chemical potentials and mean activity coefficients of sodium
chloride as well as to compute the cylindrical radial distribution functions of
monovalent (Na$^+$, Rb$^+$), divalent (Sr$^{2+}$), and trivalent (CoHex$^{3+}$)
around fixed all-atom models of 25 base-pair nucleic acid duplexes. GIBS is
written in C++ and is freely available community use; it can be downloaded at
https://github.com/Electrostatics/GIBS.
| [
{
"created": "Tue, 18 Apr 2017 21:18:11 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Aug 2017 22:35:57 GMT",
"version": "v2"
}
] | 2017-08-07 | [
[
"Thomas",
"Dennis G.",
""
],
[
"Baker",
"Nathan A.",
""
]
] | The ionic environment of biomolecules strongly influences their structure, conformational stability, and inter-molecular interactions.This paper introduces GIBS, a grand-canonical Monte Carlo (GCMC) simulation program for computing the thermodynamic properties of ion solutions and their distributions around biomolecules. This software implements algorithms that automate the excess chemical potential calculations for a given target salt concentration. GIBS uses a cavity-bias algorithm to achieve high sampling acceptance rates for inserting ions and solvent hard spheres in simulating dense ionic systems. In the current version, ion-ion interactions are described using Coulomb, hard-sphere, or Lennard-Jones (L-J) potentials; solvent-ion interactions are described using hard-sphere, L-J and attractive square-well potentials; and, solvent-solvent interactions are described using hard-sphere repulsions. This paper and the software package includes examples of using GIBS to compute the ion excess chemical potentials and mean activity coefficients of sodium chloride as well as to compute the cylindrical radial distribution functions of monovalent (Na$^+$, Rb$^+$), divalent (Sr$^{2+}$), and trivalent (CoHex$^{3+}$) around fixed all-atom models of 25 base-pair nucleic acid duplexes. GIBS is written in C++ and is freely available community use; it can be downloaded at https://github.com/Electrostatics/GIBS. |
1804.11310 | Hao Wang | Hao Wang, Jiahui Wang, Xin Yuan Thow, Sanghoon Lee, Wendy Yen Xian
Peh, Kian Ann Ng, Tianyiyi He, Nitish V. Thakor and Chengkuo Lee | Unveiling Stimulation Secrets of Electrical Excitation of Neural Tissue
Using a Circuit Probability Theory | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new theory, named the Circuit-Probability theory, is proposed to unveil the
secret of electrical nerve stimulation, essentially explain the nonlinear and
resonant phenomena observed when neural and non-neural tissues are electrically
stimulated. For the explanation of frequency dependent response, an inductor is
involved in the neural circuit model. Furthermore, predicted response to varied
stimulation strength is calculated stochastically. Based on this theory, many
empirical models, such as strength-duration relationship and LNP model, can be
theoretically explained, derived, and amended. This theory can explain the
complex nonlinear interactions in electrical stimulation and fit in vivo
experiment data on stimulation-responses of many experiments. As such, the C-P
theory should be able to guide novel experiments and more importantly, offer an
in-depth physical understanding of the neural tissue. As a promising neural
model, we can even further explore the more accurate circuit configuration and
probability equation to better describe the electrical stimulation of neural
tissues in the future.
| [
{
"created": "Mon, 30 Apr 2018 16:34:08 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jun 2018 06:01:29 GMT",
"version": "v2"
}
] | 2018-06-15 | [
[
"Wang",
"Hao",
""
],
[
"Wang",
"Jiahui",
""
],
[
"Thow",
"Xin Yuan",
""
],
[
"Lee",
"Sanghoon",
""
],
[
"Peh",
"Wendy Yen Xian",
""
],
[
"Ng",
"Kian Ann",
""
],
[
"He",
"Tianyiyi",
""
],
[
"Thakor",... | A new theory, named the Circuit-Probability theory, is proposed to unveil the secret of electrical nerve stimulation, essentially explain the nonlinear and resonant phenomena observed when neural and non-neural tissues are electrically stimulated. For the explanation of frequency dependent response, an inductor is involved in the neural circuit model. Furthermore, predicted response to varied stimulation strength is calculated stochastically. Based on this theory, many empirical models, such as strength-duration relationship and LNP model, can be theoretically explained, derived, and amended. This theory can explain the complex nonlinear interactions in electrical stimulation and fit in vivo experiment data on stimulation-responses of many experiments. As such, the C-P theory should be able to guide novel experiments and more importantly, offer an in-depth physical understanding of the neural tissue. As a promising neural model, we can even further explore the more accurate circuit configuration and probability equation to better describe the electrical stimulation of neural tissues in the future. |
2302.05378 | Simone Saitta | Simone Saitta, Francesco Sturla, Riccardo Gorla, Omar A. Oliva,
Emiliano Votta, Francesco Bedogni, Alberto Redaelli | A CT-based deep learning system for automatic assessment of aortic root
morphology for TAVI planning | null | null | null | null | q-bio.QM eess.IV | http://creativecommons.org/licenses/by/4.0/ | Accurate planning of transcatheter aortic implantation (TAVI) is important to
minimize complications, and it requires anatomic evaluation of the aortic root
(AR), commonly done through 3D computed tomography (CT) image analysis.
Currently, there is no standard automated solution for this process. Two
convolutional neural networks (CNNs) with 3D U-Net architectures (model 1 and
model 2) were trained on 310 CT scans for AR analysis. Model 1 performed AR
segmentation and model 2 identified the aortic annulus and sinotubular junction
(STJ) contours. Results were validated against manual measurements of 178 TAVI
candidates. After training, the two models were integrated into a fully
automated pipeline for geometric analysis of the AR. The trained CNNs
effectively segmented the AR, annulus and STJ, resulting in mean Dice scores of
0.93 for the AR, and mean surface distances of 1.16 mm and 1.30 mm for the
annulus and STJ, respectively. Automatic measurements were in good agreement
with manual annotations, yielding annulus diameters that differed by 0.52
[-2.96, 4.00] mm (bias and 95% limits of agreement for manual minus algorithm).
Evaluating the area-derived diameter, bias and limits of agreement were 0.07
[-0.25, 0.39] mm. STJ and sinuses diameters computed by the automatic method
yielded differences of 0.16 [-2.03, 2.34] and 0.1 [-2.93, 3.13] mm,
respectively. The proposed tool is a fully automatic solution to quantify
morphological biomarkers for pre-TAVI planning. The method was validated
against manual annotation from clinical experts and showed to be quick and
effective in assessing AR anatomy, with potential for time and cost savings.
| [
{
"created": "Fri, 10 Feb 2023 16:58:54 GMT",
"version": "v1"
}
] | 2023-02-13 | [
[
"Saitta",
"Simone",
""
],
[
"Sturla",
"Francesco",
""
],
[
"Gorla",
"Riccardo",
""
],
[
"Oliva",
"Omar A.",
""
],
[
"Votta",
"Emiliano",
""
],
[
"Bedogni",
"Francesco",
""
],
[
"Redaelli",
"Alberto",
""
]... | Accurate planning of transcatheter aortic implantation (TAVI) is important to minimize complications, and it requires anatomic evaluation of the aortic root (AR), commonly done through 3D computed tomography (CT) image analysis. Currently, there is no standard automated solution for this process. Two convolutional neural networks (CNNs) with 3D U-Net architectures (model 1 and model 2) were trained on 310 CT scans for AR analysis. Model 1 performed AR segmentation and model 2 identified the aortic annulus and sinotubular junction (STJ) contours. Results were validated against manual measurements of 178 TAVI candidates. After training, the two models were integrated into a fully automated pipeline for geometric analysis of the AR. The trained CNNs effectively segmented the AR, annulus and STJ, resulting in mean Dice scores of 0.93 for the AR, and mean surface distances of 1.16 mm and 1.30 mm for the annulus and STJ, respectively. Automatic measurements were in good agreement with manual annotations, yielding annulus diameters that differed by 0.52 [-2.96, 4.00] mm (bias and 95% limits of agreement for manual minus algorithm). Evaluating the area-derived diameter, bias and limits of agreement were 0.07 [-0.25, 0.39] mm. STJ and sinuses diameters computed by the automatic method yielded differences of 0.16 [-2.03, 2.34] and 0.1 [-2.93, 3.13] mm, respectively. The proposed tool is a fully automatic solution to quantify morphological biomarkers for pre-TAVI planning. The method was validated against manual annotation from clinical experts and showed to be quick and effective in assessing AR anatomy, with potential for time and cost savings. |
1605.07675 | David Holcman | C. Guerrier D. Holcman | Hybrid Markov-mass action law for cell activation by rare binding events | 4 pages (submitted) | null | null | null | q-bio.SC physics.bio-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The binding of molecules, ions or proteins to specific target sites is a
generic step for cell activation. However, this step relies on rare events
where stochastic particles located in a large bulk are searching for small and
often hidden targets and thus remains difficult to study. We present here a
hybrid discrete-continuum model where the large ensemble of particles is
described by mass-action laws. The rare discrete binding events are modeled by
a Markov chain for the encounter of a finite number of small targets by few
Brownian particles, for which the arrival time is Poissonian. This model is
applied for predicting the time distribution of vesicular release at neuronal
synapses that remains elusive. This release is triggered by the binding of few
calcium ions that can originate either from the synaptic bulk or from the
transient entry through calcium channels. We report that the distribution of
release time is bimodal although triggered by a single fast action potential:
while the first peak follows a stimulation, the second corresponds to the
random arrival over much longer time of ions located in the bulk to small
binding targets. To conclude, the present multiscale stochastic chemical
reaction modeling allows studying cellular events based on integrating discrete
molecular events over various time scales.
| [
{
"created": "Tue, 24 May 2016 22:34:43 GMT",
"version": "v1"
}
] | 2016-05-26 | [
[
"Holcman",
"C. Guerrier D.",
""
]
] | The binding of molecules, ions or proteins to specific target sites is a generic step for cell activation. However, this step relies on rare events where stochastic particles located in a large bulk are searching for small and often hidden targets and thus remains difficult to study. We present here a hybrid discrete-continuum model where the large ensemble of particles is described by mass-action laws. The rare discrete binding events are modeled by a Markov chain for the encounter of a finite number of small targets by few Brownian particles, for which the arrival time is Poissonian. This model is applied for predicting the time distribution of vesicular release at neuronal synapses that remains elusive. This release is triggered by the binding of few calcium ions that can originate either from the synaptic bulk or from the transient entry through calcium channels. We report that the distribution of release time is bimodal although triggered by a single fast action potential: while the first peak follows a stimulation, the second corresponds to the random arrival over much longer time of ions located in the bulk to small binding targets. To conclude, the present multiscale stochastic chemical reaction modeling allows studying cellular events based on integrating discrete molecular events over various time scales. |
1605.04825 | Graziano Vernizzi | Graziano Vernizzi, Henri Orland, A. Zee | Improved RNA pseudoknots prediction and classification using a new
topological invariant | 9 pages, 6 figures | Phys. Rev. E 94, 042410 (2016) | 10.1103/PhysRevE.94.042410 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new topological characterization of RNA secondary structures
with pseudoknots based on two topological invariants. Starting from the classic
arc-representation of RNA secondary structures, we consider a model that
couples both I) the topological genus of the graph and II) the number of
crossing arcs of the corresponding primitive graph. We add a term proportional
to these topological invariants to the standard free energy of the RNA
molecule, thus obtaining a novel free energy parametrization which takes into
account the abundance of topologies of RNA pseudoknots observed in RNA
databases.
| [
{
"created": "Mon, 16 May 2016 16:18:45 GMT",
"version": "v1"
}
] | 2016-10-19 | [
[
"Vernizzi",
"Graziano",
""
],
[
"Orland",
"Henri",
""
],
[
"Zee",
"A.",
""
]
] | We propose a new topological characterization of RNA secondary structures with pseudoknots based on two topological invariants. Starting from the classic arc-representation of RNA secondary structures, we consider a model that couples both I) the topological genus of the graph and II) the number of crossing arcs of the corresponding primitive graph. We add a term proportional to these topological invariants to the standard free energy of the RNA molecule, thus obtaining a novel free energy parametrization which takes into account the abundance of topologies of RNA pseudoknots observed in RNA databases. |
2006.12322 | Gregor Corbin | G. Corbin, C. Engwer, A. Klar, J. Nieto, J. Soler, C. Surulescu, M.
Wenske | Modeling glioma invasion with anisotropy- and hypoxia-triggered motility
enhancement: from subcellular dynamics to macroscopic PDEs with multiple
taxis | null | null | null | null | q-bio.TO cs.NA math.NA | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We deduce a model for glioma invasion making use of DTI data and accounting
for the dynamics of brain tissue being actively degraded by tumor cells via
excessive acidity production, but also according to the local orientation of
tissue fibers. Our approach has a multiscale character: we start with a
microscopic description of single cell dynamics including biochemical and/or
biophysical effects of the tumor microenvironment, translated on the one hand
into cell stress and corresponding forces and on the other hand into receptor
binding dynamics; these lead on the mesoscopic level to kinetic equations
involving transport terms w.r.t. all kinetic variables and eventually, by
appropriate upscaling, to a macroscopic reaction-diffusion equation for glioma
density with multiple taxis, coupled to (integro-)differential equations
characterizing the evolution of acidity and macro- and mesoscopic tissue. Our
approach also allows for a switch between fast and slower moving regimes,
%diffusion- and drift-dominated regimes, according to the local tissue
anisotropy. We perform numerical simulations to investigate the behavior of
solutions w.r.t. various scenarios of tissue dynamics and the dominance of each
of the tactic terms, also suggesting how the model can be used to perform a
numerical necrosis-based tumor grading or support radiotherapy planning by dose
painting. We also provide a discussion about alternative ways of including cell
level environmental influences in such multiscale modeling approach, ultimately
leading in the macroscopic limit to (multiple) taxis.
| [
{
"created": "Mon, 22 Jun 2020 15:07:04 GMT",
"version": "v1"
}
] | 2020-06-23 | [
[
"Corbin",
"G.",
""
],
[
"Engwer",
"C.",
""
],
[
"Klar",
"A.",
""
],
[
"Nieto",
"J.",
""
],
[
"Soler",
"J.",
""
],
[
"Surulescu",
"C.",
""
],
[
"Wenske",
"M.",
""
]
] | We deduce a model for glioma invasion making use of DTI data and accounting for the dynamics of brain tissue being actively degraded by tumor cells via excessive acidity production, but also according to the local orientation of tissue fibers. Our approach has a multiscale character: we start with a microscopic description of single cell dynamics including biochemical and/or biophysical effects of the tumor microenvironment, translated on the one hand into cell stress and corresponding forces and on the other hand into receptor binding dynamics; these lead on the mesoscopic level to kinetic equations involving transport terms w.r.t. all kinetic variables and eventually, by appropriate upscaling, to a macroscopic reaction-diffusion equation for glioma density with multiple taxis, coupled to (integro-)differential equations characterizing the evolution of acidity and macro- and mesoscopic tissue. Our approach also allows for a switch between fast and slower moving regimes, %diffusion- and drift-dominated regimes, according to the local tissue anisotropy. We perform numerical simulations to investigate the behavior of solutions w.r.t. various scenarios of tissue dynamics and the dominance of each of the tactic terms, also suggesting how the model can be used to perform a numerical necrosis-based tumor grading or support radiotherapy planning by dose painting. We also provide a discussion about alternative ways of including cell level environmental influences in such multiscale modeling approach, ultimately leading in the macroscopic limit to (multiple) taxis. |
2208.06362 | Imra Aqeel | Imra Aqeel and Abdul Majid | Hybrid Approach to Identify Druglikeness Leading Compounds against
COVID-19 3CL Protease | 28 pages | null | 10.3390/ph15111333 | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | SARS-COV-2 is a positive single-strand RNA-based macromolecule that has
caused the death of more than 6.3 million people since June 2022. Moreover, by
disturbing global supply chains through lockdown, the virus has indirectly
caused devastating damage to the global economy. It is vital to design and
develop drugs for this virus and its various variants. In this paper, we
developed an in-silico study-based hybrid framework to repurpose existing
therapeutic agents in finding drug-like bioactive molecules that would cure
Covid-19. We employed the Lipinski rules on the retrieved molecules from the
ChEMBL database and found 133 drug-likeness bioactive molecules against SARS
coronavirus 3CL Protease. Based on standard IC50, the dataset was divided into
three classes active, inactive, and intermediate. Our comparative analysis
demonstrated that the proposed Extra Tree Regressor (ETR) based QSAR model has
improved prediction results related to the bioactivity of chemical compounds as
compared to Gradient Boosting, XGBoost, Support Vector, Decision Tree, and
Random Forest based regressor models. ADMET analysis is carried out to identify
thirteen bioactive molecules with ChEMBL IDs 187460, 190743, 222234, 222628,
222735, 222769, 222840, 222893, 225515, 358279, 363535, 365134 and 426898.
These molecules are highly suitable drug candidates for SARS-COV-2 3CL
Protease. In the next step, the efficacy of bioactive molecules is computed in
terms of binding affinity using molecular docking and then shortlisted six
bioactive molecules with ChEMBL IDs 187460, 222769, 225515, 358279, 363535, and
365134. These molecules can be suitable drug candidates for SARS-COV-2. It is
anticipated that the pharmacologist/drug manufacturer would further investigate
these six molecules to find suitable drug candidates for SARS-COV-2. They can
adopt these promising compounds for their downstream drug development stages.
| [
{
"created": "Wed, 3 Aug 2022 22:17:22 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Aug 2022 20:56:35 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Aug 2022 09:25:25 GMT",
"version": "v3"
}
] | 2022-11-01 | [
[
"Aqeel",
"Imra",
""
],
[
"Majid",
"Abdul",
""
]
] | SARS-COV-2 is a positive single-strand RNA-based macromolecule that has caused the death of more than 6.3 million people since June 2022. Moreover, by disturbing global supply chains through lockdown, the virus has indirectly caused devastating damage to the global economy. It is vital to design and develop drugs for this virus and its various variants. In this paper, we developed an in-silico study-based hybrid framework to repurpose existing therapeutic agents in finding drug-like bioactive molecules that would cure Covid-19. We employed the Lipinski rules on the retrieved molecules from the ChEMBL database and found 133 drug-likeness bioactive molecules against SARS coronavirus 3CL Protease. Based on standard IC50, the dataset was divided into three classes active, inactive, and intermediate. Our comparative analysis demonstrated that the proposed Extra Tree Regressor (ETR) based QSAR model has improved prediction results related to the bioactivity of chemical compounds as compared to Gradient Boosting, XGBoost, Support Vector, Decision Tree, and Random Forest based regressor models. ADMET analysis is carried out to identify thirteen bioactive molecules with ChEMBL IDs 187460, 190743, 222234, 222628, 222735, 222769, 222840, 222893, 225515, 358279, 363535, 365134 and 426898. These molecules are highly suitable drug candidates for SARS-COV-2 3CL Protease. In the next step, the efficacy of bioactive molecules is computed in terms of binding affinity using molecular docking and then shortlisted six bioactive molecules with ChEMBL IDs 187460, 222769, 225515, 358279, 363535, and 365134. These molecules can be suitable drug candidates for SARS-COV-2. It is anticipated that the pharmacologist/drug manufacturer would further investigate these six molecules to find suitable drug candidates for SARS-COV-2. They can adopt these promising compounds for their downstream drug development stages. |
2306.14902 | Kong Deqian | Deqian Kong, Bo Pang, Tian Han and Ying Nian Wu | Molecule Design by Latent Space Energy-Based Modeling and Gradual
Distribution Shifting | null | 39th Conference on Uncertainty in Artificial Intelligence 2023 | null | null | q-bio.BM cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Generation of molecules with desired chemical and biological properties such
as high drug-likeness, high binding affinity to target proteins, is critical
for drug discovery. In this paper, we propose a probabilistic generative model
to capture the joint distribution of molecules and their properties. Our model
assumes an energy-based model (EBM) in the latent space. Conditional on the
latent vector, the molecule and its properties are modeled by a molecule
generation model and a property regression model respectively. To search for
molecules with desired properties, we propose a sampling with gradual
distribution shifting (SGDS) algorithm, so that after learning the model
initially on the training data of existing molecules and their properties, the
proposed algorithm gradually shifts the model distribution towards the region
supported by molecules with desired values of properties. Our experiments show
that our method achieves very strong performances on various molecule design
tasks.
| [
{
"created": "Fri, 9 Jun 2023 03:04:21 GMT",
"version": "v1"
}
] | 2023-07-12 | [
[
"Kong",
"Deqian",
""
],
[
"Pang",
"Bo",
""
],
[
"Han",
"Tian",
""
],
[
"Wu",
"Ying Nian",
""
]
] | Generation of molecules with desired chemical and biological properties such as high drug-likeness, high binding affinity to target proteins, is critical for drug discovery. In this paper, we propose a probabilistic generative model to capture the joint distribution of molecules and their properties. Our model assumes an energy-based model (EBM) in the latent space. Conditional on the latent vector, the molecule and its properties are modeled by a molecule generation model and a property regression model respectively. To search for molecules with desired properties, we propose a sampling with gradual distribution shifting (SGDS) algorithm, so that after learning the model initially on the training data of existing molecules and their properties, the proposed algorithm gradually shifts the model distribution towards the region supported by molecules with desired values of properties. Our experiments show that our method achieves very strong performances on various molecule design tasks. |
0711.2723 | Ganesh Bagler Dr | Ganesh Bagler and Somdatta Sinha | Assortative mixing in Protein Contact Networks and protein folding
kinetics | Published in Bioinformatics | Bioinformatics, vol. 23, no. 14, 1760--1767 (2007) | 10.1093/bioinformatics/btm257 | null | q-bio.MN q-bio.BM | null | Starting from linear chains of amino acids, the spontaneous folding of
proteins into their elaborate three-dimensional structures is one of the
remarkable examples of biological self-organization. We investigated native
state structures of 30 single-domain, two-state proteins, from complex networks
perspective, to understand the role of topological parameters in proteins'
folding kinetics, at two length scales-- as ``Protein Contact Networks (PCNs)''
and their corresponding ``Long-range Interaction Networks (LINs)'' constructed
by ignoring the short-range interactions. Our results show that, both PCNs and
LINs exhibit the exceptional topological property of ``assortative mixing''
that is absent in all other biological and technological networks studied so
far. We show that the degree distribution of these contact networks is partly
responsible for the observed assortativity. The coefficient of assortativity
also shows a positive correlation with the rate of protein folding at both
short and long contact scale, whereas, the clustering coefficients of only the
LINs exhibit a negative correlation. The results indicate that the general
topological parameters of these naturally-evolved protein networks can
effectively represent the structural and functional properties required for
fast information transfer among the residues facilitating biochemical/kinetic
functions, such as, allostery, stability, and the rate of folding.
| [
{
"created": "Sat, 17 Nov 2007 08:45:23 GMT",
"version": "v1"
}
] | 2007-11-20 | [
[
"Bagler",
"Ganesh",
""
],
[
"Sinha",
"Somdatta",
""
]
] | Starting from linear chains of amino acids, the spontaneous folding of proteins into their elaborate three-dimensional structures is one of the remarkable examples of biological self-organization. We investigated native state structures of 30 single-domain, two-state proteins, from complex networks perspective, to understand the role of topological parameters in proteins' folding kinetics, at two length scales-- as ``Protein Contact Networks (PCNs)'' and their corresponding ``Long-range Interaction Networks (LINs)'' constructed by ignoring the short-range interactions. Our results show that, both PCNs and LINs exhibit the exceptional topological property of ``assortative mixing'' that is absent in all other biological and technological networks studied so far. We show that the degree distribution of these contact networks is partly responsible for the observed assortativity. The coefficient of assortativity also shows a positive correlation with the rate of protein folding at both short and long contact scale, whereas, the clustering coefficients of only the LINs exhibit a negative correlation. The results indicate that the general topological parameters of these naturally-evolved protein networks can effectively represent the structural and functional properties required for fast information transfer among the residues facilitating biochemical/kinetic functions, such as, allostery, stability, and the rate of folding. |
q-bio/0609001 | Philippe Veber | Nicola Yanev (IRISA / INRIA Rennes), Rumen Andonov (IRISA / INRIA
Rennes), Philippe Veber (IRISA / INRIA Rennes), Stefan Balev (LIH EA3219) | Lagrangian Approaches for a class of Matching Problems in Computational
Biology | null | null | null | null | q-bio.QM | null | This paper presents efficient algorithms for solving the problem of aligning
a protein structure template to a query amino-acid sequence, known as protein
threading problem. We consider the problem as a special case of graph matching
problem. We give formal graph and integer programming models of the problem.
After studying the properties of these models, we propose two kinds of
Lagrangian relaxation for solving them. We present experimental results on real
life instances showing the efficiency of our approaches.
| [
{
"created": "Fri, 1 Sep 2006 13:41:43 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Yanev",
"Nicola",
"",
"IRISA / INRIA Rennes"
],
[
"Andonov",
"Rumen",
"",
"IRISA / INRIA\n Rennes"
],
[
"Veber",
"Philippe",
"",
"IRISA / INRIA Rennes"
],
[
"Balev",
"Stefan",
"",
"LIH EA3219"
]
] | This paper presents efficient algorithms for solving the problem of aligning a protein structure template to a query amino-acid sequence, known as protein threading problem. We consider the problem as a special case of graph matching problem. We give formal graph and integer programming models of the problem. After studying the properties of these models, we propose two kinds of Lagrangian relaxation for solving them. We present experimental results on real life instances showing the efficiency of our approaches. |
2210.16414 | Navid Shervani-Tabar | Navid Shervani-Tabar and Robert Rosenbaum | Meta-Learning Biologically Plausible Plasticity Rules with Random
Feedback Pathways | null | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backpropagation is widely used to train artificial neural networks, but its
relationship to synaptic plasticity in the brain is unknown. Some biological
models of backpropagation rely on feedback projections that are symmetric with
feedforward connections, but experiments do not corroborate the existence of
such symmetric backward connectivity. Random feedback alignment offers an
alternative model in which errors are propagated backward through fixed, random
backward connections. This approach successfully trains shallow models, but
learns slowly and does not perform well with deeper models or online learning.
In this study, we develop a meta-learning approach to discover interpretable,
biologically plausible plasticity rules that improve online learning
performance with fixed random feedback connections. The resulting plasticity
rules show improved online training of deep models in the low data regime. Our
results highlight the potential of meta-learning to discover effective,
interpretable learning rules satisfying biological constraints.
| [
{
"created": "Fri, 28 Oct 2022 21:40:56 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Nov 2022 23:23:15 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Nov 2022 13:29:28 GMT",
"version": "v3"
},
{
"created": "Thu, 1 Dec 2022 14:31:10 GMT",
"version": "v4"
},
{
"crea... | 2023-02-08 | [
[
"Shervani-Tabar",
"Navid",
""
],
[
"Rosenbaum",
"Robert",
""
]
] | Backpropagation is widely used to train artificial neural networks, but its relationship to synaptic plasticity in the brain is unknown. Some biological models of backpropagation rely on feedback projections that are symmetric with feedforward connections, but experiments do not corroborate the existence of such symmetric backward connectivity. Random feedback alignment offers an alternative model in which errors are propagated backward through fixed, random backward connections. This approach successfully trains shallow models, but learns slowly and does not perform well with deeper models or online learning. In this study, we develop a meta-learning approach to discover interpretable, biologically plausible plasticity rules that improve online learning performance with fixed random feedback connections. The resulting plasticity rules show improved online training of deep models in the low data regime. Our results highlight the potential of meta-learning to discover effective, interpretable learning rules satisfying biological constraints. |
q-bio/0406047 | I. C. Baianu Dr. | I.C. Baianu, P.R. Lozano, V.I. Prisecaru and H.C. Lin | Applications of Novel Techniques to Health Foods, Medical and
Agricultural Biotechnology | 39 pages and 9 figures | null | null | HMAT04 | q-bio.OT | null | Selected applications of novel techniques in Agricultural Biotechnology,
Health Food formulations and Medical Biotechnology are being reviewed with the
aim of unraveling future developments and policy changes that are likely to
open new niches for Biotechnology and prevent the shrinking or closing the
existing ones. Amongst the selected novel techniques with applications to both
Agricultural and Medical Biotechnology are: immobilized bacterial cells and
enzymes, microencapsulation and liposome production, genetic manipulation of
microorganisms, development of novel vaccines from plants, epigenomics of
mammalian cells and organisms, as well as biocomputational tools for molecular
modeling related to disease and Bioinformatics. Both fundamental and applied
aspects of the emerging new techniques are being discussed in relation to their
anticipated impact on future biotechnology applications together with policy
changes that are needed for continued success in both Agricultural and Medical
Biotechnology. Several novel techniques are illustrated in an attempt to convey
the most representative and powerful tools that are currently being developed
for both immediate and long term applications in Agriculture, Health Food
formulation and production, pharmaceuticals and Medicine. The research aspects
are naturally emphasized in our review as they are key to further developments
in Medical and Agricultural Biotechnology.
| [
{
"created": "Thu, 24 Jun 2004 03:40:08 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Baianu",
"I. C.",
""
],
[
"Lozano",
"P. R.",
""
],
[
"Prisecaru",
"V. I.",
""
],
[
"Lin",
"H. C.",
""
]
] | Selected applications of novel techniques in Agricultural Biotechnology, Health Food formulations and Medical Biotechnology are being reviewed with the aim of unraveling future developments and policy changes that are likely to open new niches for Biotechnology and prevent the shrinking or closing the existing ones. Amongst the selected novel techniques with applications to both Agricultural and Medical Biotechnology are: immobilized bacterial cells and enzymes, microencapsulation and liposome production, genetic manipulation of microorganisms, development of novel vaccines from plants, epigenomics of mammalian cells and organisms, as well as biocomputational tools for molecular modeling related to disease and Bioinformatics. Both fundamental and applied aspects of the emerging new techniques are being discussed in relation to their anticipated impact on future biotechnology applications together with policy changes that are needed for continued success in both Agricultural and Medical Biotechnology. Several novel techniques are illustrated in an attempt to convey the most representative and powerful tools that are currently being developed for both immediate and long term applications in Agriculture, Health Food formulation and production, pharmaceuticals and Medicine. The research aspects are naturally emphasized in our review as they are key to further developments in Medical and Agricultural Biotechnology. |
1603.00958 | Elecia Johnston | Elecia B Johnston, Sandip D Kamath, Andreas L Lopata, Patrick M
Schaeffer | Tus-Ter-lock immuno-PCR assays for the sensitive detection of
tropomyosin-specific IgE antibodies | Author's final version, 5 figures | Bioanalysis, 2014, Vol. 6, No. 4, Pages 465-476 | 10.4155/bio.13.315 | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The increasing prevalence of food allergies requires development
of specific and sensitive tests capable of identifying the allergen responsible
for the disease. The development of serologic tests that can detect specific
IgE antibodies to allergenic proteins would therefore be highly received.
Results: Here we present two new quantitative immuno-PCR assays for the
sensitive detection of antibodies specific to the shrimp allergen tropomyosin.
Both assays are based on the self-assembling Tus-Ter-lock protein-DNA
conjugation system. Significantly elevated levels of tropomyosin-specific IgE
were detected in sera from patients allergic to shrimp. Conclusions: This is
the first time an allergenic protein has been fused with Tus to enable specific
IgE antibody detection in human sera by quantitative immuno-PCR.
| [
{
"created": "Thu, 3 Mar 2016 03:22:22 GMT",
"version": "v1"
}
] | 2016-03-04 | [
[
"Johnston",
"Elecia B",
""
],
[
"Kamath",
"Sandip D",
""
],
[
"Lopata",
"Andreas L",
""
],
[
"Schaeffer",
"Patrick M",
""
]
] | Background: The increasing prevalence of food allergies requires development of specific and sensitive tests capable of identifying the allergen responsible for the disease. The development of serologic tests that can detect specific IgE antibodies to allergenic proteins would therefore be highly received. Results: Here we present two new quantitative immuno-PCR assays for the sensitive detection of antibodies specific to the shrimp allergen tropomyosin. Both assays are based on the self-assembling Tus-Ter-lock protein-DNA conjugation system. Significantly elevated levels of tropomyosin-specific IgE were detected in sera from patients allergic to shrimp. Conclusions: This is the first time an allergenic protein has been fused with Tus to enable specific IgE antibody detection in human sera by quantitative immuno-PCR. |
2001.03119 | Simona Arrighi | Simona Arrighi, Adriana Moroni, Laura Tassoni, Francesco Boschin,
Federica Badino, Eugenio Bortolini, Paolo Boscato, Jacopo Crezzini, Carla
Figus, Manuela Forte, Federico Lugli, Giulia Marciani, Gregorio Oxilia, Fabio
Negrino, Julien Riel-Salvatore, Matteo Romandini, Enza Elena Spinapolice,
Marco Peresani, Annamaria Ronchitelli, Stefano Benazzi | Bone tools, ornaments and other unusual objects during the Middle to
Upper Palaeolithic transition in Italy | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The arrival of Modern Humans (MHs) in Europe between 50 ka and 36 ka
coincides with significant changes in human behaviour, regarding the production
of tools, the exploitation of resources and the systematic use of ornaments and
colouring substances. The emergence of the so-called modern behaviours is
usually associated with MHs, although in these last decades findings relating
to symbolic thinking of pre-Sapiens groups have been claimed. In this paper we
present a synthesis of the Italian evidence concerning bone manufacturing and
the use of ornaments and pigments in the time span encompassing the demise of
Neandertals and their replacement by MHs. Current data show that Mousterian
bone tools are mostly obtained from bone fragments used as is. Conversely an
organized production of fine shaped bone tools is characteristic of the
Uluzzian and the Protoaurignacian, when the complexity inherent in the
manufacturing processes suggests that bone artefacts are not to be considered
as expedient resources. Some traces of symbolic activities are associated to
Neandertals in Northern Italy. Ornaments (mostly tusk shells) and pigments used
for decorative purposes are well recorded during the Uluzzian. Their features
and distribution witness to an intriguing cultural homogeneity within this
technocomplex. The Protoaurignacian is characterized by a wider archaeological
evidence, consisting of personal ornaments (mostly pierced gastropods),
pigments and artistic items.
| [
{
"created": "Wed, 4 Dec 2019 13:29:30 GMT",
"version": "v1"
}
] | 2020-01-10 | [
[
"Arrighi",
"Simona",
""
],
[
"Moroni",
"Adriana",
""
],
[
"Tassoni",
"Laura",
""
],
[
"Boschin",
"Francesco",
""
],
[
"Badino",
"Federica",
""
],
[
"Bortolini",
"Eugenio",
""
],
[
"Boscato",
"Paolo",
""
]... | The arrival of Modern Humans (MHs) in Europe between 50 ka and 36 ka coincides with significant changes in human behaviour, regarding the production of tools, the exploitation of resources and the systematic use of ornaments and colouring substances. The emergence of the so-called modern behaviours is usually associated with MHs, although in these last decades findings relating to symbolic thinking of pre-Sapiens groups have been claimed. In this paper we present a synthesis of the Italian evidence concerning bone manufacturing and the use of ornaments and pigments in the time span encompassing the demise of Neandertals and their replacement by MHs. Current data show that Mousterian bone tools are mostly obtained from bone fragments used as is. Conversely an organized production of fine shaped bone tools is characteristic of the Uluzzian and the Protoaurignacian, when the complexity inherent in the manufacturing processes suggests that bone artefacts are not to be considered as expedient resources. Some traces of symbolic activities are associated to Neandertals in Northern Italy. Ornaments (mostly tusk shells) and pigments used for decorative purposes are well recorded during the Uluzzian. Their features and distribution witness to an intriguing cultural homogeneity within this technocomplex. The Protoaurignacian is characterized by a wider archaeological evidence, consisting of personal ornaments (mostly pierced gastropods), pigments and artistic items. |
2002.05114 | Nam Lyong Kang | Nam. Lyong Kang | New method for evaluating fitness using the waist-to-height ratio among
Korean adults | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objectives: This paper introduces a new method for evaluating fitness and
determining effective exercises for reducing abdominal obesity in Korean adults
using the new kind of waist-to-height ratio (WHT2R). Materials and Methods: The
body mass index (BMI), body shape index (ABSI), and two other waist-to-height
ratios (WHT.5R, WHTR) were considered as possible contenders for the WHT2R. The
correlation coefficients were calculated by correlation analyses between the
indices and four fitness tests for comparison. The LMV (lump mean value) and
FSPW (fitness sensitivity percentage to WHT2R) were introduced to find the
association between fitness and abdominal obesity using a linear regression
method and to use as an indicator for the effective control of abdominal
obesity. Results: The WHT2R is more suitable for assessing fitness than the
other indices and can be controlled effectively by decreasing the 10-m shuttle
run score for both males and females. Conclusions: The WHT2R can be used as a
possible contender for evaluating fitness and is an effective indicator for the
reduction of abdominal obesity. The LMV and FSPW can be used to establish
personal exercise aims.
| [
{
"created": "Tue, 11 Feb 2020 01:19:17 GMT",
"version": "v1"
}
] | 2020-02-13 | [
[
"Kang",
"Nam. Lyong",
""
]
] | Objectives: This paper introduces a new method for evaluating fitness and determining effective exercises for reducing abdominal obesity in Korean adults using the new kind of waist-to-height ratio (WHT2R). Materials and Methods: The body mass index (BMI), body shape index (ABSI), and two other waist-to-height ratios (WHT.5R, WHTR) were considered as possible contenders for the WHT2R. The correlation coefficients were calculated by correlation analyses between the indices and four fitness tests for comparison. The LMV (lump mean value) and FSPW (fitness sensitivity percentage to WHT2R) were introduced to find the association between fitness and abdominal obesity using a linear regression method and to use as an indicator for the effective control of abdominal obesity. Results: The WHT2R is more suitable for assessing fitness than the other indices and can be controlled effectively by decreasing the 10-m shuttle run score for both males and females. Conclusions: The WHT2R can be used as a possible contender for evaluating fitness and is an effective indicator for the reduction of abdominal obesity. The LMV and FSPW can be used to establish personal exercise aims. |
0811.3124 | David Hochberg | Josep M. Ribo and David Hochberg | Stability of racemic and chiral steady states in open and closed
chemical systems | 25 pages, 1 figure. To appear in Physics Letters A (2008) | null | 10.1016/j.physleta.2008.10.079 | null | q-bio.PE q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stability properties of models of spontaneous mirror symmetry breaking in
chemistry are characterized algebraically. The models considered here all
derive either from the Frank model or from autocatalysis with limited
enantioselectivity. Emphasis is given to identifying the critical parameter
controlling the chiral symmetry breaking transition from racemic to chiral
steady-state solutions. This parameter is identified in each case, and the
constraints on the chemical rate constants determined from dynamic stability
are derived.
| [
{
"created": "Wed, 19 Nov 2008 14:21:31 GMT",
"version": "v1"
}
] | 2008-11-20 | [
[
"Ribo",
"Josep M.",
""
],
[
"Hochberg",
"David",
""
]
] | The stability properties of models of spontaneous mirror symmetry breaking in chemistry are characterized algebraically. The models considered here all derive either from the Frank model or from autocatalysis with limited enantioselectivity. Emphasis is given to identifying the critical parameter controlling the chiral symmetry breaking transition from racemic to chiral steady-state solutions. This parameter is identified in each case, and the constraints on the chemical rate constants determined from dynamic stability are derived. |
2304.06353 | Alexandra Blenkinsop | Alexandra Blenkinsop, Lysandros Sofocleous, Francesco di Lauro,
Evangelia Georgia Kostaki, Ard van Sighem, Daniela Bezemer, Thijs van de
Laar, Peter Reiss, Godelieve de Bree, Nikos Pantazis, and Oliver Ratmann | Bayesian mixture models for phylogenetic source attribution from
consensus sequences and time since infection estimates | null | null | null | null | q-bio.PE stat.ME | http://creativecommons.org/licenses/by/4.0/ | In stopping the spread of infectious diseases, pathogen genomic data can be
used to reconstruct transmission events and characterize population-level
sources of infection. Most approaches for identifying transmission pairs do not
account for the time that passed since divergence of pathogen variants in
individuals, which is problematic in viruses with high within-host evolutionary
rates. This is prompting us to consider possible transmission pairs in terms of
phylogenetic data and additional estimates of time since infection derived from
clinical biomarkers. We develop Bayesian mixture models with an evolutionary
clock as signal component and additional mixed effects or covariate random
functions describing the mixing weights to classify potential pairs into likely
and unlikely transmission pairs. We demonstrate that although sources cannot be
identified at the individual level with certainty, even with the additional
data on time elapsed, inferences into the population-level sources of
transmission are possible, and more accurate than using only phylogenetic data
without time since infection estimates. We apply the approach to estimate
age-specific sources of HIV infection in Amsterdam MSM transmission networks
between 2010-2021. This study demonstrates that infection time estimates
provide informative data to characterize transmission sources, and shows how
phylogenetic source attribution can then be done with multi-dimensional mixture
models.
| [
{
"created": "Thu, 13 Apr 2023 09:19:35 GMT",
"version": "v1"
}
] | 2023-04-14 | [
[
"Blenkinsop",
"Alexandra",
""
],
[
"Sofocleous",
"Lysandros",
""
],
[
"di Lauro",
"Francesco",
""
],
[
"Kostaki",
"Evangelia Georgia",
""
],
[
"van Sighem",
"Ard",
""
],
[
"Bezemer",
"Daniela",
""
],
[
"van de Laar... | In stopping the spread of infectious diseases, pathogen genomic data can be used to reconstruct transmission events and characterize population-level sources of infection. Most approaches for identifying transmission pairs do not account for the time that passed since divergence of pathogen variants in individuals, which is problematic in viruses with high within-host evolutionary rates. This is prompting us to consider possible transmission pairs in terms of phylogenetic data and additional estimates of time since infection derived from clinical biomarkers. We develop Bayesian mixture models with an evolutionary clock as signal component and additional mixed effects or covariate random functions describing the mixing weights to classify potential pairs into likely and unlikely transmission pairs. We demonstrate that although sources cannot be identified at the individual level with certainty, even with the additional data on time elapsed, inferences into the population-level sources of transmission are possible, and more accurate than using only phylogenetic data without time since infection estimates. We apply the approach to estimate age-specific sources of HIV infection in Amsterdam MSM transmission networks between 2010-2021. This study demonstrates that infection time estimates provide informative data to characterize transmission sources, and shows how phylogenetic source attribution can then be done with multi-dimensional mixture models. |
1507.04487 | Henrik Ronellenfitsch | Henrik Ronellenfitsch, Jana Lasser, Douglas C. Daly, Eleni Katifori | Topological phenotypes constitute a new dimension in the phenotypic
space of leaf venation networks | null | PLoS Comput Biol. 2015 Dec 23;11(12):e1004680 | 10.1371/journal.pcbi.1004680 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The leaves of angiosperms contain highly complex venation networks consisting
of recursively nested, hierarchically organized loops. We describe a new
phenotypic trait of reticulate vascular networks based on the topology of the
nested loops. This phenotypic trait encodes information orthogonal to widely
used geometric phenotypic traits, and thus constitutes a new dimension in the
leaf venation phenotypic space. We apply our metric to a database of 186 leaves
and leaflets representing 137 species, predominantly from the Burseraceae
family, revealing diverse topological network traits even within this single
family. We show that topological information significantly improves
identification of leaves from fragments by calculating a "leaf venation
fingerprint" from topology and geometry. Further, we present a phenomenological
model suggesting that the topological traits can be explained by noise effects
unique to specimen during development of each leaf which leave their imprint on
the final network. This work opens the path to new quantitative identification
techniques for leaves which go beyond simple geometric traits such as vein
density and is directly applicable to other planar or sub-planar networks such
as blood vessels in the brain.
| [
{
"created": "Thu, 16 Jul 2015 08:49:34 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Jul 2015 11:17:16 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Dec 2015 18:58:47 GMT",
"version": "v3"
}
] | 2016-06-23 | [
[
"Ronellenfitsch",
"Henrik",
""
],
[
"Lasser",
"Jana",
""
],
[
"Daly",
"Douglas C.",
""
],
[
"Katifori",
"Eleni",
""
]
] | The leaves of angiosperms contain highly complex venation networks consisting of recursively nested, hierarchically organized loops. We describe a new phenotypic trait of reticulate vascular networks based on the topology of the nested loops. This phenotypic trait encodes information orthogonal to widely used geometric phenotypic traits, and thus constitutes a new dimension in the leaf venation phenotypic space. We apply our metric to a database of 186 leaves and leaflets representing 137 species, predominantly from the Burseraceae family, revealing diverse topological network traits even within this single family. We show that topological information significantly improves identification of leaves from fragments by calculating a "leaf venation fingerprint" from topology and geometry. Further, we present a phenomenological model suggesting that the topological traits can be explained by noise effects unique to specimen during development of each leaf which leave their imprint on the final network. This work opens the path to new quantitative identification techniques for leaves which go beyond simple geometric traits such as vein density and is directly applicable to other planar or sub-planar networks such as blood vessels in the brain. |
2010.11124 | Paul Smolen | Paul Smolen, Douglas A Baxter, John H Byrne | Modeling Suggests Combined-Drug Treatments for Disorders Impairing
Synaptic Plasticity via Shared Signaling Pathways | Accepted to Journal of Computational Neuroscience | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic disorders such as Rubinstein-Taybi syndrome (RTS) and Coffin-Lowry
syndrome (CLS) cause lifelong cognitive disability, including deficits in
learning and memory. Can pharmacological therapies be suggested to improve
learning and memory in these disorders? To address this question, we simulated
drug effects within a computational model describing induction of late
long-term potentiation (L-LTP). Biochemical pathways impaired in these and
other disorders converge on a common target, histone acetylation by
acetyltransferases such as CREB binding protein (CBP), which facilitates gene
induction necessary for L-LTP. We focused on four drug classes: tropomyosin
receptor kinase B (TrkB) agonists, cAMP phosphodiesterase inhibitors, histone
deacetylase inhibitors, and ampakines. Simulations suggested each drug type
alone may rescue deficits in L-LTP. A potential disadvantage, however, was the
necessity of simulating strong drug effects (high doses), which could produce
adverse side effects. Thus, we investigated the effects of six drug pairs among
the four classes described above. These combination treatments normalized
impaired L-LTP with substantially smaller drug doses. In addition three of
these combinations, a TrkB agonist paired with an ampakine and a cAMP
phosphodiesterase inhibitor paired with a TrkB agonist or an ampakine,
exhibited strong synergism in L-LTP rescue. Therefore, we suggest these drug
combinations are promising candidates for further empirical studies in animal
models of genetic disorders that impair acetylation, L-LTP, and learning.
| [
{
"created": "Wed, 21 Oct 2020 16:33:31 GMT",
"version": "v1"
}
] | 2020-10-22 | [
[
"Smolen",
"Paul",
""
],
[
"Baxter",
"Douglas A",
""
],
[
"Byrne",
"John H",
""
]
] | Genetic disorders such as Rubinstein-Taybi syndrome (RTS) and Coffin-Lowry syndrome (CLS) cause lifelong cognitive disability, including deficits in learning and memory. Can pharmacological therapies be suggested to improve learning and memory in these disorders? To address this question, we simulated drug effects within a computational model describing induction of late long-term potentiation (L-LTP). Biochemical pathways impaired in these and other disorders converge on a common target, histone acetylation by acetyltransferases such as CREB binding protein (CBP), which facilitates gene induction necessary for L-LTP. We focused on four drug classes: tropomyosin receptor kinase B (TrkB) agonists, cAMP phosphodiesterase inhibitors, histone deacetylase inhibitors, and ampakines. Simulations suggested each drug type alone may rescue deficits in L-LTP. A potential disadvantage, however, was the necessity of simulating strong drug effects (high doses), which could produce adverse side effects. Thus, we investigated the effects of six drug pairs among the four classes described above. These combination treatments normalized impaired L-LTP with substantially smaller drug doses. In addition three of these combinations, a TrkB agonist paired with an ampakine and a cAMP phosphodiesterase inhibitor paired with a TrkB agonist or an ampakine, exhibited strong synergism in L-LTP rescue. Therefore, we suggest these drug combinations are promising candidates for further empirical studies in animal models of genetic disorders that impair acetylation, L-LTP, and learning. |
2210.14522 | Zhihao Cao | Zhihao Cao, Hongchun Qu | Simulation-based Modelling of Growth and Pollination of Greenhouse
Strawberry | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The cultivated strawberry Fragaria ananassa Duch. is widely planted in
greenhouses in China. Its production heavily depends on pollination services.
Compared with artificial pollination, bee pollination can significantly improve
fruit quality and save considerable labor requirement. Multiple factors such as
bee foraging behavior, planting pattern and the spatial complexity of the
greenhouse environment interacting over time and space are major obstacles to
understanding of bee pollination dynamics. We propose a spatially-explicit
agent-based simulation model which allows users to explore how various factors
including bee foraging behavior and strawberry phenology conditions as well as
the greenhouse environment influence pollination efficiency and fruit quality.
Simulation experiments allowed us to compare pollination efficiencies in
different conditions. Especially, the cause of bee pollination advantage,
optimal bee density and bee hive location were discussed based on sensitivity
analysis. In addition, simulation results provide some insights for strawberry
planting in a greenhouse. The firmly validated open-source model is a useful
tool for hypothesis testing and theory development for strawberry pollination
research.
| [
{
"created": "Wed, 26 Oct 2022 07:21:56 GMT",
"version": "v1"
}
] | 2022-10-27 | [
[
"Cao",
"Zhihao",
""
],
[
"Qu",
"Hongchun",
""
]
] | The cultivated strawberry Fragaria ananassa Duch. is widely planted in greenhouses in China. Its production heavily depends on pollination services. Compared with artificial pollination, bee pollination can significantly improve fruit quality and save considerable labor requirement. Multiple factors such as bee foraging behavior, planting pattern and the spatial complexity of the greenhouse environment interacting over time and space are major obstacles to understanding of bee pollination dynamics. We propose a spatially-explicit agent-based simulation model which allows users to explore how various factors including bee foraging behavior and strawberry phenology conditions as well as the greenhouse environment influence pollination efficiency and fruit quality. Simulation experiments allowed us to compare pollination efficiencies in different conditions. Especially, the cause of bee pollination advantage, optimal bee density and bee hive location were discussed based on sensitivity analysis. In addition, simulation results provide some insights for strawberry planting in a greenhouse. The firmly validated open-source model is a useful tool for hypothesis testing and theory development for strawberry pollination research. |
0904.1215 | Jerome Vanclay | Jerome K Vanclay | Tree diameter, height and stocking in even-aged forests | 15 pages, 8 figures. Annals of Forest Science, in press | Annals of Forest Science 66 (2009) 702 | 10.1051/forest/2009063 | null | q-bio.QM q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Empirical observations suggest that in pure even-aged forests, the mean
diameter of forest trees (D, diameter at breast height, 1.3 m above ground)
tends to remain a constant proportion of stand height (H, average height of the
largest trees in a stand) divided by the logarithm of stand density (N, number
of trees per hectare): D = beta (H-1.3)/Ln(N). Thinning causes a relatively
small and temporary change in the slope beta, the magnitude and duration of
which depends on the nature of the thinning. This relationship may provide a
robust predictor of growth in situations where scarce data and resources
preclude more sophisticated modelling approaches.
| [
{
"created": "Tue, 7 Apr 2009 20:28:55 GMT",
"version": "v1"
}
] | 2009-08-23 | [
[
"Vanclay",
"Jerome K",
""
]
] | Empirical observations suggest that in pure even-aged forests, the mean diameter of forest trees (D, diameter at breast height, 1.3 m above ground) tends to remain a constant proportion of stand height (H, average height of the largest trees in a stand) divided by the logarithm of stand density (N, number of trees per hectare): D = beta (H-1.3)/Ln(N). Thinning causes a relatively small and temporary change in the slope beta, the magnitude and duration of which depends on the nature of the thinning. This relationship may provide a robust predictor of growth in situations where scarce data and resources preclude more sophisticated modelling approaches. |
2306.06138 | Yule Wang | Yule Wang, Zijing Wu, Chengrui Li, Anqi Wu | Extraction and Recovery of Spatio-Temporal Structure in Latent Dynamics
Alignment with Diffusion Models | null | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of behavior-related brain computation, it is necessary to align
raw neural signals against the drastic domain shift among them. A foundational
framework within neuroscience research posits that trial-based neural
population activities rely on low-dimensional latent dynamics, thus focusing on
the latter greatly facilitates the alignment procedure. Despite this field's
progress, existing methods ignore the intrinsic spatio-temporal structure
during the alignment phase. Hence, their solutions usually lead to poor quality
in latent dynamics structures and overall performance. To tackle this problem,
we propose an alignment method ERDiff, which leverages the expressivity of the
diffusion model to preserve the spatio-temporal structure of latent dynamics.
Specifically, the latent dynamics structures of the source domain are first
extracted by a diffusion model. Then, under the guidance of this diffusion
model, such structures are well-recovered through a maximum likelihood
alignment procedure in the target domain. We first demonstrate the
effectiveness of our proposed method on a synthetic dataset. Then, when applied
to neural recordings from the non-human primate motor cortex, under both
cross-day and inter-subject settings, our method consistently manifests its
capability of preserving the spatiotemporal structure of latent dynamics and
outperforms existing approaches in alignment goodness-of-fit and neural
decoding performance.
| [
{
"created": "Fri, 9 Jun 2023 05:53:11 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 20:11:55 GMT",
"version": "v2"
}
] | 2024-03-12 | [
[
"Wang",
"Yule",
""
],
[
"Wu",
"Zijing",
""
],
[
"Li",
"Chengrui",
""
],
[
"Wu",
"Anqi",
""
]
] | In the field of behavior-related brain computation, it is necessary to align raw neural signals against the drastic domain shift among them. A foundational framework within neuroscience research posits that trial-based neural population activities rely on low-dimensional latent dynamics, thus focusing on the latter greatly facilitates the alignment procedure. Despite this field's progress, existing methods ignore the intrinsic spatio-temporal structure during the alignment phase. Hence, their solutions usually lead to poor quality in latent dynamics structures and overall performance. To tackle this problem, we propose an alignment method ERDiff, which leverages the expressivity of the diffusion model to preserve the spatio-temporal structure of latent dynamics. Specifically, the latent dynamics structures of the source domain are first extracted by a diffusion model. Then, under the guidance of this diffusion model, such structures are well-recovered through a maximum likelihood alignment procedure in the target domain. We first demonstrate the effectiveness of our proposed method on a synthetic dataset. Then, when applied to neural recordings from the non-human primate motor cortex, under both cross-day and inter-subject settings, our method consistently manifests its capability of preserving the spatiotemporal structure of latent dynamics and outperforms existing approaches in alignment goodness-of-fit and neural decoding performance. |
2307.14360 | Ludwig A. Hoffmann | Ludwig A. Hoffmann, Luca Giomi | Theory of cellular homochirality and trait evolution in flocking systems | 10 pages, 7 figures | null | null | null | q-bio.CB cond-mat.soft physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Chirality is a feature of many biological systems and much research has been
focused on understanding the origin and implications of this property.
Famously, sugars and amino acids found in nature are homochiral, i.e., chiral
symmetry is broken and only one of the two possible chiral states is ever
observed. Certain types of cells show chiral behavior, too. Understanding the
origin of cellular chirality and its effect on tissues and cellular dynamics is
still an open problem and subject to much (recent) research, e.g., in the
context of drosophila morphogenesis. Here, we develop a simple model to
describe the possible origin of homochirality in cells. Combining the Vicsek
model for collective behavior with the model of Jafarpour et al., developed to
describe the emergence of molecular homochirality, we investigate how a
homochiral state might have evolved in cells from an initially symmetric state
without any mechanisms that explicitly break chiral symmetry. We investigate
the transition to homochirality and show how the "openness" of the system as
well as noise determine if and when a globally homochiral state is reached. We
discuss how our model can be applied to the evolution of traits in flocking
systems in general, or to study systems consisting of multiple interacting
species.
| [
{
"created": "Mon, 24 Jul 2023 17:33:12 GMT",
"version": "v1"
}
] | 2023-07-28 | [
[
"Hoffmann",
"Ludwig A.",
""
],
[
"Giomi",
"Luca",
""
]
] | Chirality is a feature of many biological systems and much research has been focused on understanding the origin and implications of this property. Famously, sugars and amino acids found in nature are homochiral, i.e., chiral symmetry is broken and only one of the two possible chiral states is ever observed. Certain types of cells show chiral behavior, too. Understanding the origin of cellular chirality and its effect on tissues and cellular dynamics is still an open problem and subject to much (recent) research, e.g., in the context of drosophila morphogenesis. Here, we develop a simple model to describe the possible origin of homochirality in cells. Combining the Vicsek model for collective behavior with the model of Jafarpour et al., developed to describe the emergence of molecular homochirality, we investigate how a homochiral state might have evolved in cells from an initially symmetric state without any mechanisms that explicitly break chiral symmetry. We investigate the transition to homochirality and show how the "openness" of the system as well as noise determine if and when a globally homochiral state is reached. We discuss how our model can be applied to the evolution of traits in flocking systems in general, or to study systems consisting of multiple interacting species. |
0708.2038 | Julius Lucks | Julius B. Lucks, David R. Nelson, Grzegorz Kudla, Joshua B. Plotkin | Genome landscapes and bacteriophage codon usage | 9 Color Figures, 5 Tables, 53 References | Lucks JB, Nelson DR, Kudla GR, Plotkin JB (2008) Genome Landscapes
and Bacteriophage Codon Usage. PLoS Computational Biology 4(2): e1000001 | 10.1371/journal.pcbi.1000001 | null | q-bio.GN | null | Across all kingdoms of biological life, protein-coding genes exhibit unequal
usage of synonmous codons. Although alternative theories abound, translational
selection has been accepted as an important mechanism that shapes the patterns
of codon usage in prokaryotes and simple eukaryotes. Here we analyze patterns
of codon usage across 74 diverse bacteriophages that infect E. coli, P.
aeruginosa and L. lactis as their primary host. We introduce the concept of a
`genome landscape,' which helps reveal non-trivial, long-range patterns in
codon usage across a genome. We develop a series of randomization tests that
allow us to interrogate the significance of one aspect of codon usage, such a
GC content, while controlling for another aspect, such as adaptation to
host-preferred codons. We find that 33 phage genomes exhibit highly non-random
patterns in their GC3-content, use of host-preferred codons, or both. We show
that the head and tail proteins of these phages exhibit significant bias
towards host-preferred codons, relative to the non-structural phage proteins.
Our results support the hypothesis of translational selection on viral genes
for host-preferred codons, over a broad range of bacteriophages.
| [
{
"created": "Tue, 14 Aug 2007 22:44:18 GMT",
"version": "v1"
}
] | 2008-03-04 | [
[
"Lucks",
"Julius B.",
""
],
[
"Nelson",
"David R.",
""
],
[
"Kudla",
"Grzegorz",
""
],
[
"Plotkin",
"Joshua B.",
""
]
] | Across all kingdoms of biological life, protein-coding genes exhibit unequal usage of synonmous codons. Although alternative theories abound, translational selection has been accepted as an important mechanism that shapes the patterns of codon usage in prokaryotes and simple eukaryotes. Here we analyze patterns of codon usage across 74 diverse bacteriophages that infect E. coli, P. aeruginosa and L. lactis as their primary host. We introduce the concept of a `genome landscape,' which helps reveal non-trivial, long-range patterns in codon usage across a genome. We develop a series of randomization tests that allow us to interrogate the significance of one aspect of codon usage, such a GC content, while controlling for another aspect, such as adaptation to host-preferred codons. We find that 33 phage genomes exhibit highly non-random patterns in their GC3-content, use of host-preferred codons, or both. We show that the head and tail proteins of these phages exhibit significant bias towards host-preferred codons, relative to the non-structural phage proteins. Our results support the hypothesis of translational selection on viral genes for host-preferred codons, over a broad range of bacteriophages. |
2005.12879 | Nicholas Randolph | E. Benjamin Randall, Nicholas Z. Randolph, Alen Alexanderian, Mette S.
Olufsen | Global sensitivity analysis informed model reduction and selection
applied to a Valsalva maneuver model | null | null | 10.1016/j.jtbi.2021.110759 | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we develop a methodology for model reduction and selection
informed by global sensitivity analysis (GSA) methods. We apply these
techniques to a control model that takes systolic blood pressure and thoracic
tissue pressure data as inputs and predicts heart rate in response to the
Valsalva maneuver (VM). The study compares four GSA methods based on Sobol'
indices (SIs) quantifying the parameter influence on the difference between the
model output and the heart rate data. The GSA methods include standard scalar
SIs determining the average parameter influence over the time interval studied
and three time-varying methods analyzing how parameter influence changes over
time. The time-varying methods include a new technique, termed limited-memory
SIs, predicting parameter influence using a moving window approach. Using the
limited-memory SIs, we perform model reduction and selection to analyze the
necessity of modeling both the aortic and carotid baroreceptor regions in
response to the VM. We compare the original model to three systematically
reduced models including (i) the aortic and carotid regions, (ii) the aortic
region only, and (iii) the carotid region only. Model selection is done
quantitatively using the Akaike and Bayesian Information Criteria and
qualitatively by comparing the neurological predictions. Results show that it
is necessary to incorporate both the aortic and carotid regions to model the
VM.
| [
{
"created": "Tue, 26 May 2020 17:18:34 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Feb 2021 21:47:40 GMT",
"version": "v2"
},
{
"created": "Fri, 14 May 2021 15:48:22 GMT",
"version": "v3"
}
] | 2021-05-17 | [
[
"Randall",
"E. Benjamin",
""
],
[
"Randolph",
"Nicholas Z.",
""
],
[
"Alexanderian",
"Alen",
""
],
[
"Olufsen",
"Mette S.",
""
]
] | In this study, we develop a methodology for model reduction and selection informed by global sensitivity analysis (GSA) methods. We apply these techniques to a control model that takes systolic blood pressure and thoracic tissue pressure data as inputs and predicts heart rate in response to the Valsalva maneuver (VM). The study compares four GSA methods based on Sobol' indices (SIs) quantifying the parameter influence on the difference between the model output and the heart rate data. The GSA methods include standard scalar SIs determining the average parameter influence over the time interval studied and three time-varying methods analyzing how parameter influence changes over time. The time-varying methods include a new technique, termed limited-memory SIs, predicting parameter influence using a moving window approach. Using the limited-memory SIs, we perform model reduction and selection to analyze the necessity of modeling both the aortic and carotid baroreceptor regions in response to the VM. We compare the original model to three systematically reduced models including (i) the aortic and carotid regions, (ii) the aortic region only, and (iii) the carotid region only. Model selection is done quantitatively using the Akaike and Bayesian Information Criteria and qualitatively by comparing the neurological predictions. Results show that it is necessary to incorporate both the aortic and carotid regions to model the VM. |
1310.4010 | Marcus Kaiser | Marcus Kaiser | The Potential of the Human Connectome as a Biomarker of Brain Disease | Perspective Article for special issue on Magnetic Resonance Imaging
of Healthy and Diseased Brain Networks | Frontiers in Human Neuroscience 7:484, 2013 | 10.3389/fnhum.2013.00484 | null | q-bio.NC physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human connectome at the level of fiber tracts between brain regions has
been shown to differ in patients with brain disorders compared to healthy
control groups. Nonetheless, there is a potentially large number of different
network organizations for individual patients that could lead to cognitive
deficits prohibiting correct diagnosis. Therefore changes that can distinguish
groups might not be sufficient to diagnose the disease that an individual
patient suffers from and to indicate the best treatment option for that
patient. We describe the challenges introduced by the large variability of
connectomes within healthy subjects and patients and outline three common
strategies to use connectomes as biomarkers of brain diseases. Finally, we
propose a fourth option in using models of simulated brain activity (the
dynamic connectome) based on structural connectivity rather than the structure
(connectome) itself as a biomarker of disease. Dynamic connectomes, in addition
to currently used structural, functional, or effective connectivity, could be
an important future biomarker for clinical applications.
| [
{
"created": "Tue, 15 Oct 2013 11:18:16 GMT",
"version": "v1"
}
] | 2013-10-16 | [
[
"Kaiser",
"Marcus",
""
]
] | The human connectome at the level of fiber tracts between brain regions has been shown to differ in patients with brain disorders compared to healthy control groups. Nonetheless, there is a potentially large number of different network organizations for individual patients that could lead to cognitive deficits prohibiting correct diagnosis. Therefore changes that can distinguish groups might not be sufficient to diagnose the disease that an individual patient suffers from and to indicate the best treatment option for that patient. We describe the challenges introduced by the large variability of connectomes within healthy subjects and patients and outline three common strategies to use connectomes as biomarkers of brain diseases. Finally, we propose a fourth option in using models of simulated brain activity (the dynamic connectome) based on structural connectivity rather than the structure (connectome) itself as a biomarker of disease. Dynamic connectomes, in addition to currently used structural, functional, or effective connectivity, could be an important future biomarker for clinical applications. |
1611.00693 | Osman Kahraman | Osman Kahraman, William S. Klug, Christoph A. Haselwandter | Signatures of protein structure in the cooperative gating of
mechanosensitive ion channels | null | EPL, 107 (2014), 48004 | 10.1209/0295-5075/107/48004 | null | q-bio.BM cond-mat.soft physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Membrane proteins deform the surrounding lipid bilayer, which can lead to
membrane-mediated interactions between neighboring proteins. Using the
mechanosensitive channel of large conductance (MscL) as a model system, we
demonstrate how the observed differences in protein structure can affect
membrane-mediated interactions and cooperativity among membrane proteins. We
find that distinct oligomeric states of MscL lead to distinct gateway states
for the clustering of MscL, and predict signatures of MscL structure and
spatial organization in the cooperative gating of MscL. Our modeling approach
establishes a quantitative relation between the observed shapes and cooperative
function of membrane~proteins.
| [
{
"created": "Wed, 2 Nov 2016 17:37:01 GMT",
"version": "v1"
}
] | 2016-11-03 | [
[
"Kahraman",
"Osman",
""
],
[
"Klug",
"William S.",
""
],
[
"Haselwandter",
"Christoph A.",
""
]
] | Membrane proteins deform the surrounding lipid bilayer, which can lead to membrane-mediated interactions between neighboring proteins. Using the mechanosensitive channel of large conductance (MscL) as a model system, we demonstrate how the observed differences in protein structure can affect membrane-mediated interactions and cooperativity among membrane proteins. We find that distinct oligomeric states of MscL lead to distinct gateway states for the clustering of MscL, and predict signatures of MscL structure and spatial organization in the cooperative gating of MscL. Our modeling approach establishes a quantitative relation between the observed shapes and cooperative function of membrane~proteins. |
2108.03465 | Imdadullah Khan | Sarwan Ali, Bikram Sahoo, Naimat Ullah, Alexander Zelikovskiy, Murray
Patterson, Imdadullah Khan | A k-mer Based Approach for SARS-CoV-2 Variant Identification | Accepted for Publication at "International Symposium on
Bioinformatics Research and Applications (ISBRA), 2021 | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | With the rapid spread of the novel coronavirus (COVID-19) across the globe
and its continuous mutation, it is of pivotal importance to design a system to
identify different known (and unknown) variants of SARS-CoV-2. Identifying
particular variants helps to understand and model their spread patterns, design
effective mitigation strategies, and prevent future outbreaks. It also plays a
crucial role in studying the efficacy of known vaccines against each variant
and modeling the likelihood of breakthrough infections. It is well known that
the spike protein contains most of the information/variation pertaining to
coronavirus variants.
In this paper, we use spike sequences to classify different variants of the
coronavirus in humans. We show that preserving the order of the amino acids
helps the underlying classifiers to achieve better performance. We also show
that we can train our model to outperform the baseline algorithms using only a
small number of training samples ($1\%$ of the data). Finally, we show the
importance of the different amino acids which play a key role in identifying
variants and how they coincide with those reported by the USA's Centers for
Disease Control and Prevention (CDC).
| [
{
"created": "Sat, 7 Aug 2021 15:08:15 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Aug 2021 19:46:41 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Aug 2021 07:42:57 GMT",
"version": "v3"
},
{
"created": "Sat, 25 Sep 2021 19:50:09 GMT",
"version": "v4"
},
{
"cr... | 2021-10-13 | [
[
"Ali",
"Sarwan",
""
],
[
"Sahoo",
"Bikram",
""
],
[
"Ullah",
"Naimat",
""
],
[
"Zelikovskiy",
"Alexander",
""
],
[
"Patterson",
"Murray",
""
],
[
"Khan",
"Imdadullah",
""
]
] | With the rapid spread of the novel coronavirus (COVID-19) across the globe and its continuous mutation, it is of pivotal importance to design a system to identify different known (and unknown) variants of SARS-CoV-2. Identifying particular variants helps to understand and model their spread patterns, design effective mitigation strategies, and prevent future outbreaks. It also plays a crucial role in studying the efficacy of known vaccines against each variant and modeling the likelihood of breakthrough infections. It is well known that the spike protein contains most of the information/variation pertaining to coronavirus variants. In this paper, we use spike sequences to classify different variants of the coronavirus in humans. We show that preserving the order of the amino acids helps the underlying classifiers to achieve better performance. We also show that we can train our model to outperform the baseline algorithms using only a small number of training samples ($1\%$ of the data). Finally, we show the importance of the different amino acids which play a key role in identifying variants and how they coincide with those reported by the USA's Centers for Disease Control and Prevention (CDC). |
1502.05772 | Alistair Perry | Alistair Perry, Wei Wen, Anton Lord, Anbupalam Thalamuthu, Perminder
Sachdev, Michael Breakspear | The Organisation of the Elderly Connectome | 35 pages, 6 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigations of the human connectome have elucidated core features of adult
structural networks, particularly the crucial role of hub-regions. However,
little is known regarding network organisation of the healthy elderly
connectome, a crucial prelude to the systematic study of neurodegenerative
disorders. Here, whole-brain probabilistic tractography was performed on
high-angular diffusion-weighted images acquired from 115 healthy elderly
subjects, whom were 76 to 94 years old. Structural networks were reconstructed
between 512 cortical and subcortical brain regions. We sought to investigate
the architectural features of hub-regions, as well as left-right asymmetries,
and sexual dimorphisms. We observed that the topology of hub-regions is
consistent with adult connectomic data, and more importantly, their
architectural features reflect their ongoing vital role in network
communication. We also found substantial sexual dimorphisms, with females
exhibiting stronger inter-hemispheric connections between cingulate and
prefrontal cortices. Lastly, we demonstrate intriguing left-lateralized
subnetworks consistent with the neural circuitry specialised for language and
executive functions, while rightward subnetworks were dominant in visual and
visuospatial streams. These findings provide insights into healthy brain ageing
and provide a benchmark for the study of neurodegenerative disorders such as
Alzheimers disease and Frontotemporal Dementia.
| [
{
"created": "Fri, 20 Feb 2015 04:57:14 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Dec 2015 06:37:26 GMT",
"version": "v2"
}
] | 2015-12-15 | [
[
"Perry",
"Alistair",
""
],
[
"Wen",
"Wei",
""
],
[
"Lord",
"Anton",
""
],
[
"Thalamuthu",
"Anbupalam",
""
],
[
"Sachdev",
"Perminder",
""
],
[
"Breakspear",
"Michael",
""
]
] | Investigations of the human connectome have elucidated core features of adult structural networks, particularly the crucial role of hub-regions. However, little is known regarding network organisation of the healthy elderly connectome, a crucial prelude to the systematic study of neurodegenerative disorders. Here, whole-brain probabilistic tractography was performed on high-angular diffusion-weighted images acquired from 115 healthy elderly subjects, whom were 76 to 94 years old. Structural networks were reconstructed between 512 cortical and subcortical brain regions. We sought to investigate the architectural features of hub-regions, as well as left-right asymmetries, and sexual dimorphisms. We observed that the topology of hub-regions is consistent with adult connectomic data, and more importantly, their architectural features reflect their ongoing vital role in network communication. We also found substantial sexual dimorphisms, with females exhibiting stronger inter-hemispheric connections between cingulate and prefrontal cortices. Lastly, we demonstrate intriguing left-lateralized subnetworks consistent with the neural circuitry specialised for language and executive functions, while rightward subnetworks were dominant in visual and visuospatial streams. These findings provide insights into healthy brain ageing and provide a benchmark for the study of neurodegenerative disorders such as Alzheimers disease and Frontotemporal Dementia. |
2005.03430 | Roberto Budzinski | R. C. Budzinski, S. R. Lopes, C. Masoller | Symbolic analysis of bursting dynamical regimes of Rulkov neural
networks | null | null | 10.1016/j.neucom.2020.05.122 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurons modeled by the Rulkov map display a variety of dynamic regimes that
include tonic spikes and chaotic bursting. Here we study an ensemble of
bursting neurons coupled with the Watts-Strogatz small-world topology. We
characterize the sequences of bursts using the symbolic method of time-series
analysis known as ordinal analysis, which detects nonlinear temporal
correlations. We show that the probabilities of the different symbols
distinguish different dynamical regimes, which depend on the coupling strength
and the network topology. These regimes have different spatio-temporal
properties that can be visualized with raster plots.
| [
{
"created": "Thu, 7 May 2020 13:01:57 GMT",
"version": "v1"
}
] | 2023-06-13 | [
[
"Budzinski",
"R. C.",
""
],
[
"Lopes",
"S. R.",
""
],
[
"Masoller",
"C.",
""
]
] | Neurons modeled by the Rulkov map display a variety of dynamic regimes that include tonic spikes and chaotic bursting. Here we study an ensemble of bursting neurons coupled with the Watts-Strogatz small-world topology. We characterize the sequences of bursts using the symbolic method of time-series analysis known as ordinal analysis, which detects nonlinear temporal correlations. We show that the probabilities of the different symbols distinguish different dynamical regimes, which depend on the coupling strength and the network topology. These regimes have different spatio-temporal properties that can be visualized with raster plots. |
2002.12429 | Karunia Putra Wijaya | Ahd Mahmoud Al-Salman, Joseph P\'aez Ch\'avez and Karunia Putra Wijaya | A modeling study of predator--prey interaction propounding honest
signals and cues | null | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Honest signals and cues have been observed as part of interspecific and
intraspecific communication among animals. Recent theories suggest that
existing signaling systems have evolved through natural selection imposed by
predators. Honest signaling in the interspecific communication can provide
insight into the evolution of anti-predation techniques. In this work, we
introduce a deterministic three-stage, two-species predator-prey model, which
modulates the impact of honest signals and cues on the interacting populations.
The model is built from a set of first principles originated from signaling and
social learning theory in which the response of predators to transmitted honest
signals or cues is determined. The predators then use the signals to decide
whether to pursue the attack or save their energy for an easier catch. Other
members from the prey population that are not familiar with signaling their
fitness observe and learn the technique. Our numerical bifurcation analysis
indicates that increasing the predator's search rate and the corresponding
assimilation efficiency gives a journey from predator-prey abundance and
scarcity, a stable transient cycle between persistence and near-extinction, a
homoclinic orbit pointing towards extinction, and ultimately, a quasi-periodic
orbit. A similar discovery is met under the increment of the prey's intrinsic
birth rate and carrying capacity. When both parameters are of sufficiently
large magnitudes, the separator between honest signal and cue takes the similar
journey from a stable equilibrium to a quasi-periodic orbit as it increases. In
the context of modeling, we conclude that under prey abundance, transmitting
error-free honest signals leads to not only a stable but also more predictable
predator-prey dynamics.
| [
{
"created": "Thu, 27 Feb 2020 20:48:21 GMT",
"version": "v1"
}
] | 2020-03-02 | [
[
"Al-Salman",
"Ahd Mahmoud",
""
],
[
"Chávez",
"Joseph Páez",
""
],
[
"Wijaya",
"Karunia Putra",
""
]
] | Honest signals and cues have been observed as part of interspecific and intraspecific communication among animals. Recent theories suggest that existing signaling systems have evolved through natural selection imposed by predators. Honest signaling in the interspecific communication can provide insight into the evolution of anti-predation techniques. In this work, we introduce a deterministic three-stage, two-species predator-prey model, which modulates the impact of honest signals and cues on the interacting populations. The model is built from a set of first principles originated from signaling and social learning theory in which the response of predators to transmitted honest signals or cues is determined. The predators then use the signals to decide whether to pursue the attack or save their energy for an easier catch. Other members from the prey population that are not familiar with signaling their fitness observe and learn the technique. Our numerical bifurcation analysis indicates that increasing the predator's search rate and the corresponding assimilation efficiency gives a journey from predator-prey abundance and scarcity, a stable transient cycle between persistence and near-extinction, a homoclinic orbit pointing towards extinction, and ultimately, a quasi-periodic orbit. A similar discovery is met under the increment of the prey's intrinsic birth rate and carrying capacity. When both parameters are of sufficiently large magnitudes, the separator between honest signal and cue takes the similar journey from a stable equilibrium to a quasi-periodic orbit as it increases. In the context of modeling, we conclude that under prey abundance, transmitting error-free honest signals leads to not only a stable but also more predictable predator-prey dynamics. |
1709.04416 | Marcus Aguiar de | Carolina L. N. Costa, Flavia M. D. Marquitti, S. Ivan Perez, David M.
Schneider, Marlon F. Ramos, Marcus A.M. de Aguiar | Registering the evolutionary history in individual-based models of
speciation | This is a revised version, with a new title and 2 new co-authors. 24
pages, 7 figures | Physica A 510 (2018) 1 | 10.1016/j.physa.2018.05.150 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the emergence of biodiversity patterns in nature is a central
problem in biology. Theoretical models of speciation have addressed this
question in the macroecological scale, but little has been investigated in the
macroevolutionary context. Knowledge of the evolutionary history allows the
study of patterns underlying the processes considered in these models,
revealing their signatures and the role of speciation and extinction in shaping
macroevolutionary patterns. In this paper we introduce two algorithms to record
the evolutionary history of populations in individual-based models of
speciation, from which genealogies and phylogenies can be constructed. The
first algorithm relies on saving ancestral-descendant relationships, generating
a matrix that contains the times to the most recent common ancestor between all
pairs of individuals at every generation (the Most Recent Common Ancestor Time
matrix, MRCAT). The second algorithm directly records all speciation and
extinction events throughout the evolutionary process, generating a matrix with
the true phylogeny of species (the Sequential Speciation and Extinction Events,
SSEE). We illustrate the use of these algorithms in a spatially explicit
individual-based model of speciation. We compare the trees generated via MRCAT
and SSEE algorithms with trees inferred by methods that use only genetic
distance among extant species, commonly used in empirical studies and applied
here to simulated genetic data. Comparisons between tress are performed with
metrics describing the overall topology, branch length distribution and
imbalance of trees. We observe that both MRCAT and distance-based trees differ
from the true phylogeny, with the first being closer to the true tree than the
second.
| [
{
"created": "Wed, 13 Sep 2017 16:56:05 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2017 11:33:40 GMT",
"version": "v2"
}
] | 2018-10-09 | [
[
"Costa",
"Carolina L. N.",
""
],
[
"Marquitti",
"Flavia M. D.",
""
],
[
"Perez",
"S. Ivan",
""
],
[
"Schneider",
"David M.",
""
],
[
"Ramos",
"Marlon F.",
""
],
[
"de Aguiar",
"Marcus A. M.",
""
]
] | Understanding the emergence of biodiversity patterns in nature is a central problem in biology. Theoretical models of speciation have addressed this question in the macroecological scale, but little has been investigated in the macroevolutionary context. Knowledge of the evolutionary history allows the study of patterns underlying the processes considered in these models, revealing their signatures and the role of speciation and extinction in shaping macroevolutionary patterns. In this paper we introduce two algorithms to record the evolutionary history of populations in individual-based models of speciation, from which genealogies and phylogenies can be constructed. The first algorithm relies on saving ancestral-descendant relationships, generating a matrix that contains the times to the most recent common ancestor between all pairs of individuals at every generation (the Most Recent Common Ancestor Time matrix, MRCAT). The second algorithm directly records all speciation and extinction events throughout the evolutionary process, generating a matrix with the true phylogeny of species (the Sequential Speciation and Extinction Events, SSEE). We illustrate the use of these algorithms in a spatially explicit individual-based model of speciation. We compare the trees generated via MRCAT and SSEE algorithms with trees inferred by methods that use only genetic distance among extant species, commonly used in empirical studies and applied here to simulated genetic data. Comparisons between tress are performed with metrics describing the overall topology, branch length distribution and imbalance of trees. We observe that both MRCAT and distance-based trees differ from the true phylogeny, with the first being closer to the true tree than the second. |
1505.01142 | Vicente M. Reyes Ph.D. | Vicente M. Reyes | Two Complementary Methods for Relative Quantification of Ligand Binding
Site Burial Depth in Proteins: The "Cutting Plane" and "Tangent Sphere"
Methods | 11 pages text; 7 figures (all multi-panel); 3 tables; 34 total pages
(incl. figures & tables) | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe two complementary methods to quantify the degree of burial of
ligand and/or ligand binding site (LBS) in a protein-ligand complex, namely,
the "cutting plane" (CP) and the "tangent sphere" (TS) methods. To construct
the CP and TS, two centroids are required: the protein molecular centroid
(global centroid, GC), and the LBS centroid (local centroid, LC). The CP is
defined as the plane passing through the LBS centroid (LC) and normal to the
line passing through the LC and the protein molecular centroid (GC). The
"anterior side" of the CP is the side not containing the GC (which the
"posterior" side does). The TS is defined as the sphere with center at GC and
tangent to the CP at LC. The percentage of protein atoms (a.) inside the TS,
and (b.) on the anterior side of the CP, are two complementary measures of
ligand or LBS burial depth since the latter is directly proportional to (b.)
and inversely proportional to (a.). We tested the CP and TS methods using a
test set of 67 well characterized protein-ligand structures (Laskowski et al.,
1996), as well as the theoretical case of an artificial protein in the form of
a cubic lattice grid of points in the overall shape of a sphere and in which
LBS of any depth can be specified. Results from both the CP and TS methods
agree very well with data reported by Laskowski et al., and results from the
theoretical case further confirm that that both methods are suitable measures
of ligand or LBS burial. Prior to this study, there were no such numerical
measures of LBS burial available, and hence no way to directly and objectively
compare LBS depths in different proteins. LBS burial depth is an important
parameter as it is usually directly related to the amount of conformational
change a protein undergoes upon ligand binding, and ability to quantify it
could allow meaningful comparison of protein dynamics and flexibility.
| [
{
"created": "Sat, 7 Feb 2015 04:17:39 GMT",
"version": "v1"
}
] | 2015-05-06 | [
[
"Reyes",
"Vicente M.",
""
]
] | We describe two complementary methods to quantify the degree of burial of ligand and/or ligand binding site (LBS) in a protein-ligand complex, namely, the "cutting plane" (CP) and the "tangent sphere" (TS) methods. To construct the CP and TS, two centroids are required: the protein molecular centroid (global centroid, GC), and the LBS centroid (local centroid, LC). The CP is defined as the plane passing through the LBS centroid (LC) and normal to the line passing through the LC and the protein molecular centroid (GC). The "anterior side" of the CP is the side not containing the GC (which the "posterior" side does). The TS is defined as the sphere with center at GC and tangent to the CP at LC. The percentage of protein atoms (a.) inside the TS, and (b.) on the anterior side of the CP, are two complementary measures of ligand or LBS burial depth since the latter is directly proportional to (b.) and inversely proportional to (a.). We tested the CP and TS methods using a test set of 67 well characterized protein-ligand structures (Laskowski et al., 1996), as well as the theoretical case of an artificial protein in the form of a cubic lattice grid of points in the overall shape of a sphere and in which LBS of any depth can be specified. Results from both the CP and TS methods agree very well with data reported by Laskowski et al., and results from the theoretical case further confirm that that both methods are suitable measures of ligand or LBS burial. Prior to this study, there were no such numerical measures of LBS burial available, and hence no way to directly and objectively compare LBS depths in different proteins. LBS burial depth is an important parameter as it is usually directly related to the amount of conformational change a protein undergoes upon ligand binding, and ability to quantify it could allow meaningful comparison of protein dynamics and flexibility. |
2311.11132 | R\'emy Ben Messaoud | Remy Ben Messaoud, Vincent Le Du, Brigitte Charlotte Kaufmann,
Baptiste Couvy-Duchesne, Lara Migliaccio, Paolo Bartolomeo, Mario Chavez,
Fabrizio De Vico Fallani | Low-dimensional controllability of brain networks | null | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Network controllability is a powerful tool to study causal relationships in
complex systems and identify the driver nodes for steering the network dynamics
into desired states. However, due to ill-posed conditions, results become
unreliable when the number of drivers becomes too small compared to the network
size. This is a very common situation, particularly in real-world applications,
where the possibility to access multiple nodes at the same time is limited by
technological constraints, such as in the human brain. Although targeting
smaller network parts might improve accuracy, challenges may remain for
extremely unbalanced situations, when for example there is one single driver.
To address this problem, we developed a mathematical framework that combines
concepts from spectral graph theory and modern network science. Instead of
controlling the original network dynamics, we aimed to control its
low-dimensional embedding into the topological space derived from the network
Laplacian. By performing extensive simulations on synthetic networks, we showed
that a relatively low number of projected components is enough to improve the
overall control accuracy, notably when dealing with very few drivers. Based on
these findings, we introduced alternative low-dimensional controllability
metrics and used them to identify the main driver areas of the human connectome
obtained from N=6134 healthy individuals in the UK-biobank cohort. Results
revealed previously unappreciated influential regions compared to standard
approaches, enabled to draw control maps between distinct specialized
large-scale brain systems, and yielded an anatomically-based understanding of
hemispheric functional lateralization. Taken together, our results offered a
theoretically-grounded solution to deal with network controllability in
real-life applications and provided insights into the causal interactions of
the human brain.
| [
{
"created": "Sat, 18 Nov 2023 17:46:32 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Nov 2023 16:59:59 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Nov 2023 14:48:55 GMT",
"version": "v3"
}
] | 2023-11-29 | [
[
"Messaoud",
"Remy Ben",
""
],
[
"Du",
"Vincent Le",
""
],
[
"Kaufmann",
"Brigitte Charlotte",
""
],
[
"Couvy-Duchesne",
"Baptiste",
""
],
[
"Migliaccio",
"Lara",
""
],
[
"Bartolomeo",
"Paolo",
""
],
[
"Chavez",
... | Network controllability is a powerful tool to study causal relationships in complex systems and identify the driver nodes for steering the network dynamics into desired states. However, due to ill-posed conditions, results become unreliable when the number of drivers becomes too small compared to the network size. This is a very common situation, particularly in real-world applications, where the possibility to access multiple nodes at the same time is limited by technological constraints, such as in the human brain. Although targeting smaller network parts might improve accuracy, challenges may remain for extremely unbalanced situations, when for example there is one single driver. To address this problem, we developed a mathematical framework that combines concepts from spectral graph theory and modern network science. Instead of controlling the original network dynamics, we aimed to control its low-dimensional embedding into the topological space derived from the network Laplacian. By performing extensive simulations on synthetic networks, we showed that a relatively low number of projected components is enough to improve the overall control accuracy, notably when dealing with very few drivers. Based on these findings, we introduced alternative low-dimensional controllability metrics and used them to identify the main driver areas of the human connectome obtained from N=6134 healthy individuals in the UK-biobank cohort. Results revealed previously unappreciated influential regions compared to standard approaches, enabled to draw control maps between distinct specialized large-scale brain systems, and yielded an anatomically-based understanding of hemispheric functional lateralization. Taken together, our results offered a theoretically-grounded solution to deal with network controllability in real-life applications and provided insights into the causal interactions of the human brain. |
2403.09678 | Christophe Pouzat | Christophe Pouzat (IRMA), Morgan Andr\'e | A Quasi-Stationary Approach to Metastability in a System of Spiking
Neurons with Synaptic Plasticity | null | null | null | null | q-bio.NC math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After reviewing the behavioral studies of working memory and of the cellular
substrate of the latter, we argue that metastable states constitute candidates
for the type of transient information storage required by working memory. We
then present a simple neural network model made of stochastic units whose
synapses exhibit short-term facilitation. The Markov process dynamics of this
model was specifically designed to be analytically tractable, simple to
simulate numerically and to exhibit a quasi-stationary distribution (QSD).
Since the state space is finite this QSD is also a Yaglom limit, which allows
us to bridge the gap between quasi-stationarity and metastability by
considering the relative orders of magnitude of the relaxation and absorption
times. We present first analytical results: characterization of the absorbing
region of the Markov process, irreducibility outside this absorbing region and
consequently existence and uniqueness of a QSD. We then apply Perron-Frobenius
spectral analysis to obtain any specific QSD, and design an approximate method
for the first moments of this QSD when the exact method is intractable. Finally
we use these methods to study the relaxation time toward the QSD and establish
numerically the memorylessness of the time of extinction.
| [
{
"created": "Wed, 7 Feb 2024 08:07:50 GMT",
"version": "v1"
}
] | 2024-03-18 | [
[
"Pouzat",
"Christophe",
"",
"IRMA"
],
[
"André",
"Morgan",
""
]
] | After reviewing the behavioral studies of working memory and of the cellular substrate of the latter, we argue that metastable states constitute candidates for the type of transient information storage required by working memory. We then present a simple neural network model made of stochastic units whose synapses exhibit short-term facilitation. The Markov process dynamics of this model was specifically designed to be analytically tractable, simple to simulate numerically and to exhibit a quasi-stationary distribution (QSD). Since the state space is finite this QSD is also a Yaglom limit, which allows us to bridge the gap between quasi-stationarity and metastability by considering the relative orders of magnitude of the relaxation and absorption times. We present first analytical results: characterization of the absorbing region of the Markov process, irreducibility outside this absorbing region and consequently existence and uniqueness of a QSD. We then apply Perron-Frobenius spectral analysis to obtain any specific QSD, and design an approximate method for the first moments of this QSD when the exact method is intractable. Finally we use these methods to study the relaxation time toward the QSD and establish numerically the memorylessness of the time of extinction. |
2306.10953 | Asuna Gilfoyle | Asuna Gilfoyle, Willow Baird | The Impact of Rising Ocean Acidification Levels on Fish Migration | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Ocean acidification, a direct consequence of increased carbon dioxide (CO2)
emissions, has emerged as a critical area of concern within the scientific
community. The world's oceans absorb approximately one-third of human-caused
CO2 emissions, leading to chemical reactions that reduce seawater pH, carbonate
ion concentration, and saturation states of biologically important calcium
carbonate minerals. This process, known as ocean acidification, has
far-reaching implications for marine ecosystems, particularly for marine
organisms such as fish, whose migratory patterns are integral to the health and
function of these ecosystems.
| [
{
"created": "Mon, 19 Jun 2023 14:11:30 GMT",
"version": "v1"
}
] | 2023-06-21 | [
[
"Gilfoyle",
"Asuna",
""
],
[
"Baird",
"Willow",
""
]
] | Ocean acidification, a direct consequence of increased carbon dioxide (CO2) emissions, has emerged as a critical area of concern within the scientific community. The world's oceans absorb approximately one-third of human-caused CO2 emissions, leading to chemical reactions that reduce seawater pH, carbonate ion concentration, and saturation states of biologically important calcium carbonate minerals. This process, known as ocean acidification, has far-reaching implications for marine ecosystems, particularly for marine organisms such as fish, whose migratory patterns are integral to the health and function of these ecosystems. |
1612.06336 | Andrew Murphy | Andrew C. Murphy, Sarah F. Muldoon, David Baker, Adam Lastowka,
Brittany Bennett, Muzhi Yang, and Danielle S. Bassett | Structure, Function, and Control of the Musculoskeletal Network | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human body is a complex organism whose gross mechanical properties are
enabled by an interconnected musculoskeletal network controlled by the nervous
system. The nature of musculoskeletal interconnection facilitates stability,
voluntary movement, and robustness to injury. However, a fundamental
understanding of this network and its control by neural systems has remained
elusive. Here we utilize medical databases and mathematical modeling to reveal
the organizational structure, predicted function, and neural control of the
musculoskeletal system. We construct a whole-body musculoskeletal network in
which single muscles connect to multiple bones via both origin and insertion
points. We demonstrate that a muscle's role in this network predicts
susceptibility of surrounding components to secondary injury. Finally, we
illustrate that sets of muscles cluster into network communities that mimic the
organization of motor cortex control modules. This novel formalism for
describing interactions between the muscular and skeletal systems serves as a
foundation to develop and test therapeutic responses to injury, inspiring
future advances in clinical treatments.
| [
{
"created": "Tue, 13 Dec 2016 21:43:11 GMT",
"version": "v1"
}
] | 2016-12-20 | [
[
"Murphy",
"Andrew C.",
""
],
[
"Muldoon",
"Sarah F.",
""
],
[
"Baker",
"David",
""
],
[
"Lastowka",
"Adam",
""
],
[
"Bennett",
"Brittany",
""
],
[
"Yang",
"Muzhi",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | The human body is a complex organism whose gross mechanical properties are enabled by an interconnected musculoskeletal network controlled by the nervous system. The nature of musculoskeletal interconnection facilitates stability, voluntary movement, and robustness to injury. However, a fundamental understanding of this network and its control by neural systems has remained elusive. Here we utilize medical databases and mathematical modeling to reveal the organizational structure, predicted function, and neural control of the musculoskeletal system. We construct a whole-body musculoskeletal network in which single muscles connect to multiple bones via both origin and insertion points. We demonstrate that a muscle's role in this network predicts susceptibility of surrounding components to secondary injury. Finally, we illustrate that sets of muscles cluster into network communities that mimic the organization of motor cortex control modules. This novel formalism for describing interactions between the muscular and skeletal systems serves as a foundation to develop and test therapeutic responses to injury, inspiring future advances in clinical treatments. |
2203.11687 | Martin Marzidovsek | Martin Marzidov\v{s}ek, Vid Podpe\v{c}an, Erminia Conti, Marko
Debeljak, Christian Mulder | BEFANA: A Tool for Biodiversity-Ecosystem Functioning Assessment by
Network Analysis | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BEFANA is a free and open-source software tool for ecological network
analysis and visualisation. It is adapted to ecologists' needs and allows them
to study the topology and dynamics of ecological networks as well as apply
selected machine learning algorithms. BEFANA is implemented in Python, and
structured as an ordered collection of interactive computational notebooks. It
relies on widely used open-source libraries, and aims to achieve simplicity,
interactivity, and extensibility. BEFANA provides methods and implementations
for data loading and preprocessing, network analysis and interactive
visualisation, modelling with experimental data, and predictive modelling with
machine learning. We showcase BEFANA through a concrete example of a detrital
soil food web of agricultural grasslands, and demonstrate all of its main
components and functionalities.
| [
{
"created": "Mon, 21 Mar 2022 16:51:09 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Mar 2022 14:36:59 GMT",
"version": "v2"
}
] | 2022-03-29 | [
[
"Marzidovšek",
"Martin",
""
],
[
"Podpečan",
"Vid",
""
],
[
"Conti",
"Erminia",
""
],
[
"Debeljak",
"Marko",
""
],
[
"Mulder",
"Christian",
""
]
] | BEFANA is a free and open-source software tool for ecological network analysis and visualisation. It is adapted to ecologists' needs and allows them to study the topology and dynamics of ecological networks as well as apply selected machine learning algorithms. BEFANA is implemented in Python, and structured as an ordered collection of interactive computational notebooks. It relies on widely used open-source libraries, and aims to achieve simplicity, interactivity, and extensibility. BEFANA provides methods and implementations for data loading and preprocessing, network analysis and interactive visualisation, modelling with experimental data, and predictive modelling with machine learning. We showcase BEFANA through a concrete example of a detrital soil food web of agricultural grasslands, and demonstrate all of its main components and functionalities. |
0704.1912 | Adrian Melott | L.C. Natarajan, A.L. Melott, B.M. Rothschild, and L.D. Martin
(University of Kansas) | Bone Cancer Rates in Dinosaurs Compared with Modern Vertebrates | As published in Transactions of the Kansas Academy of Science | TKAS 110, 155-158 (2007) | null | null | q-bio.PE astro-ph physics.geo-ph | null | Data on the prevalence of bone cancer in dinosaurs is available from past
radiological examination of preserved bones. We statistically test this data
for consistency with rates extrapolated from information on bone cancer in
modern vertebrates, and find that there is no evidence of a different rate.
Thus, this test provides no support for a possible role of ionizing radiation
in the K-T extinction event.
| [
{
"created": "Sun, 15 Apr 2007 19:08:16 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Sep 2007 16:28:09 GMT",
"version": "v2"
},
{
"created": "Tue, 11 Sep 2007 14:19:34 GMT",
"version": "v3"
},
{
"created": "Tue, 16 Oct 2007 18:17:26 GMT",
"version": "v4"
}
] | 2007-10-16 | [
[
"Natarajan",
"L. C.",
"",
"University of Kansas"
],
[
"Melott",
"A. L.",
"",
"University of Kansas"
],
[
"Rothschild",
"B. M.",
"",
"University of Kansas"
],
[
"Martin",
"L. D.",
"",
"University of Kansas"
]
] | Data on the prevalence of bone cancer in dinosaurs is available from past radiological examination of preserved bones. We statistically test this data for consistency with rates extrapolated from information on bone cancer in modern vertebrates, and find that there is no evidence of a different rate. Thus, this test provides no support for a possible role of ionizing radiation in the K-T extinction event. |
2407.02126 | Paul Bilokon | Oleksandr Bilokon and Nataliya Bilokon and Paul Bilokon | AI-driven Alternative Medicine: A Novel Approach to Drug Discovery and
Repurposing | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | AIAltMed is a cutting-edge platform designed for drug discovery and
repurposing. It utilizes Tanimoto similarity to identify structurally similar
non-medicinal compounds to known medicinal ones. This preprint introduces
AIAltMed, discusses the concept of `AI-driven alternative medicine,' evaluates
Tanimoto similarity's advantages and limitations, and details the system's
architecture. Furthermore, it explores the benefits of extending the system to
include PubChem and outlines a corresponding implementation strategy.
| [
{
"created": "Tue, 2 Jul 2024 10:17:13 GMT",
"version": "v1"
}
] | 2024-07-03 | [
[
"Bilokon",
"Oleksandr",
""
],
[
"Bilokon",
"Nataliya",
""
],
[
"Bilokon",
"Paul",
""
]
] | AIAltMed is a cutting-edge platform designed for drug discovery and repurposing. It utilizes Tanimoto similarity to identify structurally similar non-medicinal compounds to known medicinal ones. This preprint introduces AIAltMed, discusses the concept of `AI-driven alternative medicine,' evaluates Tanimoto similarity's advantages and limitations, and details the system's architecture. Furthermore, it explores the benefits of extending the system to include PubChem and outlines a corresponding implementation strategy. |
2202.10873 | Jinjiang Guo Ph.D. | Jinjiang Guo, Qi Liu, Han Guo, Xi Lu | Ligandformer: A Graph Neural Network for Predicting Compound Property
with Robust Interpretation | 7 pages, 4 figures | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust and efficient interpretation of QSAR methods is quite useful to
validate AI prediction rationales with subjective opinion (chemist or biologist
expertise), understand sophisticated chemical or biological process mechanisms,
and provide heuristic ideas for structure optimization in pharmaceutical
industry. For this purpose, we construct a multi-layer self-attention based
Graph Neural Network framework, namely Ligandformer, for predicting compound
property with interpretation. Ligandformer integrates attention maps on
compound structure from different network blocks. The integrated attention map
reflects the machine's local interest on compound structure, and indicates the
relationship between predicted compound property and its structure. This work
mainly contributes to three aspects: 1. Ligandformer directly opens the
black-box of deep learning methods, providing local prediction rationales on
chemical structures. 2. Ligandformer gives robust prediction in different
experimental rounds, overcoming the ubiquitous prediction instability of deep
learning methods. 3. Ligandformer can be generalized to predict different
chemical or biological properties with high performance. Furthermore,
Ligandformer can simultaneously output specific property score and visible
attention map on structure, which can support researchers to investigate
chemical or biological property and optimize structure efficiently. Our
framework outperforms over counterparts in terms of accuracy, robustness and
generalization, and can be applied in complex system study.
| [
{
"created": "Mon, 21 Feb 2022 15:46:44 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Feb 2022 02:38:11 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Feb 2022 02:54:52 GMT",
"version": "v3"
}
] | 2022-02-25 | [
[
"Guo",
"Jinjiang",
""
],
[
"Liu",
"Qi",
""
],
[
"Guo",
"Han",
""
],
[
"Lu",
"Xi",
""
]
] | Robust and efficient interpretation of QSAR methods is quite useful to validate AI prediction rationales with subjective opinion (chemist or biologist expertise), understand sophisticated chemical or biological process mechanisms, and provide heuristic ideas for structure optimization in pharmaceutical industry. For this purpose, we construct a multi-layer self-attention based Graph Neural Network framework, namely Ligandformer, for predicting compound property with interpretation. Ligandformer integrates attention maps on compound structure from different network blocks. The integrated attention map reflects the machine's local interest on compound structure, and indicates the relationship between predicted compound property and its structure. This work mainly contributes to three aspects: 1. Ligandformer directly opens the black-box of deep learning methods, providing local prediction rationales on chemical structures. 2. Ligandformer gives robust prediction in different experimental rounds, overcoming the ubiquitous prediction instability of deep learning methods. 3. Ligandformer can be generalized to predict different chemical or biological properties with high performance. Furthermore, Ligandformer can simultaneously output specific property score and visible attention map on structure, which can support researchers to investigate chemical or biological property and optimize structure efficiently. Our framework outperforms over counterparts in terms of accuracy, robustness and generalization, and can be applied in complex system study. |
2301.09576 | Prashant S Alegaonkar | Charles Johnstone and Prashant S. Alegaonkar | Understanding Physical Processes in Describing a State of Consciousness:
A Review | 45, 02 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The way we view the reality of nature, including ourselves, depend on
consciousness.It also defines the identity of the person, since we know people
in terms of their experiences. In general, consciousness defines human
existence in this universe. Furthermore, consciousness is associated with the
most debated problems in physics such as the notion of observation, observer,in
the measurement problem. However,its nature, occurrence mechanism in the brain
and the definite universal locality of the consciousness are not clearly known.
Due to this consciousness is considered asan essential unresolved scientific
problem of the current era.Here, we review the physical processes which are
associated in tackling these challenges. Firstly, we discuss the association of
consciousness with transmission of signals in the brain, chain of events,
quantum phenomena process and integrated information. We also highlight the
roles of structure of matter,field, and the concept of universality towards
understanding consciousness. Finally, we propose further studies for achieving
better understanding of consciousness.
| [
{
"created": "Thu, 12 Jan 2023 09:33:06 GMT",
"version": "v1"
}
] | 2023-01-24 | [
[
"Johnstone",
"Charles",
""
],
[
"Alegaonkar",
"Prashant S.",
""
]
] | The way we view the reality of nature, including ourselves, depend on consciousness.It also defines the identity of the person, since we know people in terms of their experiences. In general, consciousness defines human existence in this universe. Furthermore, consciousness is associated with the most debated problems in physics such as the notion of observation, observer,in the measurement problem. However,its nature, occurrence mechanism in the brain and the definite universal locality of the consciousness are not clearly known. Due to this consciousness is considered asan essential unresolved scientific problem of the current era.Here, we review the physical processes which are associated in tackling these challenges. Firstly, we discuss the association of consciousness with transmission of signals in the brain, chain of events, quantum phenomena process and integrated information. We also highlight the roles of structure of matter,field, and the concept of universality towards understanding consciousness. Finally, we propose further studies for achieving better understanding of consciousness. |
2105.14409 | Niharika S. D'Souza | Niharika Shimona D'Souza, Mary Beth Nebel, Deana Crocetti, Nicholas
Wymbs, Joshua Robinson, Stewart Mostofsky, Archana Venkataraman | A Matrix Autoencoder Framework to Align the Functional and Structural
Connectivity Manifolds as Guided by Behavioral Phenotypes | null | null | null | null | q-bio.NC cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | We propose a novel matrix autoencoder to map functional connectomes from
resting state fMRI (rs-fMRI) to structural connectomes from Diffusion Tensor
Imaging (DTI), as guided by subject-level phenotypic measures. Our specialized
autoencoder infers a low dimensional manifold embedding for the rs-fMRI
correlation matrices that mimics a canonical outer-product decomposition. The
embedding is simultaneously used to reconstruct DTI tractography matrices via a
second manifold alignment decoder and to predict inter-subject phenotypic
variability via an artificial neural network. We validate our framework on a
dataset of 275 healthy individuals from the Human Connectome Project database
and on a second clinical dataset consisting of 57 subjects with Autism Spectrum
Disorder. We demonstrate that the model reliably recovers structural
connectivity patterns across individuals, while robustly extracting predictive
and interpretable brain biomarkers in a cross-validated setting. Finally, our
framework outperforms several baselines at predicting behavioral phenotypes in
both real-world datasets.
| [
{
"created": "Sun, 30 May 2021 02:06:12 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jul 2021 21:59:34 GMT",
"version": "v2"
}
] | 2021-07-13 | [
[
"D'Souza",
"Niharika Shimona",
""
],
[
"Nebel",
"Mary Beth",
""
],
[
"Crocetti",
"Deana",
""
],
[
"Wymbs",
"Nicholas",
""
],
[
"Robinson",
"Joshua",
""
],
[
"Mostofsky",
"Stewart",
""
],
[
"Venkataraman",
"Arch... | We propose a novel matrix autoencoder to map functional connectomes from resting state fMRI (rs-fMRI) to structural connectomes from Diffusion Tensor Imaging (DTI), as guided by subject-level phenotypic measures. Our specialized autoencoder infers a low dimensional manifold embedding for the rs-fMRI correlation matrices that mimics a canonical outer-product decomposition. The embedding is simultaneously used to reconstruct DTI tractography matrices via a second manifold alignment decoder and to predict inter-subject phenotypic variability via an artificial neural network. We validate our framework on a dataset of 275 healthy individuals from the Human Connectome Project database and on a second clinical dataset consisting of 57 subjects with Autism Spectrum Disorder. We demonstrate that the model reliably recovers structural connectivity patterns across individuals, while robustly extracting predictive and interpretable brain biomarkers in a cross-validated setting. Finally, our framework outperforms several baselines at predicting behavioral phenotypes in both real-world datasets. |
1712.09206 | Priyadarshini Panda | Priyadarshini Panda, and Kaushik Roy | Chaos-guided Input Structuring for Improved Learning in Recurrent Neural
Networks | 11 pages with 5 figures including supplementary material | null | null | null | q-bio.NC cs.NE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anatomical studies demonstrate that brain reformats input information to
generate reliable responses for performing computations. However, it remains
unclear how neural circuits encode complex spatio-temporal patterns. We show
that neural dynamics are strongly influenced by the phase alignment between the
input and the spontaneous chaotic activity. Input structuring along the
dominant chaotic projections causes the chaotic trajectories to become stable
channels (or attractors), hence, improving the computational capability of a
recurrent network. Using mean field analysis, we derive the impact of input
structuring on the overall stability of attractors formed. Our results indicate
that input alignment determines the extent of intrinsic noise suppression and
hence, alters the attractor state stability, thereby controlling the network's
inference ability.
| [
{
"created": "Tue, 26 Dec 2017 08:29:32 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jan 2018 15:53:23 GMT",
"version": "v2"
},
{
"created": "Sun, 18 Feb 2018 18:58:48 GMT",
"version": "v3"
}
] | 2018-02-20 | [
[
"Panda",
"Priyadarshini",
""
],
[
"Roy",
"Kaushik",
""
]
] | Anatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity. Input structuring along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence, improving the computational capability of a recurrent network. Using mean field analysis, we derive the impact of input structuring on the overall stability of attractors formed. Our results indicate that input alignment determines the extent of intrinsic noise suppression and hence, alters the attractor state stability, thereby controlling the network's inference ability. |
2109.10675 | Gustavo Mockaitis | Mahmood Mahmoodi-Eshkaftakia and Gustavo Mockaitis | Structural optimization of biohydrogen production: Impact of
pretreatments on volatile fatty acids and biogas parameters | null | Int J Hydrogen Energ. 47(11): 7072-81, 2022 | 10.1016/j.ijhydene.2021.12.088 | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The present study aims to describe an innovative approach that enables the
system to achieve high yielding for biohydrogen (bio-H$_2$) production using
xylose as a by-product of lignocellulosic biomass processing. A hybrid
optimization technique, structural modelling, desirability analysis, and
genetic algorithm could determine the optimum input factors to maximize useful
biogas parameters, especially bio-H$_2$ and CH$_4$. As found, the input factors
(pretreatment, digestion time and biogas relative pressure) and volatile fatty
acids (acetic acid, propionic acid and butyric acid) had indirectly and
significantly impacted the bio-H$_2$ and desirability score. The pretreatment
factor had the most effect on bio-H$_2$ and CH$_4$ production among the
factors, and after that, were propionic acid and digestion time. The
optimization method showed that the best pretreatment was acidic pretreatment,
digestion time > 20 h, relative pressure in a range of 300-800 mbar, acetic
acid in a range of 90-200 mg/L, propionic acid in a range of 20-150 mg/L, and
butyric acid in a range of 250-420 mg/L. These values caused to produce H$_2$ >
10.2 mmol/L, CH$_4$ > 3.9 mmol/L, N$_2$ < 15.3 mmol/L, CO$_2$ < 19.5 mmol/L,
total biogas > 0.31 L, produced biogas > 0.10 L, and accumulated biogas > 0.41
L.
| [
{
"created": "Wed, 22 Sep 2021 12:04:58 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Feb 2022 17:59:19 GMT",
"version": "v2"
}
] | 2022-02-09 | [
[
"Mahmoodi-Eshkaftakia",
"Mahmood",
""
],
[
"Mockaitis",
"Gustavo",
""
]
] | The present study aims to describe an innovative approach that enables the system to achieve high yielding for biohydrogen (bio-H$_2$) production using xylose as a by-product of lignocellulosic biomass processing. A hybrid optimization technique, structural modelling, desirability analysis, and genetic algorithm could determine the optimum input factors to maximize useful biogas parameters, especially bio-H$_2$ and CH$_4$. As found, the input factors (pretreatment, digestion time and biogas relative pressure) and volatile fatty acids (acetic acid, propionic acid and butyric acid) had indirectly and significantly impacted the bio-H$_2$ and desirability score. The pretreatment factor had the most effect on bio-H$_2$ and CH$_4$ production among the factors, and after that, were propionic acid and digestion time. The optimization method showed that the best pretreatment was acidic pretreatment, digestion time > 20 h, relative pressure in a range of 300-800 mbar, acetic acid in a range of 90-200 mg/L, propionic acid in a range of 20-150 mg/L, and butyric acid in a range of 250-420 mg/L. These values caused to produce H$_2$ > 10.2 mmol/L, CH$_4$ > 3.9 mmol/L, N$_2$ < 15.3 mmol/L, CO$_2$ < 19.5 mmol/L, total biogas > 0.31 L, produced biogas > 0.10 L, and accumulated biogas > 0.41 L. |
1207.7228 | Matthias Schultze-Kraft | Matthias Schultze-Kraft, Markus Diesmann, Sonja Gr\"un, Moritz Helias | Noise Suppression and Surplus Synchrony by Coincidence Detection | null | Schultze-Kraft M, Diesmann M, Gr\"un S, Helias M (2013) Noise
Suppression and Surplus Synchrony by Coincidence Detection. PLoS Comput Biol
9(4): e1002904 | 10.1371/journal.pcbi.1002904 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The functional significance of correlations between action potentials of
neurons is still a matter of vivid debates. In particular it is presently
unclear how much synchrony is caused by afferent synchronized events and how
much is intrinsic due to the connectivity structure of cortex. The available
analytical approaches based on the diffusion approximation do not allow to
model spike synchrony, preventing a thorough analysis. Here we theoretically
investigate to what extent common synaptic afferents and synchronized inputs
each contribute to closely time-locked spiking activity of pairs of neurons. We
employ direct simulation and extend earlier analytical methods based on the
diffusion approximation to pulse-coupling, allowing us to introduce precisely
timed correlations in the spiking activity of the synaptic afferents. We
investigate the transmission of correlated synaptic input currents by pairs of
integrate-and-fire model neurons, so that the same input covariance can be
realized by common inputs or by spiking synchrony. We identify two distinct
regimes: In the limit of low correlation linear perturbation theory accurately
determines the correlation transmission coefficient, which is typically smaller
than unity, but increases sensitively even for weakly synchronous inputs. In
the limit of high afferent correlation, in the presence of synchrony a
qualitatively new picture arises. As the non-linear neuronal response becomes
dominant, the output correlation becomes higher than the total correlation in
the input. This transmission coefficient larger unity is a direct consequence
of non-linear neural processing in the presence of noise, elucidating how
synchrony-coded signals benefit from these generic properties present in
cortical networks.
| [
{
"created": "Tue, 31 Jul 2012 12:51:28 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Aug 2012 12:29:49 GMT",
"version": "v2"
}
] | 2013-04-09 | [
[
"Schultze-Kraft",
"Matthias",
""
],
[
"Diesmann",
"Markus",
""
],
[
"Grün",
"Sonja",
""
],
[
"Helias",
"Moritz",
""
]
] | The functional significance of correlations between action potentials of neurons is still a matter of vivid debates. In particular it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow to model spike synchrony, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to closely time-locked spiking activity of pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high afferent correlation, in the presence of synchrony a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks. |
2211.02315 | Yiheng Liu | Yiheng Liu, Enjie Ge, Ning Qiang, Tianming Liu, Bao Ge | Spatial-Temporal Convolutional Attention for Mapping Functional Brain
Networks | 5 pages, 5 figures, submitted to 20th IEEE International Symposium on
Biomedical Imaging (ISBI 2023) | null | null | null | q-bio.NC cs.CV stat.ML | http://creativecommons.org/licenses/by/4.0/ | Using functional magnetic resonance imaging (fMRI) and deep learning to
explore functional brain networks (FBNs) has attracted many researchers.
However, most of these studies are still based on the temporal correlation
between the sources and voxel signals, and lack of researches on the dynamics
of brain function. Due to the widespread local correlations in the volumes,
FBNs can be generated directly in the spatial domain in a self-supervised
manner by using spatial-wise attention (SA), and the resulting FBNs has a
higher spatial similarity with templates compared to the classical method.
Therefore, we proposed a novel Spatial-Temporal Convolutional Attention (STCA)
model to discover the dynamic FBNs by using the sliding windows. To validate
the performance of the proposed method, we evaluate the approach on HCP-rest
dataset. The results indicate that STCA can be used to discover FBNs in a
dynamic way which provide a novel approach to better understand human brain.
| [
{
"created": "Fri, 4 Nov 2022 08:36:09 GMT",
"version": "v1"
}
] | 2022-11-07 | [
[
"Liu",
"Yiheng",
""
],
[
"Ge",
"Enjie",
""
],
[
"Qiang",
"Ning",
""
],
[
"Liu",
"Tianming",
""
],
[
"Ge",
"Bao",
""
]
] | Using functional magnetic resonance imaging (fMRI) and deep learning to explore functional brain networks (FBNs) has attracted many researchers. However, most of these studies are still based on the temporal correlation between the sources and voxel signals, and lack of researches on the dynamics of brain function. Due to the widespread local correlations in the volumes, FBNs can be generated directly in the spatial domain in a self-supervised manner by using spatial-wise attention (SA), and the resulting FBNs has a higher spatial similarity with templates compared to the classical method. Therefore, we proposed a novel Spatial-Temporal Convolutional Attention (STCA) model to discover the dynamic FBNs by using the sliding windows. To validate the performance of the proposed method, we evaluate the approach on HCP-rest dataset. The results indicate that STCA can be used to discover FBNs in a dynamic way which provide a novel approach to better understand human brain. |
2104.02853 | Gabriela Savioli | Juan Santos (1, 2 and 3), Jos\'e Carcione (4 and 1), Gabriela Savioli
(2) and Patricia Gauzellino (5) ((1) School of Earth Sciences and
Engineering, Hohai University, Nanjing, China, (2) Universidad de Buenos
Aires, Facultad de Ingenier\'ia, Instituto del Gas y del Petr\'oleo, Buenos
Aires, Argentina, (3) Departmet of Mathematics, Purdue University, United
States, (4) National Institute of Oceanography and Applied Geophysics, OGS,
Trieste, Italy, (5) Facultad de Ciencias Astron\'omicas y Geof\'isicas,
Universidad Nacional de La Plata, Argentina) | An SEIR epidemic model of fractional order to analyze the evolution of
the COVID-19 epidemic in Argentina | 20 pages, 14 figures. To be published as a Chapter in the book
"Analysis of Infectious Disease Problems (Covid-19) and Their Global Impact",
edited by Praveen Agarwal, Juan J. Nieto, Michael Ruzhansky, Delfim F. M.
Torres, Springer Nature | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A pandemic caused by a new coronavirus (COVID-19) has spread worldwide,
inducing an epidemic still active in Argentina. In this chapter, we present a
case study using an SEIR (Susceptible-Exposed-Infected-Recovered) diffusion
model of fractional order in time to analyze the evolution of the epidemic in
Buenos Aires and neighboring areas (Regi\'on Metropolitana de Buenos Aires,
(RMBA)) comprising about 15 million inhabitants. In the SEIR model, individuals
are divided into four classes, namely, susceptible (S), exposed (E), infected
(I) and recovered (R). The SEIR model of fractional order allows for the
incorporation of memory, with hereditary properties of the system, being a
generalization of the classic SEIR first-order system, where such effects are
ignored. Furthermore, the fractional model provides one additional parameter to
obtain a better fit of the data. The parameters of the model are calibrated by
using as data the number of casualties officially reported. Since infinite
solutions honour the data, we show a set of cases with different values of the
lockdown parameters, fatality rate, and incubation and infectious periods. The
different reproduction ratios R0 and infection fatality rates (IFR) so obtained
indicate the results may differ from recent reported values, constituting
possible alternative solutions. A comparison with results obtained with the
classic SEIR model is also included. The analysis allows us to study how
isolation and social distancing measures affect the time evolution of the
epidemic.
| [
{
"created": "Wed, 7 Apr 2021 01:42:15 GMT",
"version": "v1"
}
] | 2021-04-08 | [
[
"Santos",
"Juan",
"",
"1, 2 and 3"
],
[
"Carcione",
"José",
"",
"4 and 1"
],
[
"Savioli",
"Gabriela",
""
],
[
"Gauzellino",
"Patricia",
""
]
] | A pandemic caused by a new coronavirus (COVID-19) has spread worldwide, inducing an epidemic still active in Argentina. In this chapter, we present a case study using an SEIR (Susceptible-Exposed-Infected-Recovered) diffusion model of fractional order in time to analyze the evolution of the epidemic in Buenos Aires and neighboring areas (Regi\'on Metropolitana de Buenos Aires, (RMBA)) comprising about 15 million inhabitants. In the SEIR model, individuals are divided into four classes, namely, susceptible (S), exposed (E), infected (I) and recovered (R). The SEIR model of fractional order allows for the incorporation of memory, with hereditary properties of the system, being a generalization of the classic SEIR first-order system, where such effects are ignored. Furthermore, the fractional model provides one additional parameter to obtain a better fit of the data. The parameters of the model are calibrated by using as data the number of casualties officially reported. Since infinite solutions honour the data, we show a set of cases with different values of the lockdown parameters, fatality rate, and incubation and infectious periods. The different reproduction ratios R0 and infection fatality rates (IFR) so obtained indicate the results may differ from recent reported values, constituting possible alternative solutions. A comparison with results obtained with the classic SEIR model is also included. The analysis allows us to study how isolation and social distancing measures affect the time evolution of the epidemic. |
0901.0086 | Szymon Niewieczerza{\l} | Marek Cieplak and Szymon Niewieczerza{\l} | Hydrodynamic Interactions in Protein Folding | null | null | 10.1063/1.3050103 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We incorporate hydrodynamic interactions (HI) in a coarse-grained and
structure-based model of proteins by employing the Rotne-Prager hydrodynamic
tensor. We study several small proteins and demonstrate that HI facilitate
folding. We also study HIV-1 protease and show that HI make the flap closing
dynamics faster. The HI are found to affect time correlation functions in the
vicinity of the native state even though they have no impact on same time
characteristics of the structure fluctuations around the native state.
| [
{
"created": "Wed, 31 Dec 2008 10:54:26 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Cieplak",
"Marek",
""
],
[
"Niewieczerzał",
"Szymon",
""
]
] | We incorporate hydrodynamic interactions (HI) in a coarse-grained and structure-based model of proteins by employing the Rotne-Prager hydrodynamic tensor. We study several small proteins and demonstrate that HI facilitate folding. We also study HIV-1 protease and show that HI make the flap closing dynamics faster. The HI are found to affect time correlation functions in the vicinity of the native state even though they have no impact on same time characteristics of the structure fluctuations around the native state. |
q-bio/0602013 | Emidio Capriotti | Emidio Capriotti and Rita Casadio | The evaluation of protein folding rate constant is improved by
predicting the folding kinetic order with a SVM-based method | The paper will be published on WSEAS Transaction on Biology and
Biomedicine | null | null | null | q-bio.BM q-bio.QM | null | Protein folding is a problem of large interest since it concerns the
mechanism by which the genetic information is translated into proteins with
well defined three-dimensional (3D) structures and functions. Recently
theoretical models have been developed to predict the protein folding rate
considering the relationships of the process with tolopological parameters
derived from the native (atomic-solved) protein structures. Previous works
classified proteins in two different groups exhibiting either a
single-exponential or a multi-exponential folding kinetics. It is well known
that these two classes of proteins are related to different protein structural
features. The increasing number of available experimental kinetic data allows
the application to the problem of a machine learning approach, in order to
predict the kinetic order of the folding process starting from the experimental
data so far collected. This information can be used to improve the prediction
of the folding rate. In this work first we describe a support vector
machine-based method (SVM-KO) to predict for a given protein the kinetic order
of the folding process. Using this method we can classify correctly 78% of the
folding mechanisms over a set of 63 experimental data. Secondly we focus on the
prediction of the logarithm of the folding rate. This value can be obtained as
a linear regression task with a SVM-based method. In this paper we show that
linear correlation of the predicted with experimental data can improve when the
regression task is computed over two different sets, instead of one, each of
them composed by the proteins with a correctly predicted two state or
multistate kinetic order.
| [
{
"created": "Mon, 13 Feb 2006 13:26:00 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Capriotti",
"Emidio",
""
],
[
"Casadio",
"Rita",
""
]
] | Protein folding is a problem of large interest since it concerns the mechanism by which the genetic information is translated into proteins with well defined three-dimensional (3D) structures and functions. Recently theoretical models have been developed to predict the protein folding rate considering the relationships of the process with tolopological parameters derived from the native (atomic-solved) protein structures. Previous works classified proteins in two different groups exhibiting either a single-exponential or a multi-exponential folding kinetics. It is well known that these two classes of proteins are related to different protein structural features. The increasing number of available experimental kinetic data allows the application to the problem of a machine learning approach, in order to predict the kinetic order of the folding process starting from the experimental data so far collected. This information can be used to improve the prediction of the folding rate. In this work first we describe a support vector machine-based method (SVM-KO) to predict for a given protein the kinetic order of the folding process. Using this method we can classify correctly 78% of the folding mechanisms over a set of 63 experimental data. Secondly we focus on the prediction of the logarithm of the folding rate. This value can be obtained as a linear regression task with a SVM-based method. In this paper we show that linear correlation of the predicted with experimental data can improve when the regression task is computed over two different sets, instead of one, each of them composed by the proteins with a correctly predicted two state or multistate kinetic order. |
1412.0603 | Alexei Koulakov | Daniel D. Ferrante, Yi Wei, and Alexei A. Koulakov | Statistical model of evolution of brain parcellation | 9 pages, plenty of pictures | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the distribution of brain and cortical area sizes [parcellation
units (PUs)] obtained for three species: mouse, macaque, and human. We find
that the distribution of PU sizes is close to lognormal. We analyze the
mathematical model of evolution of brain parcellation based on iterative
fragmentation and specialization. In this model, each existing PU has a
probability to be split that depends on PU size only. This model shows that the
same evolutionary process may have led to brain parcellation in these three
species. Our model suggests that region-to-region (macro) connectivity is given
by the outer product form. We show that most experimental data on non-vanishing
macaque cortex macroconnectivity (62% for area V1) can be explained by the
outer product power-law form suggested by our model. We propose a
multiplicative Hebbian learning rule for the macroconnectome that could yield
the correct scaling of connection strengths between areas. We thus propose a
universal evolutionary model that may have contributed to both brain
parcellation and mesoscopic level connectivity in mammals.
| [
{
"created": "Mon, 1 Dec 2014 19:28:10 GMT",
"version": "v1"
}
] | 2014-12-02 | [
[
"Ferrante",
"Daniel D.",
""
],
[
"Wei",
"Yi",
""
],
[
"Koulakov",
"Alexei A.",
""
]
] | We study the distribution of brain and cortical area sizes [parcellation units (PUs)] obtained for three species: mouse, macaque, and human. We find that the distribution of PU sizes is close to lognormal. We analyze the mathematical model of evolution of brain parcellation based on iterative fragmentation and specialization. In this model, each existing PU has a probability to be split that depends on PU size only. This model shows that the same evolutionary process may have led to brain parcellation in these three species. Our model suggests that region-to-region (macro) connectivity is given by the outer product form. We show that most experimental data on non-vanishing macaque cortex macroconnectivity (62% for area V1) can be explained by the outer product power-law form suggested by our model. We propose a multiplicative Hebbian learning rule for the macroconnectome that could yield the correct scaling of connection strengths between areas. We thus propose a universal evolutionary model that may have contributed to both brain parcellation and mesoscopic level connectivity in mammals. |
1803.03742 | Nir Lahav | Nir Lahav, Baruch Ksherim, Eti Ben-Simon, Adi Maron-Katz, Reuven Cohen
and Shlomo Havlin | K-shell decomposition reveals hierarchical cortical organization of the
human brain | New Journal of Physics, Volume 18, August 2016 | null | 10.1088/1367-2630/18/8/083013 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years numerous attempts to understand the human brain were
undertaken from a network point of view. A network framework takes into account
the relationships between the different parts of the system and enables to
examine how global and complex functions might emerge from network topology.
Previous work revealed that the human brain features 'small world'
characteristics and that cortical hubs tend to interconnect among themselves.
However, in order to fully understand the topological structure of hubs one
needs to go beyond the properties of a specific hub and examine the various
structural layers of the network. To address this topic further, we applied an
analysis known in statistical physics and network theory as k-shell
decomposition analysis. The analysis was applied on a human cortical network,
derived from MRI\DSI data of six participants. Such analysis enables us to
portray a detailed account of cortical connectivity focusing on different
neighborhoods of interconnected layers across the cortex. Our findings reveal
that the human cortex is highly connected and efficient, and unlike the
internet network contains no isolated nodes. The cortical network is comprised
of a nucleus alongside shells of increasing connectivity that formed one
connected giant component. All these components were further categorized into
three hierarchies in accordance with their connectivity profile, with each
hierarchy reflecting different functional roles. Such a model may explain an
efficient flow of information from the lowest hierarchy to the highest one,
with each step enabling increased data integration. At the top, the highest
hierarchy (the nucleus) serves as a global interconnected collective and
demonstrates high correlation with consciousness related regions, suggesting
that the nucleus might serve as a platform for consciousness to emerge.
| [
{
"created": "Sat, 10 Mar 2018 02:19:09 GMT",
"version": "v1"
}
] | 2018-03-13 | [
[
"Lahav",
"Nir",
""
],
[
"Ksherim",
"Baruch",
""
],
[
"Ben-Simon",
"Eti",
""
],
[
"Maron-Katz",
"Adi",
""
],
[
"Cohen",
"Reuven",
""
],
[
"Havlin",
"Shlomo",
""
]
] | In recent years numerous attempts to understand the human brain were undertaken from a network point of view. A network framework takes into account the relationships between the different parts of the system and enables to examine how global and complex functions might emerge from network topology. Previous work revealed that the human brain features 'small world' characteristics and that cortical hubs tend to interconnect among themselves. However, in order to fully understand the topological structure of hubs one needs to go beyond the properties of a specific hub and examine the various structural layers of the network. To address this topic further, we applied an analysis known in statistical physics and network theory as k-shell decomposition analysis. The analysis was applied on a human cortical network, derived from MRI\DSI data of six participants. Such analysis enables us to portray a detailed account of cortical connectivity focusing on different neighborhoods of interconnected layers across the cortex. Our findings reveal that the human cortex is highly connected and efficient, and unlike the internet network contains no isolated nodes. The cortical network is comprised of a nucleus alongside shells of increasing connectivity that formed one connected giant component. All these components were further categorized into three hierarchies in accordance with their connectivity profile, with each hierarchy reflecting different functional roles. Such a model may explain an efficient flow of information from the lowest hierarchy to the highest one, with each step enabling increased data integration. At the top, the highest hierarchy (the nucleus) serves as a global interconnected collective and demonstrates high correlation with consciousness related regions, suggesting that the nucleus might serve as a platform for consciousness to emerge. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.