id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1605.07074 | Pedro Antonio Vald\'es-Hern\'andez | Pedro A. Valdes-Hernandez, Thomas Knoesche | Initial conditions in the neural field model | null | null | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In spite of the large amount of existing neural models in the literature,
there is a lack of a systematic review of the possible effect of choosing
different initial conditions on the dynamic evolution of neural systems. In
this short review we intend to give insights into this topic by discussing some
published examples. First, we briefly introduce the different ingredients of a
neural dynamical model. Secondly, we introduce some concepts used to describe
the dynamic behavior of neural models, namely phase space and its portraits,
time series, spectra, multistability and bifurcations. We end with an analysis
of the irreversibility of processes and its implications on the functioning of
normal and pathological brains.
| [
{
"created": "Mon, 23 May 2016 16:15:21 GMT",
"version": "v1"
}
] | 2016-05-24 | [
[
"Valdes-Hernandez",
"Pedro A.",
""
],
[
"Knoesche",
"Thomas",
""
]
] | In spite of the large amount of existing neural models in the literature, there is a lack of a systematic review of the possible effect of choosing different initial conditions on the dynamic evolution of neural systems. In this short review we intend to give insights into this topic by discussing some published examples. First, we briefly introduce the different ingredients of a neural dynamical model. Secondly, we introduce some concepts used to describe the dynamic behavior of neural models, namely phase space and its portraits, time series, spectra, multistability and bifurcations. We end with an analysis of the irreversibility of processes and its implications on the functioning of normal and pathological brains. |
1310.3234 | Darren Kessner | Darren Kessner and John Novembre | forqs: Forward-in-time Simulation of Recombination, Quantitative Traits,
and Selection | preprint include Supplementary Information.
https://bitbucket.org/dkessner/forqs | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | forqs is a forward-in-time simulation of recombination, quantitative traits,
and selection. It was designed to investigate haplotype patterns resulting from
scenarios where substantial evolutionary change has taken place in a small
number of generations due to recombination and/or selection on polygenic
quantitative traits. forqs is implemented as a command-line C++ program.
Source code and binary executables for Linux, OSX, and Windows are freely
available under a permissive BSD license.
| [
{
"created": "Fri, 11 Oct 2013 18:31:09 GMT",
"version": "v1"
}
] | 2013-10-14 | [
[
"Kessner",
"Darren",
""
],
[
"Novembre",
"John",
""
]
] | forqs is a forward-in-time simulation of recombination, quantitative traits, and selection. It was designed to investigate haplotype patterns resulting from scenarios where substantial evolutionary change has taken place in a small number of generations due to recombination and/or selection on polygenic quantitative traits. forqs is implemented as a command-line C++ program. Source code and binary executables for Linux, OSX, and Windows are freely available under a permissive BSD license. |
1302.5507 | Ruibang Luo | Ruibang Luo, Thomas Wong, Jianqiao Zhu, Chi-Man Liu, Edward Wu,
Lap-Kei Lee, Haoxiang Lin, Wenjuan Zhu, David W. Cheung, Hing-Fung Ting,
Siu-Ming Yiu, Chang Yu, Yingrui Li, Ruiqiang Li, Tak-Wah Lam | SOAP3-dp: Fast, Accurate and Sensitive GPU-based Short Read Aligner | 21 pages, 6 figures, submitted to PLoS ONE, additional files
available at "https://www.dropbox.com/sh/bhclhxpoiubh371/O5CO_CkXQE".
Comments most welcome | null | 10.1371/journal.pone.0065632 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To tackle the exponentially increasing throughput of Next-Generation
Sequencing (NGS), most of the existing short-read aligners can be configured to
favor speed in trade of accuracy and sensitivity. SOAP3-dp, through leveraging
the computational power of both CPU and GPU with optimized algorithms, delivers
high speed and sensitivity simultaneously. Compared with widely adopted
aligners including BWA, Bowtie2, SeqAlto, GEM and GPU-based aligners including
BarraCUDA and CUSHAW, SOAP3-dp is two to tens of times faster, while
maintaining the highest sensitivity and lowest false discovery rate (FDR) on
Illumina reads with different lengths. Transcending its predecessor SOAP3,
which does not allow gapped alignment, SOAP3-dp by default tolerates alignment
similarity as low as 60 percent. Real data evaluation using human genome
demonstrates SOAP3-dp's power to enable more authentic variants and longer
Indels to be discovered. Fosmid sequencing shows a 9.1 percent FDR on newly
discovered deletions. SOAP3-dp natively supports BAM file format and provides a
scoring scheme same as BWA, which enables it to be integrated into existing
analysis pipelines. SOAP3-dp has been deployed on Amazon-EC2, NIH-Biowulf and
Tianhe-1A.
| [
{
"created": "Fri, 22 Feb 2013 07:56:11 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Mar 2013 03:23:24 GMT",
"version": "v2"
}
] | 2015-06-15 | [
[
"Luo",
"Ruibang",
""
],
[
"Wong",
"Thomas",
""
],
[
"Zhu",
"Jianqiao",
""
],
[
"Liu",
"Chi-Man",
""
],
[
"Wu",
"Edward",
""
],
[
"Lee",
"Lap-Kei",
""
],
[
"Lin",
"Haoxiang",
""
],
[
"Zhu",
"Wenjuan",
""
],
[
"Cheung",
"David W.",
""
],
[
"Ting",
"Hing-Fung",
""
],
[
"Yiu",
"Siu-Ming",
""
],
[
"Yu",
"Chang",
""
],
[
"Li",
"Yingrui",
""
],
[
"Li",
"Ruiqiang",
""
],
[
"Lam",
"Tak-Wah",
""
]
] | To tackle the exponentially increasing throughput of Next-Generation Sequencing (NGS), most of the existing short-read aligners can be configured to favor speed in trade of accuracy and sensitivity. SOAP3-dp, through leveraging the computational power of both CPU and GPU with optimized algorithms, delivers high speed and sensitivity simultaneously. Compared with widely adopted aligners including BWA, Bowtie2, SeqAlto, GEM and GPU-based aligners including BarraCUDA and CUSHAW, SOAP3-dp is two to tens of times faster, while maintaining the highest sensitivity and lowest false discovery rate (FDR) on Illumina reads with different lengths. Transcending its predecessor SOAP3, which does not allow gapped alignment, SOAP3-dp by default tolerates alignment similarity as low as 60 percent. Real data evaluation using human genome demonstrates SOAP3-dp's power to enable more authentic variants and longer Indels to be discovered. Fosmid sequencing shows a 9.1 percent FDR on newly discovered deletions. SOAP3-dp natively supports BAM file format and provides a scoring scheme same as BWA, which enables it to be integrated into existing analysis pipelines. SOAP3-dp has been deployed on Amazon-EC2, NIH-Biowulf and Tianhe-1A. |
1103.4621 | Armando G. M. Neves | Armando G. M. Neves and Maurizio Serva | Extremely rare interbreeding events can explain Neanderthal DNA in
modern humans | 26 pages, 6 figures, updated version | null | null | null | q-bio.PE math-ph math.MP math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considering the recent experimental discovery of Green et al that present day
non-Africans have 1 to 4% of their nuclear DNA of Neanderthal origin, we
propose here a model which is able to quantify the interbreeding events between
Africans and Neanderthals at the time they coexisted in the Middle East. The
model consists of a solvable system of deterministic ordinary differential
equations containing as a stochastic ingredient a realization of the neutral
Wright-Fisher drift process. By simulating the stochastic part of the model we
are able to apply it to the interbreeding of African and Neanderthal
subpopulations and estimate the only parameter of the model, which is the
number of individuals per generation exchanged between subpopulations. Our
results indicate that the amount of Neanderthal DNA in non-Africans can be
explained with maximum probability by the exchange of a single pair of
individuals between the subpopulations every 77 generations, but larger
exchange frequencies are also allowed with sizeable probability. The results
are compatible with a total interbreeding population of order 10,000
individuals and with all living humans being descendants of Africans both for
mitochondrial DNA and Y chromosome.
| [
{
"created": "Wed, 23 Mar 2011 20:23:11 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Jun 2011 14:20:42 GMT",
"version": "v2"
}
] | 2011-06-06 | [
[
"Neves",
"Armando G. M.",
""
],
[
"Serva",
"Maurizio",
""
]
] | Considering the recent experimental discovery of Green et al that present day non-Africans have 1 to 4% of their nuclear DNA of Neanderthal origin, we propose here a model which is able to quantify the interbreeding events between Africans and Neanderthals at the time they coexisted in the Middle East. The model consists of a solvable system of deterministic ordinary differential equations containing as a stochastic ingredient a realization of the neutral Wright-Fisher drift process. By simulating the stochastic part of the model we are able to apply it to the interbreeding of African and Neanderthal subpopulations and estimate the only parameter of the model, which is the number of individuals per generation exchanged between subpopulations. Our results indicate that the amount of Neanderthal DNA in non-Africans can be explained with maximum probability by the exchange of a single pair of individuals between the subpopulations every 77 generations, but larger exchange frequencies are also allowed with sizeable probability. The results are compatible with a total interbreeding population of order 10,000 individuals and with all living humans being descendants of Africans both for mitochondrial DNA and Y chromosome. |
2311.03394 | Gopinath Sadhu | Gopinath Sadhu, K S Yadav, Siddhartha Sankar Ghosh and D C Dalal | On impact of oxygen distribution on tumor necrotic region: A Multiphase
Model | null | null | null | null | q-bio.TO math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background and Objective: In an in-vivo situation, the tissue near the blood
vessels is rich in oxygen supply compared to the one far from blood vessels.
Hence, non-uniform oxygen distribution is observed in biological tissues. Our
objective is to explore the influence of non-uniform oxygen supply in the
development of necrotic core and, also to examine the effects of necrotic core
on tumor growth. Methods: The research is processed through a mathematical
approach based on the multiphase mathematical model. To simulate the model, a
finite difference numerical method based on the Semi-Implicit Method for
Pressure-Linked Equations (SIMPLE) algorithm is adopted. Results: The necrotic
core starts to form at the boundary of the tumor with lower oxygen
concentration from the initial time. Investigations reveal that the position of
the necrotic core varies depending on the oxygen supply through the tumor
boundary. The results predict asymmetrical tumor growth under unequal oxygen
supply at tumor boundaries. Also, it is hinted that a tumor with a larger size
of necrotic core grows slowly as compared to a tumor containing a smaller size
of necrotic core. Conclusions: The formulated model has the potential to
capture tumor growth in both in-vivo and in-vitro situations. This
study provides an idea about the location and shape of the necrotic core and
the impact of the necrotic core on tumor growth. This information will be
beneficial to the clinicians and medical practitioners in predicting the stage
of the disease.
| [
{
"created": "Sun, 5 Nov 2023 05:37:33 GMT",
"version": "v1"
}
] | 2023-11-08 | [
[
"Sadhu",
"Gopinath",
""
],
[
"Yadav",
"K S",
""
],
[
"Ghosh",
"Siddhartha Sankar",
""
],
[
"Dalal",
"D C",
""
]
] | Background and Objective: In an in-vivo situation, the tissue near the blood vessels is rich in oxygen supply compared to the one far from blood vessels. Hence, non-uniform oxygen distribution is observed in biological tissues. Our objective is to explore the influence of non-uniform oxygen supply in the development of necrotic core and, also to examine the effects of necrotic core on tumor growth. Methods: The research is processed through a mathematical approach based on the multiphase mathematical model. To simulate the model, a finite difference numerical method based on the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm is adopted. Results: The necrotic core starts to form at the boundary of the tumor with lower oxygen concentration from the initial time. Investigations reveal that the position of the necrotic core varies depending on the oxygen supply through the tumor boundary. The results predict asymmetrical tumor growth under unequal oxygen supply at tumor boundaries. Also, it is hinted that a tumor with a larger size of necrotic core grows slowly as compared to a tumor containing a smaller size of necrotic core. Conclusions: The formulated model has the potential to capture tumor growth in both in-vivo and in-vitro situations. This study provides an idea about the location and shape of the necrotic core and the impact of the necrotic core on tumor growth. This information will be beneficial to the clinicians and medical practitioners in predicting the stage of the disease. |
1611.08760 | Robert Cameron | R. P. Cameron, J. A. Cameron and S. M. Barnett | Stegosaurus chirality | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explain that Stegosaurus exhibited exterior chirality and observe that the
largest plate in particular of USNM 4394, USNM 4714, DMNS 2818 and NHMUK R36730
appears to have tilted to the right rather than to the left in each case.
Several instances in which Stegosaurus specimens have been confused with their
distinct, hypothetical mirror-image forms are highlighted. We believe our
findings to be consistent with the hypothesis that Stegosaurus's plates acted
primarily as display structures. A collection of more than one Stegosaurus
might be referred to henceforth as a 'handful' of Stegosaurus.
| [
{
"created": "Sat, 26 Nov 2016 23:14:34 GMT",
"version": "v1"
}
] | 2016-11-29 | [
[
"Cameron",
"R. P.",
""
],
[
"Cameron",
"J. A.",
""
],
[
"Barnett",
"S. M.",
""
]
] | We explain that Stegosaurus exhibited exterior chirality and observe that the largest plate in particular of USNM 4394, USNM 4714, DMNS 2818 and NHMUK R36730 appears to have tilted to the right rather than to the left in each case. Several instances in which Stegosaurus specimens have been confused with their distinct, hypothetical mirror-image forms are highlighted. We believe our findings to be consistent with the hypothesis that Stegosaurus's plates acted primarily as display structures. A collection of more than one Stegosaurus might be referred to henceforth as a 'handful' of Stegosaurus. |
1506.04443 | Barry Slaff | Barry M. Slaff, Shane T. Jensen, and Aalim M. Weljie | Probabilistic Approach for Evaluating Metabolite Sample Integrity | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of metabolomics studies depends upon the "fitness" of each
biological sample used for analysis: it is critical that metabolite levels
reported for a biological sample represent an accurate snapshot of the studied
organism's metabolite profile at time of sample collection. Numerous factors
may compromise metabolite sample fitness, including chemical and biological
factors which intervene during sample collection, handling, storage, and
preparation for analysis. We propose a probabilistic model for the quantitative
assessment of metabolite sample fitness. Collection and processing of nuclear
magnetic resonance (NMR) and ultra-performance liquid chromatography (UPLC-MS)
metabolomics data is discussed. Feature selection methods utilized for
multivariate data analysis are briefly reviewed, including feature clustering
and computation of latent vectors using spectral methods. We propose that the
time-course of metabolite changes in samples stored at different temperatures
may be utilized to identify changing-metabolite-to-stable-metabolite ratios as
markers of sample fitness. Tolerance intervals may be computed to characterize
these ratios among fresh samples. In order to discover additional structure in
the data relevant to sample fitness, we propose using data labeled according to
these ratios to train a Dirichlet process mixture model (DPMM) for assessing
sample fitness. DPMMs are highly intuitive since they model the metabolite
levels in a sample as arising from a combination of processes including, e.g.,
normal biological processes and degradation- or contamination-inducing
processes. The outputs of a DPMM are probabilities that a sample is associated
with a given process, and these probabilities may be incorporated into a final
classifier for sample fitness.
| [
{
"created": "Sun, 14 Jun 2015 21:50:09 GMT",
"version": "v1"
}
] | 2015-06-16 | [
[
"Slaff",
"Barry M.",
""
],
[
"Jensen",
"Shane T.",
""
],
[
"Weljie",
"Aalim M.",
""
]
] | The success of metabolomics studies depends upon the "fitness" of each biological sample used for analysis: it is critical that metabolite levels reported for a biological sample represent an accurate snapshot of the studied organism's metabolite profile at time of sample collection. Numerous factors may compromise metabolite sample fitness, including chemical and biological factors which intervene during sample collection, handling, storage, and preparation for analysis. We propose a probabilistic model for the quantitative assessment of metabolite sample fitness. Collection and processing of nuclear magnetic resonance (NMR) and ultra-performance liquid chromatography (UPLC-MS) metabolomics data is discussed. Feature selection methods utilized for multivariate data analysis are briefly reviewed, including feature clustering and computation of latent vectors using spectral methods. We propose that the time-course of metabolite changes in samples stored at different temperatures may be utilized to identify changing-metabolite-to-stable-metabolite ratios as markers of sample fitness. Tolerance intervals may be computed to characterize these ratios among fresh samples. In order to discover additional structure in the data relevant to sample fitness, we propose using data labeled according to these ratios to train a Dirichlet process mixture model (DPMM) for assessing sample fitness. DPMMs are highly intuitive since they model the metabolite levels in a sample as arising from a combination of processes including, e.g., normal biological processes and degradation- or contamination-inducing processes. The outputs of a DPMM are probabilities that a sample is associated with a given process, and these probabilities may be incorporated into a final classifier for sample fitness. |
1910.08157 | Thomas Booth | Thomas Booth | An Update on Machine Learning in Neuro-oncology Diagnostics | arXiv admin note: substantial text overlap with arXiv:1910.07440 | null | 10.1007/978-3-030-11723-8_4 | null | q-bio.QM cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imaging biomarkers in neuro-oncology are used for diagnosis, prognosis and
treatment response monitoring. Magnetic resonance imaging is typically used
throughout the patient pathway because routine structural imaging provides
detailed anatomical and pathological information and advanced techniques
provide additional physiological detail. Following image feature extraction,
machine learning allows accurate classification in a variety of scenarios.
Machine learning also enables image feature extraction de novo although the low
prevalence of brain tumours makes such approaches challenging. Much research is
applied to determining molecular profiles, histological tumour grade and
prognosis at the time that patients first present with a brain tumour.
Following treatment, differentiating a treatment response from a post-treatment
related effect is clinically important and also an area of study. Most of the
evidence is low level having been obtained retrospectively and in single
centres.
| [
{
"created": "Fri, 9 Aug 2019 08:57:51 GMT",
"version": "v1"
}
] | 2019-10-21 | [
[
"Booth",
"Thomas",
""
]
] | Imaging biomarkers in neuro-oncology are used for diagnosis, prognosis and treatment response monitoring. Magnetic resonance imaging is typically used throughout the patient pathway because routine structural imaging provides detailed anatomical and pathological information and advanced techniques provide additional physiological detail. Following image feature extraction, machine learning allows accurate classification in a variety of scenarios. Machine learning also enables image feature extraction de novo although the low prevalence of brain tumours makes such approaches challenging. Much research is applied to determining molecular profiles, histological tumour grade and prognosis at the time that patients first present with a brain tumour. Following treatment, differentiating a treatment response from a post-treatment related effect is clinically important and also an area of study. Most of the evidence is low level having been obtained retrospectively and in single centres. |
1803.07352 | Heiko Sch\"utt | Heiko H. Sch\"utt, Lars O. M. Rothkegel, Hans A. Trukenbrod, Ralf
Engbert, Felix A. Wichmann | Disentangling top-down vs. bottom-up and low-level vs. high-level
influences on eye movements over time | Submitted to Journal of Vision | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bottom-up and top-down, as well as low-level and high-level factors influence
where we fixate when viewing natural scenes. However, the importance of each of
these factors and how they interact remains a matter of debate. Here, we
disentangle these factors by analysing their influence over time. For this
purpose we develop a saliency model which is based on the internal
representation of a recent early spatial vision model to measure the low-level
bottom-up factor. To measure the influence of high-level bottom-up features, we
use a recent DNN-based saliency model. To account for top-down influences, we
evaluate the models on two large datasets with different tasks: first, a
memorisation task and, second, a search task. Our results lend support to a
separation of visual scene exploration into three phases: The first saccade, an
initial guided exploration characterised by a gradual broadening of the
fixation density, and a steady state which is reached after roughly 10
fixations. Saccade target selection during the initial exploration and in the
steady state are related to similar areas of interest, which are better
predicted when including high-level features. In the search dataset, fixation
locations are determined predominantly by top-down processes. In contrast, the
first fixation follows a different fixation density and contains a strong
central fixation bias. Nonetheless, first fixations are guided strongly by
image properties and as early as 200 ms after image onset, fixations are better
predicted by high-level information. We conclude that any low-level bottom-up
factors are mainly limited to the generation of the first saccade. All saccades
are better explained when high-level features are considered, and later this
high-level bottom-up control can be overruled by top-down influences.
| [
{
"created": "Tue, 20 Mar 2018 10:33:44 GMT",
"version": "v1"
},
{
"created": "Thu, 17 May 2018 03:45:47 GMT",
"version": "v2"
}
] | 2018-05-18 | [
[
"Schütt",
"Heiko H.",
""
],
[
"Rothkegel",
"Lars O. M.",
""
],
[
"Trukenbrod",
"Hans A.",
""
],
[
"Engbert",
"Ralf",
""
],
[
"Wichmann",
"Felix A.",
""
]
] | Bottom-up and top-down, as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analysing their influence over time. For this purpose we develop a saliency model which is based on the internal representation of a recent early spatial vision model to measure the low-level bottom-up factor. To measure the influence of high-level bottom-up features, we use a recent DNN-based saliency model. To account for top-down influences, we evaluate the models on two large datasets with different tasks: first, a memorisation task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: The first saccade, an initial guided exploration characterised by a gradual broadening of the fixation density, and a steady state which is reached after roughly 10 fixations. Saccade target selection during the initial exploration and in the steady state are related to similar areas of interest, which are better predicted when including high-level features. In the search dataset, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later this high-level bottom-up control can be overruled by top-down influences. |
1602.05227 | Konstantin Blyuss | N. Sherborne, K.B. Blyuss, I.Z. Kiss | Compact pairwise models for epidemics with multiple infectious stages on
degree heterogeneous and clustered networks | 22 pages, 9 figures | J. Theor. Biol. 407, 387-400 (2016) | 10.1016/j.jtbi.2016.07.015 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a compact pairwise model that describes the spread of
multi-stage epidemics on networks. The multi-stage model corresponds to a
gamma-distributed infectious period which interpolates between the classical
Markovian models with exponentially distributed infectious period and epidemics
with a constant infectious period. We show how the compact approach leads to a
system of equations whose size is independent of the range of node degrees,
thus significantly reducing the complexity of the model. Network clustering is
incorporated into the model to provide a more accurate representation of
realistic contact networks, and the accuracy of proposed closures is analysed
for different levels of clustering and number of infection stages. Our results
support recent findings that standard closure techniques are likely to perform
better when the infectious period is constant.
| [
{
"created": "Sun, 14 Feb 2016 16:45:26 GMT",
"version": "v1"
}
] | 2016-12-08 | [
[
"Sherborne",
"N.",
""
],
[
"Blyuss",
"K. B.",
""
],
[
"Kiss",
"I. Z.",
""
]
] | This paper presents a compact pairwise model that describes the spread of multi-stage epidemics on networks. The multi-stage model corresponds to a gamma-distributed infectious period which interpolates between the classical Markovian models with exponentially distributed infectious period and epidemics with a constant infectious period. We show how the compact approach leads to a system of equations whose size is independent of the range of node degrees, thus significantly reducing the complexity of the model. Network clustering is incorporated into the model to provide a more accurate representation of realistic contact networks, and the accuracy of proposed closures is analysed for different levels of clustering and number of infection stages. Our results support recent findings that standard closure techniques are likely to perform better when the infectious period is constant. |
2211.12856 | Rafael Navajas-P\'erez | Gregor Mendel (translated by Juan Rojas-Garc\'ia and Rafael
Navajas-P\'erez) | Experimentos sobre la Hibridaci\'on en Plantas | in Spanish language | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Mendel performed his experiments from 1856 and 1863, presented his results in
two meetings of the Natural Science Society in Brunn in February and March of
1865, and finally published them in the iconic paper of 1866, Versuche uber
Pflanzenhybriden. Two main translations to Spanish are available: the one done
by Prevosti in 1977, and a more recent one by Gomez-Graciani in 2014. Both were
translated from the English version of Druery and Bateson. Here, we present the
first direct translation from German to Spanish of this paper. This new version
corrects some errors detected in the previous versions, uses a more agile style
and includes the Darwinized terms pointed out by Abbott and Fairbanks in a
recent paper (Genetics, 204(2):401-405, 2016; doi:
10.1534/genetics.116.194613).
| [
{
"created": "Wed, 23 Nov 2022 11:02:14 GMT",
"version": "v1"
}
] | 2022-11-24 | [
[
"Mendel",
"Gregor",
"",
"translated by Juan Rojas-García and Rafael\n Navajas-Pérez"
]
] | Mendel performed his experiments from 1856 to 1863, presented his results in two meetings of the Natural Science Society in Brunn in February and March of 1865, and finally published them in the iconic paper of 1866, Versuche uber Pflanzenhybriden. Two main translations to Spanish are available: the one done by Prevosti in 1977, and a more recent one by Gomez-Graciani in 2014. Both were translated from the English version of Druery and Bateson. Here, we present the first direct translation from German to Spanish of this paper. This new version corrects some errors detected in the previous versions, uses a more agile style and includes the Darwinized terms pointed out by Abbott and Fairbanks in a recent paper (Genetics, 204(2):401-405, 2016; doi: 10.1534/genetics.116.194613). |
2306.06156 | Timothy Truong Jr | Timothy F. Truong Jr, Tristan Bepler | PoET: A generative model of protein families as sequences-of-sequences | null | Advances in Neural Information Processing Systems (Vol. 36), 2023 | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative protein language models are a natural way to design new proteins
with desired functions. However, current models are either difficult to direct
to produce a protein from a specific family of interest, or must be trained on
a large multiple sequence alignment (MSA) from the specific family of interest,
making them unable to benefit from transfer learning across families. To
address this, we propose $\textbf{P}$r$\textbf{o}$tein $\textbf{E}$volutionary
$\textbf{T}$ransformer (PoET), an autoregressive generative model of whole
protein families that learns to generate sets of related proteins as
sequences-of-sequences across tens of millions of natural protein sequence
clusters. PoET can be used as a retrieval-augmented language model to generate
and score arbitrary modifications conditioned on any protein family of
interest, and can extrapolate from short context lengths to generalize well
even for small families. This is enabled by a unique Transformer layer; we
model tokens sequentially within sequences while attending between sequences
order invariantly, allowing PoET to scale to context lengths beyond those used
during training. In extensive experiments on deep mutational scanning datasets,
we show that PoET outperforms existing protein language models and evolutionary
sequence models for variant function prediction across proteins of all MSA
depths. We also demonstrate PoET's ability to controllably generate new protein
sequences.
| [
{
"created": "Fri, 9 Jun 2023 16:06:36 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Oct 2023 13:48:49 GMT",
"version": "v2"
},
{
"created": "Wed, 1 Nov 2023 12:34:47 GMT",
"version": "v3"
}
] | 2024-01-08 | [
[
"Truong",
"Timothy F.",
"Jr"
],
[
"Bepler",
"Tristan",
""
]
] | Generative protein language models are a natural way to design new proteins with desired functions. However, current models are either difficult to direct to produce a protein from a specific family of interest, or must be trained on a large multiple sequence alignment (MSA) from the specific family of interest, making them unable to benefit from transfer learning across families. To address this, we propose $\textbf{P}$r$\textbf{o}$tein $\textbf{E}$volutionary $\textbf{T}$ransformer (PoET), an autoregressive generative model of whole protein families that learns to generate sets of related proteins as sequences-of-sequences across tens of millions of natural protein sequence clusters. PoET can be used as a retrieval-augmented language model to generate and score arbitrary modifications conditioned on any protein family of interest, and can extrapolate from short context lengths to generalize well even for small families. This is enabled by a unique Transformer layer; we model tokens sequentially within sequences while attending between sequences order invariantly, allowing PoET to scale to context lengths beyond those used during training. In extensive experiments on deep mutational scanning datasets, we show that PoET outperforms existing protein language models and evolutionary sequence models for variant function prediction across proteins of all MSA depths. We also demonstrate PoET's ability to controllably generate new protein sequences. |
1202.1266 | Chia Ying Lee | Chia Ying Lee | Stochastic simulation of biochemical systems with randomly fluctuating
rate constants | null | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an experimental study of single enzyme reactions, it has been proposed
that the rate constants of the enzymatic reactions fluctuate randomly,
according to a given distribution. To quantify the uncertainty arising from
random rate constants, it is necessary to investigate how one can simulate such
a biochemical system. To do this, we will take Gillespie's stochastic
simulation algorithm for simulating the evolution of the state of a chemical
system, and study a modification of the algorithm that incorporates the random
rate constants. In addition to simulating the waiting time of each reaction
step, the modified algorithm also involves simulating the random fluctuation of
the rate constant at each reaction time. We consider the modified algorithm in
a general framework, then specialize it to two contrasting physical models, one
in which the fluctuations occur on a much faster time scale than the reaction
step, and the other in which the fluctuations occur much more slowly. The
latter case was applied to the single enzyme reaction system, using in part the
Metropolis-Hastings algorithm to enact the given distribution on the random
rate constants. The modified algorithm is shown to produce simulation outputs
that are corroborated by the experimental results. It is hoped that this
modified algorithm can subsequently be used as a tool for the estimation or
calibration of parameters in the system using experimental data.
| [
{
"created": "Mon, 6 Feb 2012 20:28:20 GMT",
"version": "v1"
}
] | 2012-02-07 | [
[
"Lee",
"Chia Ying",
""
]
] | In an experimental study of single enzyme reactions, it has been proposed that the rate constants of the enzymatic reactions fluctuate randomly, according to a given distribution. To quantify the uncertainty arising from random rate constants, it is necessary to investigate how one can simulate such a biochemical system. To do this, we will take the Gillespie's stochastic simulation algorithm for simulating the evolution of the state of a chemical system, and study a modification of the algorithm that incorporates the random rate constants. In addition to simulating the waiting time of each reaction step, the modified algorithm also involves simulating the random fluctuation of the rate constant at each reaction time. We consider the modified algorithm in a general framework, then specialize it to two contrasting physical models, one in which the fluctuations occur on a much faster time scale than the reaction step, and the other in which the fluctuations occur much more slowly. The latter case was applied to the single enzyme reaction system, using in part the Metropolis-Hastings algorithm to enact the given distribution on the random rate constants. The modified algorithm is shown to produce simulation outputs that are corroborated by the experimental results. It is hoped that this modified algorithm can subsequently be used as a tool for the estimation or calibration of parameters in the system using experimental data. |
1804.01906 | Yannik Stradmann | Syed Ahmed Aamir, Yannik Stradmann, Paul M\"uller, Christian Pehle,
Andreas Hartel, Andreas Gr\"ubl, Johannes Schemmel and Karlheinz Meier | An Accelerated LIF Neuronal Network Array for a Large Scale Mixed-Signal
Neuromorphic Architecture | 14 pages, 9 Figures, accepted for publication in IEEE Transactions on
Circuits and Systems I | null | 10.1109/TCSI.2018.2840718 | null | q-bio.NC cs.ET physics.bio-ph physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an array of leaky integrate-and-fire (LIF) neuron circuits
designed for the second-generation BrainScaleS mixed-signal 65-nm CMOS
neuromorphic hardware. The neuronal array is embedded in the analog network
core of a scaled-down prototype HICANN-DLS chip. Designed as continuous-time
circuits, the neurons are highly tunable and reconfigurable elements with
accelerated dynamics. Each neuron integrates input current from a multitude of
incoming synapses and evokes a digital spike event output. The circuit offers a
wide tuning range for synaptic and membrane time constants, as well as for
refractory periods to cover a number of computational models. We elucidate our
design methodology, underlying circuit design, calibration and measurement
results from individual sub-circuits across multiple dies. The circuit dynamics
match the behavior of the LIF mathematical model. We further demonstrate a
winner-take-all network on the prototype chip as a typical element of cortical
processing.
| [
{
"created": "Thu, 5 Apr 2018 15:19:00 GMT",
"version": "v1"
},
{
"created": "Tue, 15 May 2018 11:44:07 GMT",
"version": "v2"
},
{
"created": "Wed, 23 May 2018 12:40:58 GMT",
"version": "v3"
}
] | 2019-03-28 | [
[
"Aamir",
"Syed Ahmed",
""
],
[
"Stradmann",
"Yannik",
""
],
[
"Müller",
"Paul",
""
],
[
"Pehle",
"Christian",
""
],
[
"Hartel",
"Andreas",
""
],
[
"Grübl",
"Andreas",
""
],
[
"Schemmel",
"Johannes",
""
],
[
"Meier",
"Karlheinz",
""
]
] | We present an array of leaky integrate-and-fire (LIF) neuron circuits designed for the second-generation BrainScaleS mixed-signal 65-nm CMOS neuromorphic hardware. The neuronal array is embedded in the analog network core of a scaled-down prototype HICANN-DLS chip. Designed as continuous-time circuits, the neurons are highly tunable and reconfigurable elements with accelerated dynamics. Each neuron integrates input current from a multitude of incoming synapses and evokes a digital spike event output. The circuit offers a wide tuning range for synaptic and membrane time constants, as well as for refractory periods to cover a number of computational models. We elucidate our design methodology, underlying circuit design, calibration and measurement results from individual sub-circuits across multiple dies. The circuit dynamics match with the behavior of the LIF mathematical model. We further demonstrate a winner-take-all network on the prototype chip as a typical element of cortical processing. |
1611.03488 | Yi-Xiang Wang | Xian Jun Zeng, Min Deng, Yi Xiang Wang, James F. Griffith, Lai Chang
He, Anthony W. L. Kwok, Jason C. S. Leung, Timothy Kwok, Ping Chung Leung | Prevalence of algorithm-based qualitative (ABQ) method osteoporotic
vertebral fracture in elderly Chinese men and women with reference to
semi-quantitative (SQ) method: Mr. Os and Ms Os. (Hong Kong) studies | 26 pages,4 figures, 6 tables | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Introduction: This study evaluated algorithm-based qualitative (ABQ) method
for vertebral fracture (VF) evaluation with reference to semi-quantitative (SQ)
method and bone mineral density (BMD) measurement. Methods: Mr. OS (Hong Kong)
and Ms. OS (Hong Kong) represent the first large-scale cohort studies on bone
health in elderly Chinese men and women. The current study compared Genant's SQ
method and ABQ method in these two cohorts. Based on quantitative measurement,
the severity of ABQ method detected fractures was additionally classified into
grade-1, grade-2, and grade-3 according to SQ's deformity criteria. The
radiographs of 1,954 elderly Chinese men (mean: 72.3 years) and 1,953 elderly
Chinese women (mean: 72.5 years) were evaluated. Results: according to ABQ,
grade-1,-2,-3 VFs accounted for 1.89%, 1.74%, 2.25% in men, and 3.33%, 3.07%,
and 5.53% in women. In men and women, 15.7% (35/223) and 34.5% (48/139) of
vertebrae with SQ grade-1 deformity were ABQ(+, with fracture) respectively. In
men and women, 89.7% (35/39) and 66.7% (48/72) of vertebrae with ABQ grade-1
fracture had SQ grade-1 deformity. For grade-1 change, SQ (-, negative without
fracture) & ABQ (+, positive with vertebral cortex line fracture) subjects tend
to have a lower BMD than the SQ(+)& ABQ(-) subjects. In subjects with SQ
grade-2 deformity, those that were also ABQ(+) tended to have a lower BMD than
those that were ABQ(-). In all grades, SQ(-)&ABQ(-) subjects tended to have highest BMD,
while SQ(+)&ABQ(+) subjects tended to have lowest BMD. Conclusion: ABQ method
may be more sensitive to VF associated mild lower BMD than SQ method.
| [
{
"created": "Thu, 10 Nov 2016 09:59:48 GMT",
"version": "v1"
}
] | 2016-11-14 | [
[
"Zeng",
"Xian Jun",
""
],
[
"Deng",
"Min",
""
],
[
"Wang",
"Yi Xiang",
""
],
[
"Griffith",
"James F.",
""
],
[
"He",
"Lai Chang",
""
],
[
"Kwok",
"Anthony W. L.",
""
],
[
"Leung",
"Jason C. S.",
""
],
[
"Kwok",
"Timothy",
""
],
[
"Leung",
"Ping Chung",
""
]
] | Introduction: This study evaluated algorithm-based qualitative (ABQ) method for vertebral fracture (VF) evaluation with reference to semi-quantitative (SQ) method and bone mineral density (BMD) measurement. Methods: Mr. OS (Hong Kong) and Ms. OS (Hong Kong) represent the first large-scale cohort studies on bone health in elderly Chinese men and women. The current study compared Genant's SQ method and ABQ method in these two cohorts. Based on quantitative measurement, the severity of ABQ method detected fractures was additionally classified into grade-1, grade-2, and grade-3 according to SQ's deformity criteria. The radiographs of 1,954 elderly Chinese men (mean: 72.3 years) and 1,953 elderly Chinese women (mean: 72.5 years) were evaluated. Results: according to ABQ, grade-1,-2,-3 VFs accounted for 1.89%, 1.74%, 2.25% in men, and 3.33%, 3.07%, and 5.53% in women. In men and women, 15.7% (35/223) and 34.5% (48/139) of vertebrae with SQ grade-1 deformity were ABQ(+, with fracture) respectively. In men and women, 89.7% (35/39) and 66.7% (48/72) of vertebrae with ABQ grade-1 fracture had SQ grade-1 deformity. For grade-1 change, SQ (-, negative without fracture) & ABQ (+, positive with vertebral cortex line fracture) subjects tend to have a lower BMD than the SQ(+)& ABQ(-) subjects. In subjects with SQ grade-2 deformity, those that were also ABQ(+) tended to have a lower BMD than those that were ABQ(-). In all grades, SQ(-)&ABQ(-) subjects tended to have highest BMD, while SQ(+)&ABQ(+) subjects tended to have lowest BMD. Conclusion: ABQ method may be more sensitive to VF associated mild lower BMD than SQ method. |
q-bio/0612044 | Ingileif Hallgrimsdottir | Ingileif B. Hallgrimsdottir and Debbie S. Yuster | A complete classification of epistatic two-locus models | 24 pages, 7 figures | null | null | null | q-bio.QM | null | The study of epistasis is of great importance in statistical genetics in
fields such as linkage and association analysis and QTL mapping. In an effort
to classify the types of epistasis in the case of two biallelic loci Li and
Reich listed and described all models in the simplest case of 0/1 penetrance
values. However, they left open the problem of finding a classification of
two-locus models with continuous penetrance values. We provide a complete
classification of biallelic two-locus models. In addition to solving the
classification problem for dichotomous trait disease models, our results apply
to any instance where real numbers are assigned to genotypes, and provide a
complete framework for studying epistasis in QTL data. Our approach is
geometric and we show that there are 387 distinct types of two-locus models,
which can be reduced to 69 when symmetry between loci and alleles is accounted
for. The model types are defined by 86 circuits, which are linear combinations
of genotype values, each of which measures a fundamental unit of interaction.
The circuits provide information on epistasis beyond that contained in the
additive x add, add x dom, and dom x dom interaction terms. We discuss the
connection between our classification and standard epistatic models and
demonstrate its utility by analyzing a previously published dataset.
| [
{
"created": "Sun, 24 Dec 2006 03:41:19 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Hallgrimsdottir",
"Ingileif B.",
""
],
[
"Yuster",
"Debbie S.",
""
]
] | The study of epistasis is of great importance in statistical genetics in fields such as linkage and association analysis and QTL mapping. In an effort to classify the types of epistasis in the case of two biallelic loci Li and Reich listed and described all models in the simplest case of 0/1 penetrance values. However, they left open the problem of finding a classification of two-locus models with continuous penetrance values. We provide a complete classification of biallelic two-locus models. In addition to solving the classification problem for dichotomous trait disease models, our results apply to any instance where real numbers are assigned to genotypes, and provide a complete framework for studying epistasis in QTL data. Our approach is geometric and we show that there are 387 distinct types of two-locus models, which can be reduced to 69 when symmetry between loci and alleles is accounted for. The model types are defined by 86 circuits, which are linear combinations of genotype values, each of which measures a fundamental unit of interaction. The circuits provide information on epistasis beyond that contained in the additive x add, add x dom, and dom x dom interaction terms. We discuss the connection between our classification and standard epistatic models and demonstrate its utility by analyzing a previously published dataset. |
2305.16160 | Jeff Guo | Jeff Guo, Philippe Schwaller | Augmented Memory: Capitalizing on Experience Replay to Accelerate De
Novo Molecular Design | null | null | null | null | q-bio.BM cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Sample efficiency is a fundamental challenge in de novo molecular design.
Ideally, molecular generative models should learn to satisfy a desired
objective under minimal oracle evaluations (computational prediction or wet-lab
experiment). This problem becomes more apparent when using oracles that can
provide increased predictive accuracy but impose a significant cost.
Consequently, these oracles cannot be directly optimized under a practical
budget. Molecular generative models have shown remarkable sample efficiency
when coupled with reinforcement learning, as demonstrated in the Practical
Molecular Optimization (PMO) benchmark. Here, we propose a novel algorithm
called Augmented Memory that combines data augmentation with experience replay.
We show that scores obtained from oracle calls can be reused to update the
model multiple times. We compare Augmented Memory to previously proposed
algorithms and show significantly enhanced sample efficiency in an exploitation
task and a drug discovery case study requiring both exploration and
exploitation. Our method achieves a new state-of-the-art in the PMO benchmark
which enforces a computational budget, outperforming the previous best
performing method on 19/23 tasks.
| [
{
"created": "Wed, 10 May 2023 14:00:50 GMT",
"version": "v1"
}
] | 2023-05-26 | [
[
"Guo",
"Jeff",
""
],
[
"Schwaller",
"Philippe",
""
]
] | Sample efficiency is a fundamental challenge in de novo molecular design. Ideally, molecular generative models should learn to satisfy a desired objective under minimal oracle evaluations (computational prediction or wet-lab experiment). This problem becomes more apparent when using oracles that can provide increased predictive accuracy but impose a significant cost. Consequently, these oracles cannot be directly optimized under a practical budget. Molecular generative models have shown remarkable sample efficiency when coupled with reinforcement learning, as demonstrated in the Practical Molecular Optimization (PMO) benchmark. Here, we propose a novel algorithm called Augmented Memory that combines data augmentation with experience replay. We show that scores obtained from oracle calls can be reused to update the model multiple times. We compare Augmented Memory to previously proposed algorithms and show significantly enhanced sample efficiency in an exploitation task and a drug discovery case study requiring both exploration and exploitation. Our method achieves a new state-of-the-art in the PMO benchmark which enforces a computational budget, outperforming the previous best performing method on 19/23 tasks. |
2402.03967 | Alia Abbara | Alia Abbara, Lisa Pagani, Celia Garc\'ia-Pareja, Anne-Florence Bitbol | Mutant fate in spatially structured populations on graphs: connecting
models to experiments | Main text: 13 pages, 5 figures. 6 supplementary figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In nature, most microbial populations have complex spatial structures that
can affect their evolution. Evolutionary graph theory predicts that some
spatial structures modelled by placing individuals on the nodes of a graph
affect the probability that a mutant will fix. Evolution experiments are
beginning to explicitly address the impact of graph structures on mutant
fixation. However, the assumptions of evolutionary graph theory differ from the
conditions of modern evolution experiments, making the comparison between
theory and experiment challenging. Here, we aim to bridge this gap. We use our
new model of spatially structured populations with well-mixed demes at the
nodes of a graph, which allows asymmetric migrations, can handle large
populations, and explicitly models serial passage events with migrations, thus
closely mimicking experimental conditions. We analyze recent experiments in
this light. We suggest useful parameter regimes for future experiments, and we
make quantitative predictions for these experiments. In particular, we propose
experiments to directly test our recent prediction that the star graph with
asymmetric migrations suppresses natural selection and can accelerate mutant
fixation or extinction, compared to a well-mixed population.
| [
{
"created": "Tue, 6 Feb 2024 12:57:29 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Abbara",
"Alia",
""
],
[
"Pagani",
"Lisa",
""
],
[
"García-Pareja",
"Celia",
""
],
[
"Bitbol",
"Anne-Florence",
""
]
] | In nature, most microbial populations have complex spatial structures that can affect their evolution. Evolutionary graph theory predicts that some spatial structures modelled by placing individuals on the nodes of a graph affect the probability that a mutant will fix. Evolution experiments are beginning to explicitly address the impact of graph structures on mutant fixation. However, the assumptions of evolutionary graph theory differ from the conditions of modern evolution experiments, making the comparison between theory and experiment challenging. Here, we aim to bridge this gap. We use our new model of spatially structured populations with well-mixed demes at the nodes of a graph, which allows asymmetric migrations, can handle large populations, and explicitly models serial passage events with migrations, thus closely mimicking experimental conditions. We analyze recent experiments in this light. We suggest useful parameter regimes for future experiments, and we make quantitative predictions for these experiments. In particular, we propose experiments to directly test our recent prediction that the star graph with asymmetric migrations suppresses natural selection and can accelerate mutant fixation or extinction, compared to a well-mixed population. |
2003.06882 | Larissa Terumi Arashiro | Maria Jesus Garcia-Galan, Larissa Arashiro, Lucia H.M.L.M. Santos,
Sara Insa, Sara Rodriguez-Mozaz, Damia Barcelo, Ivet Ferrer, Marianna Garfi | Fate of priority pharmaceuticals and their main metabolites and
transformation products in microalgae-based wastewater treatment systems | null | null | 10.1016/j.jhazmat.2019.121771 | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The present study evaluates the removal capacity of two high rate algae ponds
(HRAPs) to eliminate 12 pharmaceuticals (PhACs) and 26 of their corresponding
main metabolites and transformation products. The efficiency of these ponds,
operating with and without primary treatment, was compared in order to study
their capacity under the best performance conditions (highest solar
irradiance). Concentrations of all the target compounds were determined in both
water and biomass samples. Removal rates ranged from moderate (40-60%) to high
(>60%) for most of them, with the exception of the psychiatric drug
carbamazepine, the beta-blocking agent metoprolol and its metabolite,
metoprolol acid. O-desmethylvenlafaxine, despite its very low biodegradability
in conventional wastewater treatment plants, was removed to a certain extent
(13-39%). Biomass concentrations suggested that bioadsorption/bioaccumulation to
microalgae biomass was decisive regarding the elimination of some
non-biodegradable compounds such as venlafaxine and its main metabolites. HRAP
treatment with and without primary treatment did not yield significant
differences in terms of PhACs removal efficiency. The implementation of HRAPs
as secondary treatment is a viable alternative to CAS in terms of overall
wastewater treatment (including organic micropollutants), with generally higher
removal performances and implying a green, low-cost and more sustainable
technology.
| [
{
"created": "Sun, 15 Mar 2020 18:09:06 GMT",
"version": "v1"
}
] | 2020-03-17 | [
[
"Garcia-Galan",
"Maria Jesus",
""
],
[
"Arashiro",
"Larissa",
""
],
[
"Santos",
"Lucia H. M. L. M.",
""
],
[
"Insa",
"Sara",
""
],
[
"Rodriguez-Mozaz",
"Sara",
""
],
[
"Barcelo",
"Damia",
""
],
[
"Ferrer",
"Ivet",
""
],
[
"Garfi",
"Marianna",
""
]
] | The present study evaluates the removal capacity of two high rate algae ponds (HRAPs) to eliminate 12 pharmaceuticals (PhACs) and 26 of their corresponding main metabolites and transformation products. The efficiency of these ponds, operating with and without primary treatment, was compared in order to study their capacity under the best performance conditions (highest solar irradiance). Concentrations of all the target compounds were determined in both water and biomass samples. Removal rates ranged from moderate (40-60%) to high (>60%) for most of them, with the exception of the psychiatric drug carbamazepine, the beta-blocking agent metoprolol and its metabolite, metoprolol acid. O-desmethylvenlafaxine, despite its very low biodegradability in conventional wastewater treatment plants, was removed to a certain extent (13-39%). Biomass concentrations suggested that bioadsorption/bioaccumulation to microalgae biomass was decisive regarding the elimination of some non-biodegradable compounds such as venlafaxine and its main metabolites. HRAP treatment with and without primary treatment did not yield significant differences in terms of PhACs removal efficiency. The implementation of HRAPs as secondary treatment is a viable alternative to CAS in terms of overall wastewater treatment (including organic micropollutants), with generally higher removal performances and implying a green, low-cost and more sustainable technology. |
2009.03753 | Afroza Shirin | Afroza Shirin, Yen Ting Lin, Francesco Sorrentino | Data-driven Optimized Control of the COVID-19 Epidemics | 5 figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimizing the impact on the economy of control strategies aiming at
containing the spread of COVID-19 is a critical challenge. We use daily new
case counts of COVID-19 patients reported by local health administrations from
different Metropolitan Statistical Areas (MSAs) within the US to parametrize a
model that well describes the propagation of the disease in each area. We then
introduce a time-varying control input that represents the level of social
distancing imposed on the population of a given area and solve an optimal
control problem with the goal of minimizing the impact of social distancing on
the economy in the presence of relevant constraints, such as a desired level of
suppression for the epidemics at a terminal time. We find that with the
exception of the initial time and of the final time, the optimal control input
is well approximated by a constant, specific to each area, which contrasts with
the implemented system of reopening `in phases'. For all the areas considered,
this optimal level corresponds to stricter social distancing than the level
estimated from data. Proper selection of the time period for application of the
control action optimally is important: depending on the particular MSA this
period should be either short or long or intermediate. We also consider the
case that the transmissibility increases in time (due e.g. to increasingly
colder weather), for which we find that the optimal control solution yields
progressively stricter measures of social distancing. We finally compute the
optimal control solution for a model modified to incorporate the effects of
vaccinations on the population and we see that depending on a number of
factors, social distancing measures could be optimally reduced during the
period over which vaccines are administered to the population.
| [
{
"created": "Fri, 4 Sep 2020 19:19:13 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Feb 2021 17:38:07 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Mar 2021 23:19:31 GMT",
"version": "v3"
}
] | 2021-03-12 | [
[
"Shirin",
"Afroza",
""
],
[
"Lin",
"Yen Ting",
""
],
[
"Sorrentino",
"Francesco",
""
]
] | Optimizing the impact on the economy of control strategies aiming at containing the spread of COVID-19 is a critical challenge. We use daily new case counts of COVID-19 patients reported by local health administrations from different Metropolitan Statistical Areas (MSAs) within the US to parametrize a model that well describes the propagation of the disease in each area. We then introduce a time-varying control input that represents the level of social distancing imposed on the population of a given area and solve an optimal control problem with the goal of minimizing the impact of social distancing on the economy in the presence of relevant constraints, such as a desired level of suppression for the epidemics at a terminal time. We find that with the exception of the initial time and of the final time, the optimal control input is well approximated by a constant, specific to each area, which contrasts with the implemented system of reopening `in phases'. For all the areas considered, this optimal level corresponds to stricter social distancing than the level estimated from data. Proper selection of the time period for application of the control action optimally is important: depending on the particular MSA this period should be either short or long or intermediate. We also consider the case that the transmissibility increases in time (due e.g. to increasingly colder weather), for which we find that the optimal control solution yields progressively stricter measures of social distancing. We finally compute the optimal control solution for a model modified to incorporate the effects of vaccinations on the population and we see that depending on a number of factors, social distancing measures could be optimally reduced during the period over which vaccines are administered to the population. |
2001.07092 | Grace Lindsay | Grace W. Lindsay | Convolutional Neural Networks as a Model of the Visual System: Past,
Present, and Future | Review Article to be published in Journal of Cognitive Neuroscience,
18 pages, 5 figures plus 8 pages of references | null | 10.1162/jocn_a_01544 | null | q-bio.NC cs.CV cs.NE | http://creativecommons.org/licenses/by-sa/4.0/ | Convolutional neural networks (CNNs) were inspired by early findings in the
study of biological vision. They have since become successful tools in computer
vision and state-of-the-art models of both neural activity and behavior on
visual tasks. This review highlights what, in the context of CNNs, it means to
be a good model in computational neuroscience and the various ways models can
provide insight. Specifically, it covers the origins of CNNs and the methods by
which we validate them as models of biological vision. It then goes on to
elaborate on what we can learn about biological vision by understanding and
experimenting on CNNs and discusses emerging opportunities for the use of CNNs
in vision research beyond basic object recognition.
| [
{
"created": "Mon, 20 Jan 2020 13:04:37 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Feb 2020 11:37:16 GMT",
"version": "v2"
}
] | 2020-02-11 | [
[
"Lindsay",
"Grace W.",
""
]
] | Convolutional neural networks (CNNs) were inspired by early findings in the study of biological vision. They have since become successful tools in computer vision and state-of-the-art models of both neural activity and behavior on visual tasks. This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight. Specifically, it covers the origins of CNNs and the methods by which we validate them as models of biological vision. It then goes on to elaborate on what we can learn about biological vision by understanding and experimenting on CNNs and discusses emerging opportunities for the use of CNNs in vision research beyond basic object recognition. |
q-bio/0606003 | Georgy Karev | Georgy P. Karev | Inhomogeneous maps: the basic theorems and some applications | 10 pages, 3 figures; submitted to Conference on Differential &
Difference Equations and Applications, Florida Institute of Technology, 2005 | null | null | null | q-bio.PE q-bio.QM | null | Non-linear maps can possess various dynamical behaviors varying from stable
steady states and cycles to chaotic oscillations. Most models assume that
individuals within a given population are identical ignoring the fundamental
role of variation. Here we develop a theory of inhomogeneous maps and apply the
general approach to modeling heterogeneous populations with discrete
evolutionary time step. We show that the behavior of the inhomogeneous maps may
possess complex transition regimes, which depends both on the mean and the
variance of the initial parameter distribution. The examples of inhomogeneous
models are discussed.
| [
{
"created": "Fri, 2 Jun 2006 18:16:02 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Karev",
"Georgy P.",
""
]
] | Non-linear maps can possess various dynamical behaviors varying from stable steady states and cycles to chaotic oscillations. Most models assume that individuals within a given population are identical ignoring the fundamental role of variation. Here we develop a theory of inhomogeneous maps and apply the general approach to modeling heterogeneous populations with discrete evolutionary time step. We show that the behavior of the inhomogeneous maps may possess complex transition regimes, which depends both on the mean and the variance of the initial parameter distribution. The examples of inhomogeneous models are discussed. |
2004.11763 | Sebastian Gottwald | Sebastian Gottwald, Daniel A. Braun | The Two Kinds of Free Energy and the Bayesian Revolution | null | PLOS Computational Biology 16(12), 2020 | 10.1371/journal.pcbi.1008420 | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | The concept of free energy has its origins in 19th century thermodynamics,
but has recently found its way into the behavioral and neural sciences, where
it has been promoted for its wide applicability and has even been suggested as
a fundamental principle of understanding intelligent behavior and brain
function. We argue that there are essentially two different notions of free
energy in current models of intelligent agency, which can both be considered as
applications of Bayesian inference to the problem of action selection: one that
appears when trading off accuracy and uncertainty based on a general maximum
entropy principle, and one that formulates action selection in terms of
minimizing an error measure that quantifies deviations of beliefs and policies
from given reference models. The first approach provides a normative rule for
action selection in the face of model uncertainty or when information
processing capabilities are limited. The second approach directly aims to
formulate the action selection problem as an inference problem in the context
of Bayesian brain theories, also known as Active Inference in the literature.
We elucidate the main ideas and discuss critical technical and conceptual
issues revolving around these two notions of free energy that both claim to
apply at all levels of decision-making, from the high-level deliberation of
reasoning down to the low-level information processing of perception.
| [
{
"created": "Fri, 24 Apr 2020 14:09:28 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Aug 2020 11:56:08 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Sep 2020 07:17:59 GMT",
"version": "v3"
},
{
"created": "Mon, 7 Dec 2020 00:03:21 GMT",
"version": "v4"
}
] | 2020-12-08 | [
[
"Gottwald",
"Sebastian",
""
],
[
"Braun",
"Daniel A.",
""
]
] | The concept of free energy has its origins in 19th century thermodynamics, but has recently found its way into the behavioral and neural sciences, where it has been promoted for its wide applicability and has even been suggested as a fundamental principle of understanding intelligent behavior and brain function. We argue that there are essentially two different notions of free energy in current models of intelligent agency, which can both be considered as applications of Bayesian inference to the problem of action selection: one that appears when trading off accuracy and uncertainty based on a general maximum entropy principle, and one that formulates action selection in terms of minimizing an error measure that quantifies deviations of beliefs and policies from given reference models. The first approach provides a normative rule for action selection in the face of model uncertainty or when information processing capabilities are limited. The second approach directly aims to formulate the action selection problem as an inference problem in the context of Bayesian brain theories, also known as Active Inference in the literature. We elucidate the main ideas and discuss critical technical and conceptual issues revolving around these two notions of free energy that both claim to apply at all levels of decision-making, from the high-level deliberation of reasoning down to the low-level information processing of perception. |
1601.05700 | Jody Reimer | Jody R. Reimer, Michael B. Bonsall, Philip K. Maini | The Critical Domain Size of Stochastic Population Models | null | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying the critical domain size necessary for a population to persist is
an important question in ecology. Both demographic and environmental
stochasticity impact a population's ability to persist. Here we explore ways of
including this variability. We study populations which have traditionally been
modelled using a deterministic integrodifference equation (IDE) framework, with
distinct dispersal and sedentary stages. Individual based models (IBMs) are the
most intuitive stochastic analogues to IDEs but yield few analytic insights. We
explore two alternate approaches; one is a scaling up to the population level
using the Central Limit Theorem, and the other a variation on both
Galton-Watson branching processes and branching processes in random
environments. These branching process models closely approximate the IBM and
yield insight into the factors determining the critical domain size for a given
population subject to stochasticity.
| [
{
"created": "Thu, 21 Jan 2016 16:27:54 GMT",
"version": "v1"
}
] | 2016-01-22 | [
[
"Reimer",
"Jody R.",
""
],
[
"Bonsall",
"Michael B.",
""
],
[
"Maini",
"Philip K.",
""
]
] | Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework, with distinct dispersal and sedentary stages. Individual based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity. |
1506.03400 | Andreas Hanke | Stefan M. Giovan, Andreas Hanke, and Stephen D. Levene | DNA cyclization and looping in the wormlike limit: normal modes and the
validity of the harmonic approximation | 23 pages, 6 figures. Typos corrected. Manuscript improved | Biopolymers 103, 528-38 (2015) (special issue in honor of Don
Crothers) | null | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For much of the last three decades Monte Carlo-simulation methods have been
the standard approach for accurately calculating the cyclization probability,
$J$, or J factor, for DNA models having sequence-dependent bends or
inhomogeneous bending flexibility. Within the last ten years, however,
approaches based on harmonic analysis of semi-flexible polymer models have been
introduced, which offer much greater computational efficiency than Monte Carlo
techniques. These methods consider the ensemble of molecular conformations in
terms of harmonic fluctuations about a well-defined elastic-energy minimum.
However, the harmonic approximation is only applicable for small systems,
because the accessible conformation space of larger systems is increasingly
dominated by anharmonic contributions. In the case of computed values of the J
factor, deviations of the harmonic approximation from the exact value of $J$ as
a function of DNA length have not been characterized. Using a recent,
numerically exact method that accounts for both anharmonic and harmonic
contributions to $J$ for wormlike chains of arbitrary size, we report here the
apparent error that results from neglecting anharmonic behavior. For wormlike
chains having contour lengths less than four times the persistence length the
error in $J$ arising from the harmonic approximation is generally small,
amounting to free energies less than the thermal energy, $k_B T$. For larger
systems, however, the deviations between harmonic and exact $J$ values increase
approximately linearly with size.
| [
{
"created": "Wed, 10 Jun 2015 17:17:04 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jul 2015 23:02:22 GMT",
"version": "v2"
}
] | 2015-07-30 | [
[
"Giovan",
"Stefan M.",
""
],
[
"Hanke",
"Andreas",
""
],
[
"Levene",
"Stephen D.",
""
]
] | For much of the last three decades Monte Carlo-simulation methods have been the standard approach for accurately calculating the cyclization probability, $J$, or J factor, for DNA models having sequence-dependent bends or inhomogeneous bending flexibility. Within the last ten years, however, approaches based on harmonic analysis of semi-flexible polymer models have been introduced, which offer much greater computational efficiency than Monte Carlo techniques. These methods consider the ensemble of molecular conformations in terms of harmonic fluctuations about a well-defined elastic-energy minimum. However, the harmonic approximation is only applicable for small systems, because the accessible conformation space of larger systems is increasingly dominated by anharmonic contributions. In the case of computed values of the J factor, deviations of the harmonic approximation from the exact value of $J$ as a function of DNA length have not been characterized. Using a recent, numerically exact method that accounts for both anharmonic and harmonic contributions to $J$ for wormlike chains of arbitrary size, we report here the apparent error that results from neglecting anharmonic behavior. For wormlike chains having contour lengths less than four times the persistence length the error in $J$ arising from the harmonic approximation is generally small, amounting to free energies less than the thermal energy, $k_B T$. For larger systems, however, the deviations between harmonic and exact $J$ values increase approximately linearly with size. |
2207.12897 | Lam Ho | Nhat L. Vu, Thanh P. Nguyen, Binh T. Nguyen, Vu Dinh, Lam Si Tung Ho | When can we reconstruct the ancestral state? Beyond Brownian motion | null | null | null | null | q-bio.PE math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstructing the ancestral state of a group of species helps answer many
important questions in evolutionary biology. Therefore, it is crucial to
understand when we can estimate the ancestral state accurately. Previous works
provide a necessary and sufficient condition, called the big bang condition,
for the existence of an accurate reconstruction method under discrete trait
evolution models and the Brownian motion model. In this paper, we extend this
result to a wide range of continuous trait evolution models. In particular, we
consider a general setting where continuous traits evolve along the tree
according to stochastic processes that satisfy some regularity conditions. We
verify these conditions for popular continuous trait evolution models including
Ornstein-Uhlenbeck, reflected Brownian Motion, and Cox-Ingersoll-Ross.
| [
{
"created": "Tue, 26 Jul 2022 13:45:20 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Apr 2023 03:07:47 GMT",
"version": "v2"
}
] | 2023-04-21 | [
[
"Vu",
"Nhat L.",
""
],
[
"Nguyen",
"Thanh P.",
""
],
[
"Nguyen",
"Binh T.",
""
],
[
"Dinh",
"Vu",
""
],
[
"Ho",
"Lam Si Tung",
""
]
] | Reconstructing the ancestral state of a group of species helps answer many important questions in evolutionary biology. Therefore, it is crucial to understand when we can estimate the ancestral state accurately. Previous works provide a necessary and sufficient condition, called the big bang condition, for the existence of an accurate reconstruction method under discrete trait evolution models and the Brownian motion model. In this paper, we extend this result to a wide range of continuous trait evolution models. In particular, we consider a general setting where continuous traits evolve along the tree according to stochastic processes that satisfy some regularity conditions. We verify these conditions for popular continuous trait evolution models including Ornstein-Uhlenbeck, reflected Brownian Motion, and Cox-Ingersoll-Ross. |
2207.07930 | Yan-Liang Shi | Yan-Liang Shi, Roxana Zeraati, Anna Levina, Tatiana A. Engel | Spatial and temporal correlations in neural networks with structured
connectivity | 25 pages, 20 figures | null | null | null | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech | http://creativecommons.org/licenses/by/4.0/ | Correlated fluctuations in the activity of neural populations reflect the
network's dynamics and connectivity. The temporal and spatial dimensions of
neural correlations are interdependent. However, prior theoretical work mainly
analyzed correlations in either spatial or temporal domains, oblivious to their
interplay. We show that the network dynamics and connectivity jointly define
the spatiotemporal profile of neural correlations. We derive analytical
expressions for pairwise correlations in networks of binary units with
spatially arranged connectivity in one and two dimensions. We find that spatial
interactions among units generate multiple timescales in auto- and
cross-correlations. Each timescale is associated with fluctuations at a
particular spatial frequency, making a hierarchical contribution to the
correlations. External inputs can modulate the correlation timescales when
spatial interactions are nonlinear, and the modulation effect depends on the
operating regime of network dynamics. These theoretical results open new ways
to relate connectivity and dynamics in cortical networks via measurements of
spatiotemporal neural correlations.
| [
{
"created": "Sat, 16 Jul 2022 12:47:32 GMT",
"version": "v1"
}
] | 2022-07-19 | [
[
"Shi",
"Yan-Liang",
""
],
[
"Zeraati",
"Roxana",
""
],
[
"Levina",
"Anna",
""
],
[
"Engel",
"Tatiana A.",
""
]
] | Correlated fluctuations in the activity of neural populations reflect the network's dynamics and connectivity. The temporal and spatial dimensions of neural correlations are interdependent. However, prior theoretical work mainly analyzed correlations in either spatial or temporal domains, oblivious to their interplay. We show that the network dynamics and connectivity jointly define the spatiotemporal profile of neural correlations. We derive analytical expressions for pairwise correlations in networks of binary units with spatially arranged connectivity in one and two dimensions. We find that spatial interactions among units generate multiple timescales in auto- and cross-correlations. Each timescale is associated with fluctuations at a particular spatial frequency, making a hierarchical contribution to the correlations. External inputs can modulate the correlation timescales when spatial interactions are nonlinear, and the modulation effect depends on the operating regime of network dynamics. These theoretical results open new ways to relate connectivity and dynamics in cortical networks via measurements of spatiotemporal neural correlations. |
2309.09816 | Niklas Tillmanns | Niklas Tillmanns, Jan Lost, Joanna Tabor, Sagar Vasandani, Shaurey
Vetsa, Neelan Marianayagam, Kanat Yalcin, E. Zeynep Erson-Omay, Marc von
Reppert, Leon Jekel, Sara Merkaj, Divya Ramakrishnan, Arman Avesta, Irene
Dixe de Oliveira Santo, Lan Jin, Anita Huttner, Khaled Bousabarah, Ichiro
Ikuta, MingDe Lin, Sanjay Aneja, Bernd Turowski, Mariam Aboian, Jennifer
Moliterno | Application of Novel PACS-based Informatics Platform to Identify Imaging
Based Predictors of CDKN2A Allelic Status in Glioblastomas | 23 pages, 5 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Gliomas with CDKN2A mutations are known to have worse prognosis but imaging
features of these gliomas are unknown. Our goal is to identify CDKN2A specific
qualitative imaging biomarkers in glioblastomas using a new informatics
workflow that enables rapid analysis of qualitative imaging features with
Visually AcceSAble Rembrandt Images (VASARI) for large datasets in PACS. Sixty
nine patients undergoing GBM resection with CDKN2A status determined by
whole-exome sequencing were included. GBMs on magnetic resonance images were
automatically 3D segmented using deep learning algorithms incorporated within
PACS. VASARI features were assessed using FHIR forms integrated within PACS.
GBMs without CDKN2A alterations were significantly larger (64% vs. 30%,
p=0.007) compared to tumors with homozygous deletion (HOMDEL) and heterozygous
loss (HETLOSS). Lesions larger than 8 cm were four times more likely to have no
CDKN2A alteration (OR: 4.3; 95% CI:1.5-12.1; p<0.001). We developed a novel
integrated PACS informatics platform for the assessment of GBM molecular
subtypes and show that tumors with HOMDEL are more likely to have radiographic
evidence of pial invasion and less likely to have deep white matter invasion or
subependymal invasion. These imaging features may allow noninvasive
identification of CDKN2A allele status.
| [
{
"created": "Mon, 18 Sep 2023 14:37:31 GMT",
"version": "v1"
}
] | 2023-09-19 | [
[
"Tillmanns",
"Niklas",
""
],
[
"Lost",
"Jan",
""
],
[
"Tabor",
"Joanna",
""
],
[
"Vasandani",
"Sagar",
""
],
[
"Vetsa",
"Shaurey",
""
],
[
"Marianayagam",
"Neelan",
""
],
[
"Yalcin",
"Kanat",
""
],
[
"Erson-Omay",
"E. Zeynep",
""
],
[
"von Reppert",
"Marc",
""
],
[
"Jekel",
"Leon",
""
],
[
"Merkaj",
"Sara",
""
],
[
"Ramakrishnan",
"Divya",
""
],
[
"Avesta",
"Arman",
""
],
[
"Santo",
"Irene Dixe de Oliveira",
""
],
[
"Jin",
"Lan",
""
],
[
"Huttner",
"Anita",
""
],
[
"Bousabarah",
"Khaled",
""
],
[
"Ikuta",
"Ichiro",
""
],
[
"Lin",
"MingDe",
""
],
[
"Aneja",
"Sanjay",
""
],
[
"Turowski",
"Bernd",
""
],
[
"Aboian",
"Mariam",
""
],
[
"Moliterno",
"Jennifer",
""
]
] | Gliomas with CDKN2A mutations are known to have worse prognosis but imaging features of these gliomas are unknown. Our goal is to identify CDKN2A specific qualitative imaging biomarkers in glioblastomas using a new informatics workflow that enables rapid analysis of qualitative imaging features with Visually AcceSAble Rembrandt Images (VASARI) for large datasets in PACS. Sixty nine patients undergoing GBM resection with CDKN2A status determined by whole-exome sequencing were included. GBMs on magnetic resonance images were automatically 3D segmented using deep learning algorithms incorporated within PACS. VASARI features were assessed using FHIR forms integrated within PACS. GBMs without CDKN2A alterations were significantly larger (64% vs. 30%, p=0.007) compared to tumors with homozygous deletion (HOMDEL) and heterozygous loss (HETLOSS). Lesions larger than 8 cm were four times more likely to have no CDKN2A alteration (OR: 4.3; 95% CI:1.5-12.1; p<0.001). We developed a novel integrated PACS informatics platform for the assessment of GBM molecular subtypes and show that tumors with HOMDEL are more likely to have radiographic evidence of pial invasion and less likely to have deep white matter invasion or subependymal invasion. These imaging features may allow noninvasive identification of CDKN2A allele status. |
0903.0031 | Alexander Spirov | Alexander V. Spirov | Design of a dynamic model of genes with multiple autonomous regulatory
modules by evolution in silico | 24 pages | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new approach to design a dynamic model of genes with multiple autonomous
regulatory modules by evolution in silico is proposed. The approach is based on
Genetic Algorithms, enforced by new crossover operators, especially worked out
for these purposes. The approach exploits the subbasin-portal architecture of
the fitness functions suitable for this kind of evolutionary modeling. The
effectiveness of the approach is demonstrated on a series of benchmark tests.
| [
{
"created": "Sat, 28 Feb 2009 01:35:32 GMT",
"version": "v1"
}
] | 2009-03-03 | [
[
"Spirov",
"Alexander V.",
""
]
] | A new approach to design a dynamic model of genes with multiple autonomous regulatory modules by evolution in silico is proposed. The approach is based on Genetic Algorithms, enforced by new crossover operators, especially worked out for these purposes. The approach exploits the subbasin-portal architecture of the fitness functions suitable for this kind of evolutionary modeling. The effectiveness of the approach is demonstrated on a series of benchmark tests. |
1912.12047 | Sebastian Billaudelle | Sebastian Billaudelle, Benjamin Cramer, Mihai A. Petrovici, Korbinian
Schreiber, David Kappel, Johannes Schemmel, Karlheinz Meier | Structural plasticity on an accelerated analog neuromorphic hardware
system | null | null | null | null | q-bio.NC cs.NE | http://creativecommons.org/licenses/by/4.0/ | In computational neuroscience, as well as in machine learning, neuromorphic
devices promise an accelerated and scalable alternative to neural network
simulations. Their neural connectivity and synaptic capacity depend on their
specific design choices, but are always intrinsically limited. Here, we present
a strategy to achieve structural plasticity that optimizes resource allocation
under these constraints by constantly rewiring the pre- and postsynaptic
partners while keeping the neuronal fan-in constant and the connectome sparse.
In particular, we implemented this algorithm on the analog neuromorphic system
BrainScaleS-2. It was executed on a custom embedded digital processor located
on chip, accompanying the mixed-signal substrate of spiking neurons and synapse
circuits. We evaluated our implementation in a simple supervised learning
scenario, showing its ability to optimize the network topology with respect to
the nature of its training data, as well as its overall computational
efficiency.
| [
{
"created": "Fri, 27 Dec 2019 10:15:58 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Sep 2020 08:20:35 GMT",
"version": "v2"
}
] | 2020-10-01 | [
[
"Billaudelle",
"Sebastian",
""
],
[
"Cramer",
"Benjamin",
""
],
[
"Petrovici",
"Mihai A.",
""
],
[
"Schreiber",
"Korbinian",
""
],
[
"Kappel",
"David",
""
],
[
"Schemmel",
"Johannes",
""
],
[
"Meier",
"Karlheinz",
""
]
] | In computational neuroscience, as well as in machine learning, neuromorphic devices promise an accelerated and scalable alternative to neural network simulations. Their neural connectivity and synaptic capacity depend on their specific design choices, but are always intrinsically limited. Here, we present a strategy to achieve structural plasticity that optimizes resource allocation under these constraints by constantly rewiring the pre- and postsynaptic partners while keeping the neuronal fan-in constant and the connectome sparse. In particular, we implemented this algorithm on the analog neuromorphic system BrainScaleS-2. It was executed on a custom embedded digital processor located on chip, accompanying the mixed-signal substrate of spiking neurons and synapse circuits. We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology with respect to the nature of its training data, as well as its overall computational efficiency. |
1811.00004 | Carlos Domingo-Felez | Carlos Domingo-F\'elez and Barth F. Smets | Modelling N2O dynamics of activated sludge biomass under nitrifying and
denitrifying conditions: pathway contributions and uncertainty analysis | Text and Supporting Information | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Nitrous oxide (N2O) is a potent greenhouse gas emitted during biological
wastewater treatment. A pseudo-mechanistic model describing three biological
pathways for nitric oxide (NO) and N2O production was calibrated for mixed
culture biomass from an activated sludge process using laboratory-scale
experiments. The model (NDHA) comprehensively describes N2O producing pathways
by both autotrophic ammonium oxidizing bacteria and heterotrophic bacteria.
Extant respirometric assays and anaerobic batch experiments were designed to
calibrate endogenous and exogenous processes (heterotrophic denitrification and
autotrophic ammonium/nitrite oxidation) together with the associated net N2O
production. Ten parameters describing heterotrophic processes and seven for
autotrophic processes were accurately estimated (variance/mean < 25%). The
model predicted NO and N2O dynamics at varying dissolved oxygen, ammonium and
nitrite levels and was validated against an independent set of experiments with
the same biomass. Aerobic ammonium oxidation experiments at two oxygen levels
used for model evaluation (2 and 0.5 mg/L) indicated that both the nitrifier
denitrification (42, 64%) and heterotrophic denitrification (7, 17%) pathways
increased and dominated N2O production at high nitrite and low oxygen
concentrations; while the nitrifier nitrification pathway showed the largest
contribution at high dissolved oxygen levels (51, 19%). The uncertainty of the
biological parameter estimates was propagated to N2O model outputs via Monte
Carlo simulations as 95% confidence intervals. The accuracy of the estimated
parameters resulted in a low uncertainty of the N2O emission factors (4.6 +-
0.6% and 1.2 +- 0.1%).
| [
{
"created": "Wed, 31 Oct 2018 09:52:44 GMT",
"version": "v1"
}
] | 2018-11-02 | [
[
"Domingo-Félez",
"Carlos",
""
],
[
"Smets",
"Barth F.",
""
]
] | Nitrous oxide (N2O) is a potent greenhouse gas emitted during biological wastewater treatment. A pseudo-mechanistic model describing three biological pathways for nitric oxide (NO) and N2O production was calibrated for mixed culture biomass from an activated sludge process using laboratory-scale experiments. The model (NDHA) comprehensively describes N2O producing pathways by both autotrophic ammonium oxidizing bacteria and heterotrophic bacteria. Extant respirometric assays and anaerobic batch experiments were designed to calibrate endogenous and exogenous processes (heterotrophic denitrification and autotrophic ammonium/nitrite oxidation) together with the associated net N2O production. Ten parameters describing heterotrophic processes and seven for autotrophic processes were accurately estimated (variance/mean < 25%). The model predicted NO and N2O dynamics at varying dissolved oxygen, ammonium and nitrite levels and was validated against an independent set of experiments with the same biomass. Aerobic ammonium oxidation experiments at two oxygen levels used for model evaluation (2 and 0.5 mg/L) indicated that both the nitrifier denitrification (42, 64%) and heterotrophic denitrification (7, 17%) pathways increased and dominated N2O production at high nitrite and low oxygen concentrations; while the nitrifier nitrification pathway showed the largest contribution at high dissolved oxygen levels (51, 19%). The uncertainty of the biological parameter estimates was propagated to N2O model outputs via Monte Carlo simulations as 95% confidence intervals. The accuracy of the estimated parameters resulted in a low uncertainty of the N2O emission factors (4.6 +- 0.6% and 1.2 +- 0.1%). |
1809.08179 | Patrick Dondl | Patrina S.P. Poh, Dvina Valainis, Kaushik Bhattacharya, Martijn van
Griensven, Patrick Dondl | Optimizing Bone Scaffold Porosity Distributions | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a simple one-dimensional time-dependent model for bone
regeneration in the presence of a bio-resorbable polymer scaffold. Within the
framework of the model, we optimize the effective mechanical stiffness of the
polymer scaffold together with the regenerated bone matrix. The result of the
optimization procedure is a scaffold porosity distribution which maximizes the
stiffness of the scaffold-bone system over the regeneration time, such that the
propensity for mechanical failure is reduced.
| [
{
"created": "Fri, 21 Sep 2018 15:52:47 GMT",
"version": "v1"
}
] | 2018-09-24 | [
[
"Poh",
"Patrina S. P.",
""
],
[
"Valainis",
"Dvina",
""
],
[
"Bhattacharya",
"Kaushik",
""
],
[
"van Griensven",
"Martijn",
""
],
[
"Dondl",
"Patrick",
""
]
] | We consider a simple one-dimensional time-dependent model for bone regeneration in the presence of a bio-resorbable polymer scaffold. Within the framework of the model, we optimize the effective mechanical stiffness of the polymer scaffold together with the regenerated bone matrix. The result of the optimization procedure is a scaffold porosity distribution which maximizes the stiffness of the scaffold-bone system over the regeneration time, such that the propensity for mechanical failure is reduced. |
1708.00662 | Meurig Thomas Gallagher | Meurig Thomas Gallagher, Cara Victoria Neal, Kenton P. Arkill and
David John Smith | Model-based image analysis of a tethered Brownian fibre for shear stress
sensing | Submitted for publication | null | null | null | q-bio.QM cond-mat.soft physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The measurement of shear stress acting on a biologically relevant surface is
a challenging problem, particularly in the complex environment of, for example,
the vasculature. While an experimental method for the direct detection of wall
shear stress via the imaging of a synthetic biology nanorod has recently been
developed, the data interpretation so far has been limited to phenomenological
random walk modelling, small angle approximation, and image analysis techniques
which do not take into account the production of an image from a 3D subject. In
this report we develop a mathematical and statistical framework to estimate
shear stress from rapid imaging sequences based firstly on stochastic modelling
of the dynamics of a tethered Brownian fibre in shear flow, and secondly on
novel model-based image analysis, which reconstructs phage positions by solving
the inverse problem of image formation. This framework is tested on
experimental data, providing the first mechanistically rational analysis of the
novel assay. What follows further develops the established theory for an
untethered particle in a semi-dilute suspension, which is of relevance to, for
example, the study of Brownian nanowires without flow, and presents new ideas
in the field of multidisciplinary image analysis.
| [
{
"created": "Wed, 2 Aug 2017 09:29:18 GMT",
"version": "v1"
}
] | 2017-08-03 | [
[
"Gallagher",
"Meurig Thomas",
""
],
[
"Neal",
"Cara Victoria",
""
],
[
"Arkill",
"Kenton P.",
""
],
[
"Smith",
"David John",
""
]
] | The measurement of shear stress acting on a biologically relevant surface is a challenging problem, particularly in the complex environment of, for example, the vasculature. While an experimental method for the direct detection of wall shear stress via the imaging of a synthetic biology nanorod has recently been developed, the data interpretation so far has been limited to phenomenological random walk modelling, small angle approximation, and image analysis techniques which do not take into account the production of an image from a 3D subject. In this report we develop a mathematical and statistical framework to estimate shear stress from rapid imaging sequences based firstly on stochastic modelling of the dynamics of a tethered Brownian fibre in shear flow, and secondly on novel model-based image analysis, which reconstructs phage positions by solving the inverse problem of image formation. This framework is tested on experimental data, providing the first mechanistically rational analysis of the novel assay. What follows further develops the established theory for an untethered particle in a semi-dilute suspension, which is of relevance to, for example, the study of Brownian nanowires without flow, and presents new ideas in the field of multidisciplinary image analysis. |
1409.6378 | Dante Chialvo | Enzo Tagliazucchi, Helmut Laufs, Dante R. Chialvo | A few points suffice: Efficient large-scale computation of brain
voxel-wise functional connectomes from a sparse spatio-temporal point-process | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large efforts are currently under way to systematically map functional
connectivity between all pairs of millimeter-scale brain regions using big
volumes of neuroimaging data. Functional magnetic resonance imaging (fMRI) can
produce these functional connectomes, however, large amounts of data and
lengthy computation times add important overhead to this task. Previous work
has demonstrated that fMRI data admits a sparse representation in the form of a
discrete point-process containing sufficient information for the efficient
estimation of functional connectivity between all pairs of voxels. In this work
we validate this method, by replicating results obtained with standard
whole-brain voxel-wise linear correlation matrices in two datasets. In the
first one (n=71) we study the changes in node strength (a measure of network
centrality) during deep sleep. The second is a large database (n=1147) of
subjects in which we look at the age-related reorganization of the voxel-wise
network of functional connections. In both cases it is shown that the proposed
method compares well with standard techniques, despite requiring of the order
of 1 % of the original fMRI time series. Overall, these results demonstrate
that the proposed approach allows efficient fMRI data compression and a
subsequent reduction of computation times.
| [
{
"created": "Tue, 23 Sep 2014 00:43:48 GMT",
"version": "v1"
}
] | 2014-09-24 | [
[
"Tagliazucchi",
"Enzo",
""
],
[
"Laufs",
"Helmut",
""
],
[
"Chialvo",
"Dante R.",
""
]
] | Large efforts are currently under way to systematically map functional connectivity between all pairs of millimeter-scale brain regions using big volumes of neuroimaging data. Functional magnetic resonance imaging (fMRI) can produce these functional connectomes, however, large amounts of data and lengthy computation times add important overhead to this task. Previous work has demonstrated that fMRI data admits a sparse representation in the form of a discrete point-process containing sufficient information for the efficient estimation of functional connectivity between all pairs of voxels. In this work we validate this method, by replicating results obtained with standard whole-brain voxel-wise linear correlation matrices in two datasets. In the first one (n=71) we study the changes in node strength (a measure of network centrality) during deep sleep. The second is a large database (n=1147) of subjects in which we look at the age-related reorganization of the voxel-wise network of functional connections. In both cases it is shown that the proposed method compares well with standard techniques, despite requiring of the order of 1 % of the original fMRI time series. Overall, these results demonstrate that the proposed approach allows efficient fMRI data compression and a subsequent reduction of computation times. |
1706.01312 | Marc Dinh | M. Dinh and V. Fromion | RBA like problem with thermo-kinetics is non convex | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this short note is to show that the class of problems involving
kinetic or thermo-kinetic constraints in addition to the usual stoichiometric
one is non-convex.
| [
{
"created": "Tue, 23 May 2017 12:52:56 GMT",
"version": "v1"
}
] | 2017-06-06 | [
[
"Dinh",
"M.",
""
],
[
"Fromion",
"V.",
""
]
] | The aim of this short note is to show that the class of problems involving kinetic or thermo-kinetic constraints in addition to the usual stoichiometric one is non-convex. |
2404.10031 | Ammar Ahmed Pallikonda Latheef | Ammar Ahmed Pallikonda Latheef, Alberto Santamaria-Pang, Craig K
Jones, Haris I Sair | Emergent Language Symbolic Autoencoder (ELSA) with Weak Supervision to
Model Hierarchical Brain Networks | 10 pages, 4 figures | null | null | null | q-bio.NC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain networks display a hierarchical organization, a complexity that poses a
challenge for existing deep learning models, often structured as flat
classifiers, leading to difficulties in interpretability and the 'black box'
issue. To bridge this gap, we propose a novel architecture: a symbolic
autoencoder informed by weak supervision and an Emergent Language (EL)
framework. This model moves beyond traditional flat classifiers by producing
hierarchical clusters and corresponding imagery, subsequently represented
through symbolic sentences to improve the clinical interpretability of
hierarchically organized data such as intrinsic brain networks, which can be
characterized using resting-state fMRI images. Our innovation includes a
generalized hierarchical loss function designed to ensure that both sentences
and images accurately reflect the hierarchical structure of functional brain
networks. This enables us to model functional brain networks from a broader
perspective down to more granular details. Furthermore, we introduce a
quantitative method to assess the hierarchical consistency of these symbolic
representations. Our qualitative analyses show that our model successfully
generates hierarchically organized, clinically interpretable images, a finding
supported by our quantitative evaluations. We find that our best performing
loss function leads to a hierarchical consistency of over 97% when identifying
images corresponding to brain networks. This approach not only advances the
interpretability of deep learning models in neuroimaging analysis but also
represents a significant step towards modeling the intricate hierarchical
nature of brain networks.
| [
{
"created": "Mon, 15 Apr 2024 13:51:05 GMT",
"version": "v1"
}
] | 2024-04-17 | [
[
"Latheef",
"Ammar Ahmed Pallikonda",
""
],
[
"Santamaria-Pang",
"Alberto",
""
],
[
"Jones",
"Craig K",
""
],
[
"Sair",
"Haris I",
""
]
] | Brain networks display a hierarchical organization, a complexity that poses a challenge for existing deep learning models, often structured as flat classifiers, leading to difficulties in interpretability and the 'black box' issue. To bridge this gap, we propose a novel architecture: a symbolic autoencoder informed by weak supervision and an Emergent Language (EL) framework. This model moves beyond traditional flat classifiers by producing hierarchical clusters and corresponding imagery, subsequently represented through symbolic sentences to improve the clinical interpretability of hierarchically organized data such as intrinsic brain networks, which can be characterized using resting-state fMRI images. Our innovation includes a generalized hierarchical loss function designed to ensure that both sentences and images accurately reflect the hierarchical structure of functional brain networks. This enables us to model functional brain networks from a broader perspective down to more granular details. Furthermore, we introduce a quantitative method to assess the hierarchical consistency of these symbolic representations. Our qualitative analyses show that our model successfully generates hierarchically organized, clinically interpretable images, a finding supported by our quantitative evaluations. We find that our best performing loss function leads to a hierarchical consistency of over 97% when identifying images corresponding to brain networks. This approach not only advances the interpretability of deep learning models in neuroimaging analysis but also represents a significant step towards modeling the intricate hierarchical nature of brain networks. |
1806.05172 | Rubem Mondaini | R.P. Mondaini, S.C. de Albuquerque Neto | The Protein Family Classification in Protein Databases via Entropy
Measures | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present work, we review the fundamental methods which have been
developed in the last few years for classifying into families and clans the
distribution of amino acids in protein databases. This is done through
functions of random variables, the Entropy Measures of probabilities of
occurrence of the amino acids. An intensive study of the Pfam databases is
presented with restrictions to families which could be represented by
rectangular arrays of amino acids with m rows (protein domains) and n columns
(amino acids). This work is also an invitation to scientific research groups
worldwide to undertake the statistical analysis with different numbers of rows
and columns since we believe in the mathematical characterization of the
distribution of amino acids as a fundamental insight on the determination of
protein structure and evolution.
| [
{
"created": "Tue, 12 Jun 2018 22:47:04 GMT",
"version": "v1"
}
] | 2018-06-15 | [
[
"Mondaini",
"R. P.",
""
],
[
"Neto",
"S. C. de Albuquerque",
""
]
] | In the present work, we review the fundamental methods which have been developed in the last few years for classifying into families and clans the distribution of amino acids in protein databases. This is done through functions of random variables, the Entropy Measures of probabilities of occurrence of the amino acids. An intensive study of the Pfam databases is presented with restrictions to families which could be represented by rectangular arrays of amino acids with m rows (protein domains) and n columns (amino acids). This work is also an invitation to scientific research groups worldwide to undertake the statistical analysis with different numbers of rows and columns since we believe in the mathematical characterization of the distribution of amino acids as a fundamental insight on the determination of protein structure and evolution. |
2305.19801 | Oliver Bent | Sebastien Boyer, Sam Money-Kyrle, Oliver Bent | Predicting protein stability changes under multiple amino acid
substitutions using equivariant graph neural networks | null | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The accurate prediction of changes in protein stability under multiple amino
acid substitutions is essential for realising true in-silico protein re-design.
To this purpose, we propose improvements to state-of-the-art Deep learning (DL)
protein stability prediction models, enabling first-of-a-kind predictions for
variable numbers of amino acid substitutions, on structural representations, by
decoupling the atomic and residue scales of protein representations. This was
achieved using E(3)-equivariant graph neural networks (EGNNs) for both atomic
environment (AE) embedding and residue-level scoring tasks. Our AE embedder was
used to featurise a residue-level graph, then trained to score mutant stability
($\Delta\Delta G$). To achieve effective training of this predictive EGNN we
have leveraged the unprecedented scale of a new high-throughput protein
stability experimental data-set, Mega-scale. Finally, we demonstrate the
immediately promising results of this procedure, discuss the current
shortcomings, and highlight potential future strategies.
| [
{
"created": "Tue, 30 May 2023 14:48:06 GMT",
"version": "v1"
}
] | 2023-06-01 | [
[
"Boyer",
"Sebastien",
""
],
[
"Money-Kyrle",
"Sam",
""
],
[
"Bent",
"Oliver",
""
]
] | The accurate prediction of changes in protein stability under multiple amino acid substitutions is essential for realising true in-silico protein re-design. To this purpose, we propose improvements to state-of-the-art Deep learning (DL) protein stability prediction models, enabling first-of-a-kind predictions for variable numbers of amino acid substitutions, on structural representations, by decoupling the atomic and residue scales of protein representations. This was achieved using E(3)-equivariant graph neural networks (EGNNs) for both atomic environment (AE) embedding and residue-level scoring tasks. Our AE embedder was used to featurise a residue-level graph, then trained to score mutant stability ($\Delta\Delta G$). To achieve effective training of this predictive EGNN we have leveraged the unprecedented scale of a new high-throughput protein stability experimental data-set, Mega-scale. Finally, we demonstrate the immediately promising results of this procedure, discuss the current shortcomings, and highlight potential future strategies. |
2311.18527 | Chun-Hsiang Chuang | Chun-Hsiang Chuang, Shao-Xun Fang, Chih-Sheng Huang, Weiping Ding | InfoFlowNet: A Multi-head Attention-based Self-supervised Learning Model
with Surrogate Approach for Uncovering Brain Effective Connectivity | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deciphering brain network topology can enhance the depth of neuroscientific
knowledge and facilitate the development of neural engineering methods.
Effective connectivity, a measure of brain network dynamics, is particularly
useful for investigating the directional influences among different brain
regions. In this study, we introduce a novel brain causal inference model named
InfoFlowNet, which leverages the self-attention mechanism to capture
associations among electroencephalogram (EEG) time series. The proposed method
estimates the magnitude of directional information flow (dIF) among EEG
processes by measuring the loss of model inference resulting from the shuffling
of the time order of the original time series. To evaluate the feasibility of
InfoFlowNet, we conducted experiments using a synthetic time series and two EEG
datasets. The results demonstrate that InfoFlowNet can extract time-varying
causal relationships among processes, reflected in the fluctuation of dIF
values. Compared with the Granger causality model and temporal causal discovery
framework, InfoFlowNet can identify more significant causal edges underlying
EEG processes while maintaining an acceptable computation time. Our work
demonstrates the potential of InfoFlowNet for analyzing effective connectivity
in EEG data. The findings highlight the importance of effective connectivity in
understanding the complex dynamics of the brain network.
| [
{
"created": "Thu, 30 Nov 2023 13:06:04 GMT",
"version": "v1"
}
] | 2023-12-01 | [
[
"Chuang",
"Chun-Hsiang",
""
],
[
"Fang",
"Shao-Xun",
""
],
[
"Huang",
"Chih-Sheng",
""
],
[
"Ding",
"Weiping",
""
]
] | Deciphering brain network topology can enhance the depth of neuroscientific knowledge and facilitate the development of neural engineering methods. Effective connectivity, a measure of brain network dynamics, is particularly useful for investigating the directional influences among different brain regions. In this study, we introduce a novel brain causal inference model named InfoFlowNet, which leverages the self-attention mechanism to capture associations among electroencephalogram (EEG) time series. The proposed method estimates the magnitude of directional information flow (dIF) among EEG processes by measuring the loss of model inference resulting from the shuffling of the time order of the original time series. To evaluate the feasibility of InfoFlowNet, we conducted experiments using a synthetic time series and two EEG datasets. The results demonstrate that InfoFlowNet can extract time-varying causal relationships among processes, reflected in the fluctuation of dIF values. Compared with the Granger causality model and temporal causal discovery framework, InfoFlowNet can identify more significant causal edges underlying EEG processes while maintaining an acceptable computation time. Our work demonstrates the potential of InfoFlowNet for analyzing effective connectivity in EEG data. The findings highlight the importance of effective connectivity in understanding the complex dynamics of the brain network. |
2108.12938 | Patricio Foncea | Patricio Foncea, Susana Mondschein, Marcelo Olivares | Replacing quarantine of COVID-19 contacts with periodic testing is also
effective in mitigating the risk of transmission | To appear in Scientific Reports. 23 pages, 15 pages of appendix, 14
figures, 4 tables | null | 10.1038/s41598-022-07447-2 | null | q-bio.PE physics.soc-ph stat.AP | http://creativecommons.org/licenses/by/4.0/ | The quarantine of identified close contacts has been vital to reducing
transmission rates and averting secondary infection risk before symptom onset
and by asymptomatic cases. The effectiveness of this contact tracing strategy
to mitigate transmission is sensitive to the adherence to quarantines, which
may be lower for longer quarantine periods or in vaccinated populations (where
perceptions of risk are reduced). This study develops a simulation model to
evaluate contact tracing strategies based on the sequential testing of
identified contacts after exposure as an alternative to quarantines, in which
contacts are isolated only after confirmation by a positive test. The analysis
considers different numbers and types of tests (PCR and lateral flow antigen
tests (LFA)) to identify the cost-effective testing policies that minimize the
expected infecting days post-exposure considering different levels of testing
capacity. This analysis suggests that even a limited number of tests can be
effective at reducing secondary infection risk: two LFA tests (with optimal
timing) avert infectiousness at a level that is comparable to 14-day quarantine
with 80-90% adherence, or equivalently, 7-9 day quarantine with full adherence
(depending on the sensitivity of the LFA test). Adding a third test (PCR or
LFA) reaches the efficiency of a 14-day quarantine with 90-100% adherence.
These results are robust to the exposure dates of the contact, test sensitivity
of LFA and alternative models of viral load evolution, which suggests that
simple testing rules can be effective for improving contact tracing in settings
where strict quarantine adherence is difficult to implement.
| [
{
"created": "Mon, 30 Aug 2021 00:17:09 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Feb 2022 20:02:24 GMT",
"version": "v2"
}
] | 2022-03-11 | [
[
"Foncea",
"Patricio",
""
],
[
"Mondschein",
"Susana",
""
],
[
"Olivares",
"Marcelo",
""
]
] | The quarantine of identified close contacts has been vital to reducing transmission rates and averting secondary infection risk before symptom onset and by asymptomatic cases. The effectiveness of this contact tracing strategy to mitigate transmission is sensitive to the adherence to quarantines, which may be lower for longer quarantine periods or in vaccinated populations (where perceptions of risk are reduced). This study develops a simulation model to evaluate contact tracing strategies based on the sequential testing of identified contacts after exposure as an alternative to quarantines, in which contacts are isolated only after confirmation by a positive test. The analysis considers different numbers and types of tests (PCR and lateral flow antigen tests (LFA)) to identify the cost-effective testing policies that minimize the expected infecting days post-exposure considering different levels of testing capacity. This analysis suggests that even a limited number of tests can be effective at reducing secondary infection risk: two LFA tests (with optimal timing) avert infectiousness at a level that is comparable to 14-day quarantine with 80-90% adherence, or equivalently, 7-9 day quarantine with full adherence (depending on the sensitivity of the LFA test). Adding a third test (PCR or LFA) reaches the efficiency of a 14-day quarantine with 90-100% adherence. These results are robust to the exposure dates of the contact, test sensitivity of LFA and alternative models of viral load evolution, which suggests that simple testing rules can be effective for improving contact tracing in settings where strict quarantine adherence is difficult to implement. |
2406.07269 | Leo D'Amato | Leo D'Amato, Gian Luca Lancia and Giovanni Pezzulo | The geometry of efficient codes: how rate-distortion trade-offs distort
the latent representations of generative models | null | null | null | null | q-bio.NC cs.IT math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Living organisms rely on internal models of the world to act adaptively.
These models cannot encode every detail and hence need to compress information.
From a cognitive standpoint, information compression can manifest as a
distortion of latent representations, resulting in the emergence of
representations that may not accurately reflect the external world or its
geometry. Rate-distortion theory formalizes the optimal way to compress
information, by considering factors such as capacity limitations, the frequency
and the utility of stimuli. However, while this theory explains why the above
factors distort latent representations, it does not specify which specific
distortions they produce. To address this question, here we systematically
explore the geometry of the latent representations that emerge in generative
models that operate under the principles of rate-distortion theory
($\beta$-VAEs). Our results highlight that three main classes of distortions of
internal representations -- prototypization, specialization, orthogonalization
-- emerge as signatures of information compression, under constraints on
capacity, data distributions and tasks. These distortions can coexist, giving
rise to a rich landscape of latent spaces, whose geometry could differ
significantly across generative models subject to different constraints. Our
findings contribute to explain how the normative constraints of rate-distortion
theory distort the geometry of latent representations of generative models of
artificial systems and living organisms.
| [
{
"created": "Tue, 11 Jun 2024 13:53:27 GMT",
"version": "v1"
}
] | 2024-06-12 | [
[
"D'Amato",
"Leo",
""
],
[
"Lancia",
"Gian Luca",
""
],
[
"Pezzulo",
"Giovanni",
""
]
] | Living organisms rely on internal models of the world to act adaptively. These models cannot encode every detail and hence need to compress information. From a cognitive standpoint, information compression can manifest as a distortion of latent representations, resulting in the emergence of representations that may not accurately reflect the external world or its geometry. Rate-distortion theory formalizes the optimal way to compress information, by considering factors such as capacity limitations, the frequency and the utility of stimuli. However, while this theory explains why the above factors distort latent representations, it does not specify which specific distortions they produce. To address this question, here we systematically explore the geometry of the latent representations that emerge in generative models that operate under the principles of rate-distortion theory ($\beta$-VAEs). Our results highlight that three main classes of distortions of internal representations -- prototypization, specialization, orthogonalization -- emerge as signatures of information compression, under constraints on capacity, data distributions and tasks. These distortions can coexist, giving rise to a rich landscape of latent spaces, whose geometry could differ significantly across generative models subject to different constraints. Our findings contribute to explain how the normative constraints of rate-distortion theory distort the geometry of latent representations of generative models of artificial systems and living organisms. |
0909.1411 | Byungjoon Min | Byungjoon Min, K.-I. Goh, and I.-M. Kim | Noise Characteristics of Molecular Oscillations in Simple Genetic
Oscillatory Systems | 7 pages, 6 figures, minor changes, final published version | J. Korean Phys. Soc. 56, 911 (2010) | 10.3938/jkps.56.911 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the noise characteristics of stochastic oscillations in protein
number dynamics of simple genetic oscillatory systems. Using the
three-component negative feedback transcription regulatory system called the
repressilator as a prototypical example, we quantify the degree of fluctuations
in oscillation periods and amplitudes, as well as the noise propagation along
the regulatory cascade in the stable oscillation regime via dynamic Monte Carlo
simulations. For the single protein-species level, the fluctuation in the
oscillation amplitudes is found to be larger than that of the oscillation
periods, the distributions of which are reasonably described by the Weibull
distribution and the Gaussian tail, respectively. Correlations between
successive periods and between successive amplitudes, respectively, are
measured to assess the noise propagation properties, which are found to decay
faster for the amplitude than for the period. The local fluctuation property is
also studied.
| [
{
"created": "Tue, 8 Sep 2009 07:29:29 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Apr 2010 08:08:21 GMT",
"version": "v2"
}
] | 2015-03-13 | [
[
"Min",
"Byungjoon",
""
],
[
"Goh",
"K. -I.",
""
],
[
"Kim",
"I. -M.",
""
]
] | We study the noise characteristics of stochastic oscillations in protein number dynamics of simple genetic oscillatory systems. Using the three-component negative feedback transcription regulatory system called the repressilator as a prototypical example, we quantify the degree of fluctuations in oscillation periods and amplitudes, as well as the noise propagation along the regulatory cascade in the stable oscillation regime via dynamic Monte Carlo simulations. For the single protein-species level, the fluctuation in the oscillation amplitudes is found to be larger than that of the oscillation periods, the distributions of which are reasonably described by the Weibull distribution and the Gaussian tail, respectively. Correlations between successive periods and between successive amplitudes, respectively, are measured to assess the noise propagation properties, which are found to decay faster for the amplitude than for the period. The local fluctuation property is also studied. |
2011.05846 | Petr Hedenec | Petr Hed\v{e}nec, Lars Ola Nilsson, Haifeng Zheng, Per Gundersen,
Inger Kappel Schmidt, Johannes Rousk, Lars Vesterdal | Mycorrhizal association of common European tree species shapes biomass
and metabolic activity of bacterial and fungal communities in soil | Authors Accepted Manuscript | In: Soil Biology & Biochemistry. 2020 ; Vol. 149 | 10.1016/j.soilbio.2020.107933 | null | q-bio.PE | http://creativecommons.org/licenses/by-sa/4.0/ | Recent studies have revealed effects of various tree species on soil physical
and chemical properties. However, effects of various tree species on
composition and activity of soil microbiota and the relevant controls remain
poorly understood. We evaluated the influence of tree species associated with
two different mycorrhizal types, ectomycorrhiza (EcM) and arbuscular mycorrhiza
(AM), on growth, biomass and metabolic activity of soil fungal and bacterial
communities using common garden tree species experiments throughout Denmark.
The soil microbial communities differed between six European tree species as
well as between EcM (beech, lime, oak and spruce) and AM (ash and maple) tree
species. The EcM tree species had higher fungal biomass, fungal growth and
bacterial biomass, while AM species showed higher bacterial growth. The results
indicated that microbial community composition and functioning differed between
groups of tree species with distinct litter qualities that generate soil C/N
ratio and soil pH differences. The mycorrhizal association only partly
explained litter quality and soil microbial species differences since lime was
more similar to AM tree species. In addition, our results indicated that tree
species-mediated soil pH and C/N ratio were the most important variables
shaping microbial communities with a positive effect on bacterial and a
negative effect on fungal growth rates. The results suggest that tree
species-mediated microbial community composition and activity may be important
drivers of the different vertical soil C distribution previously observed in AM
and EcM tree species.
| [
{
"created": "Tue, 10 Nov 2020 18:10:33 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Nov 2020 10:45:05 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Nov 2020 13:57:07 GMT",
"version": "v3"
}
] | 2020-11-26 | [
[
"Heděnec",
"Petr",
""
],
[
"Nilsson",
"Lars Ola",
""
],
[
"Zheng",
"Haifeng",
""
],
[
"Gundersen",
"Per",
""
],
[
"Schmidt",
"Inger Kappel",
""
],
[
"Rousk",
"Johannes",
""
],
[
"Vesterdal",
"Lars",
""
]
] | Recent studies have revealed effects of various tree species on soil physical and chemical properties. However, effects of various tree species on composition and activity of soil microbiota and the relevant controls remain poorly understood. We evaluated the influence of tree species associated with two different mycorrhizal types, ectomycorrhiza (EcM) and arbuscular mycorrhiza (AM), on growth, biomass and metabolic activity of soil fungal and bacterial communities using common garden tree species experiments throughout Denmark. The soil microbial communities differed between six European tree species as well as between EcM (beech, lime, oak and spruce) and AM (ash and maple) tree species. The EcM tree species had higher fungal biomass, fungal growth and bacterial biomass, while AM species showed higher bacterial growth. The results indicated that microbial community composition and functioning differed between groups of tree species with distinct litter qualities that generate soil C/N ratio and soil pH differences. The mycorrhizal association only partly explained litter quality and soil microbial species differences since lime was more similar to AM tree species. In addition, our results indicated that tree species-mediated soil pH and C/N ratio were the most important variables shaping microbial communities with a positive effect on bacterial and a negative effect on fungal growth rates. The results suggest that tree species-mediated microbial community composition and activity may be important drivers of the different vertical soil C distribution previously observed in AM and EcM tree species. |
2111.06979 | Jenelle Feather | Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox,
Josh H. McDermott, James J. DiCarlo, SueYeon Chung | Neural Population Geometry Reveals the Role of Stochasticity in Robust
Perception | 35th Conference on Neural Information Processing Systems (NeurIPS
2021) | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial examples are often cited by neuroscientists and machine learning
researchers as an example of how computational models diverge from biological
sensory systems. Recent work has proposed adding biologically-inspired
components to visual neural networks as a way to improve their adversarial
robustness. One surprisingly effective component for reducing adversarial
vulnerability is response stochasticity, like that exhibited by biological
neurons. Here, using recently developed geometrical techniques from
computational neuroscience, we investigate how adversarial perturbations
influence the internal representations of standard, adversarially trained, and
biologically-inspired stochastic networks. We find distinct geometric
signatures for each type of network, revealing different mechanisms for
achieving robust representations. Next, we generalize these results to the
auditory domain, showing that neural stochasticity also makes auditory models
more robust to adversarial perturbations. Geometric analysis of the stochastic
networks reveals overlap between representations of clean and adversarially
perturbed stimuli, and quantitatively demonstrates that competing geometric
effects of stochasticity mediate a tradeoff between adversarial and clean
performance. Our results shed light on the strategies of robust perception
utilized by adversarially trained and stochastic networks, and help explain how
stochasticity may be beneficial to machine and biological computation.
| [
{
"created": "Fri, 12 Nov 2021 22:59:45 GMT",
"version": "v1"
}
] | 2021-11-16 | [
[
"Dapello",
"Joel",
""
],
[
"Feather",
"Jenelle",
""
],
[
"Le",
"Hang",
""
],
[
"Marques",
"Tiago",
""
],
[
"Cox",
"David D.",
""
],
[
"McDermott",
"Josh H.",
""
],
[
"DiCarlo",
"James J.",
""
],
[
"Chung",
"SueYeon",
""
]
] | Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems. Recent work has proposed adding biologically-inspired components to visual neural networks as a way to improve their adversarial robustness. One surprisingly effective component for reducing adversarial vulnerability is response stochasticity, like that exhibited by biological neurons. Here, using recently developed geometrical techniques from computational neuroscience, we investigate how adversarial perturbations influence the internal representations of standard, adversarially trained, and biologically-inspired stochastic networks. We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations. Next, we generalize these results to the auditory domain, showing that neural stochasticity also makes auditory models more robust to adversarial perturbations. Geometric analysis of the stochastic networks reveals overlap between representations of clean and adversarially perturbed stimuli, and quantitatively demonstrates that competing geometric effects of stochasticity mediate a tradeoff between adversarial and clean performance. Our results shed light on the strategies of robust perception utilized by adversarially trained and stochastic networks, and help explain how stochasticity may be beneficial to machine and biological computation. |
0905.1916 | Vitaly Ganusov | Vitaly V. Ganusov, Aron E. Lukacher, and Anthony M. Byers | Similar in vivo killing efficacy of polyoma virus-specific CD8 T cells
during acute and chronic phases of the infection | null | null | null | null | q-bio.PE q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Viral infections can be broadly divided into infections that are cleared from
the host (acute) and those that persist (chronic). Why some viruses establish
chronic infections while others do not is poorly understood. One possibility is
that the host's immune response is impaired during chronic infections and is
unable to clear the virus from the host. In this report we use a recently
proposed framework to estimate the per capita killing efficacy of CD8$^+$ T
cells, specific for the MT389 epitope of polyoma virus (PyV), which establishes
a chronic infection in mice. Surprisingly, the estimated per cell killing
efficacy of MT389-specific effector CD8$^+$ T cells during the acute phase of
the infection was very similar to the previously estimated efficacy of effector
CD8$^+$ T cells specific to lymphocytic choriomeningitis virus
(LCMV-Armstrong), which is cleared from the host. We also find that during the
chronic phase of the infection the killing efficacy of PyV-specific CD8$^+$ T
cells was only half of that of cells in the acute phase. This decrease in the
killing efficacy is again surprisingly similar to the change in the killing
efficacy of LCMV-specific CD8$^+$ T cells from the peak of the response to the
memory phase. Interestingly, we also find that PyV-specific CD8$^+$ T cells in
the chronic phase of the infection require lower doses of antigen to kill a
target cell. In summary, we find little support for the hypothesis that
persistence of infections is caused by inability of the host to mount an
efficient immune response, and that even in the presence of an efficient
CD8$^+$ T cell response, some viruses can still establish a persistent
infection.
| [
{
"created": "Tue, 12 May 2009 17:34:52 GMT",
"version": "v1"
}
] | 2009-05-13 | [
[
"Ganusov",
"Vitaly V.",
""
],
[
"Lukacher",
"Aron E.",
""
],
[
"Byers",
"Anthony M.",
""
]
] | Viral infections can be broadly divided into infections that are cleared from the host (acute) and those that persist (chronic). Why some viruses establish chronic infections while others do not is poorly understood. One possibility is that the host's immune response is impaired during chronic infections and is unable to clear the virus from the host. In this report we use a recently proposed framework to estimate the per capita killing efficacy of CD8$^+$ T cells, specific for the MT389 epitope of polyoma virus (PyV), which establishes a chronic infection in mice. Surprisingly, the estimated per cell killing efficacy of MT389-specific effector CD8$^+$ T cells during the acute phase of the infection was very similar to the previously estimated efficacy of effector CD8$^+$ T cells specific to lymphocytic choriomeningitis virus (LCMV-Armstrong), which is cleared from the host. We also find that during the chronic phase of the infection the killing efficacy of PyV-specific CD8$^+$ T cells was only half of that of cells in the acute phase. This decrease in the killing efficacy is again surprisingly similar to the change in the killing efficacy of LCMV-specific CD8$^+$ T cells from the peak of the response to the memory phase. Interestingly, we also find that PyV-specific CD8$^+$ T cells in the chronic phase of the infection require lower doses of antigen to kill a target cell. In summary, we find little support for the hypothesis that persistence of infections is caused by inability of the host to mount an efficient immune response, and that even in the presence of an efficient CD8$^+$ T cell response, some viruses can still establish a persistent infection. |
2005.08968 | Fengqi You | Abdulelah S. Alshehri, Rafiqul Gani, Fengqi You | Deep Learning and Knowledge-Based Methods for Computer Aided Molecular
Design -- Toward a Unified Approach: State-of-the-Art and Future Directions | null | Computers and Chemical Engineering 141 (2020) 107005 | 10.1016/j.compchemeng.2020.107005 | null | q-bio.BM cs.LG physics.chem-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The optimal design of compounds through manipulating properties at the
molecular level is often the key to considerable scientific advances and
improved process systems performance. This paper highlights key trends,
challenges, and opportunities underpinning the Computer-Aided Molecular Design
(CAMD) problems. A brief review of knowledge-driven property estimation methods
and solution techniques, as well as corresponding CAMD tools and applications,
are first presented. In view of the computational challenges plaguing
knowledge-based methods and techniques, we survey the current state-of-the-art
applications of deep learning to molecular design as a fertile approach towards
overcoming computational limitations and navigating uncharted territories of
the chemical space. The main focus of the survey is given to deep generative
modeling of molecules under various deep learning architectures and different
molecular representations. Further, the importance of benchmarking and
empirical rigor in building deep learning models is spotlighted. The review
article also presents a detailed discussion of the current perspectives and
challenges of knowledge-based and data-driven CAMD and identifies key areas for
future research directions. Special emphasis is on the fertile avenue of hybrid
modeling paradigm, in which deep learning approaches are exploited while
leveraging the accumulated wealth of knowledge-driven CAMD methods and tools.
| [
{
"created": "Mon, 18 May 2020 14:17:51 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Jul 2020 15:00:54 GMT",
"version": "v2"
}
] | 2020-07-13 | [
[
"Alshehri",
"Abdulelah S.",
""
],
[
"Gani",
"Rafiqul",
""
],
[
"You",
"Fengqi",
""
]
] | The optimal design of compounds through manipulating properties at the molecular level is often the key to considerable scientific advances and improved process systems performance. This paper highlights key trends, challenges, and opportunities underpinning the Computer-Aided Molecular Design (CAMD) problems. A brief review of knowledge-driven property estimation methods and solution techniques, as well as corresponding CAMD tools and applications, are first presented. In view of the computational challenges plaguing knowledge-based methods and techniques, we survey the current state-of-the-art applications of deep learning to molecular design as a fertile approach towards overcoming computational limitations and navigating uncharted territories of the chemical space. The main focus of the survey is given to deep generative modeling of molecules under various deep learning architectures and different molecular representations. Further, the importance of benchmarking and empirical rigor in building deep learning models is spotlighted. The review article also presents a detailed discussion of the current perspectives and challenges of knowledge-based and data-driven CAMD and identifies key areas for future research directions. Special emphasis is on the fertile avenue of hybrid modeling paradigm, in which deep learning approaches are exploited while leveraging the accumulated wealth of knowledge-driven CAMD methods and tools. |
1607.00276 | Pushpam Aji John | Kristiina Ausmees, Pushpam Aji John | Analysis of Chromosome 20 - A Study | 6 pages, 7 figures | The 6th International Conference on Computational Systems-Biology
and Bioinformatics 2015 | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Since the arrival of next-generation sequencing technologies the amount of
genetic sequencing data has increased dramatically. This has fueled an
increase in human genetics research. At the same time, with the recent advent
of technologies in processing large data sets, a lot of these technologies are
proving valuable and efficient in analyzing these huge datasets. In this paper
we use some of these technologies to analyze genetic sequencing data of 1000
Genomes Project, produce and evaluate a framework to process the sequencing data
thereof and look into structural variations with respect to population groups.
| [
{
"created": "Thu, 30 Jun 2016 07:34:43 GMT",
"version": "v1"
}
] | 2016-07-04 | [
[
"Ausmees",
"Kristiina",
""
],
[
"John",
"Pushpam Aji",
""
]
] | Since the arrival of next-generation sequencing technologies the amount of genetic sequencing data has increased dramatically. This has fueled an increase in human genetics research. At the same time, with the recent advent of technologies in processing large data sets, a lot of these technologies are proving valuable and efficient in analyzing these huge datasets. In this paper we use some of these technologies to analyze genetic sequencing data of 1000 Genomes Project, produce and evaluate a framework to process the sequencing data thereof and look into structural variations with respect to population groups. |
0808.3609 | Mareike Fischer | Mareike Fischer, Bhalchandra D. Thatte | Revisiting an equivalence between maximum parsimony and maximum
likelihood methods in phylogenetics | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tuffley and Steel (1997) proved that Maximum Likelihood and Maximum Parsimony
methods in phylogenetics are equivalent for sequences of characters under a
simple symmetric model of substitution with no common mechanism. This result
has been widely cited ever since. We show that small changes to the model
assumptions suffice to make the two methods inequivalent. In particular, we
analyze the case of bounded substitution probabilities as well as the molecular
clock assumption. We show that in these cases, even under no common mechanism,
Maximum Parsimony and Maximum Likelihood might make conflicting choices. We
also show that if there is an upper bound on the substitution probabilities
which is `sufficiently small', every Maximum Likelihood tree is also a Maximum
Parsimony tree (but not vice versa).
| [
{
"created": "Wed, 27 Aug 2008 00:08:56 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Jul 2009 16:20:44 GMT",
"version": "v2"
}
] | 2009-07-06 | [
[
"Fischer",
"Mareike",
""
],
[
"Thatte",
"Bhalchandra D.",
""
]
] | Tuffley and Steel (1997) proved that Maximum Likelihood and Maximum Parsimony methods in phylogenetics are equivalent for sequences of characters under a simple symmetric model of substitution with no common mechanism. This result has been widely cited ever since. We show that small changes to the model assumptions suffice to make the two methods inequivalent. In particular, we analyze the case of bounded substitution probabilities as well as the molecular clock assumption. We show that in these cases, even under no common mechanism, Maximum Parsimony and Maximum Likelihood might make conflicting choices. We also show that if there is an upper bound on the substitution probabilities which is `sufficiently small', every Maximum Likelihood tree is also a Maximum Parsimony tree (but not vice versa). |
2211.04730 | Marko Jusup | Hirotaka Ijima, Carolina Minte-Vera, Yi-Jay Chang, Daisuke Ochi,
Yuichi Tsuda, Marko Jusup | Inferring the ecology of north-Pacific albacore tuna from
catch-and-effort data | 9 pages, 4 figures | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Catch-and-effort data are among the primary sources of information for
assessing the status of terrestrial wildlife and fish. In fishery science,
elaborate stock-assessment models are fitted to such data in order to estimate
fish-population sizes and guide management decisions. Given the importance of
catch-and-effort data, we scoured a comprehensive dataset pertaining to
albacore tuna (Thunnus alalunga) in the north Pacific ocean for novel
ecological information content about this commercially valuable species.
Specifically, we used unsupervised learning based on finite mixture modelling
to reveal that the north Pacific albacore-tuna stock can be divided into four
pseudo-cohorts ranging in age from approximately 3 to 12 years old. We
discovered that smaller size pseudo-cohorts inhabit relatively high --
subtropical to temperate -- latitudes, with hotspots off the coast of Japan.
Larger size pseudo-cohorts inhabit lower -- tropical to subtropical --
latitudes, with hotspots in the western and central north Pacific. These
results offer evidence that albacore tuna prefer different habitats depending
on their size and age, and point to long-term migratory routes for the species
that the current tagging technology is unlikely to capture in full. We discuss
the implications of the results for data-driven modelling of albacore tuna in
the north Pacific, as well as the management of the north Pacific albacore-tuna
fishery.
| [
{
"created": "Wed, 9 Nov 2022 08:01:42 GMT",
"version": "v1"
}
] | 2022-11-10 | [
[
"Ijima",
"Hirotaka",
""
],
[
"Minte-Vera",
"Carolina",
""
],
[
"Chang",
"Yi-Jay",
""
],
[
"Ochi",
"Daisuke",
""
],
[
"Tsuda",
"Yuichi",
""
],
[
"Jusup",
"Marko",
""
]
] | Catch-and-effort data are among the primary sources of information for assessing the status of terrestrial wildlife and fish. In fishery science, elaborate stock-assessment models are fitted to such data in order to estimate fish-population sizes and guide management decisions. Given the importance of catch-and-effort data, we scoured a comprehensive dataset pertaining to albacore tuna (Thunnus alalunga) in the north Pacific ocean for novel ecological information content about this commercially valuable species. Specifically, we used unsupervised learning based on finite mixture modelling to reveal that the north Pacific albacore-tuna stock can be divided into four pseudo-cohorts ranging in age from approximately 3 to 12 years old. We discovered that smaller size pseudo-cohorts inhabit relatively high -- subtropical to temperate -- latitudes, with hotspots off the coast of Japan. Larger size pseudo-cohorts inhabit lower -- tropical to subtropical -- latitudes, with hotspots in the western and central north Pacific. These results offer evidence that albacore tuna prefer different habitats depending on their size and age, and point to long-term migratory routes for the species that the current tagging technology is unlikely to capture in full. We discuss the implications of the results for data-driven modelling of albacore tuna in the north Pacific, as well as the management of the north Pacific albacore-tuna fishery. |
2303.06076 | Zedong Bi | Zedong Bi | Cognition of time and thinkings beyond | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | A pervasive research protocol of cognitive neuroscience is to train subjects
to perform deliberately designed experiments and record brain activity
simultaneously, aiming to understand the brain mechanism underlying cognition.
However, how the results of this protocol can be applied in technology is
seldom discussed. Here, I review the studies on time processing of the brain as
examples of this protocol, as well as two main application areas of
neuroscience (neuroengineering and brain-inspired artificial intelligence).
Time processing is an indispensable dimension of cognition; time is also an
indispensable dimension of any real-world signal to be processed in technology.
So one may expect that the studies of time processing in cognition profoundly
influence brain-related technology. Surprisingly, I found that the results from
cognitive studies on timing processing are hardly helpful in solving practical
problems. This awkward situation may be due to the lack of generalizability of
the results of cognitive studies, which are under well-controlled laboratory
conditions, to real-life situations. This lack of generalizability may be
rooted in the fundamental unknowability of the world (including cognition).
Overall, this paper questions and criticizes the usefulness and prospect of the
above-mentioned research protocol of cognitive neuroscience. I then give three
suggestions for future research. First, to improve the generalizability of
research, it is better to study brain activity under real-life conditions
instead of in well-controlled laboratory experiments. Second, to overcome the
unknowability of the world, we can engineer an easily accessible surrogate of
the object under investigation, so that we can predict the behavior of the
object by experimenting on the surrogate. Third, I call for technology-oriented
research, with the aim of technology creation instead of knowledge discovery.
| [
{
"created": "Wed, 8 Mar 2023 01:20:08 GMT",
"version": "v1"
}
] | 2023-03-13 | [
[
"Bi",
"Zedong",
""
]
] | A pervasive research protocol of cognitive neuroscience is to train subjects to perform deliberately designed experiments and record brain activity simultaneously, aiming to understand the brain mechanism underlying cognition. However, how the results of this protocol can be applied in technology is seldom discussed. Here, I review the studies on time processing of the brain as examples of this protocol, as well as two main application areas of neuroscience (neuroengineering and brain-inspired artificial intelligence). Time processing is an indispensable dimension of cognition; time is also an indispensable dimension of any real-world signal to be processed in technology. So one may expect that the studies of time processing in cognition profoundly influence brain-related technology. Surprisingly, I found that the results from cognitive studies on timing processing are hardly helpful in solving practical problems. This awkward situation may be due to the lack of generalizability of the results of cognitive studies, which are under well-controlled laboratory conditions, to real-life situations. This lack of generalizability may be rooted in the fundamental unknowability of the world (including cognition). Overall, this paper questions and criticizes the usefulness and prospect of the above-mentioned research protocol of cognitive neuroscience. I then give three suggestions for future research. First, to improve the generalizability of research, it is better to study brain activity under real-life conditions instead of in well-controlled laboratory experiments. Second, to overcome the unknowability of the world, we can engineer an easily accessible surrogate of the object under investigation, so that we can predict the behavior of the object by experimenting on the surrogate. Third, I call for technology-oriented research, with the aim of technology creation instead of knowledge discovery. |
1910.02293 | Peter Shaffery | Peter Shaffery, Bret D. Elderd, and Vanja Dukic | A Note on Species Richness and the Variance of Epidemic Severity | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The commonly observed negative correlation between the number of species in
an ecological community and disease risk, typically referred to as "the
dilution effect", has received a substantial amount of attention over the past
decade. Attempts to test this relationship experimentally have revealed that,
in addition to the mean disease risk decreasing with species number, so too
does the variance of disease risk. This is referred to as the "variance
reduction effect", and has received relatively little attention in the
disease-diversity literature. Here, we set out to clarify and quantify some of
these relationships in an idealized model of a randomly assembled multi-species
community undergoing an epidemic. We specifically investigate the variance of
the community disease reproductive ratio, a multi-species extension of the
basic reproductive ratio R_0, for a family of random-parameter meta-community
SIR models, and show how the variance of community $R_0$ varies depending on
whether transmission is density or frequency-dependent. We finally outline
areas of further research on how changes in variance affect transmission
dynamics in other systems.
| [
{
"created": "Sat, 5 Oct 2019 16:34:18 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jan 2020 20:39:50 GMT",
"version": "v2"
}
] | 2020-01-22 | [
[
"Shaffery",
"Peter",
""
],
[
"Elderd",
"Bret D.",
""
],
[
"Dukic",
"Vanja",
""
]
] | The commonly observed negative correlation between the number of species in an ecological community and disease risk, typically referred to as "the dilution effect", has received a substantial amount of attention over the past decade. Attempts to test this relationship experimentally have revealed that, in addition to the mean disease risk decreasing with species number, so too does the variance of disease risk. This is referred to as the "variance reduction effect", and has received relatively little attention in the disease-diversity literature. Here, we set out to clarify and quantify some of these relationships in an idealized model of a randomly assembled multi-species community undergoing an epidemic. We specifically investigate the variance of the community disease reproductive ratio, a multi-species extension of the basic reproductive ratio R_0, for a family of random-parameter meta-community SIR models, and show how the variance of community $R_0$ varies depending on whether transmission is density or frequency-dependent. We finally outline areas of further research on how changes in variance affect transmission dynamics in other systems. |
0904.3844 | Guillermo Raul Zemba | Matias G. dell'Erba, Guillermo R. Zemba | Topological phase transition in a RNA model in the de Gennes regime | 15 pages, 4 figures | null | 10.1103/PhysRevE.80.041926 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a simplified model of the RNA molecule proposed by G. Vernizzi, H.
Orland and A. Zee in the regime of strong concentration of positive ions in
solution. The model considers a flexible chain of equal bases that can pairwise
interact with any other one along the chain, while preserving the property of
saturation of the interactions. In the regime considered, we observe the
emergence of a critical temperature T_c separating two phases that can be
characterized by the topology of the predominant configurations: in the large
temperature regime, the dominant configurations of the molecule have very large
genera (of the order of the size of the molecule), corresponding to a complex
topology, whereas in the opposite regime of low temperatures, the dominant
configurations are simple and have the topology of a sphere. We determine that
this topological phase transition is of first order and provide an analytic
expression for T_c. The regime studied for this model exhibits analogies with
that for the dense polymer systems studied by de Gennes
| [
{
"created": "Fri, 24 Apr 2009 13:08:47 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"dell'Erba",
"Matias G.",
""
],
[
"Zemba",
"Guillermo R.",
""
]
] | We study a simplified model of the RNA molecule proposed by G. Vernizzi, H. Orland and A. Zee in the regime of strong concentration of positive ions in solution. The model considers a flexible chain of equal bases that can pairwise interact with any other one along the chain, while preserving the property of saturation of the interactions. In the regime considered, we observe the emergence of a critical temperature T_c separating two phases that can be characterized by the topology of the predominant configurations: in the large temperature regime, the dominant configurations of the molecule have very large genera (of the order of the size of the molecule), corresponding to a complex topology, whereas in the opposite regime of low temperatures, the dominant configurations are simple and have the topology of a sphere. We determine that this topological phase transition is of first order and provide an analytic expression for T_c. The regime studied for this model exhibits analogies with that for the dense polymer systems studied by de Gennes |
0904.4705 | Michael Deem | Rao Zhou, Ramdas S. Pophale, and Michael W. Deem | Computer-assisted vaccine design | 25 pages; 5 figures; 1 table; to appear in Influenza: Molecular
Virology, Horizon Scientific Press, edited by Qinghua Wang and Yizhi Jane
Tao, 2009 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define a new parameter to quantify the antigenic distance between two H3N2
influenza strains: we use this parameter to measure antigenic distance between
circulating H3N2 strains and the closest vaccine component of the influenza
vaccine. For the data between 1971 and 2004, the measure of antigenic distance
correlates better with efficacy in humans of the H3N2 influenza A annual
vaccine than do current state of the art measures of antigenic distance such as
phylogenetic sequence analysis or ferret antisera inhibition assays. We suggest
that this measure of antigenic distance could be used to guide the design of
the annual flu vaccine. We combine the measure of antigenic distance with a
multiple-strain avian influenza transmission model to study the threat of
simultaneous introduction of multiple avian influenza strains. For H3N2
influenza, the model is validated against observed viral fixation rates and
epidemic progression rates from the World Health Organization FluNet - Global
Influenza Surveillance Network. We find that a multiple-component avian
influenza vaccine is helpful to control a simultaneous multiple introduction of
bird-flu strains. We introduce Population at Risk (PaR) to quantify the risk of
a flu pandemic, and calculate by this metric the improvement that a multiple
vaccine offers.
| [
{
"created": "Wed, 29 Apr 2009 21:18:11 GMT",
"version": "v1"
},
{
"created": "Wed, 27 May 2009 21:24:17 GMT",
"version": "v2"
}
] | 2009-05-28 | [
[
"Zhou",
"Rao",
""
],
[
"Pophale",
"Ramdas S.",
""
],
[
"Deem",
"Michael W.",
""
]
] | We define a new parameter to quantify the antigenic distance between two H3N2 influenza strains: we use this parameter to measure antigenic distance between circulating H3N2 strains and the closest vaccine component of the influenza vaccine. For the data between 1971 and 2004, the measure of antigenic distance correlates better with efficacy in humans of the H3N2 influenza A annual vaccine than do current state of the art measures of antigenic distance such as phylogenetic sequence analysis or ferret antisera inhibition assays. We suggest that this measure of antigenic distance could be used to guide the design of the annual flu vaccine. We combine the measure of antigenic distance with a multiple-strain avian influenza transmission model to study the threat of simultaneous introduction of multiple avian influenza strains. For H3N2 influenza, the model is validated against observed viral fixation rates and epidemic progression rates from the World Health Organization FluNet - Global Influenza Surveillance Network. We find that a multiple-component avian influenza vaccine is helpful to control a simultaneous multiple introduction of bird-flu strains. We introduce Population at Risk (PaR) to quantify the risk of a flu pandemic, and calculate by this metric the improvement that a multiple vaccine offers. |
q-bio/0403033 | C. Soule | Christophe Soule | Graphic requirements for multistationarity | null | ComplexUs 1 (2003) 123-133 | null | null | q-bio.MN | null | We discuss properties which must be satisfied by a genetic network in order
for it to allow differentiation.
These conditions are expressed as follows in mathematical terms. Let $F$ be a
differentiable mapping from a finite dimensional real vector space to itself.
The signs of the entries of the Jacobian matrix of $F$ at a given point $a$
define an interaction graph, i.e. a finite oriented graph $G(a)$ where
each edge is equipped with a sign. Ren\'e Thomas conjectured twenty years ago
that, if $F$ has at least two non degenerate zeroes, there exists $a$ such that
$G(a)$ contains a positive circuit. Different authors proved this in special
cases, and we give here a general proof of the conjecture. In particular, we
get this way a necessary condition for genetic networks to lead to
multistationarity, and therefore to differentiation.
We use for our proof the mathematical literature on global univalence, and we
show how to derive from it several variants of Thomas' rule, some of which had
been anticipated by Kaufman and Thomas.
| [
{
"created": "Tue, 23 Mar 2004 16:31:35 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Soule",
"Christophe",
""
]
] | We discuss properties which must be satisfied by a genetic network in order for it to allow differentiation. These conditions are expressed as follows in mathematical terms. Let $F$ be a differentiable mapping from a finite dimensional real vector space to itself. The signs of the entries of the Jacobian matrix of $F$ at a given point $a$ define an interaction graph, i.e. a finite oriented graph $G(a)$ where each edge is equipped with a sign. Ren\'e Thomas conjectured twenty years ago that, if $F$ has at least two non degenerate zeroes, there exists $a$ such that $G(a)$ contains a positive circuit. Different authors proved this in special cases, and we give here a general proof of the conjecture. In particular, we get this way a necessary condition for genetic networks to lead to multistationarity, and therefore to differentiation. We use for our proof the mathematical literature on global univalence, and we show how to derive from it several variants of Thomas' rule, some of which had been anticipated by Kaufman and Thomas. |
0708.2121 | Ashok Palaniappan | Ashok Palaniappan | Detection of an ancient principle and an elegant solution to the protein
classification problem | 13p | null | null | null | q-bio.GN q-bio.BM q-bio.QM | null | This work is concerned with the development of a well-founded, theoretically
justified, and least complicated metric for the classification of proteins with
reference to enzymes. As the signature of an enzyme family, a catalytic domain
is easily fingerprinted. Given that the classification problem has so far
seemed intractable, a classification schema derived from the catalytic domain
would be satisfying. Here I show that there exists a natural ab initio if
nonobvious basis to theorize that the catalytic domain of an enzyme is uniquely
informative about its regulation. This annotates its function. Based on this
hypothesis, a method that correctly classifies potassium ion channels into
their respective subfamilies is described. To put the principle on firmer
ground, extra validation was sought and obtained through co-evolutionary
analyses. The co-evolutionary analyses reveal a departure from the notion that
potassium ion channel proteins are functionally modular. This finding is
discussed in light of the prevailing notion of domain. These studies establish
that significant co-evolution of the catalytic domain of a gene with its
conjoint domain is a specialized, necessary process following fusion and
swapping events in evolution. Instances of this discovery are likely to be
found pervasive in protein science.
| [
{
"created": "Thu, 16 Aug 2007 16:58:25 GMT",
"version": "v1"
}
] | 2007-08-17 | [
[
"Palaniappan",
"Ashok",
""
]
] | This work is concerned with the development of a well-founded, theoretically justified, and least complicated metric for the classification of proteins with reference to enzymes. As the signature of an enzyme family, a catalytic domain is easily fingerprinted. Given that the classification problem has so far seemed intractable, a classification schema derived from the catalytic domain would be satisfying. Here I show that there exists a natural ab initio if nonobvious basis to theorize that the catalytic domain of an enzyme is uniquely informative about its regulation. This annotates its function. Based on this hypothesis, a method that correctly classifies potassium ion channels into their respective subfamilies is described. To put the principle on firmer ground, extra validation was sought and obtained through co-evolutionary analyses. The co-evolutionary analyses reveal a departure from the notion that potassium ion channel proteins are functionally modular. This finding is discussed in light of the prevailing notion of domain. These studies establish that significant co-evolution of the catalytic domain of a gene with its conjoint domain is a specialized, necessary process following fusion and swapping events in evolution. Instances of this discovery are likely to be found pervasive in protein science. |
2311.16925 | Devin Greene | Devin Greene | Multiallelic Walsh transforms | null | null | null | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | A closed formula multiallelic Walsh (or Hadamard) transform is introduced.
Basic results are derived, and a statistical interpretation of some of the
resulting linear forms is discussed.
| [
{
"created": "Tue, 28 Nov 2023 16:30:58 GMT",
"version": "v1"
}
] | 2023-11-29 | [
[
"Greene",
"Devin",
""
]
] | A closed formula multiallelic Walsh (or Hadamard) transform is introduced. Basic results are derived, and a statistical interpretation of some of the resulting linear forms is discussed. |
2304.12378 | Gui Araujo | Gui Araujo | A framework of population dynamics from first principles | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The aim of this manuscript is to contain the arguments and define the
theoretical objects for building a general framework to model population
dynamics from the ground up, relying mainly on the probabilistic landscapes
defining the dynamics instead of the context-dependent physical specification
of systems. I intend to keep updating and correcting this manuscript. The goal
is for all the different parts to be able to communicate with each other and
for models to be directly comparable and to maintain an explicit connection to
the first principles sustaining them. This modeling paradigm will stem from a
Bayesian perspective on model definition and interpretation and will be
primarily concerned with ecological and evolutionary processes. Populations are
considered to be abstract collections of elements that relate in the same ways,
and the laws of motion ultimately depend on relational properties of elements,
at first irrespective of their constitution. The states of populations are
taken to be their spatial densities, the fundamental quantities shaping the
dynamics of their interactions.
| [
{
"created": "Mon, 24 Apr 2023 18:14:21 GMT",
"version": "v1"
}
] | 2023-04-26 | [
[
"Araujo",
"Gui",
""
]
] | The aim of this manuscript is to contain the arguments and define the theoretical objects for building a general framework to model population dynamics from the ground up, relying mainly on the probabilistic landscapes defining the dynamics instead of the context-dependent physical specification of systems. I intend to keep updating and correcting this manuscript. The goal is for all the different parts to be able to communicate with each other and for models to be directly comparable and to maintain an explicit connection to the first principles sustaining them. This modeling paradigm will stem from a Bayesian perspective on model definition and interpretation and will be primarily concerned with ecological and evolutionary processes. Populations are considered to be abstract collections of elements that relate in the same ways, and the laws of motion ultimately depend on relational properties of elements, at first irrespective of their constitution. The states of populations are taken to be their spatial densities, the fundamental quantities shaping the dynamics of their interactions. |
1902.04341 | Can Firtina | Can Firtina, Jeremie S. Kim, Mohammed Alser, Damla Senol Cali, A.
Ercument Cicek, Can Alkan, Onur Mutlu | Apollo: A Sequencing-Technology-Independent, Scalable, and Accurate
Assembly Polishing Algorithm | 9 pages, 1 figure. Accepted in Bioinformatics | Bioinformatics . 2020 Jun 1;36(12):3669-3679 | 10.1093/bioinformatics/btaa179 | null | q-bio.GN cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long reads produced by third-generation sequencing technologies are used to
construct an assembly (i.e., the subject's genome), which is further used in
downstream genome analysis. Unfortunately, long reads have high sequencing
error rates and a large proportion of bps in these long reads are incorrectly
identified. These errors propagate to the assembly and affect the accuracy of
genome analysis. Assembly polishing algorithms minimize such error propagation
by polishing or fixing errors in the assembly by using information from
alignments between reads and the assembly (i.e., read-to-assembly alignment
information). However, assembly polishing algorithms can only polish an
assembly using reads either from a certain sequencing technology or from a
small assembly. Such technology-dependency and assembly-size dependency require
researchers to 1) run multiple polishing algorithms and 2) use small chunks of
a large genome to use all available read sets and polish large genomes. We
introduce Apollo, a universal assembly polishing algorithm that scales well to
polish an assembly of any size (i.e., both large and small genomes) using reads
from all sequencing technologies (i.e., second- and third-generation). Our goal
is to provide a single algorithm that uses read sets from all available
sequencing technologies to improve the accuracy of assembly polishing and that
can polish large genomes. Apollo 1) models an assembly as a profile hidden
Markov model (pHMM), 2) uses read-to-assembly alignment to train the pHMM with
the Forward-Backward algorithm, and 3) decodes the trained model with the
Viterbi algorithm to produce a polished assembly. Our experiments with real
read sets demonstrate that Apollo is the only algorithm that 1) uses reads from
any sequencing technology within a single run and 2) scales well to polish
large assemblies without splitting the assembly into multiple parts.
| [
{
"created": "Tue, 12 Feb 2019 11:45:55 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Mar 2020 23:31:34 GMT",
"version": "v2"
}
] | 2020-10-29 | [
[
"Firtina",
"Can",
""
],
[
"Kim",
"Jeremie S.",
""
],
[
"Alser",
"Mohammed",
""
],
[
"Cali",
"Damla Senol",
""
],
[
"Cicek",
"A. Ercument",
""
],
[
"Alkan",
"Can",
""
],
[
"Mutlu",
"Onur",
""
]
] | Long reads produced by third-generation sequencing technologies are used to construct an assembly (i.e., the subject's genome), which is further used in downstream genome analysis. Unfortunately, long reads have high sequencing error rates and a large proportion of bps in these long reads are incorrectly identified. These errors propagate to the assembly and affect the accuracy of genome analysis. Assembly polishing algorithms minimize such error propagation by polishing or fixing errors in the assembly by using information from alignments between reads and the assembly (i.e., read-to-assembly alignment information). However, assembly polishing algorithms can only polish an assembly using reads either from a certain sequencing technology or from a small assembly. Such technology-dependency and assembly-size dependency require researchers to 1) run multiple polishing algorithms and 2) use small chunks of a large genome to use all available read sets and polish large genomes. We introduce Apollo, a universal assembly polishing algorithm that scales well to polish an assembly of any size (i.e., both large and small genomes) using reads from all sequencing technologies (i.e., second- and third-generation). Our goal is to provide a single algorithm that uses read sets from all available sequencing technologies to improve the accuracy of assembly polishing and that can polish large genomes. Apollo 1) models an assembly as a profile hidden Markov model (pHMM), 2) uses read-to-assembly alignment to train the pHMM with the Forward-Backward algorithm, and 3) decodes the trained model with the Viterbi algorithm to produce a polished assembly. Our experiments with real read sets demonstrate that Apollo is the only algorithm that 1) uses reads from any sequencing technology within a single run and 2) scales well to polish large assemblies without splitting the assembly into multiple parts. |
1506.06359 | Sandrine Pavoine | Sandrine Pavoine | A guide through a family of phylogenetic dissimilarity measures among
sites | 88 pages, including main text, 5 figures and appendixes | null | 10.1111/oik.03262 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ecological studies have now gone beyond measures of species turnover towards
measures of phylogenetic and functional dissimilarity with a main objective:
disentangling the processes that drive species distributions from local to
broad scales. A fundamental difference between phylogenetic and functional
analyses is that phylogeny is intrinsically dependent on a tree-like structure.
When the branches of a phylogenetic tree have lengths, then each evolutionary
unit on these branches can be considered as a basic entity on which
dissimilarities among sites should be measured. Several of the recent measures
of phylogenetic dissimilarities among sites thus are traditional dissimilarity
indices where species are replaced by evolutionary units. The resulting indices
were named PD-dissimilarity indices. Here I review and compare indices and
ordination approaches that, although first developed to analyse the differences
in the species compositions of sites, can be adapted to describe
PD-dissimilarities among sites, thus revealing how lineages are distributed
along environmental gradients, or among habitats or regions. As an
illustration, I show that the amount of PD-dissimilarities among the main
habitats of a disturbance gradient in Selva Lacandona of Chiapas, Mexico is
strongly dependent on whether species are weighted by their abundance or not,
and on the index used to measure PD-dissimilarity. Overall, the family of
PD-dissimilarity indices has a critical potential for future analyses of
phylogenetic diversity as it benefits from decades of research on the measure
of species dissimilarity. I provide clues to help to choose among many
potential indices, identifying which indices satisfy minimal basis properties,
and analysing their sensitivity to abundance, size, diversity, and joint
absences.
| [
{
"created": "Sun, 21 Jun 2015 12:45:35 GMT",
"version": "v1"
}
] | 2018-06-26 | [
[
"Pavoine",
"Sandrine",
""
]
] | Ecological studies have now gone beyond measures of species turnover towards measures of phylogenetic and functional dissimilarity with a main objective: disentangling the processes that drive species distributions from local to broad scales. A fundamental difference between phylogenetic and functional analyses is that phylogeny is intrinsically dependent on a tree-like structure. When the branches of a phylogenetic tree have lengths, then each evolutionary unit on these branches can be considered as a basic entity on which dissimilarities among sites should be measured. Several of the recent measures of phylogenetic dissimilarities among sites thus are traditional dissimilarity indices where species are replaced by evolutionary units. The resulting indices were named PD-dissimilarity indices. Here I review and compare indices and ordination approaches that, although first developed to analyse the differences in the species compositions of sites, can be adapted to describe PD-dissimilarities among sites, thus revealing how lineages are distributed along environmental gradients, or among habitats or regions. As an illustration, I show that the amount of PD-dissimilarities among the main habitats of a disturbance gradient in Selva Lacandona of Chiapas, Mexico is strongly dependent on whether species are weighted by their abundance or not, and on the index used to measure PD-dissimilarity. Overall, the family of PD-dissimilarity indices has a critical potential for future analyses of phylogenetic diversity as it benefits from decades of research on the measure of species dissimilarity. I provide clues to help to choose among many potential indices, identifying which indices satisfy minimal basis properties, and analysing their sensitivity to abundance, size, diversity, and joint absences. |
q-bio/0703029 | M. Cristina Marchetti | Tanniemola B. Liverpool and M. Cristina Marchetti | Hydrodynamics and rheology of active polar filaments | 30 pages, 5 figures. To appear in "Cell Motility", Peter Lenz, ed.
(Springer, New York, 2007) | in "Cell Motility", P. Lenz, editor (Springer, New York, 2007) | null | null | q-bio.CB cond-mat.soft | null | The cytoskeleton provides eukaryotic cells with mechanical support and helps
them perform their biological functions. It is a network of semiflexible polar
protein filaments and many accessory proteins that bind to these filaments,
regulate their assembly, link them to organelles and continuously remodel the
network. Here we review recent theoretical work that aims to describe the
cytoskeleton as a polar continuum driven out of equilibrium by internal
chemical reactions. This work uses methods from soft condensed matter physics
and has led to the formulation of a general framework for the description of
the structure and rheology of active suspension of polar filaments and
molecular motors.
| [
{
"created": "Mon, 12 Mar 2007 14:47:20 GMT",
"version": "v1"
}
] | 2008-01-01 | [
[
"Liverpool",
"Tanniemola B.",
""
],
[
"Marchetti",
"M. Cristina",
""
]
] | The cytoskeleton provides eukaryotic cells with mechanical support and helps them perform their biological functions. It is a network of semiflexible polar protein filaments and many accessory proteins that bind to these filaments, regulate their assembly, link them to organelles and continuously remodel the network. Here we review recent theoretical work that aims to describe the cytoskeleton as a polar continuum driven out of equilibrium by internal chemical reactions. This work uses methods from soft condensed matter physics and has led to the formulation of a general framework for the description of the structure and rheology of active suspension of polar filaments and molecular motors. |
2311.00085 | Brian Camley | Wei Wang and Brian A. Camley | Limits on the accuracy of contact inhibition of locomotion | null | Phys. Rev. E 109, 054408 (2024) | 10.1103/PhysRevE.109.054408 | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cells that collide with each other repolarize away from contact, in a process
called contact inhibition of locomotion (CIL), which is necessary for correct
development of the embryo. CIL can occur even when cells make a micron-scale
contact with a neighbor - much smaller than their size. How precisely can a
cell sense cell-cell contact and repolarize in the correct direction? What
factors control whether a cell recognizes it has contacted a neighbor? We
propose a theoretical model for the limits of CIL where cells recognize the
presence of another cell by binding the protein ephrin with the Eph receptor.
This recognition is made difficult by the presence of interfering ligands that
bind nonspecifically. Both theoretical predictions and simulation results show
that it becomes more difficult to sense cell-cell contact when it is difficult
to distinguish ephrin from the interfering ligands, or when there are more
interfering ligands, or when the contact width decreases. However, the error of
estimating contact position remains almost constant when the contact width
changes. This happens because the cell gains spatial information largely from
the boundaries of cell-cell contact. We study using statistical decision theory
the likelihood of a false positive CIL event in the absence of cell-cell
contact, and the likelihood of a false negative where CIL does not occur when
another cell is present. Our results suggest that the cell is more likely to
make incorrect decisions when the contact width is very small or so large that
it nears the cell's perimeter. However, in general, we find that cells have the
ability to make reasonably reliable CIL decisions even for very narrow
(micron-scale) contacts, even if the concentration of interfering ligands is
ten times that of the correct ligands.
| [
{
"created": "Tue, 31 Oct 2023 18:52:48 GMT",
"version": "v1"
}
] | 2024-06-17 | [
[
"Wang",
"Wei",
""
],
[
"Camley",
"Brian A.",
""
]
] | Cells that collide with each other repolarize away from contact, in a process called contact inhibition of locomotion (CIL), which is necessary for correct development of the embryo. CIL can occur even when cells make a micron-scale contact with a neighbor - much smaller than their size. How precisely can a cell sense cell-cell contact and repolarize in the correct direction? What factors control whether a cell recognizes it has contacted a neighbor? We propose a theoretical model for the limits of CIL where cells recognize the presence of another cell by binding the protein ephrin with the Eph receptor. This recognition is made difficult by the presence of interfering ligands that bind nonspecifically. Both theoretical predictions and simulation results show that it becomes more difficult to sense cell-cell contact when it is difficult to distinguish ephrin from the interfering ligands, or when there are more interfering ligands, or when the contact width decreases. However, the error of estimating contact position remains almost constant when the contact width changes. This happens because the cell gains spatial information largely from the boundaries of cell-cell contact. Using statistical decision theory, we study the likelihood of a false positive CIL event in the absence of cell-cell contact, and the likelihood of a false negative where CIL does not occur when another cell is present. Our results suggest that the cell is more likely to make incorrect decisions when the contact width is very small or so large that it nears the cell's perimeter. However, in general, we find that cells have the ability to make reasonably reliable CIL decisions even for very narrow (micron-scale) contacts, even if the concentration of interfering ligands is ten times that of the correct ligands.
2310.19950 | Reza Bozorgpour | Reza Bozorgpour, Sana Sheybanikashani, Matin Mohebi | Exploring the Role of Molecular Dynamics Simulations in Most Recent
Cancer Research: Insights into Treatment Strategies | 49 pages, 2 figures | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Cancer is a complex disease that is characterized by uncontrolled growth and
division of cells. It involves a complex interplay between genetic and
environmental factors that lead to the initiation and progression of tumors.
Recent advances in molecular dynamics simulations have revolutionized our
understanding of the molecular mechanisms underlying cancer initiation and
progression. Molecular dynamics simulations enable researchers to study the
behavior of biomolecules at an atomic level, providing insights into the
dynamics and interactions of proteins, nucleic acids, and other molecules
involved in cancer development. In this review paper, we provide an overview of
the latest advances in molecular dynamics simulations of cancer cells. We will
discuss the principles of molecular dynamics simulations and their applications
in cancer research. We also explore the role of molecular dynamics simulations
in understanding the interactions between cancer cells and their
microenvironment, including signaling pathways, protein-protein interactions,
and other molecular processes involved in tumor initiation and progression. In
addition, we highlight the current challenges and opportunities in this field
and discuss the potential for developing more accurate and personalized
simulations. Overall, this review paper aims to provide a comprehensive
overview of the current state of molecular dynamics simulations in cancer
research, with a focus on the molecular mechanisms underlying cancer initiation
and progression.
| [
{
"created": "Mon, 30 Oct 2023 19:01:55 GMT",
"version": "v1"
}
] | 2023-11-01 | [
[
"Bozorgpour",
"Reza",
""
],
[
"Sheybanikashani",
"Sana",
""
],
[
"Mohebi",
"Matin",
""
]
] | Cancer is a complex disease that is characterized by uncontrolled growth and division of cells. It involves a complex interplay between genetic and environmental factors that lead to the initiation and progression of tumors. Recent advances in molecular dynamics simulations have revolutionized our understanding of the molecular mechanisms underlying cancer initiation and progression. Molecular dynamics simulations enable researchers to study the behavior of biomolecules at an atomic level, providing insights into the dynamics and interactions of proteins, nucleic acids, and other molecules involved in cancer development. In this review paper, we provide an overview of the latest advances in molecular dynamics simulations of cancer cells. We will discuss the principles of molecular dynamics simulations and their applications in cancer research. We also explore the role of molecular dynamics simulations in understanding the interactions between cancer cells and their microenvironment, including signaling pathways, protein-protein interactions, and other molecular processes involved in tumor initiation and progression. In addition, we highlight the current challenges and opportunities in this field and discuss the potential for developing more accurate and personalized simulations. Overall, this review paper aims to provide a comprehensive overview of the current state of molecular dynamics simulations in cancer research, with a focus on the molecular mechanisms underlying cancer initiation and progression.
2308.07586 | Zachary Sexton | Zachary A. Sexton, Andrew R. Hudson, Jessica E. Herrmann, Dan J.
Shiwarski, Jonathan Pham, Jason M. Szafron, Sean M. Wu, Mark Skylar-Scott,
Adam W. Feinberg, Alison Marsden | Rapid model-guided design of organ-scale synthetic vasculature for
biomanufacturing | 58 pages (19 main and 39 supplement pages), 4 main figures, 9
supplement figures | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Our ability to produce human-scale bio-manufactured organs is critically
limited by the need for vascularization and perfusion. For tissues of variable
size and shape, including arbitrarily complex geometries, designing and
printing vasculature capable of adequate perfusion has posed a major hurdle.
Here, we introduce a model-driven design pipeline combining accelerated
optimization methods for fast synthetic vascular tree generation and
computational hemodynamics models. We demonstrate rapid generation, simulation,
and 3D printing of synthetic vasculature in complex geometries, from small
tissue constructs to organ scale networks. We introduce key algorithmic
advances that all together accelerate synthetic vascular generation by more
than 230-fold compared to standard methods and enable their use in arbitrarily
complex shapes through localized implicit functions. Furthermore, we provide
techniques for joining vascular trees into watertight networks suitable for
hemodynamic CFD and 3D fabrication. We demonstrate that organ-scale vascular
network models can be generated in silico within minutes and can be used to
perfuse engineered and anatomic models including a bioreactor, annulus,
bi-ventricular heart, and gyrus. We further show that this flexible pipeline
can be applied to two common modes of bioprinting with free-form reversible
embedding of suspended hydrogels and writing into soft matter. Our synthetic
vascular tree generation pipeline enables rapid, scalable vascular model
generation and fluid analysis for bio-manufactured tissues necessary for future
scale up and production.
| [
{
"created": "Tue, 15 Aug 2023 06:16:52 GMT",
"version": "v1"
}
] | 2023-08-16 | [
[
"Sexton",
"Zachary A.",
""
],
[
"Hudson",
"Andrew R.",
""
],
[
"Herrmann",
"Jessica E.",
""
],
[
"Shiwarski",
"Dan J.",
""
],
[
"Pham",
"Jonathan",
""
],
[
"Szafron",
"Jason M.",
""
],
[
"Wu",
"Sean M.",
""
],
[
"Skylar-Scott",
"Mark",
""
],
[
"Feinberg",
"Adam W.",
""
],
[
"Marsden",
"Alison",
""
]
] | Our ability to produce human-scale bio-manufactured organs is critically limited by the need for vascularization and perfusion. For tissues of variable size and shape, including arbitrarily complex geometries, designing and printing vasculature capable of adequate perfusion has posed a major hurdle. Here, we introduce a model-driven design pipeline combining accelerated optimization methods for fast synthetic vascular tree generation and computational hemodynamics models. We demonstrate rapid generation, simulation, and 3D printing of synthetic vasculature in complex geometries, from small tissue constructs to organ scale networks. We introduce key algorithmic advances that all together accelerate synthetic vascular generation by more than 230-fold compared to standard methods and enable their use in arbitrarily complex shapes through localized implicit functions. Furthermore, we provide techniques for joining vascular trees into watertight networks suitable for hemodynamic CFD and 3D fabrication. We demonstrate that organ-scale vascular network models can be generated in silico within minutes and can be used to perfuse engineered and anatomic models including a bioreactor, annulus, bi-ventricular heart, and gyrus. We further show that this flexible pipeline can be applied to two common modes of bioprinting with free-form reversible embedding of suspended hydrogels and writing into soft matter. Our synthetic vascular tree generation pipeline enables rapid, scalable vascular model generation and fluid analysis for bio-manufactured tissues necessary for future scale up and production. |
1710.11413 | Johann H. Mart\'inez | J. H. Mart\'inez, J. M. Buld\'u, D. Papo, F. De Vico Fallani and M.
Chavez | Role of inter-hemispheric connections in functional brain networks | 12 pages 5 figures (main), 9 pages and 8 figures (Supp Info) | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today the human brain can be modeled as a graph where nodes represent
different regions and links stand for statistical interactions between their
activities as recorded by different neuroimaging techniques. Empirical studies
have led to the hypothesis that brain functions rely on the coordination of a
scattered mosaic of functionally specialized brain regions (modules or
sub-networks), forming a web-like structure of coordinated assemblies (a
network of networks). The study of brain dynamics would therefore benefit from
an inspection of how functional sub-networks interact with each other. In this
paper, we model the brain as an interconnected system composed of two specific
sub-networks, the left (L) and right (R) hemispheres, which compete with each
other for centrality, a topological measure of importance in a networked
system. Specifically, we considered functional brain networks derived from
high-density electroencephalographic (EEG) recordings and investigated how node
centrality is shaped by interhemispheric connections. Our results show that the
distribution of centrality strongly depends on the number of functional
connections between hemispheres and the way these connections are distributed.
Additionally, we investigated the consequences of node failure on hemispherical
centrality, and showed how the abundance of inter-hemispheric links favors the
functional balance of centrality distribution between the hemispheres.
| [
{
"created": "Tue, 31 Oct 2017 11:37:55 GMT",
"version": "v1"
}
] | 2017-11-01 | [
[
"Martínez",
"J. H.",
""
],
[
"Buldú",
"J. M.",
""
],
[
"Papo",
"D.",
""
],
[
"Fallani",
"F. De Vico",
""
],
[
"Chavez",
"M.",
""
]
] | Today the human brain can be modeled as a graph where nodes represent different regions and links stand for statistical interactions between their activities as recorded by different neuroimaging techniques. Empirical studies have led to the hypothesis that brain functions rely on the coordination of a scattered mosaic of functionally specialized brain regions (modules or sub-networks), forming a web-like structure of coordinated assemblies (a network of networks). The study of brain dynamics would therefore benefit from an inspection of how functional sub-networks interact with each other. In this paper, we model the brain as an interconnected system composed of two specific sub-networks, the left (L) and right (R) hemispheres, which compete with each other for centrality, a topological measure of importance in a networked system. Specifically, we considered functional brain networks derived from high-density electroencephalographic (EEG) recordings and investigated how node centrality is shaped by interhemispheric connections. Our results show that the distribution of centrality strongly depends on the number of functional connections between hemispheres and the way these connections are distributed. Additionally, we investigated the consequences of node failure on hemispherical centrality, and showed how the abundance of inter-hemispheric links favors the functional balance of centrality distribution between the hemispheres.
1909.11451 | Geoffrey Iwata | Geoffrey Z. Iwata, Yinan Hu, Tilmann Sander, Muthuraman Muthuraman,
Venkata Chaitanya Chirumamilla, Sergiu Groppa, Dmitry Budker, and Arne
Wickenbrock | Biomagnetic signals recorded during transcranial magnetic stimulation
(TMS)-evoked peripheral muscular activity | 16 pages, 4 figures | null | null | null | q-bio.NC eess.SP physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: We present magnetomyograms (MMG) of TMS-evoked movement in a human
hand, together with simultaneous surface electromyograph (EMG) and
electroencephalograph (EEG) data. Approach: We combined TMS with non-contact
magnetic detection of TMS-evoked muscle activity in peripheral limbs to explore
a new diagnostic modality that enhances the utility of TMS as a clinical tool
by leveraging technological advances in magnetometry. We recorded measurements
in a regular hospital room using an array of optically pumped magnetometers
(OPM) inside a portable shield that encompasses only the forearm and hand of
the subject. Main Results: The biomagnetic signals recorded in the MMG provide
detailed spatial and temporal information that is complementary to that of the
electric signal channels. Moreover, we identify features in the magnetic
recording beyond those of the EMG. Significance: These results validate the
viability of MMG recording with a compact OPM based setup in small-sized
magnetic shielding, and provide proof-of-principle for a non-contact data
channel for detection and analysis of TMS-evoked muscle activity from
peripheral limbs.
| [
{
"created": "Wed, 25 Sep 2019 12:46:27 GMT",
"version": "v1"
},
{
"created": "Tue, 19 May 2020 15:03:40 GMT",
"version": "v2"
}
] | 2020-05-20 | [
[
"Iwata",
"Geoffrey Z.",
""
],
[
"Hu",
"Yinan",
""
],
[
"Sander",
"Tilmann",
""
],
[
"Muthuraman",
"Muthuraman",
""
],
[
"Chirumamilla",
"Venkata Chaitanya",
""
],
[
"Groppa",
"Sergiu",
""
],
[
"Budker",
"Dmitry",
""
],
[
"Wickenbrock",
"Arne",
""
]
] | Objective: We present magnetomyograms (MMG) of TMS-evoked movement in a human hand, together with simultaneous surface electromyograph (EMG) and electroencephalograph (EEG) data. Approach: We combined TMS with non-contact magnetic detection of TMS-evoked muscle activity in peripheral limbs to explore a new diagnostic modality that enhances the utility of TMS as a clinical tool by leveraging technological advances in magnetometry. We recorded measurements in a regular hospital room using an array of optically pumped magnetometers (OPM) inside a portable shield that encompasses only the forearm and hand of the subject. Main Results: The biomagnetic signals recorded in the MMG provide detailed spatial and temporal information that is complementary to that of the electric signal channels. Moreover, we identify features in the magnetic recording beyond those of the EMG. Significance: These results validate the viability of MMG recording with a compact OPM based setup in small-sized magnetic shielding, and provide proof-of-principle for a non-contact data channel for detection and analysis of TMS-evoked muscle activity from peripheral limbs. |
2201.05198 | Konstantinos Xylogiannopoulos | Konstantinos Xylogiannopoulos | Multiple Genome Analytics Framework: The Case of All SARS-CoV-2 Complete
Variants | null | null | null | null | q-bio.GN cs.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Pattern detection and string matching are fundamental problems in computer
science and the accelerated expansion of bioinformatics and computational
biology has made them a core topic for both disciplines. The SARS-CoV-2
pandemic has made such problems more demanding with hundreds or thousands of
new genome variants discovered every week, because of constant mutations, and
there is a desperate need for fast and accurate analyses. The requirement for
computational tools for genomic analyses, such as sequence alignment, is very
important, although, in most cases the resources and computational power
required are enormous. The presented Multiple Genome Analytics Framework
combines data structures and algorithms, specifically built for text mining and
pattern detection, that can help to efficiently address several computational
biology and bioinformatics problems concurrently with minimal resources. A
single execution of advanced algorithms, with space and time complexity
O(nlogn), is enough to acquire knowledge on all repeated patterns that exist in
multiple genome sequences and this information can be used by other
meta-algorithms for further meta-analyses. The potential of the proposed
framework is demonstrated with the analysis of more than 300,000 SARS-CoV-2
genome sequences and the detection of all repeated patterns with length up to
60 nucleotides in these sequences. These results have been used to provide
answers to questions such as common patterns among all variants, sequence
alignment, palindromes and tandem repeats detection, different organism genome
comparisons, polymerase chain reaction primers detection, etc.
| [
{
"created": "Thu, 13 Jan 2022 20:19:35 GMT",
"version": "v1"
}
] | 2022-01-17 | [
[
"Xylogiannopoulos",
"Konstantinos",
""
]
] | Pattern detection and string matching are fundamental problems in computer science and the accelerated expansion of bioinformatics and computational biology has made them a core topic for both disciplines. The SARS-CoV-2 pandemic has made such problems more demanding with hundreds or thousands of new genome variants discovered every week, because of constant mutations, and there is a desperate need for fast and accurate analyses. The requirement for computational tools for genomic analyses, such as sequence alignment, is very important, although, in most cases the resources and computational power required are enormous. The presented Multiple Genome Analytics Framework combines data structures and algorithms, specifically built for text mining and pattern detection, that can help to efficiently address several computational biology and bioinformatics problems concurrently with minimal resources. A single execution of advanced algorithms, with space and time complexity O(nlogn), is enough to acquire knowledge on all repeated patterns that exist in multiple genome sequences and this information can be used by other meta-algorithms for further meta-analyses. The potential of the proposed framework is demonstrated with the analysis of more than 300,000 SARS-CoV-2 genome sequences and the detection of all repeated patterns with length up to 60 nucleotides in these sequences. These results have been used to provide answers to questions such as common patterns among all variants, sequence alignment, palindromes and tandem repeats detection, different organism genome comparisons, polymerase chain reaction primers detection, etc. |
2203.00722 | Lukas Brand | Lukas Brand, Moritz Garkisch, Sebastian Lotter, Maximilian Sch\"afer,
Andreas Burkovski, Heinrich Sticht, Kathrin Castiglione, and Robert Schober | Media Modulation based Molecular Communication | 16 pages double-column, 9 figures, 1 table. This work has been
published in IEEE Transactions on Communications | in IEEE Trans. Commun., vol. 70, no. 11, pp. 7207-7223, Nov. 2022 | 10.1109/TCOMM.2022.3205949 | null | q-bio.BM cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In conventional molecular communication (MC) systems, the signaling molecules
used for information transmission are stored, released, and then replenished by
a transmitter (TX). However, the replenishment of signaling molecules at the TX
is challenging in practice. Furthermore, in most envisioned MC applications,
e.g., in the medical field, it is not desirable to insert the TX into the MC
system, as this might impair natural biological processes. In this paper, we
propose the concept of media modulation based MC where the TX is placed outside
the channel and utilizes signaling molecules already present inside the system.
The signaling molecules can assume different states which can be switched by
external stimuli. Hence, in media modulation based MC, the TX modulates
information into the state of the signaling molecules. In particular, we
exploit the group of photochromic molecules, which undergo light-induced
reversible state transitions, for media modulation. We study the usage of these
molecules for information transmission in a three-dimensional duct system,
which contains an eraser, a TX, and a receiver for erasing, writing, and
reading of information via external light, respectively. We develop a
statistical model for the received signal which accounts for the distribution
of the signaling molecules in the system, the initial states of the signaling
molecules, the reliability of the state control mechanism, the randomness of
irrepressible, spontaneous state switching, and the randomness of molecule
propagation. We adopt a maximum likelihood detector and a threshold based
detector. Furthermore, we derive analytical expressions for the optimal
threshold value and the resulting bit error rate (BER), respectively. Our
results reveal that media modulation enables reliable information transmission,
validating it as a promising alternative to MC based on molecule emitting TXs.
| [
{
"created": "Tue, 1 Mar 2022 19:54:16 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Aug 2023 14:41:01 GMT",
"version": "v2"
}
] | 2023-08-16 | [
[
"Brand",
"Lukas",
""
],
[
"Garkisch",
"Moritz",
""
],
[
"Lotter",
"Sebastian",
""
],
[
"Schäfer",
"Maximilian",
""
],
[
"Burkovski",
"Andreas",
""
],
[
"Sticht",
"Heinrich",
""
],
[
"Castiglione",
"Kathrin",
""
],
[
"Schober",
"Robert",
""
]
] | In conventional molecular communication (MC) systems, the signaling molecules used for information transmission are stored, released, and then replenished by a transmitter (TX). However, the replenishment of signaling molecules at the TX is challenging in practice. Furthermore, in most envisioned MC applications, e.g., in the medical field, it is not desirable to insert the TX into the MC system, as this might impair natural biological processes. In this paper, we propose the concept of media modulation based MC where the TX is placed outside the channel and utilizes signaling molecules already present inside the system. The signaling molecules can assume different states which can be switched by external stimuli. Hence, in media modulation based MC, the TX modulates information into the state of the signaling molecules. In particular, we exploit the group of photochromic molecules, which undergo light-induced reversible state transitions, for media modulation. We study the usage of these molecules for information transmission in a three-dimensional duct system, which contains an eraser, a TX, and a receiver for erasing, writing, and reading of information via external light, respectively. We develop a statistical model for the received signal which accounts for the distribution of the signaling molecules in the system, the initial states of the signaling molecules, the reliability of the state control mechanism, the randomness of irrepressible, spontaneous state switching, and the randomness of molecule propagation. We adopt a maximum likelihood detector and a threshold based detector. Furthermore, we derive analytical expressions for the optimal threshold value and the resulting bit error rate (BER), respectively. Our results reveal that media modulation enables reliable information transmission, validating it as a promising alternative to MC based on molecule emitting TXs. |
0907.4115 | Davide Cora | Angela Re, Davide Cora', Daniela Taverna and Michele Caselle | Genome-Wide Survey of MicroRNA - Transcription Factor Feed-Forward
Regulatory Circuits in Human | 51 pages, 5 figures, 4 tables. Supporting information included.
Accepted for publication in Molecular BioSystems | Mol Biosyst. 2009 Aug;5(8):854-67 | 10.1039/b900177h | null | q-bio.GN q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we describe a computational framework for the genome-wide
identification and characterization of mixed
transcriptional/post-transcriptional regulatory circuits in humans. We
concentrated in particular on feed-forward loops (FFL), in which a master
transcription factor regulates a microRNA, and together with it, a set of joint
target protein coding genes. The circuits were assembled with a two step
procedure. We first constructed separately the transcriptional and
post-transcriptional components of the human regulatory network by looking for
conserved over-represented motifs in human and mouse promoters, and 3'-UTRs.
Then, we combined the two subnetworks looking for mixed feed-forward regulatory
interactions, finding a total of 638 putative (merged) FFLs. In order to
investigate their biological relevance, we filtered these circuits using three
selection criteria: (I) GeneOntology enrichment among the joint targets of the
FFL, (II) independent computational evidence for the regulatory interactions of
the FFL, extracted from external databases, and (III) relevance of the FFL in
cancer. Most of the selected FFLs seem to be involved in various aspects of
organism development and differentiation. We finally discuss a few of the most
interesting cases in detail.
| [
{
"created": "Thu, 23 Jul 2009 16:36:16 GMT",
"version": "v1"
}
] | 2009-07-24 | [
[
"Re",
"Angela",
""
],
[
"Cora'",
"Davide",
""
],
[
"Taverna",
"Daniela",
""
],
[
"Caselle",
"Michele",
""
]
] | In this work, we describe a computational framework for the genome-wide identification and characterization of mixed transcriptional/post-transcriptional regulatory circuits in humans. We concentrated in particular on feed-forward loops (FFL), in which a master transcription factor regulates a microRNA, and together with it, a set of joint target protein coding genes. The circuits were assembled with a two step procedure. We first constructed separately the transcriptional and post-transcriptional components of the human regulatory network by looking for conserved over-represented motifs in human and mouse promoters, and 3'-UTRs. Then, we combined the two subnetworks looking for mixed feed-forward regulatory interactions, finding a total of 638 putative (merged) FFLs. In order to investigate their biological relevance, we filtered these circuits using three selection criteria: (I) GeneOntology enrichment among the joint targets of the FFL, (II) independent computational evidence for the regulatory interactions of the FFL, extracted from external databases, and (III) relevance of the FFL in cancer. Most of the selected FFLs seem to be involved in various aspects of organism development and differentiation. We finally discuss a few of the most interesting cases in detail. |
1611.02597 | Gerardo F. Goya | Beatriz Sanz, M. Pilar Calatayud, Teobaldo E. Torres, M\'onica L.
Fanarraga, M. Ricardo Ibarra and Gerardo F. Goya | Magnetic hyperthermia enhances cell toxicity with respect to exogenous
heating | 32 pages, Biomaterials 2017 | null | 10.1016/j.biomaterials.2016.11.008 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic hyperthermia is a new type of cancer treatment designed for
overcoming resistance to chemotherapy during the treatment of solid,
inaccessible human tumors. The main challenge of this technology is increasing
the local tumoral temperature with minimal side effects on the surrounding
healthy tissue. This work consists of an in vitro study that compared the
effect of hyperthermia in response to the application of exogenous heating
(EHT) sources with the corresponding effect produced by magnetic hyperthermia
(MHT) at the same target temperatures. Human neuroblastoma SH-SY5Y cells were
loaded with magnetic nanoparticles (MNPs) and packed into dense pellets to
generate an environment that is crudely similar to that expected in solid
micro-tumors, and the above-mentioned protocols were applied to these cells.
These experiments showed that for the same target temperatures, MHT induces a
decrease in cell viability that is larger than the corresponding EHT, up to a
maximum difference of approximately 45\% at T = 46{\deg}C. An analysis of the
data in terms of temperature efficiency demonstrated that MHT requires an
average temperature that is 6{\deg}C lower than that required with EHT to
produce a similar cytotoxic effect. An analysis of electron microscopy images
of the cells after the EHT and MHT treatments indicated that the enhanced
effectiveness observed with MHT is associated with local cell destruction
triggered by the magnetic nano-heaters. The present study is an essential step
toward the development of innovative adjuvant anti-cancer therapies based on
local hyperthermia treatments using magnetic particles as nano-heaters.
| [
{
"created": "Mon, 7 Nov 2016 14:04:03 GMT",
"version": "v1"
}
] | 2016-11-09 | [
[
"Sanz",
"Beatriz",
""
],
[
"Calatayud",
"M. Pilar",
""
],
[
"Torres",
"Teobaldo E.",
""
],
[
"Fanarraga",
"Mónica L.",
""
],
[
"Ibarra",
"M. Ricardo",
""
],
[
"Goya",
"Gerardo F.",
""
]
] | Magnetic hyperthermia is a new type of cancer treatment designed for overcoming resistance to chemotherapy during the treatment of solid, inaccessible human tumors. The main challenge of this technology is increasing the local tumoral temperature with minimal side effects on the surrounding healthy tissue. This work consists of an in vitro study that compared the effect of hyperthermia in response to the application of exogenous heating (EHT) sources with the corresponding effect produced by magnetic hyperthermia (MHT) at the same target temperatures. Human neuroblastoma SH-SY5Y cells were loaded with magnetic nanoparticles (MNPs) and packed into dense pellets to generate an environment that is crudely similar to that expected in solid micro-tumors, and the above-mentioned protocols were applied to these cells. These experiments showed that for the same target temperatures, MHT induces a decrease in cell viability that is larger than the corresponding EHT, up to a maximum difference of approximately 45\% at T = 46{\deg}C. An analysis of the data in terms of temperature efficiency demonstrated that MHT requires an average temperature that is 6{\deg}C lower than that required with EHT to produce a similar cytotoxic effect. An analysis of electron microscopy images of the cells after the EHT and MHT treatments indicated that the enhanced effectiveness observed with MHT is associated with local cell destruction triggered by the magnetic nano-heaters. The present study is an essential step toward the development of innovative adjuvant anti-cancer therapies based on local hyperthermia treatments using magnetic particles as nano-heaters. |
2309.04924 | Jai Pal | Bryan Hong, Jai Pal | MATLAB Plasmonic Nanoparticle Virion Counting and Interpretation System
in Urban Populations | 11 Pages, 16 Figures, Presented at 5 conferences | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | One of the biggest issues currently plaguing the field of medicine is the
lack of an accurate and efficient form of disease diagnosis especially in urban
settings such as major cities. For example, the two most commonly utilized test
diagnosis systems, the PCR and rapid test, sacrifice either accuracy or speed
to achieve the other, and this could slow down epidemiologists working to
combat the spread. Another issue currently present is the issue of viral
quantification or the counting of virions within a nasal sample. These can
provide doctors with crucial information in treating infections; however, the
current mediums are underdeveloped and unstandardized. This project's goals
were to 1) create an accurate and rapid RSV diagnostic test that could be
replicated and utilized efficiently in urban settings and 2) design a viral
quantification mechanism that counts the number of virions to provide more
information to healthcare workers. This diagnostic test involved a system that
pumped RSV-aggregated Au-nanoparticles and unaggregated Au-nanoparticles
through a microcapillary, whose cross-section was intersected by two laser
beams generating and detecting the nanobubbles. The signals between the
unaggregated and aggregated nanobubbles were calibrated, and the number of RSV
virions was recorded. The results yielded an accuracy of 99.99% and an average
time of 5.2 minutes, validating that this design is both faster and more
accurate compared to current tests. When cross-validated with Poisson
statistics, the virion counting system counted the number of virions with
98.52% accuracy. To verify the accuracy of our samples, the results were
compared to clinical trials of nasal samples, and our diagnostic system
predicted accurate diagnostics after statistical analysis. With further
testing, this diagnostic method could replace current standards of testing,
saving millions of lives every year.
| [
{
"created": "Sun, 10 Sep 2023 03:05:57 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Dec 2023 15:46:43 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Jan 2024 02:26:41 GMT",
"version": "v3"
}
] | 2024-01-10 | [
[
"Hong",
"Bryan",
""
],
[
"Pal",
"Jai",
""
]
] | One of the biggest issues currently plaguing the field of medicine is the lack of an accurate and efficient form of disease diagnosis especially in urban settings such as major cities. For example, the two most commonly utilized test diagnosis systems, the PCR and rapid test, sacrifice either accuracy or speed to achieve the other, and this could slow down epidemiologists working to combat the spread. Another issue currently present is the issue of viral quantification or the counting of virions within a nasal sample. These can provide doctors with crucial information in treating infections; however, the current mediums are underdeveloped and unstandardized. This project's goals were to 1) create an accurate and rapid RSV diagnostic test that could be replicated and utilized efficiently in urban settings and 2) design a viral quantification mechanism that counts the number of virions to provide more information to healthcare workers. This diagnostic test involved a system that pumped RSV-aggregated Au-nanoparticles and unaggregated Au-nanoparticles through a microcapillary, whose cross-section was intersected by two laser beams generating and detecting the nanobubbles. The signals between the unaggregated and aggregated nanobubbles were calibrated, and the number of RSV virions was recorded. The results yielded an accuracy of 99.99% and an average time of 5.2 minutes, validating that this design is both faster and more accurate compared to current tests. When cross-validated with Poisson statistics, the virion counting system counted the number of virions with 98.52% accuracy. To verify the accuracy of our samples, the results were compared to clinical trials of nasal samples, and our diagnostic system predicted accurate diagnostics after statistical analysis. With further testing, this diagnostic method could replace current standards of testing, saving millions of lives every year. |
1302.5801 | Michael B\"orsch | Hendrik Sielaff, Thomas Heitkamp, Andrea Zappe, Nawid Zarrabi, Michael
Boersch | Subunit rotation in single FRET-labeled F1-ATPase held in solution by an
anti-Brownian electrokinetic trap | 12 pages, 3 figures | null | 10.1117/12.2002955 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | FoF1-ATP synthase catalyzes the synthesis of adenosine triphosphate (ATP).
The F1 portion can be stripped from the membrane-embedded Fo portion of the
enzyme. F1 acts as an ATP hydrolyzing enzyme, and ATP hydrolysis is associated
with stepwise rotation of the gamma and epsilon subunits of F1. This rotary
motion was studied in great detail for the last 15 years using single F1 parts
attached to surfaces. Subunit rotation of gamma was monitored by
videomicroscopy of bound fluorescent actin filaments, nanobeads or nanorods, or
single fluorophores. Alternatively, we applied single-molecule F\"orster
resonance energy transfer (FRET) to monitor subunit rotation in the holoenzyme
FoF1-ATP synthase which was reconstituted in liposomes. Now we aim to extend
the observation times of single FRET-labeled F1 in solution using a modified
version of the anti-Brownian electrokinetic trap (ABELtrap) invented by A. E.
Cohen and W. E. Moerner. We used Monte Carlo simulations to reveal that
stepwise FRET efficiency changes can be analyzed by Hidden Markov Models even
at the limit of a low signal-to-background ratio that was expected due to high
background count rates caused by the microfluidics of the ABELtrap.
| [
{
"created": "Sat, 23 Feb 2013 14:06:37 GMT",
"version": "v1"
}
] | 2018-02-14 | [
[
"Sielaff",
"Hendrik",
""
],
[
"Heitkamp",
"Thomas",
""
],
[
"Zappe",
"Andrea",
""
],
[
"Zarrabi",
"Nawid",
""
],
[
"Boersch",
"Michael",
""
]
] | FoF1-ATP synthase catalyzes the synthesis of adenosine triphosphate (ATP). The F1 portion can be stripped from the membrane-embedded Fo portion of the enzyme. F1 acts as an ATP hydrolyzing enzyme, and ATP hydrolysis is associated with stepwise rotation of the gamma and epsilon subunits of F1. This rotary motion was studied in great detail for the last 15 years using single F1 parts attached to surfaces. Subunit rotation of gamma was monitored by videomicroscopy of bound fluorescent actin filaments, nanobeads or nanorods, or single fluorophores. Alternatively, we applied single-molecule F\"orster resonance energy transfer (FRET) to monitor subunit rotation in the holoenzyme FoF1-ATP synthase which was reconstituted in liposomes. Now we aim to extend the observation times of single FRET-labeled F1 in solution using a modified version of the anti-Brownian electrokinetic trap (ABELtrap) invented by A. E. Cohen and W. E. Moerner. We used Monte Carlo simulations to reveal that stepwise FRET efficiency changes can be analyzed by Hidden Markov Models even at the limit of a low signal-to-background ratio that was expected due to high background count rates caused by the microfluidics of the ABELtrap. |
2012.00068 | Iaroslav Ispolatov | Michael Doebeli, Eduardo Cancino Jaque, Iaroslav Ispolatov | Boom-bust population dynamics can increase diversity in evolving
competitive communities | 37 pages, 9 figures | null | null | null | q-bio.PE cond-mat.dis-nn nlin.CD | http://creativecommons.org/licenses/by/4.0/ | The processes and mechanisms underlying the origin and maintenance of
biological diversity have long been of central importance in ecology and
evolution. The competitive exclusion principle states that the number of
coexisting species is limited by the number of resources, or by the species'
similarity in resource use. Natural systems such as the extreme diversity of
unicellular life in the oceans provide counterexamples. It is known that
mathematical models incorporating population fluctuations can lead to
violations of the exclusion principle. Here we use simple eco-evolutionary
models to show that a certain type of population dynamics, boom-bust dynamics,
can allow for the evolution of much larger amounts of diversity than would be
expected with stable equilibrium dynamics. Boom-bust dynamics are characterized
by long periods of almost exponential growth (boom) and a subsequent population
crash due to competition (bust). When such ecological dynamics are incorporated
into an evolutionary model that allows for adaptive diversification in
continuous phenotype spaces, desynchronization of the boom-bust cycles of
coexisting species can lead to the maintenance of high levels of diversity.
| [
{
"created": "Mon, 30 Nov 2020 19:41:11 GMT",
"version": "v1"
}
] | 2020-12-02 | [
[
"Doebeli",
"Michael",
""
],
[
"Jaque",
"Eduardo Cancino",
""
],
[
"Ispolatov",
"Iaroslav",
""
]
] | The processes and mechanisms underlying the origin and maintenance of biological diversity have long been of central importance in ecology and evolution. The competitive exclusion principle states that the number of coexisting species is limited by the number of resources, or by the species' similarity in resource use. Natural systems such as the extreme diversity of unicellular life in the oceans provide counterexamples. It is known that mathematical models incorporating population fluctuations can lead to violations of the exclusion principle. Here we use simple eco-evolutionary models to show that a certain type of population dynamics, boom-bust dynamics, can allow for the evolution of much larger amounts of diversity than would be expected with stable equilibrium dynamics. Boom-bust dynamics are characterized by long periods of almost exponential growth (boom) and a subsequent population crash due to competition (bust). When such ecological dynamics are incorporated into an evolutionary model that allows for adaptive diversification in continuous phenotype spaces, desynchronization of the boom-bust cycles of coexisting species can lead to the maintenance of high levels of diversity. |
1503.05575 | Vince Grolmusz | Csaba Kerepesi and Vince Grolmusz | The "Giant Virus Finder" Discovers an Abundance of Giant Viruses in the
Antarctic Dry Valleys | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The first giant virus was identified in 2003 from a biofilm of an industrial
water-cooling tower in England. Later, numerous new giant viruses were found in
oceans and freshwater habitats, some of them having even 2,500 genes. We have
demonstrated their very likely presence in four soil samples taken from the
Kutch Desert (Gujarat, India). Here we describe a bioinformatics work-flow,
called the "Giant Virus Finder" that is capable of discovering the very likely
presence of the genomes of giant viruses in metagenomic shotgun-sequenced
datasets. The new tool is applied to numerous hot and cold desert soil samples
as well as some tundra- and forest soils. We show that most of these samples
contain giant viruses, and especially many were found in the Antarctic dry
valleys. The results imply that giant viruses could be frequent not only in
aqueous habitats, but in a wide spectrum of soils on our planet.
| [
{
"created": "Wed, 18 Mar 2015 20:33:18 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Aug 2015 11:34:07 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Nov 2015 21:17:08 GMT",
"version": "v3"
}
] | 2015-11-26 | [
[
"Kerepesi",
"Csaba",
""
],
[
"Grolmusz",
"Vince",
""
]
] | The first giant virus was identified in 2003 from a biofilm of an industrial water-cooling tower in England. Later, numerous new giant viruses were found in oceans and freshwater habitats, some of them having even 2,500 genes. We have demonstrated their very likely presence in four soil samples taken from the Kutch Desert (Gujarat, India). Here we describe a bioinformatics work-flow, called the "Giant Virus Finder" that is capable of discovering the very likely presence of the genomes of giant viruses in metagenomic shotgun-sequenced datasets. The new tool is applied to numerous hot and cold desert soil samples as well as some tundra- and forest soils. We show that most of these samples contain giant viruses, and especially many were found in the Antarctic dry valleys. The results imply that giant viruses could be frequent not only in aqueous habitats, but in a wide spectrum of soils on our planet. |
1009.5667 | Kyung Hyuk Kim | Kyung Hyuk Kim and Herbert M. Sauro | Fan-out in Gene Regulatory Networks | 28 pages, 5 figures | Journal of Biological Engineering 2010, 4:16 | 10.1186/1754-1611-4-16 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In synthetic biology, gene regulatory circuits are often constructed by
combining smaller circuit components. Connections between components are
achieved by transcription factors acting on promoters. If the individual
components behave as true modules and certain module interface conditions are
satisfied, the function of the composite circuits can in principle be
predicted. In this paper, we investigate one of the interface conditions:
fan-out. We quantify the fan-out, a concept widely used in electrical
engineering, to indicate the maximum number of the downstream inputs that an
upstream output transcription factor can regulate. We show that the fan-out is
closely related to retroactivity studied by Del Vecchio, et al. We propose an
efficient operational method for measuring the fan-out that can be applied to
various types of module interfaces. We also show that the fan-out can be
enhanced by self-inhibitory regulation on the output. We discuss the potential
role of the inhibitory regulations found in gene regulatory networks and
protein signaling pathways. The proposed estimation method for fan-out not only
provides an experimentally efficient way for quantifying the level of
modularity in gene regulatory circuits but also helps characterize and design
module interfaces, enabling the modular construction of gene circuits.
| [
{
"created": "Tue, 28 Sep 2010 19:51:13 GMT",
"version": "v1"
}
] | 2011-02-15 | [
[
"Kim",
"Kyung Hyuk",
""
],
[
"Sauro",
"Herbert M.",
""
]
] | In synthetic biology, gene regulatory circuits are often constructed by combining smaller circuit components. Connections between components are achieved by transcription factors acting on promoters. If the individual components behave as true modules and certain module interface conditions are satisfied, the function of the composite circuits can in principle be predicted. In this paper, we investigate one of the interface conditions: fan-out. We quantify the fan-out, a concept widely used in electrical engineering, to indicate the maximum number of downstream inputs that an upstream output transcription factor can regulate. We show that the fan-out is closely related to the retroactivity studied by Del Vecchio et al. We propose an efficient operational method for measuring the fan-out that can be applied to various types of module interfaces. We also show that the fan-out can be enhanced by self-inhibitory regulation on the output. We discuss the potential role of the inhibitory regulations found in gene regulatory networks and protein signaling pathways. The proposed estimation method for fan-out not only provides an experimentally efficient way for quantifying the level of modularity in gene regulatory circuits but also helps characterize and design module interfaces, enabling the modular construction of gene circuits. |
2305.09480 | Cheng Tan | Cheng Tan, Zhangyang Gao, Lirong Wu, Jun Xia, Jiangbin Zheng, Xihong
Yang, Yue Liu, Bozhen Hu, Stan Z. Li | Cross-Gate MLP with Protein Complex Invariant Embedding is A One-Shot
Antibody Designer | Accepted by AAAI 2024 | null | null | null | q-bio.BM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antibodies are crucial proteins produced by the immune system in response to
foreign substances or antigens. The specificity of an antibody is determined by
its complementarity-determining regions (CDRs), which are located in the
variable domains of the antibody chains and form the antigen-binding site.
Previous studies have utilized complex techniques to generate CDRs, but they
suffer from inadequate geometric modeling. Moreover, the common iterative
refinement strategies lead to an inefficient inference. In this paper, we
propose a \textit{simple yet effective} model that can co-design 1D sequences
and 3D structures of CDRs in a one-shot manner. To achieve this, we decouple
the antibody CDR design problem into two stages: (i) geometric modeling of
protein complex structures and (ii) sequence-structure co-learning. We develop
a novel macromolecular structure invariant embedding, typically for protein
complexes, that captures both intra- and inter-component interactions among the
backbone atoms, including C$\alpha$, N, C, and O atoms, to achieve
comprehensive geometric modeling. Then, we introduce a simple cross-gate MLP
for sequence-structure co-learning, allowing sequence and structure
representations to implicitly refine each other. This enables our model to
design desired sequences and structures in a one-shot manner. Extensive
experiments are conducted to evaluate our results at both the sequence and
structure levels, which demonstrate that our model achieves superior
performance compared to the state-of-the-art antibody CDR design methods.
| [
{
"created": "Fri, 21 Apr 2023 13:24:26 GMT",
"version": "v1"
},
{
"created": "Wed, 17 May 2023 13:13:00 GMT",
"version": "v2"
},
{
"created": "Sat, 20 May 2023 10:30:02 GMT",
"version": "v3"
},
{
"created": "Thu, 28 Dec 2023 06:33:30 GMT",
"version": "v4"
},
{
"created": "Wed, 10 Jan 2024 08:39:38 GMT",
"version": "v5"
}
] | 2024-01-11 | [
[
"Tan",
"Cheng",
""
],
[
"Gao",
"Zhangyang",
""
],
[
"Wu",
"Lirong",
""
],
[
"Xia",
"Jun",
""
],
[
"Zheng",
"Jiangbin",
""
],
[
"Yang",
"Xihong",
""
],
[
"Liu",
"Yue",
""
],
[
"Hu",
"Bozhen",
""
],
[
"Li",
"Stan Z.",
""
]
] | Antibodies are crucial proteins produced by the immune system in response to foreign substances or antigens. The specificity of an antibody is determined by its complementarity-determining regions (CDRs), which are located in the variable domains of the antibody chains and form the antigen-binding site. Previous studies have utilized complex techniques to generate CDRs, but they suffer from inadequate geometric modeling. Moreover, the common iterative refinement strategies lead to an inefficient inference. In this paper, we propose a \textit{simple yet effective} model that can co-design 1D sequences and 3D structures of CDRs in a one-shot manner. To achieve this, we decouple the antibody CDR design problem into two stages: (i) geometric modeling of protein complex structures and (ii) sequence-structure co-learning. We develop a novel macromolecular structure invariant embedding, typically for protein complexes, that captures both intra- and inter-component interactions among the backbone atoms, including C$\alpha$, N, C, and O atoms, to achieve comprehensive geometric modeling. Then, we introduce a simple cross-gate MLP for sequence-structure co-learning, allowing sequence and structure representations to implicitly refine each other. This enables our model to design desired sequences and structures in a one-shot manner. Extensive experiments are conducted to evaluate our results at both the sequence and structure levels, which demonstrate that our model achieves superior performance compared to the state-of-the-art antibody CDR design methods. |
q-bio/0503026 | Neda Zoltan | Z. Neda and M. Ravasz | Species Abundances Distribution in Neutral Community Models | Revtex, 13 pages, 3 figures | null | null | null | q-bio.PE q-bio.QM | null | An analytical approximation is derived for the Zero Sum Multinomial
distribution, which gives the Species Abundance Distribution in Neutral
Community Models. The obtained distribution function describes computer
simulation results on the model well, and leads to an interesting relation
between the total number of individuals, the total number of species and the
size of the most abundant species of the considered metacommunity. Computer
simulations on neutral community models also prove the validity of this
scaling relation.
| [
{
"created": "Thu, 17 Mar 2005 09:51:16 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Neda",
"Z.",
""
],
[
"Ravasz",
"M.",
""
]
] | An analytical approximation is derived for the Zero Sum Multinomial distribution, which gives the Species Abundance Distribution in Neutral Community Models. The obtained distribution function describes computer simulation results on the model well, and leads to an interesting relation between the total number of individuals, the total number of species and the size of the most abundant species of the considered metacommunity. Computer simulations on neutral community models also prove the validity of this scaling relation. |
2005.06938 | Rolf Bader | Lenz Hartmann, Rolf Bader | Neural Synchronization of Music Large-Scale Form | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music large-scale form, the structure of musical units ranging over several
bars, is studied using EEG measurements of 25 participants who listened to the
first four minutes of a piece of electronic dance music (EDM). Grand-averages
of event-related potentials (ERPs) calculated for all electrodes show dynamics
in phase synchronization between different brain regions. Here local maxima of
the perceptual parameters correspond to strong synchronization, which culminates
at time points where musical large-scale form boundaries were perceptually
expected. Significant differences between local maxima and minima were found,
using a Paired Samples t-test, showing global neural synchronization between
different brain regions most strongly in the gamma-band EEG frequency range.
Such synchronization increases before musical large-scale form boundaries, and
decreases afterwards, therefore representing musical large-scale form
perception.
| [
{
"created": "Thu, 14 May 2020 13:18:29 GMT",
"version": "v1"
}
] | 2020-05-15 | [
[
"Hartmann",
"Lenz",
""
],
[
"Bader",
"Rolf",
""
]
] | Music large-scale form, the structure of musical units ranging over several bars, is studied using EEG measurements of 25 participants who listened to the first four minutes of a piece of electronic dance music (EDM). Grand-averages of event-related potentials (ERPs) calculated for all electrodes show dynamics in phase synchronization between different brain regions. Here local maxima of the perceptual parameters correspond to strong synchronization, which culminates at time points where musical large-scale form boundaries were perceptually expected. Significant differences between local maxima and minima were found, using a Paired Samples t-test, showing global neural synchronization between different brain regions most strongly in the gamma-band EEG frequency range. Such synchronization increases before musical large-scale form boundaries, and decreases afterwards, therefore representing musical large-scale form perception. |
2211.15667 | Thomas Jacob | Harsh Shah, Thomas Jacob, Amruta Parulekar, Anjali Amarapurkar, Amit
Sethi | Artificial Intelligence-based Eosinophil Counting in Gastrointestinal
Biopsies | 4 pages, 2 figures | null | null | null | q-bio.QM cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Normally eosinophils are present in the gastrointestinal (GI) tract of
healthy individuals. When the eosinophils increase beyond their usual amount in
the GI tract, a patient gets varied symptoms. Clinicians find it difficult to
diagnose this condition called eosinophilia. Early diagnosis can help in
treating patients. Histopathology is the gold standard in the diagnosis for
this condition. As this is an under-diagnosed condition, counting eosinophils
in the GI tract biopsies is important. In this study, we trained and tested a
deep neural network based on UNet to detect and count eosinophils in GI tract
biopsies. We used connected component analysis to extract the eosinophils. We
studied correlation of eosinophilic infiltration counted by AI with a manual
count. GI tract biopsy slides were stained with H&E stain. Slides were scanned
using a camera attached to a microscope and five high-power field images were
taken per slide. Pearson correlation coefficient was 85% between the
machine-detected and manual eosinophil counts on 300 held-out (test) images.
| [
{
"created": "Fri, 25 Nov 2022 07:18:28 GMT",
"version": "v1"
}
] | 2022-11-30 | [
[
"Shah",
"Harsh",
""
],
[
"Jacob",
"Thomas",
""
],
[
"Parulekar",
"Amruta",
""
],
[
"Amarapurkar",
"Anjali",
""
],
[
"Sethi",
"Amit",
""
]
] | Normally eosinophils are present in the gastrointestinal (GI) tract of healthy individuals. When the eosinophils increase beyond their usual amount in the GI tract, a patient gets varied symptoms. Clinicians find it difficult to diagnose this condition called eosinophilia. Early diagnosis can help in treating patients. Histopathology is the gold standard in the diagnosis for this condition. As this is an under-diagnosed condition, counting eosinophils in the GI tract biopsies is important. In this study, we trained and tested a deep neural network based on UNet to detect and count eosinophils in GI tract biopsies. We used connected component analysis to extract the eosinophils. We studied correlation of eosinophilic infiltration counted by AI with a manual count. GI tract biopsy slides were stained with H&E stain. Slides were scanned using a camera attached to a microscope and five high-power field images were taken per slide. Pearson correlation coefficient was 85% between the machine-detected and manual eosinophil counts on 300 held-out (test) images. |
1304.5031 | Yu-Fei He | Min Hu, Yu-Fei He | Tumor can originate from not only rare cancer stem cells | This work was finished 4 years ago and is still valuable nowadays | null | null | null | q-bio.CB q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumors are believed to consist of a heterogeneous population of tumor cells
originating from rare cancer stem cells (CSCs). However, emerging evidence
shows that a tumor may also originate from non-CSCs. Here, we give evidence
supporting that the number of tumorigenic tumor cells is higher than the number
of CSCs and that a tumor can also derive from non-CSCs. First, we applied an idealized
mathematical model and theoretically calculated that non-CSCs could initiate
a tumor if their proliferation potential was adequate. Next, we demonstrated by
experimental studies that 17.7%, 38.6% and 5.2% of tumor cells in murine B16
solid melanoma, H22 hepatoma and Lewis lung carcinoma, respectively, were
potentially tumorigenic. We propose that the rare CSCs, if they exist, are not
the only origin of a tumor.
| [
{
"created": "Thu, 18 Apr 2013 06:26:56 GMT",
"version": "v1"
}
] | 2013-04-19 | [
[
"Hu",
"Min",
""
],
[
"He",
"Yu-Fei",
""
]
] | Tumors are believed to consist of a heterogeneous population of tumor cells originating from rare cancer stem cells (CSCs). However, emerging evidence shows that a tumor may also originate from non-CSCs. Here, we give evidence supporting that the number of tumorigenic tumor cells is higher than the number of CSCs and that a tumor can also derive from non-CSCs. First, we applied an idealized mathematical model and theoretically calculated that non-CSCs could initiate a tumor if their proliferation potential was adequate. Next, we demonstrated by experimental studies that 17.7%, 38.6% and 5.2% of tumor cells in murine B16 solid melanoma, H22 hepatoma and Lewis lung carcinoma, respectively, were potentially tumorigenic. We propose that the rare CSCs, if they exist, are not the only origin of a tumor. |
1908.10523 | Dan Chen | Dan Chen, Yusuke Kikuchi, Kenichiro Fujiyama, Shunsuke Akimoto, Shinji
Oominato, Toshihiro Hasegawa | Improving the soil water module of the Decision Support System for
Agrotechnology Transfer cropping system model for subsurface irrigation | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring that crops use water and nutrients efficiently is an important
strategy for increasing the profitability of farming and reducing the
environmental load from agriculture. Subsurface irrigation can be an
alternative to surface irrigation as a means of reducing irrigation water losses,
but the application timing and amount are often difficult to determine.
Well-defined soil and crop models are useful for assisting decision support,
but most of the models developed to date have been for surface irrigation. The
present study examines whether the Decision Support System for Agrotechnology
Transfer (DSSAT, version 4.5) cropping system model is applicable for the
production of processing tomatoes with subsurface irrigation, and it revises
the soil module to simulate irrigation schemes with subsurface irrigation. Five
farmed fields in California, USA, are used to test the performance of the
model. The original DSSAT model fails to produce fruit yield by overestimating
the water deficiency. The soil water module is then revised by introducing the
movement of soil moisture due to a vertical soil moisture gradient. Moreover,
an external parameter optimization system is constructed to minimize the error
between the simulation and observations. The revised module reduces the errors
in the soil moisture profile at each field compared to those by the original
DSSAT model. The average soil moisture error decreases from 0.065m^3/m^3 to
0.029m^3/m^3. The yields estimated by the modified model are in a reasonable
range from 80 to 150 ton/ha, which is commonly observed under well-managed
conditions. The present results show that although further testing is required
for yield prediction, the present modification to the original DSSAT model
improves the precision of the soil moisture profile under subsurface irrigation
and can be used for decision support for efficient production of processing
tomatoes.
| [
{
"created": "Wed, 28 Aug 2019 02:25:19 GMT",
"version": "v1"
}
] | 2019-08-29 | [
[
"Chen",
"Dan",
""
],
[
"Kikuchi",
"Yusuke",
""
],
[
"Fujiyama",
"Kenichiro",
""
],
[
"Akimoto",
"Shunsuke",
""
],
[
"Oominato",
"Shinji",
""
],
[
"Hasegawa",
"Toshihiro",
""
]
] | Ensuring that crops use water and nutrients efficiently is an important strategy for increasing the profitability of farming and reducing the environmental load from agriculture. Subsurface irrigation can be an alternative to surface irrigation as a means of reducing irrigation water losses, but the application timing and amount are often difficult to determine. Well-defined soil and crop models are useful for assisting decision support, but most of the models developed to date have been for surface irrigation. The present study examines whether the Decision Support System for Agrotechnology Transfer (DSSAT, version 4.5) cropping system model is applicable for the production of processing tomatoes with subsurface irrigation, and it revises the soil module to simulate irrigation schemes with subsurface irrigation. Five farmed fields in California, USA, are used to test the performance of the model. The original DSSAT model fails to produce fruit yield by overestimating the water deficiency. The soil water module is then revised by introducing the movement of soil moisture due to a vertical soil moisture gradient. Moreover, an external parameter optimization system is constructed to minimize the error between the simulation and observations. The revised module reduces the errors in the soil moisture profile at each field compared to those by the original DSSAT model. The average soil moisture error decreases from 0.065m^3/m^3 to 0.029m^3/m^3. The yields estimated by the modified model are in a reasonable range from 80 to 150 ton/ha, which is commonly observed under well-managed conditions. The present results show that although further testing is required for yield prediction, the present modification to the original DSSAT model improves the precision of the soil moisture profile under subsurface irrigation and can be used for decision support for efficient production of processing tomatoes. |
2405.18587 | Gorka Zamora-L\'opez | Rapha\"el Bergoin and Alessandro Torcini and Gustavo Deco and Mathias
Quoy and Gorka Zamora-L\'opez | Emergence and long-term maintenance of modularity in plastic networks of
spiking neurons | 28 pages, 11 figures | null | null | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last three decades it has become clear that cortical regions,
interconnected via white-matter fibers, form a modular and hierarchical
network. This organization, which has also been seen at the microscopic level
in the form of interconnected neural assemblies, is believed to support the
coexistence of segregation (specialization) and integration (binding) of
information. A fundamental open question is to understand how this complex
structure can emerge in the brain. Here, we made a first step to address this
question and propose that adaptation to various inputs could be the key driving
mechanism for the formation of structural assemblies. To test this idea, we
develop a model of quadratic integrate-and-fire spiking neurons, trained with
stimuli targeting distinct sub-populations. The model is designed to satisfy
several biologically plausible constraints: (i) the network contains excitatory
and inhibitory neurons with Hebbian and anti-Hebbian STDP; and (ii) neither the
neuronal activity nor the synaptic weights are frozen after the learning phase.
Instead, the network continues firing spontaneously while synaptic plasticity
remains active. We find that only the combination of the two inhibitory STDP
sub-populations allows for the formation of stable modular organization in the
network, with each sub-population playing a distinct role. The Hebbian
sub-population controls for the firing rate and the anti-Hebbian mediates
pattern selectivity. After the learning phase, the network activity settles
into an asynchronous irregular resting-state, resembling the behaviour
typically observed in-vivo in the cortex. This post-learning activity also
displays spontaneous memory recalls, which are fundamental for the long-term
consolidation of the learned memory items. The model introduced represents a
starting point for the joint investigation of neural dynamics, connectivity and
plasticity.
| [
{
"created": "Tue, 28 May 2024 21:07:53 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Jul 2024 10:36:25 GMT",
"version": "v2"
}
] | 2024-07-16 | [
[
"Bergoin",
"Raphaël",
""
],
[
"Torcini",
"Alessandro",
""
],
[
"Deco",
"Gustavo",
""
],
[
"Quoy",
"Mathias",
""
],
[
"Zamora-López",
"Gorka",
""
]
] | In the last three decades it has become clear that cortical regions, interconnected via white-matter fibers, form a modular and hierarchical network. This organization, which has also been seen at the microscopic level in the form of interconnected neural assemblies, is believed to support the coexistence of segregation (specialization) and integration (binding) of information. A fundamental open question is to understand how this complex structure can emerge in the brain. Here, we made a first step to address this question and propose that adaptation to various inputs could be the key driving mechanism for the formation of structural assemblies. To test this idea, we develop a model of quadratic integrate-and-fire spiking neurons, trained with stimuli targeting distinct sub-populations. The model is designed to satisfy several biologically plausible constraints: (i) the network contains excitatory and inhibitory neurons with Hebbian and anti-Hebbian STDP; and (ii) neither the neuronal activity nor the synaptic weights are frozen after the learning phase. Instead, the network continues firing spontaneously while synaptic plasticity remains active. We find that only the combination of the two inhibitory STDP sub-populations allows for the formation of stable modular organization in the network, with each sub-population playing a distinct role. The Hebbian sub-population controls for the firing rate and the anti-Hebbian mediates pattern selectivity. After the learning phase, the network activity settles into an asynchronous irregular resting-state, resembling the behaviour typically observed in-vivo in the cortex. This post-learning activity also displays spontaneous memory recalls, which are fundamental for the long-term consolidation of the learned memory items. The model introduced represents a starting point for the joint investigation of neural dynamics, connectivity and plasticity. |
2305.08830 | Josinaldo Menezes | J. Menezes and M. Tenorio | Spatial patterns and biodiversity in rock-paper-scissors models with
regional unevenness | 17 pages, 7 figures | Journal of Physics: Complexity 4, 025015 (2023) | 10.1088/2632-072X/acd610 | null | q-bio.PE nlin.AO nlin.PS physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Climate change may affect ecosystems by destabilising relationships among
species. We investigate spatial rock-paper-scissors models with a regional
unevenness that reduces the selection capacity of organisms of one species. Our
results show that the regionally weak species predominates in the local
ecosystem, while spiral patterns appear far from the region, where individuals
of every species play the rock-paper-scissors game with the same strength.
Because the weak species controls all local territory, it is attractive for the
other species to enter the local ecosystem to conquer the territory. However,
our stochastic simulations show that the transitory waves formed when organisms
of the strong species reach the region are quickly destroyed because of local
strength unbalance in the selection game rules. Computing the effect of the
topology on population dynamics, we find that the prevalence of the weak
species becomes more significant if the transition of the selection capacity to
the area of uneven rock-paper-scissors rules is smooth. Finally, our findings
show that the biodiversity loss due to the arising of regional unevenness is
minimised if the transition to the region where the cyclic game is unbalanced
is abrupt. Our results may be helpful to biologists in comprehending the
consequences of changes in the environmental conditions on species coexistence
and spatial patterns in complex systems.
| [
{
"created": "Mon, 15 May 2023 17:45:50 GMT",
"version": "v1"
}
] | 2023-06-06 | [
[
"Menezes",
"J.",
""
],
[
"Tenorio",
"M.",
""
]
] | Climate change may affect ecosystems by destabilising relationships among species. We investigate spatial rock-paper-scissors models with a regional unevenness that reduces the selection capacity of organisms of one species. Our results show that the regionally weak species predominates in the local ecosystem, while spiral patterns appear far from the region, where individuals of every species play the rock-paper-scissors game with the same strength. Because the weak species controls all local territory, it is attractive for the other species to enter the local ecosystem to conquer the territory. However, our stochastic simulations show that the transitory waves formed when organisms of the strong species reach the region are quickly destroyed because of local strength imbalance in the selection game rules. Computing the effect of the topology on population dynamics, we find that the prevalence of the weak species becomes more significant if the transition of the selection capacity to the area of uneven rock-paper-scissors rules is smooth. Finally, our findings show that the biodiversity loss due to the arising of regional unevenness is minimised if the transition to the region where the cyclic game is unbalanced is abrupt. Our results may be helpful to biologists in comprehending the consequences of changes in the environmental conditions on species coexistence and spatial patterns in complex systems. |
1712.09462 | Tetsuya Kobayashi | Tetsuya J. Kobayashi and Yuki Sughiyama | Individual Sensing can Gain more Fitness than its Information | 5figures | null | null | null | q-bio.PE physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mutual information and its causal variant, directed information, have been
widely used to quantitatively characterize the performance of biological
sensing and information transduction. However, once coupled with selection in
response to decision-making, the sensing signal could have more or less
evolutionary value than its mutual or directed information. In this work, we
show that an individually sensed signal always has a better fitness value, on
average, than its mutual or directed information. The fitness gain, which
satisfies fluctuation relations (FRs), is attributed to the selection of
organisms in a population that obtain a better sensing signal by chance. A new
quantity, similar to the coarse-grained entropy production in information
thermodynamics, is introduced to quantify the total fitness gain from
individual sensing, which also satisfies FRs. Using this quantity, the
optimizing fitness gain from individual sensing is shown to be related to
fidelity allocations for individual environmental histories. Our results are
supplemented by numerical verifications of FRs, and a discussion on how this
problem is linked to information encoding and decoding.
| [
{
"created": "Wed, 27 Dec 2017 00:15:39 GMT",
"version": "v1"
}
] | 2017-12-29 | [
[
"Kobayashi",
"Tetsuya J.",
""
],
[
"Sughiyama",
"Yuki",
""
]
] | Mutual information and its causal variant, directed information, have been widely used to quantitatively characterize the performance of biological sensing and information transduction. However, once coupled with selection in response to decision-making, the sensing signal could have more or less evolutionary value than its mutual or directed information. In this work, we show that an individually sensed signal always has a better fitness value, on average, than its mutual or directed information. The fitness gain, which satisfies fluctuation relations (FRs), is attributed to the selection of organisms in a population that obtain a better sensing signal by chance. A new quantity, similar to the coarse-grained entropy production in information thermodynamics, is introduced to quantify the total fitness gain from individual sensing, which also satisfies FRs. Using this quantity, the optimizing fitness gain from individual sensing is shown to be related to fidelity allocations for individual environmental histories. Our results are supplemented by numerical verifications of FRs, and a discussion on how this problem is linked to information encoding and decoding. |
2406.11178 | Nathaniel Linden | Nathaniel Linden-Santangeli, Jin Zhang, Boris Kramer, Padmini
Rangamani | Increasing certainty in systems biology models using Bayesian multimodel
inference | 25 pages; 5 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Mathematical models are indispensable to the systems biology toolkit for
studying the structure and behavior of intracellular signaling networks. A
common approach to modeling is to develop a system of equations that encode the
known biology using approximations and simplifying assumptions. As a result,
the same signaling pathway can be represented by multiple models, each with its
set of underlying assumptions, which opens up challenges for model selection
and decreases certainty in model predictions. Here, we use Bayesian multimodel
inference to develop a framework to increase certainty in systems biology
models. Using models of the extracellular regulated kinase (ERK) pathway, we
first show that multimodel inference increases predictive certainty and yields
predictors that are robust to changes in the set of available models. We then
show that predictions made with multimodel inference are robust to data
uncertainties introduced by decreasing the measurement duration and reducing
the sample size. Finally, we use multimodel inference to identify a new model
to explain experimentally measured sub-cellular location-specific ERK activity
dynamics. In summary, our framework highlights multimodel inference as a
disciplined approach to increasing the certainty of intracellular signaling
activity predictions.
| [
{
"created": "Mon, 17 Jun 2024 03:30:20 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Linden-Santangeli",
"Nathaniel",
""
],
[
"Zhang",
"Jin",
""
],
[
"Kramer",
"Boris",
""
],
[
"Rangamani",
"Padmini",
""
]
] | Mathematical models are indispensable to the systems biology toolkit for studying the structure and behavior of intracellular signaling networks. A common approach to modeling is to develop a system of equations that encode the known biology using approximations and simplifying assumptions. As a result, the same signaling pathway can be represented by multiple models, each with its set of underlying assumptions, which opens up challenges for model selection and decreases certainty in model predictions. Here, we use Bayesian multimodel inference to develop a framework to increase certainty in systems biology models. Using models of the extracellular regulated kinase (ERK) pathway, we first show that multimodel inference increases predictive certainty and yields predictors that are robust to changes in the set of available models. We then show that predictions made with multimodel inference are robust to data uncertainties introduced by decreasing the measurement duration and reducing the sample size. Finally, we use multimodel inference to identify a new model to explain experimentally measured sub-cellular location-specific ERK activity dynamics. In summary, our framework highlights multimodel inference as a disciplined approach to increasing the certainty of intracellular signaling activity predictions. |
q-bio/0608006 | Ulrich S. Schwarz | Ulrich S. Schwarz, Thorsten Erdmann and Ilka B. Bischofs (Heidelberg
University) | Focal adhesions as mechanosensors: the two-spring model | Latex, 17 pages, 5 postscript figures included | BioSystems 83: 225-232 (2006) | null | null | q-bio.SC q-bio.BM | null | Adhesion-dependent cells actively sense the mechanical properties of their
environment through mechanotransductory processes at focal adhesions, which are
integrin-based contacts connecting the extracellular matrix to the
cytoskeleton. Here we present first steps towards a quantitative understanding
of focal adhesions as mechanosensors. It has been shown experimentally that
high levels of force are related to growth of and signaling at focal adhesions.
In particular, activation of the small GTPase Rho through focal adhesions leads
to the formation of stress fibers. Here we discuss one way in which force might
regulate the internal state of focal adhesions, namely by modulating the
internal rupture dynamics of focal adhesions. A simple two-spring model shows
that the stiffer the environment, the more efficient cellular force is built up
at focal adhesions by molecular motors interacting with the actin filaments.
| [
{
"created": "Thu, 3 Aug 2006 15:38:57 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Schwarz",
"Ulrich S.",
"",
"Heidelberg\n University"
],
[
"Erdmann",
"Thorsten",
"",
"Heidelberg\n University"
],
[
"Bischofs",
"Ilka B.",
"",
"Heidelberg\n University"
]
] | Adhesion-dependent cells actively sense the mechanical properties of their environment through mechanotransductory processes at focal adhesions, which are integrin-based contacts connecting the extracellular matrix to the cytoskeleton. Here we present first steps towards a quantitative understanding of focal adhesions as mechanosensors. It has been shown experimentally that high levels of force are related to growth of and signaling at focal adhesions. In particular, activation of the small GTPase Rho through focal adhesions leads to the formation of stress fibers. Here we discuss one way in which force might regulate the internal state of focal adhesions, namely by modulating the internal rupture dynamics of focal adhesions. A simple two-spring model shows that the stiffer the environment, the more efficient cellular force is built up at focal adhesions by molecular motors interacting with the actin filaments. |
1211.2160 | Christophe Dessimoz | Stefano Iantorno, Kevin Gori, Nick Goldman, Manuel Gil, and Christophe
Dessimoz | Who Watches the Watchmen? An Appraisal of Benchmarks for Multiple
Sequence Alignment | Review | Methods Mol Biol. 2014;1079:59-73 | 10.1007/978-1-62703-646-7_4 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple sequence alignment (MSA) is a fundamental and ubiquitous technique
in bioinformatics used to infer related residues among biological sequences.
Thus alignment accuracy is crucial to a vast range of analyses, often in ways
difficult to assess in those analyses. To compare the performance of different
aligners and help detect systematic errors in alignments, a number of
benchmarking strategies have been pursued. Here we present an overview of the
main strategies--based on simulation, consistency, protein structure, and
phylogeny--and discuss their different advantages and associated risks. We
outline a set of desirable characteristics for effective benchmarking, and
evaluate each strategy in light of them. We conclude that there is currently no
universally applicable means of benchmarking MSA, and that developers and users
of alignment tools should base their choice of benchmark depending on the
context of application--with a keen awareness of the assumptions underlying
each benchmarking strategy.
| [
{
"created": "Fri, 9 Nov 2012 15:26:44 GMT",
"version": "v1"
}
] | 2015-01-09 | [
[
"Iantorno",
"Stefano",
""
],
[
"Gori",
"Kevin",
""
],
[
"Goldman",
"Nick",
""
],
[
"Gil",
"Manuel",
""
],
[
"Dessimoz",
"Christophe",
""
]
] | Multiple sequence alignment (MSA) is a fundamental and ubiquitous technique in bioinformatics used to infer related residues among biological sequences. Thus alignment accuracy is crucial to a vast range of analyses, often in ways difficult to assess in those analyses. To compare the performance of different aligners and help detect systematic errors in alignments, a number of benchmarking strategies have been pursued. Here we present an overview of the main strategies--based on simulation, consistency, protein structure, and phylogeny--and discuss their different advantages and associated risks. We outline a set of desirable characteristics for effective benchmarking, and evaluate each strategy in light of them. We conclude that there is currently no universally applicable means of benchmarking MSA, and that developers and users of alignment tools should base their choice of benchmark depending on the context of application--with a keen awareness of the assumptions underlying each benchmarking strategy. |
1907.12071 | Xiaohan Lin | Yuanyuan Mi, Xiaohan Lin, Xiaolong Zou, Zilong Ji, Tiejun Huang, Si Wu | Spatiotemporal Information Processing with a Reservoir Decision-making
Network | 9 pages, 6 figures | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatiotemporal information processing is fundamental to brain functions. The
present study investigates a canonic neural network model for spatiotemporal
pattern recognition. Specifically, the model consists of two modules, a
reservoir subnetwork and a decision-making subnetwork. The former projects
complex spatiotemporal patterns into spatially separated neural
representations, and the latter reads out these neural representations via
integrating information over time; the two modules are combined together via
supervised-learning using known examples. We elucidate the working mechanism of
the model and demonstrate its feasibility for discriminating complex
spatiotemporal patterns. Our model reproduces the phenomenon of recognizing
looming patterns in the neural system, and can learn to discriminate gait with
very few training examples. We hope this study gives us insight into
understanding how spatiotemporal information is processed in the brain and
helps us to develop brain-inspired application algorithms.
| [
{
"created": "Sun, 28 Jul 2019 11:04:34 GMT",
"version": "v1"
}
] | 2019-07-30 | [
[
"Mi",
"Yuanyuan",
""
],
[
"Lin",
"Xiaohan",
""
],
[
"Zou",
"Xiaolong",
""
],
[
"Ji",
"Zilong",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Wu",
"Si",
""
]
] | Spatiotemporal information processing is fundamental to brain functions. The present study investigates a canonic neural network model for spatiotemporal pattern recognition. Specifically, the model consists of two modules, a reservoir subnetwork and a decision-making subnetwork. The former projects complex spatiotemporal patterns into spatially separated neural representations, and the latter reads out these neural representations via integrating information over time; the two modules are combined together via supervised-learning using known examples. We elucidate the working mechanism of the model and demonstrate its feasibility for discriminating complex spatiotemporal patterns. Our model reproduces the phenomenon of recognizing looming patterns in the neural system, and can learn to discriminate gait with very few training examples. We hope this study gives us insight into understanding how spatiotemporal information is processed in the brain and helps us to develop brain-inspired application algorithms. |
2002.04945 | Tianyu Zeng | Tianyu Zeng, Yunong Zhang, Zhenyu Li, Xiao Liu, and Binbin Qiu | Predictions of 2019-nCoV Transmission Ending via Comprehensive Methods | null | null | null | null | q-bio.PE cs.LG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the SARS outbreak in 2003, a lot of predictive epidemiological models
have been proposed. At the end of 2019, a novel coronavirus, termed as
2019-nCoV, has broken out and is propagating in China and the world. Here we
propose a multi-model ordinary differential equation set neural network
(MMODEs-NN) and model-free methods to predict the interprovincial transmissions
in mainland China, especially those from Hubei Province. Compared with the
previously proposed epidemiological models, the proposed network can simulate
the transportations with the ODEs activation method, while the model-free
methods based on the sigmoid function, Gaussian function, and Poisson
distribution are linear and fast to generate reasonable predictions. According
to the numerical experiments and the realities, the special policies for
controlling the disease are successful in some provinces, and the transmission
of the epidemic, whose outbreak time is close to the beginning of China Spring
Festival travel rush, is more likely to decelerate before February 18 and to
end before April 2020. The proposed mathematical and artificial intelligence
methods can give consistent and reasonable predictions of the 2019-nCoV ending.
We anticipate our work to be a starting point for comprehensive prediction
researches of the 2019-nCoV.
| [
{
"created": "Wed, 12 Feb 2020 12:26:08 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Feb 2020 06:08:07 GMT",
"version": "v2"
}
] | 2020-02-21 | [
[
"Zeng",
"Tianyu",
""
],
[
"Zhang",
"Yunong",
""
],
[
"Li",
"Zhenyu",
""
],
[
"Liu",
"Xiao",
""
],
[
"Qiu",
"Binbin",
""
]
] | Since the SARS outbreak in 2003, a lot of predictive epidemiological models have been proposed. At the end of 2019, a novel coronavirus, termed as 2019-nCoV, has broken out and is propagating in China and the world. Here we propose a multi-model ordinary differential equation set neural network (MMODEs-NN) and model-free methods to predict the interprovincial transmissions in mainland China, especially those from Hubei Province. Compared with the previously proposed epidemiological models, the proposed network can simulate the transportations with the ODEs activation method, while the model-free methods based on the sigmoid function, Gaussian function, and Poisson distribution are linear and fast to generate reasonable predictions. According to the numerical experiments and the realities, the special policies for controlling the disease are successful in some provinces, and the transmission of the epidemic, whose outbreak time is close to the beginning of China Spring Festival travel rush, is more likely to decelerate before February 18 and to end before April 2020. The proposed mathematical and artificial intelligence methods can give consistent and reasonable predictions of the 2019-nCoV ending. We anticipate our work to be a starting point for comprehensive prediction researches of the 2019-nCoV. |
2105.07069 | Joshua Faskowitz | Joshua Faskowitz, Richard F. Betzel, Olaf Sporns | Edges in Brain Networks: Contributions to Models of Structure and
Function | 35 pages, 4 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Network models describe the brain as sets of nodes and edges that represent
its distributed organization. So far, most discoveries in network neuroscience
have prioritized insights that highlight distinct groupings and specialized
functional contributions of network nodes. Importantly, these functional
contributions are determined and expressed by the web of their
interrelationships, formed by network edges. Here, we underscore the important
contributions made by brain network edges for understanding distributed brain
organization. Different types of edges represent different types of
relationships, including connectivity and similarity among nodes. Adopting a
specific definition of edges can fundamentally alter how we analyze and
interpret a brain network. Furthermore, edges can associate into collectives
and higher-order arrangements, describe time series, and form edge communities
that provide insights into brain network topology complementary to the
traditional node-centric perspective. Focusing on the edges, and the
higher-order or dynamic information they can provide, discloses previously
underappreciated aspects of structural and functional network organization.
| [
{
"created": "Fri, 14 May 2021 21:12:53 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"Faskowitz",
"Joshua",
""
],
[
"Betzel",
"Richard F.",
""
],
[
"Sporns",
"Olaf",
""
]
] | Network models describe the brain as sets of nodes and edges that represent its distributed organization. So far, most discoveries in network neuroscience have prioritized insights that highlight distinct groupings and specialized functional contributions of network nodes. Importantly, these functional contributions are determined and expressed by the web of their interrelationships, formed by network edges. Here, we underscore the important contributions made by brain network edges for understanding distributed brain organization. Different types of edges represent different types of relationships, including connectivity and similarity among nodes. Adopting a specific definition of edges can fundamentally alter how we analyze and interpret a brain network. Furthermore, edges can associate into collectives and higher-order arrangements, describe time series, and form edge communities that provide insights into brain network topology complementary to the traditional node-centric perspective. Focusing on the edges, and the higher-order or dynamic information they can provide, discloses previously underappreciated aspects of structural and functional network organization. |
1008.3358 | P. Grassberger | Orion Penner, Peter Grassberger, Maya Paczuski | Sequence alignment, mutual information, and dissimilarity measures for
constructing phylogenies | 19 pages + 16 pages of supplementary material | PloS one, Vol. 6, No. 1. (4 January 2011) | 10.1371/journal.pone.0014373 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing sequence alignment algorithms use heuristic scoring schemes which
cannot be used as objective distance metrics. Therefore one relies on measures
like the p- or log-det distances, or makes explicit, and often simplistic,
assumptions about sequence evolution. Information theory provides an
alternative, in the form of mutual information (MI) which is, in principle, an
objective and model independent similarity measure. MI can be estimated by
concatenating and zipping sequences, yielding thereby the "normalized
compression distance". So far this has produced promising results, but with
uncontrolled errors. We describe a simple approach to get robust estimates of
MI from global pairwise alignments. Using standard alignment algorithms, this
gives for animal mitochondrial DNA estimates that are strikingly close to
estimates obtained from the alignment free methods mentioned above. Our main
result uses algorithmic (Kolmogorov) information theory, but we show that
similar results can also be obtained from Shannon theory. Due to the fact that
it is not additive, normalized compression distance is not an optimal metric
for phylogenetics, but we propose a simple modification that overcomes the
issue of additivity. We test several versions of our MI based distance measures
on a large number of randomly chosen quartets and demonstrate that they all
perform better than traditional measures like the Kimura or log-det (resp.
paralinear) distances. Even a simplified version based on single letter Shannon
entropies, which can be easily incorporated in existing software packages, gave
superior results throughout the entire animal kingdom. But we see the main
virtue of our approach in a more general way. For example, it can also help to
judge the relative merits of different alignment algorithms, by estimating the
significance of specific alignments.
| [
{
"created": "Thu, 19 Aug 2010 17:26:27 GMT",
"version": "v1"
}
] | 2015-05-19 | [
[
"Penner",
"Orion",
""
],
[
"Grassberger",
"Peter",
""
],
[
"Paczuski",
"Maya",
""
]
] | Existing sequence alignment algorithms use heuristic scoring schemes which cannot be used as objective distance metrics. Therefore one relies on measures like the p- or log-det distances, or makes explicit, and often simplistic, assumptions about sequence evolution. Information theory provides an alternative, in the form of mutual information (MI) which is, in principle, an objective and model independent similarity measure. MI can be estimated by concatenating and zipping sequences, yielding thereby the "normalized compression distance". So far this has produced promising results, but with uncontrolled errors. We describe a simple approach to get robust estimates of MI from global pairwise alignments. Using standard alignment algorithms, this gives for animal mitochondrial DNA estimates that are strikingly close to estimates obtained from the alignment free methods mentioned above. Our main result uses algorithmic (Kolmogorov) information theory, but we show that similar results can also be obtained from Shannon theory. Due to the fact that it is not additive, normalized compression distance is not an optimal metric for phylogenetics, but we propose a simple modification that overcomes the issue of additivity. We test several versions of our MI based distance measures on a large number of randomly chosen quartets and demonstrate that they all perform better than traditional measures like the Kimura or log-det (resp. paralinear) distances. Even a simplified version based on single letter Shannon entropies, which can be easily incorporated in existing software packages, gave superior results throughout the entire animal kingdom. But we see the main virtue of our approach in a more general way. For example, it can also help to judge the relative merits of different alignment algorithms, by estimating the significance of specific alignments. |
1008.1359 | Alexander Bershadskii | A. Bershadskii | Dynamic cluster-scaling in DNA | null | Physics Letters A 375 (2011) 335-338 | 10.1016/j.physleta.2010.11.039 | null | q-bio.BM nlin.CD q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is shown that the nucleotide sequences in DNA molecules have
cluster-scaling properties (discovered for the first time in turbulent
processes: Sreenivasan and Bershadskii, 2006, J. Stat. Phys., 125, 1141-1153.).
These properties are relevant to both types of nucleotide pair-bases
interactions: hydrogen bonds and stacking interactions. It is shown that taking
into account the cluster-scaling properties can help to improve heterogeneous
models of the DNA dynamics. Two human genes: BRCA2 and NRXN1, have been
considered as examples.
| [
{
"created": "Sat, 7 Aug 2010 18:47:31 GMT",
"version": "v1"
}
] | 2011-03-04 | [
[
"Bershadskii",
"A.",
""
]
] | It is shown that the nucleotide sequences in DNA molecules have cluster-scaling properties (discovered for the first time in turbulent processes: Sreenivasan and Bershadskii, 2006, J. Stat. Phys., 125, 1141-1153.). These properties are relevant to both types of nucleotide pair-bases interactions: hydrogen bonds and stacking interactions. It is shown that taking into account the cluster-scaling properties can help to improve heterogeneous models of the DNA dynamics. Two human genes: BRCA2 and NRXN1, have been considered as examples. |
2008.11602 | Jorge Vila | Jorge A. Vila | About the Protein Space Vastness | A Letter of 9 pages without figures, tables, or supporting
information | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An accurate estimation of the Protein Space size, in light of the factors
that govern it, is a long-standing problem and of paramount importance in
evolutionary biology, since it determines the nature of protein evolvability. A
simple analysis will enable us to, firstly, reduce an unrealistic Protein Space
size of ~10^130 sequences, for a 100-residues polypeptide chain, to ~10^9
functional proteins and, secondly, estimate a robust average-mutation rate per
amino acid (x ~1.23) and infer from it, in light of the protein marginal
stability, that only a fraction of the sequence will be available at any one
time for a functional protein to evolve. Although this result does not solve
the Protein Space vastness problem, frames it in a more rational one.
| [
{
"created": "Wed, 26 Aug 2020 14:57:15 GMT",
"version": "v1"
}
] | 2020-08-27 | [
[
"Vila",
"Jorge A.",
""
]
] | An accurate estimation of the Protein Space size, in light of the factors that govern it, is a long-standing problem and of paramount importance in evolutionary biology, since it determines the nature of protein evolvability. A simple analysis will enable us to, firstly, reduce an unrealistic Protein Space size of ~10^130 sequences, for a 100-residues polypeptide chain, to ~10^9 functional proteins and, secondly, estimate a robust average-mutation rate per amino acid (x ~1.23) and infer from it, in light of the protein marginal stability, that only a fraction of the sequence will be available at any one time for a functional protein to evolve. Although this result does not solve the Protein Space vastness problem, frames it in a more rational one. |
1803.10653 | Karolis Misiunas | Karolis Misiunas, Niklas Ermann, Ulrich F. Keyser | QuipuNet: convolutional neural network for single-molecule nanopore
sensing | null | Nano Lett. 18, 6, 4040-4045 (2018) | 10.1021/acs.nanolett.8b01709 | null | q-bio.QM physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nanopore sensing is a versatile technique for the analysis of molecules on
the single-molecule level. However, extracting information from data with
established algorithms usually requires time-consuming checks by an experienced
researcher due to inherent variability of solid-state nanopores. Here, we
develop a convolutional neural network (CNN) for the fully automated extraction
of information from the time-series signals obtained by nanopore sensors. In
our demonstration, we use a previously published dataset on multiplexed
single-molecule protein sensing. The neural network learns to classify
translocation events with greater accuracy than previously possible, while also
increasing the number of analysable events by a factor of five. Our results
demonstrate that deep learning can achieve significant improvements in single
molecule nanopore detection with potential applications in rapid diagnostics.
| [
{
"created": "Tue, 27 Mar 2018 17:22:39 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Apr 2018 16:54:57 GMT",
"version": "v2"
},
{
"created": "Tue, 29 May 2018 17:08:33 GMT",
"version": "v3"
}
] | 2018-06-14 | [
[
"Misiunas",
"Karolis",
""
],
[
"Ermann",
"Niklas",
""
],
[
"Keyser",
"Ulrich F.",
""
]
] | Nanopore sensing is a versatile technique for the analysis of molecules on the single-molecule level. However, extracting information from data with established algorithms usually requires time-consuming checks by an experienced researcher due to inherent variability of solid-state nanopores. Here, we develop a convolutional neural network (CNN) for the fully automated extraction of information from the time-series signals obtained by nanopore sensors. In our demonstration, we use a previously published dataset on multiplexed single-molecule protein sensing. The neural network learns to classify translocation events with greater accuracy than previously possible, while also increasing the number of analysable events by a factor of five. Our results demonstrate that deep learning can achieve significant improvements in single molecule nanopore detection with potential applications in rapid diagnostics. |
1710.06259 | Eugenio Cinquemani | Eugenio Cinquemani | Stochastic reaction networks with input processes: Analysis and
applications to reporter gene systems | null | null | null | null | q-bio.QM math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic reaction network models are widely utilized in biology and
chemistry to describe the probabilistic dynamics of biochemical systems in
general, and gene interaction networks in particular. Most often, statistical
analysis and inference of these systems is addressed by parametric approaches,
where the laws governing exogenous input processes, if present, are themselves
fixed in advance. Motivated by reporter gene systems, widely utilized in
biology to monitor gene activation at the individual cell level, we address the
analysis of reaction networks with state-affine reaction rates and arbitrary
input processes. We derive a generalization of the so-called moment equations
where the dynamics of the network statistics are expressed as a function of the
input process statistics. In stationary conditions, we provide a spectral
analysis of the system and elaborate on connections with linear filtering. We
then apply the theoretical results to develop a method for the reconstruction
of input process statistics, namely the gene activation autocovariance
function, from reporter gene population snapshot data, and demonstrate its
performance on a simulated case study.
| [
{
"created": "Tue, 17 Oct 2017 13:29:10 GMT",
"version": "v1"
}
] | 2017-10-18 | [
[
"Cinquemani",
"Eugenio",
""
]
] | Stochastic reaction network models are widely utilized in biology and chemistry to describe the probabilistic dynamics of biochemical systems in general, and gene interaction networks in particular. Most often, statistical analysis and inference of these systems is addressed by parametric approaches, where the laws governing exogenous input processes, if present, are themselves fixed in advance. Motivated by reporter gene systems, widely utilized in biology to monitor gene activation at the individual cell level, we address the analysis of reaction networks with state-affine reaction rates and arbitrary input processes. We derive a generalization of the so-called moment equations where the dynamics of the network statistics are expressed as a function of the input process statistics. In stationary conditions, we provide a spectral analysis of the system and elaborate on connections with linear filtering. We then apply the theoretical results to develop a method for the reconstruction of input process statistics, namely the gene activation autocovariance function, from reporter gene population snapshot data, and demonstrate its performance on a simulated case study. |
2110.02031 | Shirsendu Podder | Shirsendu Podder, Simone Righi, Francesca Pancotto | Reputation and Punishment sustain cooperation in the Optional Public
Goods Game | Phil. Trans. R. Soc | null | 10.1098/rstb.2020.0293 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Cooperative behaviour has been extensively studied as a choice between
cooperation and defection. However, the possibility to not participate is also
frequently available. This type of problem can be studied through the optional
public goods game. The introduction of the "Loner" strategy, allows players to
withdraw from the game, which leads to a cooperator-defector-loner cycle. While
prosocial punishment can help increase cooperation, anti-social punishment --
where defectors punish cooperators -- causes its downfall in both experimental
and theoretical studies.
In this paper, we introduce social norms that allow agents to condition their
behaviour to the reputation of their peers. We benchmark this both with respect
to the standard optional public goods game and to the variant where all types
of punishment are allowed. We find that a social norm imposing a more moderate
reputational penalty for opting out than for defecting, increases cooperation.
When, besides reputation, punishment is also possible, the two mechanisms work
synergically under all social norms that do not assign to loners a strictly
worse reputation than to defectors. Under this latter setup, the high levels of
cooperation are sustained by conditional strategies, which largely reduce the
use of pro-social punishment and almost completely eliminate anti-social
punishment.
| [
{
"created": "Tue, 5 Oct 2021 13:24:00 GMT",
"version": "v1"
}
] | 2021-10-06 | [
[
"Podder",
"Shirsendu",
""
],
[
"Righi",
"Simone",
""
],
[
"Pancotto",
"Francesca",
""
]
] | Cooperative behaviour has been extensively studied as a choice between cooperation and defection. However, the possibility to not participate is also frequently available. This type of problem can be studied through the optional public goods game. The introduction of the "Loner" strategy, allows players to withdraw from the game, which leads to a cooperator-defector-loner cycle. While prosocial punishment can help increase cooperation, anti-social punishment -- where defectors punish cooperators -- causes its downfall in both experimental and theoretical studies. In this paper, we introduce social norms that allow agents to condition their behaviour to the reputation of their peers. We benchmark this both with respect to the standard optional public goods game and to the variant where all types of punishment are allowed. We find that a social norm imposing a more moderate reputational penalty for opting out than for defecting, increases cooperation. When, besides reputation, punishment is also possible, the two mechanisms work synergically under all social norms that do not assign to loners a strictly worse reputation than to defectors. Under this latter setup, the high levels of cooperation are sustained by conditional strategies, which largely reduce the use of pro-social punishment and almost completely eliminate anti-social punishment. |
1906.02321 | Olga Vsevolozhskaya | Olga A Vsevolozhskaya, Min Shi, Fengjiao Hu, Dmitri V Zaykin | DOT: Gene-set analysis by combining decorrelated association statistics | null | null | 10.1371/journal.pcbi.1007819 | null | q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Historically, the majority of statistical association methods have been
designed assuming availability of SNP-level information. However, modern
genetic and sequencing data present new challenges to access and sharing of
genotype-phenotype datasets, including cost management, difficulties in
consolidation of records across research groups, etc. These issues make methods
based on SNP-level summary statistics for a joint analysis of variants in a
group particularly appealing. The most common form of combining statistics is a
sum of SNP-level squared scores, possibly weighted, as in burden tests for rare
variants. The overall significance of the resulting statistic is evaluated
using its distribution under the null hypothesis. Here, we demonstrate that
this basic approach can be substantially improved by decorrelating scores prior
to their addition, resulting in remarkable power gains in situations that are
most commonly encountered in practice; namely, under heterogeneity of effect
sizes and diversity between pairwise LD. In these situations, the power of the
traditional test, based on the added squared scores, quickly reaches a ceiling,
as the number of variants increases. Thus, the traditional approach does not
benefit from information potentially contained in any additional SNPs, while
our decorrelation by orthogonal transformation (DOT) method yields steady gain
in power. We present theoretical and computational analyses of both approaches,
and reveal causes behind sometimes dramatic difference in their respective
powers. We showcase DOT by analyzing breast cancer data, in which our method
strengthened levels of previously reported associations and implied the
possibility of multiple new alleles that jointly confer breast cancer risk.
| [
{
"created": "Wed, 5 Jun 2019 21:49:09 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2019 20:26:07 GMT",
"version": "v2"
}
] | 2020-07-01 | [
[
"Vsevolozhskaya",
"Olga A",
""
],
[
"Shi",
"Min",
""
],
[
"Hu",
"Fengjiao",
""
],
[
"Zaykin",
"Dmitri V",
""
]
] | Historically, the majority of statistical association methods have been designed assuming availability of SNP-level information. However, modern genetic and sequencing data present new challenges to access and sharing of genotype-phenotype datasets, including cost management, difficulties in consolidation of records across research groups, etc. These issues make methods based on SNP-level summary statistics for a joint analysis of variants in a group particularly appealing. The most common form of combining statistics is a sum of SNP-level squared scores, possibly weighted, as in burden tests for rare variants. The overall significance of the resulting statistic is evaluated using its distribution under the null hypothesis. Here, we demonstrate that this basic approach can be substantially improved by decorrelating scores prior to their addition, resulting in remarkable power gains in situations that are most commonly encountered in practice; namely, under heterogeneity of effect sizes and diversity between pairwise LD. In these situations, the power of the traditional test, based on the added squared scores, quickly reaches a ceiling, as the number of variants increases. Thus, the traditional approach does not benefit from information potentially contained in any additional SNPs, while our decorrelation by orthogonal transformation (DOT) method yields steady gain in power. We present theoretical and computational analyses of both approaches, and reveal causes behind sometimes dramatic difference in their respective powers. We showcase DOT by analyzing breast cancer data, in which our method strengthened levels of previously reported associations and implied the possibility of multiple new alleles that jointly confer breast cancer risk. |
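The DOT abstract above contrasts two ways of combining SNP-level association scores: the traditional statistic sums squared (possibly weighted) scores directly, while DOT first decorrelates the scores using the LD correlation matrix, so the sum is exactly chi-square under the null. A minimal sketch of both statistics (not the paper's code; the toy LD matrix and values are illustrative):

```python
import numpy as np
from scipy import stats

def dot_statistic(z, sigma):
    """Sum of squared scores after decorrelation: y = sigma^(-1/2) z.
    Under H0, y ~ N(0, I), so the statistic is chi-square, df = len(z)."""
    vals, vecs = np.linalg.eigh(sigma)               # sigma: LD correlation matrix
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T  # symmetric inverse square root
    y = inv_sqrt @ z
    return float(y @ y)

def naive_statistic(z):
    """Traditional combination: plain sum of squared SNP-level scores."""
    return float(z @ z)

rng = np.random.default_rng(0)
k = 5
sigma = 0.4 * np.ones((k, k)) + 0.6 * np.eye(k)       # toy equicorrelated LD
z = rng.multivariate_normal(np.zeros(k), sigma)       # null Z-scores
p_dot = stats.chi2.sf(dot_statistic(z, sigma), df=k)  # valid p-value under H0
```

Unlike the decorrelated sum, `naive_statistic(z)` is not chi-square(k) under LD; its null distribution is a weighted sum of chi-squares, which is the source of the power ceiling described in the abstract.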
1802.06090 | Yamila Garc\'ia-Martinez | Y. B. Ruiz-Blanco, Y. Almeida, C. M. Sotomayor-Torres and Y. Garc\'ia | Unveiled electric profiles within hydrogen bonds suggest DNA base pairs
with similar bond strengths | Full version is available on:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0185638 | PLoS ONE 12(10): e0185638 (2017) | 10.1371/journal.pone.0185638 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electrical forces are the background of all the interactions occurring in
biochemical systems. Building on this, and using a combination of ab-initio and
ad-hoc models, we introduce the first description of electric field profiles
with intrabond resolution to support a characterization of single-bond forces
according to their electrical origin. This fundamental issue has eluded a
physical description so far. Our method is applied to describe hydrogen bonds
(HB) in DNA base pairs. Numerical results reveal that base pairs in DNA could
be equivalent considering HB strength contributions, which challenges previous
interpretations of thermodynamic properties of DNA based on the assumption that
Adenine/Thymine pairs are weaker than Guanine/Cytosine pairs due to the sole
difference in the number of HB. Thus, our methodology provides solid
foundations to support the development of extended models intended to go deeper
into the molecular mechanisms of DNA functioning.
| [
{
"created": "Sun, 4 Feb 2018 22:37:08 GMT",
"version": "v1"
}
] | 2018-02-20 | [
[
"Ruiz-Blanco",
"Y. B.",
""
],
[
"Almeida",
"Y.",
""
],
[
"Sotomayor-Torres",
"C. M.",
""
],
[
"García",
"Y.",
""
]
] | Electrical forces are the background of all the interactions occurring in biochemical systems. From here and by using a combination of ab-initio and ad-hoc models, we introduce the first description of electric field profiles with intrabond resolution to support a characterization of single bond forces attending to its electrical origin. This fundamental issue has eluded a physical description so far. Our method is applied to describe hydrogen bonds (HB) in DNA base pairs. Numerical results reveal that base pairs in DNA could be equivalent considering HB strength contributions, which challenges previous interpretations of thermodynamic properties of DNA based on the assumption that Adenine/Thymine pairs are weaker than Guanine/Cytosine pairs due to the sole difference in the number of HB. Thus, our methodology provides solid foundations to support the development of extended models intended to go deeper into the molecular mechanisms of DNA functioning. |
2102.11066 | Xiyun Zhang | Xiyun Zhang, Zhongyuan Ruan, Muhua Zheng, Jie Zhou, Stefano Boccaletti
and Baruch Barzel | Epidemic spreading under mutually independent intra- and inter-host
pathogen evolution | null | Nat Commun 13, 6218 (2022) | 10.1038/s41467-022-34027-9 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamics of epidemic spreading is often reduced to the single control
parameter $R_0$, whose value, above or below unity, determines the state of the
contagion. If, however, the pathogen evolves as it spreads, $R_0$ may change
over time, potentially leading to a mutation-driven spread, in which an
initially sub-pandemic pathogen undergoes a breakthrough mutation. To predict
the boundaries of this pandemic phase, we introduce here a modeling framework
to couple the network spreading patterns with the intra-host evolutionary
dynamics. For many pathogens these two processes, intra- and inter-host, are
driven by different selection forces. And yet here we show that even in the
extreme case when these two forces are mutually independent, mutations can
still fundamentally alter the pandemic phase-diagram, whose transitions are now
shaped, not just by $R_0$, but also by the balance between the epidemic and the
evolutionary timescales. If mutations are too slow, the pathogen prevalence
decays prior to the appearance of a critical mutation. On the other hand, if
mutations are too rapid, the pathogen evolution becomes volatile and, once
again, it fails to spread. Between these two extremes, however, we identify a
broad range of conditions in which an initially sub-pandemic pathogen can break
through to gain widespread prevalence.
| [
{
"created": "Fri, 19 Feb 2021 05:18:29 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 09:36:11 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Nov 2022 04:55:12 GMT",
"version": "v3"
}
] | 2022-11-07 | [
[
"Zhang",
"Xiyun",
""
],
[
"Ruan",
"Zhongyuan",
""
],
[
"Zheng",
"Muhua",
""
],
[
"Zhou",
"Jie",
""
],
[
"Boccaletti",
"Stefano",
""
],
[
"Barzel",
"Baruch",
""
]
] | The dynamics of epidemic spreading is often reduced to the single control parameter $R_0$, whose value, above or below unity, determines the state of the contagion. If, however, the pathogen evolves as it spreads, $R_0$ may change over time, potentially leading to a mutation-driven spread, in which an initially sub-pandemic pathogen undergoes a breakthrough mutation. To predict the boundaries of this pandemic phase, we introduce here a modeling framework to couple the network spreading patterns with the intra-host evolutionary dynamics. For many pathogens these two processes, intra- and inter-host, are driven by different selection forces. And yet here we show that even in the extreme case when these two forces are mutually independent, mutations can still fundamentally alter the pandemic phase-diagram, whose transitions are now shaped, not just by $R_0$, but also by the balance between the epidemic and the evolutionary timescales. If mutations are too slow, the pathogen prevalence decays prior to the appearance of a critical mutation. On the other hand, if mutations are too rapid, the pathogen evolution becomes volatile and, once again, it fails to spread. Between these two extremes, however, we identify a broad range of conditions in which an initially sub-pandemic pathogen can break through to gain widespread prevalence. |
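The balance between epidemic and evolutionary timescales described above can be illustrated with a toy well-mixed two-strain SIR model (a hypothetical minimal sketch, not the paper's network framework): a sub-pandemic strain with R0 < 1 mutates at per-capita rate `mu` into a breakthrough strain with R0 > 1. All parameter values below are illustrative.

```python
def two_strain_attack(beta1, beta2, gamma, mu, days, dt=0.01):
    """Euler-integrate a toy two-strain SIR model in which hosts infected
    with strain 1 mutate to strain 2 at per-capita rate mu.
    Returns the final attack rate 1 - S(end)."""
    s, i1, i2 = 1.0 - 1e-4, 1e-4, 0.0
    for _ in range(int(days / dt)):
        new1, new2 = beta1 * s * i1, beta2 * s * i2
        s  += dt * (-new1 - new2)
        i1 += dt * (new1 - (gamma + mu) * i1)
        i2 += dt * (new2 - gamma * i2 + mu * i1)
    return 1.0 - s

gamma = 1 / 7  # 7-day infectious period (toy value)
no_mut   = two_strain_attack(0.8 * gamma, 2.0 * gamma, gamma, mu=0.0,  days=300)
with_mut = two_strain_attack(0.8 * gamma, 2.0 * gamma, gamma, mu=0.01, days=300)
# strain 1 alone (R0 = 0.8) fizzles out; with mutation the breakthrough
# strain (R0 = 2.0) drives a large outbreak
```

Sweeping `mu` in this sketch reproduces the qualitative point of the abstract: too-slow mutation lets strain 1 die out before the breakthrough seeds, while the intermediate regime yields widespread prevalence.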
1902.02014 | Md Fazlul Karim Khan | Fazlul MKK, Najnin A, Farzana Y, Rashid MA, Deepthi S, Srikumar C, SS
Rashid, Nazmul MHM | Detection of virulence factors and beta-lactamase encoding genes among
the clinical isolates of Pseudomonas aeruginosa | International Journal of Pharmaceutical Research, 2019 | International Journal of Pharmaceutical Research, 2019 | 10.31838/ijpr/2019.11.01.031 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Pseudomonas aeruginosa has emerged as a significant opportunistic
bacterial pathogen that causes nosocomial infections in healthcare settings
resulting in treatment failure throughout the world. This study was carried out
to compare the relatedness between virulence characteristics and
β-lactamase encoding genes in β-lactamase-producing Pseudomonas aeruginosa. Methods: A
total of 120 P. aeruginosa isolates were obtained from both paediatric and
adult patients of Selayang Hospital, Kuala Lumpur, Malaysia. Phenotypic methods
were used to detect various virulence factors (Phospholipase, Hemolysin,
Gelatinase, DNAse, and Biofilm). All the isolates were evaluated for production
of extended spectrum beta-lactamase (ESBL) as well as metallo-β-lactamase
(MBL) by Double-disk synergy test (DDST) and E-test while AmpC
β-lactamase production was detected by disk antagonism test. Results: In
this study, 120 Pseudomonas aeruginosa isolates (20 each from blood, wounds,
respiratory secretions, stools, urine, and sputum samples) were studied. Among
Pseudomonas aeruginosa isolates, the distribution of virulence factors was
positive for hemolysin (48.33%), DNAse (43.33%), phospholipase (40.83%),
gelatinase (31.66%) production and biofilm formation (34%) respectively. The
prevalence of multiple β-lactamase in P. aeruginosa showed 19.16% ESBL,
7.5% MBL and 10.83% AmpC production respectively. Conclusion: A regular
surveillance is required to reduce public health
| [
{
"created": "Wed, 6 Feb 2019 03:52:59 GMT",
"version": "v1"
}
] | 2019-08-13 | [
[
"MKK",
"Fazlul",
""
],
[
"A",
"Najnin",
""
],
[
"Y",
"Farzana",
""
],
[
"MA",
"Rashid",
""
],
[
"S",
"Deepthi",
""
],
[
"C",
"Srikumar",
""
],
[
"Rashid",
"SS",
""
],
[
"MHM",
"Nazmul",
""
]
] | Background: Pseudomonas aeruginosa has emerged as a significant opportunistic bacterial pathogen that causes nosocomial infections in healthcare settings resulting in treatment failure throughout the world. This study was carried out to compare the relatedness between virulence characteristics and β-lactamase encoding genes in β-lactamase-producing Pseudomonas aeruginosa. Methods: A total of 120 P. aeruginosa isolates were obtained from both paediatric and adult patients of Selayang Hospital, Kuala Lumpur, Malaysia. Phenotypic methods were used to detect various virulence factors (Phospholipase, Hemolysin, Gelatinase, DNAse, and Biofilm). All the isolates were evaluated for production of extended spectrum beta-lactamase (ESBL) as well as metallo-β-lactamase (MBL) by Double-disk synergy test (DDST) and E-test while AmpC β-lactamase production was detected by disk antagonism test. Results: In this study, 120 Pseudomonas aeruginosa isolates (20 each from blood, wounds, respiratory secretions, stools, urine, and sputum samples) were studied. Among Pseudomonas aeruginosa isolates, the distribution of virulence factors was positive for hemolysin (48.33%), DNAse (43.33%), phospholipase (40.83%), gelatinase (31.66%) production and biofilm formation (34%) respectively. The prevalence of multiple β-lactamase in P. aeruginosa showed 19.16% ESBL, 7.5% MBL and 10.83% AmpC production respectively. Conclusion: A regular surveillance is required to reduce public health
2103.10667 | Simone Pigolotti | Qiao Lu, Deepak Bhat, Darya Stepanenko, and Simone Pigolotti | Search and localization dynamics of the CRISPR/Cas9 system | 8 pages, 10 figures. Combined Main Text + SI. Accepted for
publication in Physical Review Letters | Phys. Rev. Lett. 127, 208102, 2021 | 10.1103/PhysRevLett.127.208102 | null | q-bio.SC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The CRISPR/Cas9 system acts as the prokaryotic immune system and has
important applications in gene editing. The protein Cas9 is one of its crucial
components. The role of Cas9 is to search for specific target sequences on the
DNA and cleave them. In this Letter, we introduce a model of facilitated
diffusion for Cas9 and fit its parameters to single-molecule experiments. Our
model confirms that Cas9 searches for targets by sliding, but shows that its
sliding length is rather short. We then investigate how Cas9 explores a long
stretch of DNA containing randomly placed targets. We solve this problem by
mapping it into the theory of Anderson localization in condensed matter
physics. Our theoretical approach rationalizes experimental evidence on the
distribution of Cas9 molecules along the DNA.
| [
{
"created": "Fri, 19 Mar 2021 07:27:43 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Oct 2021 01:11:14 GMT",
"version": "v2"
}
] | 2021-11-30 | [
[
"Lu",
"Qiao",
""
],
[
"Bhat",
"Deepak",
""
],
[
"Stepanenko",
"Darya",
""
],
[
"Pigolotti",
"Simone",
""
]
] | The CRISPR/Cas9 system acts as the prokaryotic immune system and has important applications in gene editing. The protein Cas9 is one of its crucial components. The role of Cas9 is to search for specific target sequences on the DNA and cleave them. In this Letter, we introduce a model of facilitated diffusion for Cas9 and fit its parameters to single-molecule experiments. Our model confirms that Cas9 searches for targets by sliding, but shows that its sliding length is rather short. We then investigate how Cas9 explores a long stretch of DNA containing randomly placed targets. We solve this problem by mapping it into the theory of Anderson localization in condensed matter physics. Our theoretical approach rationalizes experimental evidence on the distribution of Cas9 molecules along the DNA.
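Facilitated diffusion of the kind modeled in the Cas9 abstract above alternates 3D excursions with 1D sliding along the DNA: with a short sliding length `lam`, a landing event finds the target only if the landing site falls within `lam` of it, so on DNA of length `L` roughly L / (2 * lam) landings are needed. A hypothetical toy simulation of this search (parameter values are illustrative, not the fitted ones from the paper):

```python
import numpy as np

def landings_to_find(L, lam, target, rng):
    """Count 3D landing events until a 1D slide of extent lam around a
    uniformly random landing site reaches the target position."""
    n = 0
    while True:
        n += 1
        if abs(rng.uniform(0, L) - target) <= lam:
            return n

rng = np.random.default_rng(1)
L, lam = 10_000, 20   # bp; lam is the sliding length (toy value)
trials = [landings_to_find(L, lam, L / 2, rng) for _ in range(2000)]
mean_landings = sum(trials) / len(trials)  # expected ~ L / (2 * lam) = 250
```

The short sliding length reported in the abstract corresponds, in this sketch, to a large number of 3D landing events per target found.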