id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1603.07082 | Tyman Stanford | Tyman E. Stanford, Christopher J. Bagley, Patty J. Solomon | Informed baseline subtraction of proteomic mass spectrometry data aided
by a novel sliding window algorithm | 50 pages, 19 figures | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear
time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein
profiles from biological samples with the aim of discovering biomarkers for
disease. However, the raw protein profiles suffer from several sources of bias
or systematic variation which need to be removed via pre-processing before
meaningful downstream analysis of the data can be undertaken. Baseline
subtraction, an early pre-processing step that removes the non-peptide signal
from the spectra, is complicated by the following: (i) each spectrum has, on
average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and
(ii) the time-consuming and error-prone trial-and-error process for optimising
the baseline subtraction input arguments. With reference to the aforementioned
complications, we present an automated pipeline that includes (i) a novel
`continuous' line segment algorithm that efficiently operates over data with a
transformed m/z-axis to remove the relationship between peptide mass and peak
width, and (ii) an input-free algorithm to estimate peak widths on the
transformed m/z scale. The automated baseline subtraction method was deployed
on six publicly available proteomic MS datasets using six different m/z-axis
transformations. Optimality of the automated baseline subtraction pipeline was
assessed quantitatively using the mean absolute scaled error (MASE) when
compared to a gold-standard baseline subtracted signal. Near-optimal baseline
subtraction was achieved using the automated pipeline. The advantages of the
proposed pipeline include informed and data specific input arguments for
baseline subtraction methods, the avoidance of time-intensive and subjective
piecewise baseline subtraction, and the ability to automate baseline
subtraction completely. Moreover, individual steps can be adopted as
stand-alone routines.
| [
{
"created": "Wed, 23 Mar 2016 07:39:21 GMT",
"version": "v1"
}
] | 2016-03-24 | [
[
"Stanford",
"Tyman E.",
""
],
[
"Bagley",
"Christopher J.",
""
],
[
"Solomon",
"Patty J.",
""
]
] | Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel `continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Near-optimal baseline subtraction was achieved using the automated pipeline. The advantages of the proposed pipeline include informed and data specific input arguments for baseline subtraction methods, the avoidance of time-intensive and subjective piecewise baseline subtraction, and the ability to automate baseline subtraction completely. Moreover, individual steps can be adopted as stand-alone routines. |
1403.6801 | Markus Dahlem | Markus A. Dahlem, Julia Schumacher, and Niklas H\"ubel | Linking a genetic defect in migraine to spreading depression in a
computational model | 15 pages, 5 figures | null | 10.7717/peerj.379 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Familial hemiplegic migraine (FHM) is a rare subtype of migraine with aura. A
mutation causing FHM type 3 (FHM3) has been identified in SCN1A encoding the
Nav1.1 Na$^+$ channel. This genetic defect affects the inactivation gate. While
the Na$^+$ tail currents following voltage steps are consistent with both
hyperexcitability and hypoexcitability, in this computational study, we
investigate functional consequences beyond these isolated events. Our extended
Hodgkin-Huxley framework establishes a connection between genotype and cellular
phenotype, i.e., the pathophysiological dynamics that spans over multiple time
scales and is relevant to migraine with aura. In particular, we investigate the
dynamical repertoire from normal spiking (milliseconds) to spreading depression
and anoxic depolarization (tens of seconds) and show that FHM3 mutations render
gray matter tissue more vulnerable to spreading depression despite opposing
effects associated with action potential generation. We conclude that the
classification in terms of hypoexcitability vs. hyperexcitability is too simple
a scheme. Our mathematical analysis provides further basic insight into also
previously discussed criticisms against this scheme based on psychophysical and
clinical data.
| [
{
"created": "Wed, 26 Mar 2014 19:26:06 GMT",
"version": "v1"
}
] | 2014-08-13 | [
[
"Dahlem",
"Markus A.",
""
],
[
"Schumacher",
"Julia",
""
],
[
"Hübel",
"Niklas",
""
]
] | Familial hemiplegic migraine (FHM) is a rare subtype of migraine with aura. A mutation causing FHM type 3 (FHM3) has been identified in SCN1A encoding the Nav1.1 Na$^+$ channel. This genetic defect affects the inactivation gate. While the Na$^+$ tail currents following voltage steps are consistent with both hyperexcitability and hypoexcitability, in this computational study, we investigate functional consequences beyond these isolated events. Our extended Hodgkin-Huxley framework establishes a connection between genotype and cellular phenotype, i.e., the pathophysiological dynamics that spans over multiple time scales and is relevant to migraine with aura. In particular, we investigate the dynamical repertoire from normal spiking (milliseconds) to spreading depression and anoxic depolarization (tens of seconds) and show that FHM3 mutations render gray matter tissue more vulnerable to spreading depression despite opposing effects associated with action potential generation. We conclude that the classification in terms of hypoexcitability vs. hyperexcitability is too simple a scheme. Our mathematical analysis provides further basic insight into also previously discussed criticisms against this scheme based on psychophysical and clinical data. |
2006.07341 | Peter Cotton | Peter Cotton | Addressing the Herd Immunity Paradox Using Symmetry, Convexity
Adjustments and Bond Prices | null | null | null | null | q-bio.PE physics.soc-ph q-fin.MF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In constant parameter compartmental models an early onset of herd immunity is
at odds with estimates of R values from early stage growth. This paper utilizes
a result from the theory of interest rate modeling, namely a bond pricing
formula of Vasicek, and an approach inspired by a foundational result in
statistics, de Finetti's Theorem, to show how the modeling discrepancy can be
explained. Moreover, the difference between predictions of classic constant
parameter epidemiological models and those with variation and stochastic
evolution can be reduced to simple "convexity" formulas. A novel feature of
this approach is that we do not attempt to locate a true model but only a model
that is equivalent after permutations. Convexity adjustments can also be used
for cross sectional comparisons and we derive easy to use rules of thumb for
estimating threshold infection level in one region given knowledge of threshold
infection in another.
| [
{
"created": "Fri, 12 Jun 2020 17:32:02 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jun 2020 16:29:28 GMT",
"version": "v2"
}
] | 2020-06-18 | [
[
"Cotton",
"Peter",
""
]
] | In constant parameter compartmental models an early onset of herd immunity is at odds with estimates of R values from early stage growth. This paper utilizes a result from the theory of interest rate modeling, namely a bond pricing formula of Vasicek, and an approach inspired by a foundational result in statistics, de Finetti's Theorem, to show how the modeling discrepancy can be explained. Moreover the difference between predictions of classic constant parameter epidemiological models and those with variation and stochastic evolution can be reduced to simple "convexity" formulas. A novel feature of this approach is that we do not attempt to locate a true model but only a model that is equivalent after permutations. Convexity adjustments can also be used for cross sectional comparisons and we derive easy to use rules of thumb for estimating threshold infection level in one region given knowledge of threshold infection in another. |
2208.06348 | Jielin Qiu | William Han, Jielin Qiu, Jiacheng Zhu, Mengdi Xu, Douglas Weber, Bo
Li, Ding Zhao | Can Brain Signals Reveal Inner Alignment with Human Languages? | EMNLP 2023 Findings | null | null | null | q-bio.NC cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain Signals, such as Electroencephalography (EEG), and human languages have
been widely explored independently for many downstream tasks; however, the
connection between them has not been well explored. In this study, we explore
the relationship and dependency between EEG and language. To study at the
representation level, we introduced \textbf{MTAM}, a \textbf{M}ultimodal
\textbf{T}ransformer \textbf{A}lignment \textbf{M}odel, to observe coordinated
representations between the two modalities. We used various relationship
alignment-seeking techniques, such as Canonical Correlation Analysis and
Wasserstein Distance, as loss functions to transfigure features. On downstream
applications, sentiment analysis and relation detection, we achieved new
state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method
achieved an F1-score improvement of 1.7% on K-EmoCon and 9.3% on ZuCo datasets
for sentiment analysis, and 7.4% on ZuCo for relation detection. In addition,
we provide interpretations of the performance improvement: (1) feature
distribution shows the effectiveness of the alignment module for discovering
and encoding the relationship between EEG and language; (2) alignment weights
show the influence of different language semantics as well as EEG frequency
features; (3) brain topographical maps provide an intuitive demonstration of
the connectivity in the brain regions. Our code is available at
\url{https://github.com/Jason-Qiu/EEG_Language_Alignment}.
| [
{
"created": "Wed, 10 Aug 2022 17:51:34 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Aug 2022 01:37:22 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Sep 2022 19:10:44 GMT",
"version": "v3"
},
{
"created": "Wed, 18 Oct 2023 21:29:45 GMT",
"version": "v4"
},
{
"created": "Sat, 4 May 2024 21:07:04 GMT",
"version": "v5"
}
] | 2024-05-07 | [
[
"Han",
"William",
""
],
[
"Qiu",
"Jielin",
""
],
[
"Zhu",
"Jiacheng",
""
],
[
"Xu",
"Mengdi",
""
],
[
"Weber",
"Douglas",
""
],
[
"Li",
"Bo",
""
],
[
"Zhao",
"Ding",
""
]
] | Brain Signals, such as Electroencephalography (EEG), and human languages have been widely explored independently for many downstream tasks, however, the connection between them has not been well explored. In this study, we explore the relationship and dependency between EEG and language. To study at the representation level, we introduced \textbf{MTAM}, a \textbf{M}ultimodal \textbf{T}ransformer \textbf{A}lignment \textbf{M}odel, to observe coordinated representations between the two modalities. We used various relationship alignment-seeking techniques, such as Canonical Correlation Analysis and Wasserstein Distance, as loss functions to transfigure features. On downstream applications, sentiment analysis and relation detection, we achieved new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieved an F1-score improvement of 1.7% on K-EmoCon and 9.3% on Zuco datasets for sentiment analysis, and 7.4% on ZuCo for relation detection. In addition, we provide interpretations of the performance improvement: (1) feature distribution shows the effectiveness of the alignment module for discovering and encoding the relationship between EEG and language; (2) alignment weights show the influence of different language semantics as well as EEG frequency features; (3) brain topographical maps provide an intuitive demonstration of the connectivity in the brain regions. Our code is available at \url{https://github.com/Jason-Qiu/EEG_Language_Alignment}. |
2201.09508 | Bin Liu | Bin Liu, Dimitrios Papadopoulos, Fragkiskos D. Malliaros, Grigorios
Tsoumakas, Apostolos N. Papadopoulos | Multiple Similarity Drug-Target Interaction Prediction with Random Walks
and Matrix Factorization | null | Briefings in Bioinformatics, Volume 23, Issue 5, 2022 | 10.1093/bib/bbac353 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of drug-target interactions (DTIs) is a very promising area of
research with great potential. The accurate identification of reliable
interactions among drugs and proteins via computational methods, which
typically leverage heterogeneous information retrieved from diverse data
sources, can boost the development of effective pharmaceuticals. Although
random walk and matrix factorization techniques are widely used in DTI
prediction, they have several limitations. Random walk-based embedding
generation is usually conducted in an unsupervised manner, while the linear
similarity combination in matrix factorization distorts individual insights
offered by different views. To tackle these issues, we take a multi-layered
network approach to handle diverse drug and target similarities, and propose a
novel optimization framework, called Multiple similarity DeepWalk-based Matrix
Factorization (MDMF), for DTI prediction. The framework unifies embedding
generation and interaction prediction, learning vector representations of drugs
and targets that not only retain higher-order proximity across all hyper-layers
and layer-specific local invariance, but also approximate the interactions with
their inner product. Furthermore, we develop an ensemble method (MDMF2A) that
integrates two instantiations of the MDMF model, optimizing the area under the
precision-recall curve (AUPR) and the area under the receiver operating
characteristic curve (AUC) respectively. The empirical study on real-world DTI
datasets shows that our method achieves statistically significant improvement
over current state-of-the-art approaches in four different settings. Moreover,
the validation of highly ranked non-interacting pairs also demonstrates the
potential of MDMF2A to discover novel DTIs.
| [
{
"created": "Mon, 24 Jan 2022 08:02:05 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Aug 2022 03:59:44 GMT",
"version": "v2"
}
] | 2022-12-06 | [
[
"Liu",
"Bin",
""
],
[
"Papadopoulos",
"Dimitrios",
""
],
[
"Malliaros",
"Fragkiskos D.",
""
],
[
"Tsoumakas",
"Grigorios",
""
],
[
"Papadopoulos",
"Apostolos N.",
""
]
] | The discovery of drug-target interactions (DTIs) is a very promising area of research with great potential. The accurate identification of reliable interactions among drugs and proteins via computational methods, which typically leverage heterogeneous information retrieved from diverse data sources, can boost the development of effective pharmaceuticals. Although random walk and matrix factorization techniques are widely used in DTI prediction, they have several limitations. Random walk-based embedding generation is usually conducted in an unsupervised manner, while the linear similarity combination in matrix factorization distorts individual insights offered by different views. To tackle these issues, we take a multi-layered network approach to handle diverse drug and target similarities, and propose a novel optimization framework, called Multiple similarity DeepWalk-based Matrix Factorization (MDMF), for DTI prediction. The framework unifies embedding generation and interaction prediction, learning vector representations of drugs and targets that not only retain higher-order proximity across all hyper-layers and layer-specific local invariance, but also approximate the interactions with their inner product. Furthermore, we develop an ensemble method (MDMF2A) that integrates two instantiations of the MDMF model, optimizing the area under the precision-recall curve (AUPR) and the area under the receiver operating characteristic curve (AUC) respectively. The empirical study on real-world DTI datasets shows that our method achieves statistically significant improvement over current state-of-the-art approaches in four different settings. Moreover, the validation of highly ranked non-interacting pairs also demonstrates the potential of MDMF2A to discover novel DTIs. |
2107.09706 | Chuang Xu | Carsten Wiuf and Chuang Xu | Fiber decomposition of deterministic reaction networks with applications | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deterministic reaction networks (RNs) are tools to model diverse biological
phenomena characterized by particle systems, when there is an abundant number of
particles. Examples include but are not limited to biochemistry, molecular
biology, genetics, epidemiology, and social sciences. In this chapter we
propose a new type of decomposition of RNs, called fiber decomposition. Using
this decomposition, we establish lifting of mass-action RNs preserving
stationary properties, including multistationarity and absolute concentration
robustness. Such a lifting scheme is simple and explicit, imposing little
restriction on the reaction networks. We provide examples to illustrate how
this lifting can be used to construct RNs preserving certain dynamical
properties.
| [
{
"created": "Sun, 18 Jul 2021 07:08:26 GMT",
"version": "v1"
}
] | 2021-07-22 | [
[
"Wiuf",
"Carsten",
""
],
[
"Xu",
"Chuang",
""
]
] | Deterministic reaction networks (RNs) are tools to model diverse biological phenomena characterized by particle systems, when there are abundant number of particles. Examples include but are not limited to biochemistry, molecular biology, genetics, epidemiology, and social sciences. In this chapter we propose a new type of decomposition of RNs, called fiber decomposition. Using this decomposition, we establish lifting of mass-action RNs preserving stationary properties, including multistationarity and absolute concentration robustness. Such lifting scheme is simple and explicit which imposes little restriction on the reaction networks. We provide examples to illustrate how this lifting can be used to construct RNs preserving certain dynamical properties. |
1810.08152 | Thomas Kahle | Carsten Conradi and Alexandru Iosif and Thomas Kahle | Multistationarity in the space of total concentrations for systems that
admit a monomial parametrization | 35 pages, 3 figures; V2: revision after referee comments, V3: final
version as in Bull. Math. Bio | null | 10.1007/s11538-019-00639-4 | null | q-bio.MN math.AG math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply tools from real algebraic geometry to the problem of
multistationarity of chemical reaction networks. A particular focus is on the
case of reaction networks whose steady states admit a monomial parametrization.
For such systems we show that in the space of total concentrations
multistationarity is scale invariant: if there is multistationarity for some
value of the total concentrations, then there is multistationarity on the
entire ray containing this value (possibly for different rate constants) -- and
vice versa. Moreover, for these networks it is possible to decide about
multistationarity independent of the rate constants by formulating
semi-algebraic conditions that involve only concentration variables. These
conditions can easily be extended to include total concentrations. Hence
quantifier elimination may give new insights into multistationarity regions in
the space of total concentrations. To demonstrate this, we show that for the
distributive phosphorylation of a protein at two binding sites
multistationarity is only possible if the total concentration of the substrate
is larger than either the total concentration of the kinase or the total
concentration of the phosphatase. This result is enabled by the chamber
decomposition of the space of total concentrations from polyhedral geometry.
Together with the corresponding sufficiency result of Bihan et al. this yields
a characterization of multistationarity up to lower dimensional regions.
| [
{
"created": "Thu, 18 Oct 2018 16:42:50 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2019 15:43:22 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Aug 2019 12:54:09 GMT",
"version": "v3"
}
] | 2019-08-14 | [
[
"Conradi",
"Carsten",
""
],
[
"Iosif",
"Alexandru",
""
],
[
"Kahle",
"Thomas",
""
]
] | We apply tools from real algebraic geometry to the problem of multistationarity of chemical reaction networks. A particular focus is on the case of reaction networks whose steady states admit a monomial parametrization. For such systems we show that in the space of total concentrations multistationarity is scale invariant: if there is multistationarity for some value of the total concentrations, then there is multistationarity on the entire ray containing this value (possibly for different rate constants) -- and vice versa. Moreover, for these networks it is possible to decide about multistationarity independent of the rate constants by formulating semi-algebraic conditions that involve only concentration variables. These conditions can easily be extended to include total concentrations. Hence quantifier elimination may give new insights into multistationarity regions in the space of total concentrations. To demonstrate this, we show that for the distributive phosphorylation of a protein at two binding sites multistationarity is only possible if the total concentration of the substrate is larger than either the total concentration of the kinase or the total concentration of the phosphatase. This result is enabled by the chamber decomposition of the space of total concentrations from polyhedral geometry. Together with the corresponding sufficiency result of Bihan et al. this yields a characterization of multistationarity up to lower dimensional regions. |
1605.08553 | Marvin A B\"ottcher | Marvin A. B\"ottcher and Jan Nagler | Promotion of Cooperation by Selective Group Extinction | Accepted for publication in New Journal of Physics | null | 10.1088/1367-2630/18/6/063008 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multilevel selection is an important organizing principle that crucially
underlies evolutionary processes from the emergence of cells to eusociality and
the economics of nations. Previous studies on multilevel selection assumed that
the effective higher-level selection emerges from lower-level reproduction.
This leads to selection among groups, although only individuals reproduce. We
introduce selective group extinction, where groups die with a probability
inversely proportional to their group fitness. When accounting for this the
critical benefit-to-cost ratio is substantially lowered. Because in game theory
and evolutionary dynamics the degree of cooperation crucially depends on this
ratio, above which cooperation emerges, previous studies may have substantially
underestimated the establishment and maintenance of cooperation.
| [
{
"created": "Fri, 27 May 2016 09:24:58 GMT",
"version": "v1"
}
] | 2016-06-22 | [
[
"Böttcher",
"Marvin A.",
""
],
[
"Nagler",
"Jan",
""
]
] | Multilevel selection is an important organizing principle that crucially underlies evolutionary processes from the emergence of cells to eusociality and the economics of nations. Previous studies on multilevel selection assumed that the effective higher-level selection emerges from lower-level reproduction. This leads to selection among groups, although only individuals reproduce. We introduce selective group extinction, where groups die with a probability inversely proportional to their group fitness. When accounting for this the critical benefit-to-cost ratio is substantially lowered. Because in game theory and evolutionary dynamics the degree of cooperation crucially depends on this ratio above which cooperation emerges previous studies may have substantially underestimated the establishment and maintenance of cooperation. |
1110.5704 | Alexander Sch\"onhuth | Iman Hajirasouliha, Alexander Sch\"onhuth, David Juan, Alfonso
Valencia and S. Cenk Sahinalp | Mirroring co-evolving trees in the light of their topologies | 13 pages, 2 figures, Iman Hajirasouliha and Alexander Sch\"onhuth are
joint first authors | Bioinformatics, 28(9), 1202-1208, 2012 | null | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determining the interaction partners among protein/domain families poses hard
computational problems, in particular in the presence of paralogous proteins.
Available approaches aim to identify interaction partners among protein/domain
families through maximizing the similarity between trimmed versions of their
phylogenetic trees. Since maximization of any natural similarity score is
computationally difficult, many approaches employ heuristics to maximize the
distance matrices corresponding to the tree topologies in question. In this
paper we devise an efficient deterministic algorithm which directly maximizes
the similarity between two leaf labeled trees with edge lengths, obtaining a
score-optimal alignment of the two trees in question.
Our algorithm is significantly faster than those methods based on distance
matrix comparison: 1 minute on a single processor vs. 730 hours on a
supercomputer. Furthermore we have advantages over the current state-of-the-art
heuristic search approach in terms of precision as well as a recently suggested
overall performance measure for mirrortree approaches, while incurring only
acceptable losses in recall.
A C implementation of the method demonstrated in this paper is available at
http://compbio.cs.sfu.ca/mirrort.htm
| [
{
"created": "Wed, 26 Oct 2011 05:43:01 GMT",
"version": "v1"
}
] | 2015-01-14 | [
[
"Hajirasouliha",
"Iman",
""
],
[
"Schönhuth",
"Alexander",
""
],
[
"Juan",
"David",
""
],
[
"Valencia",
"Alfonso",
""
],
[
"Sahinalp",
"S. Cenk",
""
]
] | Determining the interaction partners among protein/domain families poses hard computational problems, in particular in the presence of paralogous proteins. Available approaches aim to identify interaction partners among protein/domain families through maximizing the similarity between trimmed versions of their phylogenetic trees. Since maximization of any natural similarity score is computationally difficult, many approaches employ heuristics to maximize the distance matrices corresponding to the tree topologies in question. In this paper we devise an efficient deterministic algorithm which directly maximizes the similarity between two leaf labeled trees with edge lengths, obtaining a score-optimal alignment of the two trees in question. Our algorithm is significantly faster than those methods based on distance matrix comparison: 1 minute on a single processor vs. 730 hours on a supercomputer. Furthermore we have advantages over the current state-of-the-art heuristic search approach in terms of precision as well as a recently suggested overall performance measure for mirrortree approaches, while incurring only acceptable losses in recall. A C implementation of the method demonstrated in this paper is available at http://compbio.cs.sfu.ca/mirrort.htm |
1012.3044 | Raul Isea | Raul Isea and Rafael Mayo | Distributed computing applied to the identification of new drugs | 12 pages, Spanish | RET. Revista de Estudios Transdisciplinarios (2010) Vol. 2(2), pp.
79-86 | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This work emphasizes the benefits of implementing distributed computing for
intensive use in computational science devoted to the search for new
medicines that could be applied to public health problems.
| [
{
"created": "Tue, 14 Dec 2010 14:37:05 GMT",
"version": "v1"
}
] | 2010-12-15 | [
[
"Isea",
"Raul",
""
],
[
"Mayo",
"Rafael",
""
]
] | This work emphasizes the benefits of implementing distributed computing for intensive use in computational science devoted to the search for new medicines that could be applied to public health problems. |
0805.4455 | Kyung Hyuk Kim | Kyung Hyuk Kim, Hong Qian, and Herbert M. Sauro | Sensitivity Regulation based on Noise Propagation in Stochastic Reaction
Networks | 16 pages. Acknowledgement is corrected | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we focus on how noise propagates in biochemical reaction
networks and affects sensitivities of the system. We discover that the
stochastic fluctuations can enhance sensitivities in one region of the value of
control parameters by reducing sensitivities in another region. Based on this
compensation principle, we designed a concentration detector in which enhanced
amplification is achieved by an incoherent feedforward reaction network.
| [
{
"created": "Thu, 29 May 2008 19:25:34 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Jun 2008 17:08:00 GMT",
"version": "v2"
}
] | 2008-06-24 | [
[
"Kim",
"Kyung Hyuk",
""
],
[
"Qian",
"Hong",
""
],
[
"Sauro",
"Herbert M.",
""
]
] | In this work we focus on how noise propagates in biochemical reaction networks and affects sensitivities of the system. We discover that the stochastic fluctuations can enhance sensitivities in one region of the value of control parameters by reducing sensitivities in another region. Based on this compensation principle, we designed a concentration detector in which enhanced amplification is achieved by an incoherent feedforward reaction network. |
1311.2924 | Charles Cannon | Charles Cannon and Manuel Lerdau | Fuzzy Mating Behavior Enhances Species Coexistence and Delays Extinction
in Diverse Communities | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current theories about mechanisms promoting species co-existence in diverse
communities assume that species only interact ecologically. Species are treated
as discrete evolutionary entities, even though abundant empirical evidence
indicates that patterns of gene flow such as selfing and hybridization
frequently occur in plant and animal groups. Here, we allow mating behavior to
respond to local species composition and abundance in a data-driven
meta-community model of species co-existence. While individuals primarily
out-cross, they also maintain some small capacity for selfing and
hybridization. Mating choice is treated as a 'fuzzy' behavior, determined by an
interaction between intrinsic properties affecting mate choice and extrinsic
properties of the local community, primarily the density and availability of
sympatric inter-fertile species. When mate choice is strongly limited, even low
survivorship of selfed offspring (<10%) can prevent extinction of rare species.
With increasing mate choice, low hybridization success rates (~20%) maintain
community level diversity for substantially longer periods of time. Given the
low species densities and high diversity of tropical tree communities, the
evolutionary costs of competition among sympatric congeneric species are
negligible because direct competition is infrequent. In diverse communities,
many species are chronically rare and thus vulnerable to stochastic extinction,
which occurs rapidly if species are completely reproductively isolated. By
incorporating fuzzy mating behavior into models of species co-existence, a more
realistic understanding of the extinction process can be developed. Fuzzy
mating strategies, potentially an important mechanism for rare species to
escape extinction and gain local adaptations, should be incorporated into
forest management strategies.
| [
{
"created": "Tue, 12 Nov 2013 20:55:08 GMT",
"version": "v1"
}
] | 2013-11-13 | [
[
"Cannon",
"Charles",
""
],
[
"Lerdau",
"Manuel",
""
]
] | Current theories about mechanisms promoting species co-existence in diverse communities assume that species only interact ecologically. Species are treated as discrete evolutionary entities, even though abundant empirical evidence indicates that patterns of gene flow such as selfing and hybridization frequently occur in plant and animal groups. Here, we allow mating behavior to respond to local species composition and abundance in a data-driven meta-community model of species co-existence. While individuals primarily out-cross, they also maintain some small capacity for selfing and hybridization. Mating choice is treated as a 'fuzzy' behavior, determined by an interaction between intrinsic properties affecting mate choice and extrinsic properties of the local community, primarily the density and availability of sympatric inter-fertile species. When mate choice is strongly limited, even low survivorship of selfed offspring (<10%) can prevent extinction of rare species. With increasing mate choice, low hybridization success rates (~20%) maintain community level diversity for substantially longer periods of time. Given the low species densities and high diversity of tropical tree communities, the evolutionary costs of competition among sympatric congeneric species are negligible because direct competition is infrequent. In diverse communities, many species are chronically rare and thus vulnerable to stochastic extinction, which occurs rapidly if species are completely reproductively isolated. By incorporating fuzzy mating behavior into models of species co-existence, a more realistic understanding of the extinction process can be developed. Fuzzy mating strategies, potentially an important mechanism for rare species to escape extinction and gain local adaptations, should be incorporated into forest management strategies. |
2101.10554 | Pavel Loskot | Pavel Loskot | pdfPapers: shell-script utilities for frequency-based multi-word phrase
extraction from PDF documents | 23 pages, 4 figures, 10 tables | null | null | null | q-bio.QM cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biomedical research involves intensive processing of information in previously
published papers. This has motivated many efforts to provide tools for text
mining and information extraction from PDF documents over the past decade. The
*nix (Unix/Linux) operating systems offer many tools for working with text
files; however, very few such tools are available for processing the contents
of PDF files. This paper reports our effort to develop shell script utilities
for *nix systems with the core functionality focused on viewing and searching
multiple PDF documents combining logical and regular expressions, and enabling
more reliable text extraction from PDF documents with subsequent manipulation
of the resulting blocks of text. Furthermore, a procedure for extracting the
most frequently occurring multi-word phrases was devised and then demonstrated
on several scientific papers in life sciences. Our experiments revealed that
the procedure is surprisingly robust to deficiencies in text extraction and the
actual scoring function used to rank the phrases in terms of their importance
or relevance. The keyword relevance is strongly context dependent, the word
stemming did not provide any recognizable advantage, and the stop-words should
only be removed from the beginning and the end of phrases. In addition, the
developed utilities were used to convert the list of acronyms and the index
from a PDF e-book into a large list of biochemical terms which can be exploited
in other text mining tasks. All shell scripts and data files are available in a
public repository named \pp\ on GitHub. The key lesson learned in this work
is that semi-automated methods combining the power of algorithms with the
capabilities of research experience are the most promising for improving the
research efficiency.
| [
{
"created": "Tue, 26 Jan 2021 04:35:20 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Loskot",
"Pavel",
""
]
] | Biomedical research involves intensive processing of information in previously published papers. This has motivated many efforts to provide tools for text mining and information extraction from PDF documents over the past decade. The *nix (Unix/Linux) operating systems offer many tools for working with text files; however, very few such tools are available for processing the contents of PDF files. This paper reports our effort to develop shell script utilities for *nix systems with the core functionality focused on viewing and searching multiple PDF documents combining logical and regular expressions, and enabling more reliable text extraction from PDF documents with subsequent manipulation of the resulting blocks of text. Furthermore, a procedure for extracting the most frequently occurring multi-word phrases was devised and then demonstrated on several scientific papers in life sciences. Our experiments revealed that the procedure is surprisingly robust to deficiencies in text extraction and the actual scoring function used to rank the phrases in terms of their importance or relevance. The keyword relevance is strongly context dependent, the word stemming did not provide any recognizable advantage, and the stop-words should only be removed from the beginning and the end of phrases. In addition, the developed utilities were used to convert the list of acronyms and the index from a PDF e-book into a large list of biochemical terms which can be exploited in other text mining tasks. All shell scripts and data files are available in a public repository named \pp\ on GitHub. The key lesson learned in this work is that semi-automated methods combining the power of algorithms with the capabilities of research experience are the most promising for improving the research efficiency. |
1908.10162 | Benedetta Franceschiello Dr. | Benedetta Franceschiello, Alessandro Sarti, Giovanna Citti | A neuro-mathematical model for size and context related illusions | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide here a mathematical model of size/context illusions, inspired by
the functional architecture of the visual cortex. We first recall previous
models of scale and orientation, in particular the one in (Sarti et al., 2008),
and simplify it, only considering the feature of scale. Then we recall the
deformation model of illusion, introduced by (Franceschiello et al. 2017), to
describe orientation related GOIs, and adapt it to size illusions. We finally
apply the model to the Ebbinghaus and Delboeuf illusions, validating the
results by comparing with experimental data from (Massaro et al., 1971) and
(Roberts et al., 2005).
| [
{
"created": "Tue, 27 Aug 2019 12:36:41 GMT",
"version": "v1"
}
] | 2019-08-28 | [
[
"Franceschiello",
"Benedetta",
""
],
[
"Sarti",
"Alessandro",
""
],
[
"Citti",
"Giovanna",
""
]
] | We provide here a mathematical model of size/context illusions, inspired by the functional architecture of the visual cortex. We first recall previous models of scale and orientation, in particular the one in (Sarti et al., 2008), and simplify it, only considering the feature of scale. Then we recall the deformation model of illusion, introduced by (Franceschiello et al. 2017), to describe orientation related GOIs, and adapt it to size illusions. We finally apply the model to the Ebbinghaus and Delboeuf illusions, validating the results by comparing with experimental data from (Massaro et al., 1971) and (Roberts et al., 2005). |
2403.14799 | Henrique Lima | Dimitri Marques Abramov, Henrique Santos Lima, Vladimir Lazarev, Paulo
Ricardo Galhanone, and Constantino Tsallis | Identifying Attention-Deficit/Hyperactivity Disorder through the
electroencephalogram complexity | 11 pages | null | null | null | q-bio.NC cond-mat.stat-mech physics.med-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | There are reasons to suggest that a number of mental disorders may be related
to alterations in neural complexity (NC). Thus, quantitative analysis of NC
could be helpful in classifying mental conditions and clarifying our
understanding about them. Here, we have worked with youths, typical and with
attention-deficit/hyperactivity disorder (ADHD), whose neural complexity was
assessed using q-statistics applied to the electroencephalogram (EEG). The EEG
was recorded while subjects performed the visual Attention Network Test (ANT)
based on the OddBall paradigm and during a short pretask period of resting
state. Time intervals of the EEG amplitudes that passed a threshold (signal
regularity indicator) were collected from task and pretask signals from each
subject. The data were satisfactorily fitted with a stretched $q$-exponential
including a power-law prefactor (characterized by the exponent $c$), thus
determining the best $(c, q)$ for each subject, indicative of their individual
complexity. We found larger values of $q$ and $c$ in ADHD subjects as compared
with the typical subjects both at task and pretask periods, the task values for
both groups being larger than at rest. The $c$ parameter was highly specific in
relation to DSM diagnosis for inattention, where well-defined clusters were
observed. The parameter values were organized in four well-defined clusters in
$(c, q)$-space. As expected, the tasks apparently induced greater complexity in
neural functional states with likely greater amount of internal information
processing. The results suggest that complexity is higher in ADHD subjects than
in typical pairs. The distribution of values in the $(c, q)$-space derived from
$q$-statistics seems to be a promising biomarker for ADHD diagnosis.
| [
{
"created": "Thu, 21 Mar 2024 19:26:52 GMT",
"version": "v1"
}
] | 2024-03-25 | [
[
"Abramov",
"Dimitri Marques",
""
],
[
"Lima",
"Henrique Santos",
""
],
[
"Lazarev",
"Vladimir",
""
],
[
"Galhanone",
"Paulo Ricardo",
""
],
[
"Tsallis",
"Constantino",
""
]
] | There are reasons to suggest that a number of mental disorders may be related to alterations in neural complexity (NC). Thus, quantitative analysis of NC could be helpful in classifying mental conditions and clarifying our understanding about them. Here, we have worked with youths, typical and with attention-deficit/hyperactivity disorder (ADHD), whose neural complexity was assessed using q-statistics applied to the electroencephalogram (EEG). The EEG was recorded while subjects performed the visual Attention Network Test (ANT) based on the OddBall paradigm and during a short pretask period of resting state. Time intervals of the EEG amplitudes that passed a threshold (signal regularity indicator) were collected from task and pretask signals from each subject. The data were satisfactorily fitted with a stretched $q$-exponential including a power-law prefactor (characterized by the exponent $c$), thus determining the best $(c, q)$ for each subject, indicative of their individual complexity. We found larger values of $q$ and $c$ in ADHD subjects as compared with the typical subjects both at task and pretask periods, the task values for both groups being larger than at rest. The $c$ parameter was highly specific in relation to DSM diagnosis for inattention, where well-defined clusters were observed. The parameter values were organized in four well-defined clusters in $(c, q)$-space. As expected, the tasks apparently induced greater complexity in neural functional states with likely greater amount of internal information processing. The results suggest that complexity is higher in ADHD subjects than in typical pairs. The distribution of values in the $(c, q)$-space derived from $q$-statistics seems to be a promising biomarker for ADHD diagnosis. |
1403.0630 | Christopher Eastaugh PhD | C.S. Eastaugh, C. Thurnher, H. Hasenauer, J.K. Vanclay | Stephenson et al.'s ecological fallacy | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After more than a century of research the typical growth pattern of a tree
was thought to be fairly well understood. Following germination, height growth
accelerates for some time, then increment peaks and the added height each year
becomes less and less. The cross sectional area (basal area) of the tree
follows a similar pattern, but the maximum basal area increment occurs at some
time after the maximum height increment. An increase in basal area in a tall
tree will add more volume to the stem than the same increase in a short tree,
so the increment in stem volume (or mass) peaks very late. Stephenson et al.
challenge this paradigm, and suggest that mass increment increases
continuously. Their analysis methods, however, are a textbook example of the
ecological fallacy, and their conclusions are therefore unsupported.
| [
{
"created": "Mon, 3 Mar 2014 23:11:28 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Eastaugh",
"C. S.",
""
],
[
"Thurnher",
"C.",
""
],
[
"Hasenauer",
"H.",
""
],
[
"Vanclay",
"J. K.",
""
]
] | After more than a century of research the typical growth pattern of a tree was thought to be fairly well understood. Following germination, height growth accelerates for some time, then increment peaks and the added height each year becomes less and less. The cross sectional area (basal area) of the tree follows a similar pattern, but the maximum basal area increment occurs at some time after the maximum height increment. An increase in basal area in a tall tree will add more volume to the stem than the same increase in a short tree, so the increment in stem volume (or mass) peaks very late. Stephenson et al. challenge this paradigm, and suggest that mass increment increases continuously. Their analysis methods, however, are a textbook example of the ecological fallacy, and their conclusions are therefore unsupported. |
1811.12642 | Tomasz Rutkowski | Tomasz M. Rutkowski, Qibin Zhao, Masao S. Abe, Mihoko Otake | AI Neurotechnology for Aging Societies -- Task-load and Dementia EEG
Digital Biomarker Development Using Information Geometry Machine Learning
Methods | 5 pages, 2 figures, NeurIPS 2018 AI for Social Good Workshop at the
Neural Information Processing Systems (NeurIPS = formerly NIPS) 2018.
Montreal, Canada | null | null | null | q-bio.NC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Dementia and especially Alzheimer's disease (AD) are the most common causes
of cognitive decline in elderly people. The spread of the above-mentioned mental
health problems in aging societies is causing a significant medical and
economic burden in many countries around the world. According to a recent World
Health Organization (WHO) report, it is approximated that currently, worldwide,
about 47 million people live with a dementia spectrum of neurocognitive
disorders. This number is expected to triple by 2050, which calls for possible
application of AI-based technologies to support an early screening for
preventive interventions and a subsequent mental wellbeing monitoring as well
as maintenance with so-called digital-pharma or beyond-a-pill therapeutic
approaches. This paper discusses our attempt and preliminary results of
brainwave (EEG) techniques to develop digital biomarkers for dementia progress
detection and monitoring. We present an information geometry-based
classification approach for automatic EEG-derived event related responses
(ERPs) discrimination of low versus high task-load auditory or tactile stimuli
recognition, of which amplitude and latency variabilities are similar to those
in dementia. The discussed approach is a step forward to develop AI, and
especially machine learning (ML) approaches, for the subsequent application to
mild-cognitive impairment (MCI) and AD diagnostics.
| [
{
"created": "Fri, 30 Nov 2018 06:58:16 GMT",
"version": "v1"
}
] | 2018-12-03 | [
[
"Rutkowski",
"Tomasz M.",
""
],
[
"Zhao",
"Qibin",
""
],
[
"Abe",
"Masao S.",
""
],
[
"Otake",
"Mihoko",
""
]
] | Dementia and especially Alzheimer's disease (AD) are the most common causes of cognitive decline in elderly people. The spread of the above-mentioned mental health problems in aging societies is causing a significant medical and economic burden in many countries around the world. According to a recent World Health Organization (WHO) report, it is approximated that currently, worldwide, about 47 million people live with a dementia spectrum of neurocognitive disorders. This number is expected to triple by 2050, which calls for possible application of AI-based technologies to support an early screening for preventive interventions and a subsequent mental wellbeing monitoring as well as maintenance with so-called digital-pharma or beyond-a-pill therapeutic approaches. This paper discusses our attempt and preliminary results of brainwave (EEG) techniques to develop digital biomarkers for dementia progress detection and monitoring. We present an information geometry-based classification approach for automatic EEG-derived event related responses (ERPs) discrimination of low versus high task-load auditory or tactile stimuli recognition, of which amplitude and latency variabilities are similar to those in dementia. The discussed approach is a step forward to develop AI, and especially machine learning (ML) approaches, for the subsequent application to mild-cognitive impairment (MCI) and AD diagnostics. |
1410.2337 | Masaki Sasai | S. S. Ashwin and Masaki Sasai | Epigenetic Dynamics of Cell Reprogramming | null | null | null | null | q-bio.MN cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reprogramming is a process of transforming differentiated cells into
pluripotent stem cells by inducing specific modifying factors in the cells.
Reprogramming is a non-equilibrium process involving a collaboration at levels
separated by orders of magnitude in time scale, namely transcription factor
binding/unbinding, protein synthesis/degradation, and epigenetic histone
modification. We propose a model of reprogramming by integrating these
temporally separated processes and show that stable states on the epigenetic
landscape should be viewed as a superposition of basin minima generated in the
fixed histone states. Slow histone modification is responsible for the narrow
valleys connecting the pluripotent and differentiated states on the epigenetic
landscape, and the pathways which largely overlap with these valleys explain
the observed heterogeneity of latencies in reprogramming. We show that histone
dynamics also creates an intermediary state observed in experiments. A change
in the mechanism of histone modification alters the pathway to bypass the
barrier, thereby accelerating the reprogramming and reducing the heterogeneity
of latencies.
| [
{
"created": "Thu, 9 Oct 2014 02:32:50 GMT",
"version": "v1"
}
] | 2014-10-13 | [
[
"Ashwin",
"S. S.",
""
],
[
"Sasai",
"Masaki",
""
]
] | Reprogramming is a process of transforming differentiated cells into pluripotent stem cells by inducing specific modifying factors in the cells. Reprogramming is a non-equilibrium process involving a collaboration at levels separated by orders of magnitude in time scale, namely transcription factor binding/unbinding, protein synthesis/degradation, and epigenetic histone modification. We propose a model of reprogramming by integrating these temporally separated processes and show that stable states on the epigenetic landscape should be viewed as a superposition of basin minima generated in the fixed histone states. Slow histone modification is responsible for the narrow valleys connecting the pluripotent and differentiated states on the epigenetic landscape, and the pathways which largely overlap with these valleys explain the observed heterogeneity of latencies in reprogramming. We show that histone dynamics also creates an intermediary state observed in experiments. A change in the mechanism of histone modification alters the pathway to bypass the barrier, thereby accelerating the reprogramming and reducing the heterogeneity of latencies. |
2208.02088 | Michael D Nicholson | Michael D. Nicholson, David Cheek, Tibor Antal | Sequential mutations in exponentially growing populations | null | null | null | null | q-bio.PE math.PR | http://creativecommons.org/licenses/by/4.0/ | Stochastic models of sequential mutation acquisition are widely used to
quantify cancer and bacterial evolution. Across manifold scenarios, recurrent
research questions are: how many cells are there with $n$ alterations, and how
long will it take for these cells to appear. For exponentially growing
populations, these questions have been tackled only in special cases so far.
Here, within a multitype branching process framework, we consider a general
mutational path where mutations may be advantageous, neutral or deleterious. In
the biologically relevant limiting regimes of large times and small mutation
rates, we derive probability distributions for the number, and arrival time, of
cells with $n$ mutations. Surprisingly, the two quantities respectively follow
Mittag-Leffler and logistic distributions regardless of $n$ or the mutations'
selective effects. Our results provide a rapid method to assess how altering
the fundamental division, death, and mutation rates impacts the arrival time,
and number, of mutant cells. We highlight consequences for mutation rate
inference in fluctuation assays.
| [
{
"created": "Wed, 3 Aug 2022 14:13:34 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Apr 2023 11:35:52 GMT",
"version": "v2"
},
{
"created": "Thu, 6 Jul 2023 15:54:46 GMT",
"version": "v3"
}
] | 2023-07-07 | [
[
"Nicholson",
"Michael D.",
""
],
[
"Cheek",
"David",
""
],
[
"Antal",
"Tibor",
""
]
] | Stochastic models of sequential mutation acquisition are widely used to quantify cancer and bacterial evolution. Across manifold scenarios, recurrent research questions are: how many cells are there with $n$ alterations, and how long will it take for these cells to appear. For exponentially growing populations, these questions have been tackled only in special cases so far. Here, within a multitype branching process framework, we consider a general mutational path where mutations may be advantageous, neutral or deleterious. In the biologically relevant limiting regimes of large times and small mutation rates, we derive probability distributions for the number, and arrival time, of cells with $n$ mutations. Surprisingly, the two quantities respectively follow Mittag-Leffler and logistic distributions regardless of $n$ or the mutations' selective effects. Our results provide a rapid method to assess how altering the fundamental division, death, and mutation rates impacts the arrival time, and number, of mutant cells. We highlight consequences for mutation rate inference in fluctuation assays. |
0901.2470 | Simon Childs | S. J. Childs | A Model of Pupal Water Loss in Glossina | 32 pages, 12 figures, 3 tables | Mathematical Biosciences, 221: 77--90, 2009 | null | null | q-bio.QM q-bio.PE q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The results of a long-established investigation into pupal transpiration are
used as a rudimentary data set. These data are then generalised to all
temperatures and humidities by invoking the property of multiplicative
separability, as well as by converting established relationships in terms of
constant humidity at fixed temperature, to alternatives in terms of a
calculated water loss. In this way a formulation which is a series of very
simple, first order, ordinary differential equations is devised. The model is
extended to include a variety of Glossina species using their relative surface
areas, their relative pupal and puparial loss rates and their different 4th
instar excretions. The resulting computational model calculates total, pupal
water loss, consequent mortality and emergence. Remaining fat reserves are a
more tenuous result.
The model suggests that, while conventional wisdom is often correct in
dismissing variability in transpiration-related pupal mortality as
insignificant, the effects of transpiration can be profound under adverse
conditions and for some species, in general. The model demonstrates how two
gender effects, the more significant one at the drier extremes of tsetse fly
habitat, might arise. The agreement between calculated and measured critical
water losses suggests very little difference in the behaviour of the different
species.
| [
{
"created": "Fri, 16 Jan 2009 12:39:14 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jan 2009 15:55:51 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Mar 2009 12:04:43 GMT",
"version": "v3"
},
{
"created": "Mon, 25 May 2015 11:42:21 GMT",
"version": "v4"
}
] | 2015-05-26 | [
[
"Childs",
"S. J.",
""
]
] | The results of a long-established investigation into pupal transpiration are used as a rudimentary data set. These data are then generalised to all temperatures and humidities by invoking the property of multiplicative separability, as well as by converting established relationships in terms of constant humidity at fixed temperature, to alternatives in terms of a calculated water loss. In this way a formulation which is a series of very simple, first order, ordinary differential equations is devised. The model is extended to include a variety of Glossina species using their relative surface areas, their relative pupal and puparial loss rates and their different 4th instar excretions. The resulting computational model calculates total, pupal water loss, consequent mortality and emergence. Remaining fat reserves are a more tenuous result. The model suggests that, while conventional wisdom is often correct in dismissing variability in transpiration-related pupal mortality as insignificant, the effects of transpiration can be profound under adverse conditions and for some species, in general. The model demonstrates how two gender effects, the more significant one at the drier extremes of tsetse fly habitat, might arise. The agreement between calculated and measured critical water losses suggests very little difference in the behaviour of the different species. |
1805.03530 | Tandy Warnow | Tandy Warnow | Supertree Construction: Opportunities and Challenges | 28 pages, will be part of Festschrift volume for Bernard Moret | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supertree construction is the process by which a set of phylogenetic trees,
each on a subset of the overall set X of species, is combined into a tree on
the full set X. The traditional use of supertree methods is the assembly of a
large species tree from previously computed smaller species trees; however,
supertree methods are also used to address large-scale tree estimation using
divide-and-conquer (i.e., a dataset is divided into overlapping subsets, trees
are constructed on the subsets, and then combined using the supertree method).
Because most supertree methods are heuristics for NP-hard optimization
problems, the use of supertree estimation on large datasets is challenging,
both in terms of scalability and accuracy. In this paper, we describe the
current state of the art in supertree construction and the use of supertree
methods in divide-and-conquer strategies. Finally, we identify directions where
future research could lead to improved supertree methods.
| [
{
"created": "Wed, 9 May 2018 13:53:48 GMT",
"version": "v1"
}
] | 2018-05-10 | [
[
"Warnow",
"Tandy",
""
]
] | Supertree construction is the process by which a set of phylogenetic trees, each on a subset of the overall set X of species, is combined into a tree on the full set X. The traditional use of supertree methods is the assembly of a large species tree from previously computed smaller species trees; however, supertree methods are also used to address large-scale tree estimation using divide-and-conquer (i.e., a dataset is divided into overlapping subsets, trees are constructed on the subsets, and then combined using the supertree method). Because most supertree methods are heuristics for NP-hard optimization problems, the use of supertree estimation on large datasets is challenging, both in terms of scalability and accuracy. In this paper, we describe the current state of the art in supertree construction and the use of supertree methods in divide-and-conquer strategies. Finally, we identify directions where future research could lead to improved supertree methods. |
2404.07852 | Claudio Masciovecchio | S. H. Mejias, A. L. Cortajarena, R. Mincigrucci, C. Svetina and C.
Masciovecchio | A New Route for the Determination of Protein Structure and Function | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding complex biological macromolecules, especially proteins, is
vital for grasping their diverse chemical functions with direct impact in
biology and pharmacology. While techniques like X-ray crystallography and
cryo-electron microscopy have been valuable, they face limitations such as
radiation damage and difficulties in crystallizing certain proteins. X-ray
free-electron lasers (XFELs) offer promising solutions with their ultrafast,
high-intensity pulses, potentially enabling structural determination before
radiation damage occurs. However, challenges like low signal-to-noise ratio
persist, particularly for single protein molecules. To address this, we propose
a new method involving engineered protein scaffolds to create ordered arrays of
proteins with controlled orientations, aiming at enhancing the signal at the
detector. This innovative strategy has the potential to address signal
limitations and protein crystallization issues, opening avenues for determining
protein structures under physiological conditions. Moreover, it holds promise
for studying conformational changes resulting from photo-induced changes,
protein-drug and/or protein-protein interactions. Indeed, the prediction of
protein-protein interactions, fundamental to numerous biochemical and cellular
processes, and the time-dependent conformational changes they undergo, continue
to pose a considerable challenge in biology and biochemistry.
| [
{
"created": "Thu, 11 Apr 2024 15:47:17 GMT",
"version": "v1"
}
] | 2024-04-12 | [
[
"Mejias",
"S. H.",
""
],
[
"Cortajarena",
"A. L.",
""
],
[
"Mincigrucci",
"R.",
""
],
[
"Svetina",
"C.",
""
],
[
"Masciovecchio",
"C.",
""
]
] | Understanding complex biological macromolecules, especially proteins, is vital for grasping their diverse chemical functions with direct impact in biology and pharmacology. While techniques like X-ray crystallography and cryo-electron microscopy have been valuable, they face limitations such as radiation damage and difficulties in crystallizing certain proteins. X-ray free-electron lasers (XFELs) offer promising solutions with their ultrafast, high-intensity pulses, potentially enabling structural determination before radiation damage occurs. However, challenges like low signal-to-noise ratio persist, particularly for single protein molecules. To address this, we propose a new method involving engineered protein scaffolds to create ordered arrays of proteins with controlled orientations, aiming at enhancing the signal at the detector. This innovative strategy has the potential to address signal limitations and protein crystallization issues, opening avenues for determining protein structures under physiological conditions. Moreover, it holds promise for studying conformational changes resulting from photo-induced changes, protein-drug and/or protein-protein interactions. Indeed, the prediction of protein-protein interactions, fundamental to numerous biochemical and cellular processes, and the time-dependent conformational changes they undergo, continue to pose a considerable challenge in biology and biochemistry. |
1511.05390 | Alex McAvoy | Alex McAvoy | Stochastic selection processes | 25 pages; comments welcome | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a mathematical framework for natural selection in finite
populations. Traditionally, many of the selection-based processes used to
describe cultural and genetic evolution (such as imitation and birth-death
models) have been studied on a case-by-case basis. Over time, these models have
grown in sophistication to include population structure, differing phenotypes,
and various forms of interaction asymmetry, among other features. Furthermore,
many processes inspired by natural selection, such as evolutionary algorithms
in computer science, possess characteristics that should fall within the realm
of a "selection process," but so far there is no overarching theory
encompassing these evolutionary processes. The framework of $\textit{stochastic
selection processes}$ we present here provides such a theory and consists of
three main components: a $\textit{population state space}$, an
$\textit{aggregate payoff function}$, and an $\textit{update rule}$. A
population state space is a generalization of the notion of population
structure, and it can include non-spatial information such as strategy-mutation
rates and phenotypes. An aggregate payoff function allows one to generically
talk about the fitness of traits without explicitly specifying a method of
payoff accounting or even the nature of the interactions that determine
payoff/fitness. An update rule is a fitness-based function that updates a
population based on its current state, and it includes as special cases the
classical update mechanisms (Moran, Wright-Fisher, etc.) as well as more
complicated mechanisms involving chromosomal crossover, mutation, and even
complex cultural syntheses of strategies of neighboring individuals. Our
framework covers models with variable population size as well as with
arbitrary, measurable trait spaces.
| [
{
"created": "Tue, 17 Nov 2015 13:18:14 GMT",
"version": "v1"
}
] | 2015-11-18 | [
[
"McAvoy",
"Alex",
""
]
] | We propose a mathematical framework for natural selection in finite populations. Traditionally, many of the selection-based processes used to describe cultural and genetic evolution (such as imitation and birth-death models) have been studied on a case-by-case basis. Over time, these models have grown in sophistication to include population structure, differing phenotypes, and various forms of interaction asymmetry, among other features. Furthermore, many processes inspired by natural selection, such as evolutionary algorithms in computer science, possess characteristics that should fall within the realm of a "selection process," but so far there is no overarching theory encompassing these evolutionary processes. The framework of $\textit{stochastic selection processes}$ we present here provides such a theory and consists of three main components: a $\textit{population state space}$, an $\textit{aggregate payoff function}$, and an $\textit{update rule}$. A population state space is a generalization of the notion of population structure, and it can include non-spatial information such as strategy-mutation rates and phenotypes. An aggregate payoff function allows one to generically talk about the fitness of traits without explicitly specifying a method of payoff accounting or even the nature of the interactions that determine payoff/fitness. An update rule is a fitness-based function that updates a population based on its current state, and it includes as special cases the classical update mechanisms (Moran, Wright-Fisher, etc.) as well as more complicated mechanisms involving chromosomal crossover, mutation, and even complex cultural syntheses of strategies of neighboring individuals. Our framework covers models with variable population size as well as with arbitrary, measurable trait spaces. |
2010.00531 | Nabil Abid | Nabil Abid, Giovanni Chillemi, and Ahmed Rebai | Did circoviruses intermediate the recombination between bat and pangolin
coronaviruses, yielding SARS-CoV-2? | 18 pages, 4 figures | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the first reports of a coronavirus (CoV) disease 2019 (COVID-19) caused
by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in Wuhan, Hubei
province, China, scientists are working around the clock to find sound answers
to the issue of its origin. While the number of scientific articles on
SARS-CoV-2 is increasing, there are still many gaps as to its origin. All
studies failed to find a coronavirus in other animals that is more similar to
human SARS-CoV-2 than the bat virus, considered to be the primary reservoir. In
this paper we propose a new hypothesis, based on a possible recombination
between a DNA virus and SARS-CoV viruses, to explain the rise of SARS-CoV-2. By
comparing SARS-CoV-2 and related CoVs with circoviruses (CVs), we found strong
sequence similarity of the genomic region at the 3'-end of Bat-CoV ORF1a and the
origin of replication (Ori) of porcine CV type 2 (PCV2), as well as similar RNA
secondary structures of the region encompassing the cleavage site of the CoV S
gene with the PCV2 Ori. This constitutes primary evidence supporting a possible
recombination, whose occurrence might explain the origin of SARS-CoV-2.
| [
{
"created": "Tue, 29 Sep 2020 19:43:46 GMT",
"version": "v1"
}
] | 2020-10-02 | [
[
"Abid",
"Nabil",
""
],
[
"Chillemi",
"Giovanni",
""
],
[
"Rebai",
"Ahmed",
""
]
] | Since the first reports of a coronavirus (CoV) disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in Wuhan, Hubei province, China, scientists are working around the clock to find sound answers to the issue of its origin. While the number of scientific articles on SARS-CoV-2 is increasing, there are still many gaps as to its origin. All studies failed to find a coronavirus in other animals that is more similar to human SARS-CoV-2 than the bat virus, considered to be the primary reservoir. In this paper we propose a new hypothesis, based on a possible recombination between a DNA virus and SARS-CoV viruses, to explain the rise of SARS-CoV-2. By comparing SARS-CoV-2 and related CoVs with circoviruses (CVs), we found strong sequence similarity of the genomic region at the 3'-end of Bat-CoV ORF1a and the origin of replication (Ori) of porcine CV type 2 (PCV2), as well as similar RNA secondary structures of the region encompassing the cleavage site of the CoV S gene with the PCV2 Ori. This constitutes primary evidence supporting a possible recombination, whose occurrence might explain the origin of SARS-CoV-2. |
0901.1248 | Martin Weigt | M. Weigt, R.A. White, H. Szurmant, J.A. Hoch, T. Hwa | Identification of direct residue contacts in protein-protein interaction
by message passing | Supplementary information available on
http://www.pnas.org/content/106/1/67.abstract | Proc. Natl. Acad. Sci. 106(1), 67-72 (2009) | 10.1073/pnas.0805923106 | null | q-bio.BM cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the molecular determinants of specificity in protein-protein
interaction is an outstanding challenge of postgenome biology. The availability
of large protein databases generated from sequences of hundreds of bacterial
genomes enables various statistical approaches to this problem. In this context
covariance-based methods have been used to identify correlation between amino
acid positions in interacting proteins. However, these methods have an
important shortcoming, in that they cannot distinguish between directly and
indirectly correlated residues. We developed a method that combines covariance
analysis with global inference analysis, adopted from use in statistical
physics. Applied to a set of >2,500 representatives of the bacterial
two-component signal transduction system, the combination of covariance with
global inference successfully and robustly identified residue pairs that are
proximal in space without resorting to ad hoc tuning parameters, both for
heterointeractions between sensor kinase (SK) and response regulator (RR)
proteins and for homointeractions between RR proteins. The spectacular success
of this approach illustrates the effectiveness of the global inference approach
in identifying direct interaction based on sequence information alone. We
expect this method to be applicable soon to interaction surfaces between
proteins present in only 1 copy per genome as the number of sequenced genomes
continues to expand. Use of this method could significantly increase the
potential targets for therapeutic intervention, shed light on the mechanism of
protein-protein interaction, and establish the foundation for the accurate
prediction of interacting protein partners.
| [
{
"created": "Fri, 9 Jan 2009 13:46:58 GMT",
"version": "v1"
}
] | 2009-01-12 | [
[
"Weigt",
"M.",
""
],
[
"White",
"R. A.",
""
],
[
"Szurmant",
"H.",
""
],
[
"Hoch",
"J. A.",
""
],
[
"Hwa",
"T.",
""
]
] | Understanding the molecular determinants of specificity in protein-protein interaction is an outstanding challenge of postgenome biology. The availability of large protein databases generated from sequences of hundreds of bacterial genomes enables various statistical approaches to this problem. In this context covariance-based methods have been used to identify correlation between amino acid positions in interacting proteins. However, these methods have an important shortcoming, in that they cannot distinguish between directly and indirectly correlated residues. We developed a method that combines covariance analysis with global inference analysis, adopted from use in statistical physics. Applied to a set of >2,500 representatives of the bacterial two-component signal transduction system, the combination of covariance with global inference successfully and robustly identified residue pairs that are proximal in space without resorting to ad hoc tuning parameters, both for heterointeractions between sensor kinase (SK) and response regulator (RR) proteins and for homointeractions between RR proteins. The spectacular success of this approach illustrates the effectiveness of the global inference approach in identifying direct interaction based on sequence information alone. We expect this method to be applicable soon to interaction surfaces between proteins present in only 1 copy per genome as the number of sequenced genomes continues to expand. Use of this method could significantly increase the potential targets for therapeutic intervention, shed light on the mechanism of protein-protein interaction, and establish the foundation for the accurate prediction of interacting protein partners. |
2005.14602 | Ali Demirci | Ayse Peker-Dobie, Semra Ahmetolan, Ayse Humeyra Bilge, Ali Demirci | A New Susceptible-Infectious (SI) Model With Endemic Equilibrium | 14 pages, 11 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The focus of this article is on the dynamics of a new susceptible-infected
model which consists of a susceptible group ($S$) and two different infectious
groups ($I_1$ and $I_2$). Once infected, an individual becomes a member of one
of these infectious groups which have different clinical forms of infection. In
addition, during the progress of the illness, an infected individual in group
$I_1$ may pass to the infectious group $I_2$ which has a higher mortality rate.
In this study, positivity of the solutions of the model is proved. The
stability of the species-extinction, $I_1$-free, endemic, and disease-free
equilibria is analyzed. The relation between the basic
reproduction number of the disease and the basic reproduction number of each
infectious stage is examined. The model is investigated under a specific
condition and its exact solution is obtained.
| [
{
"created": "Fri, 29 May 2020 14:38:16 GMT",
"version": "v1"
}
] | 2020-06-01 | [
[
"Peker-Dobie",
"Ayse",
""
],
[
"Ahmetolan",
"Semra",
""
],
[
"Bilge",
"Ayse Humeyra",
""
],
[
"Demirci",
"Ali",
""
]
] | The focus of this article is on the dynamics of a new susceptible-infected model which consists of a susceptible group ($S$) and two different infectious groups ($I_1$ and $I_2$). Once infected, an individual becomes a member of one of these infectious groups which have different clinical forms of infection. In addition, during the progress of the illness, an infected individual in group $I_1$ may pass to the infectious group $I_2$ which has a higher mortality rate. In this study, positivity of the solutions of the model is proved. The stability of the species-extinction, $I_1$-free, endemic, and disease-free equilibria is analyzed. The relation between the basic reproduction number of the disease and the basic reproduction number of each infectious stage is examined. The model is investigated under a specific condition and its exact solution is obtained. |
2206.11255 | Hui Liu | Yiming Fang, Xuejun Liu, and Hui Liu | Attention-aware contrastive learning for predicting T cell
receptor-antigen binding specificity | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | It has been verified that only a small fraction of the neoantigens presented
by MHC class I molecules on the cell surface can elicit T cells. The limitation
can be attributed to the binding specificity of T cell receptor (TCR) to
peptide-MHC complex (pMHC). Computational prediction of T cell binding to
neoantigens is a challenging and unresolved task. In this paper, we propose an
attentive-mask contrastive learning model, ATMTCR, for inferring TCR-antigen
binding specificity. For each input TCR sequence, we used a Transformer encoder
to transform it into a latent representation, and then masked a proportion of
residues guided by attention weights to generate its contrastive view. By
pretraining on large-scale TCR CDR3 sequences, we verified that contrastive
learning significantly improved the prediction performance of TCR binding to
peptide-MHC complexes (pMHC). Beyond detecting important amino acids and
their locations in the TCR sequence, our model can also extract high-order
semantic information underlying TCR-antigen binding specificity. In comparison
experiments on two independent datasets, our method achieved
better performance than other existing algorithms. Moreover, we effectively
identified important amino acids and their positional preferences through
attention weights, which indicated the interpretability of our proposed model.
| [
{
"created": "Tue, 17 May 2022 10:53:32 GMT",
"version": "v1"
}
] | 2022-06-24 | [
[
"Fang",
"Yiming",
""
],
[
"Liu",
"Xuejun",
""
],
[
"Liu",
"Hui",
""
]
] | It has been verified that only a small fraction of the neoantigens presented by MHC class I molecules on the cell surface can elicit T cells. The limitation can be attributed to the binding specificity of T cell receptor (TCR) to peptide-MHC complex (pMHC). Computational prediction of T cell binding to neoantigens is a challenging and unresolved task. In this paper, we propose an attentive-mask contrastive learning model, ATMTCR, for inferring TCR-antigen binding specificity. For each input TCR sequence, we used a Transformer encoder to transform it into a latent representation, and then masked a proportion of residues guided by attention weights to generate its contrastive view. By pretraining on large-scale TCR CDR3 sequences, we verified that contrastive learning significantly improved the prediction performance of TCR binding to peptide-MHC complexes (pMHC). Beyond detecting important amino acids and their locations in the TCR sequence, our model can also extract high-order semantic information underlying TCR-antigen binding specificity. In comparison experiments on two independent datasets, our method achieved better performance than other existing algorithms. Moreover, we effectively identified important amino acids and their positional preferences through attention weights, which indicated the interpretability of our proposed model. |
2112.00777 | Young-Eun Hwang | Young-Eun Hwang, Young-Bo Kim, and Young-Don Son | Finding Cortical Subregions Regarding the Dorsal Language Pathway Based
on the Structural Connectivity | 35 pages, 9 figures, 3 tables, 3 supplementary Figures, 5
supplementary tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the language-related fiber pathways in the human brain, such as
superior longitudinal fasciculus (SLF) and arcuate fasciculus (AF), are already
well-known, understanding more sophisticated cortical regions connected by the
fiber tracts is essential to scrutinizing the structural connectivity of
language circuits. With the regions of interest that were selected based on the
Brainnetome atlas, the fiber orientation distribution estimation method for
tractography was used to produce further elaborate connectivity information.
The results indicated that both fiber bundles had two distinct connections with
the prefrontal cortex (PFC). The SLF-II and dorsal AF are mainly connected to
the rostrodorsal part of the inferior parietal cortex (IPC) and lateral part of
the fusiform gyrus with the inferior frontal junction (IFJ), respectively. In
contrast, the SLF-III and ventral AF were primarily linked to the anterior part
of the supramarginal gyrus and superior part of the temporal cortex with the
inferior frontal cortex, including Broca's area. Moreover, the IFJ in the
PFC, which has rarely been emphasized as a language-related subregion, also had
the strongest connectivity with the previously known language-related
subregions among the PFC; consequently, we proposed that these specific regions
are interconnected via the SLF and AF within the PFC, IPC, and temporal cortex
as language-related circuitry.
| [
{
"created": "Wed, 1 Dec 2021 19:03:25 GMT",
"version": "v1"
}
] | 2021-12-03 | [
[
"Hwang",
"Young-Eun",
""
],
[
"Kim",
"Young-Bo",
""
],
[
"Son",
"Young-Don",
""
]
] | Although the language-related fiber pathways in the human brain, such as superior longitudinal fasciculus (SLF) and arcuate fasciculus (AF), are already well-known, understanding more sophisticated cortical regions connected by the fiber tracts is essential to scrutinizing the structural connectivity of language circuits. With the regions of interest that were selected based on the Brainnetome atlas, the fiber orientation distribution estimation method for tractography was used to produce further elaborate connectivity information. The results indicated that both fiber bundles had two distinct connections with the prefrontal cortex (PFC). The SLF-II and dorsal AF are mainly connected to the rostrodorsal part of the inferior parietal cortex (IPC) and lateral part of the fusiform gyrus with the inferior frontal junction (IFJ), respectively. In contrast, the SLF-III and ventral AF were primarily linked to the anterior part of the supramarginal gyrus and superior part of the temporal cortex with the inferior frontal cortex, including Broca's area. Moreover, the IFJ in the PFC, which has rarely been emphasized as a language-related subregion, also had the strongest connectivity with the previously known language-related subregions among the PFC; consequently, we proposed that these specific regions are interconnected via the SLF and AF within the PFC, IPC, and temporal cortex as language-related circuitry. |
1509.02044 | Jaroslav Ilnytskyi Dr. | J.M. Ilnytskyi, Y. Kozitsky, H.I. Ilnytskyi, O. Haiduchok | Stationary states and spatial patterning in an $SIS$ epidemiology model
with implicit mobility | 17 pages, 9 figures | null | 10.1016/j.physa.2016.05.006 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By means of the asynchronous cellular automata algorithm we study stationary
states and spatial patterning in an $SIS$ model, in which the individuals are
attached to the vertices of a graph and their mobility is mimicked by varying
the neighbourhood size $q$. The versions with fixed $q$ and those taken at
random at each step and for each individual are studied. Numerical data on the
local behaviour of the model are mapped onto the solution of its zero
dimensional version, corresponding to the limit $q\to +\infty$ and equivalent
to the logistic growth model. This allows for deducing an explicit form of the
dependence of the fraction of infected individuals on the curing rate $\gamma$.
A detailed analysis of the appearance of spatial patterns of infected
individuals in the stationary state is performed.
| [
{
"created": "Mon, 7 Sep 2015 13:55:04 GMT",
"version": "v1"
}
] | 2016-06-22 | [
[
"Ilnytskyi",
"J. M.",
""
],
[
"Kozitsky",
"Y.",
""
],
[
"Ilnytskyi",
"H. I.",
""
],
[
"Haiduchok",
"O.",
""
]
] | By means of the asynchronous cellular automata algorithm we study stationary states and spatial patterning in an $SIS$ model, in which the individuals are attached to the vertices of a graph and their mobility is mimicked by varying the neighbourhood size $q$. The versions with fixed $q$ and those taken at random at each step and for each individual are studied. Numerical data on the local behaviour of the model are mapped onto the solution of its zero dimensional version, corresponding to the limit $q\to +\infty$ and equivalent to the logistic growth model. This allows for deducing an explicit form of the dependence of the fraction of infected individuals on the curing rate $\gamma$. A detailed analysis of the appearance of spatial patterns of infected individuals in the stationary state is performed. |
1406.7750 | Yujun Cui | Yujun Cui, Xianwei Yang, Xavier Didelot, Chenyi Guo, Dongfang Li,
Yanfeng Yan, Yiquan Zhang, Yanting Yuan, Huanming Yang, Jian Wang, Jun Wang,
Yajun Song, Dongsheng Zhou, Daniel Falush, Ruifu Yang | Epidemic clones, oceanic gene pools and eco-LD in the free living marine
pathogen Vibrio parahaemolyticus | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigated global patterns of variation in 157 whole genome sequences of
Vibrio parahaemolyticus, a free-living and seafood-associated marine bacterium.
Pandemic clones, responsible for recent outbreaks of gastroenteritis in humans,
have spread globally. However, there are oceanic gene pools, one located in the
oceans surrounding Asia and another in the Mexican Gulf. Frequent recombination
means that most isolates have acquired the genetic profile of their current
location. We investigated the genetic structure in the Asian gene pool by
calculating the effective population size in two different ways. Under standard
neutral models, the two estimates should give similar answers but we found a
thirty fold difference. We propose that this discrepancy is caused by the
subdivision of the species into a hundred or more ecotypes which are maintained
stably in the population. To investigate the genetic factors involved, we used
51 unrelated isolates to conduct a genome-wide scan for epistatically
interacting loci. We found a single example of strong epistasis between distant
genome regions. A majority of strains had a type VI secretion system associated
with bacterial killing. The remaining strains had genes associated with biofilm
formation and regulated by c-di-GMP signaling. All strains had one or other of
the two systems and none of the isolates had complete complements of both systems,
although several strains had remnants. Further top-down analysis of patterns of
linkage disequilibrium within frequently recombining species will allow a
detailed understanding of how selection acts to structure the pattern of
variation within natural bacterial populations.
| [
{
"created": "Mon, 30 Jun 2014 14:22:17 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Nov 2014 11:55:39 GMT",
"version": "v2"
}
] | 2014-12-02 | [
[
"Cui",
"Yujun",
""
],
[
"Yang",
"Xianwei",
""
],
[
"Didelot",
"Xavier",
""
],
[
"Guo",
"Chenyi",
""
],
[
"Li",
"Dongfang",
""
],
[
"Yan",
"Yanfeng",
""
],
[
"Zhang",
"Yiquan",
""
],
[
"Yuan",
"Yanting",
""
],
[
"Yang",
"Huanming",
""
],
[
"Wang",
"Jian",
""
],
[
"Wang",
"Jun",
""
],
[
"Song",
"Yajun",
""
],
[
"Zhou",
"Dongsheng",
""
],
[
"Falush",
"Daniel",
""
],
[
"Yang",
"Ruifu",
""
]
] | We investigated global patterns of variation in 157 whole genome sequences of Vibrio parahaemolyticus, a free-living and seafood-associated marine bacterium. Pandemic clones, responsible for recent outbreaks of gastroenteritis in humans, have spread globally. However, there are oceanic gene pools, one located in the oceans surrounding Asia and another in the Mexican Gulf. Frequent recombination means that most isolates have acquired the genetic profile of their current location. We investigated the genetic structure in the Asian gene pool by calculating the effective population size in two different ways. Under standard neutral models, the two estimates should give similar answers but we found a thirty fold difference. We propose that this discrepancy is caused by the subdivision of the species into a hundred or more ecotypes which are maintained stably in the population. To investigate the genetic factors involved, we used 51 unrelated isolates to conduct a genome-wide scan for epistatically interacting loci. We found a single example of strong epistasis between distant genome regions. A majority of strains had a type VI secretion system associated with bacterial killing. The remaining strains had genes associated with biofilm formation and regulated by c-di-GMP signaling. All strains had one or other of the two systems and none of the isolates had complete complements of both systems, although several strains had remnants. Further top-down analysis of patterns of linkage disequilibrium within frequently recombining species will allow a detailed understanding of how selection acts to structure the pattern of variation within natural bacterial populations. |
2201.08747 | Jalal Mirakhorli | Jalal Mirakhorli | Inferring Brain Dynamics via Multimodal Joint Graph Representation
EEG-fMRI | 13 pages, 2 figures | null | null | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have shown that multi-modeling methods can provide new
insights into the analysis of brain components that are not possible when each
modality is acquired separately. The joint representations of different
modalities is a robust model to analyze simultaneously acquired
electroencephalography and functional magnetic resonance imaging (EEG-fMRI).
Advances in precision instruments have given us the ability to observe the
spatiotemporal neural dynamics of the human brain through non-invasive
neuroimaging techniques such as EEG & fMRI. Nonlinear fusion methods of streams
can extract effective brain components in the temporal and spatial dimensions.
Graph-based analyses, which have many similarities to brain structure,
can overcome the complexities of brain mapping analysis. Throughout, we outline
the correlations of several different media in time shifts from one source with
graph-based and deep learning methods. Determining overlaps can provide a new
perspective for diagnosing functional changes in neuroplasticity studies.
| [
{
"created": "Fri, 21 Jan 2022 15:39:48 GMT",
"version": "v1"
}
] | 2022-01-24 | [
[
"Mirakhorli",
"Jalal",
""
]
] | Recent studies have shown that multi-modeling methods can provide new insights into the analysis of brain components that are not possible when each modality is acquired separately. The joint representation of different modalities is a robust model for analyzing simultaneously acquired electroencephalography and functional magnetic resonance imaging (EEG-fMRI). Advances in precision instruments have given us the ability to observe the spatiotemporal neural dynamics of the human brain through non-invasive neuroimaging techniques such as EEG & fMRI. Nonlinear fusion methods of streams can extract effective brain components in the temporal and spatial dimensions. Graph-based analyses, which have many similarities to brain structure, can overcome the complexities of brain mapping analysis. Throughout, we outline the correlations of several different media in time shifts from one source with graph-based and deep learning methods. Determining overlaps can provide a new perspective for diagnosing functional changes in neuroplasticity studies. |
2104.01611 | John Vandermeer | John Vandermeer and Zachary Hajian-Forooshani | The interplay of extinction and synchrony in the dynamics of
metapopulation formation | 35 pages, 21 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The idea of a metapopulation has become canonical in ecology. Its original
mean field form provides the important intuition that migration and extinction
interact to determine the dynamics of a population composed of subpopulations.
From its conception, it has been evident that the very essence of the
metapopulation paradigm centers on the process of local extinction. We note
that there are two qualitatively distinct types of extinction, gradual and
catastrophic, and explore their impact on the dynamics of metapopulation
formation using discrete iterative maps. First, by modifying the classic
logistic map with the addition of the Allee effect, we show that catastrophic
local extinctions in subpopulations are a pre-requisite of metapopulation
formation. When subpopulations experience gradual extinction, increased
migration rates force synchrony and drive the metapopulation below the Allee
point, resulting in migration-induced destabilization of the system across
parameter space. Second, a sawtooth map (an extension of the Bernoulli bit
shift map) is employed to simultaneously explore the increasing and decreasing
modes of population behavior. We conclude with four generalizations. 1. At low
migration rates, a metapopulation may go extinct faster than completely
unconnected subpopulations. 2. There exists a gradient between stable
metapopulation formation and population synchrony, with critical transitions
from no metapopulation to metapopulation to synchronization, the latter
frequently inducing metapopulation extinction. 3. Synchronization patterns
emerge through time, resulting in synchrony groups and chimeric populations
existing simultaneously. 4. There are two distinct mechanisms of
synchronization: (i) extinction and rescue, and (ii) stretch reversals in a
modification of the classic chaotic stretching and folding.
| [
{
"created": "Sun, 4 Apr 2021 13:34:28 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Jun 2021 10:19:20 GMT",
"version": "v2"
}
] | 2021-07-01 | [
[
"Vandermeer",
"John",
""
],
[
"Hajian-Forooshani",
"Zachary",
""
]
] | The idea of a metapopulation has become canonical in ecology. Its original mean field form provides the important intuition that migration and extinction interact to determine the dynamics of a population composed of subpopulations. From its conception, it has been evident that the very essence of the metapopulation paradigm centers on the process of local extinction. We note that there are two qualitatively distinct types of extinction, gradual and catastrophic, and explore their impact on the dynamics of metapopulation formation using discrete iterative maps. First, by modifying the classic logistic map with the addition of the Allee effect, we show that catastrophic local extinctions in subpopulations are a pre-requisite of metapopulation formation. When subpopulations experience gradual extinction, increased migration rates force synchrony and drive the metapopulation below the Allee point resulting in migration induced destabilization of the system across parameter space. Second, a sawtooth map (an extension of the Bernoulli bit shift map) is employed to simultaneously explore the increasing and decreasing modes of population behavior. We conclude with four generalizations. 1. At low migration rates, a metapopulation may go extinct faster than completely unconnected subpopulations. 2. There exists a gradient between stable metapopulation formation and population synchrony, with critical transitions from no metapopulation to metapopulation to synchronization, the latter frequently inducing metapopulation extinction. 3. Synchronization patterns emerge through time, resulting in synchrony groups and chimeric populations existing simultaneously. 4. There are two distinct mechanisms of synchronization: (i) extinction and rescue, and (ii) stretch reversals in a modification of the classic chaotic stretching and folding. |
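The coupled-map construction in the abstract above can be sketched in a few lines. The mean-field migration term, the hard Allee cutoff, and every parameter value below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def step(x, r, allee, m):
    """One iteration: local logistic growth with an Allee threshold,
    then mean-field migration among subpopulations (illustrative form)."""
    # catastrophic local extinction below the Allee point
    x = np.where(x < allee, 0.0, r * x * (1.0 - x))
    # migration: each subpopulation exchanges a fraction m with the mean field
    return (1.0 - m) * x + m * x.mean()

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 0.9, size=10)  # ten subpopulations
for _ in range(200):
    x = step(x, r=3.8, allee=0.05, m=0.02)
print("surviving subpopulations:", int((x > 0).sum()))
```

Raising `m` couples the maps more tightly, which is the regime where the abstract reports synchrony forcing the whole system below the Allee point.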
1811.03162 | Hiroaki Yamada | Shu-ichi Kinoshita and Hiroaki S. Yamada | Role of self-loop in cell-cycle network of budding yeast | 6 pages, 5 figures | null | null | null | q-bio.MN cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of network dynamics is a very active area in the biological and
social sciences. However, the relationship between network structure and the
attractors of the dynamics has not yet been fully understood. In this study, we
numerically investigated the role of degenerate self-loops on the attractors
and their basin sizes using the budding yeast cell-cycle network model. In this
network, all self-loops negatively suppress their nodes (self-inhibition loops)
and the attractors are only fixed points, i.e. point attractors. It is found
that there is a simple division rule of the state space by removing the
self-loops when the attractors consist only of point attractors. The point
attractor with the largest basin size is robust against changes of the
self-inhibition loops. Furthermore, some limit cycles of period 2 appear as new
attractors when a self-activation loop is added to the original network. It is
also shown that even in that case, the point attractor with the largest basin
size is robust.
| [
{
"created": "Wed, 7 Nov 2018 22:05:07 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Nov 2018 04:52:02 GMT",
"version": "v2"
}
] | 2018-11-21 | [
[
"Kinoshita",
"Shu-ichi",
""
],
[
"Yamada",
"Hiroaki S.",
""
]
] | The study of network dynamics is a very active area in the biological and social sciences. However, the relationship between network structure and the attractors of the dynamics has not yet been fully understood. In this study, we numerically investigated the role of degenerate self-loops on the attractors and their basin sizes using the budding yeast cell-cycle network model. In this network, all self-loops negatively suppress their nodes (self-inhibition loops) and the attractors are only fixed points, i.e. point attractors. It is found that there is a simple division rule of the state space by removing the self-loops when the attractors consist only of point attractors. The point attractor with the largest basin size is robust against changes of the self-inhibition loops. Furthermore, some limit cycles of period 2 appear as new attractors when a self-activation loop is added to the original network. It is also shown that even in that case, the point attractor with the largest basin size is robust. |
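The attractor and basin-size computation described above can be illustrated on a toy threshold Boolean network. The three-node wiring below is an invented example, far smaller than the actual yeast cell-cycle network, and the synchronous threshold update with state-holding at zero input is one common convention.

```python
from itertools import product

# Toy threshold Boolean network (illustrative; not the actual yeast model).
# W[i][j] is the weight of edge j -> i; the negative diagonal entries are
# self-inhibition loops of the kind discussed in the abstract.
W = [[-1, 1, 0],
     [0, -1, 1],
     [1, 0, -1]]

def update(state):
    nxt = []
    for i, row in enumerate(W):
        h = sum(w * s for w, s in zip(row, state))
        nxt.append(1 if h > 0 else 0 if h < 0 else state[i])  # hold at zero input
    return tuple(nxt)

def basins():
    """Map each point attractor to its basin size by following trajectories."""
    sizes = {}
    for state in product([0, 1], repeat=len(W)):
        seen = set()
        while state not in seen:
            seen.add(state)
            state = update(state)
        if update(state) == state:  # count only point attractors
            sizes[state] = sizes.get(state, 0) + 1
    return sizes

print(basins())
```

Removing or flipping a diagonal entry of `W` and re-running `basins()` is the kind of self-loop perturbation whose effect on basin sizes the abstract studies.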
0905.1682 | Andrey Shabalin | Andrey A. Shabalin, Victor J. Weigman, Charles M. Perou, Andrew B.
Nobel | Finding large average submatrices in high dimensional data | Published in at http://dx.doi.org/10.1214/09-AOAS239 the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org) | Annals of Applied Statistics 2009, Vol. 3, No. 3, 985-1012 | 10.1214/09-AOAS239 | IMS-AOAS-AOAS239 | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The search for sample-variable associations is an important problem in the
exploratory analysis of high dimensional data. Biclustering methods search for
sample-variable associations in the form of distinguished submatrices of the
data matrix. (The rows and columns of a submatrix need not be contiguous.) In
this paper we propose and evaluate a statistically motivated biclustering
procedure (LAS) that finds large average submatrices within a given real-valued
data matrix. The procedure operates in an iterative-residual fashion, and is
driven by a Bonferroni-based significance score that effectively trades off
between submatrix size and average value. We examine the performance and
potential utility of LAS, and compare it with a number of existing methods,
through an extensive three-part validation study using two gene expression
datasets. The validation study examines quantitative properties of biclusters,
biological and clinical assessments using auxiliary information, and
classification of disease subtypes using bicluster membership. In addition, we
carry out a simulation study to assess the effectiveness and noise sensitivity
of the LAS search procedure. These results suggest that LAS is an effective
exploratory tool for the discovery of biologically relevant structures in high
dimensional data. Software is available at https://genome.unc.edu/las/.
| [
{
"created": "Mon, 11 May 2009 19:32:54 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Oct 2009 08:48:32 GMT",
"version": "v2"
}
] | 2009-10-08 | [
[
"Shabalin",
"Andrey A.",
""
],
[
"Weigman",
"Victor J.",
""
],
[
"Perou",
"Charles M.",
""
],
[
"Nobel",
"Andrew B.",
""
]
] | The search for sample-variable associations is an important problem in the exploratory analysis of high dimensional data. Biclustering methods search for sample-variable associations in the form of distinguished submatrices of the data matrix. (The rows and columns of a submatrix need not be contiguous.) In this paper we propose and evaluate a statistically motivated biclustering procedure (LAS) that finds large average submatrices within a given real-valued data matrix. The procedure operates in an iterative-residual fashion, and is driven by a Bonferroni-based significance score that effectively trades off between submatrix size and average value. We examine the performance and potential utility of LAS, and compare it with a number of existing methods, through an extensive three-part validation study using two gene expression datasets. The validation study examines quantitative properties of biclusters, biological and clinical assessments using auxiliary information, and classification of disease subtypes using bicluster membership. In addition, we carry out a simulation study to assess the effectiveness and noise sensitivity of the LAS search procedure. These results suggest that LAS is an effective exploratory tool for the discovery of biologically relevant structures in high dimensional data. Software is available at https://genome.unc.edu/las/. |
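The trade-off between submatrix size and average value that drives the LAS score can be sketched as follows. The exact form used by the authors may differ, so treat this Bonferroni-style score, and its standard-normal null, as an assumption.

```python
from math import comb, erfc, log, sqrt

def las_score(avg, k, l, m, n):
    """Bonferroni-corrected significance of a k-by-l submatrix with average
    `avg` inside an m-by-n matrix of approximately N(0,1) entries.
    Larger is more significant; a sketch of the LAS-style trade-off."""
    # P(N(0,1) >= avg * sqrt(k*l)): upper tail of the submatrix mean
    tail = 0.5 * erfc(avg * sqrt(k * l) / sqrt(2))
    # Bonferroni correction over all ways of choosing the rows and columns
    return -(log(comb(m, k)) + log(comb(n, l)) + log(tail))
```

The two `comb` terms penalize large search spaces while the tail term rewards high averages, which is exactly the size-versus-value trade-off the abstract describes.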
2303.01650 | Maximilian Nguyen | Maximilian Nguyen, Ari Freedman, Sinan Ozbay, and Simon Levin | Fundamental Bound on Epidemic Overshoot in the SIR Model | Main text: 7 pages + 2 figures. Supplement: 7 pages + 6 figures +
code | Journal of the Royal Society Interface 20.209 (2023): 20230322 | 10.1098/rsif.2023.0322 | null | q-bio.PE physics.bio-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We derive an exact upper bound on the epidemic overshoot for the
Kermack-McKendrick SIR model. This maximal overshoot value of 0.2984... occurs
at $R_0^*$ = 2.151... . In considering the utility of the notion of overshoot,
a rudimentary analysis of data from the first wave of the COVID-19 pandemic in
Manaus, Brazil highlights the public health hazard posed by overshoot for
epidemics with $R_0$ near 2. Using the general analysis framework presented
within, we then consider more complex SIR models that incorporate vaccination.
| [
{
"created": "Fri, 3 Mar 2023 00:52:41 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Mar 2023 21:23:39 GMT",
"version": "v2"
},
{
"created": "Thu, 4 May 2023 01:12:41 GMT",
"version": "v3"
},
{
"created": "Wed, 31 May 2023 19:28:19 GMT",
"version": "v4"
},
{
"created": "Fri, 3 Nov 2023 23:11:49 GMT",
"version": "v5"
}
] | 2024-03-28 | [
[
"Nguyen",
"Maximilian",
""
],
[
"Freedman",
"Ari",
""
],
[
"Ozbay",
"Sinan",
""
],
[
"Levin",
"Simon",
""
]
] | We derive an exact upper bound on the epidemic overshoot for the Kermack-McKendrick SIR model. This maximal overshoot value of 0.2984... occurs at $R_0^*$ = 2.151... . In considering the utility of the notion of overshoot, a rudimentary analysis of data from the first wave of the COVID-19 pandemic in Manaus, Brazil highlights the public health hazard posed by overshoot for epidemics with $R_0$ near 2. Using the general analysis framework presented within, we then consider more complex SIR models that incorporate vaccination. |
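The quoted maximum can be checked numerically from the classical Kermack-McKendrick final-size relation; whether the paper's exact derivation proceeds this way is not shown here, so the sketch below is only a consistency check.

```python
from math import exp

def final_size(r0, tol=1e-12):
    """Solve the classical SIR final-size relation r = 1 - exp(-r0 * r)
    by fixed-point iteration (fully susceptible start, epidemic branch)."""
    r = 1.0 - 1.0 / r0  # start above zero to reach the epidemic root
    for _ in range(10000):
        r_new = 1.0 - exp(-r0 * r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r

def overshoot(r0):
    """Infections beyond the herd-immunity threshold 1 - 1/r0."""
    return final_size(r0) - (1.0 - 1.0 / r0)

print(round(overshoot(2.151), 4))  # close to the 0.2984 maximum quoted above
```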
0809.2010 | Jorge F. Mejias | Jorge F. Mejias and Joaquin J. Torres | Maximum memory capacity on neural networks with short-term depression
and facilitation | 6 figures, accepted in Neural Computation | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we study, analytically and employing Monte Carlo simulations,
the influence of the competition between several activity-dependent synaptic
processes, such as short-term synaptic facilitation and depression, on the
maximum memory storage capacity in a neural network. In contrast with the case
of synaptic depression, which drastically reduces the capacity of the network
to store and retrieve "static" activity patterns, synaptic facilitation
enhances the storage capacity in different contexts. In particular, we found
optimal values of the relevant synaptic parameters (such as the
neurotransmitter release probability or the characteristic facilitation time
constant) for which the storage capacity can be maximal and similar to the one
obtained with static synapses, that is, without activity-dependent processes.
We conclude that depressing synapses with a certain level of facilitation allow
the recovery of the good retrieval properties of networks with static synapses while
maintaining the nonlinear characteristics of dynamic synapses, convenient for
information processing and coding.
| [
{
"created": "Thu, 11 Sep 2008 14:31:54 GMT",
"version": "v1"
}
] | 2010-07-23 | [
[
"Mejias",
"Jorge F.",
""
],
[
"Torres",
"Joaquin J.",
""
]
] | In this work we study, analytically and employing Monte Carlo simulations, the influence of the competition between several activity-dependent synaptic processes, such as short-term synaptic facilitation and depression, on the maximum memory storage capacity in a neural network. In contrast with the case of synaptic depression, which drastically reduces the capacity of the network to store and retrieve "static" activity patterns, synaptic facilitation enhances the storage capacity in different contexts. In particular, we found optimal values of the relevant synaptic parameters (such as the neurotransmitter release probability or the characteristic facilitation time constant) for which the storage capacity can be maximal and similar to the one obtained with static synapses, that is, without activity-dependent processes. We conclude that depressing synapses with a certain level of facilitation allow the recovery of the good retrieval properties of networks with static synapses while maintaining the nonlinear characteristics of dynamic synapses, convenient for information processing and coding. |
2106.10772 | Jiabin Wu | Ethan Holdahl and Jiabin Wu | Conflicts, Assortative Matching, and the Evolution of Signaling Norms | 27 page, 11 figures | null | null | null | q-bio.PE econ.TH | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a model to explain the potential role of inter-group
conflicts in determining the rise and fall of signaling norms. Individuals in a
population are characterized by high and low productivity types and they are
matched in pairs to form social relationships such as mating or foraging
relationships. In each relationship, an individual's payoff is increasing in
its own type and its partner's type. Hence, the payoff structure of a
relationship does not resemble a dilemma situation. Assume that types are not
observable. In one population, assortative matching according to types is
sustained by signaling. In the other population, individuals do not signal and
they are randomly matched. Types evolve within each population. At the same
time, the two populations may engage in conflicts. Due to assortative matching,
high types grow faster in the population with signaling, yet they bear the cost
of signaling, which lowers their population's fitness in the long run. Through
simulations, we show that the survival of the signaling population depends
crucially on the timing and the efficiency of the weapons used in inter-group
conflicts.
| [
{
"created": "Sun, 20 Jun 2021 22:47:37 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 23:59:49 GMT",
"version": "v2"
},
{
"created": "Mon, 20 Dec 2021 23:45:06 GMT",
"version": "v3"
},
{
"created": "Sun, 6 Aug 2023 07:00:41 GMT",
"version": "v4"
}
] | 2023-08-08 | [
[
"Holdahl",
"Ethan",
""
],
[
"Wu",
"Jiabin",
""
]
] | This paper proposes a model to explain the potential role of inter-group conflicts in determining the rise and fall of signaling norms. Individuals in a population are characterized by high and low productivity types and they are matched in pairs to form social relationships such as mating or foraging relationships. In each relationship, an individual's payoff is increasing in its own type and its partner's type. Hence, the payoff structure of a relationship does not resemble a dilemma situation. Assume that types are not observable. In one population, assortative matching according to types is sustained by signaling. In the other population, individuals do not signal and they are randomly matched. Types evolve within each population. At the same time, the two populations may engage in conflicts. Due to assortative matching, high types grow faster in the population with signaling, yet they bear the cost of signaling, which lowers their population's fitness in the long run. Through simulations, we show that the survival of the signaling population depends crucially on the timing and the efficiency of the weapons used in inter-group conflicts. |
1409.4238 | Sara Jabbari | Lucy Ternent, Rosemary J. Dyson, Anne-Marie Krachler, Sara Jabbari | Bacterial fitness shapes the population dynamics of antibiotic-resistant
and -susceptible bacteria in a model of combined antibiotic and
anti-virulence treatment | Pre-review manuscript. Submitted to Journal of Theoretical Biology,
July 21st 2014 | null | null | null | q-bio.CB q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacterial resistance to antibiotic treatment is a huge concern: introduction
of any new antibiotic is shortly followed by the emergence of resistant
bacterial isolates in the clinic. This issue is compounded by a severe lack of
new antibiotics reaching the market. The significant rise in clinical
resistance to antibiotics is especially problematic in nosocomial infections,
where already vulnerable patients may fail to respond to treatment, causing
even greater health concern. A recent focus has been on the development of
anti-virulence drugs as a second line of defence in the treatment of
antibiotic-resistant infections. This treatment, which weakens bacteria by
reducing their virulence rather than killing them, should allow infections to
be cleared through the body's natural defence mechanisms. In this way there
should be little to no selective pressure exerted on the organism and, as such,
a predominantly resistant population would be unlikely to emerge. However, much
controversy surrounds this approach with many believing it would not be
powerful enough to clear existing infections, restricting its potential
application to prophylaxis. We have developed a mathematical model that
provides a theoretical framework to reveal the circumstances under which
anti-virulence drugs may or may not be successful. We demonstrate that by
harnessing and combining the advantages of antibiotics with those provided by
anti-virulence drugs, given infection-specific parameters, it is possible to
identify treatment strategies that would efficiently clear bacterial
infections, while preventing the emergence of resistant subpopulations. Our
findings strongly support the continuation of research into anti-virulence
drugs and demonstrate that their applicability may reach beyond infection
prevention.
| [
{
"created": "Mon, 15 Sep 2014 13:06:25 GMT",
"version": "v1"
}
] | 2014-09-16 | [
[
"Ternent",
"Lucy",
""
],
[
"Dyson",
"Rosemary J.",
""
],
[
"Krachler",
"Anne-Marie",
""
],
[
"Jabbari",
"Sara",
""
]
] | Bacterial resistance to antibiotic treatment is a huge concern: introduction of any new antibiotic is shortly followed by the emergence of resistant bacterial isolates in the clinic. This issue is compounded by a severe lack of new antibiotics reaching the market. The significant rise in clinical resistance to antibiotics is especially problematic in nosocomial infections, where already vulnerable patients may fail to respond to treatment, causing even greater health concern. A recent focus has been on the development of anti-virulence drugs as a second line of defence in the treatment of antibiotic-resistant infections. This treatment, which weakens bacteria by reducing their virulence rather than killing them, should allow infections to be cleared through the body's natural defence mechanisms. In this way there should be little to no selective pressure exerted on the organism and, as such, a predominantly resistant population would be unlikely to emerge. However, much controversy surrounds this approach with many believing it would not be powerful enough to clear existing infections, restricting its potential application to prophylaxis. We have developed a mathematical model that provides a theoretical framework to reveal the circumstances under which anti-virulence drugs may or may not be successful. We demonstrate that by harnessing and combining the advantages of antibiotics with those provided by anti-virulence drugs, given infection-specific parameters, it is possible to identify treatment strategies that would efficiently clear bacterial infections, while preventing the emergence of resistant subpopulations. Our findings strongly support the continuation of research into anti-virulence drugs and demonstrate that their applicability may reach beyond infection prevention. |
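The kind of combined-treatment dynamics this abstract describes can be sketched with a minimal two-strain model. All terms and parameter values below are illustrative assumptions, not the authors' equations: antibiotic kill acts on the susceptible strain only, and the anti-virulence effect is modeled as a boost to immune clearance of both strains.

```python
def simulate(days=10.0, dt=0.001, antibiotic=1.0, antivirulence=1.0):
    """Forward-Euler sketch: antibiotic-susceptible (S) and -resistant (R)
    bacteria with shared logistic growth. Returns final (S, R) densities."""
    S, R = 0.5, 0.01
    r, K = 2.0, 1.0          # growth rate, carrying capacity
    kill, clear = 3.0, 0.5   # antibiotic kill rate, baseline immune clearance
    for _ in range(int(days / dt)):
        growth = r * (1.0 - (S + R) / K)
        dS = S * (growth - kill * antibiotic - clear * (1.0 + antivirulence))
        dR = R * (growth - clear * (1.0 + antivirulence))
        S, R = max(S + dt * dS, 0.0), max(R + dt * dR, 0.0)
    return S, R
```

With these invented parameters, the antibiotic alone clears `S` while `R` takes over, whereas a sufficiently strong anti-virulence term drives both strains out — a cartoon of the regime identification the paper's framework performs properly.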
1509.04568 | Alan McKane | Luis F. Lafuerza and Alan J. McKane | The role of demographic stochasticity in a speciation model with sexual
reproduction | 10 pages, 4 figures | Phys. Rev. E 93, 032121 (2016) | 10.1103/PhysRevE.93.032121 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent theoretical studies have shown that demographic stochasticity can
greatly increase the tendency of asexually reproducing phenotypically diverse
organisms to spontaneously evolve into localised clusters, suggesting a simple
mechanism for sympatric speciation. Here we study the role of demographic
stochasticity in a model of competing organisms subject to assortative mating.
We find that in models with sexual reproduction, noise can also lead to the
formation of phenotypic clusters in parameter ranges where deterministic models
would lead to a homogeneous distribution. In some cases, noise can have a
sizeable effect, rendering the deterministic modelling insufficient to
understand the phenotypic distribution.
| [
{
"created": "Tue, 15 Sep 2015 14:04:51 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Mar 2016 14:35:09 GMT",
"version": "v2"
}
] | 2016-03-23 | [
[
"Lafuerza",
"Luis F.",
""
],
[
"McKane",
"Alan J.",
""
]
] | Recent theoretical studies have shown that demographic stochasticity can greatly increase the tendency of asexually reproducing phenotypically diverse organisms to spontaneously evolve into localised clusters, suggesting a simple mechanism for sympatric speciation. Here we study the role of demographic stochasticity in a model of competing organisms subject to assortative mating. We find that in models with sexual reproduction, noise can also lead to the formation of phenotypic clusters in parameter ranges where deterministic models would lead to a homogeneous distribution. In some cases, noise can have a sizeable effect, rendering the deterministic modelling insufficient to understand the phenotypic distribution. |
2105.01223 | Luyan Yu | Luyan Yu and Thibaud Taillefumier | Metastable spiking networks in the replica-mean-field limit | 40 pages, 14 figures | null | 10.1371/journal.pcbi.1010215 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterizing metastable neural dynamics in finite-size spiking networks
remains a daunting challenge. We propose to address this challenge in the
recently introduced replica-mean-field (RMF) limit. In this limit, networks are
made of infinitely many replicas of the finite network of interest, but with
randomized interactions across replicas. Such randomization renders certain
excitatory networks fully tractable at the cost of neglecting activity
correlations, but with explicit dependence on the finite size of the neural
constituents. However, metastable dynamics typically unfold in networks with
mixed inhibition and excitation. Here, we extend the RMF computational
framework to point-process-based neural network models with exponential
stochastic intensities, allowing for mixed excitation and inhibition. Within
this setting, we show that metastable finite-size networks admit multistable
RMF limits, which are fully characterized by stationary firing rates.
Technically, these stationary rates are determined as the solutions of a set of
delayed differential equations under certain regularity conditions that any
physical solutions shall satisfy. We solve this original problem by combining
the resolvent formalism and singular-perturbation theory. Importantly, we find
that these rates specify probabilistic pseudo-equilibria which accurately
capture the neural variability observed in the original finite-size network. We
also discuss the emergence of metastability as a stochastic bifurcation, which
can be interpreted as a static phase transition in the RMF limits. In turn, we
expect to leverage the static picture of RMF limits to infer purely dynamical
features of metastable finite-size networks, such as the transition rates
between pseudo-equilibria.
| [
{
"created": "Tue, 4 May 2021 00:22:37 GMT",
"version": "v1"
},
{
"created": "Wed, 5 May 2021 05:04:26 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Mar 2022 19:14:20 GMT",
"version": "v3"
}
] | 2022-10-12 | [
[
"Yu",
"Luyan",
""
],
[
"Taillefumier",
"Thibaud",
""
]
] | Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge. We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solutions shall satisfy. We solve this original problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria. |
1502.07083 | Xin Chen | Xin Chen | Approximating the Minimum Breakpoint Linearization Problem for Genetic
Maps without Gene Strandedness | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of genetic map linearization leads to a hard combinatorial problem,
called the {\em minimum breakpoint linearization} (MBL) problem. It is aimed at
finding a linearization of a partial order which attains the minimum breakpoint
distance to a reference total order. The approximation algorithms previously
developed for the MBL problem are only applicable to genetic maps in which
genes or markers are represented as signed integers. However, current genetic
mapping techniques generally do not specify gene strandedness so that genes can
only be represented as unsigned integers. In this paper, we study the MBL
problem in the latter more realistic case. An approximation algorithm is thus
developed, which achieves a ratio of $(m^2+2m-1)$ and runs in $O(n^7)$ time,
where $m$ is the number of genetic maps used to construct the input partial
order and $n$ the total number of distinct genes in these maps.
| [
{
"created": "Wed, 25 Feb 2015 08:35:16 GMT",
"version": "v1"
}
] | 2015-02-26 | [
[
"Chen",
"Xin",
""
]
] | The study of genetic map linearization leads to a hard combinatorial problem, called the {\em minimum breakpoint linearization} (MBL) problem. It is aimed at finding a linearization of a partial order which attains the minimum breakpoint distance to a reference total order. The approximation algorithms previously developed for the MBL problem are only applicable to genetic maps in which genes or markers are represented as signed integers. However, current genetic mapping techniques generally do not specify gene strandedness so that genes can only be represented as unsigned integers. In this paper, we study the MBL problem in the latter more realistic case. An approximation algorithm is thus developed, which achieves a ratio of $(m^2+2m-1)$ and runs in $O(n^7)$ time, where $m$ is the number of genetic maps used to construct the input partial order and $n$ the total number of distinct genes in these maps. |
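The objective the MBL problem minimizes — the unsigned breakpoint distance to the reference total order — can be sketched directly; the minimization over linearizations of the partial order is the hard part and is not attempted here.

```python
def breakpoint_distance(order, reference):
    """Number of adjacencies in `order` that are not adjacencies of
    `reference`. Strandedness is ignored (unsigned genes): a pair (a, b)
    is conserved if a and b are adjacent in `reference` in either
    orientation."""
    adj = {frozenset(p) for p in zip(reference, reference[1:])}
    return sum(frozenset(p) not in adj for p in zip(order, order[1:]))

print(breakpoint_distance([1, 3, 2, 4], [1, 2, 3, 4]))  # -> 2
```

Note that a full reversal has distance 0 under this unsigned definition, which is exactly what distinguishes this setting from the signed one handled by earlier algorithms.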
1910.02634 | Seung Ki Baek | Yohsuke Murase and Seung Ki Baek | Automata representation of successful strategies for social dilemmas | 11 pages, 7 figures | Sci. Rep. 10, 13370 (2020) | 10.1038/s41598-020-70281-x | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a social dilemma, cooperation is collectively optimal, yet individually
each group member prefers to defect. A class of successful strategies of direct
reciprocity was recently found for the iterated prisoner's dilemma and for the
iterated three-person public-goods game: By a successful strategy, we mean that
it constitutes a cooperative Nash equilibrium under implementation error, while
assuring that the long-term payoff never becomes less than the co-players'
regardless of their strategies, when the error rate is small. Although we have
a list of actions prescribed by each successful strategy, the rationale behind
them has not been fully understood for the iterated public-goods game because
the list has hundreds of entries to deal with every relevant history of
previous interactions. In this paper, we propose a method to convert such
history-based representation into an automaton with a minimal number of states.
Our main finding is that a successful strategy for the iterated three-person
public-goods game can be represented as a $10$-state automaton by this method.
In this automaton, each state can be interpreted as the player's internal
judgement of the situation, such as trustworthiness of the co-players and the
need to redeem oneself after defection. This result thus suggests a
comprehensible way to choose an appropriate action at each step towards
cooperation based on a situational judgement, which is mapped from the history
of interactions.
| [
{
"created": "Mon, 7 Oct 2019 06:56:16 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Feb 2020 06:43:29 GMT",
"version": "v2"
},
{
"created": "Fri, 9 Oct 2020 13:22:42 GMT",
"version": "v3"
}
] | 2020-10-12 | [
[
"Murase",
"Yohsuke",
""
],
[
"Baek",
"Seung Ki",
""
]
] | In a social dilemma, cooperation is collectively optimal, yet individually each group member prefers to defect. A class of successful strategies of direct reciprocity was recently found for the iterated prisoner's dilemma and for the iterated three-person public-goods game: By a successful strategy, we mean that it constitutes a cooperative Nash equilibrium under implementation error, while assuring that the long-term payoff never becomes less than the co-players' regardless of their strategies, when the error rate is small. Although we have a list of actions prescribed by each successful strategy, the rationale behind them has not been fully understood for the iterated public-goods game because the list has hundreds of entries to deal with every relevant history of previous interactions. In this paper, we propose a method to convert such history-based representation into an automaton with a minimal number of states. Our main finding is that a successful strategy for the iterated three-person public-goods game can be represented as a $10$-state automaton by this method. In this automaton, each state can be interpreted as the player's internal judgement of the situation, such as trustworthiness of the co-players and the need to redeem oneself after defection. This result thus suggests a comprehensible way to choose an appropriate action at each step towards cooperation based on a situational judgement, which is mapped from the history of interactions. |
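The automaton representation the paper arrives at can be illustrated with a much smaller example. The two-state tit-for-tat machine below is a textbook strategy for the two-player iterated prisoner's dilemma, not the authors' 10-state public-goods automaton, but the representation — states prescribing actions, transitions driven by observed moves — is the same idea.

```python
# A strategy as a Moore machine: each state prescribes an action, and the
# transition depends on the co-player's observed move ("C" or "D").
TFT = {
    "trusting": ("C", {"C": "trusting", "D": "wary"}),
    "wary":     ("D", {"C": "trusting", "D": "wary"}),
}

def play(automaton, start, opponent_moves):
    """Return the actions prescribed along a history of co-player moves."""
    state, actions = start, []
    for move in opponent_moves:
        action, transitions = automaton[state]
        actions.append(action)
        state = transitions[move]
    return actions

print(play(TFT, "trusting", "CDDC"))  # -> ['C', 'C', 'D', 'D']
```

Each state name plays the role of the "internal judgement" the abstract mentions: here, whether the co-player is currently deemed trustworthy.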
2201.13092 | Jaroslav Albert | Jaroslav Albert | A detailed model of gene promoter dynamics reveals the entry into
productive elongation to be a highly punctual process | null | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Gene transcription is a stochastic process that involves thousands of
reactions. The first set of these reactions, which happen near a gene promoter,
are considered to be the most important in the context of stochastic noise. The
most common models of transcription are primarily concerned with the effect of
activators/repressors on the overall transcription rate and approximate the
basal transcription processes as a one step event. According to such effective
models, the Fano factor of mRNA copy distributions is always greater than
(super-Poissonian) or equal to 1 (Poissonian), and the only way to go below
this limit (sub-Poissonian) is via a negative feedback. It is partly due to
this limit that the first stage of transcription is held responsible for most
of the stochastic noise in mRNA copy numbers. However, by considering all major
reactions that build and drive the basal transcription machinery, from the
first protein that binds a promoter to the entrance of the transcription
complex (TC) into productive elongation, it is shown that the first two stages
of transcription, namely the pre-initiation complex (PIC) formation and the
promoter proximal pausing (PPP), is a highly punctual process. In other words,
the time between the first and the last step of this process is narrowly
distributed, which gives rise to sub-Poissonian distributions for the number of
TCs that have entered productive elongation. In fact, having simulated the PIC
formation and the PPP via the Gillespie algorithm using 2000 distinct parameter
sets and 4 different reaction network topologies, it is shown that only 4.4%
give rise to a Fano factor that is > 1 with the upper bound of 1.7, while for
31% of cases the Fano factor is below 0.5, with 0.19 as the lower bound. These
results cast doubt on the notion that most of the stochastic noise observed in
mRNA distributions always originates at the promoter.
| [
{
"created": "Mon, 31 Jan 2022 10:02:23 GMT",
"version": "v1"
}
] | 2022-02-01 | [
[
"Albert",
"Jaroslav",
""
]
] | Gene transcription is a stochastic process that involves thousands of reactions. The first set of these reactions, which happen near a gene promoter, are considered to be the most important in the context of stochastic noise. The most common models of transcription are primarily concerned with the effect of activators/repressors on the overall transcription rate and approximate the basal transcription processes as a one step event. According to such effective models, the Fano factor of mRNA copy distributions is always greater than (super-Poissonian) or equal to 1 (Poissonian), and the only way to go below this limit (sub-Poissonian) is via a negative feedback. It is partly due to this limit that the first stage of transcription is held responsible for most of the stochastic noise in mRNA copy numbers. However, by considering all major reactions that build and drive the basal transcription machinery, from the first protein that binds a promoter to the entrance of the transcription complex (TC) into productive elongation, it is shown that the first two stages of transcription, namely the pre-initiation complex (PIC) formation and the promoter proximal pausing (PPP), is a highly punctual process. In other words, the time between the first and the last step of this process is narrowly distributed, which gives rise to sub-Poissonian distributions for the number of TCs that have entered productive elongation. In fact, having simulated the PIC formation and the PPP via the Gillespie algorithm using 2000 distinct parameter sets and 4 different reaction network topologies, it is shown that only 4.4% give rise to a Fano factor that is > 1 with the upper bound of 1.7, while for 31% of cases the Fano factor is below 0.5, with 0.19 as the lower bound. These results cast doubt on the notion that most of the stochastic noise observed in mRNA distributions always originates at the promoter. |
1812.07731 | Michael Fanton | Kaveh Laksari, Michael Fanton, Lyndia C. Wu, Taylor H. Nguyen, Mehmet
Kurt, Chiara Giordano, Eoin Kelly, Eoin O'Keeffe, Eugene Wallace, Colin
Doherty, Matthew Campbell, Stephen Tiernan, Gerald Grant, Jesse Ruan, Saeed
Barbat, David B. Camarillo | Multi-directional dynamic model for traumatic brain injury detection | 10 figures, 3 tables | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traumatic brain injury (TBI) is a complex injury that is hard to predict and
diagnose, with many studies focused on associating head kinematics to brain
injury risk. Recently, there has been a push towards using computationally
expensive finite element (FE) models of the brain to create tissue deformation
metrics of brain injury. Here, we developed a 3 degree-of-freedom
lumped-parameter brain model, built based on the measured natural frequencies
of a FE brain model simulated with live human impact data, to be used to
rapidly estimate peak brain strains experienced during head rotational
accelerations. On our dataset, the simplified model correlates with peak
principal FE strain by an R2 of 0.80. Further, coronal and axial model
displacement correlated with fiber-oriented peak strain in the corpus callosum
with an R2 of 0.77. Using the maximum displacement predicted by our brain
model, we propose an injury criteria and compare it against a number of
existing rotational and translational kinematic injury metrics on a dataset of
head kinematics from 27 clinically diagnosed injuries and 887 non-injuries. We
found that our proposed metric performed comparably to peak angular
acceleration, linear acceleration, and angular velocity in classifying injury
and non-injury events. Metrics which separated time traces into their
directional components had improved deviance to those which combined components
into a single time trace magnitude. Our brain model can be used in future work
as a computationally efficient alternative to FE models for classifying
injuries over a wide range of loading conditions.
| [
{
"created": "Wed, 19 Dec 2018 02:16:03 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Apr 2019 00:22:17 GMT",
"version": "v2"
}
] | 2019-04-04 | [
[
"Laksari",
"Kaveh",
""
],
[
"Fanton",
"Michael",
""
],
[
"Wu",
"Lyndia C.",
""
],
[
"Nguyen",
"Taylor H.",
""
],
[
"Kurt",
"Mehmet",
""
],
[
"Giordano",
"Chiara",
""
],
[
"Kelly",
"Eoin",
""
],
[
"O'Keeffe",
"Eoin",
""
],
[
"Wallace",
"Eugene",
""
],
[
"Doherty",
"Colin",
""
],
[
"Campbell",
"Matthew",
""
],
[
"Tiernan",
"Stephen",
""
],
[
"Grant",
"Gerald",
""
],
[
"Ruan",
"Jesse",
""
],
[
"Barbat",
"Saeed",
""
],
[
"Camarillo",
"David B.",
""
]
] | Traumatic brain injury (TBI) is a complex injury that is hard to predict and diagnose, with many studies focused on associating head kinematics to brain injury risk. Recently, there has been a push towards using computationally expensive finite element (FE) models of the brain to create tissue deformation metrics of brain injury. Here, we developed a 3 degree-of-freedom lumped-parameter brain model, built based on the measured natural frequencies of a FE brain model simulated with live human impact data, to be used to rapidly estimate peak brain strains experienced during head rotational accelerations. On our dataset, the simplified model correlates with peak principal FE strain by an R2 of 0.80. Further, coronal and axial model displacement correlated with fiber-oriented peak strain in the corpus callosum with an R2 of 0.77. Using the maximum displacement predicted by our brain model, we propose an injury criteria and compare it against a number of existing rotational and translational kinematic injury metrics on a dataset of head kinematics from 27 clinically diagnosed injuries and 887 non-injuries. We found that our proposed metric performed comparably to peak angular acceleration, linear acceleration, and angular velocity in classifying injury and non-injury events. Metrics which separated time traces into their directional components had improved deviance to those which combined components into a single time trace magnitude. Our brain model can be used in future work as a computationally efficient alternative to FE models for classifying injuries over a wide range of loading conditions. |
1409.5672 | Rapha\"el Li\'egeois | Rapha\"el Li\'egeois, Erik Ziegler, Pierre Geurts, Francisco Gomez,
Mohamed Ali Bahri, Christophe Phillips, Andrea Soddu, Audrey Vanhaudenhuyse,
Steven Laureys, Rodolphe Sepulchre | Cerebral functional connectivity periodically (de)synchronizes with
anatomical constraints | 11 pages, 7 figures in main text, submitted to Human Brain Mapping | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the link between resting-state functional connectivity
(FC), measured by the correlations of the fMRI BOLD time courses, and
structural connectivity (SC), estimated through fiber tractography. Instead of
a static analysis based on the correlation between SC and the FC averaged over
the entire fMRI time series, we propose a dynamic analysis, based on the time
evolution of the correlation between SC and a suitably windowed FC. Assessing
the statistical significance of the time series against random phase
permutations, our data show a pronounced peak of significance for time window
widths around 20-30 TR (40-60 sec). Using the appropriate window width, we show
that FC patterns oscillate between phases of high modularity, primarily shaped
by anatomy, and phases of low modularity, primarily shaped by inter-network
connectivity. Building upon recent results in dynamic FC, this emphasizes the
potential role of SC as a transitory architecture between different highly
connected resting state FC patterns. Finally, we show that networks implied in
consciousness-related processes, such as the default mode network (DMN),
contribute more to these brain-level fluctuations compared to other networks,
such as the motor or somatosensory networks. This suggests that the
fluctuations between FC and SC are capturing mind-wandering effects.
| [
{
"created": "Fri, 19 Sep 2014 14:23:24 GMT",
"version": "v1"
}
] | 2014-09-22 | [
[
"Liégeois",
"Raphaël",
""
],
[
"Ziegler",
"Erik",
""
],
[
"Geurts",
"Pierre",
""
],
[
"Gomez",
"Francisco",
""
],
[
"Bahri",
"Mohamed Ali",
""
],
[
"Phillips",
"Christophe",
""
],
[
"Soddu",
"Andrea",
""
],
[
"Vanhaudenhuyse",
"Audrey",
""
],
[
"Laureys",
"Steven",
""
],
[
"Sepulchre",
"Rodolphe",
""
]
] | This paper studies the link between resting-state functional connectivity (FC), measured by the correlations of the fMRI BOLD time courses, and structural connectivity (SC), estimated through fiber tractography. Instead of a static analysis based on the correlation between SC and the FC averaged over the entire fMRI time series, we propose a dynamic analysis, based on the time evolution of the correlation between SC and a suitably windowed FC. Assessing the statistical significance of the time series against random phase permutations, our data show a pronounced peak of significance for time window widths around 20-30 TR (40-60 sec). Using the appropriate window width, we show that FC patterns oscillate between phases of high modularity, primarily shaped by anatomy, and phases of low modularity, primarily shaped by inter-network connectivity. Building upon recent results in dynamic FC, this emphasizes the potential role of SC as a transitory architecture between different highly connected resting state FC patterns. Finally, we show that networks implied in consciousness-related processes, such as the default mode network (DMN), contribute more to these brain-level fluctuations compared to other networks, such as the motor or somatosensory networks. This suggests that the fluctuations between FC and SC are capturing mind-wandering effects. |
2006.08058 | Migun Shakya | Chien-Chi Lo, Migun Shakya, Karen Davenport, Mark Flynn, Ad\'an Myers
y Guti\'errez, Bin Hu, Po-E Li, Elais Player Jackson, Yan Xu, Patrick S. G.
Chain | EDGE COVID-19: A Web Platform to generate submission-ready genomes for
SARS-CoV-2 sequencing efforts | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Genomics has become an essential technology for surveilling emerging
infectious disease outbreaks. A wide range of technologies and strategies for
pathogen genome enrichment and sequencing are being used by laboratories
worldwide, together with different, and sometimes ad hoc, analytical procedures
for generating genome sequences. As a result, public repositories now contain
non-standard entries of varying quality. A standardized analytical process for
consensus genome sequence determination, particularly for outbreaks such as the
ongoing COVID-19 pandemic, is critical to provide a solid genomic basis for
epidemiological analyses and well-informed decision making. To address this
need, we have developed a bioinformatic workflow to standardize the analysis of
SARS-CoV-2 sequencing data generated with either the Illumina or Oxford
Nanopore platforms. Using an intuitive web-based interface, this workflow
automates SARS-CoV-2 reference-based genome assembly, variant calling, lineage
determination, and provides the ability to submit the consensus sequence and
necessary metadata to GenBank or GISAID. Given a raw Illumina or Oxford
Nanopore FASTQ read file, this web-based platform enables non-bioinformatics
experts to automatically produce a SARS-CoV-2 genome that is ready for
submission to GISAID or GenBank.
Availability:https://edge-covid19.edgebioinformatics.org;https://github.com/LANL-Bioinformatics/EDGE/tree/SARS-CoV2
| [
{
"created": "Mon, 15 Jun 2020 00:08:10 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jun 2020 14:06:59 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Jun 2021 19:53:44 GMT",
"version": "v3"
},
{
"created": "Thu, 24 Jun 2021 15:14:20 GMT",
"version": "v4"
}
] | 2021-06-25 | [
[
"Lo",
"Chien-Chi",
""
],
[
"Shakya",
"Migun",
""
],
[
"Davenport",
"Karen",
""
],
[
"Flynn",
"Mark",
""
],
[
"Gutiérrez",
"Adán Myers y",
""
],
[
"Hu",
"Bin",
""
],
[
"Li",
"Po-E",
""
],
[
"Jackson",
"Elais Player",
""
],
[
"Xu",
"Yan",
""
],
[
"Chain",
"Patrick S. G.",
""
]
] | Genomics has become an essential technology for surveilling emerging infectious disease outbreaks. A wide range of technologies and strategies for pathogen genome enrichment and sequencing are being used by laboratories worldwide, together with different, and sometimes ad hoc, analytical procedures for generating genome sequences. As a result, public repositories now contain non-standard entries of varying quality. A standardized analytical process for consensus genome sequence determination, particularly for outbreaks such as the ongoing COVID-19 pandemic, is critical to provide a solid genomic basis for epidemiological analyses and well-informed decision making. To address this need, we have developed a bioinformatic workflow to standardize the analysis of SARS-CoV-2 sequencing data generated with either the Illumina or Oxford Nanopore platforms. Using an intuitive web-based interface, this workflow automates SARS-CoV-2 reference-based genome assembly, variant calling, lineage determination, and provides the ability to submit the consensus sequence and necessary metadata to GenBank or GISAID. Given a raw Illumina or Oxford Nanopore FASTQ read file, this web-based platform enables non-bioinformatics experts to automatically produce a SARS-CoV-2 genome that is ready for submission to GISAID or GenBank. Availability:https://edge-covid19.edgebioinformatics.org;https://github.com/LANL-Bioinformatics/EDGE/tree/SARS-CoV2 |
1909.11317 | Ali Demirci | Ayse Peker Dobie, Ali Demirci, Ayse Humeyra Bilge, Semra Ahmetolan | On the time shift phenomena in epidemic models | null | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the standard Susceptible-Infected-Removed (SIR) and
Susceptible-Exposed-Infected-Removed (SEIR) models, the peak of infected
individuals coincides with the inflection point of removed individuals.
Nevertheless, a survey based on the data of the 2009 H1N1 epidemic in Istanbul,
Turkey [19] displayed an unexpected time shift between the hospital referrals
and fatalities. With the motivation of investigating the underlying reason, we
use multistage SIR and SEIR models to provide an explanation for this time
shift. Numerical solutions of these models present strong evidences that the
delay is approximately half of the infection period of the epidemic disease. In
addition, graphs of the classical SIR and the multistage SIR models; and the
classical SEIR and the multistage SEIR models are compared for various epidemic
parameters. Depending on the number of stages, it is observed that the delay
varies for relatively small stage numbers whereas it does not change for large
numbers in multistage systems. One important result that follows immediately
from this observation is that this fixed delay for large numbers explains the
time shift. Additionally, depending on the stage number and the duration of the
epidemic disease, the distance between the points where each infectious stage
reaches its maximum is found approximately both graphically and qualitatively
for both systems. Variations of the time shift, the maximum point of the sum of
all infectious stages, and the inflection point of the removed stage are
observed subject to the stage number N and it is shown that these variations
stay unchanged for greater values of N.
| [
{
"created": "Wed, 25 Sep 2019 07:34:55 GMT",
"version": "v1"
}
] | 2019-09-26 | [
[
"Dobie",
"Ayse Peker",
""
],
[
"Demirci",
"Ali",
""
],
[
"Bilge",
"Ayse Humeyra",
""
],
[
"Ahmetolan",
"Semra",
""
]
] | In the standard Susceptible-Infected-Removed (SIR) and Susceptible-Exposed-Infected-Removed (SEIR) models, the peak of infected individuals coincides with the inflection point of removed individuals. Nevertheless, a survey based on the data of the 2009 H1N1 epidemic in Istanbul, Turkey [19] displayed an unexpected time shift between the hospital referrals and fatalities. With the motivation of investigating the underlying reason, we use multistage SIR and SEIR models to provide an explanation for this time shift. Numerical solutions of these models present strong evidences that the delay is approximately half of the infection period of the epidemic disease. In addition, graphs of the classical SIR and the multistage SIR models; and the classical SEIR and the multistage SEIR models are compared for various epidemic parameters. Depending on the number of stages, it is observed that the delay varies for relatively small stage numbers whereas it does not change for large numbers in multistage systems. One important result that follows immediately from this observation is that this fixed delay for large numbers explains the time shift. Additionally, depending on the stage number and the duration of the epidemic disease, the distance between the points where each infectious stage reaches its maximum is found approximately both graphically and qualitatively for both systems. Variations of the time shift, the maximum point of the sum of all infectious stages, and the inflection point of the removed stage are observed subject to the stage number N and it is shown that these variations stay unchanged for greater values of N. |
1503.03322 | Karthik Shankar | Karthik H. Shankar and Inder Singh and Marc W. Howard | Neural mechanism to simulate a scale-invariant future timeline | 12 pages, 7 figures | Neural Computation (2016), Vol. 28, No. 12, Pages: 2594-2627 | 10.1162/NECO_a_00891 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting future events, and their order, is important for efficient
planning. We propose a neural mechanism to non-destructively translate the
current state of memory into the future, so as to construct an ordered set of
future predictions. This framework applies equally well to translations in time
or in one-dimensional position. In a two-layer memory network that encodes the
Laplace transform of the external input in real time, translation can be
accomplished by modulating the weights between the layers. We propose that
within each cycle of hippocampal theta oscillations, the memory state is swept
through a range of translations to yield an ordered set of future predictions.
We operationalize several neurobiological findings into phenomenological
equations constraining translation. Combined with constraints based on physical
principles requiring scale-invariance and coherence in translation across
memory nodes, the proposition results in Weber-Fechner spacing for the
representation of both past (memory) and future (prediction) timelines. The
resulting expressions are consistent with findings from phase precession
experiments in different regions of the hippocampus and reward systems in the
ventral striatum. The model makes several experimental predictions that can be
tested with existing technology.
| [
{
"created": "Wed, 11 Mar 2015 13:23:25 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Jul 2015 14:07:46 GMT",
"version": "v2"
}
] | 2017-03-28 | [
[
"Shankar",
"Karthik H.",
""
],
[
"Singh",
"Inder",
""
],
[
"Howard",
"Marc W.",
""
]
] | Predicting future events, and their order, is important for efficient planning. We propose a neural mechanism to non-destructively translate the current state of memory into the future, so as to construct an ordered set of future predictions. This framework applies equally well to translations in time or in one-dimensional position. In a two-layer memory network that encodes the Laplace transform of the external input in real time, translation can be accomplished by modulating the weights between the layers. We propose that within each cycle of hippocampal theta oscillations, the memory state is swept through a range of translations to yield an ordered set of future predictions. We operationalize several neurobiological findings into phenomenological equations constraining translation. Combined with constraints based on physical principles requiring scale-invariance and coherence in translation across memory nodes, the proposition results in Weber-Fechner spacing for the representation of both past (memory) and future (prediction) timelines. The resulting expressions are consistent with findings from phase precession experiments in different regions of the hippocampus and reward systems in the ventral striatum. The model makes several experimental predictions that can be tested with existing technology. |
1602.05175 | Bahram Houchmandzadeh | Bahram Houchmandzadeh, Marcel Vallade | A simple, general result for the variance of substitution number in
molecular evolution | null | null | null | null | q-bio.PE physics.bio-ph q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of substitutions (of nucleotides, amino acids, ...) that take
place during the evolution of a sequence is a stochastic variable of
fundamental importance in the field of molecular evolution. Although the mean
number of substitutions during molecular evolution of a sequence can be
estimated for a given substitution model, no simple solution exists for the
variance of this random variable. We show in this article that the computation
of the variance is as simple as that of the mean number of substitutions for
both short and long times. Apart from its fundamental importance, this result
can be used to investigate the dispersion index R , i.e. the ratio of the
variance to the mean substitution number, which is of prime importance in the
neutral theory of molecular evolution. By investigating large classes of
substitution models, we demonstrate that although R\ge1 , to obtain R
significantly larger than unity necessitates in general additional hypotheses
on the structure of the substitution model.
| [
{
"created": "Tue, 16 Feb 2016 20:39:53 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Houchmandzadeh",
"Bahram",
""
],
[
"Vallade",
"Marcel",
""
]
] | The number of substitutions (of nucleotides, amino acids, ...) that take place during the evolution of a sequence is a stochastic variable of fundamental importance in the field of molecular evolution. Although the mean number of substitutions during molecular evolution of a sequence can be estimated for a given substitution model, no simple solution exists for the variance of this random variable. We show in this article that the computation of the variance is as simple as that of the mean number of substitutions for both short and long times. Apart from its fundamental importance, this result can be used to investigate the dispersion index R , i.e. the ratio of the variance to the mean substitution number, which is of prime importance in the neutral theory of molecular evolution. By investigating large classes of substitution models, we demonstrate that although R\ge1 , to obtain R significantly larger than unity necessitates in general additional hypotheses on the structure of the substitution model. |
2004.10172 | Gregory Wellenius | Gregory A. Wellenius (1 and 2), Swapnil Vispute (1) Valeria Espinosa
(1), Alex Fabrikant (1), Thomas C. Tsai (4 and 5), Jonathan Hennessy (1),
Andrew Dai (1), Brian Williams (1), Krishna Gadepalli (1), Adam Boulanger
(1), Adam Pearce (1), Chaitanya Kamath (1), Arran Schlosberg (1), Catherine
Bendebury (1), Chinmoy Mandayam (1), Charlotte Stanton (1), Shailesh
Bavadekar (1), Christopher Pluntke (1), Damien Desfontaines (1 and 3),
Benjamin Jacobson (5), Zan Armstrong (1), Bryant Gipson (1), Royce Wilson
(1), Andrew Widdowson (1), Katherine Chou (1), Andrew Oplinger (1), Tomer
Shekel (1), Ashish K. Jha (5 and 6), Evgeniy Gabrilovich (1) ((1) Google,
Inc., Mountain View, CA, (2) Department of Environmental Health, Boston
University School of Public Health, Boston, MA, (3) ETH Zurich, Switzerland,
(4) Department of Surgery, Brigham and Women's Hospital and Harvard Medical
School, Boston, MA, (5) Department of Health Policy and Management, Harvard
T. H. Chan School of Public Health, Boston, MA, (6) Brown University School
of Public Health, Providence, RI) | Impacts of Social Distancing Policies on Mobility and COVID-19 Case
Growth in the US | Co-first Authors: GAW, SV, VE, and AF contributed equally.
Corresponding Author: Dr. Evgeniy Gabrilovich, gabr@google.com 32 pages
(including supplemental material), 4 figures in the main text, additional
figures in the supplemental material | Nat Commun 12, 3118 (2021) | 10.1038/s41467-021-23404-5 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Social distancing remains an important strategy to combat the COVID-19
pandemic in the United States. However, the impacts of specific state-level
policies on mobility and subsequent COVID-19 case trajectories have not been
completely quantified. Using anonymized and aggregated mobility data from
opted-in Google users, we found that state-level emergency declarations
resulted in a 9.9% reduction in time spent away from places of residence.
Implementation of one or more social distancing policies resulted in an
additional 24.5% reduction in mobility the following week, and subsequent
shelter-in-place mandates yielded an additional 29.0% reduction. Decreases in
mobility were associated with substantial reductions in case growth 2 to 4
weeks later. For example, a 10% reduction in mobility was associated with a
17.5% reduction in case growth 2 weeks later. Given the continued reliance on
social distancing policies to limit the spread of COVID-19, these results may
be helpful to public health officials trying to balance infection control with
the economic and social consequences of these policies.
| [
{
"created": "Tue, 21 Apr 2020 17:26:42 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Apr 2020 17:31:52 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Nov 2020 18:29:52 GMT",
"version": "v3"
},
{
"created": "Thu, 27 May 2021 19:10:00 GMT",
"version": "v4"
}
] | 2021-05-31 | [
[
"Wellenius",
"Gregory A.",
"",
"1 and 2"
],
[
"Vispute",
"Swapnil",
"",
"4 and 5"
],
[
"Espinosa",
"Valeria",
"",
"4 and 5"
],
[
"Fabrikant",
"Alex",
"",
"4 and 5"
],
[
"Tsai",
"Thomas C.",
"",
"4 and 5"
],
[
"Hennessy",
"Jonathan",
"",
"1 and 3"
],
[
"Dai",
"Andrew",
"",
"1 and 3"
],
[
"Williams",
"Brian",
"",
"1 and 3"
],
[
"Gadepalli",
"Krishna",
"",
"1 and 3"
],
[
"Boulanger",
"Adam",
"",
"1 and 3"
],
[
"Pearce",
"Adam",
"",
"1 and 3"
],
[
"Kamath",
"Chaitanya",
"",
"1 and 3"
],
[
"Schlosberg",
"Arran",
"",
"1 and 3"
],
[
"Bendebury",
"Catherine",
"",
"1 and 3"
],
[
"Mandayam",
"Chinmoy",
"",
"1 and 3"
],
[
"Stanton",
"Charlotte",
"",
"1 and 3"
],
[
"Bavadekar",
"Shailesh",
"",
"1 and 3"
],
[
"Pluntke",
"Christopher",
"",
"1 and 3"
],
[
"Desfontaines",
"Damien",
"",
"1 and 3"
],
[
"Jacobson",
"Benjamin",
"",
"5 and 6"
],
[
"Armstrong",
"Zan",
"",
"5 and 6"
],
[
"Gipson",
"Bryant",
"",
"5 and 6"
],
[
"Wilson",
"Royce",
"",
"5 and 6"
],
[
"Widdowson",
"Andrew",
"",
"5 and 6"
],
[
"Chou",
"Katherine",
"",
"5 and 6"
],
[
"Oplinger",
"Andrew",
"",
"5 and 6"
],
[
"Shekel",
"Tomer",
"",
"5 and 6"
],
[
"Jha",
"Ashish K.",
"",
"5 and 6"
],
[
"Gabrilovich",
"Evgeniy",
""
]
] | Social distancing remains an important strategy to combat the COVID-19 pandemic in the United States. However, the impacts of specific state-level policies on mobility and subsequent COVID-19 case trajectories have not been completely quantified. Using anonymized and aggregated mobility data from opted-in Google users, we found that state-level emergency declarations resulted in a 9.9% reduction in time spent away from places of residence. Implementation of one or more social distancing policies resulted in an additional 24.5% reduction in mobility the following week, and subsequent shelter-in-place mandates yielded an additional 29.0% reduction. Decreases in mobility were associated with substantial reductions in case growth 2 to 4 weeks later. For example, a 10% reduction in mobility was associated with a 17.5% reduction in case growth 2 weeks later. Given the continued reliance on social distancing policies to limit the spread of COVID-19, these results may be helpful to public health officials trying to balance infection control with the economic and social consequences of these policies. |
1707.05372 | Valerey Grytsay Dr | V.I. Grytsay | Self-Organization and Chaos in the Metabolism of Hemostasis in a Blood
Vessel | 10 pages, 8 figures | Ukr. J. Phys., Vol. 61, N 7, p.648-655 (2016) | 10.15407/ujpe61.07.0648 | null | q-bio.TO nlin.AO nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mathematical model of the metabolic process of formation of the hemostasis
in a blood-carrying vessel is constructed. As distinct from the earlier
developed model of the multienzyme prostacyclin-thromboxane system of blood,
this model includes, for the first time, the influence of the level of "bad
cholesterol", i.e., low-density lipoproteins (LDLs), on the hemostasis. The
conditions, under which the self-organization of the system appears, and the
modes of autooscillations and chaos in the metabolic process, which affects the
formation of hemostasis and the development of thrombophilia, are found. With
the aid of a phase-parametric diagram, the scenario of their appearance is
studied. The bifurcations of doubling of a period and the transition to chaotic
oscillations as a result of the intermittence are found. The obtained strange
attractors are formed due to a mixing funnel. The full spectra of Lyapunov's
indices, KS-entropies, "predictability horizons", and Lyapunov's dimensions of
strange attractors are calculated. The reasons for a change in the cyclicity of
the given metabolic process, its stability, and the physiological manifestation
in the blood-carrying system are discussed. The role of physiologically active
substances in a decrease in the level of cholesterol in blood vessels is
estimated.
| [
{
"created": "Fri, 14 Jul 2017 10:35:02 GMT",
"version": "v1"
}
] | 2017-07-19 | [
[
"Grytsay",
"V. I.",
""
]
] | A mathematical model of the metabolic process of formation of the hemostasis in a blood-carrying vessel is constructed. As distinct from the earlier developed model of the multienzyme prostacyclin-thromboxane system of blood, this model includes, for the first time, the influence of the level of "bad cholesterol", i.e., low-density lipoproteins (LDLs), on the hemostasis. The conditions, under which the self-organization of the system appears, and the modes of autooscillations and chaos in the metabolic process, which affects the formation of hemostasis and the development of thrombophilia, are found. With the aid of a phase-parametric diagram, the scenario of their appearance is studied. The bifurcations of doubling of a period and the transition to chaotic oscillations as a result of the intermittence are found. The obtained strange attractors are formed due to a mixing funnel. The full spectra of Lyapunov's indices, KS-entropies, "predictability horizons", and Lyapunov's dimensions of strange attractors are calculated. The reasons for a change in the cyclicity of the given metabolic process, its stability, and the physiological manifestation in the blood-carrying system are discussed. The role of physiologically active substances in a decrease in the level of cholesterol in blood vessels is estimated. |
1708.04819 | Alla Slynko | Alla Slynko and Axel Benner | Measures of hydroxymethylation | null | null | null | null | q-bio.QM q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hydroxymethylcytosine (5hmC) methylation is known to be a possible epigenetic
mark impacting genome stability. In this paper, we address the existing 5hmC
measure $\Delta \beta$ and discuss its properties both analytically and
empirically on real data. Then we introduce several alternative
hydroxymethylation measures and compare their properties with those of $\Delta
\beta$. All results will be illustrated by means of real data analyses.
| [
{
"created": "Wed, 16 Aug 2017 09:05:51 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Aug 2017 07:13:07 GMT",
"version": "v2"
}
] | 2017-08-18 | [
[
"Slynko",
"Alla",
""
],
[
"Benner",
"Axel",
""
]
] | Hydroxymethylcytosine (5hmC) methylation is known to be a possible epigenetic mark impacting genome stability. In this paper, we address the existing 5hmC measure $\Delta \beta$ and discuss its properties both analytically and empirically on real data. Then we introduce several alternative hydroxymethylation measures and compare their properties with those of $\Delta \beta$. All results will be illustrated by means of real data analyses. |
2010.03068 | Emilie Purvine | Song Feng, Emily Heath, Brett Jefferson, Cliff Joslyn, Henry Kvinge,
Hugh D. Mitchell, Brenda Praggastis, Amie J. Eisfeld, Amy C. Sims, Larissa B.
Thackray, Shufang Fan, Kevin B. Walters, Peter J. Halfmann, Danielle
Westhoff-Smith, Qing Tan, Vineet D. Menachery, Timothy P. Sheahan, Adam S.
Cockrell, Jacob F. Kocher, Kelly G. Stratton, Natalie C. Heller, Lisa M.
Bramer, Michael S. Diamond, Ralph S. Baric, Katrina M. Waters, Yoshihiro
Kawaoka, Jason E. McDermott, Emilie Purvine | Hypergraph Models of Biological Networks to Identify Genes Critical to
Pathogenic Viral Response | null | null | null | null | q-bio.QM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Representing biological networks as graphs is a powerful approach
to reveal underlying patterns, signatures, and critical components from
high-throughput biomolecular data. However, graphs do not natively capture the
multi-way relationships present among genes and proteins in biological systems.
Hypergraphs are generalizations of graphs that naturally model multi-way
relationships and have shown promise in modeling systems such as protein
complexes and metabolic reactions. In this paper we seek to understand how
hypergraphs can more faithfully identify, and potentially predict, important
genes based on complex relationships inferred from genomic expression data
sets.
Results: We compiled a novel data set of transcriptional host response to
pathogenic viral infections and formulated relationships between genes as a
hypergraph where hyperedges represent significantly perturbed genes, and
vertices represent individual biological samples with specific experimental
conditions. We find that hypergraph betweenness centrality is a superior method
for identification of genes important to viral response when compared with
graph centrality.
Conclusions: Our results demonstrate the utility of using hypergraphs to
represent complex biological systems and highlight central important responses
in common to a variety of highly pathogenic viruses.
| [
{
"created": "Tue, 6 Oct 2020 22:45:19 GMT",
"version": "v1"
}
] | 2020-10-08 | [
[
"Feng",
"Song",
""
],
[
"Heath",
"Emily",
""
],
[
"Jefferson",
"Brett",
""
],
[
"Joslyn",
"Cliff",
""
],
[
"Kvinge",
"Henry",
""
],
[
"Mitchell",
"Hugh D.",
""
],
[
"Praggastis",
"Brenda",
""
],
[
"Eisfeld",
"Amie J.",
""
],
[
"Sims",
"Amy C.",
""
],
[
"Thackray",
"Larissa B.",
""
],
[
"Fan",
"Shufang",
""
],
[
"Walters",
"Kevin B.",
""
],
[
"Halfmann",
"Peter J.",
""
],
[
"Westhoff-Smith",
"Danielle",
""
],
[
"Tan",
"Qing",
""
],
[
"Menachery",
"Vineet D.",
""
],
[
"Sheahan",
"Timothy P.",
""
],
[
"Cockrell",
"Adam S.",
""
],
[
"Kocher",
"Jacob F.",
""
],
[
"Stratton",
"Kelly G.",
""
],
[
"Heller",
"Natalie C.",
""
],
[
"Bramer",
"Lisa M.",
""
],
[
"Diamond",
"Michael S.",
""
],
[
"Baric",
"Ralph S.",
""
],
[
"Waters",
"Katrina M.",
""
],
[
"Kawaoka",
"Yoshihiro",
""
],
[
"McDermott",
"Jason E.",
""
],
[
"Purvine",
"Emilie",
""
]
] | Background: Representing biological networks as graphs is a powerful approach to reveal underlying patterns, signatures, and critical components from high-throughput biomolecular data. However, graphs do not natively capture the multi-way relationships present among genes and proteins in biological systems. Hypergraphs are generalizations of graphs that naturally model multi-way relationships and have shown promise in modeling systems such as protein complexes and metabolic reactions. In this paper we seek to understand how hypergraphs can more faithfully identify, and potentially predict, important genes based on complex relationships inferred from genomic expression data sets. Results: We compiled a novel data set of transcriptional host response to pathogenic viral infections and formulated relationships between genes as a hypergraph where hyperedges represent significantly perturbed genes, and vertices represent individual biological samples with specific experimental conditions. We find that hypergraph betweenness centrality is a superior method for identification of genes important to viral response when compared with graph centrality. Conclusions: Our results demonstrate the utility of using hypergraphs to represent complex biological systems and highlight central important responses in common to a variety of highly pathogenic viruses. |
1911.02346 | Benoit Viollet | Adrien Grenier, Pierre Sujobert (CRCL), S\'everine Olivier (IC UM3
(UMR 8104 / U1016)), H\'el\`ene Guermouche, Johanna Mondesir, Olivier
Kosmider (IC UM3 (UMR 8104 / U1016)), Benoit Viollet (EMD), J\'er\^ome
Tamburini (IC UM3 (UMR 8104 / U1016)) | Knockdown of human AMPK using the CRISPR-Cas9 genome-editing system | null | Methods Mol Biol, 1732, pp.171-194, 2018 | 10.1007/978-1-4939-7598-3_11 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AMP activated protein kinase (AMPK) is a critical energy sensor, regulating
signaling networks involved in pathology including metabolic diseases and
cancer. This increasingly recognized role of AMPK has prompted tremendous
research efforts to develop new pharmacological AMPK activators. To precisely
study the role of AMPK, and the specificity and activity of AMPK activators in
cellular models, genetic AMPK inactivating tools are required. We report here
methods for genetic inactivation of AMPK $\alpha1/ \alpha2$ catalytic subunits
in human cell lines by the CRISPR/Cas9 technology, a recent breakthrough
technique for genome editing.
| [
{
"created": "Wed, 6 Nov 2019 12:52:58 GMT",
"version": "v1"
}
] | 2019-11-07 | [
[
"Grenier",
"Adrien",
"",
"CRCL"
],
[
"Sujobert",
"Pierre",
"",
"CRCL"
],
[
"Olivier",
"Séverine",
"",
"IC UM3"
],
[
"Guermouche",
"Hélène",
"",
"IC UM3"
],
[
"Mondesir",
"Johanna",
"",
"IC UM3"
],
[
"Kosmider",
"Olivier",
"",
"IC UM3"
],
[
"Viollet",
"Benoit",
"",
"EMD"
],
[
"Tamburini",
"Jérôme",
"",
"IC UM3"
]
] | AMP activated protein kinase (AMPK) is a critical energy sensor, regulating signaling networks involved in pathology including metabolic diseases and cancer. This increasingly recognized role of AMPK has prompted tremendous research efforts to develop new pharmacological AMPK activators. To precisely study the role of AMPK, and the specificity and activity of AMPK activators in cellular models, genetic AMPK inactivating tools are required. We report here methods for genetic inactivation of AMPK $\alpha1/ \alpha2$ catalytic subunits in human cell lines by the CRISPR/Cas9 technology, a recent breakthrough technique for genome editing. |
q-bio/0511037 | Giuseppe Vitiello | Walter J. Freeman (1) and Giuseppe Vitiello (2) ((1) Department of
Molecular and Cell Biology, University of California, Berkeley CA, USA (2)
Dipartimento di Fisica ``E.R. Caianiello'', and INFN, Universita' degli Studi
di Salerno, Salerno, Italia) | Nonlinear brain dynamics as macroscopic manifestation of underlying
many-body field dynamics | 31 pages | Physics of Life Reviews 3, 93-118 (2006) | 10.1016/j.plrev.2006.02.001 | null | q-bio.OT quant-ph | null | Neural activity patterns related to behavior occur at many scales in time and
space from the atomic and molecular to the whole brain. Here we explore the
feasibility of interpreting neurophysiological data in the context of many-body
physics by using tools that physicists have devised to analyze comparable
hierarchies in other fields of science. We focus on a mesoscopic level that
offers a multi-step pathway between the microscopic functions of neurons and
the macroscopic functions of brain systems revealed by hemodynamic imaging. We
use electroencephalographic (EEG) records collected from high-density electrode
arrays fixed on the epidural surfaces of primary sensory and limbic areas in
rabbits and cats trained to discriminate conditioned stimuli (CS) in the
various modalities. High temporal resolution of EEG signals with the Hilbert
transform gives evidence for diverse intermittent spatial patterns of amplitude
(AM) and phase modulations (PM) of carrier waves that repeatedly re-synchronize
in the beta and gamma ranges at near zero time lags over long distances. The
dominant mechanism for neural interactions by axodendritic synaptic
transmission should impose distance-dependent delays on the EEG oscillations
owing to finite propagation velocities. It does not. EEGs instead show evidence
for anomalous dispersion: the existence in neural populations of a low velocity
range of information and energy transfers, and a high velocity range of the
spread of phase transitions. This distinction labels the phenomenon but does
not explain it. In this report we explore the analysis of these phenomena using
concepts of energy dissipation, the maintenance by cortex of multiple ground
states corresponding to AM patterns, and the exclusive selection by spontaneous
breakdown of symmetry (SBS) of single states in sequences.
| [
{
"created": "Tue, 22 Nov 2005 18:00:56 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Freeman",
"Walter J.",
""
],
[
"Vitiello",
"Giuseppe",
""
]
] | Neural activity patterns related to behavior occur at many scales in time and space from the atomic and molecular to the whole brain. Here we explore the feasibility of interpreting neurophysiological data in the context of many-body physics by using tools that physicists have devised to analyze comparable hierarchies in other fields of science. We focus on a mesoscopic level that offers a multi-step pathway between the microscopic functions of neurons and the macroscopic functions of brain systems revealed by hemodynamic imaging. We use electroencephalographic (EEG) records collected from high-density electrode arrays fixed on the epidural surfaces of primary sensory and limbic areas in rabbits and cats trained to discriminate conditioned stimuli (CS) in the various modalities. High temporal resolution of EEG signals with the Hilbert transform gives evidence for diverse intermittent spatial patterns of amplitude (AM) and phase modulations (PM) of carrier waves that repeatedly re-synchronize in the beta and gamma ranges at near zero time lags over long distances. The dominant mechanism for neural interactions by axodendritic synaptic transmission should impose distance-dependent delays on the EEG oscillations owing to finite propagation velocities. It does not. EEGs instead show evidence for anomalous dispersion: the existence in neural populations of a low velocity range of information and energy transfers, and a high velocity range of the spread of phase transitions. This distinction labels the phenomenon but does not explain it. In this report we explore the analysis of these phenomena using concepts of energy dissipation, the maintenance by cortex of multiple ground states corresponding to AM patterns, and the exclusive selection by spontaneous breakdown of symmetry (SBS) of single states in sequences. |
q-bio/0409023 | Ayse Humeyra Bilge | A.H. Bilge, A. Erzan and D. Balcan | The Shift-Match Number and String Matching Probabilities for Binary
Sequences | 17 pages, 2 figures | null | null | null | q-bio.GN q-bio.QM | null | We define the ``shift-match number'' for a binary string and we compute the
probability of occurrence of a given string as a subsequence in longer strings
in terms of its shift-match number. We thus prove that the string matching
probabilities depend not only on the length of shorter strings, but also on the
equivalence class of the shorter string determined by its shift-match number.
| [
{
"created": "Tue, 21 Sep 2004 09:38:24 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bilge",
"A. H.",
""
],
[
"Erzan",
"A.",
""
],
[
"Balcan",
"D.",
""
]
] | We define the ``shift-match number'' for a binary string and we compute the probability of occurrence of a given string as a subsequence in longer strings in terms of its shift-match number. We thus prove that the string matching probabilities depend not only on the length of shorter strings, but also on the equivalence class of the shorter string determined by its shift-match number. |
1608.05778 | Pedro Maia | Pedro D. Maia and J. Nathan Kutz | Reaction time impairments in decision-making networks as a diagnostic
marker for traumatic brain injuries and neurodegenerative diseases | 18 pages, 10 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The presence of diffuse Focal Axonal Swellings (FAS) is a hallmark cellular
feature in many neurodegenerative diseases and traumatic brain injury. Among
other things, the FAS have a significant impact on spike-train encodings that
propagate through the affected neurons, leading to compromised signal
processing on a neuronal network level. This work merges, for the first time,
three fields of study: (i) signal processing in excitatory-inhibitory (EI)
networks of neurons via population codes, (ii) decision-making theory driven by
the production of evidence from stimulus, and (iii) compromised spike-train
propagation through FAS. As such, we demonstrate a mathematical architecture
capable of characterizing compromised decision-making driven by cellular
mechanisms. The computational model also leads to several novel predictions and
diagnostics for understanding injury level and cognitive deficits, including a
key finding that decision-making reaction times, rather than accuracy, are
indicative of network level damage. The results have a number of translational
implications, including that the level of network damage can be characterized
by the reaction times in simple cognitive and motor tests.
| [
{
"created": "Sat, 20 Aug 2016 04:23:27 GMT",
"version": "v1"
}
] | 2016-08-23 | [
[
"Maia",
"Pedro D.",
""
],
[
"Kutz",
"J. Nathan",
""
]
] | The presence of diffuse Focal Axonal Swellings (FAS) is a hallmark cellular feature in many neurodegenerative diseases and traumatic brain injury. Among other things, the FAS have a significant impact on spike-train encodings that propagate through the affected neurons, leading to compromised signal processing on a neuronal network level. This work merges, for the first time, three fields of study: (i) signal processing in excitatory-inhibitory (EI) networks of neurons via population codes, (ii) decision-making theory driven by the production of evidence from stimulus, and (iii) compromised spike-train propagation through FAS. As such, we demonstrate a mathematical architecture capable of characterizing compromised decision-making driven by cellular mechanisms. The computational model also leads to several novel predictions and diagnostics for understanding injury level and cognitive deficits, including a key finding that decision-making reaction times, rather than accuracy, are indicative of network level damage. The results have a number of translational implications, including that the level of network damage can be characterized by the reaction times in simple cognitive and motor tests. |
1207.4617 | Ri\'ansares Arriazu PhD | Esther Dur\'an, and Ri\'ansares Arriazu | Immunoexpression of Nm23: correlation with traditional prognostic
markers in breast carcinoma | 9 pages, 5 tables, 7 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fifty cases of breast carcinoma were investigated for Nm23, LPA1, Ki67 and
p53 antigen expression, using immunohistochemical techniques. Correlation
between markers was studied. Nm23 showed an inverse correlation with LPA1 and
Ki67, and positively with p53. Nm23/LPA1 presented a significant (p<0.001) negative
correlation with p53, but no difference was observed with Ki67. It can be
concluded that: 1) Nm23 and LPA1 tend to be inversely correlated; 2) Nm23
correlates positively with p53, regardless of tumour type and patient's
hormonal status; 3) Nm23 shows a reverse trend in comparison with Ki67.
However, more studies are needed to determine if hormone receptors play some
role in the inverse relationship between Nm23-LPA1 and to know the tumour type
implication in these relationships.
| [
{
"created": "Thu, 19 Jul 2012 11:32:06 GMT",
"version": "v1"
}
] | 2012-07-20 | [
[
"Durán",
"Esther",
""
],
[
"Arriazu",
"Riánsares",
""
]
] | Fifty cases of breast carcinoma were investigated for Nm23, LPA1, Ki67 and p53 antigen expression, using immunohistochemical techniques. Correlation between markers was studied. Nm23 showed an inverse correlation with LPA1 and Ki67, and positively with p53. Nm23/LPA1 presented a significant (p<0.001) negative correlation with p53, but no difference was observed with Ki67. It can be concluded that: 1) Nm23 and LPA1 tend to be inversely correlated; 2) Nm23 correlates positively with p53, regardless of tumour type and patient's hormonal status; 3) Nm23 shows a reverse trend in comparison with Ki67. However, more studies are needed to determine if hormone receptors play some role in the inverse relationship between Nm23-LPA1 and to know the tumour type implication in these relationships. |
2204.02770 | Georg Diez | Georg Diez, Daniel Nagel, Gerhard Stock | Correlation-based feature selection to identify functional dynamics in
proteins | null | null | 10.1021/acs.jctc.2c00337 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To interpret molecular dynamics simulations of biomolecular systems,
systematic dimensionality reduction methods are commonly employed. Among
others, this includes principal component analysis (PCA) and time-lagged
independent component analysis (TICA), which aim to maximize the variance and
the timescale of the first components, respectively. A crucial first step of
such an analysis is the identification of suitable and relevant input
coordinates (the so-called features), such as backbone dihedral angles and
interresidue distances. As typically only a small subset of those coordinates
is involved in a specific biomolecular process, it is important to discard the
remaining uncorrelated motions or weakly correlated noise coordinates. This is
because they may exhibit large amplitudes or long timescales and therefore will
erroneously be considered important by PCA and TICA, respectively. To
discriminate collective motions underlying functional dynamics from
uncorrelated motions, the correlation matrix of the input coordinates is
block-diagonalized by a clustering method. This strategy avoids possible bias
due to presumed functional observables and conformational states or variational
principles that maximize variance or timescales. Considering several linear and
nonlinear correlation measures and various clustering algorithms, it is shown
that the combination of linear correlation and the Leiden community detection
algorithm yields excellent results for all considered model systems. These
include the functional motion of T4 lysozyme to demonstrate the successful
identification of collective motion, as well as the folding of villin headpiece
to highlight the physical interpretation of the correlated motions in terms of
a functional mechanism.
| [
{
"created": "Wed, 6 Apr 2022 12:28:55 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Apr 2022 08:43:13 GMT",
"version": "v2"
}
] | 2022-07-20 | [
[
"Diez",
"Georg",
""
],
[
"Nagel",
"Daniel",
""
],
[
"Stock",
"Gerhard",
""
]
] | To interpret molecular dynamics simulations of biomolecular systems, systematic dimensionality reduction methods are commonly employed. Among others, this includes principal component analysis (PCA) and time-lagged independent component analysis (TICA), which aim to maximize the variance and the timescale of the first components, respectively. A crucial first step of such an analysis is the identification of suitable and relevant input coordinates (the so-called features), such as backbone dihedral angles and interresidue distances. As typically only a small subset of those coordinates is involved in a specific biomolecular process, it is important to discard the remaining uncorrelated motions or weakly correlated noise coordinates. This is because they may exhibit large amplitudes or long timescales and therefore will erroneously be considered important by PCA and TICA, respectively. To discriminate collective motions underlying functional dynamics from uncorrelated motions, the correlation matrix of the input coordinates is block-diagonalized by a clustering method. This strategy avoids possible bias due to presumed functional observables and conformational states or variational principles that maximize variance or timescales. Considering several linear and nonlinear correlation measures and various clustering algorithms, it is shown that the combination of linear correlation and the Leiden community detection algorithm yields excellent results for all considered model systems. These include the functional motion of T4 lysozyme to demonstrate the successful identification of collective motion, as well as the folding of villin headpiece to highlight the physical interpretation of the correlated motions in terms of a functional mechanism. |
1604.04008 | Daozhou Gao | Daozhou Gao, Yijun Lou, Daihai He, Travis C. Porco, Yang Kuang,
Gerardo Chowell, Shigui Ruan | Prevention and control of Zika fever as a mosquito-borne and sexually
transmitted disease | null | Scientific Reports, 2016 6: 28070 | 10.1038/srep28070 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ongoing Zika virus (ZIKV) epidemic poses a major global public health
emergency. It is known that ZIKV is spread by \textit{Aedes} mosquitoes; recent
studies show that ZIKV can also be transmitted via sexual contact, and cases of
sexually transmitted ZIKV have been confirmed in the U.S., France, and Italy.
How sexual transmission affects the spread and control of ZIKV infection is not
well-understood. We presented a mathematical model to investigate the impact of
mosquito-borne and sexual transmission on spread and control of ZIKV and used
the model to fit the ZIKV data in Brazil, Colombia, and El Salvador. Based on
the estimated parameter values, we calculated the median and confidence
interval of the basic reproduction number R0=2.055 (95% CI: 0.523-6.300), in
which the distribution of the percentage of contribution by sexual transmission
is 3.044 (95% CI: 0.123-45.73). Our study indicates that R0 is most sensitive
to the biting rate and mortality rate of mosquitoes while sexual transmission
increases the risk of infection and epidemic size and prolongs the outbreak. In
order to prevent and control the transmission of ZIKV, it must be treated as
not only a mosquito-borne disease but also a sexually transmitted disease.
| [
{
"created": "Thu, 14 Apr 2016 01:34:19 GMT",
"version": "v1"
}
] | 2016-07-08 | [
[
"Gao",
"Daozhou",
""
],
[
"Lou",
"Yijun",
""
],
[
"He",
"Daihai",
""
],
[
"Porco",
"Travis C.",
""
],
[
"Kuang",
"Yang",
""
],
[
"Chowell",
"Gerardo",
""
],
[
"Ruan",
"Shigui",
""
]
] | The ongoing Zika virus (ZIKV) epidemic poses a major global public health emergency. It is known that ZIKV is spread by \textit{Aedes} mosquitoes; recent studies show that ZIKV can also be transmitted via sexual contact, and cases of sexually transmitted ZIKV have been confirmed in the U.S., France, and Italy. How sexual transmission affects the spread and control of ZIKV infection is not well-understood. We presented a mathematical model to investigate the impact of mosquito-borne and sexual transmission on spread and control of ZIKV and used the model to fit the ZIKV data in Brazil, Colombia, and El Salvador. Based on the estimated parameter values, we calculated the median and confidence interval of the basic reproduction number R0=2.055 (95% CI: 0.523-6.300), in which the distribution of the percentage of contribution by sexual transmission is 3.044 (95% CI: 0.123-45.73). Our study indicates that R0 is most sensitive to the biting rate and mortality rate of mosquitoes while sexual transmission increases the risk of infection and epidemic size and prolongs the outbreak. In order to prevent and control the transmission of ZIKV, it must be treated as not only a mosquito-borne disease but also a sexually transmitted disease. |
2203.07193 | Vikram Singh | Bhanu Sharma, Spandan Kumar, Subhendu Ghosh, Vikram Singh | Emergent dynamics in an astrocyte-neuronal network coupled via nitric
oxide | 18 pages, 6 figures, 3 tables | null | 10.1088/1478-3975/ace8e6 | null | q-bio.NC q-bio.CB q-bio.MN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the brain, both neurons and glial cells work in conjunction with each
other during information processing. Stimulation of neurons can cause calcium
oscillations in astrocytes which in turn can affect neuronal calcium dynamics.
The "glissandi" effect is one such phenomenon, associated with a decrease in
infraslow fluctuations, in which synchronized calcium oscillations propagate as
a wave in hundreds of astrocytes. Nitric oxide molecules released from the
astrocytes contribute to synaptic functions on the basis of the underlying
astrocyte-neuron interaction network. In this study, by defining an
astrocyte-neuronal (A-N) unit as an integrated circuit of one neuron and one
astrocyte, we developed a minimal model of neuronal stimulus-dependent and
nitric oxide-mediated emergence of calcium waves in astrocytes. Incorporating
inter-unit communication via nitric oxide molecules, a coupled network of 1,000
such A-N units is developed in which multiple stable regimes were found to
emerge in astrocytes. We examined the ranges of neuronal stimulus strength and
the coupling strength between A-N units that give rise to such dynamical
behaviors. We also report that there exists a range of coupling strength,
wherein units not receiving stimulus also start showing oscillations and become
synchronized. Our results support the hypothesis that glissandi-like phenomena
exhibiting synchronized calcium oscillations in astrocytes help in efficient
synaptic transmission by reducing the energy demand of the process.
| [
{
"created": "Mon, 14 Mar 2022 15:32:37 GMT",
"version": "v1"
}
] | 2023-08-16 | [
[
"Sharma",
"Bhanu",
""
],
[
"Kumar",
"Spandan",
""
],
[
"Ghosh",
"Subhendu",
""
],
[
"Singh",
"Vikram",
""
]
] | In the brain, both neurons and glial cells work in conjunction with each other during information processing. Stimulation of neurons can cause calcium oscillations in astrocytes which in turn can affect neuronal calcium dynamics. The "glissandi" effect is one such phenomenon, associated with a decrease in infraslow fluctuations, in which synchronized calcium oscillations propagate as a wave in hundreds of astrocytes. Nitric oxide molecules released from the astrocytes contribute to synaptic functions on the basis of the underlying astrocyte-neuron interaction network. In this study, by defining an astrocyte-neuronal (A-N) unit as an integrated circuit of one neuron and one astrocyte, we developed a minimal model of neuronal stimulus-dependent and nitric oxide-mediated emergence of calcium waves in astrocytes. Incorporating inter-unit communication via nitric oxide molecules, a coupled network of 1,000 such A-N units is developed in which multiple stable regimes were found to emerge in astrocytes. We examined the ranges of neuronal stimulus strength and the coupling strength between A-N units that give rise to such dynamical behaviors. We also report that there exists a range of coupling strength, wherein units not receiving stimulus also start showing oscillations and become synchronized. Our results support the hypothesis that glissandi-like phenomena exhibiting synchronized calcium oscillations in astrocytes help in efficient synaptic transmission by reducing the energy demand of the process. |
1112.5678 | Yi Wei | Yi Wei and Alexei Koulakov | An Exactly Solvable Model of Random Site-Specific Recombinations | null | Bulletin of Mathematical Biology, Volume 74, Issue 12, pp
2897-2916, December 2012 | 10.1007/s11538-012-9788-z | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cre-lox and other systems are used as genetic tools to control site-specific
recombination (SSR) events in genomic DNA. If multiple recombination sites are
organized in a compact cluster within the same genome, a series of random
recombination events may generate substantial cell specific genomic diversity.
This diversity is used, for example, to distinguish neurons in the brain of the
same multicellular mosaic organism, within the brainbow approach to neuronal
connectome. In this paper we study an exactly solvable statistical model for
SSR operating on a cluster of recombination sites. We consider two types of
recombination events: inversions and excisions. Both of these events are
available in the Cre-lox system. We derive three properties of the sequences
generated by multiple recombination events. First, we describe the set of
sequences that can in principle be generated by multiple inversions operating
on the given initial sequence. We call this description the ergodicity theorem.
On the basis of this description we calculate the number of sequences that can
be generated from an initial sequence. This number of sequences is
experimentally testable. Second, we demonstrate that after a large number of
random inversions every sequence that can be generated is generated with equal
probability. Lastly, we derive the equations for the probability to find a
sequence as a function of time in the limit when excisions are much less
frequent than inversions, such as in shufflon sequences.
| [
{
"created": "Sat, 24 Dec 2011 00:41:18 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Jun 2014 00:15:15 GMT",
"version": "v2"
}
] | 2014-06-30 | [
[
"Wei",
"Yi",
""
],
[
"Koulakov",
"Alexei",
""
]
] | Cre-lox and other systems are used as genetic tools to control site-specific recombination (SSR) events in genomic DNA. If multiple recombination sites are organized in a compact cluster within the same genome, a series of random recombination events may generate substantial cell specific genomic diversity. This diversity is used, for example, to distinguish neurons in the brain of the same multicellular mosaic organism, within the brainbow approach to neuronal connectome. In this paper we study an exactly solvable statistical model for SSR operating on a cluster of recombination sites. We consider two types of recombination events: inversions and excisions. Both of these events are available in the Cre-lox system. We derive three properties of the sequences generated by multiple recombination events. First, we describe the set of sequences that can in principle be generated by multiple inversions operating on the given initial sequence. We call this description the ergodicity theorem. On the basis of this description we calculate the number of sequences that can be generated from an initial sequence. This number of sequences is experimentally testable. Second, we demonstrate that after a large number of random inversions every sequence that can be generated is generated with equal probability. Lastly, we derive the equations for the probability to find a sequence as a function of time in the limit when excisions are much less frequent than inversions, such as in shufflon sequences. |
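The combinatorics in the abstract above (the set and count of sequences reachable by inversions) can be illustrated with a toy enumeration. This is an illustration only, not the authors' model: in a real Cre-lox cluster only segments flanked by compatible lox sites can invert, a constraint this sketch drops by allowing any contiguous block to be reversed with orientation flips.

```python
def invert(seq, i, j):
    """Reverse the segment seq[i:j] and flip each element's orientation."""
    seg = tuple((x, -o) for (x, o) in reversed(seq[i:j]))
    return seq[:i] + seg + seq[j:]

def reachable(seq):
    """All sequences generated from `seq` by repeated segment inversions."""
    start = tuple(seq)
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for i in range(len(s)):
            for j in range(i + 1, len(s) + 1):
                t = invert(s, i, j)
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
    return seen

# Three oriented segments: with unconstrained inversions every signed
# permutation is reachable, so the count is 2^3 * 3! = 48.
states = reachable([(1, 1), (2, 1), (3, 1)])
print(len(states))  # 48
```

The experimentally testable quantity in the paper is exactly such a count, restricted to inversions the recombinase can actually perform.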
1907.05138 | Joel Miller | Joel C Miller | Distribution of outbreak sizes for SIR disease in finite populations | null | null | null | null | q-bio.PE math.CO physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the spread of a Susceptible-Infected-Recovered (SIR) disease
through finite populations and derive an expression for the final size
distribution. Our derivation allows arbitrary distributions of the number of
transmissions caused by an infected individual. We show how this calculation
can be used to infer parameters of the infectious disease through observations
in multiple small populations. The inference suffers from some identifiability
difficulties, and it requires many observations to distinguish between
parameter combinations that correspond to the same reproductive number.
| [
{
"created": "Thu, 11 Jul 2019 12:20:47 GMT",
"version": "v1"
}
] | 2019-07-12 | [
[
"Miller",
"Joel C",
""
]
] | We consider the spread of a Susceptible-Infected-Recovered (SIR) disease through finite populations and derive an expression for the final size distribution. Our derivation allows arbitrary distributions of the number of transmissions caused by an infected individual. We show how this calculation can be used to infer parameters of the infectious disease through observations in multiple small populations. The inference suffers from some identifiability difficulties, and it requires many observations to distinguish between parameter combinations that correspond to the same reproductive number. |
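Miller's derivation above allows arbitrary transmission distributions; as a hedged special case, the final-size distribution of the simplest Markovian SIR model can be computed exactly by dynamic programming over the embedded jump chain, which is enough to see the identifiability issue (different parameter mixes with the same reproductive number give similar distributions).

```python
def final_size_dist(N, I0, R0):
    """Exact final-size distribution of the Markovian SIR model in a finite
    population of size N, by dynamic programming over the embedded jump
    chain on states (s, i) = (susceptibles, infectives)."""
    beta = R0 / (N - 1)            # per-pair transmission rate (recovery rate 1)
    prob = {(N - I0, I0): 1.0}     # probability of ever visiting (s, i)
    dist = [0.0] * (N + 1)         # dist[k] = P(k individuals ever infected)
    for s in range(N - I0, -1, -1):
        for i in range(N, 0, -1):
            p = prob.pop((s, i), 0.0)
            if p == 0.0:
                continue
            p_inf = beta * s / (beta * s + 1.0)   # next event is an infection
            if s > 0:
                prob[(s - 1, i + 1)] = prob.get((s - 1, i + 1), 0.0) + p * p_inf
            if i == 1:                            # last infective recovers: done
                dist[N - s] += p * (1.0 - p_inf)
            else:
                prob[(s, i - 1)] = prob.get((s, i - 1), 0.0) + p * (1.0 - p_inf)
    return dist

dist = final_size_dist(20, 1, 2.0)
print(round(dist[1], 3))   # P(index case infects nobody) = 1/(1 + R0) = 0.333
```

The `i` in the event probabilities cancels (both rates are proportional to `i`), which is why `p_inf` depends only on `s`.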
1612.07908 | Michael Manhart | Michael Manhart, Bharat V. Adkar, and Eugene I. Shakhnovich | Tradeoffs between microbial growth phases lead to frequency-dependent
and non-transitive selection | null | Proc R Soc B 285: 20172459, 2018 | 10.1098/rspb.2017.2459 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mutations in a microbial population can increase the frequency of a genotype
not only by increasing its exponential growth rate, but also by decreasing its
lag time or adjusting the yield (resource efficiency). The contribution of
multiple life-history traits to selection is a critical question for
evolutionary biology as we seek to predict the evolutionary fates of mutations.
Here we use a model of microbial growth to show there are two distinct
components of selection corresponding to the growth and lag phases, while the
yield modulates their relative importance. The model predicts rich population
dynamics when there are tradeoffs between phases: multiple strains can coexist
or exhibit bistability due to frequency-dependent selection, and strains can
engage in rock-paper-scissors interactions due to non-transitive selection. We
characterize the environmental conditions and patterns of traits necessary to
realize these phenomena, which we show to be readily accessible to experiments.
Our results provide a theoretical framework for analyzing high-throughput
measurements of microbial growth traits, especially interpreting the pleiotropy
and correlations between traits across mutants. This work also highlights the
need for more comprehensive measurements of selection in simple microbial
systems, where the concept of an ordinary fitness landscape breaks down.
| [
{
"created": "Fri, 23 Dec 2016 09:12:42 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2017 22:34:22 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Jan 2018 22:42:30 GMT",
"version": "v3"
}
] | 2018-02-15 | [
[
"Manhart",
"Michael",
""
],
[
"Adkar",
"Bharat V.",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] | Mutations in a microbial population can increase the frequency of a genotype not only by increasing its exponential growth rate, but also by decreasing its lag time or adjusting the yield (resource efficiency). The contribution of multiple life-history traits to selection is a critical question for evolutionary biology as we seek to predict the evolutionary fates of mutations. Here we use a model of microbial growth to show there are two distinct components of selection corresponding to the growth and lag phases, while the yield modulates their relative importance. The model predicts rich population dynamics when there are tradeoffs between phases: multiple strains can coexist or exhibit bistability due to frequency-dependent selection, and strains can engage in rock-paper-scissors interactions due to non-transitive selection. We characterize the environmental conditions and patterns of traits necessary to realize these phenomena, which we show to be readily accessible to experiments. Our results provide a theoretical framework for analyzing high-throughput measurements of microbial growth traits, especially interpreting the pleiotropy and correlations between traits across mutants. This work also highlights the need for more comprehensive measurements of selection in simple microbial systems, where the concept of an ordinary fitness landscape breaks down. |
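The growth-versus-lag tradeoff described above can be sketched with a minimal growth-cycle simulation (an illustration under simplified assumptions, not the authors' model): strains grow exponentially after their lag time and competition stops when a shared resource is exhausted, so which phase dominates selection depends on how much resource is available.

```python
def compete(x0, g, lag, Y, R, dt=1e-4):
    """Frequencies at resource exhaustion for strains with lag times `lag`,
    exponential growth rates `g`, and yields `Y` sharing one resource R."""
    x, t = list(map(float, x0)), 0.0
    while sum((xi - x0i) / yi for xi, x0i, yi in zip(x, x0, Y)) < R:
        x = [xi * (1.0 + gi * dt) if t >= li else xi
             for xi, gi, li in zip(x, g, lag)]
        t += dt
    total = sum(x)
    return [xi / total for xi in x]

# Strain 0: fast grower with a long lag. Strain 1: slow grower, no lag.
# Plentiful resource favours growth rate; scarce resource favours short lag.
rich = compete([1, 1], [2.0, 1.0], [2.0, 0.0], [1, 1], 1000.0)
poor = compete([1, 1], [2.0, 1.0], [2.0, 0.0], [1, 1], 1.0)
print(rich[0] > 0.5, poor[0] < 0.5)  # True True
```

Running the same competition from different starting frequencies is how frequency-dependent selection of the kind the paper analyses would show up.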
1412.2369 | Guowei Wei | Kelin Xia and Xin Feng and Yiying Tong and Guo Wei Wei | Persistent Homology for The Quantitative Prediction of Fullerene
Stability | 12 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Persistent homology is a relatively new tool often used for
\emph{qualitative} analysis of intrinsic topological features in images and
data originated from scientific and engineering applications. In this paper, we
report novel \emph{quantitative} predictions of the energy and stability of
fullerene molecules, the very first attempt in employing persistent homology in
this context. The ground-state structures of a series of small fullerene
molecules are first investigated with the standard Vietoris-Rips complex. We
decipher all the barcodes, including both short-lived local bars and long-lived
global bars arising from topological invariants, and associate them with
fullerene structural details. By using accumulated bar lengths, we build
quantitative models to correlate local and global Betti-2 bars respectively
with the heat of formation and total curvature energies of fullerenes. It is
found that the heat of formation energy is related to the local hexagonal
cavities of small fullerenes, while the total curvature energies of fullerene
isomers are associated with their sphericities, which are measured by the
lengths of their long-lived Betti-2 bars. Excellent correlation coefficients
($>0.94$) between persistent homology predictions and those of quantum or
curvature analysis have been observed. A correlation matrix based filtration is
introduced to further verify our findings.
| [
{
"created": "Sun, 7 Dec 2014 16:58:13 GMT",
"version": "v1"
}
] | 2014-12-09 | [
[
"Xia",
"Kelin",
""
],
[
"Feng",
"Xin",
""
],
[
"Tong",
"Yiying",
""
],
[
"We",
"Guo Wei",
""
]
] | Persistent homology is a relatively new tool often used for \emph{qualitative} analysis of intrinsic topological features in images and data originated from scientific and engineering applications. In this paper, we report novel \emph{quantitative} predictions of the energy and stability of fullerene molecules, the very first attempt in employing persistent homology in this context. The ground-state structures of a series of small fullerene molecules are first investigated with the standard Vietoris-Rips complex. We decipher all the barcodes, including both short-lived local bars and long-lived global bars arising from topological invariants, and associate them with fullerene structural details. By using accumulated bar lengths, we build quantitative models to correlate local and global Betti-2 bars respectively with the heat of formation and total curvature energies of fullerenes. It is found that the heat of formation energy is related to the local hexagonal cavities of small fullerenes, while the total curvature energies of fullerene isomers are associated with their sphericities, which are measured by the lengths of their long-lived Betti-2 bars. Excellent correlation coefficients ($>0.94$) between persistent homology predictions and those of quantum or curvature analysis have been observed. A correlation matrix based filtration is introduced to further verify our findings. |
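The paper above builds quantitative models from accumulated bar lengths of Betti-2 barcodes. As a hedged, self-contained illustration of the same "accumulated bar length" idea in the simplest setting, dimension-0 Vietoris-Rips persistence reduces to single linkage: each minimum-spanning-tree edge kills one connected component (one H0 bar).

```python
import itertools, math

def h0_barcode(points):
    """Death times of the 0-dimensional Vietoris-Rips bars of a Euclidean
    point cloud (Kruskal / single linkage: each MST edge merges, and so
    kills, one connected component)."""
    n = len(points)
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in itertools.combinations(range(n), 2))
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)     # an H0 bar dies at filtration scale d
    return deaths                # n - 1 finite bars; one bar lives forever

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(sum(h0_barcode(square)))   # accumulated H0 bar length: 3.0
```

Computing Betti-2 bars, as the paper does for fullerene cages, requires a full simplicial persistence library rather than this union-find shortcut.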
2107.05741 | EunJung Cho | Eunjung Cho, Yeonggyeong Kang, and Youngsang Cho | Effects of Fine Particulate Matter on Cardiovascular Disease Morbidity:
A Study on Seven Metropolitan Cities in South Korea | null | null | 10.3389/ijph.2022.1604389 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objectives: The primary purpose of this study is to analyze the relationship
between the first occurrence of hospitalization for cardiovascular disease
(CVD) and particulate matter less than 2.5 micrometre in diameter (PM2.5)
exposure, considering average PM2.5 concentration and the frequency of high
PM2.5 concentration simultaneously. Methods: We used large-scale cohort data
from seven metropolitan cities in South Korea. We estimated hazard ratios (HRs)
and 95% confidence intervals (CIs) using the Cox proportional-hazards model,
including annual average PM2.5 and annual hours of PM2.5 concentration
exceeding 55.5 microgram/m3 (FH55). Results: We found that the risk was
elevated by 11.6% (95% CI, 9.7-13.6) for all CVD per 2.9 microgram/m3 increase
of average PM2.5. In addition, a 94-h increase in FH55 increased the risk of
all CVD by 3.8% (95% CI, 2.8-4.7). Regarding stroke, we found that people who
were older and had a history of hypertension were more vulnerable to PM2.5
exposure. Conclusion: Based on the findings, we conclude that accurate
forecasting, information dissemination, and timely warning of high
concentrations of PM2.5 at the national level may reduce the risk of CVD
occurrence.
| [
{
"created": "Wed, 7 Jul 2021 06:57:01 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Jan 2023 05:03:57 GMT",
"version": "v2"
}
] | 2023-01-27 | [
[
"Cho",
"Eunjung",
""
],
[
"Kang",
"Yeonggyeong",
""
],
[
"Cho",
"Youngsang",
""
]
] | Objectives: The primary purpose of this study is to analyze the relationship between the first occurrence of hospitalization for cardiovascular disease (CVD) and particulate matter less than 2.5 micrometre in diameter (PM2.5) exposure, considering average PM2.5 concentration and the frequency of high PM2.5 concentration simultaneously. Methods: We used large-scale cohort data from seven metropolitan cities in South Korea. We estimated hazard ratios (HRs) and 95% confidence intervals (CIs) using the Cox proportional-hazards model, including annual average PM2.5 and annual hours of PM2.5 concentration exceeding 55.5 microgram/m3 (FH55). Results: We found that the risk was elevated by 11.6% (95% CI, 9.7-13.6) for all CVD per 2.9 microgram/m3 increase of average PM2.5. In addition, a 94-h increase in FH55 increased the risk of all CVD by 3.8% (95% CI, 2.8-4.7). Regarding stroke, we found that people who were older and had a history of hypertension were more vulnerable to PM2.5 exposure. Conclusion: Based on the findings, we conclude that accurate forecasting, information dissemination, and timely warning of high concentrations of PM2.5 at the national level may reduce the risk of CVD occurrence. |
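Cox-model hazard ratios like the 1.116 per 2.9 microgram/m3 reported above rescale log-linearly, since HR(x) = exp(beta * x). A small worked example (illustration only; only the 1.116 figure comes from the abstract):

```python
import math

# Rescale a hazard ratio reported per 2.9 microgram/m3 of PM2.5 to a
# 10 microgram/m3 increment via log-linearity of the Cox model.
hr_reported, increment = 1.116, 2.9
beta = math.log(hr_reported) / increment    # log-hazard per 1 microgram/m3
hr_per_10 = math.exp(beta * 10.0)
print(round(hr_per_10, 2))                  # 1.46 per 10 microgram/m3
```

The same rescaling applies to each end of a confidence interval, since the CI is also multiplicative on the hazard-ratio scale.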
1406.0212 | David Bortz | Dustin D. Keck and David M. Bortz | Generalized sensitivity functions for size-structured population models | 18 pages, 11 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Size-structured population models provide a popular means to mathematically
describe phenomena such as bacterial aggregation, schooling fish, and
planetesimal evolution. For parameter estimation, generalized sensitivity
functions (GSFs) provide a tool that quantifies the impact of data from
specific regions of the experimental domain. These functions help identify the
most relevant data subdomains, which enhances the optimization of experimental
design. To our knowledge, GSFs have not been used in the partial differential
equation (PDE) realm, so we provide a novel PDE extension of the discrete and
continuous ordinary differential equation (ODE) concepts of Thomaseth and
Cobelli and Banks et al. respectively. We analyze the GSFs in the context of
size-structured population models, and specifically analyze the Smoluchowski
coagulation equation to determine the most relevant time and volume domains for
three distinct aggregation kernels. Finally, we provide evidence that
parameter estimation for the Smoluchowski coagulation equation does not require
post-gelation data.
| [
{
"created": "Sun, 1 Jun 2014 22:06:03 GMT",
"version": "v1"
}
] | 2014-06-03 | [
[
"Keck",
"Dustin D.",
""
],
[
"Bortz",
"David M.",
""
]
] | Size-structured population models provide a popular means to mathematically describe phenomena such as bacterial aggregation, schooling fish, and planetesimal evolution. For parameter estimation, generalized sensitivity functions (GSFs) provide a tool that quantifies the impact of data from specific regions of the experimental domain. These functions help identify the most relevant data subdomains, which enhances the optimization of experimental design. To our knowledge, GSFs have not been used in the partial differential equation (PDE) realm, so we provide a novel PDE extension of the discrete and continuous ordinary differential equation (ODE) concepts of Thomaseth and Cobelli and Banks et al. respectively. We analyze the GSFs in the context of size-structured population models, and specifically analyze the Smoluchowski coagulation equation to determine the most relevant time and volume domains for three, distinct aggregation kernels. Finally, we provide evidence that parameter estimation for the Smoluchowski coagulation equation does not require post-gelation data. |
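The Smoluchowski coagulation equation analysed above can be integrated directly for intuition. The sketch below (a minimal forward-Euler discretisation with a constant kernel and monomer initial data, not the paper's GSF machinery) shows the dynamics and checks the known pre-gelation invariants: total mass is conserved and cluster number follows N(t) = 1/(1 + t/2).

```python
import numpy as np

def smoluchowski(K=1.0, kmax=50, t_end=1.0, dt=1e-3):
    """Forward-Euler integration of the Smoluchowski coagulation equation
    with constant kernel K, truncated at cluster size kmax, starting
    from monomers only (n_1(0) = 1)."""
    n = np.zeros(kmax + 1)
    n[1] = 1.0
    for _ in range(int(t_end / dt)):
        gain = np.zeros_like(n)
        for k in range(2, kmax + 1):
            gain[k] = 0.5 * K * sum(n[i] * n[k - i] for i in range(1, k))
        loss = K * n * n.sum()
        n += dt * (gain - loss)
    return n

n = smoluchowski()
mass = sum(k * n[k] for k in range(len(n)))
print(round(float(mass), 4))   # total mass: conserved (constant kernel never gels)
```

Gelling kernels (e.g. multiplicative) break mass conservation in finite time, which is the regime where the paper's remark about post-gelation data matters.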
2012.15392 | James Van Yperen | James Van Yperen, Eduard Campillo-Funollet, Rebecca Inkpen, Anjum
Memon, Anotida Madzvamuse | A hospital demand and capacity intervention approach for COVID-19 in the
UK | null | PLoS ONE 18 (2023) | 10.1371/journal.pone.0283350 | null | q-bio.PE math.GM physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | The mathematical interpretation of interventions for the mitigation of
epidemics and pandemics in the literature often involves finding the optimal
time to initiate an intervention and/or the use of infections to manage impact.
Whilst these methods may work in theory, in order to implement they may require
information which is likely not available whilst one is in the midst of an
epidemic, or they may require impeccable data about infection levels in the
community. In practice, testing and cases data is only as good as the policy of
implementation and the compliance of the individuals, which means that
understanding the levels of infections becomes difficult or complicated from
the data that is provided. In this paper, we aim to develop a different
approach to the mathematical modelling of interventions, not based on
optimality, but based on demand and capacity of local authorities who have to
deal with the epidemic on a day to day basis. In particular, we use data-driven
modelling to calibrate a Susceptible Exposed Infectious Recovered-Died
(SEIR-D) model to infer parameters that depict the dynamics of the epidemic in
a region of the UK. We use the calibrated parameters for forecasting scenarios
and understand, given a maximum capacity of hospital healthcare services, how
the timing of interventions, severity of interventions, and conditions for the
releasing of interventions affect the overall epidemic-picture.
| [
{
"created": "Thu, 31 Dec 2020 01:39:21 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jun 2022 14:35:08 GMT",
"version": "v2"
}
] | 2023-05-17 | [
[
"Van Yperen",
"James",
""
],
[
"Campillo-Funollet",
"Eduard",
""
],
[
"Inkpen",
"Rebecca",
""
],
[
"Memon",
"Anjum",
""
],
[
"Madzvamuse",
"Anotida",
""
]
] | The mathematical interpretation of interventions for the mitigation of epidemics and pandemics in the literature often involves finding the optimal time to initiate an intervention and/or the use of infections to manage impact. Whilst these methods may work in theory, in order to implement they may require information which is likely not available whilst one is in the midst of an epidemic, or they may require impeccable data about infection levels in the community. In practice, testing and cases data is only as good as the policy of implementation and the compliance of the individuals, which means that understanding the levels of infections becomes difficult or complicated from the data that is provided. In this paper, we aim to develop a different approach to the mathematical modelling of interventions, not based on optimality, but based on demand and capacity of local authorities who have to deal with the epidemic on a day to day basis. In particular, we use data-driven modelling to calibrate a Susceptible Exposed Infectious Recovered-Died (SEIR-D) model to infer parameters that depict the dynamics of the epidemic in a region of the UK. We use the calibrated parameters for forecasting scenarios and understand, given a maximum capacity of hospital healthcare services, how the timing of interventions, severity of interventions, and conditions for the releasing of interventions affect the overall epidemic-picture. |
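The demand-and-capacity idea above can be sketched with a toy SEIR-D model in which transmission is reduced whenever infectious prevalence exceeds a notional hospital capacity. All parameter values below are illustrative assumptions, not the paper's calibrated estimates.

```python
def seird_capacity(beta, capacity, beta_low=0.05, t_end=300.0, dt=0.05,
                   sigma=0.2, gamma=0.1, mu=0.01):
    """SEIR-D with a demand-and-capacity rule: transmission falls to
    beta_low whenever infectious prevalence I exceeds `capacity`.
    Returns the peak prevalence and cumulative deaths."""
    S, E, I, R, D = 0.99, 0.0, 0.01, 0.0, 0.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        b = beta_low if I > capacity else beta
        dS = -b * S * I
        dE = b * S * I - sigma * E
        dI = sigma * E - (gamma + mu) * I
        S += dt * dS
        E += dt * dE
        R += dt * gamma * I
        D += dt * mu * I
        I += dt * dI
        peak = max(peak, I)
    return peak, D

peak_free, deaths_free = seird_capacity(beta=0.4, capacity=1.0)  # never binds
peak_int, deaths_int = seird_capacity(beta=0.4, capacity=0.05)   # acts at 5%
print(peak_int < peak_free)  # True: the rule caps the epidemic peak
```

Varying `capacity`, `beta_low`, and the release condition reproduces the qualitative questions the paper asks about timing, severity, and release of interventions.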
1712.07857 | Bernard Piette | Eleanor F. Banwell, Bernard Piette, Anne Taormina, and Jonathan Heddle | Reciprocal Nucleopeptides as the Ancestral Darwinian Self-Replicator | 28 pages, 6 figures | Eleanor F Banwell, Bernard M A G Piette, Anne Taormina, Jonathan G
Heddle; Reciprocal Nucleopeptides as the Ancestral Darwinian Self-Replicator,
Molecular Biology and Evolution, msx292 (2017) | 10.1093/molbev/msx292 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Even the simplest organisms are too complex to have spontaneously arisen
fully-formed, yet precursors to first life must have emerged ab initio from
their environment. A watershed event was the appearance of the first entity
capable of evolution: the Initial Darwinian Ancestor. Here we suggest that
nucleopeptide reciprocal replicators could have carried out this important role
and contend that this is the simplest way to explain extant replication systems
in a mathematically consistent way. We propose short nucleic-acid templates on
which amino-acylated adapters assembled. Spatial localization drives peptide
ligation from activated precursors to generate phosphodiester-bond-catalytic
peptides. Comprising autocatalytic protein and nucleic acid sequences, this
dynamical system links and unifies several previous hypotheses and provides a
plausible model for the emergence of DNA and the operational code.
| [
{
"created": "Thu, 21 Dec 2017 10:17:30 GMT",
"version": "v1"
}
] | 2017-12-22 | [
[
"Banwell",
"Eleanor F.",
""
],
[
"Piette",
"Bernard",
""
],
[
"Taormina",
"Anne",
""
],
[
"Heddle",
"Jonathan",
""
]
] | Even the simplest organisms are too complex to have spontaneously arisen fully-formed, yet precursors to first life must have emerged ab initio from their environment. A watershed event was the appearance of the first entity capable of evolution: the Initial Darwinian Ancestor. Here we suggest that nucleopeptide reciprocal replicators could have carried out this important role and contend that this is the simplest way to explain extant replication systems in a mathematically consistent way. We propose short nucleic-acid templates on which amino-acylated adapters assembled. Spatial localization drives peptide ligation from activated precursors to generate phosphodiester-bond-catalytic peptides. Comprising autocatalytic protein and nucleic acid sequences, this dynamical system links and unifies several previous hypotheses and provides a plausible model for the emergence of DNA and the operational code. |
1002.0051 | St InterJRI InterJRI | Babu A. Manjasetty, Sunil Kumar, Andrew P. Turnbull, Niraj Kanti
Tripathy | Structural Investigations into Shwachman Bodian Diamond Syndrome SBDS
using a Bioinformatics Approach | 8 pages, 6 figures | InterJRI Science and Technology, Volume 1, pp 83-90, 2009 | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The functional correlation of missense mutations which cause disease remains
a challenge to understanding the basis of genetic diseases. This is
particularly true for proteins related to diseases for which there are no
available three dimensional structures. One such disease is Shwachman Diamond
syndrome SDS OMIM 260400, a multi system disease arising from loss-of-function
mutations. The Homo sapiens Shwachman Bodian Diamond Syndrome gene
hSBDS is responsible for SDS. hSBDS is expressed in all tissues and encodes a
protein of 250 amino acids SwissProt accession code Q9Y3A5. Sequence analysis
of disease associated alleles has identified more than 20 different mutations
in affected individuals. While a number of these mutations have been described
as leading to the loss of protein function due to truncation, translation or
surface epitope association, the structural basis for these mutations has yet
to be determined due to the lack of a three-dimensional structure for SBDS.
| [
{
"created": "Sat, 30 Jan 2010 07:53:46 GMT",
"version": "v1"
}
] | 2016-09-08 | [
[
"Manjasetty",
"Babu A.",
""
],
[
"Kumar",
"Sunil",
""
],
[
"Turnbull",
"Andrew P.",
""
],
[
"Tripathy",
"Niraj Kanti",
""
]
] | The functional correlation of missense mutations which cause disease remains a challenge to understanding the basis of genetic diseases. This is particularly true for proteins related to diseases for which there are no available three dimensional structures. One such disease is Shwachman Diamond syndrome SDS OMIM 260400, a multi system disease arising from loss-of-function mutations. The Homo sapiens Shwachman Bodian Diamond Syndrome gene hSBDS is responsible for SDS. hSBDS is expressed in all tissues and encodes a protein of 250 amino acids SwissProt accession code Q9Y3A5. Sequence analysis of disease associated alleles has identified more than 20 different mutations in affected individuals. While a number of these mutations have been described as leading to the loss of protein function due to truncation, translation or surface epitope association, the structural basis for these mutations has yet to be determined due to the lack of a three-dimensional structure for SBDS. |
1610.01217 | Yahya Karimipanah | Yahya Karimipanah, Zhengyu Ma and Ralf Wessel | New hallmarks of criticality in recurrent neural networks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A rigorous understanding of brain dynamics and function requires a conceptual
bridge between multiple levels of organization, including neural spiking and
network-level population activity. Mounting evidence suggests that neural
networks of cerebral cortex operate at criticality. How operating near this
network state impacts the variability of neuronal spiking is largely unknown.
Here we show in a computational model that two prevalent features of cortical
single-neuron activity, irregular spiking and the decline of response
variability at stimulus onset, are both emergent properties of a recurrent
network operating near criticality. Importantly, our work reveals that the
relation between the irregularity of spiking and the number of input
connections to a neuron, i.e., the in-degree, is maximized at criticality. Our
findings establish criticality as a unifying principle for the variability of
single-neuron spiking and the collective behavior of recurrent circuits in
cerebral cortex.
| [
{
"created": "Tue, 4 Oct 2016 21:49:27 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Oct 2016 05:11:51 GMT",
"version": "v2"
}
] | 2016-10-11 | [
[
"Karimipanah",
"Yahya",
""
],
[
"Ma",
"Zhengyu",
""
],
[
"Wessel",
"Ralf",
""
]
] | A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at criticality. How operating near this network state impacts the variability of neuronal spiking is largely unknown. Here we show in a computational model that two prevalent features of cortical single-neuron activity, irregular spiking and the decline of response variability at stimulus onset, are both emergent properties of a recurrent network operating near criticality. Importantly, our work reveals that the relation between the irregularity of spiking and the number of input connections to a neuron, i.e., the in-degree, is maximized at criticality. Our findings establish criticality as a unifying principle for the variability of single-neuron spiking and the collective behavior of recurrent circuits in cerebral cortex. |
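The criticality hypothesis invoked above is often illustrated with a branching process: each spike excites a random number of further spikes with mean m, and m = 1 is the critical point where avalanche sizes become scale-free. A toy sketch of the subcritical hallmark (mean avalanche size 1/(1 - m), diverging as m approaches 1) — an illustration, not the authors' recurrent-network model:

```python
import numpy as np

def avalanche_size(m, rng, cap=100_000):
    """Total spikes triggered by one seed spike when each spike excites a
    Poisson(m) number of postsynaptic spikes (Galton-Watson branching)."""
    size = active = 1
    while active and size < cap:
        active = int(rng.poisson(m, active).sum())
        size += active
    return size

rng = np.random.default_rng(0)
mean_half = np.mean([avalanche_size(0.5, rng) for _ in range(20_000)])
mean_09 = np.mean([avalanche_size(0.9, rng) for _ in range(10_000)])
# Subcritical theory: mean avalanche size 1/(1 - m), i.e. 2 and 10 here.
print(float(mean_half), float(mean_09))
```

The paper's contribution concerns spiking irregularity and variability quenching near this critical point, which requires the full recurrent-network model rather than this mean-field caricature.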
1212.4229 | Dirson Jian Li | Dirson Jian Li | The tectonic cause of mass extinctions and the genomic contribution to
biodiversification | 64 pages, 24 figures, 7 tables | null | null | null | q-bio.PE q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite numerous mass extinctions in the Phanerozoic eon, the overall trend
in biodiversity evolution was not blocked and life has never been wiped
out. Almost all possible catastrophic events (large igneous province, asteroid
impact, climate change, regression and transgression, anoxia, acidification,
sudden release of methane clathrate, multi-cause etc.) have been proposed to
explain the mass extinctions. However, we should, above all, clarify at what
timescale and at what possible levels should we explain the mass extinction?
Even though the mass extinctions occurred at short-timescale and at the species
level, we reveal that their cause should be explained in a broader context at
tectonic timescale and at both the molecular level and the species level. The
main result in this paper is that the Phanerozoic biodiversity evolution has
been explained by reconstructing the Sepkoski curve based on climatic, eustatic
and genomic data. Consequently, we point out that the P-Tr extinction was
caused by the tectonically originated climate instability. We also clarify that
the overall trend of biodiversification originated from the underlying genome
size evolution, and that the fluctuation of biodiversity originated from the
interactions among the earth's spheres. The evolution at molecular level had
played a significant role for the survival of life from environmental
disasters.
| [
{
"created": "Tue, 18 Dec 2012 05:03:02 GMT",
"version": "v1"
}
] | 2015-04-14 | [
[
"Li",
"Dirson Jian",
""
]
] | Despite numerous mass extinctions in the Phanerozoic eon, the overall trend in biodiversity evolution was not blocked and life has never been wiped out. Almost all possible catastrophic events (large igneous province, asteroid impact, climate change, regression and transgression, anoxia, acidification, sudden release of methane clathrate, multi-cause etc.) have been proposed to explain the mass extinctions. However, we should, above all, clarify at what timescale and at what possible levels should we explain the mass extinction? Even though the mass extinctions occurred at short-timescale and at the species level, we reveal that their cause should be explained in a broader context at tectonic timescale and at both the molecular level and the species level. The main result in this paper is that the Phanerozoic biodiversity evolution has been explained by reconstructing the Sepkoski curve based on climatic, eustatic and genomic data. Consequently, we point out that the P-Tr extinction was caused by the tectonically originated climate instability. We also clarify that the overall trend of biodiversification originated from the underlying genome size evolution, and that the fluctuation of biodiversity originated from the interactions among the earth's spheres. The evolution at molecular level had played a significant role for the survival of life from environmental disasters. |
1303.3779 | Guillermo Abramson | Guillermo Abramson, Sebastian Gon\c{c}alves, Marcelo F. C. Gomes | Epidemic oscillations: Interaction between delays and seasonality | Presented at the 9th AIMS Conference on Dynamical Systems,
Differential Equations and Applications, Orlando, FL, USA, July 1-5, 2012 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional epidemic models consider that individual processes occur at
constant rates. That is, an infected individual has a constant probability per
unit time of recovering from infection after contagion. This assumption
certainly fails for almost all infectious diseases, in which the infection time
usually follows a probability distribution more or less spread around a mean
value. We show a general treatment for an SIRS model in which both the infected
and the immune phases admit such a description. The general behavior of the
system shows transitions between endemic and oscillating situations that could
be relevant in many real scenarios. The interaction with the other main source
of oscillations, seasonality, is also discussed.
| [
{
"created": "Fri, 15 Mar 2013 14:08:42 GMT",
"version": "v1"
}
] | 2013-03-18 | [
[
"Abramson",
"Guillermo",
""
],
[
"Gonçalves",
"Sebastian",
""
],
[
"Gomes",
"Marcelo F. C.",
""
]
] | Traditional epidemic models consider that individual processes occur at constant rates. That is, an infected individual has a constant probability per unit time of recovering from infection after contagion. This assumption certainly fails for almost all infectious diseases, in which the infection time usually follows a probability distribution more or less spread around a mean value. We show a general treatment for an SIRS model in which both the infected and the immune phases admit such a description. The general behavior of the system shows transitions between endemic and oscillating situations that could be relevant in many real scenarios. The interaction with the other main source of oscillations, seasonality, is also discussed. |
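The delay effect described above is easiest to see in the extreme case of delta-distributed stage durations: infection lasts exactly tau_I and immunity exactly tau_R. The sketch below (an illustration of the mechanism, not the paper's general treatment) tracks incidence history so that recovery and immunity loss are delayed by fixed amounts, producing recurrent epidemic waves.

```python
def sirs_fixed_durations(beta, tau_I, tau_R, t_end, dt=0.05, I0=1e-3):
    """SIRS in which infection lasts exactly tau_I and immunity exactly
    tau_R, integrated via the incidence history: the delayed terms are
    what drive the endemic-versus-oscillating transitions."""
    steps = int(t_end / dt)
    nI, nR = int(tau_I / dt), int(tau_R / dt)
    inc = [0.0] * steps            # new infections in each time step
    inc[0] = I0
    S = 1.0 - I0
    series = []
    for t in range(1, steps):
        I = sum(inc[max(0, t - nI):t])      # currently infectious cohorts
        new = beta * S * I * dt
        inc[t] = new
        back = t - nI - nR                  # cohort losing immunity now
        S += -new + (inc[back] if back >= 0 else 0.0)
        series.append(I)
    return series

I_t = sirs_fixed_durations(beta=1.0, tau_I=4.0, tau_R=30.0, t_end=200.0)
```

Replacing the fixed durations with spread-out distributions, as the paper does, damps these oscillations; adding seasonal forcing to `beta` gives the interaction discussed in the abstract.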
2202.05704 | Alessio Ragno | Alessio Ragno, Dylan Savoia, Roberto Capobianco | Semi-Supervised GCN for learning Molecular Structure-Activity
Relationships | ELLIS Machine Learning for Molecules workshop (ML4Molecules) 2021 | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Since the introduction of artificial intelligence in medicinal chemistry, the
necessity has emerged to analyse how molecular property variation is modulated
by either single atoms or chemical groups. In this paper, we propose to train a
graph-to-graph neural network using semi-supervised learning for attributing
structure-property relationships. As initial case studies we apply the method
to solubility and molecular acidity while checking its consistency in
comparison with known experimental chemical data. As a final goal, our approach
could represent a valuable tool to deal with problems such as activity cliffs,
lead optimization and de-novo drug design.
| [
{
"created": "Tue, 25 Jan 2022 09:09:43 GMT",
"version": "v1"
}
] | 2022-02-14 | [
[
"Ragno",
"Alessio",
""
],
[
"Savoia",
"Dylan",
""
],
[
"Capobianco",
"Roberto",
""
]
] | Since the introduction of artificial intelligence in medicinal chemistry, the necessity has emerged to analyse how molecular property variation is modulated by either single atoms or chemical groups. In this paper, we propose to train a graph-to-graph neural network using semi-supervised learning for attributing structure-property relationships. As initial case studies we apply the method to solubility and molecular acidity while checking its consistency in comparison with known experimental chemical data. As a final goal, our approach could represent a valuable tool to deal with problems such as activity cliffs, lead optimization and de-novo drug design.
1610.09625 | Daniel Harari | Shimon Ullman, Nimrod Dorfman, Daniel Harari | Discovering containment: from infants to machines | null | Cognition 183 (2019) 67-81 | 10.1016/j.cognition.2018.11.001 | null | q-bio.NC cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current artificial learning systems can recognize thousands of visual
categories, or play Go at a champion's level, but cannot explain infants'
learning, in particular the ability to learn complex concepts without guidance,
in a specific order. A notable example is the category of 'containers' and the
notion of containment, one of the earliest spatial relations to be learned,
starting already at 2.5 months, and preceding other common relations (e.g.,
support). Such spontaneous unsupervised learning stands in contrast with
current highly successful computational models, which learn in a supervised
manner, that is, by using large data sets of labeled examples. How can
meaningful concepts be learned without guidance, and what determines the
trajectory of infant learning, making some notions appear consistently earlier
than others?
| [
{
"created": "Sun, 30 Oct 2016 10:26:22 GMT",
"version": "v1"
}
] | 2020-06-23 | [
[
"Ullman",
"Shimon",
""
],
[
"Dorfman",
"Nimrod",
""
],
[
"Harari",
"Daniel",
""
]
] | Current artificial learning systems can recognize thousands of visual categories, or play Go at a champion's level, but cannot explain infants' learning, in particular the ability to learn complex concepts without guidance, in a specific order. A notable example is the category of 'containers' and the notion of containment, one of the earliest spatial relations to be learned, starting already at 2.5 months, and preceding other common relations (e.g., support). Such spontaneous unsupervised learning stands in contrast with current highly successful computational models, which learn in a supervised manner, that is, by using large data sets of labeled examples. How can meaningful concepts be learned without guidance, and what determines the trajectory of infant learning, making some notions appear consistently earlier than others?
1512.08913 | Sanzo Miyazawa | Sanzo Miyazawa | Selection maintaining protein stability at equilibrium | 40 pages, 13 figures, and supplement | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The common understanding of protein evolution has been that neutral or
slightly deleterious mutations are fixed by random drift, and evolutionary rate
is determined primarily by the proportion of neutral mutations. However, recent
studies have revealed that highly expressed genes evolve slowly because of
fitness costs due to misfolded proteins. Here we study selection maintaining
protein stability.
Protein fitness is taken to be $s = \kappa \exp(\beta\Delta G) (1 -
\exp(\beta\Delta\Delta G))$, where $s$ and $\Delta\Delta G$ are selective
advantage and stability change of a mutant protein, $\Delta G$ is the folding
free energy of the wild-type protein, and $\kappa$ represents protein abundance
and indispensability. The distribution of $\Delta\Delta G$ is approximated to
be a bi-Gaussian function, which represents structurally slightly- or
highly-constrained sites. Also, the mean of the distribution is negatively
proportional to $\Delta G$.
The evolution of this gene has an equilibrium ($\Delta G_e$) of protein
stability, the range of which is consistent with experimental values. The
probability distribution of $K_a/K_s$, the ratio of nonsynonymous to synonymous
substitution rate per site, over fixed mutants in the vicinity of the
equilibrium shows that nearly neutral selection is predominant only in
low-abundant, non-essential proteins of $\Delta G_e > -2.5$ kcal/mol. In the
other proteins, positive selection on stabilizing mutations is significant to
maintain protein stability at equilibrium as well as random drift on slightly
negative mutations, although the average $\langle K_a/K_s \rangle$ is less than
1. Slow evolutionary rates can be caused by high protein
abundance/indispensability, which produces positive shifts of $\Delta\Delta G$
through decreasing $\Delta G_e$, and by strong structural constraints, which
directly make $\Delta\Delta G$ more positive.
| [
{
"created": "Wed, 30 Dec 2015 11:41:58 GMT",
"version": "v1"
}
] | 2015-12-31 | [
[
"Miyazawa",
"Sanzo",
""
]
] | The common understanding of protein evolution has been that neutral or slightly deleterious mutations are fixed by random drift, and evolutionary rate is determined primarily by the proportion of neutral mutations. However, recent studies have revealed that highly expressed genes evolve slowly because of fitness costs due to misfolded proteins. Here we study selection maintaining protein stability. Protein fitness is taken to be $s = \kappa \exp(\beta\Delta G) (1 - \exp(\beta\Delta\Delta G))$, where $s$ and $\Delta\Delta G$ are selective advantage and stability change of a mutant protein, $\Delta G$ is the folding free energy of the wild-type protein, and $\kappa$ represents protein abundance and indispensability. The distribution of $\Delta\Delta G$ is approximated to be a bi-Gaussian function, which represents structurally slightly- or highly-constrained sites. Also, the mean of the distribution is negatively proportional to $\Delta G$. The evolution of this gene has an equilibrium ($\Delta G_e$) of protein stability, the range of which is consistent with experimental values. The probability distribution of $K_a/K_s$, the ratio of nonsynonymous to synonymous substitution rate per site, over fixed mutants in the vicinity of the equilibrium shows that nearly neutral selection is predominant only in low-abundant, non-essential proteins of $\Delta G_e > -2.5$ kcal/mol. In the other proteins, positive selection on stabilizing mutations is significant to maintain protein stability at equilibrium as well as random drift on slightly negative mutations, although the average $\langle K_a/K_s \rangle$ is less than 1. Slow evolutionary rates can be caused by high protein abundance/indispensability, which produces positive shifts of $\Delta\Delta G$ through decreasing $\Delta G_e$, and by strong structural constraints, which directly make $\Delta\Delta G$ more positive. |
1408.6819 | Krystal Blanco | Krystal Blanco, Kamal Barley, Anuj Mubayi | Population Dynamics of Wolves and Coyotes at Yellowstone National Park:
Modeling Interference Competition with an Infectious Disease | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gray wolves were reintroduced to Yellowstone National Park (YNP) in 1995. The
population initially flourished, but since 2003 the population has experienced
significant reductions due to factors that may include disease-induced
mortality, illegal hunting, park control programs, vehicle-induced deaths and
intra-species aggression. Despite facing similar conditions, and interference
competition with the wolves, the coyote population at YNP has persisted. In
this paper we introduce an epidemiological framework that incorporates natural,
human-caused and disease-induced mortality as well as interference competition
between two species of predators. The outcomes generated by this theoretical
framework are used to explore the impact of competition and death-induced
mechanisms on predator coexistence. It is hoped that these results on the
competitive dynamics of carnivores in Yellowstone National Park will provide
park management insights that result in policies that keep the reintroduction
of wolves successful.
| [
{
"created": "Fri, 25 Jul 2014 08:56:04 GMT",
"version": "v1"
}
] | 2014-08-29 | [
[
"Blanco",
"Krystal",
""
],
[
"Barley",
"Kamal",
""
],
[
"Mubayi",
"Anuj",
""
]
] | Gray wolves were reintroduced to Yellowstone National Park (YNP) in 1995. The population initially flourished, but since 2003 the population has experienced significant reductions due to factors that may include disease-induced mortality, illegal hunting, park control programs, vehicle-induced deaths and intra-species aggression. Despite facing similar conditions, and interference competition with the wolves, the coyote population at YNP has persisted. In this paper we introduce an epidemiological framework that incorporates natural, human-caused and disease-induced mortality as well as interference competition between two species of predators. The outcomes generated by this theoretical framework are used to explore the impact of competition and death-induced mechanisms on predator coexistence. It is hoped that these results on the competitive dynamics of carnivores in Yellowstone National Park will provide park management insights that result in policies that keep the reintroduction of wolves successful.
1403.2658 | Jian-Jun Shu | Jian-Jun Shu and Li Shan Ouw | Pairwise alignment of the DNA sequence using hypercomplex number
representation | null | Bulletin of Mathematical Biology, Vol. 66, No. 5, pp. 1423-1438,
2004 | 10.1016/j.bulm.2004.01.005 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new set of DNA base-nucleic acid codes and their hypercomplex number
representation have been introduced for taking the probability of each
nucleotide into full account. A new scoring system has been proposed to suit
the hypercomplex number representation of the DNA base-nucleic acid codes and
incorporated with the method of dot matrix analysis and various algorithms of
sequence alignment. The problem of DNA sequence alignment can be processed in a
rather similar way as pairwise alignment of protein sequence.
| [
{
"created": "Sun, 9 Mar 2014 15:13:28 GMT",
"version": "v1"
}
] | 2014-03-12 | [
[
"Shu",
"Jian-Jun",
""
],
[
"Ouw",
"Li Shan",
""
]
] | A new set of DNA base-nucleic acid codes and their hypercomplex number representation have been introduced for taking the probability of each nucleotide into full account. A new scoring system has been proposed to suit the hypercomplex number representation of the DNA base-nucleic acid codes and incorporated with the method of dot matrix analysis and various algorithms of sequence alignment. The problem of DNA sequence alignment can be processed in a rather similar way as pairwise alignment of protein sequence. |
1504.05467 | Jian Peng | Jian Peng, Raghavendra Hosur, Bonnie Berger, and Jinbo Xu | iTreePack: Protein Complex Side-Chain Packing by Dual Decomposition | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein side-chain packing is a critical component in obtaining the 3D
coordinates of a structure and drug discovery. Single-domain protein side-chain
packing has been thoroughly studied. A major challenge in generalizing these
methods to protein complexes is that they, unlike monomers, often have very
large treewidth, and thus algorithms such as TreePack cannot be directly
applied. To address this issue, SCWRL4 treats the complex effectively as a
monomer, heuristically excluding weak interactions to decrease treewidth; as a
result, SCWRL4 generates poor packings on protein interfaces. To date, few
side-chain packing methods exist that are specifically designed for protein
complexes. In this paper, we introduce a method, iTreePack, which solves the
side-chain packing problem for complexes by using a novel combination of dual
decomposition and tree decomposition. In particular, iTreePack overcomes the
problem of large treewidth by decomposing a protein complex into smaller
subgraphs and novelly reformulating the complex side-chain packing problem as a
dual relaxation problem; this allows us to solve the side-chain packing of each
small subgraph separately using tree-decomposition. A projected subgradient
algorithm is applied to enforcing the consistency among the side-chain packings
of all the small subgraphs. Computational results demonstrate that our
iTreePack program outperforms SCWRL4 on protein complexes. In particular,
iTreePack places side-chain atoms much more accurately on very large complexes,
which constitute a significant portion of protein-protein interactions.
Moreover, the advantage of iTreePack over SCWRL4 increases with respect to the
treewidth of a complex. Even for monomeric proteins, iTreePack is much more
efficient than SCWRL and slightly more accurate.
| [
{
"created": "Tue, 21 Apr 2015 15:27:13 GMT",
"version": "v1"
}
] | 2015-04-22 | [
[
"Peng",
"Jian",
""
],
[
"Hosur",
"Raghavendra",
""
],
[
"Berger",
"Bonnie",
""
],
[
"Xu",
"Jinbo",
""
]
] | Protein side-chain packing is a critical component in obtaining the 3D coordinates of a structure and drug discovery. Single-domain protein side-chain packing has been thoroughly studied. A major challenge in generalizing these methods to protein complexes is that they, unlike monomers, often have very large treewidth, and thus algorithms such as TreePack cannot be directly applied. To address this issue, SCWRL4 treats the complex effectively as a monomer, heuristically excluding weak interactions to decrease treewidth; as a result, SCWRL4 generates poor packings on protein interfaces. To date, few side-chain packing methods exist that are specifically designed for protein complexes. In this paper, we introduce a method, iTreePack, which solves the side-chain packing problem for complexes by using a novel combination of dual decomposition and tree decomposition. In particular, iTreePack overcomes the problem of large treewidth by decomposing a protein complex into smaller subgraphs and novelly reformulating the complex side-chain packing problem as a dual relaxation problem; this allows us to solve the side-chain packing of each small subgraph separately using tree-decomposition. A projected subgradient algorithm is applied to enforcing the consistency among the side-chain packings of all the small subgraphs. Computational results demonstrate that our iTreePack program outperforms SCWRL4 on protein complexes. In particular, iTreePack places side-chain atoms much more accurately on very large complexes, which constitute a significant portion of protein-protein interactions. Moreover, the advantage of iTreePack over SCWRL4 increases with respect to the treewidth of a complex. Even for monomeric proteins, iTreePack is much more efficient than SCWRL and slightly more accurate. |
1201.3995 | Louxin Zhang | Yu Zheng, Taoyang Wu, Louxin Zhang | Reconciliation of Gene and Species Trees With Polytomies | 37 pages, 9 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Millions of genes in the modern species belong to only thousands
of `gene families'. A gene family includes instances of the same gene in
different species (orthologs) and duplicate genes in the same species
(paralogs). Genes are gained and lost during evolution. With advances in
sequencing technology, researchers are able to investigate the important roles
of gene duplications and losses in adaptive evolution. Because of complex gene
evolution, ortholog identification is a basic but difficult task in comparative
genomics. A key method for the task is to use an explicit model of the
evolutionary history of the genes being studied, called the gene (family) tree.
It compares the gene tree with the evolutionary history of the species in which
the genes reside, called the species tree, using the procedure known as tree
reconciliation. Reconciling binary gene and species trees is simple. However,
both gene and species trees may be non-binary in practice and thus tree
reconciliation presents challenging problems. Here, non-binary gene and species
tree reconciliation is studied in a binary refinement model.
Results: The problem of reconciling arbitrary gene and species trees is
proved NP-hard even for the duplication cost. We then present the first
efficient method for reconciling a non-binary gene tree and a non-binary
species tree. It attempts to find binary refinements of the given gene and
species trees that minimize reconciliation cost. Our algorithms have been
implemented in software to support quick automated analysis of large data
sets.
Availability: The program, together with the source code, is available at its
online server http://phylotoo.appspot.com.
| [
{
"created": "Thu, 19 Jan 2012 09:32:51 GMT",
"version": "v1"
},
{
"created": "Thu, 3 May 2012 02:23:39 GMT",
"version": "v2"
}
] | 2012-05-04 | [
[
"Zheng",
"Yu",
""
],
[
"Wu",
"Taoyang",
""
],
[
"Zhang",
"Louxin",
""
]
] | Motivation: Millions of genes in the modern species belong to only thousands of `gene families'. A gene family includes instances of the same gene in different species (orthologs) and duplicate genes in the same species (paralogs). Genes are gained and lost during evolution. With advances in sequencing technology, researchers are able to investigate the important roles of gene duplications and losses in adaptive evolution. Because of complex gene evolution, ortholog identification is a basic but difficult task in comparative genomics. A key method for the task is to use an explicit model of the evolutionary history of the genes being studied, called the gene (family) tree. It compares the gene tree with the evolutionary history of the species in which the genes reside, called the species tree, using the procedure known as tree reconciliation. Reconciling binary gene and species trees is simple. However, both gene and species trees may be non-binary in practice and thus tree reconciliation presents challenging problems. Here, non-binary gene and species tree reconciliation is studied in a binary refinement model. Results: The problem of reconciling arbitrary gene and species trees is proved NP-hard even for the duplication cost. We then present the first efficient method for reconciling a non-binary gene tree and a non-binary species tree. It attempts to find binary refinements of the given gene and species trees that minimize reconciliation cost. Our algorithms have been implemented in software to support quick automated analysis of large data sets. Availability: The program, together with the source code, is available at its online server http://phylotoo.appspot.com.
1608.03471 | George Constable Dr | George W. A. Constable, Tim Rogers, Alan J. McKane, and Corina E.
Tarnita | Demographic noise can reverse the direction of deterministic selection | 25 pages, 12 figures | Proc. Natl. Acad. Sci., 113:E4745-E4754 (2016) | 10.1073/pnas.1603693113 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deterministic evolutionary theory robustly predicts that populations
displaying altruistic behaviours will be driven to extinction by mutant cheats
that absorb common benefits but do not themselves contribute. Here we show that
when demographic stochasticity is accounted for, selection can in fact act in
the reverse direction to that predicted deterministically, instead favouring
cooperative behaviors that appreciably increase the carrying capacity of the
population. Populations that exist in larger numbers experience a selective
advantage by being more stochastically robust to invasions than smaller
populations, and this advantage can persist even in the presence of
reproductive costs. We investigate this general effect in the specific context
of public goods production and find conditions for stochastic selection
reversal leading to the success of public good producers. This insight,
developed here analytically, is missed by both the deterministic analysis as
well as standard game theoretic models that enforce a fixed population size.
The effect is found to be amplified by space; in this scenario we find that
selection reversal occurs within biologically reasonable parameter regimes for
microbial populations. Beyond the public good problem, we formulate a general
mathematical framework for models that may exhibit stochastic selection
reversal. In this context, we describe a stochastic analogue to r-K theory, by
which small populations can evolve to higher densities in the absence of
disturbance.
| [
{
"created": "Thu, 11 Aug 2016 14:13:21 GMT",
"version": "v1"
}
] | 2016-10-03 | [
[
"Constable",
"George W. A.",
""
],
[
"Rogers",
"Tim",
""
],
[
"McKane",
"Alan J.",
""
],
[
"Tarnita",
"Corina E.",
""
]
] | Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviours will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favouring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by both the deterministic analysis as well as standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analogue to r-K theory, by which small populations can evolve to higher densities in the absence of disturbance. |
2312.16662 | James P. Crutchfield | James P. Crutchfield and David D. Dunn and Alexandra M. Jurgens | Whales in Space: Experiencing Aquatic Animals in Their Natural Place
with the Hydroambiphone | 11 pages, 4 figures;
https://csc.ucdavis.edu/~cmg/compmech/pubs/whalesinspace.htm | null | null | null | q-bio.PE physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recording the undersea three-dimensional bioacoustic sound field in real-time
promises major benefits to marine behavior studies. We describe a novel
hydrophone array -- the hydroambiphone (HAP) -- that adapts ambisonic
spatial-audio theory to sound propagation in ocean waters to realize many of
these benefits through spatial localization and acoustic immersion. Deploying
it to monitor the humpback whales (Megaptera novaeangliae) of southeast Alaska
demonstrates that HAP recording provides a qualitatively-improved experience of
their undersea behaviors; revealing, for example, new aspects of social
coordination during bubble-net feeding. On the practical side, spatialized
hydrophone recording greatly reduces post-field analytical and computational
challenges -- such as the "cocktail party problem" of distinguishing single
sources in a complicated and crowded auditory environment -- that are common to
field recordings. On the scientific side, comparing the HAP's capabilities to
single-hydrophone and nonspatialized recordings yields new insights into the
spatial information that allows animals to thrive in complex acoustic
environments. Spatialized bioacoustics markedly improves access to the
humpbacks' undersea acoustic environment and expands our appreciation of their
rich vocal lives.
| [
{
"created": "Wed, 27 Dec 2023 18:17:15 GMT",
"version": "v1"
}
] | 2023-12-29 | [
[
"Crutchfield",
"James P.",
""
],
[
"Dunn",
"David D.",
""
],
[
"Jurgens",
"Alexandra M.",
""
]
] | Recording the undersea three-dimensional bioacoustic sound field in real-time promises major benefits to marine behavior studies. We describe a novel hydrophone array -- the hydroambiphone (HAP) -- that adapts ambisonic spatial-audio theory to sound propagation in ocean waters to realize many of these benefits through spatial localization and acoustic immersion. Deploying it to monitor the humpback whales (Megaptera novaeangliae) of southeast Alaska demonstrates that HAP recording provides a qualitatively-improved experience of their undersea behaviors; revealing, for example, new aspects of social coordination during bubble-net feeding. On the practical side, spatialized hydrophone recording greatly reduces post-field analytical and computational challenges -- such as the "cocktail party problem" of distinguishing single sources in a complicated and crowded auditory environment -- that are common to field recordings. On the scientific side, comparing the HAP's capabilities to single-hydrophone and nonspatialized recordings yields new insights into the spatial information that allows animals to thrive in complex acoustic environments. Spatialized bioacoustics markedly improves access to the humpbacks' undersea acoustic environment and expands our appreciation of their rich vocal lives. |
1610.02471 | Liane Gabora | Liane Gabora | The Creative Process in Musical Composition: An Introspective Account | 9 pages | In Hans-Joachim Braun (Ed.) Creativity: Technology and music (pp.
131-141). Frankfurt / New York: Peter Lang (2016) | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This chapter charts the creative process in the composition of a piece of
music titled 'Stream not gone dry' that unfolded, while I was primarily
occupied with other matters, over the course of nearly two decades. It avoids
discussion of the technical aspects of musical composition, and it can be read
by someone with no formal knowledge of music. The focus here is on what the
process of composing this particular piece of music says about how the creative
process works. My interpretation of the music-making process may be biased by
my academic view of creativity, but I believe that the influence works
primarily in the other direction, my understanding of how the creative process
works is derived from experiences creating. This intuitive understanding is
shaped over time by the process of reading scholarly papers on creativity and
working them into my own evolving theory of creativity, but the papers that I
resonate with and incorporate are those that are in line with my experience.
This chapter just makes the influence of personal experience more explicit than
in other more scholarly writings on creativity.
| [
{
"created": "Sat, 8 Oct 2016 03:16:51 GMT",
"version": "v1"
}
] | 2016-10-11 | [
[
"Gabora",
"Liane",
""
]
] | This chapter charts the creative process in the composition of a piece of music titled 'Stream not gone dry' that unfolded, while I was primarily occupied with other matters, over the course of nearly two decades. It avoids discussion of the technical aspects of musical composition, and it can be read by someone with no formal knowledge of music. The focus here is on what the process of composing this particular piece of music says about how the creative process works. My interpretation of the music-making process may be biased by my academic view of creativity, but I believe that the influence works primarily in the other direction, my understanding of how the creative process works is derived from experiences creating. This intuitive understanding is shaped over time by the process of reading scholarly papers on creativity and working them into my own evolving theory of creativity, but the papers that I resonate with and incorporate are those that are in line with my experience. This chapter just makes the influence of personal experience more explicit than in other more scholarly writings on creativity. |
1502.07045 | Andrew Francis | Andrew R. Francis and Mike Steel | Which phylogenetic networks are merely trees with additional arcs? | The final version of this article will appear in Systematic Biology.
20 pages, 7 figures | null | null | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A binary phylogenetic network may or may not be obtainable from a tree by the
addition of directed edges (arcs) between tree arcs. Here, we establish a
precise and easily tested criterion (based on `2-SAT') that efficiently
determines whether or not any given network can be realized in this way.
Moreover, the proof provides a polynomial-time algorithm for finding one or
more trees (when they exist) on which the network can be based. A number of
interesting consequences are presented as corollaries; these lead to some
further relevant questions and observations, which we outline in the
conclusion.
| [
{
"created": "Wed, 25 Feb 2015 03:58:35 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Apr 2015 08:42:21 GMT",
"version": "v2"
},
{
"created": "Thu, 21 May 2015 20:19:00 GMT",
"version": "v3"
}
] | 2015-05-25 | [
[
"Francis",
"Andrew R.",
""
],
[
"Steel",
"Mike",
""
]
] | A binary phylogenetic network may or may not be obtainable from a tree by the addition of directed edges (arcs) between tree arcs. Here, we establish a precise and easily tested criterion (based on `2-SAT') that efficiently determines whether or not any given network can be realized in this way. Moreover, the proof provides a polynomial-time algorithm for finding one or more trees (when they exist) on which the network can be based. A number of interesting consequences are presented as corollaries; these lead to some further relevant questions and observations, which we outline in the conclusion. |
q-bio/0404010 | Matthew Berryman | Matthew J. Berryman, Andrew Allison and Derek Abbott | Mutual information for examining correlations in DNA | 8 pages, 3 figures | null | null | null | q-bio.PE | null | This paper examines two methods for finding whether long-range correlations
exist in DNA: a fractal measure and a mutual information technique. We evaluate
the performance and implications of these methods in detail. In particular we
explore their use comparing DNA sequences from a variety of sources. Using
software for performing in silico mutations, we also consider evolutionary
events leading to long range correlations and analyse these correlations using
the techniques presented. Comparisons are made between these virtual sequences,
randomly generated sequences, and real sequences. We also explore correlations
in chromosomes from different species.
| [
{
"created": "Wed, 7 Apr 2004 07:49:26 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Berryman",
"Matthew J.",
""
],
[
"Allison",
"Andrew",
""
],
[
"Abbott",
"Derek",
""
]
] | This paper examines two methods for finding whether long-range correlations exist in DNA: a fractal measure and a mutual information technique. We evaluate the performance and implications of these methods in detail. In particular we explore their use in comparing DNA sequences from a variety of sources. Using software for performing in silico mutations, we also consider evolutionary events leading to long range correlations and analyse these correlations using the techniques presented. Comparisons are made between these virtual sequences, randomly generated sequences, and real sequences. We also explore correlations in chromosomes from different species.
2304.03204 | Peter Taylor | Vytene Janiukstyte, Thomas W Owen, Umair J Chaudhary, Beate Diehl,
Louis Lemieux, John S Duncan, Jane de Tisi, Yujiang Wang, Peter N Taylor | Normative brain mapping using scalp EEG and potential clinical
application | 4 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | A normative electrographic activity map could be a powerful resource to
understand normal brain function and identify abnormal activity. Here, we
present a normative brain map using scalp EEG in terms of relative band power.
In this exploratory study we investigate its temporal stability, its similarity
to other imaging modalities, and explore a potential clinical application.
We constructed scalp EEG normative maps of brain dynamics from 17 healthy
controls using source-localised resting-state scalp recordings. We then
correlated these maps with those acquired from MEG and intracranial EEG to
investigate their similarity. Lastly, we use the normative maps to lateralise
abnormal regions in epilepsy.
Spatial patterns of band powers were broadly consistent with previous
literature and stable across recordings. Scalp EEG normative maps were most
similar to other modalities in the alpha band, and relatively similar across
most bands. Towards a clinical application in epilepsy, we found abnormal
temporal regions ipsilateral to the epileptogenic hemisphere.
Scalp EEG relative band power normative maps are spatially stable across
time, in keeping with MEG and intracranial EEG results. Normative mapping is
feasible and may be potentially clinically useful in epilepsy. Future studies
with larger sample sizes and high-density EEG are now required for validation.
| [
{
"created": "Thu, 6 Apr 2023 16:26:55 GMT",
"version": "v1"
}
] | 2023-04-07 | [
[
"Janiukstyte",
"Vytene",
""
],
[
"Owen",
"Thomas W",
""
],
[
"Chaudhary",
"Umair J",
""
],
[
"Diehl",
"Beate",
""
],
[
"Lemieux",
"Louis",
""
],
[
"Duncan",
"John S",
""
],
[
"de Tisi",
"Jane",
""
],
[
"Wang",
"Yujiang",
""
],
[
"Taylor",
"Peter N",
""
]
] | A normative electrographic activity map could be a powerful resource to understand normal brain function and identify abnormal activity. Here, we present a normative brain map using scalp EEG in terms of relative band power. In this exploratory study we investigate its temporal stability, its similarity to other imaging modalities, and explore a potential clinical application. We constructed scalp EEG normative maps of brain dynamics from 17 healthy controls using source-localised resting-state scalp recordings. We then correlated these maps with those acquired from MEG and intracranial EEG to investigate their similarity. Lastly, we use the normative maps to lateralise abnormal regions in epilepsy. Spatial patterns of band powers were broadly consistent with previous literature and stable across recordings. Scalp EEG normative maps were most similar to other modalities in the alpha band, and relatively similar across most bands. Towards a clinical application in epilepsy, we found abnormal temporal regions ipsilateral to the epileptogenic hemisphere. Scalp EEG relative band power normative maps are spatially stable across time, in keeping with MEG and intracranial EEG results. Normative mapping is feasible and may be potentially clinically useful in epilepsy. Future studies with larger sample sizes and high-density EEG are now required for validation. |
1709.05971 | Juan Campos Quemada | Juan Campos Quemada | Quantum origin of life: methodological, epistemological and ontological
issues | 63 pages, six appendix, essay | null | null | null | q-bio.OT quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this essay is to analyze the role of quantum mechanics as an
inherent characteristic of life. During the last ten years the problem of the
origin of life has become an innovative research subject approached by many
authors. The essay is divided into three parts: the first deals with the
problem of life from a philosophical and biological perspective. The second
presents the conceptual and methodological basis of the essay which is founded
on the Information Theory and the Quantum Theory. This basis is then used, in
the third part, to discuss the different arguments and conjectures of a quantum
origin of life. There are many philosophical views on the problem of life, two
of which are especially important at the moment: reductive physicalism and
biosystemic emergentism. From a scientific perspective, all the theories and
experimental evidence put forward by Biology can be summed up into two main
research themes: the RNA world and the vesicular theory. The RNA world, from a
physicalist point of view, maintains that replication is the essence of life
while the vesicular theory, founded on biosystemic grounds, believes the
essence of life can be found in cellular metabolism. This essay uses the
Information Theory to discard the idea of a spontaneous emergence of life
through replication. Understanding the nature and basis of quantum mechanics is
fundamental in order to be able to comprehend the advantages of using quantum
computation to be able to increase the probabilities of existence of
auto-replicative structures. Different arguments are set forth such as the inherence
of quantum mechanics to the origin of life. Finally, in order to try to resolve
the question of auto-replication, three scientific propositions are put
forward: Q-life, the quantum combinatory library and the role of algorithms in
the origin of genetic language.
| [
{
"created": "Fri, 15 Sep 2017 11:34:14 GMT",
"version": "v1"
}
] | 2017-09-19 | [
[
"Quemada",
"Juan Campos",
""
]
] | The aim of this essay is to analyze the role of quantum mechanics as an inherent characteristic of life. During the last ten years the problem of the origin of life has become an innovative research subject approached by many authors. The essay is divided into three parts: the first deals with the problem of life from a philosophical and biological perspective. The second presents the conceptual and methodological basis of the essay which is founded on the Information Theory and the Quantum Theory. This basis is then used, in the third part, to discuss the different arguments and conjectures of a quantum origin of life. There are many philosophical views on the problem of life, two of which are especially important at the moment: reductive physicalism and biosystemic emergentism. From a scientific perspective, all the theories and experimental evidence put forward by Biology can be summed up into two main research themes: the RNA world and the vesicular theory. The RNA world, from a physicalist point of view, maintains that replication is the essence of life while the vesicular theory, founded on biosystemic grounds, believes the essence of life can be found in cellular metabolism. This essay uses the Information Theory to discard the idea of a spontaneous emergence of life through replication. Understanding the nature and basis of quantum mechanics is fundamental in order to be able to comprehend the advantages of using quantum computation to be able to increase the probabilities of existence of auto-replicative structures. Different arguments are set forth such as the inherence of quantum mechanics to the origin of life. Finally, in order to try to resolve the question of auto-replication, three scientific propositions are put forward: Q-life, the quantum combinatory library and the role of algorithms in the origin of genetic language.
2307.02172 | K. Anton Feenstra | Maurits Dijkstra, Punto Bawono, Isabel Houtkamp, Jose
Gavald\'a-Garci\'a, Mascha Okounev, Robbin Bouwmeester, Bas Stringer, Jaap
Heringa, Sanne Abeln, K. Anton Feenstra, Juami H. M. van Gils | Structural Property Prediction | editorial responsability: Juami H. M. van Gils, K. Anton Feenstra,
Sanne Abeln. This chapter is part of the book "Introduction to Protein
Structural Bioinformatics". The Preface arXiv:1801.09442 contains links to
all the (published) chapters | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | While many good textbooks are available on Protein Structure, Molecular
Simulations, Thermodynamics and Bioinformatics methods in general, there is no
good introductory level book for the field of Structural Bioinformatics. This
book aims to give an introduction into Structural Bioinformatics, which is
where the previous topics meet to explore three dimensional protein structures
through computational analysis. We provide an overview of existing
computational techniques, to validate, simulate, predict and analyse protein
structures. More importantly, it will aim to provide practical knowledge about
how and when to use such techniques. We will consider proteins from three major
vantage points: Protein structure quantification, Protein structure prediction,
and Protein simulation & dynamics.
Some structural properties of proteins that are closely linked to their
function may be easier (or much faster) to predict from sequence than the
complete tertiary structure; for example, secondary structure, surface
accessibility, flexibility, disorder, interface regions or hydrophobic patches.
Serving as building blocks for the native protein fold, these structural
properties also contain important structural and functional information not
apparent from the amino acid sequence. Here, we will first give an introduction
into the application of machine learning for structural property prediction,
and explain the concepts of cross-validation and benchmarking. Next, we will
review various methods that incorporate knowledge of these concepts to predict
those structural properties, such as secondary structure, surface
accessibility, disorder and flexibility, and aggregation.
| [
{
"created": "Wed, 5 Jul 2023 10:13:02 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2023 15:51:28 GMT",
"version": "v2"
}
] | 2023-09-07 | [
[
"Dijkstra",
"Maurits",
""
],
[
"Bawono",
"Punto",
""
],
[
"Houtkamp",
"Isabel",
""
],
[
"Gavaldá-Garciá",
"Jose",
""
],
[
"Okounev",
"Mascha",
""
],
[
"Bouwmeester",
"Robbin",
""
],
[
"Stringer",
"Bas",
""
],
[
"Heringa",
"Jaap",
""
],
[
"Abeln",
"Sanne",
""
],
[
"Feenstra",
"K. Anton",
""
],
[
"van Gils",
"Juami H. M.",
""
]
] | While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which is where the previous topics meet to explore three dimensional protein structures through computational analysis. We provide an overview of existing computational techniques, to validate, simulate, predict and analyse protein structures. More importantly, it will aim to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. Some structural properties of proteins that are closely linked to their function may be easier (or much faster) to predict from sequence than the complete tertiary structure; for example, secondary structure, surface accessibility, flexibility, disorder, interface regions or hydrophobic patches. Serving as building blocks for the native protein fold, these structural properties also contain important structural and functional information not apparent from the amino acid sequence. Here, we will first give an introduction into the application of machine learning for structural property prediction, and explain the concepts of cross-validation and benchmarking. Next, we will review various methods that incorporate knowledge of these concepts to predict those structural properties, such as secondary structure, surface accessibility, disorder and flexibility, and aggregation. |
1309.5362 | Angelo Calv\~ao | A. M. Calv\~ao and E. Brigatti | The role of neighbours selection on cohesion and order of swarms | 15 pages, 9 figures, Plos One, May 2014 | null | 10.1371/journal.pone.0094221 | null | q-bio.QM physics.soc-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a multi-agent model for exploring how selection of neighbours
determines some aspects of order and cohesion in swarms. The model algorithm
states that every agent's motion seeks an optimal distance from the nearest
topological neighbour encompassed in a limited attention field. Despite the
great simplicity of the implementation, varying the amplitude of the attention
landscape, swarms pass from cohesive and regular structures towards fragmented
and irregular configurations. Interestingly, this movement rule is an ideal
candidate for implementing the selfish herd hypothesis which explains
aggregation of alarmed groups of social animals.
| [
{
"created": "Fri, 20 Sep 2013 19:13:25 GMT",
"version": "v1"
},
{
"created": "Mon, 12 May 2014 15:40:23 GMT",
"version": "v2"
}
] | 2014-05-13 | [
[
"Calvão",
"A. M.",
""
],
[
"Brigatti",
"E.",
""
]
] | We introduce a multi-agent model for exploring how selection of neighbours determines some aspects of order and cohesion in swarms. The model algorithm states that every agent's motion seeks an optimal distance from the nearest topological neighbour encompassed in a limited attention field. Despite the great simplicity of the implementation, varying the amplitude of the attention landscape, swarms pass from cohesive and regular structures towards fragmented and irregular configurations. Interestingly, this movement rule is an ideal candidate for implementing the selfish herd hypothesis which explains aggregation of alarmed groups of social animals.
1510.01100 | Karin Vadovi\v{c}ov\'a | Karin Vadovi\v{c}ov\'a | SWS promoting MHb-IPN-MRN circuit opposes the theta promoting circuit,
active wake and REM sleep | Conclusion section was added to highlight some predicted interactions
between regions involved in sleep/wake control and contextual memory. Line on
MHb, IPN and LHb inhibition of MS/vDBB (theta inducing) and hDBB (gamma
coupling inducing) was added. This paper was also submitted to Frontiers in
Neuroscience, on 5th October 2015, and re-submitted after the Reviewers
withdrew after 5 months | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hippocampus was by multisynaptic axonal tracts linked to medial habenula in
my DTI study. These tracts linked hippocampus to septum, and amygdala to BNST.
Septal and BNST axons passed by anteromedial thalamic nucleus to MHb, which
projected to pineal gland. Question is what is the MHb doing and why it
receives information from hippocampus via septum? This study explores the MHb
connectivity, predicts its functional role and how it is linked to memory.
Combination of known findings about the septum and MHb connectivity and
function led to this circuit-based idea that posterior septum activates MHb,
MHb activates IPN, and IPN stimulates MRN and its serotonin release. Proposed
idea is that this MHb-IPN-MRN circuit promotes slow wave sleep (SWS), high
serotonin and low acetylcholine state. My prediction is that this SWS-promoting
circuit reciprocally suppresses the theta oscillations promoting circuit,
linked to high acetylcholine levels in brain, and formed by supramamillary area
projections to the medial septum (MS) that induces theta rhythm in hippocampus
and other theta-coupled regions. The MHb-IPN-MRN pathway likely inhibits,
possibly reciprocally, some regions that have stimulating input to the
theta-generating SUM and MS, such as the wake promoting nucleus incertus,
posterior and lateral hypothalamus and LDT, but also the REM sleep inducing
neurons in LDT and PnO. Thus, the proposed SWS-promoting circuit attenuates the
output of theta-promoting regions, both the active wake-on and REM-on regions.
The theta rhythm in wake state is linked to recording and binding information
with their spatio-temporal and relational context by hippocampus, while the SWS
supports rest and replay of hippocampally stored information and their cortical
reactivations, e.g. in retrosplenial cortex linked to autobiographical memory or
in prefrontal cortex that can combine information from any source.
| [
{
"created": "Mon, 5 Oct 2015 11:10:15 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Oct 2015 16:58:30 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Oct 2015 18:53:23 GMT",
"version": "v3"
},
{
"created": "Thu, 15 Oct 2015 14:43:56 GMT",
"version": "v4"
},
{
"created": "Thu, 29 Oct 2015 12:40:31 GMT",
"version": "v5"
},
{
"created": "Mon, 27 Jun 2016 08:32:43 GMT",
"version": "v6"
}
] | 2016-06-28 | [
[
"Vadovičová",
"Karin",
""
]
] | Hippocampus was by multisynaptic axonal tracts linked to medial habenula in my DTI study. These tracts linked hippocampus to septum, and amygdala to BNST. Septal and BNST axons passed by anteromedial thalamic nucleus to MHb, which projected to pineal gland. Question is what is the MHb doing and why it receives information from hippocampus via septum? This study explores the MHb connectivity, predicts its functional role and how it is linked to memory. Combination of known findings about the septum and MHb connectivity and function led to this circuit-based idea that posterior septum activates MHb, MHb activates IPN, and IPN stimulates MRN and its serotonin release. Proposed idea is that this MHb-IPN-MRN circuit promotes slow wave sleep (SWS), high serotonin and low acetylcholine state. My prediction is that this SWS-promoting circuit reciprocally suppresses the theta oscillations promoting circuit, linked to high acetylcholine levels in brain, and formed by supramamillary area projections to the medial septum (MS) that induces theta rhythm in hippocampus and other theta-coupled regions. The MHb-IPN-MRN pathway likely inhibits, possibly reciprocally, some regions that have stimulating input to the theta-generating SUM and MS, such as the wake promoting nucleus incertus, posterior and lateral hypothalamus and LDT, but also the REM sleep inducing neurons in LDT and PnO. Thus, the proposed SWS-promoting circuit attenuates the output of theta-promoting regions, both the active wake-on and REM-on regions. The theta rhythm in wake state is linked to recording and binding information with their spatio-temporal and relational context by hippocampus, while the SWS supports rest and replay of hippocampally stored information and their cortical reactivations, e.g. in retrosplenial cortex linked to autobiographical memory or in prefrontal cortex that can combine information from any source.
1607.07359 | Prakash Narayan PhD | Jake A. Nieto, Michael A. Yamin, Itzhak D. Goldberg and Prakash
Narayan | An Empirical Biomarker-based Calculator for Autosomal Recessive
Polycystic Kidney Disease - The Nieto-Narayan Formula | 3 tables and 8 figures | null | 10.1371/journal.pone.0163063 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autosomal recessive polycystic kidney disease (ARPKD) is associated with progressive
enlargement of the kidneys fuelled by the formation and expansion of
fluid-filled cysts. The disease is congenital and children that do not succumb
to it during the neonatal period will, by age 10 years, more often than not,
require nephrectomy+renal replacement therapy for management of both pain and
renal insufficiency. Since increasing cystic index (CI; percent of kidney
occupied by cysts) drives both renal expansion and organ dysfunction,
management of these patients, including decisions such as elective nephrectomy
and prioritization on the transplant waitlist, could clearly benefit from
serial determination of CI. So also, clinical trials in ARPKD evaluating the
efficacy of novel drug candidates could benefit from serial determination of
CI. Although ultrasound is currently the imaging modality of choice for
diagnosis of ARPKD, its utilization for assessing disease progression is highly
limited. Magnetic resonance imaging or computed tomography, although more
reliable for determination of CI, are expensive, time-consuming and somewhat
impractical in the pediatric population. Using a well-established mammalian
model of ARPKD, we undertook a big data-like analysis of minimally- or
non-invasive serum and urine biomarkers of renal injury/dysfunction to derive a
family of equations for estimating CI. We then applied a signal averaging
protocol to distill these equations to a single empirical formula for
calculation of CI. Such a formula will eventually find use in identifying and
monitoring patients at high risk for progressing to end-stage renal disease and
aid in the conduct of clinical trials.
| [
{
"created": "Mon, 25 Jul 2016 16:54:12 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jul 2016 18:00:45 GMT",
"version": "v2"
}
] | 2017-02-08 | [
[
"Nieto",
"Jake A.",
""
],
[
"Yamin",
"Michael A.",
""
],
[
"Goldberg",
"Itzhak D.",
""
],
[
"Narayan",
"Prakash",
""
]
] | Autosomal recessive polycystic kidney disease (ARPKD) is associated with progressive enlargement of the kidneys fuelled by the formation and expansion of fluid-filled cysts. The disease is congenital and children that do not succumb to it during the neonatal period will, by age 10 years, more often than not, require nephrectomy+renal replacement therapy for management of both pain and renal insufficiency. Since increasing cystic index (CI; percent of kidney occupied by cysts) drives both renal expansion and organ dysfunction, management of these patients, including decisions such as elective nephrectomy and prioritization on the transplant waitlist, could clearly benefit from serial determination of CI. So also, clinical trials in ARPKD evaluating the efficacy of novel drug candidates could benefit from serial determination of CI. Although ultrasound is currently the imaging modality of choice for diagnosis of ARPKD, its utilization for assessing disease progression is highly limited. Magnetic resonance imaging or computed tomography, although more reliable for determination of CI, are expensive, time-consuming and somewhat impractical in the pediatric population. Using a well-established mammalian model of ARPKD, we undertook a big data-like analysis of minimally- or non-invasive serum and urine biomarkers of renal injury/dysfunction to derive a family of equations for estimating CI. We then applied a signal averaging protocol to distill these equations to a single empirical formula for calculation of CI. Such a formula will eventually find use in identifying and monitoring patients at high risk for progressing to end-stage renal disease and aid in the conduct of clinical trials.
q-bio/0610049 | Jean Sebastien Saulnier-Blache | Jean S\'ebastien Saulnier-Blache | Contr\^{o}le paracrine du d\'{e}veloppement du tissu adipeux par
l'autotaxine et l'acide lysophosphatidique | null | null | null | q-bio.BM | null | Secretion and role of autotaxin and lysophosphatidic acid in adipose tissue.
In obesity, adipocyte hypertrophy is often associated with recruitment of new
fat cells (adipogenesis) under the control of circulating and local regulatory
factors. Among the different lipids released in the extracellular compartment
of adipocytes, our group found the presence of lysophosphatidic acid (LPA). LPA
is a bioactive phospholipid able to regulate several cell responses via the
activation of specific G-protein coupled membrane receptors. Our group found
that LPA increases preadipocyte proliferation and inhibits adipogenesis via the
activation of LPA1 receptor subtype. Extracellular LPA-synthesis is catalyzed
by a lysophospholipase D secreted by adipocytes: autotaxin (ATX). Adipocyte
ATX expression strongly increases with adipogenesis as well as in individuals
exhibiting type 2 diabetes associated with massive obesity. A possible
contribution of ATX and LPA as paracrine regulators of adipogenesis and obesity
associated diabetes is proposed.
| [
{
"created": "Thu, 26 Oct 2006 18:56:04 GMT",
"version": "v1"
}
] | 2016-08-16 | [
[
"Saulnier-Blache",
"Jean Sébastien",
""
]
] | Secretion and role of autotaxin and lysophosphatidic acid in adipose tissue. In obesity, adipocyte hypertrophy is often associated with recruitment of new fat cells (adipogenesis) under the control of circulating and local regulatory factors. Among the different lipids released in the extracellular compartment of adipocytes, our group found the presence of lysophosphatidic acid (LPA). LPA is a bioactive phospholipid able to regulate several cell responses via the activation of specific G-protein coupled membrane receptors. Our group found that LPA increases preadipocyte proliferation and inhibits adipogenesis via the activation of LPA1 receptor subtype. Extracellular LPA-synthesis is catalyzed by a lysophospholipase D secreted by adipocytes: autotaxin (ATX). Adipocyte ATX expression strongly increases with adipogenesis as well as in individuals exhibiting type 2 diabetes associated with massive obesity. A possible contribution of ATX and LPA as paracrine regulators of adipogenesis and obesity associated diabetes is proposed.
2311.12884 | Asmita Poddar | Asmita Poddar, Vladimir Uzun, Elizabeth Tunbridge, Wilfried Haerty,
Alejo Nevado-Holgado | Identifying DNA Sequence Motifs Using Deep Learning | null | null | null | null | q-bio.GN cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Splice sites play a crucial role in gene expression, and accurate prediction
of these sites in DNA sequences is essential for diagnosing and treating
genetic disorders. We address the challenge of splice site prediction by
introducing DeepDeCode, an attention-based deep learning sequence model to
capture the long-term dependencies in the nucleotides in DNA sequences. We
further propose using visualization techniques for accurate identification of
sequence motifs, which enhance the interpretability and trustworthiness of
DeepDeCode. We compare DeepDeCode to other state-of-the-art methods for splice
site prediction and demonstrate its accuracy, explainability and efficiency.
Given the results of our methodology, we expect that it can be used for healthcare
applications to reason about genomic processes and be extended to discover new
splice sites and genomic regulatory elements.
| [
{
"created": "Mon, 20 Nov 2023 23:14:28 GMT",
"version": "v1"
}
] | 2023-11-23 | [
[
"Poddar",
"Asmita",
""
],
[
"Uzun",
"Vladimir",
""
],
[
"Tunbridge",
"Elizabeth",
""
],
[
"Haerty",
"Wilfried",
""
],
[
"Nevado-Holgado",
"Alejo",
""
]
] | Splice sites play a crucial role in gene expression, and accurate prediction of these sites in DNA sequences is essential for diagnosing and treating genetic disorders. We address the challenge of splice site prediction by introducing DeepDeCode, an attention-based deep learning sequence model to capture the long-term dependencies in the nucleotides in DNA sequences. We further propose using visualization techniques for accurate identification of sequence motifs, which enhance the interpretability and trustworthiness of DeepDeCode. We compare DeepDeCode to other state-of-the-art methods for splice site prediction and demonstrate its accuracy, explainability and efficiency. Given the results of our methodology, we expect that it can be used for healthcare applications to reason about genomic processes and be extended to discover new splice sites and genomic regulatory elements.
0809.4059 | Kilian Koepsell | Kilian Koepsell and Friedrich T. Sommer | Information transmission in oscillatory neural activity | 18 pages, 8 figures, to appear in Biological Cybernetics | Biological Cybernetics (2008) 99:403-416 | 10.1007/s00422-008-0273-6 | null | q-bio.NC cs.IT math.IT q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Periodic neural activity not locked to the stimulus or to motor responses is
usually ignored. Here, we present new tools for modeling and quantifying the
information transmission based on periodic neural activity that occurs with
quasi-random phase relative to the stimulus. We propose a model to reproduce
characteristic features of oscillatory spike trains, such as histograms of
inter-spike intervals and phase locking of spikes to an oscillatory influence.
The proposed model is based on an inhomogeneous Gamma process governed by a
density function that is a product of the usual stimulus-dependent rate and a
quasi-periodic function. Further, we present an analysis method generalizing
the direct method (Rieke et al, 1999; Brenner et al, 2000) to assess the
information content in such data. We demonstrate these tools on recordings from
relay cells in the lateral geniculate nucleus of the cat.
| [
{
"created": "Wed, 24 Sep 2008 00:34:13 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Oct 2008 17:02:58 GMT",
"version": "v2"
}
] | 2008-12-05 | [
[
"Koepsell",
"Kilian",
""
],
[
"Sommer",
"Friedrich T.",
""
]
] | Periodic neural activity not locked to the stimulus or to motor responses is usually ignored. Here, we present new tools for modeling and quantifying the information transmission based on periodic neural activity that occurs with quasi-random phase relative to the stimulus. We propose a model to reproduce characteristic features of oscillatory spike trains, such as histograms of inter-spike intervals and phase locking of spikes to an oscillatory influence. The proposed model is based on an inhomogeneous Gamma process governed by a density function that is a product of the usual stimulus-dependent rate and a quasi-periodic function. Further, we present an analysis method generalizing the direct method (Rieke et al, 1999; Brenner et al, 2000) to assess the information content in such data. We demonstrate these tools on recordings from relay cells in the lateral geniculate nucleus of the cat. |
1204.5102 | Eugene Koonin | Sagi Snir, Yuri I. Wolf, Eugene V. Koonin | Universal pacemaker of genome evolution | Three figures; supplementary information included; submitted to
Nature | null | 10.1371/journal.pcbi.1002785 | null | q-bio.PE | http://creativecommons.org/licenses/publicdomain/ | Molecular clock (MC) is a central concept of molecular evolution according to
which each gene evolves at a characteristic, near constant rate. Numerous
evolutionary studies have demonstrated the validity of MC but also have shown
that MC is substantially overdispersed, i.e. lineage-specific deviations of the
evolutionary rate of the given gene from the clock greatly exceed the
expectation from the sampling error. A fundamental observation of comparative
genomics that appears to complement the MC is that the distribution of
evolution rates across orthologous genes in pairs of related genomes remains
virtually unchanged throughout the evolution of life, from bacteria to mammals.
The conservation of this distribution implies that the relative evolution rates
of all genes remain nearly constant, or in other words, that evolutionary rates
of different genes are strongly correlated within each evolving genome. We
hypothesized that this correlation is not a simple consequence of MC but could
be better explained by a model we dubbed Universal PaceMaker (UPM) of genome
evolution. The UPM model posits that the rate of evolution changes
synchronously across genome-wide sets of genes in all evolving lineages. We
sought to differentiate between the MC and UPM models by fitting thousands of
phylogenetic trees for bacterial and archaeal genes to supertrees that reflect
the dominant trend of vertical descent in the evolution of archaea and bacteria
and that were constrained according to the two models. The goodness of fit for
the UPM model was better than the fit for the MC model, with overwhelming
statistical significance. These results reveal a universal pacemaker of genome
evolution that could have been in operation throughout the history of life.
| [
{
"created": "Mon, 23 Apr 2012 16:39:49 GMT",
"version": "v1"
}
] | 2015-06-04 | [
[
"Snir",
"Sagi",
""
],
[
"Wolf",
"Yuri I.",
""
],
[
"Koonin",
"Eugene V.",
""
]
] | Molecular clock (MC) is a central concept of molecular evolution according to which each gene evolves at a characteristic, near constant rate. Numerous evolutionary studies have demonstrated the validity of MC but also have shown that MC is substantially overdispersed, i.e. lineage-specific deviations of the evolutionary rate of the given gene from the clock greatly exceed the expectation from the sampling error. A fundamental observation of comparative genomics that appears to complement the MC is that the distribution of evolution rates across orthologous genes in pairs of related genomes remains virtually unchanged throughout the evolution of life, from bacteria to mammals. The conservation of this distribution implies that the relative evolution rates of all genes remain nearly constant, or in other words, that evolutionary rates of different genes are strongly correlated within each evolving genome. We hypothesized that this correlation is not a simple consequence of MC but could be better explained by a model we dubbed Universal PaceMaker (UPM) of genome evolution. The UPM model posits that the rate of evolution changes synchronously across genome-wide sets of genes in all evolving lineages. We sought to differentiate between the MC and UPM models by fitting thousands of phylogenetic trees for bacterial and archaeal genes to supertrees that reflect the dominant trend of vertical descent in the evolution of archaea and bacteria and that were constrained according to the two models. The goodness of fit for the UPM model was better than the fit for the MC model, with overwhelming statistical significance. These results reveal a universal pacemaker of genome evolution that could have been in operation throughout the history of life. |
2311.00015 | James Hadfield Dr | Dan Stetson, Paul Labrousse, Hugh Russell, David Shera, Chris Abbosh,
Brian Dougherty, J. Carl Barrett, Darren Hodgson, James Hadfield | Next-generation MRD assays: do we have the tools to evaluate them
properly? | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | Circulating tumour DNA (ctDNA) detection of molecular residual disease (MRD)
in solid tumours correlates strongly with patient outcomes and is being adopted
as a new clinical standard. ctDNA levels are known to correlate with tumor
volume, and although the absolute levels vary across indication and histology,
its analysis is driving the adoption of MRD. MRD assays must detect tumor when
imaging cannot and, as such, require very high sensitivity to detect the low
levels of ctDNA found after curative intent therapy. The minimum threshold is
0.01% Tumour Fraction but current methods like Archer and Signatera are limited
by detection sensitivity resulting in some patients receiving a false negative
call thereby missing out on earlier therapeutic intervention. Multiple vendors
are increasing the number of somatic variants tracked in tumour-informed and
personalized NGS assays, from tens to thousands of variants. Most recently,
assays using other biological features of ctDNA, e.g methylation or
fragmentome, have been developed at the LOD required for clinical utility.
These uniformed, or tumour-naive and non-personalised assays may be more
easily, and therefore more rapidly, adopted in the clinic. However, this rapid
development in MRD assay technology results in significant challenges in
benchmarking these new technologies for use in clinical trials. This is further
complicated by the fact that previous reference materials have focused on
somatic variants, and do not retain all of the epigenomic features assessed by
newer technologies. In this Comments and Controversy paper, we detail what is
known and what remains to be determined for optimal reference materials of MRD
methods and provide opinions generated during three-years of MRD technology
benchmarking in AstraZeneca Translational Medicine to help guide the community
conversation.
| [
{
"created": "Tue, 31 Oct 2023 16:22:50 GMT",
"version": "v1"
}
] | 2023-11-02 | [
[
"Stetson",
"Dan",
""
],
[
"Labrousse",
"Paul",
""
],
[
"Russell",
"Hugh",
""
],
[
"Shera",
"David",
""
],
[
"Abbosh",
"Chris",
""
],
[
"Dougherty",
"Brian",
""
],
[
"Barrett",
"J. Carl",
""
],
[
"Hodgson",
"Darren",
""
],
[
"Hadfield",
"James",
""
]
] | Circulating tumour DNA (ctDNA) detection of molecular residual disease (MRD) in solid tumours correlates strongly with patient outcomes and is being adopted as a new clinical standard. ctDNA levels are known to correlate with tumor volume, and although the absolute levels vary across indication and histology, its analysis is driving the adoption of MRD. MRD assays must detect tumor when imaging cannot and, as such, require very high sensitivity to detect the low levels of ctDNA found after curative intent therapy. The minimum threshold is 0.01% Tumour Fraction but current methods like Archer and Signatera are limited by detection sensitivity resulting in some patients receiving a false negative call thereby missing out on earlier therapeutic intervention. Multiple vendors are increasing the number of somatic variants tracked in tumour-informed and personalized NGS assays, from tens to thousands of variants. Most recently, assays using other biological features of ctDNA, e.g methylation or fragmentome, have been developed at the LOD required for clinical utility. These uniformed, or tumour-naive and non-personalised assays may be more easily, and therefore more rapidly, adopted in the clinic. However, this rapid development in MRD assay technology results in significant challenges in benchmarking these new technologies for use in clinical trials. This is further complicated by the fact that previous reference materials have focused on somatic variants, and do not retain all of the epigenomic features assessed by newer technologies. In this Comments and Controversy paper, we detail what is known and what remains to be determined for optimal reference materials of MRD methods and provide opinions generated during three-years of MRD technology benchmarking in AstraZeneca Translational Medicine to help guide the community conversation. |
1507.03397 | Dragutin Mihailovic | Dragutin T. Mihailovi\'c, Vladimir R. Kosti\'c, Igor Bala\v{z}, Darko
Kapor | Computing the Threshold of the Influence of Intercellular Nanotubes on
Cell-to-Cell Communication Integrity | 6 pages, 4 figures | null | 10.1016/j.chaos.2016.06.001 | null | q-bio.CB math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the threshold of the influence of the tunneling nanotubes (TNTs)
on the cell-to-cell communication integrity. A deterministic model is
introduced with the Michaelis-Menten dynamics and the intercellular exchange of
substance. The influence of TNTs are considered as a functional perturbation of
the main communication and treated as the matrix nearness problems. We analyze
communication integrity in terms of the \emph{pseudospectra} of the exchange,
to find the \emph{distance to instability}. The threshold of TNTs influence is
computed for Newman-Gastner and Erd\H{o}s-R\'enyi gap junction (GJ) networks.
| [
{
"created": "Mon, 13 Jul 2015 11:15:38 GMT",
"version": "v1"
}
] | 2016-06-29 | [
[
"Mihailović",
"Dragutin T.",
""
],
[
"Kostić",
"Vladimir R.",
""
],
[
"Balaž",
"Igor",
""
],
[
"Kapor",
"Darko",
""
]
] | We examine the threshold of the influence of the tunneling nanotubes (TNTs) on the cell-to-cell communication integrity. A deterministic model is introduced with the Michaelis-Menten dynamics and the intercellular exchange of substance. The influence of TNTs are considered as a functional perturbation of the main communication and treated as the matrix nearness problems. We analyze communication integrity in terms of the \emph{pseudospectra} of the exchange, to find the \emph{distance to instability}. The threshold of TNTs influence is computed for Newman-Gastner and Erd\H{o}s-R\'enyi gap junction (GJ) networks. |
2112.06923 | Marc-Antoine Fardin | Marc-Antoine Fardin | Metabolic cascades of energy | 7 pages, 2 figures | null | null | null | q-bio.OT cond-mat.soft physics.bio-ph physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Life has a special status, it even has its own science: biology. In many
ways, the logic of life seems to differ from that of atoms, molecules, planets,
or any other `inanimate object'. However, life is increasingly measured using
quantities shared by all sciences, like mass, force, energy or power. An
analysis of the dimensions of these quantities provides powerful ways to infer
the relationships they might have with one another. Here we show that a
dimensional analysis of the metabolic laws connecting the characteristic powers
and masses of living organisms offers new ways to understand the deep
connections between the chemistry of microscopic molecules and the physics of
macroscopic objects bound by gravity. This analysis reveals a link between
metabolism and the cascades of energy observed in turbulent flows, opening new
perspectives for both fields.
| [
{
"created": "Mon, 13 Dec 2021 14:35:30 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Dec 2021 07:10:09 GMT",
"version": "v2"
}
] | 2021-12-16 | [
[
"Fardin",
"Marc-Antoine",
""
]
] | Life has a special status, it even has its own science: biology. In many ways, the logic of life seems to differ from that of atoms, molecules, planets, or any other `inanimate object'. However, life is increasingly measured using quantities shared by all sciences, like mass, force, energy or power. An analysis of the dimensions of these quantities provides powerful ways to infer the relationships they might have with one another. Here we show that a dimensional analysis of the metabolic laws connecting the characteristic powers and masses of living organisms offers new ways to understand the deep connections between the chemistry of microscopic molecules and the physics of macroscopic objects bound by gravity. This analysis reveals a link between metabolism and the cascades of energy observed in turbulent flows, opening new perspectives for both fields. |
0911.4871 | Anirban Banerji | Param Priya Singh, Anirban Banerji | 3-10 and Pi-Helices: Stochastic Events on Sequence Space; Reasons and
Implications of their Accidental Occurrences across Protein Universe | null | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considering all available non-redundant protein structures across different
structural classes, present study identified the probabilistic characteristics
that describe several facets of the occurrence of 3(10) and Pi-helices in
proteins. Occurrence profile of 3(10) and Pi-helices revealed that, their
presence follows Poisson flow on the primary structure; implying that, their
occurrence profile is rare, random and accidental. Structural class-specific
statistical analyses of sequence intervals between consecutive occurrences of
3(10) and Pi-helices revealed that these could be best described by gamma and
exponential distributions, across structural classes. Comparative study of
normalized percentage of non-glycine and non-proline residues in 3(10), Pi and
alpha-helices revealed a considerably higher proportion of 3(10) and Pi-helix
residues in disallowed, generous and allowed regions of Ramachandran map. Probe
into these findings in the light of evolution suggested clearly that 3(10) and
Pi-helices should appropriately be viewed as evolutionary intermediates on long
time scale, for not only the {\alpha}-helical conformation but also for the
'turns', equiprobably. Hence, accidental and random nature of occurrences of
3(10) and Pi-helices, and their evolutionary non-conservation, could be
described and explained from an invariant quantitative framework. Extent of
correctness of two previously proposed hypotheses on 3(10) and Pi-helices, have
been investigated too. Alongside these, a new algorithm to differentiate
between related sequences is proposed, which reliably studies evolutionary
distance with respect to protein secondary structures.
| [
{
"created": "Wed, 25 Nov 2009 15:01:30 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Dec 2011 09:19:58 GMT",
"version": "v2"
}
] | 2011-12-21 | [
[
"Singh",
"Param Priya",
""
],
[
"Banerji",
"Anirban",
""
]
] | Considering all available non-redundant protein structures across different structural classes, present study identified the probabilistic characteristics that describe several facets of the occurrence of 3(10) and Pi-helices in proteins. Occurrence profile of 3(10) and Pi-helices revealed that, their presence follows Poisson flow on the primary structure; implying that, their occurrence profile is rare, random and accidental. Structural class-specific statistical analyses of sequence intervals between consecutive occurrences of 3(10) and Pi-helices revealed that these could be best described by gamma and exponential distributions, across structural classes. Comparative study of normalized percentage of non-glycine and non-proline residues in 3(10), Pi and alpha-helices revealed a considerably higher proportion of 3(10) and Pi-helix residues in disallowed, generous and allowed regions of Ramachandran map. Probe into these findings in the light of evolution suggested clearly that 3(10) and Pi-helices should appropriately be viewed as evolutionary intermediates on long time scale, for not only the {\alpha}-helical conformation but also for the 'turns', equiprobably. Hence, accidental and random nature of occurrences of 3(10) and Pi-helices, and their evolutionary non-conservation, could be described and explained from an invariant quantitative framework. Extent of correctness of two previously proposed hypotheses on 3(10) and Pi-helices, have been investigated too. Alongside these, a new algorithm to differentiate between related sequences is proposed, which reliably studies evolutionary distance with respect to protein secondary structures. |
2310.12996 | Li Kun | Kun Li, Yong Luo, Xiantao Cai, Wenbin Hu, Bo Du | Zero-shot Learning of Drug Response Prediction for Preclinical Drug
Screening | 16 pages, 3 figures, 3 tables | null | null | null | q-bio.BM cs.AI cs.LG q-bio.CB q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional deep learning methods typically employ supervised learning for
drug response prediction (DRP). This entails dependence on labeled response
data from drugs for model training. However, practical applications in the
preclinical drug screening phase demand that DRP models predict responses for
novel compounds, often with unknown drug responses. This presents a challenge,
rendering supervised deep learning methods unsuitable for such scenarios. In
this paper, we propose a zero-shot learning solution for the DRP task in
preclinical drug screening. Specifically, we propose a Multi-branch
Multi-Source Domain Adaptation Test Enhancement Plug-in, called MSDA. MSDA can
be seamlessly integrated with conventional DRP methods, learning invariant
features from the prior response data of similar drugs to enhance real-time
predictions of unlabeled compounds. We conducted experiments using the GDSCv2
and CellMiner datasets. The results demonstrate that MSDA efficiently predicts
drug responses for novel compounds, leading to a general performance
improvement of 5-10\% in the preclinical drug screening phase. The significance
of this solution resides in its potential to accelerate the drug discovery
process, improve drug candidate assessment, and facilitate the success of drug
discovery.
| [
{
"created": "Thu, 5 Oct 2023 05:55:41 GMT",
"version": "v1"
}
] | 2023-12-21 | [
[
"Li",
"Kun",
""
],
[
"Luo",
"Yong",
""
],
[
"Cai",
"Xiantao",
""
],
[
"Hu",
"Wenbin",
""
],
[
"Du",
"Bo",
""
]
] | Conventional deep learning methods typically employ supervised learning for drug response prediction (DRP). This entails dependence on labeled response data from drugs for model training. However, practical applications in the preclinical drug screening phase demand that DRP models predict responses for novel compounds, often with unknown drug responses. This presents a challenge, rendering supervised deep learning methods unsuitable for such scenarios. In this paper, we propose a zero-shot learning solution for the DRP task in preclinical drug screening. Specifically, we propose a Multi-branch Multi-Source Domain Adaptation Test Enhancement Plug-in, called MSDA. MSDA can be seamlessly integrated with conventional DRP methods, learning invariant features from the prior response data of similar drugs to enhance real-time predictions of unlabeled compounds. We conducted experiments using the GDSCv2 and CellMiner datasets. The results demonstrate that MSDA efficiently predicts drug responses for novel compounds, leading to a general performance improvement of 5-10\% in the preclinical drug screening phase. The significance of this solution resides in its potential to accelerate the drug discovery process, improve drug candidate assessment, and facilitate the success of drug discovery. |