id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1609.04375 | Ge Wang | Ge Wang | A Perspective on Deep Imaging | 9 pages, 10 figures, 49 references, and accepted by IEEE Access | null | null | null | q-bio.QM cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The combination of tomographic imaging and deep learning, or machine learning
in general, promises to empower not only image analysis but also image
reconstruction. The latter aspect is considered in this perspective article
with an emphasis on medical imaging to develop a new generation of image
reconstruction theories and techniques. This direction might lead to
intelligent utilization of domain knowledge from big data, innovative
approaches for image reconstruction, and superior performance in clinical and
preclinical applications. To realize the full impact of machine learning on
medical imaging, major challenges must be addressed.
| [
{
"created": "Sat, 10 Sep 2016 15:45:48 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Nov 2016 13:02:27 GMT",
"version": "v2"
}
] | 2016-11-07 | [
[
"Wang",
"Ge",
""
]
] | The combination of tomographic imaging and deep learning, or machine learning in general, promises to empower not only image analysis but also image reconstruction. The latter aspect is considered in this perspective article with an emphasis on medical imaging to develop a new generation of image reconstruction theories and techniques. This direction might lead to intelligent utilization of domain knowledge from big data, innovative approaches for image reconstruction, and superior performance in clinical and preclinical applications. To realize the full impact of machine learning on medical imaging, major challenges must be addressed. |
q-bio/0510046 | Marc Timme | Marc Timme, Theo Geisel, Fred Wolf | Speed of synchronization in complex networks of neural oscillators:
Analytic results based on Random Matrix Theory | 17 pages, 12 figures, submitted to Chaos | Chaos 16, 015108 (2006) | 10.1063/1.2150775 | null | q-bio.NC cond-mat.dis-nn | null | We analyze the dynamics of networks of spiking neural oscillators. First, we
present an exact linear stability theory of the synchronous state for networks
of arbitrary connectivity. For general neuron rise functions, stability is
determined by multiple operators, for which standard analysis is not suitable.
We describe a general non-standard solution to the multi-operator problem.
Subsequently, we derive a class of rise functions for which all stability
operators become degenerate and standard eigenvalue analysis becomes a suitable
tool. Interestingly, this class is found to consist of networks of leaky
integrate and fire neurons. For random networks of inhibitory
integrate-and-fire neurons, we then develop an analytical approach, based on
the theory of random matrices, to precisely determine the eigenvalue
distribution. This yields the asymptotic relaxation time for perturbations to
the synchronous state which provides the characteristic time scale on which
neurons can coordinate their activity in such networks. For networks with
finite in-degree, i.e. finite number of presynaptic inputs per neuron, we find
a speed limit to coordinating spiking activity: Even with arbitrarily strong
interaction strengths neurons cannot synchronize faster than at a certain
maximal speed determined by the typical in-degree.
| [
{
"created": "Tue, 25 Oct 2005 16:13:15 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Timme",
"Marc",
""
],
[
"Geisel",
"Theo",
""
],
[
"Wolf",
"Fred",
""
]
] | We analyze the dynamics of networks of spiking neural oscillators. First, we present an exact linear stability theory of the synchronous state for networks of arbitrary connectivity. For general neuron rise functions, stability is determined by multiple operators, for which standard analysis is not suitable. We describe a general non-standard solution to the multi-operator problem. Subsequently, we derive a class of rise functions for which all stability operators become degenerate and standard eigenvalue analysis becomes a suitable tool. Interestingly, this class is found to consist of networks of leaky integrate and fire neurons. For random networks of inhibitory integrate-and-fire neurons, we then develop an analytical approach, based on the theory of random matrices, to precisely determine the eigenvalue distribution. This yields the asymptotic relaxation time for perturbations to the synchronous state which provides the characteristic time scale on which neurons can coordinate their activity in such networks. For networks with finite in-degree, i.e. finite number of presynaptic inputs per neuron, we find a speed limit to coordinating spiking activity: Even with arbitrarily strong interaction strengths neurons cannot synchronize faster than at a certain maximal speed determined by the typical in-degree. |
0902.0389 | Michael B\"orsch | M. Boersch, R. Reuter, G. Balasubramanian, R. Erdmann, F. Jelezko, J.
Wrachtrup | Fluorescent nanodiamonds for FRET-based monitoring of a single
biological nanomotor FoF1-ATP synthase | 10 pages, 4 figures | null | 10.1117/12.812720 | null | q-bio.BM q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Color centers in diamond nanocrystals are a new class of fluorescence markers
that attract significant interest due to matchless brightness, photostability
and biochemical inertness. Fluorescing diamond nanocrystals containing defects
can be used as markers replacing conventional organic dye molecules, quantum
dots or autofluorescent proteins. They can be applied for tracking and
ultrahigh-resolution localization of the single markers. In addition the spin
properties of diamond defects can be utilized for novel magneto-optical imaging
(MOI) with nanometer resolution. We develop this technique to unravel the
details of the rotary motions and the elastic energy storage mechanism of a
single biological nanomotor FoF1-ATP synthase. FoF1-ATP synthase is the enzyme
that provides the 'chemical energy currency' adenosine triphosphate, ATP, for
living cells. The formation of ATP is accomplished by a stepwise internal
rotation of subunits within the enzyme. Previously subunit rotation has been
monitored by single-molecule fluorescence resonance energy transfer (FRET) and
was limited by the photostability of the fluorophores. Fluorescent nanodiamonds
advance these FRET measurements to long time scales.
| [
{
"created": "Mon, 2 Feb 2009 22:13:24 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Boersch",
"M.",
""
],
[
"Reuter",
"R.",
""
],
[
"Balasubramanian",
"G.",
""
],
[
"Erdmann",
"R.",
""
],
[
"Jelezko",
"F.",
""
],
[
"Wrachtrup",
"J.",
""
]
] | Color centers in diamond nanocrystals are a new class of fluorescence markers that attract significant interest due to matchless brightness, photostability and biochemical inertness. Fluorescing diamond nanocrystals containing defects can be used as markers replacing conventional organic dye molecules, quantum dots or autofluorescent proteins. They can be applied for tracking and ultrahigh-resolution localization of the single markers. In addition the spin properties of diamond defects can be utilized for novel magneto-optical imaging (MOI) with nanometer resolution. We develop this technique to unravel the details of the rotary motions and the elastic energy storage mechanism of a single biological nanomotor FoF1-ATP synthase. FoF1-ATP synthase is the enzyme that provides the 'chemical energy currency' adenosine triphosphate, ATP, for living cells. The formation of ATP is accomplished by a stepwise internal rotation of subunits within the enzyme. Previously subunit rotation has been monitored by single-molecule fluorescence resonance energy transfer (FRET) and was limited by the photostability of the fluorophores. Fluorescent nanodiamonds advance these FRET measurements to long time scales. |
1601.03235 | Michele Monti | Michele Monti, Marta R A Matos, Jeong-Mo Choi, Michael S Ferry and
Bartlomiej Borek | A modified galactose network model with implications for growth | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The yeast galactose network has provided many insights into how eukaryotic
gene circuits regulate metabolic function. However, there is currently no
consensus model of the network that incorporates protein dilution due to
cellular growth. We address this by adapting a well-known model and having it
account for growth benefit and burden due to expression of the network
proteins. Modifying the model to incorporate galactose transport and basal
Gal1p production allows us to better reproduce experimental observations.
Incorporating the growth rate effect demonstrates how the native network can
optimize growth in different galactose environments. These findings advance our
quantitative understanding of this gene network, and implement a general
approach for analysing the balance between growth costs and benefits in a range
of metabolic control networks.
| [
{
"created": "Wed, 13 Jan 2016 13:26:04 GMT",
"version": "v1"
}
] | 2016-01-14 | [
[
"Monti",
"Michele",
""
],
[
"Matos",
"Marta R A",
""
],
[
"Choi",
"Jeong-Mo",
""
],
[
"Ferry",
"Michael S",
""
],
[
"Borek",
"Bartlomiej",
""
]
] | The yeast galactose network has provided many insights into how eukaryotic gene circuits regulate metabolic function. However, there is currently no consensus model of the network that incorporates protein dilution due to cellular growth. We address this by adapting a well-known model and having it account for growth benefit and burden due to expression of the network proteins. Modifying the model to incorporate galactose transport and basal Gal1p production allows us to better reproduce experimental observations. Incorporating the growth rate effect demonstrates how the native network can optimize growth in different galactose environments. These findings advance our quantitative understanding of this gene network, and implement a general approach for analysing the balance between growth costs and benefits in a range of metabolic control networks. |
1805.03227 | Johannes Signer | Johannes Signer and John Fieberg and Tal Avgar | Animal Movement Tools (amt): R-Package for Managing Tracking Data and
Conducting Habitat Selection Analyses | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 1. Advances in tracking technology have led to an exponential increase in
animal location data, greatly enhancing our ability to address interesting
questions in movement ecology, but also presenting new challenges related to
data management and analysis. 2. Step-Selection Functions (SSFs) are commonly
used to link environmental covariates to animal location data collected at fine
temporal resolution. SSFs are estimated by comparing observed steps connecting
successive animal locations to random steps, using a likelihood equivalent of a
Cox proportional hazards model. By using common statistical distributions to
model step length and turn angle distributions, and including habitat- and
movement-related covariates (functions of distances between points, angular
deviations), it is possible to make inference regarding habitat selection and
movement processes, or to control one process while investigating the other.
The fitted model can also be used to estimate utilization distributions and
mechanistic home ranges. 3. Here, we present the R-package amt (animal movement
tools) that allows users to fit SSFs to data and to simulate space use of
animals from fitted models. The amt package also provides tools for managing
telemetry data. 4. Using fisher (Pekania pennanti) data as a case study, we
illustrate a four-step approach to the analysis of animal movement data,
consisting of data management, exploratory data analysis, fitting of models,
and simulating from fitted models.
| [
{
"created": "Tue, 8 May 2018 18:38:36 GMT",
"version": "v1"
}
] | 2018-05-10 | [
[
"Signer",
"Johannes",
""
],
[
"Fieberg",
"John",
""
],
[
"Avgar",
"Tal",
""
]
] | 1. Advances in tracking technology have led to an exponential increase in animal location data, greatly enhancing our ability to address interesting questions in movement ecology, but also presenting new challenges related to data management and analysis. 2. Step-Selection Functions (SSFs) are commonly used to link environmental covariates to animal location data collected at fine temporal resolution. SSFs are estimated by comparing observed steps connecting successive animal locations to random steps, using a likelihood equivalent of a Cox proportional hazards model. By using common statistical distributions to model step length and turn angle distributions, and including habitat- and movement-related covariates (functions of distances between points, angular deviations), it is possible to make inference regarding habitat selection and movement processes, or to control one process while investigating the other. The fitted model can also be used to estimate utilization distributions and mechanistic home ranges. 3. Here, we present the R-package amt (animal movement tools) that allows users to fit SSFs to data and to simulate space use of animals from fitted models. The amt package also provides tools for managing telemetry data. 4. Using fisher (Pekania pennanti) data as a case study, we illustrate a four-step approach to the analysis of animal movement data, consisting of data management, exploratory data analysis, fitting of models, and simulating from fitted models. |
1807.04799 | Karsten Kruse | Frederic Folz, Lukas Wettmann, Giovanna Morigi, Karsten Kruse | The sound of an axon's growth | 5 pages, 4 figures | Phys. Rev. E 99, 050401 (2019) | 10.1103/PhysRevE.99.050401 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Axons are linear processes of nerve cells that can range from a few tens of
micrometers up to meters in length. In addition to external cues, the length of
an axon is also regulated by unknown internal mechanisms. Molecular motors have
been suggested to generate oscillations with an axon length-dependent frequency
that could be used to measure an axon's extension. Here, we present a mechanism
that depends on the spectral decomposition of the oscillatory signal to
determine the axon length.
| [
{
"created": "Thu, 12 Jul 2018 19:36:50 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Nov 2018 09:36:50 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Apr 2019 07:52:08 GMT",
"version": "v3"
}
] | 2019-05-15 | [
[
"Folz",
"Frederic",
""
],
[
"Wettmann",
"Lukas",
""
],
[
"Morigi",
"Giovanna",
""
],
[
"Kruse",
"Karsten",
""
]
] | Axons are linear processes of nerve cells that can range from a few tens of micrometers up to meters in length. In addition to external cues, the length of an axon is also regulated by unknown internal mechanisms. Molecular motors have been suggested to generate oscillations with an axon length-dependent frequency that could be used to measure an axon's extension. Here, we present a mechanism that depends on the spectral decomposition of the oscillatory signal to determine the axon length. |
1610.09926 | Guy Harling | Guy Harling, Rui Wang, Jukka-Pekka Onnela, Victor De Gruttola | Leveraging contact network structure in the design of cluster randomized
trials | 33 pages, original text submitted for review, Clinical Trials, first
published online October 24, 2016 | Clinical Trials, 2017, 14 (1), 37-47 | 10.1177/1740774516673355 | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: In settings where proof-of-principle trials have succeeded but
the effectiveness of different forms of implementation remains uncertain,
trials that not only generate information about intervention effects but also
provide public health benefit would be useful. Cluster randomized trials (CRT)
capture both direct and indirect intervention effects; the latter depends
heavily on contact networks within and across clusters. We propose a novel
class of connectivity-informed trial designs that leverages information about
such networks in order to improve public health impact and preserve ability to
detect intervention effects.
Methods: We consider CRTs in which the order of enrollment is based on the
total number of ties between individuals across clusters (based either on the
total number of inter-cluster connections or on connections only to untreated
clusters). We include options analogous both to traditional Parallel and
Stepped Wedge designs. We also allow for control clusters to be "held-back"
from re-randomization for some period. We investigate the epidemic control
performance and power to detect vaccine effects of these designs by
simulating vaccination trials during an SEIR-type epidemic using a
network-structured agent-based model.
Results: In our simulations, connectivity-informed designs have lower peak
infectiousness than comparable traditional designs and reduce cumulative
incidence by 20%, but with little impact on time to end of epidemic and reduced
power to detect differences in incidence across clusters. However even a brief
"holdback" period restores most of the power lost compared to traditional
approaches.
Conclusion: Incorporating information about cluster connectivity in design of
CRTs can increase their public health impact, especially in acute outbreak
settings, with modest cost in power to detect an effective intervention.
| [
{
"created": "Fri, 28 Oct 2016 08:34:15 GMT",
"version": "v1"
}
] | 2017-05-16 | [
[
"Harling",
"Guy",
""
],
[
"Wang",
"Rui",
""
],
[
"Onnela",
"Jukka-Pekka",
""
],
[
"De Gruttola",
"Victor",
""
]
] | Background: In settings where proof-of-principle trials have succeeded but the effectiveness of different forms of implementation remains uncertain, trials that not only generate information about intervention effects but also provide public health benefit would be useful. Cluster randomized trials (CRT) capture both direct and indirect intervention effects; the latter depends heavily on contact networks within and across clusters. We propose a novel class of connectivity-informed trial designs that leverages information about such networks in order to improve public health impact and preserve ability to detect intervention effects. Methods: We consider CRTs in which the order of enrollment is based on the total number of ties between individuals across clusters (based either on the total number of inter-cluster connections or on connections only to untreated clusters). We include options analogous both to traditional Parallel and Stepped Wedge designs. We also allow for control clusters to be "held-back" from re-randomization for some period. We investigate the epidemic control performance and power to detect vaccine effects of these designs by simulating vaccination trials during an SEIR-type epidemic using a network-structured agent-based model. Results: In our simulations, connectivity-informed designs have lower peak infectiousness than comparable traditional designs and reduce cumulative incidence by 20%, but with little impact on time to end of epidemic and reduced power to detect differences in incidence across clusters. However even a brief "holdback" period restores most of the power lost compared to traditional approaches. Conclusion: Incorporating information about cluster connectivity in design of CRTs can increase their public health impact, especially in acute outbreak settings, with modest cost in power to detect an effective intervention. |
1410.1925 | Nicolas Innocenti | Nicolas Innocenti, Monica Golumbeanu, Aymeric Fouquier d'H\'erou\"el,
Caroline Lacoux, R\'emy A. Bonnin, Sean P. Kennedy, Fran\c{c}oise Wessner,
Pascale Serror, Philippe Bouloc, Francis Repoila, Erik Aurell | Whole genome mapping of 5' RNA ends in bacteria by tagged sequencing: A
comprehensive view in Enterococcus faecalis | null | null | 10.1261/rna.048470.114 | null | q-bio.GN q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enterococcus faecalis is the third cause of nosocomial infections. To obtain
the first comprehensive view of transcriptional organizations in this
bacterium, we used a modified RNA-seq approach enabling to discriminate primary
from processed 5'RNA ends. We also validated our approach by confirming known
features in Escherichia coli.
We mapped 559 transcription start sites and 352 processing sites in E.
faecalis. A blind motif search retrieved canonical features of SigA- and
SigN-dependent promoters preceding TSSs mapped. We discovered 95 novel putative
regulatory RNAs, small- and antisense RNAs, and 72 transcriptional antisense
organisations.
Presented data constitute a significant insight into bacterial RNA landscapes
and a step towards the inference of regulatory processes at transcriptional and
post-transcriptional levels in a comprehensive manner.
| [
{
"created": "Tue, 7 Oct 2014 21:42:44 GMT",
"version": "v1"
}
] | 2015-06-11 | [
[
"Innocenti",
"Nicolas",
""
],
[
"Golumbeanu",
"Monica",
""
],
[
"d'Hérouël",
"Aymeric Fouquier",
""
],
[
"Lacoux",
"Caroline",
""
],
[
"Bonnin",
"Rémy A.",
""
],
[
"Kennedy",
"Sean P.",
""
],
[
"Wessner",
"Fran... | Enterococcus faecalis is the third cause of nosocomial infections. To obtain the first comprehensive view of transcriptional organizations in this bacterium, we used a modified RNA-seq approach enabling to discriminate primary from processed 5'RNA ends. We also validated our approach by confirming known features in Escherichia coli. We mapped 559 transcription start sites and 352 processing sites in E. faecalis. A blind motif search retrieved canonical features of SigA- and SigN-dependent promoters preceding TSSs mapped. We discovered 95 novel putative regulatory RNAs, small- and antisense RNAs, and 72 transcriptional antisense organisations. Presented data constitute a significant insight into bacterial RNA landscapes and a step towards the inference of regulatory processes at transcriptional and post-transcriptional levels in a comprehensive manner. |
2303.03238 | Mareike Fischer | Mirko Wilde and Mareike Fischer | Defining binary phylogenetic trees using parsimony: new bounds | null | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic trees are frequently used to model evolution. Such trees are
typically reconstructed from data like DNA, RNA, or protein alignments using
methods based on criteria like maximum parsimony (amongst others). Maximum
parsimony has been assumed to work well for data with only few state changes.
Recently, some progress has been made to formally prove this assertion. For
instance, it has been shown that each binary phylogenetic tree $T$ with $n \geq
20k$ leaves is uniquely defined by the set $A_k(T)$, which consists of all
characters with parsimony score $k$ on $T$. In the present manuscript, we show
that the statement indeed holds for all $n \geq 4k$, thus drastically lowering
the lower bound for $n$ from $20k$ to $4k$. However, it has been known that for
$n \leq 2k$ and $k \geq 3$, it is not generally true that $A_k(T)$ defines $T$.
We improve this result by showing that the latter statement can be extended
from $n \leq 2k$ to $n \leq 2k+2$. So we drastically reduce the gap of values
of $n$ for which it is unknown if trees $T$ on $n$ taxa are defined by $A_k(T)$
from the previous interval of $[2k+1,20k-1]$ to the interval $[2k+3,4k-1]$.
Moreover, we close this gap completely for the nearest neighbor interchange
(NNI) neighborhood of $T$ in the following sense: We show that as long as
$n\geq 2k+3$, no tree that is one NNI move away from $T$ (and thus very similar
to $T$) shares the same $A_k$-alignment.
| [
{
"created": "Mon, 6 Mar 2023 15:53:45 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jul 2023 20:02:32 GMT",
"version": "v2"
}
] | 2023-07-31 | [
[
"Wilde",
"Mirko",
""
],
[
"Fischer",
"Mareike",
""
]
] | Phylogenetic trees are frequently used to model evolution. Such trees are typically reconstructed from data like DNA, RNA, or protein alignments using methods based on criteria like maximum parsimony (amongst others). Maximum parsimony has been assumed to work well for data with only few state changes. Recently, some progress has been made to formally prove this assertion. For instance, it has been shown that each binary phylogenetic tree $T$ with $n \geq 20k$ leaves is uniquely defined by the set $A_k(T)$, which consists of all characters with parsimony score $k$ on $T$. In the present manuscript, we show that the statement indeed holds for all $n \geq 4k$, thus drastically lowering the lower bound for $n$ from $20k$ to $4k$. However, it has been known that for $n \leq 2k$ and $k \geq 3$, it is not generally true that $A_k(T)$ defines $T$. We improve this result by showing that the latter statement can be extended from $n \leq 2k$ to $n \leq 2k+2$. So we drastically reduce the gap of values of $n$ for which it is unknown if trees $T$ on $n$ taxa are defined by $A_k(T)$ from the previous interval of $[2k+1,20k-1]$ to the interval $[2k+3,4k-1]$. Moreover, we close this gap completely for the nearest neighbor interchange (NNI) neighborhood of $T$ in the following sense: We show that as long as $n\geq 2k+3$, no tree that is one NNI move away from $T$ (and thus very similar to $T$) shares the same $A_k$-alignment. |
1007.1320 | Flora Bacelar S. | Flora S. Bacelar and Andrew White and Mike Boots | Life history and mating systems select for male biased parasitism
mediated through natural selection and ecological feedbacks | 18 pages, 4 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Males are often the "sicker" sex with male biased parasitism found in a
taxonomically diverse range of species. There is considerable interest in the
processes that could underlie the evolution of sex-biased parasitism. Mating
system differences along with differences in lifespan may play a key role. We
examine whether these factors are likely to lead to male-biased parasitism
through natural selection taking into account the critical role that ecological
feedbacks play in the evolution of defence. We use a host-parasite model with
two-sexes and the techniques of adaptive dynamics to investigate how mating
system and sexual differences in competitive ability and longevity can select
for a bias in the rates of parasitism. Male-biased parasitism is selected for
when males have a shorter average lifespan or when males are subject to greater
competition for resources. Male-biased parasitism evolves as a consequence of
sexual differences in life history that produce a greater proportion of
susceptible females than males and therefore reduce the cost of avoiding
parasitism in males. Different mating systems such as monogamy, polygamy or
polyandry did not produce a bias in parasitism through these ecological
feedbacks but may accentuate an existing bias.
| [
{
"created": "Thu, 8 Jul 2010 09:26:01 GMT",
"version": "v1"
}
] | 2010-07-09 | [
[
"Bacelar",
"Flora S.",
""
],
[
"White",
"Andrew",
""
],
[
"Boots",
"Mike",
""
]
] | Males are often the "sicker" sex with male biased parasitism found in a taxonomically diverse range of species. There is considerable interest in the processes that could underlie the evolution of sex-biased parasitism. Mating system differences along with differences in lifespan may play a key role. We examine whether these factors are likely to lead to male-biased parasitism through natural selection taking into account the critical role that ecological feedbacks play in the evolution of defence. We use a host-parasite model with two-sexes and the techniques of adaptive dynamics to investigate how mating system and sexual differences in competitive ability and longevity can select for a bias in the rates of parasitism. Male-biased parasitism is selected for when males have a shorter average lifespan or when males are subject to greater competition for resources. Male-biased parasitism evolves as a consequence of sexual differences in life history that produce a greater proportion of susceptible females than males and therefore reduce the cost of avoiding parasitism in males. Different mating systems such as monogamy, polygamy or polyandry did not produce a bias in parasitism through these ecological feedbacks but may accentuate an existing bias. |
1801.08087 | Ariadne Costa | Ariadne de Andrade Costa, Mary Jean Amon, Olaf Sporns, Luis Favela | Fractal analyses of networks of integrate-and-fire stochastic spiking
neurons | 11 pages, 3 subfigures divided into 2 figures | null | 10.1007/978-3-319-73198-8_14 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although there is increasing evidence of criticality in the brain, the
processes that guide neuronal networks to reach or maintain criticality remain
unclear. The present research examines the role of neuronal gain plasticity in
time-series of simulated neuronal networks composed of integrate-and-fire
stochastic spiking neurons, and the utility of fractal methods in assessing
network criticality. Simulated time-series were derived from a network model of
fully connected discrete-time stochastic excitable neurons. Monofractal and
multifractal analyses were applied to neuronal gain time-series. Fractal
scaling was greatest in networks with a mid-range of neuronal plasticity,
versus extremely high or low levels of plasticity. Peak fractal scaling
corresponded closely to additional indices of criticality, including average
branching ratio. Networks exhibited multifractal structure, or multiple scaling
relationships. Multifractal spectra around peak criticality exhibited elongated
right tails, suggesting that the fractal structure is relatively insensitive to
high-amplitude local fluctuations. Networks near critical states exhibited
mid-range multifractal spectra width and tail length, which is consistent with
literature suggesting that networks poised at quasi-critical states must be
stable enough to maintain organization but unstable enough to be adaptable.
Lastly, fractal analyses may offer additional information about critical state
dynamics of networks by indicating scales of influence as networks approach
critical states.
| [
{
"created": "Fri, 19 Jan 2018 22:32:48 GMT",
"version": "v1"
}
] | 2018-02-21 | [
[
"Costa",
"Ariadne de Andrade",
""
],
[
"Amon",
"Mary Jean",
""
],
[
"Sporns",
"Olaf",
""
],
[
"Favela",
"Luis",
""
]
] | Although there is increasing evidence of criticality in the brain, the processes that guide neuronal networks to reach or maintain criticality remain unclear. The present research examines the role of neuronal gain plasticity in time-series of simulated neuronal networks composed of integrate-and-fire stochastic spiking neurons, and the utility of fractal methods in assessing network criticality. Simulated time-series were derived from a network model of fully connected discrete-time stochastic excitable neurons. Monofractal and multifractal analyses were applied to neuronal gain time-series. Fractal scaling was greatest in networks with a mid-range of neuronal plasticity, versus extremely high or low levels of plasticity. Peak fractal scaling corresponded closely to additional indices of criticality, including average branching ratio. Networks exhibited multifractal structure, or multiple scaling relationships. Multifractal spectra around peak criticality exhibited elongated right tails, suggesting that the fractal structure is relatively insensitive to high-amplitude local fluctuations. Networks near critical states exhibited mid-range multifractal spectra width and tail length, which is consistent with literature suggesting that networks poised at quasi-critical states must be stable enough to maintain organization but unstable enough to be adaptable. Lastly, fractal analyses may offer additional information about critical state dynamics of networks by indicating scales of influence as networks approach critical states. |
2009.02962 | Edward Goldstein | Edward Goldstein | Detectability of the novel coronavirus (SARS-CoV-2) infection and rates
of mortality from the novel coronavirus infection in different regions of the
Russian Federation | Paper in Russian | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Relevance: Laboratory diagnosis of the novel coronavirus (SARS-CoV-2)
infection combined with quarantine for contacts of infected individuals affects
the spread of SARS-CoV-2 infection and levels of related mortality. Practices
for testing for SARS-CoV-2 infection vary geographically in Russia. For
example, in the city of St. Petersburg, where the mortality rate for COVID-19 is
the highest in the Russian Federation on Oct. 25, 2020, every death for
COVID-19 corresponds to 15.7 detected cases of COVID-19 in the population,
while the corresponding number for the whole of Russia is 58.1, suggesting
limited detection of mild/moderate cases of COVID-19 in St. Petersburg.
Methods: More active testing for SARS-CoV-2 results in lower case-fatality
ratio (i.e. the proportion of detected COVID-19 cases among all cases of
SARS-CoV-2 infection in the population). We used data on COVID-19 cases and
deaths to examine the correlation between case-fatality ratios and rates of
mortality for COVID-19 in different regions of the Russian Federation. Results:
The correlation between case-fatality ratios and rates of mortality for
COVID-19 in different regions of the Russian Federation on Oct. 25, 2020 is
0.64 (0.50,0.75). For several regions of the Russian Federation, detectability
of SARS-CoV-2 infection is relatively low, while rates of mortality for
COVID-19 are relatively high. Conclusions: Detectability of the SARS-CoV-2
infection is one of the factors that affects the levels of mortality from
COVID-19. To increase detectability, one ought to test all individuals with
respiratory symptoms seeking medical care for SARS-CoV-2 infection, and to
undertake additional measures to increase the volume of testing for SARS-CoV-2.
Such measures, in combination with quarantine for infected cases and their
close contacts, help to mitigate the spread of the SARS-CoV-2 infection and
diminish the related mortality.
| [
{
"created": "Mon, 7 Sep 2020 09:20:52 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Sep 2020 10:32:17 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Oct 2020 08:10:37 GMT",
"version": "v3"
},
{
"created": "Sun, 25 Oct 2020 12:28:17 GMT",
"version": "v4"
}
] | 2020-10-27 | [
[
"Goldstein",
"Edward",
""
]
] | Relevance: Laboratory diagnosis of the novel coronavirus (SARS-CoV-2) infection combined with quarantine for contacts of infected individuals affects the spread of SARS-CoV-2 infection and levels of related mortality. Practices for testing for SARS-CoV-2 infection vary geographically in Russia. For example, in the city of St. Petersburg, where the mortality rate for COVID-19 is the highest in the Russian Federation on Oct. 25, 2020, every death for COVID-19 corresponds to 15.7 detected cases of COVID-19 in the population, while the corresponding number for the whole of Russia is 58.1, suggesting limited detection of mild/moderate cases of COVID-19 in St. Petersburg. Methods: More active testing for SARS-CoV-2 results in lower case-fatality ratio (i.e. the proportion of detected COVID-19 cases among all cases of SARS-CoV-2 infection in the population). We used data on COVID-19 cases and deaths to examine the correlation between case-fatality ratios and rates of mortality for COVID-19 in different regions of the Russian Federation. Results: The correlation between case-fatality ratios and rates of mortality for COVID-19 in different regions of the Russian Federation on Oct. 25, 2020 is 0.64 (0.50,0.75). For several regions of the Russian Federation, detectability of SARS-CoV-2 infection is relatively low, while rates of mortality for COVID-19 are relatively high. Conclusions: Detectability of the SARS-CoV-2 infection is one of the factors that affects the levels of mortality from COVID-19. To increase detectability, one ought to test all individuals with respiratory symptoms seeking medical care for SARS-CoV-2 infection, and to undertake additional measures to increase the volume of testing for SARS-CoV-2. Such measures, in combination with quarantine for infected cases and their close contacts, help to mitigate the spread of the SARS-CoV-2 infection and diminish the related mortality.
1006.2903 | Chris Adami | Arend Hintze and Christoph Adami (KGI) | Darwinian Evolution of Cooperation via Punishment in the "Public Goods"
Game | 7 pages, 6 figures, requires alifex11.sty. To appear in Proc. of 12th
International Conference on Artificial Life (Odense, DK) | Proc. 12th Intern. Conf, on Artificial Life, H. Fellerman et al,
eds. (MIT Press, 2010) pp. 445-450 | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of cooperation has been a perennial problem for evolutionary
biology because cooperation is undermined by selfish cheaters (or "free
riders") that profit from cooperators but do not invest any resources
themselves. In a purely "selfish" view of evolution, those cheaters should be
favored. Evolutionary game theory has been able to show that under certain
conditions, cooperation nonetheless evolves stably. One of these scenarios
utilizes the power of punishment to suppress free riders, but only if players
interact in a structured population where cooperators are likely to be
surrounded by other cooperators. Here we show that cooperation via punishment
can evolve even in well-mixed populations that play the "public goods" game, if
the synergy effect of cooperation is high enough. As the synergy is increased,
populations transition from defection to cooperation in a manner reminiscent of
a phase transition. If punishment is turned off, the critical synergy is
significantly higher, illustrating that (as shown before) punishment aids in
establishing cooperation. We also show that the critical point depends on the
mutation rate so that higher mutation rates discourage cooperation, as has been
observed before in the Prisoner's Dilemma.
| [
{
"created": "Tue, 15 Jun 2010 07:13:29 GMT",
"version": "v1"
}
] | 2010-12-17 | [
[
"Hintze",
"Arend",
"",
"KGI"
],
[
"Adami",
"Christoph",
"",
"KGI"
]
] | The evolution of cooperation has been a perennial problem for evolutionary biology because cooperation is undermined by selfish cheaters (or "free riders") that profit from cooperators but do not invest any resources themselves. In a purely "selfish" view of evolution, those cheaters should be favored. Evolutionary game theory has been able to show that under certain conditions, cooperation nonetheless evolves stably. One of these scenarios utilizes the power of punishment to suppress free riders, but only if players interact in a structured population where cooperators are likely to be surrounded by other cooperators. Here we show that cooperation via punishment can evolve even in well-mixed populations that play the "public goods" game, if the synergy effect of cooperation is high enough. As the synergy is increased, populations transition from defection to cooperation in a manner reminiscent of a phase transition. If punishment is turned off, the critical synergy is significantly higher, illustrating that (as shown before) punishment aids in establishing cooperation. We also show that the critical point depends on the mutation rate so that higher mutation rates discourage cooperation, as has been observed before in the Prisoner's Dilemma. |
1311.7450 | Eli Shlizerman | Eli Shlizerman, Jeffrey A. Riffell, J. Nathan Kutz | Data-driven modeling of the olfactory neural codes and their dynamics in
the insect antennal lobe | null | null | 10.3389/fncom.2014.00070 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recordings from neurons in the insects' olfactory primary processing center,
the antennal lobe (AL), reveal that the AL is able to process the input from
chemical receptors into distinct neural activity patterns, called olfactory
neural codes. These exciting results show the importance of neural codes and
their relation to perception. The next challenge is to \emph{model the
dynamics} of neural codes. In our study, we perform multichannel recordings
from the projection neurons in the AL driven by different odorants. We then
derive a neural network from the electrophysiological data. The network
consists of lateral-inhibitory neurons and excitatory neurons, and is capable
of producing unique olfactory neural codes for the tested odorants.
Specifically, we (i) design a projection, an odor space, for the neural
recording from the AL, which discriminates between distinct odorant
trajectories, (ii) characterize scent recognition, i.e., decision-making based
on olfactory signals and (iii) infer the wiring of the neural circuit, the
connectome of the AL. We show that the constructed model is consistent with
biological observations, such as contrast enhancement and robustness to noise.
The study answers a key biological question in identifying how lateral
inhibitory neurons can be wired to excitatory neurons to permit robust activity
patterns.
| [
{
"created": "Fri, 29 Nov 2013 00:24:52 GMT",
"version": "v1"
}
] | 2014-08-27 | [
[
"Shlizerman",
"Eli",
""
],
[
"Riffell",
"Jeffrey A.",
""
],
[
"Kutz",
"J. Nathan",
""
]
] | Recordings from neurons in the insects' olfactory primary processing center, the antennal lobe (AL), reveal that the AL is able to process the input from chemical receptors into distinct neural activity patterns, called olfactory neural codes. These exciting results show the importance of neural codes and their relation to perception. The next challenge is to \emph{model the dynamics} of neural codes. In our study, we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a neural network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons, and is capable of producing unique olfactory neural codes for the tested odorants. Specifically, we (i) design a projection, an odor space, for the neural recording from the AL, which discriminates between distinct odorant trajectories, (ii) characterize scent recognition, i.e., decision-making based on olfactory signals and (iii) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study answers a key biological question in identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns.
2005.02353 | Ashfaq Ahmad | Muhammad Waqas, Muhammad Farooq, Rashid Ahmad and Ashfaq Ahmad | Analysis and Prediction of COVID-19 Pandemic in Pakistan using
Time-dependent SIR Model | 11 pages, 5 figures, 3 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current outbreak is known as Coronavirus Disease or COVID-19 caused by
the virus SARS-CoV-2, which continues to wreak havoc across the globe. The World
Health Organization (WHO) has declared the outbreak a Public Health Emergency
of International Concern. In Pakistan, the spread of the virus is on the rise
with the number of infected people and casualties rapidly increasing. In the
absence of proper vaccination and treatment, to reduce the number of infections
and casualties, the only option so far is to educate people regarding
preventive measures and to enforce countrywide lock-down. Any strategy about
the preventive measures needs to be based upon detailed analysis of the
COVID-19 outbreak and accurate scientific predictions. In this paper, we
conduct mathematical and numerical analysis to come up with reliable and
accurate predictions of the outbreak in Pakistan. The time-dependent
Susceptible-Infected-Recovered (SIR) model is used to fit the data and provide
future predictions. The turning point of the peak of the pandemic is defined as
the day when the transmission rate becomes less than the recovery rate. We
have predicted that the outbreak will reach its maximum peak occurring from
late May to 9 June with unrecovered number of Infectives in the range
20000-47000 and the cumulative number of infected cases in the range of
57500-153100. The number of Infectives will remain at the lower end in the
lock-down scenario but can rapidly double or triple if the spread of the
epidemic is not curtailed and localized. The uncertainty on single day
projection in our analysis after April 15 is found to be within 5\%.
| [
{
"created": "Tue, 5 May 2020 17:37:57 GMT",
"version": "v1"
},
{
"created": "Sun, 10 May 2020 16:09:54 GMT",
"version": "v2"
}
] | 2020-05-12 | [
[
"Waqas",
"Muhammad",
""
],
[
"Farooq",
"Muhammad",
""
],
[
"Ahmad",
"Rashid",
""
],
[
"Ahmad",
"Ashfaq",
""
]
] | The current outbreak is known as Coronavirus Disease or COVID-19 caused by the virus SARS-CoV-2, which continues to wreak havoc across the globe. The World Health Organization (WHO) has declared the outbreak a Public Health Emergency of International Concern. In Pakistan, the spread of the virus is on the rise with the number of infected people and casualties rapidly increasing. In the absence of proper vaccination and treatment, to reduce the number of infections and casualties, the only option so far is to educate people regarding preventive measures and to enforce countrywide lock-down. Any strategy about the preventive measures needs to be based upon detailed analysis of the COVID-19 outbreak and accurate scientific predictions. In this paper, we conduct mathematical and numerical analysis to come up with reliable and accurate predictions of the outbreak in Pakistan. The time-dependent Susceptible-Infected-Recovered (SIR) model is used to fit the data and provide future predictions. The turning point of the peak of the pandemic is defined as the day when the transmission rate becomes less than the recovery rate. We have predicted that the outbreak will reach its maximum peak occurring from late May to 9 June with unrecovered number of Infectives in the range 20000-47000 and the cumulative number of infected cases in the range of 57500-153100. The number of Infectives will remain at the lower end in the lock-down scenario but can rapidly double or triple if the spread of the epidemic is not curtailed and localized. The uncertainty on single day projection in our analysis after April 15 is found to be within 5\%.
2008.05226 | Mark Blyth | Mark Blyth, Ludovic Renson, Lucia Marucci | Tutorial of numerical continuation and bifurcation theory for systems
and synthetic biology | 14 pages, 2 figures, 2 tables | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical modelling allows us to concisely describe fundamental principles
in biology. Analysis of models can help to both explain known phenomena, and
predict the existence of new, unseen behaviours. Model analysis is often a
complex task, such that we have little choice but to approach the problem with
computational methods. Numerical continuation is a computational method for
analysing the dynamics of nonlinear models by algorithmically detecting
bifurcations. Here we aim to promote the use of numerical continuation tools by
providing an introduction to nonlinear dynamics and numerical bifurcation
analysis. Many numerical continuation packages are available, covering a wide
range of system classes; a review of these packages is provided, to help both
new and experienced practitioners in choosing the appropriate software tools
for their needs.
| [
{
"created": "Wed, 12 Aug 2020 10:54:41 GMT",
"version": "v1"
}
] | 2020-08-13 | [
[
"Blyth",
"Mark",
""
],
[
"Renson",
"Ludovic",
""
],
[
"Marucci",
"Lucia",
""
]
] | Mathematical modelling allows us to concisely describe fundamental principles in biology. Analysis of models can help to both explain known phenomena, and predict the existence of new, unseen behaviours. Model analysis is often a complex task, such that we have little choice but to approach the problem with computational methods. Numerical continuation is a computational method for analysing the dynamics of nonlinear models by algorithmically detecting bifurcations. Here we aim to promote the use of numerical continuation tools by providing an introduction to nonlinear dynamics and numerical bifurcation analysis. Many numerical continuation packages are available, covering a wide range of system classes; a review of these packages is provided, to help both new and experienced practitioners in choosing the appropriate software tools for their needs. |
1407.5505 | Daihai He | Daihai He, Roger Lui, Lin Wang, Chi Kong Tse, Lin Yang and Lewi Stone | Global Spatio-temporal Patterns of Influenza in the Post-pandemic Era | null | Scientific Reports. 5:11013, 2015 | 10.1038/srep11013 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We study the global spatio-temporal patterns of influenza dynamics. This is
achieved by analysing and modelling weekly laboratory confirmed cases of
influenza A and B from 138 countries between January 2006 and May 2014. The
data were obtained from FluNet, the surveillance network compiled by the
World Health Organization. We report a pattern of {\it skip-and-resurgence}
behavior between the years 2011 and 2013 for influenza H1N1/09, the strain
responsible for the 2009 pandemic, in Europe and Eastern Asia. In particular,
the expected H1N1/09 epidemic outbreak in 2011 failed to occur (or "skipped") in
many countries across the globe, although an outbreak occurred in the following
year. We also report a pattern of {\it well-synchronized} 2010 winter wave of
H1N1/09 in the Northern Hemisphere countries, and a pattern of replacement of
strain H1N1/77 by H1N1/09 between the 2009 and 2012 influenza seasons. Using
both a statistical and a mechanistic mathematical model, and through fitting
the data of 108 countries (108 countries in a statistical model and 10 large
populations with a mechanistic model), we discuss the mechanisms that are
likely to generate these events taking into account the role of multi-strain
dynamics. A basic understanding of these patterns has important public health
implications and scientific significance.
| [
{
"created": "Mon, 21 Jul 2014 14:18:17 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Dec 2014 02:46:07 GMT",
"version": "v2"
}
] | 2016-01-20 | [
[
"He",
"Daihai",
""
],
[
"Lui",
"Roger",
""
],
[
"Wang",
"Lin",
""
],
[
"Tse",
"Chi Kong",
""
],
[
"Yang",
"Lin",
""
],
[
"Stone",
"Lewi",
""
]
] | We study the global spatio-temporal patterns of influenza dynamics. This is achieved by analysing and modelling weekly laboratory confirmed cases of influenza A and B from 138 countries between January 2006 and May 2014. The data were obtained from FluNet, the surveillance network compiled by the World Health Organization. We report a pattern of {\it skip-and-resurgence} behavior between the years 2011 and 2013 for influenza H1N1/09, the strain responsible for the 2009 pandemic, in Europe and Eastern Asia. In particular, the expected H1N1/09 epidemic outbreak in 2011 failed to occur (or "skipped") in many countries across the globe, although an outbreak occurred in the following year. We also report a pattern of {\it well-synchronized} 2010 winter wave of H1N1/09 in the Northern Hemisphere countries, and a pattern of replacement of strain H1N1/77 by H1N1/09 between the 2009 and 2012 influenza seasons. Using both a statistical and a mechanistic mathematical model, and through fitting the data of 108 countries (108 countries in a statistical model and 10 large populations with a mechanistic model), we discuss the mechanisms that are likely to generate these events taking into account the role of multi-strain dynamics. A basic understanding of these patterns has important public health implications and scientific significance.
2107.09670 | Ronald Manr\'iquez | Ronald Manr\'iquez and Camilo Guerrero-Nancuante | Diseases on complex networks. Modeling from a database and a protection
strategy proposal | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Among the diverse and important applications that networks currently have is
the modeling of infectious diseases. Immunization, or the process of protecting
nodes in the network, plays a key role in stopping diseases from spreading.
Hence the importance of having tools or strategies that help solve
this challenge. In this work, we evaluate the effectiveness of the
DIL-W^{\alpha} ranking in immunizing nodes in an edge-weighted network. The
network is obtained from a real database and the spread of COVID-19 was modeled
with the classic SIR model. We apply the protection to the network, according
to the importance ranking list produced by DIL-W^{\alpha}.
| [
{
"created": "Tue, 20 Jul 2021 16:05:22 GMT",
"version": "v1"
}
] | 2021-07-22 | [
[
"Manríquez",
"Ronald",
""
],
[
"Guerrero-Nancuante",
"Camilo",
""
]
] | Among the diverse and important applications that networks currently have is the modeling of infectious diseases. Immunization, or the process of protecting nodes in the network, plays a key role in stopping diseases from spreading. Hence the importance of having tools or strategies that help solve this challenge. In this work, we evaluate the effectiveness of the DIL-W^{\alpha} ranking in immunizing nodes in an edge-weighted network. The network is obtained from a real database and the spread of COVID-19 was modeled with the classic SIR model. We apply the protection to the network, according to the importance ranking list produced by DIL-W^{\alpha}.
1505.04210 | Juan Antonio Garcia-Martin | Juan Antonio Garcia-Martin, Ivan Dotu and Peter Clote | RNAiFold 2.0: A web server and software to design custom and Rfam-based
RNA molecules | 16 pages, 3 figures. Accessible at
http://bioinformatics.bc.edu/clotelab/RNAiFold2.0 Accepted for publication in
Nucleic Acids Research | null | 10.1093/nar/gkv460 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several algorithms for RNA inverse folding have been used to design synthetic
riboswitches, ribozymes and thermoswitches, whose activity has been
experimentally validated. The RNAiFold software is unique among approaches for
inverse folding in that (exhaustive) constraint programming is used instead of
heuristic methods. For that reason, RNAiFold can generate all sequences that
fold into the target structure, or determine that there is no solution.
RNAiFold 2.0 is a complete overhaul of RNAiFold 1.0, rewritten from the now
defunct COMET language to C++. The new code properly extends the capabilities
of its predecessor by providing a user-friendly pipeline to design synthetic
constructs having the functionality of given Rfam families. In addition, the
new software supports amino acid constraints, even for proteins translated in
different reading frames from overlapping coding sequences; moreover, structure
compatibility/incompatibility constraints have been expanded. With these
features, RNAiFold 2.0 allows the user to design single RNA molecules as well
as hybridization complexes of two RNA molecules.
The web server, source code and linux binaries are publicly accessible at
http://bioinformatics.bc.edu/clotelab/RNAiFold2.0
| [
{
"created": "Fri, 15 May 2015 22:17:06 GMT",
"version": "v1"
}
] | 2015-05-29 | [
[
"Garcia-Martin",
"Juan Antonio",
""
],
[
"Dotu",
"Ivan",
""
],
[
"Clote",
"Peter",
""
]
] | Several algorithms for RNA inverse folding have been used to design synthetic riboswitches, ribozymes and thermoswitches, whose activity has been experimentally validated. The RNAiFold software is unique among approaches for inverse folding in that (exhaustive) constraint programming is used instead of heuristic methods. For that reason, RNAiFold can generate all sequences that fold into the target structure, or determine that there is no solution. RNAiFold 2.0 is a complete overhaul of RNAiFold 1.0, rewritten from the now defunct COMET language to C++. The new code properly extends the capabilities of its predecessor by providing a user-friendly pipeline to design synthetic constructs having the functionality of given Rfam families. In addition, the new software supports amino acid constraints, even for proteins translated in different reading frames from overlapping coding sequences; moreover, structure compatibility/incompatibility constraints have been expanded. With these features, RNAiFold 2.0 allows the user to design single RNA molecules as well as hybridization complexes of two RNA molecules. The web server, source code and linux binaries are publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold2.0 |
q-bio/0403004 | Davide Valenti | B. Spagnolo, D. Valenti, A. Fiasconaro | Noise in ecosystems: a short review | 27 pages, 16 figures. Accepted for publication in Mathematical
Biosciences and Engineering | null | null | null | q-bio.PE | null | Noise, through its interaction with the nonlinearity of the living systems,
can give rise to counter-intuitive phenomena such as stochastic resonance,
noise-delayed extinction, temporal oscillations, and spatial patterns. In this
paper we briefly review the noise-induced effects in three different
ecosystems: (i) two competing species; (ii) three interacting species, one
predator and two prey, and (iii) N-interacting species. The transient dynamics
of these ecosystems are analyzed through generalized Lotka-Volterra equations
in the presence of multiplicative noise, which models the interaction between
the species and the environment. The interaction parameter between the species
is random in cases (i) and (iii), and a periodic function, which accounts for
the environmental temperature, in case (ii). We find noise-induced phenomena
such as quasi-deterministic oscillations, stochastic resonance, noise-delayed
extinction, and noise-induced pattern formation with nonmonotonic behaviors of
pattern areas and of the density correlation as a function of the
multiplicative noise intensity. The asymptotic behavior of the time average of
the \emph{$i^{th}$} population when the ecosystem is composed of a great number
of interacting species is obtained and the effect of the noise on the
asymptotic probability distributions of the populations is discussed.
| [
{
"created": "Tue, 2 Mar 2004 21:42:50 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Mar 2004 12:20:59 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Spagnolo",
"B.",
""
],
[
"Valenti",
"D.",
""
],
[
"Fiasconaro",
"A.",
""
]
] | Noise, through its interaction with the nonlinearity of the living systems, can give rise to counter-intuitive phenomena such as stochastic resonance, noise-delayed extinction, temporal oscillations, and spatial patterns. In this paper we briefly review the noise-induced effects in three different ecosystems: (i) two competing species; (ii) three interacting species, one predator and two prey, and (iii) N-interacting species. The transient dynamics of these ecosystems are analyzed through generalized Lotka-Volterra equations in the presence of multiplicative noise, which models the interaction between the species and the environment. The interaction parameter between the species is random in cases (i) and (iii), and a periodic function, which accounts for the environmental temperature, in case (ii). We find noise-induced phenomena such as quasi-deterministic oscillations, stochastic resonance, noise-delayed extinction, and noise-induced pattern formation with nonmonotonic behaviors of pattern areas and of the density correlation as a function of the multiplicative noise intensity. The asymptotic behavior of the time average of the \emph{$i^{th}$} population when the ecosystem is composed of a great number of interacting species is obtained and the effect of the noise on the asymptotic probability distributions of the populations is discussed.
1704.05826 | Arian Ashourvan | Arian Ashourvan, Qawi K. Telesford, Timothy Verstynen, Jean M. Vettel,
Danielle S. Bassett | Multi-scale detection of hierarchical community architecture in
structural and functional brain networks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection algorithms have been widely used to study the
organization of complex systems like the brain. A principal appeal of these
techniques is their ability to identify a partition of brain regions (or nodes)
into communities, where nodes within a community are densely interconnected. In
their simplest application, community detection algorithms are agnostic to the
presence of community hierarchies, but a common characteristic of many neural
systems is a nested hierarchy. To address this limitation, we exercise a
multi-scale extension of a community detection technique known as modularity
maximization, and we apply the tool to both synthetic graphs and graphs derived
from human structural and functional imaging data. Our multi-scale community
detection algorithm links a graph to copies of itself across neighboring
topological scales, thereby becoming sensitive to conserved community
organization across neighboring levels of the hierarchy. We demonstrate that
this method allows for a better characterization of topological inhomogeneities
of the graph's hierarchy by providing a local (node) measure of community
stability and inter-scale reliability across topological scales. We compare the
brain's structural and functional network architectures and demonstrate that
structural graphs display a wider range of topological scales than functional
graphs. Finally, we build a multimodal multiplex graph that combines structural
and functional connectivity in a single model, and we identify the topological
scales where resting state functional connectivity and underlying structural
connectivity show similar versus unique hierarchical community architecture.
Together, our results showcase the advantages of the multi-scale community
detection algorithm in studying hierarchical community structure in brain
graphs, and they illustrate its utility in modeling multimodal neuroimaging
data.
| [
{
"created": "Wed, 19 Apr 2017 17:08:18 GMT",
"version": "v1"
}
] | 2017-04-20 | [
[
"Ashourvan",
"Arian",
""
],
[
"Telesford",
"Qawi K.",
""
],
[
"Verstynen",
"Timothy",
""
],
[
"Vettel",
"Jean M.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Community detection algorithms have been widely used to study the organization of complex systems like the brain. A principal appeal of these techniques is their ability to identify a partition of brain regions (or nodes) into communities, where nodes within a community are densely interconnected. In their simplest application, community detection algorithms are agnostic to the presence of community hierarchies, but a common characteristic of many neural systems is a nested hierarchy. To address this limitation, we exercise a multi-scale extension of a community detection technique known as modularity maximization, and we apply the tool to both synthetic graphs and graphs derived from human structural and functional imaging data. Our multi-scale community detection algorithm links a graph to copies of itself across neighboring topological scales, thereby becoming sensitive to conserved community organization across neighboring levels of the hierarchy. We demonstrate that this method allows for a better characterization of topological inhomogeneities of the graph's hierarchy by providing a local (node) measure of community stability and inter-scale reliability across topological scales. We compare the brain's structural and functional network architectures and demonstrate that structural graphs display a wider range of topological scales than functional graphs. Finally, we build a multimodal multiplex graph that combines structural and functional connectivity in a single model, and we identify the topological scales where resting state functional connectivity and underlying structural connectivity show similar versus unique hierarchical community architecture. Together, our results showcase the advantages of the multi-scale community detection algorithm in studying hierarchical community structure in brain graphs, and they illustrate its utility in modeling multimodal neuroimaging data. |
1608.04175 | Stephen Montgomery-Smith | Stephen Montgomery-Smith, Anh Le, George Smith, Sidney Billstein,
Hesam Oveys, Dylan Pisechko, Austin Yates | Estimation of Mutation Rates from Fluctuation Experiments via
Probability Generating Functions | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper calculates probability distributions modeling the Luria-Delbr\"uck
experiment. We show that by thinking purely in terms of generating functions,
and using a 'backwards in time' paradigm, formulas describing various
situations can be easily obtained. This includes a generating function for
Haldane's probability distribution due to Ycart. We apply our formulas to both
simulated and real data created by looking at yeast cells acquiring an
immunization to the antibiotic canavanine.
This paper is somewhat incomplete, having been last significantly modified on
March 29, 2014. However, the first author feels that this paper has some
worthwhile ideas, and so is going to make this paper publicly available.
| [
{
"created": "Mon, 15 Aug 2016 03:27:48 GMT",
"version": "v1"
}
] | 2016-08-16 | [
[
"Montgomery-Smith",
"Stephen",
""
],
[
"Le",
"Anh",
""
],
[
"Smith",
"George",
""
],
[
"Billstein",
"Sidney",
""
],
[
"Oveys",
"Hesam",
""
],
[
"Pisechko",
"Dylan",
""
],
[
"Yates",
"Austin",
""
]
] | This paper calculates probability distributions modeling the Luria-Delbr\"uck experiment. We show that by thinking purely in terms of generating functions, and using a 'backwards in time' paradigm, formulas describing various situations can be easily obtained. This includes a generating function for Haldane's probability distribution due to Ycart. We apply our formulas to both simulated and real data created by looking at yeast cells acquiring an immunization to the antibiotic canavanine. This paper is somewhat incomplete, having been last significantly modified on March 29, 2014. However, the first author feels that this paper has some worthwhile ideas, and so is going to make this paper publicly available.
2012.07105 | Mariella Panagiotopoulou Miss | Mariella Panagiotopoulou, Christoforos A Papasavvas, Gabrielle M
Schroeder, Rhys H Thomas, Peter N Taylor, Yujiang Wang | Fluctuations in EEG band power at subject-specific timescales over
minutes to days explain changes in seizure evolutions | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epilepsy is recognised as a dynamic disease, where both seizure
susceptibility and seizure characteristics themselves change over time.
Specifically, we recently quantified the variable electrographic
spatio-temporal seizure evolutions that exist within individual patients. This
variability appears to follow subject-specific circadian, or longer, timescale
modulations. It is therefore important to know whether continuously-recorded
interictal iEEG features can capture signatures of these modulations over
different timescales.
In this work, we analyse continuous intracranial electroencephalographic
(iEEG) recordings from video-telemetry units and find fluctuations in iEEG band
power over timescales ranging from minutes up to twelve days.
As expected and in agreement with previous studies, we find that all subjects
show a circadian fluctuation in their iEEG band power. We additionally find
other fluctuations of similar magnitude on subject-specific timescales.
Importantly, we find that a combination of these fluctuations on different
timescales can explain changes in seizure evolutions in most subjects above
chance level.
These results suggest that subject-specific fluctuations in iEEG band power
over timescales of minutes to days may serve as markers of seizure modulating
processes. We hope that future work can link these detected fluctuations to
their biological driver(s). There is a critical need to better understand
seizure modulating processes, as this will enable the development of novel
treatment strategies that could minimise the seizure spread, duration, or
severity and therefore the clinical impact of seizures.
| [
{
"created": "Sun, 13 Dec 2020 17:17:38 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Mar 2021 20:07:47 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Sep 2021 18:18:29 GMT",
"version": "v3"
}
] | 2021-09-06 | [
[
"Panagiotopoulou",
"Mariella",
""
],
[
"Papasavvas",
"Christoforos A",
""
],
[
"Schroeder",
"Gabrielle M",
""
],
[
"Thomas",
"Rhys H",
""
],
[
"Taylor",
"Peter N",
""
],
[
"Wang",
"Yujiang",
""
]
] | Epilepsy is recognised as a dynamic disease, where both seizure susceptibility and seizure characteristics themselves change over time. Specifically, we recently quantified the variable electrographic spatio-temporal seizure evolutions that exist within individual patients. This variability appears to follow subject-specific circadian, or longer, timescale modulations. It is therefore important to know whether continuously-recorded interictal iEEG features can capture signatures of these modulations over different timescales. In this work, we analyse continuous intracranial electroencephalographic (iEEG) recordings from video-telemetry units and find fluctuations in iEEG band power over timescales ranging from minutes up to twelve days. As expected and in agreement with previous studies, we find that all subjects show a circadian fluctuation in their iEEG band power. We additionally find other fluctuations of similar magnitude on subject-specific timescales. Importantly, we find that a combination of these fluctuations on different timescales can explain changes in seizure evolutions in most subjects above chance level. These results suggest that subject-specific fluctuations in iEEG band power over timescales of minutes to days may serve as markers of seizure modulating processes. We hope that future work can link these detected fluctuations to their biological driver(s). There is a critical need to better understand seizure modulating processes, as this will enable the development of novel treatment strategies that could minimise the seizure spread, duration, or severity and therefore the clinical impact of seizures. |
2407.08877 | Wazeer Zulfikar | Wazeer Zulfikar, Nishat Protyasha, Camila Canales, Heli Patel, James
Williamson, Laura Sarnie, Lisa Nowinski, Nataliya Kosmyna, Paige Townsend,
Sophia Yuditskaya, Tanya Talkar, Utkarsh Oggy Sarawgi, Christopher McDougle,
Thomas Quatieri, Pattie Maes, Maria Mody | Analyzing Speech Motor Movement using Surface Electromyography in
Minimally Verbal Adults with Autism Spectrum Disorder | null | null | null | null | q-bio.NC cs.HC | http://creativecommons.org/licenses/by/4.0/ | Adults who are minimally verbal with autism spectrum disorder (mvASD) have
pronounced speech difficulties linked to impaired motor skills. Existing
research and clinical assessments primarily use indirect methods such as
standardized tests, video-based facial features, and handwriting tasks, which
may not directly target speech-related motor skills. In this study, we measure
activity from eight facial muscles associated with speech using surface
electromyography (sEMG), during carefully designed tasks. The findings reveal a
higher power in the sEMG signals and a significantly greater correlation
between the sEMG channels in mvASD adults (N=12) compared to age and
gender-matched neurotypical controls (N=14). This suggests stronger muscle
activation and greater synchrony in the discharge patterns of motor units.
Further, eigenvalues derived from correlation matrices indicate lower
complexity in muscle coordination in mvASD, implying fewer degrees of freedom
in motor control.
| [
{
"created": "Thu, 11 Jul 2024 21:32:20 GMT",
"version": "v1"
}
] | 2024-07-15 | [
[
"Zulfikar",
"Wazeer",
""
],
[
"Protyasha",
"Nishat",
""
],
[
"Canales",
"Camila",
""
],
[
"Patel",
"Heli",
""
],
[
"Williamson",
"James",
""
],
[
"Sarnie",
"Laura",
""
],
[
"Nowinski",
"Lisa",
""
],
[
... | Adults who are minimally verbal with autism spectrum disorder (mvASD) have pronounced speech difficulties linked to impaired motor skills. Existing research and clinical assessments primarily use indirect methods such as standardized tests, video-based facial features, and handwriting tasks, which may not directly target speech-related motor skills. In this study, we measure activity from eight facial muscles associated with speech using surface electromyography (sEMG), during carefully designed tasks. The findings reveal a higher power in the sEMG signals and a significantly greater correlation between the sEMG channels in mvASD adults (N=12) compared to age and gender-matched neurotypical controls (N=14). This suggests stronger muscle activation and greater synchrony in the discharge patterns of motor units. Further, eigenvalues derived from correlation matrices indicate lower complexity in muscle coordination in mvASD, implying fewer degrees of freedom in motor control. |
2011.10854 | Lawrence Ward | Conor L. Morrison and Priscilla E. Greenwood and Lawrence M. Ward | Plastic systemic inhibition controls amplitude while allowing phase
pattern in a stochastic neural field model | Supplementary material available from lward@psych.ubc.ca | Phys. Rev. E 103, 032311 (2021) | 10.1103/PhysRevE.103.032311 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Oscillatory phase pattern formation and amplitude control for a linearized
stochastic neuron field model was investigated by simulating coupled stochastic
processes defined by stochastic differential equations. It was found, for
several choices of parameters, that pattern formation in the phases of these
processes occurred if and only if the amplitudes were allowed to grow large.
Stimulated by recent work on homeostatic inhibitory plasticity, we introduced
static and plastic (adaptive) systemic inhibitory mechanisms to keep the
amplitudes stochastically bounded in subsequent simulations. The systems with
static systemic inhibition exhibited bounded amplitudes but no sustained phase
patterns, whereas the systems with plastic systemic inhibition exhibited both
bounded amplitudes and sustained phase patterns. These results demonstrate that
plastic inhibitory mechanisms in neural field models can stochastically control
amplitudes while allowing patterns of phase synchronization to develop. Similar
mechanisms of plastic systemic inhibition could play a role in regulating
oscillatory functioning in the brain.
| [
{
"created": "Sat, 21 Nov 2020 19:50:29 GMT",
"version": "v1"
}
] | 2021-03-24 | [
[
"Morrison",
"Conor L.",
""
],
[
"Greenwood",
"Priscilla E.",
""
],
[
"Ward",
"Lawrence M.",
""
]
] | Oscillatory phase pattern formation and amplitude control for a linearized stochastic neuron field model was investigated by simulating coupled stochastic processes defined by stochastic differential equations. It was found, for several choices of parameters, that pattern formation in the phases of these processes occurred if and only if the amplitudes were allowed to grow large. Stimulated by recent work on homeostatic inhibitory plasticity, we introduced static and plastic (adaptive) systemic inhibitory mechanisms to keep the amplitudes stochastically bounded in subsequent simulations. The systems with static systemic inhibition exhibited bounded amplitudes but no sustained phase patterns, whereas the systems with plastic systemic inhibition exhibited both bounded amplitudes and sustained phase patterns. These results demonstrate that plastic inhibitory mechanisms in neural field models can stochastically control amplitudes while allowing patterns of phase synchronization to develop. Similar mechanisms of plastic systemic inhibition could play a role in regulating oscillatory functioning in the brain. |
1907.03818 | Matthew Beauregard | Matthew A. Beauregard, Rana D. Parshad, Sarah Boon, Harley Conaway,
Thomas Griffin, Jingjing Lyu | Optimal Control and Analysis of a Modified Trojan Y-Chromosome Strategy | 18 pages, 7 figures | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Trojan Y Chromosome (TYC) Strategy is a promising eradication method that
attempts to manipulate the female to male ratio to promote the reduction of the
population of an invasive species. The manipulation stems from an introduction
of sex-reversed males, called supermales, into an ecosystem. The offspring of
the supermales is guaranteed to be male. Mathematical models have shown that
the population can be driven to extinction with a continuous supply of
supermales. In this paper, a new model of the TYC strategy is introduced and
analyzed that includes two important modeling characteristics, that are
neglected in all previous models. First, the new model includes intraspecies
competition for mates. Second, a strong Allee effect is included. Several
conclusions about the strategy via optimal control are established. These
results have large scale implications for the biological control of invasive
species.
| [
{
"created": "Mon, 8 Jul 2019 19:15:30 GMT",
"version": "v1"
}
] | 2019-07-10 | [
[
"Beauregard",
"Matthew A.",
""
],
[
"Parshad",
"Rana D.",
""
],
[
"Boon",
"Sarah",
""
],
[
"Conaway",
"Harley",
""
],
[
"Griffin",
"Thomas",
""
],
[
"Lyu",
"Jingjing",
""
]
] | The Trojan Y Chromosome (TYC) Strategy is a promising eradication method that attempts to manipulate the female to male ratio to promote the reduction of the population of an invasive species. The manipulation stems from an introduction of sex-reversed males, called supermales, into an ecosystem. The offspring of the supermales is guaranteed to be male. Mathematical models have shown that the population can be driven to extinction with a continuous supply of supermales. In this paper, a new model of the TYC strategy is introduced and analyzed that includes two important modeling characteristics that are neglected in all previous models. First, the new model includes intraspecies competition for mates. Second, a strong Allee effect is included. Several conclusions about the strategy via optimal control are established. These results have large scale implications for the biological control of invasive species.
1609.06980 | Gilberto Nakamura | Gilberto M. Nakamura, Ana Carolina P. Monteiro, George C. Cardoso and
Alexandre S. Martinez | Finite symmetries in agent-based epidemic models | 25 pages, 9 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an algorithm which explores permutation symmetries to describe the
time evolution of agent-based epidemic models. The main idea to improve
computation times relies on restricting the stochastic process to one sector of
the vector space, labeled by a single permutation eigenvalue. In this scheme,
the transition matrix reduces to block diagonal form, enhancing computational
performance.
| [
{
"created": "Wed, 21 Sep 2016 13:41:11 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Sep 2016 18:57:12 GMT",
"version": "v2"
}
] | 2016-09-28 | [
[
"Nakamura",
"Gilberto M.",
""
],
[
"Monteiro",
"Ana Carolina P.",
""
],
[
"Cardoso",
"George C.",
""
],
[
"Martinez",
"Alexandre S.",
""
]
] | We present an algorithm which explores permutation symmetries to describe the time evolution of agent-based epidemic models. The main idea to improve computation times relies on restricting the stochastic process to one sector of the vector space, labeled by a single permutation eigenvalue. In this scheme, the transition matrix reduces to block diagonal form, enhancing computational performance. |
2204.03718 | Ulrich S. Schwarz | Rick Bebon and Ulrich S. Schwarz (Heidelberg University) | First-passage times in complex energy landscapes: a case study with
nonmuscle myosin II assembly | Revtex, 34 pages, 8 figures, minor revisions compared to original
version | null | 10.1088/1367-2630/ac78fd | null | q-bio.SC cond-mat.soft cond-mat.stat-mech q-bio.BM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Complex energy landscapes often arise in biological systems, e.g. for protein
folding, biochemical reactions or intracellular transport processes. Their
physical effects are often reflected in the first-passage times arising from
these energy landscapes. However, their calculation is notoriously challenging
and it is often difficult to identify the most relevant features of a given
energy landscape. Here we show how this can be achieved by coarse-graining the
Fokker-Planck equation to a master equation and decomposing its first-passage
times in an iterative process. We apply this method to the electrostatic
interaction between two rods of nonmuscle myosin II (NM2), which is the main
molecular motor for force generation in nonmuscle cells. Energy landscapes are
computed directly from the amino acid sequences of the three different
isoforms. Our approach allows us to identify the most relevant energy barriers
for their self-assembly into nonmuscle myosin II minifilaments and how they
change under force. In particular, we find that antiparallel configurations are
more stable than parallel ones, but also show more changes under mechanical
loading. Our work demonstrates the rich dynamics that can be expected for
NM2-assemblies under mechanical load and in general shows how one can identify
the most relevant energy barriers in complex energy landscapes.
| [
{
"created": "Thu, 7 Apr 2022 20:13:06 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Jun 2022 20:51:52 GMT",
"version": "v2"
}
] | 2022-07-13 | [
[
"Bebon",
"Rick",
"",
"Heidelberg University"
],
[
"Schwarz",
"Ulrich S.",
"",
"Heidelberg University"
]
] | Complex energy landscapes often arise in biological systems, e.g. for protein folding, biochemical reactions or intracellular transport processes. Their physical effects are often reflected in the first-passage times arising from these energy landscapes. However, their calculation is notoriously challenging and it is often difficult to identify the most relevant features of a given energy landscape. Here we show how this can be achieved by coarse-graining the Fokker-Planck equation to a master equation and decomposing its first-passage times in an iterative process. We apply this method to the electrostatic interaction between two rods of nonmuscle myosin II (NM2), which is the main molecular motor for force generation in nonmuscle cells. Energy landscapes are computed directly from the amino acid sequences of the three different isoforms. Our approach allows us to identify the most relevant energy barriers for their self-assembly into nonmuscle myosin II minifilaments and how they change under force. In particular, we find that antiparallel configurations are more stable than parallel ones, but also show more changes under mechanical loading. Our work demonstrates the rich dynamics that can be expected for NM2-assemblies under mechanical load and in general shows how one can identify the most relevant energy barriers in complex energy landscapes. |
1304.2324 | Jesus Martinez-Linares | Jesus Martinez-Linares | Phase Space Formulation of Population Dynamics in Ecology | 4 pages, 1 figure | null | null | null | q-bio.PE math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A phase space theory for population dynamics in Ecology is presented. This
theory applies for a certain class of dynamical systems, that will be called
M-systems, for which a conserved quantity, the M-function, can be defined in
phase space. This M-function is the generator of time displacements and
contains all the dynamical information of the system. In this sense the
M-function plays the role of the hamiltonian function for mechanical systems.
In analogy with Hamilton theory we derive equations of motion as derivatives
over the resource function in phase space. An M-bracket is defined which allows
one to perform a geometrical approach in analogy to the Poisson bracket of
hamiltonian systems. We show that the equations of motion can be derived from a
variational principle over a functional J of the trajectories. This functional
plays for M-systems the same role as the action S for hamiltonian systems.
Finally, three important systems in population dynamics, namely,
Lotka-Volterra, self-feeding and logistic evolution, are shown to be M-systems.
| [
{
"created": "Mon, 8 Apr 2013 19:23:20 GMT",
"version": "v1"
}
] | 2013-04-09 | [
[
"Martinez-Linares",
"Jesus",
""
]
] | A phase space theory for population dynamics in Ecology is presented. This theory applies for a certain class of dynamical systems, that will be called M-systems, for which a conserved quantity, the M-function, can be defined in phase space. This M-function is the generator of time displacements and contains all the dynamical information of the system. In this sense the M-function plays the role of the hamiltonian function for mechanical systems. In analogy with Hamilton theory we derive equations of motion as derivatives over the resource function in phase space. An M-bracket is defined which allows one to perform a geometrical approach in analogy to the Poisson bracket of hamiltonian systems. We show that the equations of motion can be derived from a variational principle over a functional J of the trajectories. This functional plays for M-systems the same role as the action S for hamiltonian systems. Finally, three important systems in population dynamics, namely, Lotka-Volterra, self-feeding and logistic evolution, are shown to be M-systems.
1301.4511 | Wlodek Bryc | Katarzyna Bryc, Wlodek Bryc, Jack W. Silverstein | Separation of the largest eigenvalues in eigenanalysis of genotype data
from discrete subpopulations | Corrected typos in Section 3.1 (M=120, N=2500) and proof of Lemma 2 | Theoretical Population Biology 89 (2013) 34-43 | 10.1016/j.tpb.2013.08.004 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a mathematical model, and the corresponding mathematical analysis,
that justifies and quantifies the use of principal component analysis of
biallelic genetic marker data for a set of individuals to detect the number of
subpopulations represented in the data. We indicate that the power of the
technique relies more on the number of individuals genotyped than on the number
of markers.
| [
{
"created": "Fri, 18 Jan 2013 21:59:10 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Oct 2017 14:58:15 GMT",
"version": "v2"
}
] | 2017-10-19 | [
[
"Bryc",
"Katarzyna",
""
],
[
"Bryc",
"Wlodek",
""
],
[
"Silverstein",
"Jack W.",
""
]
] | We present a mathematical model, and the corresponding mathematical analysis, that justifies and quantifies the use of principal component analysis of biallelic genetic marker data for a set of individuals to detect the number of subpopulations represented in the data. We indicate that the power of the technique relies more on the number of individuals genotyped than on the number of markers. |
1103.5625 | Reginald Smith | Reginald D. Smith | Information Theory and Population Genetics | 29 pages, 11 figures | null | null | null | q-bio.PE cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The key findings of classical population genetics are derived using a
framework based on information theory using the entropies of the allele
frequency distribution as a basis. The common results for drift, mutation,
selection, and gene flow will be rewritten both in terms of information
theoretic measurements and used to draw the classic conclusions for balance
conditions and common features of one locus dynamics. Linkage disequilibrium
will also be discussed including the relationship between mutual information
and r^2 and a simple model of hitchhiking.
| [
{
"created": "Mon, 21 Mar 2011 15:45:40 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jun 2012 16:32:33 GMT",
"version": "v2"
}
] | 2012-06-11 | [
[
"Smith",
"Reginald D.",
""
]
] | The key findings of classical population genetics are derived using a framework based on information theory using the entropies of the allele frequency distribution as a basis. The common results for drift, mutation, selection, and gene flow will be rewritten both in terms of information theoretic measurements and used to draw the classic conclusions for balance conditions and common features of one locus dynamics. Linkage disequilibrium will also be discussed including the relationship between mutual information and r^2 and a simple model of hitchhiking. |
2312.07012 | Vikram Singh | Vikram Singh, Vikram Singh | Inferring interaction networks from transcriptomic data: methods and
applications | 48 pages, 3 figures | null | null | null | q-bio.MN q-bio.BM q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Transcriptomic data is a treasure-trove in modern molecular biology, as it
offers a comprehensive viewpoint into the intricate nuances of gene expression
dynamics underlying biological systems. This genetic information must be
utilised to infer biomolecular interaction networks that can provide insights
into the complex regulatory mechanisms underpinning the dynamic cellular
processes. Gene regulatory networks and protein-protein interaction networks
are two major classes of such networks. This chapter thoroughly investigates
the wide range of methodologies used for distilling insightful revelations from
transcriptomic data that include association based methods (based on
correlation among expression vectors), probabilistic models (using Bayesian and
Gaussian models), and interologous methods. We reviewed different approaches
for evaluating the significance of interactions based on the network topology
and biological functions of the interacting molecules, and discuss various
strategies for the identification of functional modules. The chapter concludes
with highlighting network based techniques of prioritising key genes, outlining
the centrality based, diffusion based and subgraph based methods. The chapter
provides a meticulous framework for investigating transcriptomic data to
uncover assembly of complex molecular networks for their adaptable analyses
across a broad spectrum of biological domains.
| [
{
"created": "Tue, 12 Dec 2023 06:56:08 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Singh",
"Vikram",
""
],
[
"Singh",
"Vikram",
""
]
] | Transcriptomic data is a treasure-trove in modern molecular biology, as it offers a comprehensive viewpoint into the intricate nuances of gene expression dynamics underlying biological systems. This genetic information must be utilised to infer biomolecular interaction networks that can provide insights into the complex regulatory mechanisms underpinning the dynamic cellular processes. Gene regulatory networks and protein-protein interaction networks are two major classes of such networks. This chapter thoroughly investigates the wide range of methodologies used for distilling insightful revelations from transcriptomic data that include association based methods (based on correlation among expression vectors), probabilistic models (using Bayesian and Gaussian models), and interologous methods. We reviewed different approaches for evaluating the significance of interactions based on the network topology and biological functions of the interacting molecules, and discuss various strategies for the identification of functional modules. The chapter concludes with highlighting network based techniques of prioritising key genes, outlining the centrality based, diffusion based and subgraph based methods. The chapter provides a meticulous framework for investigating transcriptomic data to uncover assembly of complex molecular networks for their adaptable analyses across a broad spectrum of biological domains. |
q-bio/0312010 | Eldon Emberly | Susanne Moelbert, Eldon Emberly and Chao Tang | Correlation between sequence hydrophobicity and surface-exposure pattern
of database proteins | 16 pages, 2 tables, 8 figures | null | null | null | q-bio.BM | null | Hydrophobicity is thought to be one of the primary forces driving the folding
of proteins. On average, hydrophobic residues occur preferentially in the core,
whereas polar residues tend to occur at the surface of a folded protein. By
analyzing the known protein structures, we quantify the degree to which the
hydrophobicity sequence of a protein correlates with its pattern of surface
exposure. We have assessed the statistical significance of this correlation for
several hydrophobicity scales in the literature, and find that the computed
correlations are significant but far from optimal. We show that this less than
optimal correlation arises primarily from the large degree of mutations that
naturally occurring proteins can tolerate. Lesser effects are due in part to
forces other than hydrophobicity and we quantify this by analyzing the surface
exposure distributions of all amino acids. Lastly we show that our database
findings are consistent with those found from an off-lattice hydrophobic-polar
model of protein folding.
| [
{
"created": "Mon, 8 Dec 2003 15:20:28 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Moelbert",
"Susanne",
""
],
[
"Emberly",
"Eldon",
""
],
[
"Tang",
"Chao",
""
]
] | Hydrophobicity is thought to be one of the primary forces driving the folding of proteins. On average, hydrophobic residues occur preferentially in the core, whereas polar residues tend to occur at the surface of a folded protein. By analyzing the known protein structures, we quantify the degree to which the hydrophobicity sequence of a protein correlates with its pattern of surface exposure. We have assessed the statistical significance of this correlation for several hydrophobicity scales in the literature, and find that the computed correlations are significant but far from optimal. We show that this less than optimal correlation arises primarily from the large degree of mutations that naturally occurring proteins can tolerate. Lesser effects are due in part to forces other than hydrophobicity and we quantify this by analyzing the surface exposure distributions of all amino acids. Lastly we show that our database findings are consistent with those found from an off-lattice hydrophobic-polar model of protein folding.
q-bio/0509041 | Emmanuel Tannenbaum | Emmanuel Tannenbaum | Sexual replication in the quasispecies model | 7 pages, 4 figures, submitted to The Journal of Theoretical Biology | null | null | null | q-bio.PE q-bio.CB | null | This paper develops a simplified model for sexual replication within the
quasispecies formalism. We assume that the genomes of the replicating organisms
are two-chromosomed and diploid, and that the fitness is determined by the
number of chromosomes that are identical to a given master sequence. We also
assume that there is a cost to sexual replication, given by a characteristic
time $ \tau_{seek} $ during which haploid cells seek out a mate with which to
recombine. If the mating strategy is such that only viable haploids can mate,
then when $ \tau_{seek} = 0 $, it is possible to show that sexual replication
will always outcompete asexual replication. However, as $ \tau_{seek} $
increases, sexual replication only becomes advantageous at progressively higher
mutation rates. Once the time cost for sex reaches a critical threshold, the
selective advantage for sexual replication disappears entirely. The results of
this paper suggest that sexual replication is not advantageous in small
populations per se, but rather in populations with low replication rates. In
this regime, the cost for sex is sufficiently low that the selective advantage
obtained through recombination leads to the dominance of the strategy. In fact,
at a given replication rate and for a fixed environment volume, sexual
replication is selected for in high populations because of the reduced time
spent finding a reproductive partner.
| [
{
"created": "Thu, 29 Sep 2005 10:34:55 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Sep 2005 08:42:35 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Tannenbaum",
"Emmanuel",
""
]
] | This paper develops a simplified model for sexual replication within the quasispecies formalism. We assume that the genomes of the replicating organisms are two-chromosomed and diploid, and that the fitness is determined by the number of chromosomes that are identical to a given master sequence. We also assume that there is a cost to sexual replication, given by a characteristic time $ \tau_{seek} $ during which haploid cells seek out a mate with which to recombine. If the mating strategy is such that only viable haploids can mate, then when $ \tau_{seek} = 0 $, it is possible to show that sexual replication will always outcompete asexual replication. However, as $ \tau_{seek} $ increases, sexual replication only becomes advantageous at progressively higher mutation rates. Once the time cost for sex reaches a critical threshold, the selective advantage for sexual replication disappears entirely. The results of this paper suggest that sexual replication is not advantageous in small populations per se, but rather in populations with low replication rates. In this regime, the cost for sex is sufficiently low that the selective advantage obtained through recombination leads to the dominance of the strategy. In fact, at a given replication rate and for a fixed environment volume, sexual replication is selected for in high populations because of the reduced time spent finding a reproductive partner. |
2404.04181 | James Yorke | B Shayak, Sana Jahedi, James A Yorke | Ambiguity in the use of SIR models to fit epidemic incidence data | null | null | null | null | q-bio.PE math.DS physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | When fitting a multi-parameter model to a data set, computer algorithms may
suggest that a range of parameters provide equally reasonable fits, making the
parameter estimation difficult. Here, we prove this fact for an SIR model. We
say a set of parameter values is a good fit to outbreak data if the solution
has the data's three most significant characteristics: the standard deviation,
the mean time, and the total number of cases. In our model, in addition to the
"basic reproduction number" $R_0$, three other parameters need to be estimated
to fit a solution to outbreak data. We will show that those parameters can be
chosen so that each gives a linear transformation of a solution's incidence
data. As a result, we show that for every choice of $R_0>1$, there is a good
fit for each outbreak. We also illustrate our results by providing the least
square best fits of the New York City and London data sets of the Omicron
variant of COVID-19. Furthermore, we show how versions of the SIR model with
$N$ compartments have far more good fits -- indeed a high dimensional set of
good fits -- for each target -- showing that more complicated models may have
an even greater problem in overparametrizing outbreak characteristics.
| [
{
"created": "Fri, 5 Apr 2024 15:51:27 GMT",
"version": "v1"
}
] | 2024-04-08 | [
[
"Shayak",
"B",
""
],
[
"Jahedi",
"Sana",
""
],
[
"Yorke",
"James A",
""
]
] | When fitting a multi-parameter model to a data set, computer algorithms may suggest that a range of parameters provide equally reasonable fits, making the parameter estimation difficult. Here, we prove this fact for an SIR model. We say a set of parameter values is a good fit to outbreak data if the solution has the data's three most significant characteristics: the standard deviation, the mean time, and the total number of cases. In our model, in addition to the "basic reproduction number" $R_0$, three other parameters need to be estimated to fit a solution to outbreak data. We will show that those parameters can be chosen so that each gives a linear transformation of a solution's incidence data. As a result, we show that for every choice of $R_0>1$, there is a good fit for each outbreak. We also illustrate our results by providing the least square best fits of the New York City and London data sets of the Omicron variant of COVID-19. Furthermore, we show how versions of the SIR model with $N$ compartments have far more good fits -- indeed a high dimensional set of good fits -- for each target -- showing that more complicated models may have an even greater problem in overparametrizing outbreak characteristics. |
1808.04113 | Zhu Yang | Kai Wang, Zhu Yang, Dongjin Qing, Feng Ren, Shichang Liu, Qingsong
Zheng, Jun Liu, Weiping Zhang, Chen Dai, Madeline Wu, E. Wassim Chehab, Janet
Braam, and Ning Li | Quantitative and functional post-translational modification proteomics
reveals that TREPH1 plays a role in plant thigmomorphogenesis | null | null | 10.1073/pnas.1814006115 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plants can sense both intracellular and extracellular mechanical forces and
can respond through morphological changes. The signaling components responsible
for mechanotransduction of the touch response are largely unknown. Here, we
performed a high-throughput SILIA (stable isotope labeling in
Arabidopsis)-based quantitative phosphoproteomics analysis to profile changes
in protein phosphorylation resulting from 40 seconds of force stimulation in
Arabidopsis thaliana. Of the 24 touch-responsive phosphopeptides identified,
many were derived from kinases, phosphatases, cytoskeleton proteins, membrane
proteins and ion transporters. TOUCH-REGULATED PHOSPHOPROTEIN1 (TREPH1) and MAP
KINASE KINASE 2 (MKK2) and/or MKK1 became rapidly phosphorylated in
touch-stimulated plants. Both TREPH1 and MKK2 are required for touch-induced
delayed flowering, a major component of thigmomorphogenesis. The treph1-1 and
mkk2 mutants also exhibited defects in touch-inducible gene expression. A
non-phosphorylatable site-specific isoform of TREPH1 (S625A) failed to restore
touch-induced flowering delay of treph1-1, indicating the necessity of S625 for
TREPH1 function and providing evidence consistent with the possible functional
relevance of the touch-regulated TREPH1 phosphorylation. Bioinformatic analysis
and biochemical subcellular fractionation of TREPH1 protein indicate that it is
a soluble protein. Altogether, these findings identify new protein players in
Arabidopsis thigmomorphogenesis regulation, suggesting that protein
phosphorylation may play a critical role in plant force responses.
| [
{
"created": "Mon, 13 Aug 2018 09:05:50 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Wang",
"Kai",
""
],
[
"Yang",
"Zhu",
""
],
[
"Qing",
"Dongjin",
""
],
[
"Ren",
"Feng",
""
],
[
"Liu",
"Shichang",
""
],
[
"Zheng",
"Qingsong",
""
],
[
"Liu",
"Jun",
""
],
[
"Zhang",
"Weiping",
... | Plants can sense both intracellular and extracellular mechanical forces and can respond through morphological changes. The signaling components responsible for mechanotransduction of the touch response are largely unknown. Here, we performed a high-throughput SILIA (stable isotope labeling in Arabidopsis)-based quantitative phosphoproteomics analysis to profile changes in protein phosphorylation resulting from 40 seconds of force stimulation in Arabidopsis thaliana. Of the 24 touch-responsive phosphopeptides identified, many were derived from kinases, phosphatases, cytoskeleton proteins, membrane proteins and ion transporters. TOUCH-REGULATED PHOSPHOPROTEIN1 (TREPH1) and MAP KINASE KINASE 2 (MKK2) and/or MKK1 became rapidly phosphorylated in touch-stimulated plants. Both TREPH1 and MKK2 are required for touch-induced delayed flowering, a major component of thigmomorphogenesis. The treph1-1 and mkk2 mutants also exhibited defects in touch-inducible gene expression. A non-phosphorylatable site-specific isoform of TREPH1 (S625A) failed to restore touch-induced flowering delay of treph1-1, indicating the necessity of S625 for TREPH1 function and providing evidence consistent with the possible functional relevance of the touch-regulated TREPH1 phosphorylation. Bioinformatic analysis and biochemical subcellular fractionation of TREPH1 protein indicate that it is a soluble protein. Altogether, these findings identify new protein players in Arabidopsis thigmomorphogenesis regulation, suggesting that protein phosphorylation may play a critical role in plant force responses. |
1810.12860 | Peter Karp | Peter D. Karp, Natalia Ivanova, Markus Krummenacker, Nikos Kyrpides,
Mario Latendresse, Peter Midford, Wai Kit Ong, Suzanne Paley, and Rekha
Seshadri | A Comparison of Microbial Genome Web Portals | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microbial genome web portals have a broad range of capabilities that address
a number of information-finding and analysis needs for scientists. This article
compares the capabilities of the major microbial genome web portals to aid
researchers in determining which portal(s) are best suited to solving their
information-finding and analytical needs. We assessed both the bioinformatics
tools and the data content of BioCyc, KEGG, Ensembl Bacteria, KBase, IMG, and
PATRIC. For each portal, our assessment compared and tallied the available
capabilities. The strengths of BioCyc include its genomic and metabolic tools,
multi-search capabilities, table-based analysis tools, regulatory network tools
and data, omics data analysis tools, breadth of data content, and large amount
of curated data. The strengths of KEGG include its genomic and metabolic tools.
The strengths of Ensembl Bacteria include its genomic tools and large number of
genomes. The strengths of KBase include its genomic tools and metabolic models.
The strengths of IMG include its genomic tools, multi-search capabilities,
large number of genomes, table-based analysis tools, and breadth of data
content. The strengths of PATRIC include its large number of genomes,
table-based analysis tools, metabolic models, and breadth of data content.
| [
{
"created": "Tue, 30 Oct 2018 17:01:34 GMT",
"version": "v1"
}
] | 2018-10-31 | [
[
"Karp",
"Peter D.",
""
],
[
"Ivanova",
"Natalia",
""
],
[
"Krummenacker",
"Markus",
""
],
[
"Kyrpides",
"Nikos",
""
],
[
"Latendresse",
"Mario",
""
],
[
"Midford",
"Peter",
""
],
[
"Ong",
"Wai Kit",
""
],... | Microbial genome web portals have a broad range of capabilities that address a number of information-finding and analysis needs for scientists. This article compares the capabilities of the major microbial genome web portals to aid researchers in determining which portal(s) are best suited to solving their information-finding and analytical needs. We assessed both the bioinformatics tools and the data content of BioCyc, KEGG, Ensembl Bacteria, KBase, IMG, and PATRIC. For each portal, our assessment compared and tallied the available capabilities. The strengths of BioCyc include its genomic and metabolic tools, multi-search capabilities, table-based analysis tools, regulatory network tools and data, omics data analysis tools, breadth of data content, and large amount of curated data. The strengths of KEGG include its genomic and metabolic tools. The strengths of Ensembl Bacteria include its genomic tools and large number of genomes. The strengths of KBase include its genomic tools and metabolic models. The strengths of IMG include its genomic tools, multi-search capabilities, large number of genomes, table-based analysis tools, and breadth of data content. The strengths of PATRIC include its large number of genomes, table-based analysis tools, metabolic models, and breadth of data content. |
1409.3911 | Liane Gabora | Liane Gabora | Probing the Mind Behind the (Literal and Figurative) Lightbulb | 16 pages; requested commentary on "Thomas Edison's creative career:
The Multilayered trajectory of trials, errors, failures, and triumphs" by
Dean Simonton. < http://psycnet.apa.org/psycinfo/2014-32896-001/ > Both
target paper and commentary are in press in Psychology of Aesthetics,
Creativity, and the Arts | 2015. Psychology of Aesthetics, Creativity, and the Arts, 9(1),
20-24 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After doing away with the evolutionary scaffold for BVSR, what remains is a
notion of "blindness" that does not distinguish BVSR from other theories of
creativity, and an assumption that creativity can be understood by treating
ideas as discrete, countable entities, as opposed to different external
manifestations of a singular gradually solidifying internal conception.
Uprooted from Darwinian theory, BVSR lacks a scientific framework that can be
called upon to generate hypotheses and test them. In lieu of such a framework,
hypotheses appear to be generated on the basis of previous data--they are not
theory-driven. The paper does not explain how the hypothesis that creativity is
enhanced by engagement in a "network of enterprises" is derived from BVSR; this
hypothesis is more compatible with competing conceptions of creativity. The
notion that creativity involves backtracking conflates evidence for
backtracking with respect to the external output with evidence for backtracking
of the conception of the invention. The first does not imply the second; a
creator can set aside a creative output but cannot go back to the conception of
the task he/she had prior to generating that output. The notion that creativity
entails superfluity (i.e., many ideas have "zero usefulness") is misguided;
usefulness is context-dependent, moreover, the usefulness of an idea may reside
in its being a critical stepping-stone to a subsequent idea.
| [
{
"created": "Sat, 13 Sep 2014 04:56:48 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Sep 2014 00:56:05 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Sep 2014 16:11:13 GMT",
"version": "v3"
},
{
"created": "Wed, 25 Feb 2015 18:55:02 GMT",
"version": "v4"
}
] | 2015-02-26 | [
[
"Gabora",
"Liane",
""
]
] | After doing away with the evolutionary scaffold for BVSR, what remains is a notion of "blindness" that does not distinguish BVSR from other theories of creativity, and an assumption that creativity can be understood by treating ideas as discrete, countable entities, as opposed to different external manifestations of a singular gradually solidifying internal conception. Uprooted from Darwinian theory, BVSR lacks a scientific framework that can be called upon to generate hypotheses and test them. In lieu of such a framework, hypotheses appear to be generated on the basis of previous data--they are not theory-driven. The paper does not explain how the hypothesis that creativity is enhanced by engagement in a "network of enterprises" is derived from BVSR; this hypothesis is more compatible with competing conceptions of creativity. The notion that creativity involves backtracking conflates evidence for backtracking with respect to the external output with evidence for backtracking of the conception of the invention. The first does not imply the second; a creator can set aside a creative output but cannot go back to the conception of the task he/she had prior to generating that output. The notion that creativity entails superfluity (i.e., many ideas have "zero usefulness") is misguided; usefulness is context-dependent, moreover, the usefulness of an idea may reside in its being a critical stepping-stone to a subsequent idea. |
2401.00214 | Yuxin Geng | Yuxin Geng, Xingru Chen | Evolutionary Dynamics with Randomly Distributed Benevolent Individuals | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the evolution of cooperation is pivotal in biology and social
science. Public resources sharing is a common scenario in the real world. In
our study, we explore the evolutionary dynamics of cooperation on a regular
graph with degree $k$, introducing the presence of a third strategy, namely the
benevolence, who does not evolve over time, but provides a fixed benefit to all
its neighbors. We find that the presence of the benevolence can foster the
development of cooperative behavior and it follows a simple rule: $b/c > k -
p_S(k-1)$. Our results provide new insights into the evolution of cooperation
in structured populations.
| [
{
"created": "Sat, 30 Dec 2023 12:22:05 GMT",
"version": "v1"
}
] | 2024-01-02 | [
[
"Geng",
"Yuxin",
""
],
[
"Chen",
"Xingru",
""
]
] | Understanding the evolution of cooperation is pivotal in biology and social science. Public resources sharing is a common scenario in the real world. In our study, we explore the evolutionary dynamics of cooperation on a regular graph with degree $k$, introducing the presence of a third strategy, namely the benevolence, who does not evolve over time, but provides a fixed benefit to all its neighbors. We find that the presence of the benevolence can foster the development of cooperative behavior and it follows a simple rule: $b/c > k - p_S(k-1)$. Our results provide new insights into the evolution of cooperation in structured populations. |
2403.17446 | Matthew Holden | Matthew H. Holden, Eva E. Plag\'anyi, Elizabeth A. Fulton, Alexander
B. Campbell, Rachel Janes, Robyn A. Lovett, Montana Wickens, Matthew P.
Adams, Larissa Lubiana Botelho, Catherine M. Dichmont, Philip Erm, Kate J
Helmstedt, Ryan F. Heneghan, Manuela Mendiolar, Anthony J. Richardson, Jacob
G. D. Rogers, Kate Saunders, Liam Timms | Cost-benefit analysis of ecosystem modelling to support fisheries
management | null | null | 10.1111/jfb.15741 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Mathematical and statistical models underlie many of the world's most
important fisheries management decisions. Since the 19th century, difficulty
calibrating and fitting such models has been used to justify the selection of
simple, stationary, single-species models to aid tactical fisheries management
decisions. Whereas these justifications are reasonable, it is imperative that
we quantify the value of different levels of model complexity for supporting
fisheries management, especially given a changing climate, where old
methodologies may no longer perform as well as in the past. Here we argue that
cost-benefit analysis is an ideal lens to assess the value of model complexity
in fisheries management. While some studies have reported the benefits of model
complexity in fisheries, modeling costs are rarely considered. In the absence
of cost data in the literature, we report, as a starting point, relative costs
of single-species stock assessment and marine ecosystem models from two
Australian organizations. We found that costs varied by two orders of
magnitude, and that ecosystem model costs increased with model complexity.
Using these costs, we walk through a hypothetical example of cost-benefit
analysis. The demonstration is intended to catalyze the reporting of modeling
costs and benefits.
| [
{
"created": "Tue, 26 Mar 2024 07:24:28 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Holden",
"Matthew H.",
""
],
[
"Plagányi",
"Eva E.",
""
],
[
"Fulton",
"Elizabeth A.",
""
],
[
"Campbell",
"Alexander B.",
""
],
[
"Janes",
"Rachel",
""
],
[
"Lovett",
"Robyn A.",
""
],
[
"Wickens",
"Montana",... | Mathematical and statistical models underlie many of the world's most important fisheries management decisions. Since the 19th century, difficulty calibrating and fitting such models has been used to justify the selection of simple, stationary, single-species models to aid tactical fisheries management decisions. Whereas these justifications are reasonable, it is imperative that we quantify the value of different levels of model complexity for supporting fisheries management, especially given a changing climate, where old methodologies may no longer perform as well as in the past. Here we argue that cost-benefit analysis is an ideal lens to assess the value of model complexity in fisheries management. While some studies have reported the benefits of model complexity in fisheries, modeling costs are rarely considered. In the absence of cost data in the literature, we report, as a starting point, relative costs of single-species stock assessment and marine ecosystem models from two Australian organizations. We found that costs varied by two orders of magnitude, and that ecosystem model costs increased with model complexity. Using these costs, we walk through a hypothetical example of cost-benefit analysis. The demonstration is intended to catalyze the reporting of modeling costs and benefits. |
2106.07206 | Deok-Sun Lee | Hyun Woo Lee, Jae Woo Lee, Deok-Sun Lee | Stability and selective extinction in complex mutualistic networks | 7 figures | Physical Review E 105, 014309 (2022) | 10.1103/PhysRevE.105.014309 | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | We study species abundance in the empirical plant-pollinator mutualistic
networks exhibiting broad degree distributions, with uniform intra-group
competition assumed, by the Lotka-Volterra equation. The stability of a fixed
point is found to be identified by the signs of its non-zero components and
those of its neighboring fixed points. Taking the annealed approximation, we
derive the non-zero components to be formulated in terms of degrees and the
rescaled interaction strengths, which lead us to find different stable fixed
points depending on parameters, and we obtain the phase diagram. The selective
extinction phase finds small-degree species extinct and effective interaction
reduced, maintaining stability and hindering the onset of instability. The
non-zero minimum species abundances from different empirical networks show data
collapse when rescaled as predicted theoretically.
| [
{
"created": "Mon, 14 Jun 2021 07:44:35 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jan 2022 05:22:29 GMT",
"version": "v2"
}
] | 2022-01-25 | [
[
"Lee",
"Hyun Woo",
""
],
[
"Lee",
"Jae Woo",
""
],
[
"Lee",
"Deok-Sun",
""
]
] | We study species abundance in the empirical plant-pollinator mutualistic networks exhibiting broad degree distributions, with uniform intra-group competition assumed, by the Lotka-Volterra equation. The stability of a fixed point is found to be identified by the signs of its non-zero components and those of its neighboring fixed points. Taking the annealed approximation, we derive the non-zero components to be formulated in terms of degrees and the rescaled interaction strengths, which lead us to find different stable fixed points depending on parameters, and we obtain the phase diagram. The selective extinction phase finds small-degree species extinct and effective interaction reduced, maintaining stability and hindering the onset of instability. The non-zero minimum species abundances from different empirical networks show data collapse when rescaled as predicted theoretically. |
2201.13299 | Jiahan Li | Jiahan Li | Directed Weight Neural Networks for Protein Structure Representation
Learning | null | null | null | null | q-bio.BM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A protein performs biological functions by folding to a particular 3D
structure. To accurately model the protein structures, both the overall
geometric topology and local fine-grained relations between amino acids (e.g.
side-chain torsion angles and inter-amino-acid orientations) should be
carefully considered. In this work, we propose the Directed Weight Neural
Network for better capturing geometric relations among different amino acids.
Extending a single weight from a scalar to a 3D directed vector, our new
framework supports a rich set of geometric operations on both classical and
SO(3)--representation features, on top of which we construct a perceptron unit
for processing amino-acid information. In addition, we introduce an equivariant
message passing paradigm on proteins for plugging the directed weight
perceptrons into existing Graph Neural Networks, showing superior versatility
in maintaining SO(3)-equivariance at the global scale. Experiments show that
our network has remarkably better expressiveness in representing geometric
relations in comparison to classical neural networks and the (globally)
equivariant networks. It also achieves state-of-the-art performance on various
computational biology applications related to protein 3D structures.
| [
{
"created": "Fri, 28 Jan 2022 13:41:56 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Mar 2022 15:07:43 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Jul 2022 11:31:06 GMT",
"version": "v3"
},
{
"created": "Sat, 17 Sep 2022 07:13:25 GMT",
"version": "v4"
}
] | 2022-09-20 | [
[
"Li",
"Jiahan",
""
]
] | A protein performs biological functions by folding to a particular 3D structure. To accurately model the protein structures, both the overall geometric topology and local fine-grained relations between amino acids (e.g. side-chain torsion angles and inter-amino-acid orientations) should be carefully considered. In this work, we propose the Directed Weight Neural Network for better capturing geometric relations among different amino acids. Extending a single weight from a scalar to a 3D directed vector, our new framework supports a rich set of geometric operations on both classical and SO(3)--representation features, on top of which we construct a perceptron unit for processing amino-acid information. In addition, we introduce an equivariant message passing paradigm on proteins for plugging the directed weight perceptrons into existing Graph Neural Networks, showing superior versatility in maintaining SO(3)-equivariance at the global scale. Experiments show that our network has remarkably better expressiveness in representing geometric relations in comparison to classical neural networks and the (globally) equivariant networks. It also achieves state-of-the-art performance on various computational biology applications related to protein 3D structures. |
0806.1872 | Sebastian Risau-Gusman | Sebastian Risau-Gusman, Damian H. Zanette | Contact switching as a control strategy for epidemic outbreaks | 21 pages, 8 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the effects of switching social contacts as a strategy to control
epidemic outbreaks. Connections between susceptible and infective individuals
can be broken by either individual, and then reconnected to a randomly chosen
member of the population. It is assumed that the reconnecting individual has no
previous information on the epidemiological condition of the new contact. We
show that reconnection can completely suppress the disease, both by continuous
and discontinuous transitions between the endemic and the infection-free
states. For diseases with an asymptomatic phase, we analyze the conditions for
the suppression of the disease, and show that, even when these conditions are
not met, the increase of the endemic infection level is usually rather small.
We conclude that, within some simple epidemiological models, contact switching
is a quite robust and effective control strategy. This suggests that it may
also be an efficient method in more complex situations.
| [
{
"created": "Wed, 11 Jun 2008 13:36:59 GMT",
"version": "v1"
}
] | 2008-06-12 | [
[
"Risau-Gusman",
"Sebastian",
""
],
[
"Zanette",
"Damian H.",
""
]
] | We study the effects of switching social contacts as a strategy to control epidemic outbreaks. Connections between susceptible and infective individuals can be broken by either individual, and then reconnected to a randomly chosen member of the population. It is assumed that the reconnecting individual has no previous information on the epidemiological condition of the new contact. We show that reconnection can completely suppress the disease, both by continuous and discontinuous transitions between the endemic and the infection-free states. For diseases with an asymptomatic phase, we analyze the conditions for the suppression of the disease, and show that, even when these conditions are not met, the increase of the endemic infection level is usually rather small. We conclude that, within some simple epidemiological models, contact switching is a quite robust and effective control strategy. This suggests that it may also be an efficient method in more complex situations. |
q-bio/0511024 | Gustavo Camelo Neto | V.M. Kenkre, L. Giuggioli, G. Abramson, and G. Camelo-Neto | Theory of Hantavirus Infection Spread Incorporating Localized Adult and
Itinerant Juvenile Mice | 12 pages, 8 eps figures. Submitted to Phys. Rev. E | null | null | null | q-bio.PE cond-mat.stat-mech | null | A generalized model of the spread of the Hantavirus in mice populations is
presented on the basis of recent observational findings concerning the movement
characteristics of the mice that carry the infection. The factual information
behind the generalization is based on mark-recapture observations reported in
Giuggioli et al. [Bull. Math. Biol. 67, 1135 (2005)] that have necessitated the
introduction of home ranges in the simple model of Hantavirus spread presented
by Abramson and Kenkre [Phys. Rev. E 66, 11912 (2002)]. The essential feature
of the model presented here is the existence of adult mice that remain largely
confined to locations near their home ranges, and itinerant juvenile mice that
are not so confined, and, during their search for their own homes, move and
infect both other juveniles and adults that they meet during their movement.
The model is presented at three levels of description: mean field, kinetic and
configuration. Results of calculations are shown explicitly from the mean field
equations and the simulation rules, and are found to agree in some respects and
to differ in others. The origin of the differences is shown to lie in spatial
correlations. It is indicated how mark-recapture observations in the field may
be employed to verify the applicability of the theory.
| [
{
"created": "Tue, 15 Nov 2005 19:16:15 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kenkre",
"V. M.",
""
],
[
"Giuggioli",
"L.",
""
],
[
"Abramson",
"G.",
""
],
[
"Camelo-Neto",
"G.",
""
]
] | A generalized model of the spread of the Hantavirus in mice populations is presented on the basis of recent observational findings concerning the movement characteristics of the mice that carry the infection. The factual information behind the generalization is based on mark-recapture observations reported in Giuggioli et al. [Bull. Math. Biol. 67, 1135 (2005)] that have necessitated the introduction of home ranges in the simple model of Hantavirus spread presented by Abramson and Kenkre [Phys. Rev. E 66, 11912 (2002)]. The essential feature of the model presented here is the existence of adult mice that remain largely confined to locations near their home ranges, and itinerant juvenile mice that are not so confined, and, during their search for their own homes, move and infect both other juveniles and adults that they meet during their movement. The model is presented at three levels of description: mean field, kinetic and configuration. Results of calculations are shown explicitly from the mean field equations and the simulation rules, and are found to agree in some respects and to differ in others. The origin of the differences is shown to lie in spatial correlations. It is indicated how mark-recapture observations in the field may be employed to verify the applicability of the theory. |
2306.04291 | Valerie Gabelica | Anirban Ghosh (ARNA), Marko Trajkovski, Marie-paule Teulade-Fichou
(CMBC), Val\'erie Gabelica (ARNA, IECB), Janez Plavec | Phen-DC 3 Induces Refolding of Human Telomeric DNA into a Chair-Type
Antiparallel G-Quadruplex through Ligand Intercalation | null | Angewandte Chemie International Edition, 2022, 61 (40),
pp.e202207384 | 10.1002/anie.202207384 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human telomeric G-quadruplex DNA structures are attractive anticancer drug
targets, but the target's polymorphism complicates the drug design: different
ligands prefer different folds, and very few complexes have been solved at high
resolution. Here we report that Phen-DC3, one of the most prominent
G-quadruplex ligands in terms of high binding affinity and selectivity, causes
dTAGGG(TTAGGG)3 to completely change its fold in KCl solution from a hybrid-1
to an antiparallel chair-type structure, wherein the ligand intercalates
between a two-quartet unit and a pseudo-quartet, thereby ejecting one potassium
ion. This unprecedented high-resolution NMR structure shows for the first time
a true ligand intercalation into an intramolecular G-quadruplex.
| [
{
"created": "Wed, 7 Jun 2023 09:45:06 GMT",
"version": "v1"
}
] | 2023-06-08 | [
[
"Ghosh",
"Anirban",
"",
"ARNA"
],
[
"Trajkovski",
"Marko",
"",
"CMBC"
],
[
"Teulade-Fichou",
"Marie-paule",
"",
"CMBC"
],
[
"Gabelica",
"Valérie",
"",
"ARNA, IECB"
],
[
"Plavec",
"Janez",
""
]
] | Human telomeric G-quadruplex DNA structures are attractive anticancer drug targets, but the target's polymorphism complicates the drug design: different ligands prefer different folds, and very few complexes have been solved at high resolution. Here we report that Phen-DC3, one of the most prominent G-quadruplex ligands in terms of high binding affinity and selectivity, causes dTAGGG(TTAGGG)3 to completely change its fold in KCl solution from a hybrid-1 to an antiparallel chair-type structure, wherein the ligand intercalates between a two-quartet unit and a pseudo-quartet, thereby ejecting one potassium ion. This unprecedented high-resolution NMR structure shows for the first time a true ligand intercalation into an intramolecular G-quadruplex. |
1510.04579 | Wei Cai | Katherine Baker, Duan Chen, and Wei Cai | Investigating the Selectivity of KcsA Channel by an Image Charge
Solvation Method (ICSM) in Molecular Dynamics Simulations | null | null | 10.4208/cicp.130315.310815a | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the selectivity of the potassium channel KcsA by a
recently developed image-charge solvation method (ICSM) combined with molecular
dynamics simulations. The hybrid solvation model in the ICSM is able to
demonstrate atomistically the function of the selectivity filter of the KcsA
channel when potassium and sodium ions are considered and their distributions
inside the filter are simulated. Our study also shows that the reaction field
effect, explicitly accounted for through image charge approximation in the ICSM
model, is necessary in reproducing the correct selectivity property of the
potassium channels.
| [
{
"created": "Thu, 15 Oct 2015 15:23:09 GMT",
"version": "v1"
}
] | 2016-05-04 | [
[
"Baker",
"Katherine",
""
],
[
"Chen",
"Duan",
""
],
[
"Cai",
"Wei",
""
]
] | In this paper, we study the selectivity of the potassium channel KcsA by a recently developed image-charge solvation method (ICSM) combined with molecular dynamics simulations. The hybrid solvation model in the ICSM is able to demonstrate atomistically the function of the selectivity filter of the KcsA channel when potassium and sodium ions are considered and their distributions inside the filter are simulated. Our study also shows that the reaction field effect, explicitly accounted for through image charge approximation in the ICSM model, is necessary in reproducing the correct selectivity property of the potassium channels. |
1708.00574 | Chrysafis Vogiatzis | Chrysafis Vogiatzis, Mustafa Can Camur | Identification of Essential Proteins Using Induced Stars in
Protein-Protein Interaction Networks | null | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a novel centrality metric, referred to as star
centrality, which incorporates information from the closed neighborhood of a
node, rather than solely from the node itself, when calculating its topological
importance. More specifically, we focus on degree centrality and show that in
the complex protein-protein interaction networks it is a naive metric that can
lead to misclassifying protein importance. For our extension of degree
centrality when considering stars, we derive its computational complexity,
provide a mathematical formulation, and propose two approximation algorithms
that are shown to be efficient in practice. We portray the success of this new
metric in protein-protein interaction networks when predicting protein
essentiality in several organisms, including the well-studied Saccharomyces
cerevisiae, Helicobacter pylori, and Caenorhabditis elegans, where star
centrality is shown to significantly outperform other nodal centrality metrics
at detecting essential proteins. We also analyze the average and worst case
performance of the two approximation algorithms in practice, and show that they
are viable options for computing star centrality in very large-scale
protein-protein interaction networks, such as the human proteome, where exact
methodologies are bound to be time and memory intensive.
| [
{
"created": "Wed, 2 Aug 2017 01:52:21 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Mar 2018 15:48:01 GMT",
"version": "v2"
}
] | 2018-03-15 | [
[
"Vogiatzis",
"Chrysafis",
""
],
[
"Camur",
"Mustafa Can",
""
]
] | In this work, we propose a novel centrality metric, referred to as star centrality, which incorporates information from the closed neighborhood of a node, rather than solely from the node itself, when calculating its topological importance. More specifically, we focus on degree centrality and show that in the complex protein-protein interaction networks it is a naive metric that can lead to misclassifying protein importance. For our extension of degree centrality when considering stars, we derive its computational complexity, provide a mathematical formulation, and propose two approximation algorithms that are shown to be efficient in practice. We portray the success of this new metric in protein-protein interaction networks when predicting protein essentiality in several organisms, including the well-studied Saccharomyces cerevisiae, Helicobacter pylori, and Caenorhabditis elegans, where star centrality is shown to significantly outperform other nodal centrality metrics at detecting essential proteins. We also analyze the average and worst case performance of the two approximation algorithms in practice, and show that they are viable options for computing star centrality in very large-scale protein-protein interaction networks, such as the human proteome, where exact methodologies are bound to be time and memory intensive. |
1211.6003 | Marcelo Briones | Thais F. Bartelli, Renata C. Ferreira, Arnaldo L. Colombo and Marcelo
R. S. Briones | Intraspecific Comparative Genomics of Candida albicans Mitochondria
Reveals Non-Coding Regions Under Neutral Evolution | 30 pages, 6 figures, 5 tables | null | 10.1016/j.meegid.2012.12.012 | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The opportunistic fungal pathogen Candida albicans causes serious hematogenic
hospital acquired candidiasis with worldwide impact on public health. Because
of its importance as a nosocomial etiologic agent, C. albicans genome has been
largely studied to identify intraspecific variation and several typing methods
have been developed to distinguish closely related strains. Mitochondrial DNA
can be useful for this purpose because, as compared to nuclear DNA, its higher
mutational load and evolutionary rate readily reveals microvariants.
Accordingly, we sequenced and assembled, with 8 fold coverage, the
mitochondrial genomes of two C. albicans clinical isolates (L296 and L757) and
compared these sequences with the genome sequence of reference strain SC5314.
The genome alignment of 33,928 positions revealed 372 polymorphic sites being
230 in coding and 142 in non-coding regions. Three intergenic regions located
between genes tRNAGly/COX1, NAD3/COB and ssurRNA/NAD4L, named IG1, IG2 and IG3
respectively, which showed high number of neutral substitutions, were amplified
and sequenced from 18 clinical isolates from different locations in Latin
America and 2 ATCC standard C. albicans strains. High variability of sequence
and size were observed, ranging up to 56bp size difference and phylogenies
based on IG1, IG2 and IG3 revealed three groups. Insertions of up to 49bp were
observed exclusively in Argentinean strains relative to the other sequences
which could suggest clustering by geographical polymorphism. Because of neutral
evolution, high variability, easy isolation by PCR and full length sequencing
these mitochondrial intergenic regions can contribute with a novel perspective
in molecular studies of C. albicans isolates, complementing well established
multilocus sequence typing methods.
| [
{
"created": "Mon, 26 Nov 2012 16:00:34 GMT",
"version": "v1"
}
] | 2013-01-01 | [
[
"Bartelli",
"Thais F.",
""
],
[
"Ferreira",
"Renata C.",
""
],
[
"Colombo",
"Arnaldo L.",
""
],
[
"Briones",
"Marcelo R. S.",
""
]
] | The opportunistic fungal pathogen Candida albicans causes serious hematogenic hospital acquired candidiasis with worldwide impact on public health. Because of its importance as a nosocomial etiologic agent, C. albicans genome has been largely studied to identify intraspecific variation and several typing methods have been developed to distinguish closely related strains. Mitochondrial DNA can be useful for this purpose because, as compared to nuclear DNA, its higher mutational load and evolutionary rate readily reveals microvariants. Accordingly, we sequenced and assembled, with 8 fold coverage, the mitochondrial genomes of two C. albicans clinical isolates (L296 and L757) and compared these sequences with the genome sequence of reference strain SC5314. The genome alignment of 33,928 positions revealed 372 polymorphic sites being 230 in coding and 142 in non-coding regions. Three intergenic regions located between genes tRNAGly/COX1, NAD3/COB and ssurRNA/NAD4L, named IG1, IG2 and IG3 respectively, which showed high number of neutral substitutions, were amplified and sequenced from 18 clinical isolates from different locations in Latin America and 2 ATCC standard C. albicans strains. High variability of sequence and size were observed, ranging up to 56bp size difference and phylogenies based on IG1, IG2 and IG3 revealed three groups. Insertions of up to 49bp were observed exclusively in Argentinean strains relative to the other sequences which could suggest clustering by geographical polymorphism. Because of neutral evolution, high variability, easy isolation by PCR and full length sequencing these mitochondrial intergenic regions can contribute with a novel perspective in molecular studies of C. albicans isolates, complementing well established multilocus sequence typing methods. |
1208.0843 | David Anderson | Elizabeth Skubak Wolf and David F. Anderson | A finite difference method for estimating second order parameter
sensitivities of discrete stochastic chemical reaction networks | New format (two columns). 14 pages, 9 figures, 7 tables | null | null | null | q-bio.QM math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an efficient finite difference method for the approximation of
second derivatives, with respect to system parameters, of expectations for a
class of discrete stochastic chemical reaction networks. The method uses a
coupling of the perturbed processes that yields a much lower variance than
existing methods, thereby drastically lowering the computational complexity
required to solve a given problem. Further, the method is simple to implement
and will also prove useful in any setting in which continuous time Markov
chains are used to model dynamics, such as population processes. We expect the
new method to be useful in the context of optimization algorithms that require
knowledge of the Hessian.
| [
{
"created": "Fri, 3 Aug 2012 20:52:41 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Oct 2012 16:55:14 GMT",
"version": "v2"
}
] | 2012-10-16 | [
[
"Wolf",
"Elizabeth Skubak",
""
],
[
"Anderson",
"David F.",
""
]
] | We present an efficient finite difference method for the approximation of second derivatives, with respect to system parameters, of expectations for a class of discrete stochastic chemical reaction networks. The method uses a coupling of the perturbed processes that yields a much lower variance than existing methods, thereby drastically lowering the computational complexity required to solve a given problem. Further, the method is simple to implement and will also prove useful in any setting in which continuous time Markov chains are used to model dynamics, such as population processes. We expect the new method to be useful in the context of optimization algorithms that require knowledge of the Hessian. |
q-bio/0408019 | Tsuyoshi Mizuguchi | T. Mizuguchi, K. Sugawara, H. Nishimori, T. Tao, T. Kazama, H.
Nakagawa, Y. Hayakawa, M. Sano | Collective Dynamics of Active Elements: Task Allocation and Pheromone
Trailing | 10 pages, 19 postscript figures, the 1st International symposium on
Dynamical Systems Theory and Its Applications to Biology and Environmental
Sciences | null | null | null | q-bio.PE | null | Collective behavior of active elements inspired by mass of biological
organisms is addressed. Especially, two topics are focused on among amazing
behaviors performed by colony of ants. First, task allocation phenomena are
treated from the viewpoint of proportion regulation of population between
different states. Using a dynamical model consisting of elements and external
``stock materials'', adaptability against various disturbances is numerically
studied. In addition, a dynamical model for a colony of ants interacting via two
kinds of pheromones is studied, in which simulated ants, as a mass, are shown to
make an efficient foraging flexibly varying the foraging tactics according to
feeding schedules. Finally, experiments are performed with robots moving in
virtual pheromone fields simulated by CG and CCD camera feedback system. Trail
formation processes are demonstrated by this multi-robot system.
| [
{
"created": "Wed, 25 Aug 2004 08:05:42 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Mizuguchi",
"T.",
""
],
[
"Sugawara",
"K.",
""
],
[
"Nishimori",
"H.",
""
],
[
"Tao",
"T.",
""
],
[
"Kazama",
"T.",
""
],
[
"Nakagawa",
"H.",
""
],
[
"Hayakawa",
"Y.",
""
],
[
"Sano",
"M.",
... | Collective behavior of active elements inspired by mass of biological organisms is addressed. Especially, two topics are focused on among amazing behaviors performed by colony of ants. First, task allocation phenomena are treated from the viewpoint of proportion regulation of population between different states. Using a dynamical model consisting of elements and external ``stock materials'', adaptability against various disturbances is numerically studied. In addition, a dynamical model for a colony of ants interacting via two kinds of pheromones is studied, in which simulated ants, as a mass, are shown to make an efficient foraging flexibly varying the foraging tactics according to feeding schedules. Finally, experiments are performed with robots moving in virtual pheromone fields simulated by CG and CCD camera feedback system. Trail formation processes are demonstrated by this multi-robot system. |
2404.11951 | Lorella Bonaccorsi | Lorella Bonaccorsi, Ugo Santosuosso, Massimo Gulisano and Luca Sodini | Neuropsychological Effects of Rock Steady Boxing in Patients with
Parkinson's Disease: A Comprehensive Analysis | 17 pages, figures 22, master's thesis in exercise science | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study investigates the efficacy of adapted boxing, specifically Rock
Steady Boxing, in mitigating dopamine decline in individuals with Parkinson
disease. The research involved 40 participants with confirmed diagnosis of
Parkinson disease who underwent biweekly RSB sessions over an 8 week period.
Training regimen included activation, core exercise, and a cooldown phase. The
findings revealed a significant amelioration in depressive symptoms through the
sessions. Assessment using the Beck Depression Inventory demonstrated a
progressive decrease in scores associated with depressive symptoms,
particularly affective, cognitive, and somatic symptoms. The reduction in more
severe symptoms was accompanied by an increase in milder symptoms. Statistical
analysis confirmed the significance of the reduction in depressive symptoms
over time, suggesting that physical activity, particularly RSB, may contribute
to enhancing the quality of life for individuals with Parkinson disease. The
positive impact was observed in both motor and depressive symptoms, suggesting
an overall beneficial effect of exercise training. It is important to note that
six participants withdrew from the study due to organizational reasons,
resulting in a reduction in the participant count from 40 to 34. Nonetheless,
the overall results suggest that RSB could be an effective approach to
addressing depression in Parkinson patients, providing a complementary
treatment option to conventional pharmacological therapy.
| [
{
"created": "Thu, 18 Apr 2024 07:12:56 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Bonaccorsi",
"Lorella",
""
],
[
"Santosuosso",
"Ugo",
""
],
[
"Gulisano",
"Massimo",
""
],
[
"Sodini",
"Luca",
""
]
] | This study investigates the efficacy of adapted boxing, specifically Rock Steady Boxing, in mitigating dopamine decline in individuals with Parkinson disease. The research involved 40 participants with confirmed diagnosis of Parkinson disease who underwent biweekly RSB sessions over an 8 week period. Training regimen included activation, core exercise, and a cooldown phase. The findings revealed a significant amelioration in depressive symptoms through the sessions. Assessment using the Beck Depression Inventory demonstrated a progressive decrease in scores associated with depressive symptoms, particularly affective, cognitive, and somatic symptoms. The reduction in more severe symptoms was accompanied by an increase in milder symptoms. Statistical analysis confirmed the significance of the reduction in depressive symptoms over time, suggesting that physical activity, particularly RSB, may contribute to enhancing the quality of life for individuals with Parkinson disease. The positive impact was observed in both motor and depressive symptoms, suggesting an overall beneficial effect of exercise training. It is important to note that six participants withdrew from the study due to organizational reasons, resulting in a reduction in the participant count from 40 to 34. Nonetheless, the overall results suggest that RSB could be an effective approach to addressing depression in Parkinson patients, providing a complementary treatment option to conventional pharmacological therapy. |
q-bio/0309016 | V. Krishnan Ramanujan | R.V. Krishnan, Eva Biener, Jian-Hua Zhang, Robert Heckel and Brian
Herman | Probing subtle fluorescence dynamics in cellular proteins by streak
camera based Fluorescence Lifetime Imaging Microscopy | null | null | 10.1063/1.1630154 | null | q-bio.CB | null | We report the cell biological applications of a recently developed
multiphoton fluorescence lifetime imaging microscopy system using a streak
camera (StreakFLIM). The system was calibrated with standard fluorophore
specimens and was shown to have high accuracy and reproducibility. We
demonstrate the applicability of this instrument in living cells for measuring
the effects of protein targeting and point mutations in the protein sequence
which are not obtainable in conventional intensity based fluorescence
microscopy methods. We discuss the relevance of such time resolved information
in quantitative energy transfer microscopy and in measurement of the parameters
characterizing intracellular physiology.
| [
{
"created": "Fri, 26 Sep 2003 20:31:05 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Krishnan",
"R. V.",
""
],
[
"Biener",
"Eva",
""
],
[
"Zhang",
"Jian-Hua",
""
],
[
"Heckel",
"Robert",
""
],
[
"Herman",
"Brian",
""
]
] | We report the cell biological applications of a recently developed multiphoton fluorescence lifetime imaging microscopy system using a streak camera (StreakFLIM). The system was calibrated with standard fluorophore specimens and was shown to have high accuracy and reproducibility. We demonstrate the applicability of this instrument in living cells for measuring the effects of protein targeting and point mutations in the protein sequence which are not obtainable in conventional intensity based fluorescence microscopy methods. We discuss the relevance of such time resolved information in quantitative energy transfer microscopy and in measurement of the parameters characterizing intracellular physiology. |
1308.3616 | Reginald Smith | Reginald D. Smith | Complexity in animal communication: Estimating the size of N-Gram
structures | 17 pages, 4 figures, 4 tables; accepted and to appear in Entropy | Entropy 2014, 16(1), 526-542 | 10.3390/e16010526 | null | q-bio.PE cs.IT math.IT q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, new techniques that allow conditional entropy to estimate the
combinatorics of symbols are applied to animal communication studies to
estimate the communication's repertoire size. By using the conditional entropy
estimates at multiple orders, the paper estimates the total repertoire sizes
for animal communication across bottlenose dolphins, humpback whales, and
several species of birds for N-grams of length one to three. In addition to
discussing the impact of this method on studies of animal communication
complexity, the reliability of these estimates is compared to other methods
through simulation. While entropy does undercount the total repertoire size due
to rare N-grams, it gives a more accurate picture of the most frequently used
repertoire than just repertoire size alone.
| [
{
"created": "Thu, 15 Aug 2013 02:44:55 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Dec 2013 09:34:55 GMT",
"version": "v2"
}
] | 2014-01-17 | [
[
"Smith",
"Reginald D.",
""
]
] | In this paper, new techniques that allow conditional entropy to estimate the combinatorics of symbols are applied to animal communication studies to estimate the communication's repertoire size. By using the conditional entropy estimates at multiple orders, the paper estimates the total repertoire sizes for animal communication across bottlenose dolphins, humpback whales, and several species of birds for N-grams of length one to three. In addition to discussing the impact of this method on studies of animal communication complexity, the reliability of these estimates is compared to other methods through simulation. While entropy does undercount the total repertoire size due to rare N-grams, it gives a more accurate picture of the most frequently used repertoire than just repertoire size alone. |
1801.04232 | Davide Michieletto | Davide Michieletto, Marina Lusic, Davide Marenduzzo, Enzo Orlandini | Physical Principles of Retroviral Integration in the Human Genome | Accepted in Nat Comm. SI and Movies can be found at
https://www2.ph.ed.ac.uk/~dmichiel/ | null | 10.1038/s41467-019-08333-8 | null | q-bio.SC cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Certain retroviruses, including HIV, insert their DNA in a non-random
fraction of the host genome via poorly understood selection mechanisms. Here,
we develop a biophysical model for retroviral integrations as stochastic and
quasi-equilibrium topological reconnections between polymers. We discover that
physical effects, such as DNA accessibility and elasticity, play important and
universal roles in this process. Our simulations predict that integration is
favoured within nucleosomal and flexible DNA, in line with experiments, and
that these biases arise due to competing energy barriers associated with DNA
deformations. By considering a long chromosomal region in human T-cells during
interphase, we discover that at these larger scales integration sites are
predominantly determined by chromatin accessibility. Finally, we propose and
solve a reaction-diffusion problem that recapitulates the distribution of HIV
hot-spots within T-cells. With few generic assumptions, our model can
rationalise experimental observations and identifies previously unappreciated
physical contributions to retroviral integration site selection.
| [
{
"created": "Fri, 12 Jan 2018 17:00:33 GMT",
"version": "v1"
},
{
"created": "Thu, 31 May 2018 16:06:28 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Dec 2018 11:57:22 GMT",
"version": "v3"
}
] | 2019-03-06 | [
[
"Michieletto",
"Davide",
""
],
[
"Lusic",
"Marina",
""
],
[
"Marenduzzo",
"Davide",
""
],
[
"Orlandini",
"Enzo",
""
]
] | Certain retroviruses, including HIV, insert their DNA in a non-random fraction of the host genome via poorly understood selection mechanisms. Here, we develop a biophysical model for retroviral integrations as stochastic and quasi-equilibrium topological reconnections between polymers. We discover that physical effects, such as DNA accessibility and elasticity, play important and universal roles in this process. Our simulations predict that integration is favoured within nucleosomal and flexible DNA, in line with experiments, and that these biases arise due to competing energy barriers associated with DNA deformations. By considering a long chromosomal region in human T-cells during interphase, we discover that at these larger scales integration sites are predominantly determined by chromatin accessibility. Finally, we propose and solve a reaction-diffusion problem that recapitulates the distribution of HIV hot-spots within T-cells. With few generic assumptions, our model can rationalise experimental observations and identifies previously unappreciated physical contributions to retroviral integration site selection. |
0709.0125 | Mark Lipson | Mark Lipson (Harvard University) | Differential and graphical approaches to multistability theory for
chemical reaction networks | 28 pages, no figures | null | null | null | q-bio.MN q-bio.QM | null | The use of mathematical models has helped to shed light on countless
phenomena in chemistry and biology. Often, though, one finds that systems of
interest in these fields are dauntingly complex. In this paper, we attempt to
synthesize and expand upon the body of mathematical results pertaining to the
theory of multiple equilibria in chemical reaction networks (CRNs), which has
yielded surprising insights with minimal computational effort. Our central
focus is a recent, cycle-based theorem by Gheorghe Craciun and Martin Feinberg,
which is of significant interest in its own right and also serves, in a
somewhat restated form, as the basis for a number of fruitful connections among
related results.
| [
{
"created": "Sun, 2 Sep 2007 20:26:03 GMT",
"version": "v1"
}
] | 2007-09-04 | [
[
"Lipson",
"Mark",
"",
"Harvard University"
]
] | The use of mathematical models has helped to shed light on countless phenomena in chemistry and biology. Often, though, one finds that systems of interest in these fields are dauntingly complex. In this paper, we attempt to synthesize and expand upon the body of mathematical results pertaining to the theory of multiple equilibria in chemical reaction networks (CRNs), which has yielded surprising insights with minimal computational effort. Our central focus is a recent, cycle-based theorem by Gheorghe Craciun and Martin Feinberg, which is of significant interest in its own right and also serves, in a somewhat restated form, as the basis for a number of fruitful connections among related results. |
1312.7262 | Nicolae Radu Zabet | Daphne Ezer, Nicolae Radu Zabet and Boris Adryan | Physical constraints determine the logic of bacterial promoter
architectures | D.E. and N.R.Z. contributed equally to this work | Nucleic Acids Res. 42:7 (2014) 4196-4207 | 10.1093/nar/gku078 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Site-specific transcription factors (TFs) bind to their target sites on the
DNA, where they regulate the rate at which genes are transcribed. Bacterial TFs
undergo facilitated diffusion (a combination of 3D diffusion around and 1D
random walk on the DNA) when searching for their target sites. Using computer
simulations of this search process, we show that the organisation of the
binding sites, in conjunction with TF copy number and binding site affinity,
plays an important role in determining not only the steady state of promoter
occupancy, but also the order at which TFs bind. These effects can be captured
by facilitated diffusion-based models, but not by standard thermodynamics. We
show that the spacing of binding sites encodes complex logic, which can be
derived from combinations of three basic building blocks: switches, barriers
and clusters, whose response alone and in higher orders of organisation we
characterise in detail. Effective promoter organizations are commonly found in
the E. coli genome and are highly conserved between strains. This will allow
studies of gene regulation at a previously unprecedented level of detail, where
our framework can create testable hypothesis of promoter logic.
| [
{
"created": "Fri, 27 Dec 2013 13:45:19 GMT",
"version": "v1"
}
] | 2014-04-23 | [
[
"Ezer",
"Daphne",
""
],
[
"Zabet",
"Nicolae Radu",
""
],
[
"Adryan",
"Boris",
""
]
] | Site-specific transcription factors (TFs) bind to their target sites on the DNA, where they regulate the rate at which genes are transcribed. Bacterial TFs undergo facilitated diffusion (a combination of 3D diffusion around and 1D random walk on the DNA) when searching for their target sites. Using computer simulations of this search process, we show that the organisation of the binding sites, in conjunction with TF copy number and binding site affinity, plays an important role in determining not only the steady state of promoter occupancy, but also the order at which TFs bind. These effects can be captured by facilitated diffusion-based models, but not by standard thermodynamics. We show that the spacing of binding sites encodes complex logic, which can be derived from combinations of three basic building blocks: switches, barriers and clusters, whose response alone and in higher orders of organisation we characterise in detail. Effective promoter organizations are commonly found in the E. coli genome and are highly conserved between strains. This will allow studies of gene regulation at a previously unprecedented level of detail, where our framework can create testable hypothesis of promoter logic. |
q-bio/0406019 | Mehdi Yahyanejad | Mehdi Yahyanejad, Christopher B. Burge, Mehran Kardar | Untangling influences of hydrophobicity on protein sequences and
structures | 4 pages, 3 figures, 7 eps files | null | null | null | q-bio.BM | null | We fit the Fourier transforms of solvent accessibility and hydrophobicity
profiles of a representative set of proteins to a joint multi-variable
Gaussian. This allows us to separate the intrinsic tendencies of sequence and
structure profiles from the interactions that correlate them; for example, the
$\alpha$-helix periodicity in sequence hydrophobicity is dictated by the
solvent accessibility of structures. The distinct intrinsic tendencies of
sequence and structure profiles are most pronounced at long periods, where
sequence hydrophobicity fluctuates more, while solvent accessibility
fluctuations are less than average. Interestingly, correlations between the two
profiles can be interpreted as the Boltzmann weight of the solvation energy at
room temperature.
| [
{
"created": "Tue, 8 Jun 2004 19:27:02 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Yahyanejad",
"Mehdi",
""
],
[
"Burge",
"Christopher B.",
""
],
[
"Kardar",
"Mehran",
""
]
] | We fit the Fourier transforms of solvent accessibility and hydrophobicity profiles of a representative set of proteins to a joint multi-variable Gaussian. This allows us to separate the intrinsic tendencies of sequence and structure profiles from the interactions that correlate them; for example, the $\alpha$-helix periodicity in sequence hydrophobicity is dictated by the solvent accessibility of structures. The distinct intrinsic tendencies of sequence and structure profiles are most pronounced at long periods, where sequence hydrophobicity fluctuates more, while solvent accessibility fluctuations are less than average. Interestingly, correlations between the two profiles can be interpreted as the Boltzmann weight of the solvation energy at room temperature. |
1903.11458 | Paulo Fonseca | Maria Angeles Torres, Paulo Fonseca, Karim Erzini, Teresa Cerveira
Borges, Aida Campos, Margarida Castro, Jorge Santos, Maria Esmeralda Costa,
Ana Marcalo, Nuno Oliveira, Jose Vingada | Modelling the impact of deep-water crustacean trawl fishery in the
marine ecosystem off Portuguese Southwestern and South Coasts: I) the trophic
web and trophic flows | 25 pages, 5 figures, 2 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concentration of the population in coastal regions, in addition to the
direct human use, is leading to an accelerated process of change and
deterioration of the marine ecosystems. Human activities such as fishing
together with environmental drivers (e.g. climate change) are triggering major
threats to marine biodiversity, and impact directly the services they provide.
In the South and Southwest coasts of Portugal, the deep-water crustacean trawl
fishery is no exception. This fishery is recognized to have large effects on a
number of species while generating high rates of unwanted catches. However,
taking into account an ecosystem-based perspective, the fishing impacts along
the food web accounting for biological interactions between and among species
caught remain poorly understood. These impacts are particularly troubling and
are a cause of concern given the cascading effects that might arise. Facing the
main policies and legislative instruments for the restoration and conservation
of the marine environment, times are calling for implementing ecosystem-based
approaches to fisheries management. To this end, we use a food web modelling
(Ecopath with Ecosim) approach to assess the fishing impacts of this particular
fishery on the marine ecosystem of southern and southwestern Portugal. In
particular, we describe the food web structure and functioning, identify the
main keystone species and/or groups, quantify the major trophic and energy
flows, and ultimately assess the impact of fishing on the target species but
also on the ecosystem by means of ecological and ecosystem-based indicators.
Finally, we examine limitations and weaknesses of the model for potential
improvements and future research directions.
| [
{
"created": "Wed, 27 Mar 2019 14:48:25 GMT",
"version": "v1"
}
] | 2019-03-28 | [
[
"Torres",
"Maria Angeles",
""
],
[
"Fonseca",
"Paulo",
""
],
[
"Erzini",
"Karim",
""
],
[
"Borges",
"Teresa Cerveira",
""
],
[
"Campos",
"Aida",
""
],
[
"Castro",
"Margarida",
""
],
[
"Santos",
"Jorge",
""
    ... | The concentration of the population in coastal regions, in addition to the direct human use, is leading to an accelerated process of change and deterioration of the marine ecosystems. Human activities such as fishing together with environmental drivers (e.g. climate change) are triggering major threats to marine biodiversity, and impact directly the services they provide. In the South and Southwest coasts of Portugal, the deep-water crustacean trawl fishery is no exception. This fishery is recognized to have large effects on a number of species while generating high rates of unwanted catches. However, taking into account an ecosystem-based perspective, the fishing impacts along the food web accounting for biological interactions between and among species caught remain poorly understood. These impacts are particularly troubling and are a cause of concern given the cascading effects that might arise. Facing the main policies and legislative instruments for the restoration and conservation of the marine environment, times are calling for implementing ecosystem-based approaches to fisheries management. To this end, we use a food web modelling (Ecopath with Ecosim) approach to assess the fishing impacts of this particular fishery on the marine ecosystem of southern and southwestern Portugal. In particular, we describe the food web structure and functioning, identify the main keystone species and/or groups, quantify the major trophic and energy flows, and ultimately assess the impact of fishing on the target species but also on the ecosystem by means of ecological and ecosystem-based indicators. Finally, we examine limitations and weaknesses of the model for potential improvements and future research directions. |
2302.10015 | Gabriel Cardona | Gabriel Cardona, Joan Carles Pons, Gerard Ribas, Tom\'as Mart\'inez
Coronado | Comparison of orchard networks using their extended $\mu$-representation | null | null | null | null | q-bio.PE cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic networks generalize phylogenetic trees in order to model
reticulation events. Although the comparison of phylogenetic trees is well
studied, and there are multiple ways to do it in an efficient way, the
situation is much different for phylogenetic networks.
Some classes of phylogenetic networks, mainly tree-child networks, are known
to be classified efficiently by their $\mu$-representation, which essentially
counts, for every node, the number of paths to each leaf. In this paper, we
introduce the extended $\mu$-representation of networks, where the number of
paths to reticulations is also taken into account. This modification allows us
to distinguish orchard networks and to define a sound metric on the space of
such networks that can, moreover, be computed efficiently.
The class of orchard networks, as well as being one of the classes with
biological significance (one such network can be interpreted as a tree with
extra arcs involving coexisting organisms), is one of the most generic ones (in
mathematical terms) for which such a representation can (conjecturally) exist,
since a slight relaxation of the definition leads to a problem that is Graph
Isomorphism Complete.
| [
{
"created": "Mon, 20 Feb 2023 14:44:35 GMT",
"version": "v1"
}
] | 2023-02-21 | [
[
"Cardona",
"Gabriel",
""
],
[
"Pons",
"Joan Carles",
""
],
[
"Ribas",
"Gerard",
""
],
[
"Coronado",
"Tomás Martínez",
""
]
] | Phylogenetic networks generalize phylogenetic trees in order to model reticulation events. Although the comparison of phylogenetic trees is well studied, and there are multiple ways to do it in an efficient way, the situation is much different for phylogenetic networks. Some classes of phylogenetic networks, mainly tree-child networks, are known to be classified efficiently by their $\mu$-representation, which essentially counts, for every node, the number of paths to each leaf. In this paper, we introduce the extended $\mu$-representation of networks, where the number of paths to reticulations is also taken into account. This modification allows us to distinguish orchard networks and to define a sound metric on the space of such networks that can, moreover, be computed efficiently. The class of orchard networks, as well as being one of the classes with biological significance (one such network can be interpreted as a tree with extra arcs involving coexisting organisms), is one of the most generic ones (in mathematical terms) for which such a representation can (conjecturally) exist, since a slight relaxation of the definition leads to a problem that is Graph Isomorphism Complete. |
1201.4339 | Sebastian Bitzer | Sebastian Bitzer and Stefan J. Kiebel | Recognizing recurrent neural networks (rRNN): Bayesian inference for
recurrent neural networks | null | Biological Cybernetics 106(4): 201-217, 2012 | 10.1007/s00422-012-0490-x | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural networks (RNNs) are widely used in computational
neuroscience and machine learning applications. In an RNN, each neuron computes
its output as a nonlinear function of its integrated input. While the
importance of RNNs, especially as models of brain processing, is undisputed, it
is also widely acknowledged that the computations in standard RNN models may be
an over-simplification of what real neuronal networks compute. Here, we suggest
that the RNN approach may be made both neurobiologically more plausible and
computationally more powerful by its fusion with Bayesian inference techniques
for nonlinear dynamical systems. In this scheme, we use an RNN as a generative
model of dynamic input caused by the environment, e.g. of speech or kinematics.
Given this generative RNN model, we derive Bayesian update equations that can
decode its output. Critically, these updates define a 'recognizing RNN' (rRNN),
in which neurons compute and exchange prediction and prediction error messages.
The rRNN has several desirable features that a conventional RNN does not have,
for example, fast decoding of dynamic stimuli and robustness to initial
conditions and noise. Furthermore, it implements a predictive coding scheme for
dynamic inputs. We suggest that the Bayesian inversion of recurrent neural
networks may be useful both as a model of brain function and as a machine
learning tool. We illustrate the use of the rRNN by an application to the
online decoding (i.e. recognition) of human kinematics.
| [
{
"created": "Fri, 20 Jan 2012 17:04:18 GMT",
"version": "v1"
}
] | 2012-07-10 | [
[
"Bitzer",
"Sebastian",
""
],
[
"Kiebel",
"Stefan J.",
""
]
] | Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics. |
2402.04389 | C\'ecile Delacour | Scott Greenhorn, V\'eronique Coizet, Victor Dupuit, Bruno Fernandez,
Guillaume Bres, Arnaud Claudel, Pierre Gasner, Jan M. Warnking, Emmanuel L.
Barbier, C\'ecile Delacour | Ultrathin, flexible and MRI-compatible microelectrode array for chronic
single units recording within subcortical layers | null | null | null | null | q-bio.NC cond-mat.mes-hall | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current techniques of neuroimaging, including electrical devices, are either
of low spatiotemporal resolution or invasive, impeding multiscale monitoring of
brain activity at both single cell and network levels. Overcoming this issue is
of great importance to assess the brain's computational ability and for
neurorehabilitation projects that require real-time monitoring of neurons and
concomitant network activities. Currently, that information could be extracted
from functional MRI when combined with mathematical models. Novel methods
enabling quantitative and long-lasting recording at both single cell and
network levels will make it possible to correlate the MRI data with intracortical
activity at the single-cell level, and to refine those models. Here, we report the
fabrication and validation of ultra-thin, optically transparent and flexible
intracortical microelectrode arrays for combining extracellular multi-unit and
fMRI recordings. The sensing devices are compatible with large-scale
manufacturing, and demonstrate both fMRI transparency at 4.7 T and high
electrical performance, and thus appear as promising candidates for
simultaneous multiscale neurodynamic measurements.
| [
{
"created": "Tue, 6 Feb 2024 20:44:44 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Greenhorn",
"Scott",
""
],
[
"Coizet",
"Véronique",
""
],
[
"Dupuit",
"Victor",
""
],
[
"Fernandez",
"Bruno",
""
],
[
"Bres",
"Guillaume",
""
],
[
"Claudel",
"Arnaud",
""
],
[
"Gasner",
"Pierre",
""
],
    ... | Current techniques of neuroimaging, including electrical devices, are either of low spatiotemporal resolution or invasive, impeding multiscale monitoring of brain activity at both single cell and network levels. Overcoming this issue is of great importance to assess the brain's computational ability and for neurorehabilitation projects that require real-time monitoring of neurons and concomitant network activities. Currently, that information could be extracted from functional MRI when combined with mathematical models. Novel methods enabling quantitative and long-lasting recording at both single cell and network levels will make it possible to correlate the MRI data with intracortical activity at the single-cell level, and to refine those models. Here, we report the fabrication and validation of ultra-thin, optically transparent and flexible intracortical microelectrode arrays for combining extracellular multi-unit and fMRI recordings. The sensing devices are compatible with large-scale manufacturing, and demonstrate both fMRI transparency at 4.7 T and high electrical performance, and thus appear as promising candidates for simultaneous multiscale neurodynamic measurements. |
2108.00837 | Edward D Lee | Edward D. Lee, Xiaowen Chen, Bryan C. Daniels | Discovering sparse control strategies in C. elegans | null | null | 10.1371/journal.pcbi.1010072 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Biological circuits such as neural or gene regulation networks use internal
states to map sensory input to an adaptive repertoire of behavior.
Characterizing this mapping is a major challenge for systems biology, and
though experiments that probe internal states are developing rapidly,
organismal complexity presents a fundamental obstacle given the many possible
ways internal states could map to behavior. Using C. elegans as an example, we
propose a protocol for systematic perturbation of neural states that limits
experimental complexity but still characterizes collective aspects of the
neural-behavioral map. We consider experimentally motivated small perturbations
-- ones that are most likely to preserve natural dynamics and are closer to
internal control mechanisms -- to neural states and their impact on collective
neural behavior. Then, we connect such perturbations to the local information
geometry of collective statistics, which can be fully characterized using
pairwise perturbations. Applying the protocol to a minimal model of C. elegans
neural activity, we find that collective neural statistics are most sensitive
to a few principal perturbative modes. Dominant eigenvalues decay initially as
a power law, unveiling a hierarchy that arises from variation in individual
neural activity and pairwise interactions. Highest-ranking modes tend to be
dominated by a few, "pivotal" neurons that account for most of the system's
sensitivity, suggesting a sparse mechanism for control of collective behavior.
| [
{
"created": "Mon, 2 Aug 2021 12:48:20 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Lee",
"Edward D.",
""
],
[
"Chen",
"Xiaowen",
""
],
[
"Daniels",
"Bryan C.",
""
]
] | Biological circuits such as neural or gene regulation networks use internal states to map sensory input to an adaptive repertoire of behavior. Characterizing this mapping is a major challenge for systems biology, and though experiments that probe internal states are developing rapidly, organismal complexity presents a fundamental obstacle given the many possible ways internal states could map to behavior. Using C. elegans as an example, we propose a protocol for systematic perturbation of neural states that limits experimental complexity but still characterizes collective aspects of the neural-behavioral map. We consider experimentally motivated small perturbations -- ones that are most likely to preserve natural dynamics and are closer to internal control mechanisms -- to neural states and their impact on collective neural behavior. Then, we connect such perturbations to the local information geometry of collective statistics, which can be fully characterized using pairwise perturbations. Applying the protocol to a minimal model of C. elegans neural activity, we find that collective neural statistics are most sensitive to a few principal perturbative modes. Dominant eigenvalues decay initially as a power law, unveiling a hierarchy that arises from variation in individual neural activity and pairwise interactions. Highest-ranking modes tend to be dominated by a few, "pivotal" neurons that account for most of the system's sensitivity, suggesting a sparse mechanism for control of collective behavior. |
0812.1086 | Kilian Koepsell | Kilian Koepsell, Xin Wang, Vishal Vaingankar, Yichun Wei, Qingbo Wang,
Daniel L. Rathbun, W. Martin Usrey, Judith A. Hirsch, Friedrich T. Sommer | Retinal oscillations carry visual information to cortex | 21 pages, 10 figures, submitted to Frontiers in Systems Neuroscience | Front. Syst. Neurosci. (2009) 3:4. | 10.3389/neuro.06.004.2009 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thalamic relay cells fire action potentials that transmit information from
retina to cortex. The amount of information that spike trains encode is usually
estimated from the precision of spike timing with respect to the stimulus.
Sensory input, however, is only one factor that influences neural activity. For
example, intrinsic dynamics, such as oscillations of networks of neurons, also
modulate firing pattern. Here, we asked if retinal oscillations might help to
convey information to neurons downstream. Specifically, we made whole-cell
recordings from relay cells to reveal retinal inputs (EPSPs) and thalamic
outputs (spikes) and analyzed these events with information theory. Our results
show that thalamic spike trains operate as two multiplexed channels. One
channel, which occupies a low frequency band (<30 Hz), is encoded by average
firing rate with respect to the stimulus and carries information about local
changes in the image over time. The other operates in the gamma frequency band
(40-80 Hz) and is encoded by spike time relative to the retinal oscillations.
Because these oscillations involve extensive areas of the retina, it is likely
that the second channel transmits information about global features of the
visual scene. At times, the second channel conveyed even more information than
the first.
| [
{
"created": "Fri, 5 Dec 2008 08:12:30 GMT",
"version": "v1"
}
] | 2009-04-25 | [
[
"Koepsell",
"Kilian",
""
],
[
"Wang",
"Xin",
""
],
[
"Vaingankar",
"Vishal",
""
],
[
"Wei",
"Yichun",
""
],
[
"Wang",
"Qingbo",
""
],
[
"Rathbun",
"Daniel L.",
""
],
[
"Usrey",
"W. Martin",
""
],
[
... | Thalamic relay cells fire action potentials that transmit information from retina to cortex. The amount of information that spike trains encode is usually estimated from the precision of spike timing with respect to the stimulus. Sensory input, however, is only one factor that influences neural activity. For example, intrinsic dynamics, such as oscillations of networks of neurons, also modulate firing pattern. Here, we asked if retinal oscillations might help to convey information to neurons downstream. Specifically, we made whole-cell recordings from relay cells to reveal retinal inputs (EPSPs) and thalamic outputs (spikes) and analyzed these events with information theory. Our results show that thalamic spike trains operate as two multiplexed channels. One channel, which occupies a low frequency band (<30 Hz), is encoded by average firing rate with respect to the stimulus and carries information about local changes in the image over time. The other operates in the gamma frequency band (40-80 Hz) and is encoded by spike time relative to the retinal oscillations. Because these oscillations involve extensive areas of the retina, it is likely that the second channel transmits information about global features of the visual scene. At times, the second channel conveyed even more information than the first. |
2311.12717 | Volodymyr Minin | Peter B. Chi and Volodymyr M. Minin | Phylogenetic least squares estimation without genetic distances | 16 pages of main text, 6 figures | null | null | null | q-bio.PE stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Least squares estimation of phylogenies is an established family of methods
with good statistical properties. State-of-the-art least squares phylogenetic
estimation proceeds by first estimating a distance matrix, which is then used
to determine the phylogeny by minimizing a squared-error loss function. Here,
we develop a method for least squares phylogenetic inference that does not rely
on a pre-estimated distance matrix. Our approach allows us to circumvent the
typical need to first estimate a distance matrix by forming a new loss function
inspired by the phylogenetic likelihood score function; in this manner,
inference is not based on a summary statistic of the sequence data, but
directly on the sequence data itself. We use a Jukes-Cantor substitution model
to show that our method leads to improvements over ordinary least squares
phylogenetic inference, and is even observed to rival maximum likelihood
estimation in terms of topology estimation efficiency. Using a Kimura
2-parameter model, we show that our method also allows for estimation of the
global transition/transversion ratio simultaneously with the phylogeny and its
branch lengths. This is impossible to accomplish with any other distance-based
method as far as we know. Our developments pave the way for more optimal
phylogenetic inference under the least squares framework, particularly in
settings under which likelihood-based inference is infeasible, including when
one desires to build a phylogeny based on information provided by only a subset
of all possible nucleotide substitutions such as synonymous or non-synonymous
substitutions.
| [
{
"created": "Tue, 21 Nov 2023 16:44:19 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jun 2024 07:53:49 GMT",
"version": "v2"
}
] | 2024-06-24 | [
[
"Chi",
"Peter B.",
""
],
[
"Minin",
"Volodymyr M.",
""
]
] | Least squares estimation of phylogenies is an established family of methods with good statistical properties. State-of-the-art least squares phylogenetic estimation proceeds by first estimating a distance matrix, which is then used to determine the phylogeny by minimizing a squared-error loss function. Here, we develop a method for least squares phylogenetic inference that does not rely on a pre-estimated distance matrix. Our approach allows us to circumvent the typical need to first estimate a distance matrix by forming a new loss function inspired by the phylogenetic likelihood score function; in this manner, inference is not based on a summary statistic of the sequence data, but directly on the sequence data itself. We use a Jukes-Cantor substitution model to show that our method leads to improvements over ordinary least squares phylogenetic inference, and is even observed to rival maximum likelihood estimation in terms of topology estimation efficiency. Using a Kimura 2-parameter model, we show that our method also allows for estimation of the global transition/transversion ratio simultaneously with the phylogeny and its branch lengths. This is impossible to accomplish with any other distance-based method as far as we know. Our developments pave the way for more optimal phylogenetic inference under the least squares framework, particularly in settings under which likelihood-based inference is infeasible, including when one desires to build a phylogeny based on information provided by only a subset of all possible nucleotide substitutions such as synonymous or non-synonymous substitutions. |
1806.09373 | Pedro Mediano | Pedro A.M. Mediano, Anil K. Seth and Adam B. Barrett | Measuring Integrated Information: Comparison of Candidate Measures in
Theory and Simulation | null | null | 10.3390/e21010017 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrated Information Theory (IIT) is a prominent theory of consciousness
that has at its centre measures that quantify the extent to which a system
generates more information than the sum of its parts. While several candidate
measures of integrated information (`$\Phi$') now exist, little is known about
how they compare, especially in terms of their behaviour on non-trivial network
models. In this article we provide clear and intuitive descriptions of six
distinct candidate measures. We then explore the properties of each of these
measures in simulation on networks consisting of eight interacting nodes,
animated with Gaussian linear autoregressive dynamics. We find a striking
diversity in the behaviour of these measures -- no two measures show consistent
agreement across all analyses. Further, only a subset of the measures appear to
genuinely reflect some form of dynamical complexity, in the sense of
simultaneous segregation and integration between system components. Our results
help guide the operationalisation of IIT and advance the development of
measures of integrated information that may have more general applicability.
| [
{
"created": "Mon, 25 Jun 2018 10:37:48 GMT",
"version": "v1"
}
] | 2019-01-30 | [
[
"Mediano",
"Pedro A. M.",
""
],
[
"Seth",
"Anil K.",
""
],
[
"Barrett",
"Adam B.",
""
]
] | Integrated Information Theory (IIT) is a prominent theory of consciousness that has at its centre measures that quantify the extent to which a system generates more information than the sum of its parts. While several candidate measures of integrated information (`$\Phi$') now exist, little is known about how they compare, especially in terms of their behaviour on non-trivial network models. In this article we provide clear and intuitive descriptions of six distinct candidate measures. We then explore the properties of each of these measures in simulation on networks consisting of eight interacting nodes, animated with Gaussian linear autoregressive dynamics. We find a striking diversity in the behaviour of these measures -- no two measures show consistent agreement across all analyses. Further, only a subset of the measures appear to genuinely reflect some form of dynamical complexity, in the sense of simultaneous segregation and integration between system components. Our results help guide the operationalisation of IIT and advance the development of measures of integrated information that may have more general applicability. |
0805.3757 | Niko Komin | Niko Komin, Ra\'ul Toral | Drug absorption through a cell monolayer: a theoretical work on a
non-linear three-compartment model | 21 pages, 8 figures (v4: detailed definition of the treated model -
additional information about limitations) | European Journal of Pharmaceutical Sciences, 37 (2009), 106-114 | 10.1016/j.ejps.2009.01.005 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The subject of analysis is a non-linear three-compartment model, widely used
in pharmacological absorption studies. It has been transformed into a general
form, thus leading automatically to an appropriate approximation. This made the
absorption profile accessible and expressions for absorption times, apparent
permeabilities and equilibrium values were given. These findings allowed a
profound analysis of results from non-linear curve fits and delivered the
dependencies on the systems' parameters over a wide range of values. The
results were applied to an absorption experiment with multidrug
transporter-affected antibiotic CNV97100 on Caco-2 cell monolayers.
| [
{
"created": "Mon, 26 May 2008 15:57:15 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Oct 2008 14:26:22 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Dec 2008 11:55:44 GMT",
"version": "v3"
},
{
"created": "Mon, 12 Jan 2009 11:11:46 GMT",
"version": "v4"
}
] | 2010-03-04 | [
[
"Komin",
"Niko",
""
],
[
"Toral",
"Raúl",
""
]
] | The subject of analysis is a non-linear three-compartment model, widely used in pharmacological absorption studies. It has been transformed into a general form, thus leading automatically to an appropriate approximation. This made the absorption profile accessible and expressions for absorption times, apparent permeabilities and equilibrium values were given. These findings allowed a profound analysis of results from non-linear curve fits and delivered the dependencies on the systems' parameters over a wide range of values. The results were applied to an absorption experiment with multidrug transporter-affected antibiotic CNV97100 on Caco-2 cell monolayers. |
2002.05357 | Stuart Johnston | Stuart T. Johnston, Matthew J. Simpson and Edmund J. Crampin | Predicting population extinction in lattice-based birth-death-movement
models | null | null | 10.1098/rspa.2020.0089 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The question of whether a population will persist or go extinct is of key
interest throughout ecology and biology. Various mathematical techniques allow
us to generate knowledge regarding individual behaviour, which can be analysed
to obtain predictions about the ultimate survival or extinction of the
population. A common model employed to describe population dynamics is the
lattice-based random walk model with crowding (exclusion). This model can
incorporate behaviour such as birth, death and movement, while including
natural phenomena such as finite size effects. Performing sufficiently many
realisations of the random walk model to extract representative population
behaviour is computationally intensive. Therefore, continuum approximations of
random walk models are routinely employed. However, standard continuum
approximations are notoriously incapable of making accurate predictions about
population extinction. Here, we develop a new continuum approximation, the
state space diffusion approximation, which explicitly accounts for population
extinction. Predictions from our approximation faithfully capture the behaviour
in the random walk model, and provide additional information compared to
standard approximations. We examine the influence of the number of lattice
sites and initial number of individuals on the long-term population behaviour,
and demonstrate the reduction in computation time between the random walk model
and our approximation.
| [
{
"created": "Thu, 13 Feb 2020 05:33:39 GMT",
"version": "v1"
}
] | 2021-04-28 | [
[
"Johnston",
"Stuart T.",
""
],
[
"Simpson",
"Matthew J.",
""
],
[
"Crampin",
"Edmund J.",
""
]
] | The question of whether a population will persist or go extinct is of key interest throughout ecology and biology. Various mathematical techniques allow us to generate knowledge regarding individual behaviour, which can be analysed to obtain predictions about the ultimate survival or extinction of the population. A common model employed to describe population dynamics is the lattice-based random walk model with crowding (exclusion). This model can incorporate behaviour such as birth, death and movement, while including natural phenomena such as finite size effects. Performing sufficiently many realisations of the random walk model to extract representative population behaviour is computationally intensive. Therefore, continuum approximations of random walk models are routinely employed. However, standard continuum approximations are notoriously incapable of making accurate predictions about population extinction. Here, we develop a new continuum approximation, the state space diffusion approximation, which explicitly accounts for population extinction. Predictions from our approximation faithfully capture the behaviour in the random walk model, and provide additional information compared to standard approximations. We examine the influence of the number of lattice sites and initial number of individuals on the long-term population behaviour, and demonstrate the reduction in computation time between the random walk model and our approximation. |
2010.08465 | Tomas Svensson | Tomas Svensson | A review of mass concentrations of Bramblings Fringilla montifringilla:
implications for assessment of large numbers of birds | Typo corrections and minor clarifications based on feedback from
readers | Ornis Svecica, 31, 44-67 (2021) | 10.34080/os.v31.22031 | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mass concentrations of birds, or lack of such, is a phenomenon of great
ecological and domestic significance. Apart from being an indicator for e.g.
food availability, ecological change and population size, it is also a source
of conflict between humans and birds. Moreover, massive gatherings or colonies
of birds also get the attention of the public -- either as a spectacular
phenomenon or as an unwelcome pest -- thereby forming the public perception of
birds and their abundance. In the context of the mass concentration of
bramblings (Fringilla montifringilla) in Sweden in the winter of 2019-2020, this work
reviews the literature on this striking phenomenon. Winter roosts are found to
amount to on the order of one million birds per hectare of roost area, but the
spread between reports is significant. Support for roosts of up to around 15
million birds was found, but much larger numbers are frequently cited in the
literature. It is argued that these larger numbers are the result of
overestimation or, in some cases, even completely unfounded (potentially
typos). While the difficulties related to the count of large numbers of birds
can explain this state, it is unfortunate that "high numbers" sometimes
displace proper numbers. Since incorrect data, and its persistence, may result
in incorrect conclusions being drawn from new observations, this matter
deserves attention. As the Brambling is a well-studied species, the matter also
raises concerns regarding numbers for mass concentrations of other species. It
is recommended that very large numbers of birds should be cited and used with
care: underlying data and methods of the original sources should be
scrutinized. Analogously, reporters of large numbers of birds are advised to
describe and document counting methods. In particular, number estimates based
on flock volume and bird density should be avoided.
| [
{
"created": "Fri, 16 Oct 2020 16:05:26 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Oct 2020 13:30:27 GMT",
"version": "v2"
}
] | 2021-04-27 | [
[
"Svensson",
"Tomas",
""
]
] | Mass concentrations of birds, or lack of such, is a phenomenon of great ecological and domestic significance. Apart from being an indicator for e.g. food availability, ecological change and population size, it is also a source of conflict between humans and birds. Moreover, massive gatherings or colonies of birds also get the attention of the public -- either as a spectacular phenomenon or as an unwelcome pest -- thereby forming the public perception of birds and their abundance. In the context of the mass concentration of bramblings (Fringilla montifringilla) in Sweden in the winter of 2019-2020, this work reviews the literature on this striking phenomenon. Winter roosts are found to amount to on the order of one million birds per hectare of roost area, but the spread between reports is significant. Support for roosts of up to around 15 million birds was found, but much larger numbers are frequently cited in the literature. It is argued that these larger numbers are the result of overestimation or, in some cases, even completely unfounded (potentially typos). While the difficulties related to the count of large numbers of birds can explain this state, it is unfortunate that "high numbers" sometimes displace proper numbers. Since incorrect data, and its persistence, may result in incorrect conclusions being drawn from new observations, this matter deserves attention. As the Brambling is a well-studied species, the matter also raises concerns regarding numbers for mass concentrations of other species. It is recommended that very large numbers of birds should be cited and used with care: underlying data and methods of the original sources should be scrutinized. Analogously, reporters of large numbers of birds are advised to describe and document counting methods. In particular, number estimates based on flock volume and bird density should be avoided.
1010.6178 | Sander Bohte | Sander M. Bohte and Jaldert O. Rombouts | Fractionally Predictive Spiking Neurons | 13 pages, 5 figures, in Advances in Neural Information Processing
2010 | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent experimental work has suggested that the neural firing rate can be
interpreted as a fractional derivative, at least when signal variation induces
neural adaptation. Here, we show that the actual neural spike-train itself can
be considered as the fractional derivative, provided that the neural signal is
approximated by a sum of power-law kernels. A simple standard thresholding
spiking neuron suffices to carry out such an approximation, given a suitable
refractory response. Empirically, we find that the online approximation of
signals with a sum of power-law kernels is beneficial for encoding signals with
slowly varying components, like long-memory self-similar signals. For such
signals, the online power-law kernel approximation typically required less than
half the number of spikes for similar SNR as compared to sums of similar but
exponentially decaying kernels. As power-law kernels can be accurately
approximated using sums or cascades of weighted exponentials, we demonstrate
that the corresponding decoding of spike-trains by a receiving neuron allows
for natural and transparent temporal signal filtering by tuning the weights of
the decoding kernel.
| [
{
"created": "Fri, 29 Oct 2010 10:48:25 GMT",
"version": "v1"
}
] | 2010-11-01 | [
[
"Bohte",
"Sander M.",
""
],
[
"Rombouts",
"Jaldert O.",
""
]
] | Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. |
2007.00621 | Papia Chowdhury Dr | Papia Chowdhury | In Silico Investigation of Phytoconstituents from Indian Medicinal Herb
'Tinospora cordifolia (Giloy)' against SARS-CoV-2 (COVID-19) by Molecular
Dynamics Approach | Possibility of using chemical extracts from Indian Medicinal Herb for
the treatment of COVID-19 is investigated. Accepted for publication in
Journal of Biomolecular Structure and Dynamics | null | 10.1080/07391102.2020.1803968 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent appearance of COVID-19 virus has created a global crisis due to
unavailability of any vaccine or drug that can effectively and
deterministically work against it. Naturally, different possibilities
(including herbal medicines having known therapeutic significance) have been
explored by scientists. The systematic scientific study (beginning with in
silico study) of herbal medicines in particular and any drug in general is now
possible as the structural components (proteins) of COVID-19 are already
characterized. The main protease of COVID-19 virus is $\rm{M^{pro}}$ or
$\rm{3CL^{pro}}$ which is a key CoV enzyme and an attractive drug target as it
plays a pivotal role in mediating viral replication and transcription. In the
present study, $\rm{3CL^{pro}}$ is used to study drug:3CLpro interactions and
thus to investigate whether all or any of the main chemical constituents of
Tinospora cordifolia (e.g., berberine $\rm{(C_{20}H_{18}NO_{4})}$,
$\beta$-sitosterol $\rm{(C_{29}H_{50}O)}$, choline $\rm{(C_{5}H_{14}NO)}$,
tetrahydropalmatine $\rm{(C_{21}H_{25}NO_{4})}$ and octacosanol
$\rm{(C_{28}H_{58}O)}$) can be used as an anti-viral drug against SARS-CoV-2.
The in silico study, performed using tools of network pharmacology and molecular
docking, including molecular dynamics, has revealed that among all considered
phytochemicals in Tinospora cordifolia, berberine can regulate $\rm{3CL^{pro}}$
protein's function due to its easy inhibition and thus can control viral
replication. The selection of Tinospora cordifolia was motivated by the fact
that the main constituents of it are known to be responsible for various
antiviral activities and the treatment of jaundice, rheumatism, diabetes, etc.
| [
{
"created": "Fri, 29 May 2020 20:10:35 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Aug 2020 17:00:18 GMT",
"version": "v2"
}
] | 2020-08-07 | [
[
"Chowdhury",
"Papia",
""
]
] | The recent appearance of COVID-19 virus has created a global crisis due to unavailability of any vaccine or drug that can effectively and deterministically work against it. Naturally, different possibilities (including herbal medicines having known therapeutic significance) have been explored by scientists. The systematic scientific study (beginning with in silico study) of herbal medicines in particular and any drug in general is now possible as the structural components (proteins) of COVID-19 are already characterized. The main protease of COVID-19 virus is $\rm{M^{pro}}$ or $\rm{3CL^{pro}}$ which is a key CoV enzyme and an attractive drug target as it plays a pivotal role in mediating viral replication and transcription. In the present study, $\rm{3CL^{pro}}$ is used to study drug:3CLpro interactions and thus to investigate whether all or any of the main chemical constituents of Tinospora cordifolia (e.g., berberine $\rm{(C_{20}H_{18}NO_{4})}$, $\beta$-sitosterol $\rm{(C_{29}H_{50}O)}$, choline $\rm{(C_{5}H_{14}NO)}$, tetrahydropalmatine $\rm{(C_{21}H_{25}NO_{4})}$ and octacosanol $\rm{(C_{28}H_{58}O)}$) can be used as an anti-viral drug against SARS-CoV-2. The in silico study, performed using tools of network pharmacology and molecular docking, including molecular dynamics, has revealed that among all considered phytochemicals in Tinospora cordifolia, berberine can regulate $\rm{3CL^{pro}}$ protein's function due to its easy inhibition and thus can control viral replication. The selection of Tinospora cordifolia was motivated by the fact that the main constituents of it are known to be responsible for various antiviral activities and the treatment of jaundice, rheumatism, diabetes, etc.
2004.08365 | Emanuele Daddi | Emanuele Daddi and Mauro Giavalisco | Early forecasts of the evolution of the COVID-19 outbreaks and
quantitative assessment of the effectiveness of countering measures | Forecasts in Table I updated with data collected until April 26th. 10
pages, 7 figures, 2 tables. Comments welcome | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discovered that the time evolution of the inverse fractional daily growth
of new infections, N/dN, in the current outbreak of COVID-19 is accurately
described by a universal function, namely the two-parameter Gumbel cumulative
function, in all countries that we have investigated. While the two Gumbel
parameters, as determined by fits to the data, vary from country to country
(and even within different regions of the same country), reflecting the
diversity and efficacy of the adopted containment measures, the functional form
of the evolution of N/dN appears to be universal. The result of the fit in a
given region or country appears to be stable against variations of the selected
time interval. This makes it possible to robustly estimate the two parameters
from the data even over relatively small time periods. In turn, this
allows one to predict, well in advance and with well-controlled confidence levels,
the time of the peak in the daily new infections, its magnitude and duration
(hence the total infections), as well as the time when the daily new infections
decrease to a pre-set value (e.g. less than about 2 new infections per day per
million people), which can be very useful for planning the reopening of
economic and social activities. We use this formalism to predict and compare
these key features of the evolution of the COVID-19 disease in a number of
countries and provide a quantitative assessment of the degree of success in
their efforts to contain the outbreak.
| [
{
"created": "Fri, 17 Apr 2020 17:41:24 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Apr 2020 17:10:14 GMT",
"version": "v2"
}
] | 2020-04-29 | [
[
"Daddi",
"Emanuele",
""
],
[
"Giavalisco",
"Mauro",
""
]
] | We discovered that the time evolution of the inverse fractional daily growth of new infections, N/dN, in the current outbreak of COVID-19 is accurately described by a universal function, namely the two-parameter Gumbel cumulative function, in all countries that we have investigated. While the two Gumbel parameters, as determined by fits to the data, vary from country to country (and even within different regions of the same country), reflecting the diversity and efficacy of the adopted containment measures, the functional form of the evolution of N/dN appears to be universal. The result of the fit in a given region or country appears to be stable against variations of the selected time interval. This makes it possible to robustly estimate the two parameters from the data even over relatively small time periods. In turn, this allows one to predict, well in advance and with well-controlled confidence levels, the time of the peak in the daily new infections, its magnitude and duration (hence the total infections), as well as the time when the daily new infections decrease to a pre-set value (e.g. less than about 2 new infections per day per million people), which can be very useful for planning the reopening of economic and social activities. We use this formalism to predict and compare these key features of the evolution of the COVID-19 disease in a number of countries and provide a quantitative assessment of the degree of success in their efforts to contain the outbreak.
1707.08356 | Andrea Mazzolini | Andrea Mazzolini, Marco Gherardi, Michele Caselle, Marco Cosentino
Lagomarsino, Matteo Osella | Statistics of shared components in complex component systems | 18 pages, 7 main figures, 7 supplementary figures | Phys. Rev. X 8, 021023 (2018) | 10.1103/PhysRevX.8.021023 | null | q-bio.GN physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many complex systems are modular. Such systems can be represented as
"component systems", i.e., sets of elementary components, such as LEGO bricks
in LEGO sets. The bricks found in a LEGO set reflect a target architecture,
which can be built following a set-specific list of instructions. In other
component systems, instead, the underlying functional design and constraints
are not obvious a priori, and their detection is often a challenge of both
scientific and practical importance, requiring a clear understanding of
component statistics. Importantly, some quantitative invariants appear to be
common to many component systems, most notably a common broad distribution of
component abundances, which often resembles the well-known Zipf's law. Such
"laws" affect in a general and non-trivial way the component statistics,
potentially hindering the identification of system-specific functional
constraints or generative processes. Here, we specifically focus on the
statistics of shared components, i.e., the distribution of the number of
components shared by different system-realizations, such as the common bricks
found in different LEGO sets. To account for the effects of component
heterogeneity, we consider a simple null model, which builds
system-realizations by random draws from a universe of possible components.
Under general assumptions on abundance heterogeneity, we provide analytical
estimates of component occurrence, which quantify exhaustively the statistics
of shared components. Surprisingly, this simple null model can positively
explain important features of empirical component-occurrence distributions
obtained from data on bacterial genomes, LEGO sets, and book chapters. Specific
architectural features and functional constraints can be detected from
occurrence patterns as deviations from these null predictions, as we show for
the illustrative case of the "core" genome in bacteria.
| [
{
"created": "Wed, 26 Jul 2017 10:23:24 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Apr 2018 12:07:17 GMT",
"version": "v2"
}
] | 2018-04-24 | [
[
"Mazzolini",
"Andrea",
""
],
[
"Gherardi",
"Marco",
""
],
[
"Caselle",
"Michele",
""
],
[
"Lagomarsino",
"Marco Cosentino",
""
],
[
"Osella",
"Matteo",
""
]
] | Many complex systems are modular. Such systems can be represented as "component systems", i.e., sets of elementary components, such as LEGO bricks in LEGO sets. The bricks found in a LEGO set reflect a target architecture, which can be built following a set-specific list of instructions. In other component systems, instead, the underlying functional design and constraints are not obvious a priori, and their detection is often a challenge of both scientific and practical importance, requiring a clear understanding of component statistics. Importantly, some quantitative invariants appear to be common to many component systems, most notably a common broad distribution of component abundances, which often resembles the well-known Zipf's law. Such "laws" affect in a general and non-trivial way the component statistics, potentially hindering the identification of system-specific functional constraints or generative processes. Here, we specifically focus on the statistics of shared components, i.e., the distribution of the number of components shared by different system-realizations, such as the common bricks found in different LEGO sets. To account for the effects of component heterogeneity, we consider a simple null model, which builds system-realizations by random draws from a universe of possible components. Under general assumptions on abundance heterogeneity, we provide analytical estimates of component occurrence, which quantify exhaustively the statistics of shared components. Surprisingly, this simple null model can positively explain important features of empirical component-occurrence distributions obtained from data on bacterial genomes, LEGO sets, and book chapters. Specific architectural features and functional constraints can be detected from occurrence patterns as deviations from these null predictions, as we show for the illustrative case of the "core" genome in bacteria. |
2405.01703 | Charles Puelz | Charles Puelz, Craig G. Rusin, Dan Lior, Shagun Sachdeva, Tam T. Doan,
Lindsay F. Eilers, Dana Reaves-O'Neal, and Silvana Molossi | Fluid-structure interaction simulations for the prediction of fractional
flow reserve in pediatric patients with anomalous aortic origin of a coronary
artery | null | null | null | null | q-bio.TO physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer simulations of blood flow in patients with anomalous aortic origin
of a coronary artery (AAOCA) hold the promise of providing insight into this
complex disease. They provide an in-silico experimental platform to explore
possible mechanisms of myocardial ischemia, a potentially deadly complication
for patients with this defect. This paper focuses on the question of model
calibration for fluid-structure interaction models of pediatric AAOCA patients.
Imaging and cardiac catheterization data provide partial information for model
construction and calibration. However, parameters for downstream boundary
conditions needed for these models are difficult to estimate. Further,
important model predictions, like fractional flow reserve (FFR), are sensitive
to these parameters. We describe an approach to calibrate downstream boundary
condition parameters to clinical measurements of resting FFR. The calibrated
models are then used to predict FFR at stress, an invasively measured quantity
that can be used in the clinical evaluation of these patients. We find
reasonable agreement between the model predicted and clinically measured FFR at
stress, indicating the credibility of this modeling framework for predicting
hemodynamics of pediatric AAOCA patients. This approach could lead to important
clinical applications since it may serve as a tool for risk stratifying
children with AAOCA.
| [
{
"created": "Thu, 2 May 2024 19:58:54 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Puelz",
"Charles",
""
],
[
"Rusin",
"Craig G.",
""
],
[
"Lior",
"Dan",
""
],
[
"Sachdeva",
"Shagun",
""
],
[
"Doan",
"Tam T.",
""
],
[
"Eilers",
"Lindsay F.",
""
],
[
"Reaves-O'Neal",
"Dana",
""
],
[
... | Computer simulations of blood flow in patients with anomalous aortic origin of a coronary artery (AAOCA) have the promise to provide insight into this complex disease. They provide an in-silico experimental platform to explore possible mechanisms of myocardial ischemia, a potentially deadly complication for patients with this defect. This paper focuses on the question of model calibration for fluid-structure interaction models of pediatric AAOCA patients. Imaging and cardiac catheterization data provide partial information for model construction and calibration. However, parameters for downstream boundary conditions needed for these models are difficult to estimate. Further, important model predictions, like fractional flow reserve (FFR), are sensitive to these parameters. We describe an approach to calibrate downstream boundary condition parameters to clinical measurements of resting FFR. The calibrated models are then used to predict FFR at stress, an invasively measured quantity that can be used in the clinical evaluation of these patients. We find reasonable agreement between the model predicted and clinically measured FFR at stress, indicating the credibility of this modeling framework for predicting hemodynamics of pediatric AAOCA patients. This approach could lead to important clinical applications since it may serve as a tool for risk stratifying children with AAOCA. |
2003.05406 | Homayoun Valafar | Timothy Matthew Fawcett, Stephanie Irausquin, Mikhail Simin, Homayoun
Valafar | An Artificial Neural Network Based Approach for Identification of Native
Protein Structures using an Extended ForceField | null | 2011 IEEE International Conference on Bioinformatics and
Biomedicine, 500-505 | 10.1109/BIBM.2011.53 | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current protein forcefields like the ones seen in CHARMM or Xplor-NIH have
many bonded and non-bonded terms. Yet these forcefields do not
take into account hydrogen bonds, which are important for secondary
structure creation and stabilization of proteins. SCOPE is an open-source
program that generates proteins from rotamer space. It then creates a
forcefield that uses only non-bonded and hydrogen bond energy terms to create a
profile for a given protein. The profiles can then be used in an artificial
neural network to create a linear model that is funneled to the true protein
conformation.
| [
{
"created": "Thu, 5 Mar 2020 13:47:29 GMT",
"version": "v1"
}
] | 2020-03-12 | [
[
"Fawcett",
"Timothy Matthew",
""
],
[
"Irausquin",
"Stephanie",
""
],
[
"Simin",
"Mikhail",
""
],
[
"Valafar",
"Homayoun",
""
]
] | Current protein forcefields like the ones seen in CHARMM or Xplor-NIH have many bonded and non-bonded terms. Yet these forcefields do not take into account hydrogen bonds, which are important for secondary structure creation and stabilization of proteins. SCOPE is an open-source program that generates proteins from rotamer space. It then creates a forcefield that uses only non-bonded and hydrogen bond energy terms to create a profile for a given protein. The profiles can then be used in an artificial neural network to create a linear model that is funneled to the true protein conformation.
1208.0162 | Thomas R. Sokolowski | Thomas R. Sokolowski, Thorsten Erdmann and Pieter Rein ten Wolde | Mutual Repression enhances the Steepness and Precision of Gene
Expression Boundaries | 29 pages, 9 figures, supporting text with 9 supporting figures;
accepted for publication in PLoS Comp. Biol | null | 10.1371/journal.pcbi.1002654 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embryonic development is driven by spatial patterns of gene expression that
determine the fate of each cell in the embryo. While gene expression is often
highly erratic, embryonic development is usually exceedingly precise. In
particular, gene expression boundaries are robust not only against intrinsic
noise from gene expression and protein diffusion, but also against
embryo-to-embryo variations in the morphogen gradients, which provide
positional information to the differentiating cells. How development is robust
against intra- and inter-embryonic variations is not understood. A common motif
in the gene regulation networks that control embryonic development is mutual
repression between pairs of genes. To assess the role of mutual repression in
the robust formation of gene expression patterns, we have performed large-scale
stochastic simulations of a minimal model of two mutually repressing gap genes
in Drosophila, hunchback (hb) and knirps (kni). Our model includes not only
mutual repression between hb and kni, but also the stochastic and cooperative
activation of hb by the anterior morphogen Bicoid (Bcd) and of kni by the
posterior morphogen Caudal (Cad), as well as the diffusion of Hb and Kni. Our
analysis reveals that mutual repression can markedly increase the steepness and
precision of the gap gene expression boundaries. In contrast to spatial
averaging and cooperative gene activation, mutual repression thus allows for
gene-expression boundaries that are both steep and precise. Moreover, mutual
repression dramatically enhances their robustness against embryo-to-embryo
variations in the morphogen levels. Finally, our simulations reveal that gap
protein diffusion plays a critical role not only in reducing the width of gap
gene expression boundaries via spatial averaging, but also in repairing
patterning errors that could arise due to the bistability induced by mutual
repression.
| [
{
"created": "Wed, 1 Aug 2012 10:15:37 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Aug 2012 11:08:50 GMT",
"version": "v2"
}
] | 2015-06-11 | [
[
"Sokolowski",
"Thomas R.",
""
],
[
"Erdmann",
"Thorsten",
""
],
[
"Wolde",
"Pieter Rein ten",
""
]
] | Embryonic development is driven by spatial patterns of gene expression that determine the fate of each cell in the embryo. While gene expression is often highly erratic, embryonic development is usually exceedingly precise. In particular, gene expression boundaries are robust not only against intrinsic noise from gene expression and protein diffusion, but also against embryo-to-embryo variations in the morphogen gradients, which provide positional information to the differentiating cells. How development is robust against intra- and inter-embryonic variations is not understood. A common motif in the gene regulation networks that control embryonic development is mutual repression between pairs of genes. To assess the role of mutual repression in the robust formation of gene expression patterns, we have performed large-scale stochastic simulations of a minimal model of two mutually repressing gap genes in Drosophila, hunchback (hb) and knirps (kni). Our model includes not only mutual repression between hb and kni, but also the stochastic and cooperative activation of hb by the anterior morphogen Bicoid (Bcd) and of kni by the posterior morphogen Caudal (Cad), as well as the diffusion of Hb and Kni. Our analysis reveals that mutual repression can markedly increase the steepness and precision of the gap gene expression boundaries. In contrast to spatial averaging and cooperative gene activation, mutual repression thus allows for gene-expression boundaries that are both steep and precise. Moreover, mutual repression dramatically enhances their robustness against embryo-to-embryo variations in the morphogen levels. Finally, our simulations reveal that gap protein diffusion plays a critical role not only in reducing the width of gap gene expression boundaries via spatial averaging, but also in repairing patterning errors that could arise due to the bistability induced by mutual repression. |
2006.12616 | Guido Schillaci | Guido Schillaci and Luis Miranda and Uwe Schmidt | Prediction error-driven memory consolidation for continual learning. On
the case of adaptive greenhouse models | Revised version. Paper under review, submitted to Springer German
Journal on Artificial Intelligence (K\"unstliche Intelligenz), Special Issue
on Developmental Robotics | null | 10.1007/s13218-020-00700-8 | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | This work presents an adaptive architecture that performs online learning and
addresses catastrophic forgetting by means of episodic memories and
prediction-error driven memory consolidation. In line with evidence from
cognitive science and neuroscience, memories are retained depending on their
congruency with the prior knowledge stored in the system. This is estimated in
terms of prediction error resulting from a generative model. Moreover, this AI
system is transferred onto an innovative application in the horticulture
industry: the learning and transfer of greenhouse models. This work presents a
model trained on data recorded from research facilities and transferred to a
production greenhouse.
| [
{
"created": "Tue, 19 May 2020 15:22:53 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jul 2020 11:16:28 GMT",
"version": "v2"
}
] | 2021-01-29 | [
[
"Schillaci",
"Guido",
""
],
[
"Miranda",
"Luis",
""
],
[
"Schmidt",
"Uwe",
""
]
] | This work presents an adaptive architecture that performs online learning and addresses catastrophic forgetting by means of episodic memories and prediction-error driven memory consolidation. In line with evidence from cognitive science and neuroscience, memories are retained depending on their congruency with the prior knowledge stored in the system. This is estimated in terms of prediction error resulting from a generative model. Moreover, this AI system is transferred onto an innovative application in the horticulture industry: the learning and transfer of greenhouse models. This work presents a model trained on data recorded from research facilities and transferred to a production greenhouse.
1503.04224 | Duncan Ralph | Duncan K. Ralph and Frederick A. Matsen IV | Consistency of VDJ rearrangement and substitution parameters enables
accurate B cell receptor sequence annotation | null | null | 10.1371/journal.pcbi.1004409 | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | VDJ rearrangement and somatic hypermutation work together to produce
antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of
antigens. It is now possible to sequence these BCRs in high throughput;
analysis of these sequences is bringing new insight into how antibodies
develop, in particular for broadly-neutralizing antibodies against HIV and
influenza. A fundamental step in such sequence analysis is to annotate each
base as coming from a specific one of the V, D, or J genes, or from an
N-addition (a.k.a. non-templated insertion). Previous work has used simple
parametric distributions to model transitions from state to state in a hidden
Markov model (HMM) of VDJ recombination, and assumed that mutations occur via
the same process across sites. However, codon frame and other effects have been
observed to violate these parametric assumptions for such coding sequences,
suggesting that a non-parametric approach to modeling the recombination process
could be useful. In our paper, we find that indeed large modern data sets
suggest a model using parameter-rich per-allele categorical distributions for
HMM transition probabilities and per-allele-per-position mutation
probabilities, and that using such a model for inference leads to significantly
improved results. We present an accurate and efficient BCR sequence annotation
software package using a novel HMM "factorization" strategy. This package,
called partis (https://github.com/psathyrella/partis/), is built on a new
general-purpose HMM compiler that can perform efficient inference given a
simple text description of an HMM.
| [
{
"created": "Fri, 13 Mar 2015 21:16:20 GMT",
"version": "v1"
},
{
"created": "Thu, 28 May 2015 19:14:09 GMT",
"version": "v2"
}
] | 2016-02-17 | [
[
"Ralph",
"Duncan K.",
""
],
[
"Matsen",
"Frederick A.",
"IV"
]
] | VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM. |
q-bio/0403017 | Byung Mook Weon | Byung Mook Weon | Complementarity between survival and mortality | 29 Pages, 9 Figures, Submitted to Experimental Gerontology | null | null | null | q-bio.PE | null | Accurate demographic functions help scientists define and understand
longevity. We summarize a new demographic model, the Weon model, and show the
application to the demographic data for Switzerland (1876-2002). Particularly,
the Weon model simply defines the maximum longevity, which is induced in nature
by the mortality dynamics. In this study, we reconsider the definition of the
maximum longevity and the effectiveness for longevity by the combined effect of
the survival and mortality functions. The results suggest that the mortality
function should be zero at the maximum longevity, since the density function is
zero but the survival function is not zero. Furthermore, the effectiveness for
longevity can be maximized at the characteristic life by the complementarity
between the survival and mortality functions, which suggests that there may be
two parts of rectangularization for longevity. The historical trends for
Switzerland (1876-2002) imply that there may be a fundamental limiting force
to restrict the increase of the effectiveness. As a result, it seems that the
density function is essential to define and understand the mortality dynamics,
the maximum longevity, the effectiveness for longevity, the paradigm of
rectangularization and the historical trends of the effectiveness by the
complementarity between the survival and mortality functions.
| [
{
"created": "Mon, 15 Mar 2004 10:00:00 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Weon",
"Byung Mook",
""
]
] ] | Accurate demographic functions help scientists define and understand longevity. We summarize a new demographic model, the Weon model, and show the application to the demographic data for Switzerland (1876-2002). Particularly, the Weon model simply defines the maximum longevity, which is induced in nature by the mortality dynamics. In this study, we reconsider the definition of the maximum longevity and the effectiveness for longevity by the combined effect of the survival and mortality functions. The results suggest that the mortality function should be zero at the maximum longevity, since the density function is zero but the survival function is not zero. Furthermore, the effectiveness for longevity can be maximized at the characteristic life by the complementarity between the survival and mortality functions, which suggests that there may be two parts of rectangularization for longevity. The historical trends for Switzerland (1876-2002) imply that there may be a fundamental limiting force to restrict the increase of the effectiveness. As a result, it seems that the density function is essential to define and understand the mortality dynamics, the maximum longevity, the effectiveness for longevity, the paradigm of rectangularization and the historical trends of the effectiveness by the complementarity between the survival and mortality functions.
1509.03513 | Yuri Shestopaloff | Yuri K. Shestopaloff and Ivo F. Sbalzarini | A Method for Modeling Growth of Organs and Transplants Based on the
General Growth Law: Application to the Liver in Dogs and Humans | 13 pages, 6 figures | PLoS ONE 2014, 9(6): e99275 | 10.1371/journal.pone.0099275 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding biological phenomena requires a systemic approach that
incorporates different mechanisms acting on different spatial and temporal
scales, since in organisms the workings of all components, such as organelles,
cells, and organs interrelate. This inherent interdependency between diverse
biological mechanisms, both on the same and on different scales, provides the
functioning of an organism capable of maintaining homeostasis and physiological
stability through numerous feedback loops. Thus, developing models of organisms
and their constituents should be done within the overall systemic context of
the studied phenomena. We introduce such a method for modeling growth and
regeneration of livers at the organ scale, considering it a part of the overall
multi-scale biochemical and biophysical processes of an organism. Our method is
based on the earlier discovered general growth law, postulating that any
biological growth process comprises a uniquely defined distribution of
nutritional resources between maintenance needs and biomass production. Based
on this law, we introduce a liver growth model that allows accurate
prediction of the growth of liver transplants in dogs and liver grafts in humans.
Using this model, we find quantitative growth characteristics, such as the time
point when the transition period after surgery is over and the liver resumes
normal growth, rates at which hepatocytes are involved in proliferation, etc.
We then use the model to determine and quantify otherwise unobservable
metabolic properties of livers.
| [
{
"created": "Thu, 10 Sep 2015 00:44:32 GMT",
"version": "v1"
}
] | 2015-09-14 | [
[
"Shestopaloff",
"Yuri K.",
""
],
[
"Sbalzarini",
"Ivo F.",
""
]
] ] | Understanding biological phenomena requires a systemic approach that incorporates different mechanisms acting on different spatial and temporal scales, since in organisms the workings of all components, such as organelles, cells, and organs interrelate. This inherent interdependency between diverse biological mechanisms, both on the same and on different scales, provides the functioning of an organism capable of maintaining homeostasis and physiological stability through numerous feedback loops. Thus, developing models of organisms and their constituents should be done within the overall systemic context of the studied phenomena. We introduce such a method for modeling growth and regeneration of livers at the organ scale, considering it a part of the overall multi-scale biochemical and biophysical processes of an organism. Our method is based on the earlier discovered general growth law, postulating that any biological growth process comprises a uniquely defined distribution of nutritional resources between maintenance needs and biomass production. Based on this law, we introduce a liver growth model that allows accurate prediction of the growth of liver transplants in dogs and liver grafts in humans. Using this model, we find quantitative growth characteristics, such as the time point when the transition period after surgery is over and the liver resumes normal growth, rates at which hepatocytes are involved in proliferation, etc. We then use the model to determine and quantify otherwise unobservable metabolic properties of livers.
0904.2637 | Hongbin Guo | Hongbin Guo, Kewei Chen, Rosemary A Renaut, Eric M Reiman | Reducing the noise effects in Logan graphic analysis for PET receptor
measurements | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logan's graphical analysis (LGA) is a widely-used approach for quantification
of biochemical and physiological processes from Positron emission tomography
(PET) image data. A well-noted problem associated with the LGA method is the
bias in the estimated parameters. We recently systematically evaluated the bias
associated with the linear model approximation and developed an alternative to
minimize the bias due to model error. In this study, we examined the noise
structure in the equations defining linear quantification methods, including
LGA. The noise structure conflicts with the conditions given by the
Gauss-Markov theorem for the least squares (LS) solution to generate the best
linear unbiased estimator. By carefully taking care of the data error
structure, we propose to use structured total least squares (STLS) to obtain
the solution using a one-dimensional optimization problem. Simulations of PET
data for [11C] benzothiazole-aniline (Pittsburgh Compound-B [PIB]) show that
the proposed method significantly reduces the bias. We conclude that the bias
associated with noise is primarily due to the unusual structure of the
correlated noise and it can be reduced with the proposed STLS method.
| [
{
"created": "Fri, 17 Apr 2009 05:44:16 GMT",
"version": "v1"
}
] | 2009-04-20 | [
[
"Guo",
"Hongbin",
""
],
[
"Chen",
"Kewei",
""
],
[
"Renaut",
"Rosemary A",
""
],
[
"Reiman",
"Eric M",
""
]
] ] | Logan's graphical analysis (LGA) is a widely-used approach for quantification of biochemical and physiological processes from Positron emission tomography (PET) image data. A well-noted problem associated with the LGA method is the bias in the estimated parameters. We recently systematically evaluated the bias associated with the linear model approximation and developed an alternative to minimize the bias due to model error. In this study, we examined the noise structure in the equations defining linear quantification methods, including LGA. The noise structure conflicts with the conditions given by the Gauss-Markov theorem for the least squares (LS) solution to generate the best linear unbiased estimator. By carefully taking care of the data error structure, we propose to use structured total least squares (STLS) to obtain the solution using a one-dimensional optimization problem. Simulations of PET data for [11C] benzothiazole-aniline (Pittsburgh Compound-B [PIB]) show that the proposed method significantly reduces the bias. We conclude that the bias associated with noise is primarily due to the unusual structure of the correlated noise and it can be reduced with the proposed STLS method.
2009.06899 | Xuyun Wen | Xuyun Wen, Liming Hsu, Weili Lin, Han Zhang, Dinggang Shen | Co-evolution of Functional Brain Network at Multiple Scales during Early
Infancy | 10 pages, 4 figures | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human brains are organized into hierarchically modular networks
facilitating efficient and stable information processing and supporting diverse
cognitive processes during the course of development. While the remarkable
reconfiguration of functional brain network has been firmly established in
early life, all these studies investigated the network development from a
"single-scale" perspective, which ignore the richness engendered by its
hierarchical nature. To fill this gap, this paper leveraged a longitudinal
infant resting-state functional magnetic resonance imaging dataset from birth
to 2 years of age, and proposed an advanced methodological framework to
delineate the multi-scale reconfiguration of functional brain network during
early development. Our proposed framework consists of two parts. The first
part developed a novel two-step multi-scale module detection method that could
uncover efficient and consistent modular structure for longitudinal dataset
from multiple scales in a completely data-driven manner. The second part
designed a systematic approach that employed the linear mixed-effect model to
four global and nodal module-related metrics to delineate scale-specific
age-related changes of network organization. By applying our proposed
methodological framework on the collected longitudinal infant dataset, we
provided the first evidence that, in the first 2 years of life, the brain
functional network co-evolves at different scales, where each scale displays
the unique reconfiguration pattern in terms of modular organization.
| [
{
"created": "Tue, 15 Sep 2020 07:21:04 GMT",
"version": "v1"
}
] | 2020-09-16 | [
[
"Wen",
"Xuyun",
""
],
[
"Hsu",
"Liming",
""
],
[
"Lin",
"Weili",
""
],
[
"Zhang",
"Han",
""
],
[
"Shen",
"Dinggang",
""
]
] ] | The human brains are organized into hierarchically modular networks facilitating efficient and stable information processing and supporting diverse cognitive processes during the course of development. While the remarkable reconfiguration of functional brain network has been firmly established in early life, all these studies investigated the network development from a "single-scale" perspective, which ignores the richness engendered by its hierarchical nature. To fill this gap, this paper leveraged a longitudinal infant resting-state functional magnetic resonance imaging dataset from birth to 2 years of age, and proposed an advanced methodological framework to delineate the multi-scale reconfiguration of functional brain network during early development. Our proposed framework consists of two parts. The first part developed a novel two-step multi-scale module detection method that could uncover efficient and consistent modular structure for longitudinal dataset from multiple scales in a completely data-driven manner. The second part designed a systematic approach that employed the linear mixed-effect model to four global and nodal module-related metrics to delineate scale-specific age-related changes of network organization. By applying our proposed methodological framework on the collected longitudinal infant dataset, we provided the first evidence that, in the first 2 years of life, the brain functional network co-evolves at different scales, where each scale displays the unique reconfiguration pattern in terms of modular organization.
q-bio/0410036 | Ilya M. Nemenman | Adam A. Margolin, Ilya Nemenman, Chris Wiggins, Gustavo Stolovitzky,
Andrea Califano | On The Reconstruction of Interaction Networks with Applications to
Transcriptional Regulation | 4 pages, 1 figure; NIPS'04 workshop on Computational Biology;
extended abstract of q-bio.MN/0410037; minor changes following post-workshop
discussions | null | null | null | q-bio.MN q-bio.GN q-bio.QM | null | A novel information-theoretic method for reconstruction of interaction
networks is introduced. We prove that the method is exact for some class of
networks. Performance tests on large synthetic transcriptional regulatory
networks produce very encouraging results.
| [
{
"created": "Thu, 28 Oct 2004 13:49:23 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Sep 2005 19:26:51 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Margolin",
"Adam A.",
""
],
[
"Nemenman",
"Ilya",
""
],
[
"Wiggins",
"Chris",
""
],
[
"Stolovitzky",
"Gustavo",
""
],
[
"Califano",
"Andrea",
""
]
] | A novel information-theoretic method for reconstruction of interaction networks is introduced. We prove that the method is exact for some class of networks. Performance tests on large synthetic transcriptional regulatory networks produce very encouraging results. |
1805.00674 | Yannick Ramonet | Yannick Ramonet, Carole Bertin | Use of accelerometers to measure physical activity of group-housed
pregnant sows. Method development and use in six pig herds | in French | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of precision livestock farming which adjusts the food needs
of each animal requires detailed knowledge of its behavior and particularly
physical activity. Individual differences between animals can be observed for
group-housed sows. Accelerometer technology offers opportunities for automatic
monitoring of animal behavior. The aim of the first step was to develop a
methodology to attach the accelerometer on the sow's leg, and an algorithm to
automatically detect standing and lying posture. Accelerometers (Hobo Pendant
G) were put in a metal case and fastened with two cable ties on the leg of 6
group-housed sows. The data loggers recorded the acceleration on one axis every
20 s. Data were then validated by 9 hours of direct observations. The automatic
recording device showed data of high sensitivity (98.8%) and specificity
(99.8%). Then, accelerometers were placed on 12 to 13 group-housed sows for 2
to 4 consecutive days in 6 commercial farms equipped with electronic sow
feeding. On average each day, sows spent 259 minutes ($\pm$ 114) standing and
changed posture 29 ($\pm$ 12) times. The sow's standing time was repeatable day
to day. Differences between sows and herds were significant. Based on
behavioral data, 5 categories of sows were identified. This study suggests that
the consideration of individual behavior of each animal would improve herd
management.
| [
{
"created": "Wed, 2 May 2018 08:32:25 GMT",
"version": "v1"
}
] | 2018-05-03 | [
[
"Ramonet",
"Yannick",
""
],
[
"Bertin",
"Carole",
""
]
] | The development of precision livestock farming which adjusts the food needs of each animal requires detailed knowledge of its behavior and particularly physical activity. Individual differences between animals can be observed for group-housed sows. Accelerometer technology offers opportunities for automatic monitoring of animal behavior. The aim of the first step was to develop a methodology to attach the accelerometer on the sow's leg, and an algorithm to automatically detect standing and lying posture. Accelerometers (Hobo Pendant G) were put in a metal case and fastened with two cable ties on the leg of 6 group-housed sows. The data loggers recorded the acceleration on one axis every 20 s. Data were then validated by 9 hours of direct observations. The automatic recording device showed data of high sensitivity (98.8%) and specificity (99.8%). Then, accelerometers were placed on 12 to 13 group-housed sows for 2 to 4 consecutive days in 6 commercial farms equipped with electronic sow feeding. On average each day, sows spent 259 minutes ($\pm$ 114) standing and changed posture 29 ($\pm$ 12) times. The sow's standing time was repeatable day to day. Differences between sows and herds were significant. Based on behavioral data, 5 categories of sows were identified. This study suggests that the consideration of individual behavior of each animal would improve herd management. |
1912.04823 | Ata Ak{\i}n | Ipek Ustun, Ege Ozer, Erim Habib, Burcin Tatliesme, Ata Akin | Vigilance Overload Measured by Computerized Mackworth Clock Test | 4 pages, 4 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studied the change of vigilance based on stimuli coming
consecutively, using the computerized version of the Mackworth Clock Test run
from the PsyToolkit website. 7 participants (16.57 +/-1 years old, 2 males)
performed 10 consecutive trials in order to measure whether or not it is a
realistic goal for high school students to display the level of vigilance
expected from them in class. Success percentages were calculated by dividing
the number of correct jumps by the total number of jumps. The results indicated
that while the average success percentage for all subjects remained relatively
stable over the 10 trials (79% +/-7%), success percentages dropped as
the number of jumps increased. Success rate dropped from 90% (2 jumps) to 70% (7
jumps). We conclude that there is an upper limit of vigilance that should be
expected from students when they are exposed to more than 4 randomly occurring
attention-requiring tasks within a minute.
| [
{
"created": "Thu, 21 Nov 2019 12:33:25 GMT",
"version": "v1"
}
] | 2019-12-11 | [
[
"Ustun",
"Ipek",
""
],
[
"Ozer",
"Ege",
""
],
[
"Habib",
"Erim",
""
],
[
"Tatliesme",
"Burcin",
""
],
[
"Akin",
"Ata",
""
]
] ] | This paper studied the change of vigilance based on stimuli coming consecutively, using the computerized version of the Mackworth Clock Test run from the PsyToolkit website. 7 participants (16.57 +/-1 years old, 2 males) performed 10 consecutive trials in order to measure whether or not it is a realistic goal for high school students to display the level of vigilance expected from them in class. Success percentages were calculated by dividing the number of correct jumps by the total number of jumps. The results indicated that while the average success percentage for all subjects remained relatively stable over the 10 trials (79% +/-7%), success percentages dropped as the number of jumps increased. Success rate dropped from 90% (2 jumps) to 70% (7 jumps). We conclude that there is an upper limit of vigilance that should be expected from students when they are exposed to more than 4 randomly occurring attention-requiring tasks within a minute.
1104.0702 | John Maloney | John M. Maloney and Krystyn J. Van Vliet | On the origin and extent of mechanical variation among cells | null | null | null | null | q-bio.CB cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigations of natural variation in cell mechanics within a cell
population are essential to understand the stochastic nature of soft-network
deformation. Striking commonalities have been found concerning the average
values and distribution of rheological parameters of cells: first, attached and
suspended cells exhibit power-law rheological behavior; second, cell stiffness
is distributed log-normally. A predictive connection between these two
near-universal findings has not been reported, to our knowledge. Here we
postulate, based on our own and others' experimental reports and leading models
of cell rheology, that the exponent that characterizes power-law rheology
varies intrinsically among cells as an approximately Gaussian-distributed
variable. Besides explaining naturally the log-normal distribution of cell
stiffness that is widely observed, this postulate predicts multiple empirically
observed relationships from cell deformation studies. Our framework ultimately
links inherent noise in postulated relaxation mechanisms of cytoskeletal
networks to mechanical variation among cells and cell populations.
| [
{
"created": "Mon, 4 Apr 2011 22:21:19 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jun 2011 13:58:05 GMT",
"version": "v2"
}
] | 2011-06-22 | [
[
"Maloney",
"John M.",
""
],
[
"Van Vliet",
"Krystyn J.",
""
]
] | Investigations of natural variation in cell mechanics within a cell population are essential to understand the stochastic nature of soft-network deformation. Striking commonalities have been found concerning the average values and distribution of rheological parameters of cells: first, attached and suspended cells exhibit power-law rheological behavior; second, cell stiffness is distributed log-normally. A predictive connection between these two near-universal findings has not been reported, to our knowledge. Here we postulate, based on our own and others' experimental reports and leading models of cell rheology, that the exponent that characterizes power-law rheology varies intrinsically among cells as an approximately Gaussian-distributed variable. Besides explaining naturally the log-normal distribution of cell stiffness that is widely observed, this postulate predicts multiple empirically observed relationships from cell deformation studies. Our framework ultimately links inherent noise in postulated relaxation mechanisms of cytoskeletal networks to mechanical variation among cells and cell populations. |
1003.5557 | Tobias Reichenbach | Tobias Reichenbach, A. J. Hudspeth | A ratchet mechanism for amplification in low-frequency mammalian hearing | 6 pages, 4 figures, plus Supplementary Information. Animation
available on the PNAS website (http://dx.doi.org/10.1073/pnas.0914345107). | Proc. Natl. Acad. Sci. U.S.A. 107, 4973-4978 (2010) | 10.1073/pnas.0914345107 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sensitivity and frequency selectivity of hearing result from tuned
amplification by an active process in the mechanoreceptive hair cells. In most
vertebrates the active process stems from the active motility of hair bundles.
The mammalian cochlea exhibits an additional form of mechanical activity termed
electromotility: its outer hair cells (OHCs) change length upon electrical
stimulation. The relative contributions of these two mechanisms to the active
process in the mammalian inner ear are the subject of intense current debate.
Here we show that active hair-bundle motility and electromotility can together
implement an efficient mechanism for amplification that functions like a
ratchet: sound-evoked forces acting on the basilar membrane are transmitted to
the hair bundles whereas electromotility decouples active hair-bundle forces
from the basilar membrane. This unidirectional coupling can extend the hearing
range well below the resonant frequency of the basilar membrane. It thereby
provides a concept for low-frequency hearing that accounts for a variety of
unexplained experimental observations from the cochlear apex, including the
shape and phase behavior of apical tuning curves, their lack of significant
nonlinearities, and the shape changes of threshold tuning curves of auditory
nerve fibers along the cochlea. The ratchet mechanism constitutes a general
design principle for implementing mechanical amplification in engineering
applications.
| [
{
"created": "Mon, 29 Mar 2010 14:51:39 GMT",
"version": "v1"
}
] | 2010-03-30 | [
[
"Reichenbach",
"Tobias",
""
],
[
"Hudspeth",
"A. J.",
""
]
] ] | The sensitivity and frequency selectivity of hearing result from tuned amplification by an active process in the mechanoreceptive hair cells. In most vertebrates the active process stems from the active motility of hair bundles. The mammalian cochlea exhibits an additional form of mechanical activity termed electromotility: its outer hair cells (OHCs) change length upon electrical stimulation. The relative contributions of these two mechanisms to the active process in the mammalian inner ear are the subject of intense current debate. Here we show that active hair-bundle motility and electromotility can together implement an efficient mechanism for amplification that functions like a ratchet: sound-evoked forces acting on the basilar membrane are transmitted to the hair bundles whereas electromotility decouples active hair-bundle forces from the basilar membrane. This unidirectional coupling can extend the hearing range well below the resonant frequency of the basilar membrane. It thereby provides a concept for low-frequency hearing that accounts for a variety of unexplained experimental observations from the cochlear apex, including the shape and phase behavior of apical tuning curves, their lack of significant nonlinearities, and the shape changes of threshold tuning curves of auditory nerve fibers along the cochlea. The ratchet mechanism constitutes a general design principle for implementing mechanical amplification in engineering applications.
0912.4196 | Tamon Stephen | Cedric Chauve, Utz-Uwe Haus, Tamon Stephen, Vivija P. You | Minimal Conflicting Sets for the Consecutive Ones Property in ancestral
genome reconstruction | 20 pages, 3 figures | J Comput Biol. 2010 Sep;17(9):1167-81 | 10.1089/cmb.2010.0113 | null | q-bio.GN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A binary matrix has the Consecutive Ones Property (C1P) if its columns can be
ordered in such a way that all 1's on each row are consecutive. A Minimal
Conflicting Set is a set of rows that does not have the C1P, but every proper
subset has the C1P. Such submatrices have been considered in comparative
genomics applications, but very little is known about their combinatorial
structure and efficient algorithms to compute them. We first describe an
algorithm that detects rows that belong to Minimal Conflicting Sets. This
algorithm has a polynomial time complexity when the number of 1's in each row
of the considered matrix is bounded by a constant. Next, we show that the
problem of computing all Minimal Conflicting Sets can be reduced to the joint
generation of all minimal true clauses and maximal false clauses for some
monotone boolean function. We use these methods on simulated data related to
ancestral genome reconstruction to show that computing Minimal Conflicting Sets
is useful in discriminating between true positive and false positive ancestral
syntenies. We also study a dataset of yeast genomes and address the reliability
of an ancestral genome proposal of the Saccharomycetaceae yeasts.
| [
{
"created": "Mon, 21 Dec 2009 16:03:06 GMT",
"version": "v1"
}
] | 2011-10-13 | [
[
"Chauve",
"Cedric",
""
],
[
"Haus",
"Utz-Uwe",
""
],
[
"Stephen",
"Tamon",
""
],
[
"You",
"Vivija P.",
""
]
] ] | A binary matrix has the Consecutive Ones Property (C1P) if its columns can be ordered in such a way that all 1's on each row are consecutive. A Minimal Conflicting Set is a set of rows that does not have the C1P, but every proper subset has the C1P. Such submatrices have been considered in comparative genomics applications, but very little is known about their combinatorial structure and efficient algorithms to compute them. We first describe an algorithm that detects rows that belong to Minimal Conflicting Sets. This algorithm has a polynomial time complexity when the number of 1's in each row of the considered matrix is bounded by a constant. Next, we show that the problem of computing all Minimal Conflicting Sets can be reduced to the joint generation of all minimal true clauses and maximal false clauses for some monotone boolean function. We use these methods on simulated data related to ancestral genome reconstruction to show that computing Minimal Conflicting Sets is useful in discriminating between true positive and false positive ancestral syntenies. We also study a dataset of yeast genomes and address the reliability of an ancestral genome proposal of the Saccharomycetaceae yeasts.
2304.03780 | Pan Tan | Pan Tan, Mingchen Li, Liang Zhang, Zhiqiang Hu, Liang Hong | TemPL: A Novel Deep Learning Model for Zero-Shot Prediction of Protein
Stability and Activity Based on Temperature-Guided Language Modeling | This project has been terminated | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce TemPL, a novel deep learning approach for zero-shot prediction
of protein stability and activity, harnessing temperature-guided language
modeling. By assembling an extensive dataset of 96 million sequence-host
bacterial strain optimal growth temperatures (OGTs) and {\Delta}Tm data for
point mutations under consistent experimental conditions, we effectively
compared TemPL with state-of-the-art models. Notably, TemPL demonstrated
superior performance in predicting protein stability. An ablation study was
conducted to elucidate the influence of OGT prediction and language modeling
modules on TemPL's performance, revealing the importance of integrating both
components. Consequently, TemPL offers considerable promise for protein
engineering applications, facilitating the design of mutation sequences with
enhanced stability and activity.
| [
{
"created": "Fri, 7 Apr 2023 09:21:28 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Apr 2023 08:42:56 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Apr 2023 12:39:31 GMT",
"version": "v3"
},
{
"created": "Wed, 10 May 2023 03:04:29 GMT",
"version": "v4"
},
{
"cr... | 2024-05-14 | [
[
"Tan",
"Pan",
""
],
[
"Li",
"Mingchen",
""
],
[
"Zhang",
"Liang",
""
],
[
"Hu",
"Zhiqiang",
""
],
[
"Hong",
"Liang",
""
]
] | We introduce TemPL, a novel deep learning approach for zero-shot prediction of protein stability and activity, harnessing temperature-guided language modeling. By assembling an extensive dataset of 96 million sequence-host bacterial strain optimal growth temperatures (OGTs) and {\Delta}Tm data for point mutations under consistent experimental conditions, we effectively compared TemPL with state-of-the-art models. Notably, TemPL demonstrated superior performance in predicting protein stability. An ablation study was conducted to elucidate the influence of OGT prediction and language modeling modules on TemPL's performance, revealing the importance of integrating both components. Consequently, TemPL offers considerable promise for protein engineering applications, facilitating the design of mutation sequences with enhanced stability and activity. |
1308.1912 | Wei Zhang | Xu Zhang, Wenbo Mu, Wei Zhang | On the analysis of the Illumina 450K array data: probes ambiguously
mapped to the human genome | null | Zhang X, Mu W, Zhang W. On the analysis of the Illumina 450K array
data: probes ambiguously mapped to the human genome. Front Genet. 2012; 3: 73 | 10.3389/fgene.2012.00073 | null | q-bio.QM q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | We pointed out that a substantial number of CpG probes on the Illumina 450K
array could be mapped to multiple loci across the human genome. These CpGs need
to be considered when interpreting results using this platform.
| [
{
"created": "Thu, 8 Aug 2013 17:44:47 GMT",
"version": "v1"
}
] | 2013-08-09 | [
[
"Zhang",
"Xu",
""
],
[
"Mu",
"Wenbo",
""
],
[
"Zhang",
"Wei",
""
]
] | We pointed out that a substantial number of CpG probes on the Illumina 450K array could be mapped to multiple loci across the human genome. These CpGs need to be considered when interpreting results using this platform. |
1011.3666 | Wei Li Dr. | Juergen Jost and Wei Li | The tragedy of the commons in a multi-population complementarity game | 4 pages, 2 figures, ECCS 09 | null | null | null | q-bio.PE cs.GT physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a complementarity game with multiple populations whose members'
offered contributions are put together towards some common aim. When the sum of
the players' offers reaches or exceeds some threshold K, they each receive K
minus their own offers. Otherwise, they all receive nothing. Each player tries to offer as little as possible while still hoping that the sum of the contributions reaches K. The game is symmetric at the individual level, but has many
equilibria that are more or less favorable to the members of certain
populations. In particular, it is possible that the members of one or several
populations do not contribute anything, a behavior called defecting, while the
others still contribute enough to reach the threshold. Which of these
equilibria then is attained is decided by the dynamics at the population level
that in turn depends on the strategic options the players possess. We find that
defecting occurs when more than 3 populations participate in the game, even
when the strategy scheme employed is very simple, if certain conditions for the
system parameters are satisfied. The results are obtained through systematic
simulations.
| [
{
"created": "Tue, 16 Nov 2010 11:57:29 GMT",
"version": "v1"
}
] | 2010-11-17 | [
[
"Jost",
"Juergen",
""
],
[
"Li",
"Wei",
""
]
] | We study a complementarity game with multiple populations whose members' offered contributions are put together towards some common aim. When the sum of the players' offers reaches or exceeds some threshold K, they each receive K minus their own offers. Otherwise, they all receive nothing. Each player tries to offer as little as possible while still hoping that the sum of the contributions reaches K. The game is symmetric at the individual level, but has many equilibria that are more or less favorable to the members of certain populations. In particular, it is possible that the members of one or several populations do not contribute anything, a behavior called defecting, while the others still contribute enough to reach the threshold. Which of these equilibria then is attained is decided by the dynamics at the population level that in turn depends on the strategic options the players possess. We find that defecting occurs when more than 3 populations participate in the game, even when the strategy scheme employed is very simple, if certain conditions for the system parameters are satisfied. The results are obtained through systematic simulations. |
2201.05396 | Birgitta Dresp-Langley | Birgitta Dresp-Langley | Consciousness beyond neural fields: expanding the possibilities of what
has not yet happened | null | Frontiers in Psychology. 2022; 12: 762349 | 10.3389/fpsyg.2021.762349 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field theories in physics, any particular region of the presumed
space-time continuum and all interactions between elementary objects therein
can be objectively measured and/or accounted for mathematically. Since this
does not apply to any of the field theories, or any other neural theory, of
consciousness, their explanatory power is limited. As discussed in detail
herein, the matter is complicated further by the facts that any scientifically
operational definition of consciousness is inevitably partial, and that the
phenomenon has no spatial dimensionality. Under the light of insights from
research on meditation and expanded consciousness, chronic pain syndrome,
healthy ageing, and eudaimonic well-being, we may conceive consciousness as a
source of potential energy that has no clearly defined spatial dimensionality,
but can produce significant changes in others and in the world, observable in
terms of changes in time. It is argued that consciousness may have evolved to
enable the human species to generate such changes in order to cope with
unprecedented and/or unpredictable adversity. Such coping could, ultimately,
include the conscious planning of our own extinction when survival on the
planet is no longer an acceptable option.
| [
{
"created": "Fri, 14 Jan 2022 11:23:01 GMT",
"version": "v1"
}
] | 2022-01-19 | [
[
"Dresp-Langley",
"Birgitta",
""
]
] | In the field theories in physics, any particular region of the presumed space-time continuum and all interactions between elementary objects therein can be objectively measured and/or accounted for mathematically. Since this does not apply to any of the field theories, or any other neural theory, of consciousness, their explanatory power is limited. As discussed in detail herein, the matter is complicated further by the facts that any scientifically operational definition of consciousness is inevitably partial, and that the phenomenon has no spatial dimensionality. Under the light of insights from research on meditation and expanded consciousness, chronic pain syndrome, healthy ageing, and eudaimonic well-being, we may conceive consciousness as a source of potential energy that has no clearly defined spatial dimensionality, but can produce significant changes in others and in the world, observable in terms of changes in time. It is argued that consciousness may have evolved to enable the human species to generate such changes in order to cope with unprecedented and/or unpredictable adversity. Such coping could, ultimately, include the conscious planning of our own extinction when survival on the planet is no longer an acceptable option. |
1901.10829 | Markus Pagitz Dr | Markus Pagitz | Pressure Actuated Cellular Structures | This postdoctoral thesis summarizes my work on pressure actuated
cellular structures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This postdoctoral thesis starts by reviewing the historic development of
airplane structures and high lift devices from an engineering point of view.
However, the main purpose of this document is the development of a novel
concept for shape changing, gapless high lift devices that is inspired by the
nastic movement of plants. A particular focus is put on the efficient
simulation and optimization of compliant pressure actuated cellular structures.
| [
{
"created": "Wed, 30 Jan 2019 13:57:49 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Aug 2020 08:33:15 GMT",
"version": "v2"
}
] | 2020-08-25 | [
[
"Pagitz",
"Markus",
""
]
] | This postdoctoral thesis starts by reviewing the historic development of airplane structures and high lift devices from an engineering point of view. However, the main purpose of this document is the development of a novel concept for shape changing, gapless high lift devices that is inspired by the nastic movement of plants. A particular focus is put on the efficient simulation and optimization of compliant pressure actuated cellular structures. |
2202.10698 | John McBride | John M McBride and Jean-Pierre Eckmann and Tsvi Tlusty | General theory of specific binding: insights from a
genetic-mechano-chemical protein model | null | null | null | null | q-bio.BM physics.bio-ph | http://creativecommons.org/licenses/by-sa/4.0/ | Proteins need to selectively interact with specific targets among a multitude
of similar molecules in the cell. But despite a firm physical understanding of
binding interactions, we lack a general theory of how proteins evolve high
specificity. Here, we present such a model that combines chemistry, mechanics
and genetics, and explains how their interplay governs the evolution of
specific protein-ligand interactions. The model shows that there are many
routes to achieving molecular discrimination - by varying degrees of
flexibility and shape/chemistry complementarity - but the key ingredient is
precision. Harder discrimination tasks require more collective and precise
coaction of structure, forces and movements. Proteins can achieve this through
correlated mutations extending far from a binding site, which fine-tune the
localized interaction with the ligand. Thus, the solution of more complicated
tasks is enabled by increasing the protein size, and proteins become more
evolvable and robust when they are larger than the bare minimum required for
discrimination. The model makes testable, specific predictions about the role
of flexibility and shape mismatch in discrimination, and how evolution can
independently tune affinity and specificity. Thus, the proposed theory of
specific binding addresses the natural question of "why are proteins so big?".
A possible answer is that molecular discrimination is often a hard task best
performed by adding more layers to the protein.
| [
{
"created": "Tue, 22 Feb 2022 07:15:07 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jul 2022 06:33:03 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Sep 2022 01:20:15 GMT",
"version": "v3"
}
] | 2022-09-28 | [
[
"McBride",
"John M",
""
],
[
"Eckmann",
"Jean-Pierre",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | Proteins need to selectively interact with specific targets among a multitude of similar molecules in the cell. But despite a firm physical understanding of binding interactions, we lack a general theory of how proteins evolve high specificity. Here, we present such a model that combines chemistry, mechanics and genetics, and explains how their interplay governs the evolution of specific protein-ligand interactions. The model shows that there are many routes to achieving molecular discrimination - by varying degrees of flexibility and shape/chemistry complementarity - but the key ingredient is precision. Harder discrimination tasks require more collective and precise coaction of structure, forces and movements. Proteins can achieve this through correlated mutations extending far from a binding site, which fine-tune the localized interaction with the ligand. Thus, the solution of more complicated tasks is enabled by increasing the protein size, and proteins become more evolvable and robust when they are larger than the bare minimum required for discrimination. The model makes testable, specific predictions about the role of flexibility and shape mismatch in discrimination, and how evolution can independently tune affinity and specificity. Thus, the proposed theory of specific binding addresses the natural question of "why are proteins so big?". A possible answer is that molecular discrimination is often a hard task best performed by adding more layers to the protein. |
2212.02251 | Kuang Liu | Kuang Liu, Rajiv K. Kalia, Xinlian Liu, Aiichiro Nakano, Ken-ichi
Nomura, Priya Vashishta, Rafael Zamora-Resendizc | Multiscale Graph Neural Networks for Protein Residue Contact Map
Prediction | null | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning (ML) is revolutionizing protein structural analysis,
including an important subproblem of predicting protein residue contact maps,
i.e., which amino-acid residues are in close spatial proximity given the
amino-acid sequence of a protein. Despite recent progress in ML-based protein
contact prediction, predicting contacts with a wide range of distances
(commonly classified into short-, medium- and long-range contacts) remains a
challenge. Here, we propose a multiscale graph neural network (GNN) based
approach taking a cue from multiscale physics simulations, in which a standard
pipeline involving a recurrent neural network (RNN) is augmented with three
GNNs to refine predictive capability for short-, medium- and long-range residue
contacts, respectively. Test results on the ProteinNet dataset show improved
accuracy for contacts of all ranges using the proposed multiscale RNN+GNN
approach over the conventional approach, including the most challenging case of
long-range contact prediction.
| [
{
"created": "Fri, 2 Dec 2022 05:30:59 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Dec 2022 08:18:51 GMT",
"version": "v2"
}
] | 2022-12-23 | [
[
"Liu",
"Kuang",
""
],
[
"Kalia",
"Rajiv K.",
""
],
[
"Liu",
"Xinlian",
""
],
[
"Nakano",
"Aiichiro",
""
],
[
"Nomura",
"Ken-ichi",
""
],
[
"Vashishta",
"Priya",
""
],
[
"Zamora-Resendizc",
"Rafael",
""
]
... | Machine learning (ML) is revolutionizing protein structural analysis, including an important subproblem of predicting protein residue contact maps, i.e., which amino-acid residues are in close spatial proximity given the amino-acid sequence of a protein. Despite recent progress in ML-based protein contact prediction, predicting contacts with a wide range of distances (commonly classified into short-, medium- and long-range contacts) remains a challenge. Here, we propose a multiscale graph neural network (GNN) based approach taking a cue from multiscale physics simulations, in which a standard pipeline involving a recurrent neural network (RNN) is augmented with three GNNs to refine predictive capability for short-, medium- and long-range residue contacts, respectively. Test results on the ProteinNet dataset show improved accuracy for contacts of all ranges using the proposed multiscale RNN+GNN approach over the conventional approach, including the most challenging case of long-range contact prediction. |
1507.06920 | Augusto Gonzalez | Augusto Gonzalez | The long-tail distribution function of mutations in bacteria | Proceedings of the Meeting on Complex Matter Systems, Univ. of
Havana, June 2015, to be published in the Revista Cubana de Fisica | Rev. Cub. Fis. 32, 86-89 (2015) | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Levy flights in the space of mutations model the time evolution of bacterial DNA.
Parameters in the model are adjusted in order to fit observations coming from
the Long-Term Evolution Experiment with E. coli.
| [
{
"created": "Fri, 24 Jul 2015 16:55:57 GMT",
"version": "v1"
}
] | 2016-02-24 | [
[
"Gonzalez",
"Augusto",
""
]
] | Levy flights in the space of mutations model the time evolution of bacterial DNA. Parameters in the model are adjusted in order to fit observations coming from the Long-Term Evolution Experiment with E. coli. |
1010.0934 | Joao Frederico Matias Rodrigues | Jo\~ao F. Matias Rodrigues and Andreas Wagner | Genotype networks, innovation, and robustness in sulfur metabolism | 27 pages, 9 figures | null | null | null | q-bio.MN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metabolic networks are complex systems that comprise hundreds of chemical
reactions which synthesize biomass molecules from chemicals in an organism's
environment. The metabolic network of any one organism is encoded by a
metabolic genotype, defined by a set of enzyme-coding genes whose products
catalyze the network's reactions. Each metabolic genotype has a metabolic
phenotype, such as the ability to synthesize biomass on a spectrum of different
sources of chemical elements and energy. We here focus on sulfur metabolism,
which is attractive for studying the evolution of metabolic networks because it involves many fewer reactions than carbon metabolism. Specifically, we study
properties of the space of all possible metabolic genotypes, and analyze
properties of random metabolic genotypes that are viable on different numbers
of sulfur sources. We show that metabolic genotypes with the same phenotype
form large connected genotype networks that extend far through metabolic
genotype space. How far they reach through this space is a linear function of
the number of super-essential reactions in such networks, the number of
reactions that occur in all networks with the same phenotype. We show that
different neighborhoods of any genotype network harbor very different novel
phenotypes, metabolic innovations that can sustain life on novel sulfur
sources. We also analyze the ability of evolving populations of metabolic
networks to explore novel metabolic phenotypes. This ability is facilitated by
the existence of genotype networks, because different neighborhoods of these
networks contain very different novel phenotypes. In contrast to
macromolecules, where phenotypic robustness may facilitate phenotypic
innovation, we show that here the ability to access novel phenotypes does not
monotonically increase with robustness.
| [
{
"created": "Tue, 5 Oct 2010 16:25:01 GMT",
"version": "v1"
}
] | 2016-08-14 | [
[
"Rodrigues",
"João F. Matias",
""
],
[
"Wagner",
"Andreas",
""
]
] | Metabolic networks are complex systems that comprise hundreds of chemical reactions which synthesize biomass molecules from chemicals in an organism's environment. The metabolic network of any one organism is encoded by a metabolic genotype, defined by a set of enzyme-coding genes whose products catalyze the network's reactions. Each metabolic genotype has a metabolic phenotype, such as the ability to synthesize biomass on a spectrum of different sources of chemical elements and energy. We here focus on sulfur metabolism, which is attractive for studying the evolution of metabolic networks because it involves many fewer reactions than carbon metabolism. Specifically, we study properties of the space of all possible metabolic genotypes, and analyze properties of random metabolic genotypes that are viable on different numbers of sulfur sources. We show that metabolic genotypes with the same phenotype form large connected genotype networks that extend far through metabolic genotype space. How far they reach through this space is a linear function of the number of super-essential reactions in such networks, the number of reactions that occur in all networks with the same phenotype. We show that different neighborhoods of any genotype network harbor very different novel phenotypes, metabolic innovations that can sustain life on novel sulfur sources. We also analyze the ability of evolving populations of metabolic networks to explore novel metabolic phenotypes. This ability is facilitated by the existence of genotype networks, because different neighborhoods of these networks contain very different novel phenotypes. In contrast to macromolecules, where phenotypic robustness may facilitate phenotypic innovation, we show that here the ability to access novel phenotypes does not monotonically increase with robustness. |
2405.06511 | Yonghan Yu | Yonghan Yu, Ming Li | Towards Less Biased Data-driven Scoring with Deep Learning-Based
End-to-end Database Search in Tandem Mass Spectrometry | null | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Peptide identification in mass spectrometry-based proteomics is crucial for
understanding protein function and dynamics. Traditional database search
methods, though widely used, rely on heuristic scoring functions, and statistical estimations have to be introduced to achieve a higher identification rate.
Here, we introduce DeepSearch, the first deep learning-based end-to-end
database search method for tandem mass spectrometry. DeepSearch leverages a
modified transformer-based encoder-decoder architecture under the contrastive
learning framework. Unlike conventional methods that rely on ion-to-ion
matching, DeepSearch adopts a data-driven approach to score peptide spectrum
matches. DeepSearch is also the first deep learning-based method that can
profile variable post-translational modifications in a zero-shot manner. We
showed that DeepSearch's scoring scheme expressed less bias and did not require
any statistical estimation. We validated DeepSearch's accuracy and robustness
across various datasets, including those from species with diverse protein
compositions and a modification-enriched dataset. DeepSearch sheds new light on
database search methods in tandem mass spectrometry.
| [
{
"created": "Wed, 8 May 2024 19:39:17 GMT",
"version": "v1"
}
] | 2024-05-13 | [
[
"Yu",
"Yonghan",
""
],
[
"Li",
"Ming",
""
]
] | Peptide identification in mass spectrometry-based proteomics is crucial for understanding protein function and dynamics. Traditional database search methods, though widely used, rely on heuristic scoring functions, and statistical estimations have to be introduced to achieve a higher identification rate. Here, we introduce DeepSearch, the first deep learning-based end-to-end database search method for tandem mass spectrometry. DeepSearch leverages a modified transformer-based encoder-decoder architecture under the contrastive learning framework. Unlike conventional methods that rely on ion-to-ion matching, DeepSearch adopts a data-driven approach to score peptide spectrum matches. DeepSearch is also the first deep learning-based method that can profile variable post-translational modifications in a zero-shot manner. We showed that DeepSearch's scoring scheme expressed less bias and did not require any statistical estimation. We validated DeepSearch's accuracy and robustness across various datasets, including those from species with diverse protein compositions and a modification-enriched dataset. DeepSearch sheds new light on database search methods in tandem mass spectrometry. |
2012.00629 | Antonio Maria Scarfone | Giorgio Kaniadakis, Mauro M. Baldi, Thomas S. Deisboeck, Giulia
Grisolia, Dionissios T. Hristopulos, Antonio M. Scarfone, Amelia Sparavigna,
Tatsuaki Wada and Umberto Lucia | The k-statistics approach to epidemiology | 15 pages, 1 table, 5 figures | Scientific Report (2020) 10:19949 | 10.1038/s41598-020-76673-3 | null | q-bio.PE nlin.AO physics.bio-ph physics.soc-ph stat.AP | http://creativecommons.org/publicdomain/zero/1.0/ | A great variety of complex physical, natural and artificial systems are
governed by statistical distributions, which often follow a standard
exponential function in the bulk, while their tail obeys the Pareto power law.
The recently introduced $\kappa$-statistics framework predicts distribution
functions with this feature. A growing number of applications in different
fields of investigation are beginning to prove the relevance and effectiveness
of $\kappa$-statistics in fitting empirical data. In this paper, we use
$\kappa$-statistics to formulate a statistical approach for epidemiological
analysis. We validate the theoretical results by fitting the derived
$\kappa$-Weibull distributions with data from the plague pandemic of 1417 in
Florence as well as data from the COVID-19 pandemic in China over the entire
cycle that concludes on April 16, 2020. As further validation of the proposed
approach we present a more systematic analysis of COVID-19 data from countries
such as Germany, Italy, Spain and the United Kingdom, obtaining very good agreement
between theoretical predictions and empirical observations. For these countries
we also study the entire first cycle of the pandemic which extends until the
end of July 2020. The fact that both the data of the Florence plague and those
of the COVID-19 pandemic are successfully described by the same theoretical
model, even though the two events are caused by different diseases and they are
separated by more than 600 years, is evidence that the $\kappa$-Weibull model
has universal features.
| [
{
"created": "Wed, 25 Nov 2020 16:15:24 GMT",
"version": "v1"
}
] | 2020-12-07 | [
[
"Kaniadakis",
"Giorgio",
""
],
[
"Baldi",
"Mauro M.",
""
],
[
"Deisboeck",
"Thomas S.",
""
],
[
"Grisolia",
"Giulia",
""
],
[
"Hristopulos",
"Dionissios T.",
""
],
[
"Scarfone",
"Antonio M.",
""
],
[
"Sparavigna",
... | A great variety of complex physical, natural and artificial systems are governed by statistical distributions, which often follow a standard exponential function in the bulk, while their tail obeys the Pareto power law. The recently introduced $\kappa$-statistics framework predicts distribution functions with this feature. A growing number of applications in different fields of investigation are beginning to prove the relevance and effectiveness of $\kappa$-statistics in fitting empirical data. In this paper, we use $\kappa$-statistics to formulate a statistical approach for epidemiological analysis. We validate the theoretical results by fitting the derived $\kappa$-Weibull distributions with data from the plague pandemic of 1417 in Florence as well as data from the COVID-19 pandemic in China over the entire cycle that concludes on April 16, 2020. As further validation of the proposed approach we present a more systematic analysis of COVID-19 data from countries such as Germany, Italy, Spain and the United Kingdom, obtaining very good agreement between theoretical predictions and empirical observations. For these countries we also study the entire first cycle of the pandemic which extends until the end of July 2020. The fact that both the data of the Florence plague and those of the COVID-19 pandemic are successfully described by the same theoretical model, even though the two events are caused by different diseases and they are separated by more than 600 years, is evidence that the $\kappa$-Weibull model has universal features. |
1409.0654 | Annalisa Fierro | Annalisa Fierro, Sergio Cocozza, Antonella Monticelli, Giovanni Scala
and Gennaro Miele | Continuous and Discontinuous Phase Transitions in the evolution of a
polygenic trait under stabilizing selective pressure | 8 pages, 7 figures, 1 table | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The presence of phenomena analogous to phase transitions in Statistical Mechanics has been suggested in the evolution of a polygenic trait under stabilizing selection, mutation and genetic drift.
By using numerical simulations of a model system, we analyze the evolution of
a population of $N$ diploid hermaphrodites in a random mating regime. The
population evolves under the effect of drift, selective pressure in the form of viability on an additive polygenic trait, and mutation. The analysis allows us to determine a phase diagram in the plane of mutation rate and strength of
selection. The involved pattern of phase transitions is characterized by a line
of critical points for weak selective pressure (smaller than a threshold),
whereas discontinuous phase transitions, characterized by metastable
hysteresis, are observed for strong selective pressure. A finite size scaling
analysis suggests the analogy between our system and the mean field Ising model
for selective pressure approaching the threshold from weaker values. In this
framework, the mutation rate, which allows the system to explore the accessible
microscopic states, is the parameter controlling the transition from large
heterozygosity (disordered phase) to small heterozygosity (ordered one).
| [
{
"created": "Tue, 2 Sep 2014 10:15:44 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Sep 2014 13:06:03 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Feb 2017 11:26:39 GMT",
"version": "v3"
}
] | 2017-02-13 | [
[
"Fierro",
"Annalisa",
""
],
[
"Cocozza",
"Sergio",
""
],
[
"Monticelli",
"Antonella",
""
],
[
"Scala",
"Giovanni",
""
],
[
"Miele",
"Gennaro",
""
]
] | The presence of phenomena analogous to phase transitions in Statistical Mechanics has been suggested in the evolution of a polygenic trait under stabilizing selection, mutation and genetic drift. By using numerical simulations of a model system, we analyze the evolution of a population of $N$ diploid hermaphrodites in a random mating regime. The population evolves under the effect of drift, selective pressure in the form of viability on an additive polygenic trait, and mutation. The analysis allows us to determine a phase diagram in the plane of mutation rate and strength of selection. The involved pattern of phase transitions is characterized by a line of critical points for weak selective pressure (smaller than a threshold), whereas discontinuous phase transitions, characterized by metastable hysteresis, are observed for strong selective pressure. A finite size scaling analysis suggests the analogy between our system and the mean field Ising model for selective pressure approaching the threshold from weaker values. In this framework, the mutation rate, which allows the system to explore the accessible microscopic states, is the parameter controlling the transition from large heterozygosity (disordered phase) to small heterozygosity (ordered one). |
2310.05037 | Jo\"el Lindegger | Jo\"el Lindegger, Can Firtina, Nika Mansouri Ghiasi, Mohammad
Sadrosadati, Mohammed Alser, Onur Mutlu | RawAlign: Accurate, Fast, and Scalable Raw Nanopore Signal Mapping via
Combining Seeding and Alignment | null | null | null | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nanopore-based sequencers generate a series of raw electrical signal
measurements that represent the contents of a biological sequence molecule
passing through the sequencer's nanopore. If the raw signal is analyzed in
real-time, an irrelevant molecule can be ejected from the nanopore before it is
completely sequenced, reducing sequencing time. To meet the low-latency and
high-throughput requirements of the real-time analysis, a number of recent
works propose the direct analysis of raw nanopore signals instead of
traditional basecalling-based analysis approaches. We observe that while
existing proposals for raw signal read mapping typically do well in all metrics
for small reference databases (e.g., viral genomes), they all fail to scale to
large reference databases (e.g., the human genome) in some aspect. Our goal is
to analyze raw nanopore signals with high accuracy, high throughput, low
latency, low memory usage, and needing few bases to be sequenced for a wide
range of reference database sizes. To this end, we propose RawAlign, the first
Seed-Filter-Align mapper for raw nanopore signals. Our evaluation shows that
RawAlign is the only tool that can map raw nanopore signals to large reference
databases $\geq$3117Mbp with high accuracy. Our evaluation shows that RawAlign
generalizes well to a wide range of reference database sizes. In particular,
RawAlign has a similar throughput to the overall prior state-of-the-art RawHash
(between 0.80$\times$-1.08$\times$) while improving accuracy on all datasets
(between 1.02$\times$-1.64$\times$ F-1 score). RawAlign provides a 2.83$\times$
(2.06$\times$) speedup over Uncalled (Sigmap) on average (geo. mean) while
improving accuracy by 1.35$\times$ (1.34$\times$) in terms of F-1 score on
average (geo. mean). Availability: https://github.com/cmu-safari/RawAlign
| [
{
"created": "Sun, 8 Oct 2023 06:37:51 GMT",
"version": "v1"
}
] | 2023-10-10 | [
[
"Lindegger",
"Joël",
""
],
[
"Firtina",
"Can",
""
],
[
"Ghiasi",
"Nika Mansouri",
""
],
[
"Sadrosadati",
"Mohammad",
""
],
[
"Alser",
"Mohammed",
""
],
[
"Mutlu",
"Onur",
""
]
] | Nanopore-based sequencers generate a series of raw electrical signal measurements that represent the contents of a biological sequence molecule passing through the sequencer's nanopore. If the raw signal is analyzed in real-time, an irrelevant molecule can be ejected from the nanopore before it is completely sequenced, reducing sequencing time. To meet the low-latency and high-throughput requirements of the real-time analysis, a number of recent works propose the direct analysis of raw nanopore signals instead of traditional basecalling-based analysis approaches. We observe that while existing proposals for raw signal read mapping typically do well in all metrics for small reference databases (e.g., viral genomes), they all fail to scale to large reference databases (e.g., the human genome) in some aspect. Our goal is to analyze raw nanopore signals with high accuracy, high throughput, low latency, low memory usage, and needing few bases to be sequenced for a wide range of reference database sizes. To this end, we propose RawAlign, the first Seed-Filter-Align mapper for raw nanopore signals. Our evaluation shows that RawAlign is the only tool that can map raw nanopore signals to large reference databases $\geq$3117Mbp with high accuracy. Our evaluation shows that RawAlign generalizes well to a wide range of reference database sizes. In particular, RawAlign has a similar throughput to the overall prior state-of-the-art RawHash (between 0.80$\times$-1.08$\times$) while improving accuracy on all datasets (between 1.02$\times$-1.64$\times$ F-1 score). RawAlign provides a 2.83$\times$ (2.06$\times$) speedup over Uncalled (Sigmap) on average (geo. mean) while improving accuracy by 1.35$\times$ (1.34$\times$) in terms of F-1 score on average (geo. mean). Availability: https://github.com/cmu-safari/RawAlign |