id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1908.03886 | Hao Si | Hao Si and Xiaojuan Sun | Population rate coding in recurrent neuronal networks with
undetermined-type neurons | 14 pages, 10 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural coding is a key problem in neuroscience; it can advance our
understanding of how the brain processes information. Among the
classical theories of neural coding, population rate coding has been
studied widely. Most computational studies treat neurons
and their presynaptic synapses as pre-determined excitatory or
inhibitory types. According to physiological evidence, however, whether the
effect of a synapse is inhibitory or excitatory is determined by the type of
the activated receptors, and the co-release of excitatory and inhibitory
receptors at the same synapse is widespread in the brain. In this paper, we
study population rate coding in recurrent neuronal networks with undetermined
neurons and synapses; unlike in traditional works, here one neuron can
exert either an excitatory or an inhibitory effect on its
postsynaptic neurons. We find that such networks can encode stimulus
information well in the population firing rate. An intermediate recurrent
probability together with a moderate inhibitory-excitatory strength ratio
enhances the encoding performance, and suitable combinations of these two
parameters with the noise intensity, the excitatory synaptic strength and the
synaptic time constant further improve it. Finally, we compare the performance
of population rate coding between the traditional (determined) model and ours,
and find that it is rational to consider the co-release of inhibitory and
excitatory receptors.
| [
{
"created": "Sun, 11 Aug 2019 11:23:24 GMT",
"version": "v1"
}
] | 2019-08-13 | [
[
"Si",
"Hao",
""
],
[
"Sun",
"Xiaojuan",
""
]
] | Neural coding is a key problem in neuroscience; it can advance our understanding of how the brain processes information. Among the classical theories of neural coding, population rate coding has been studied widely. Most computational studies treat neurons and their presynaptic synapses as pre-determined excitatory or inhibitory types. According to physiological evidence, however, whether the effect of a synapse is inhibitory or excitatory is determined by the type of the activated receptors, and the co-release of excitatory and inhibitory receptors at the same synapse is widespread in the brain. In this paper, we study population rate coding in recurrent neuronal networks with undetermined neurons and synapses; unlike in traditional works, here one neuron can exert either an excitatory or an inhibitory effect on its postsynaptic neurons. We find that such networks can encode stimulus information well in the population firing rate. An intermediate recurrent probability together with a moderate inhibitory-excitatory strength ratio enhances the encoding performance, and suitable combinations of these two parameters with the noise intensity, the excitatory synaptic strength and the synaptic time constant further improve it. Finally, we compare the performance of population rate coding between the traditional (determined) model and ours, and find that it is rational to consider the co-release of inhibitory and excitatory receptors. |
1111.0964 | Leah B. Shaw | Maxim S. Shkarayev, Ira B. Schwartz, Leah B. Shaw | Recruitment dynamics in adaptive social networks | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We model recruitment in adaptive social networks in the presence of birth and
death processes. Recruitment is characterized by nodes changing their status to
that of the recruiting class as a result of contact with recruiting nodes. Only
a susceptible subset of nodes can be recruited. The recruiting individuals may
adapt their connections in order to improve recruitment capabilities, thus
changing the network structure adaptively. We derive a mean field theory to
predict the dependence of the growth threshold of the recruiting class on the
adaptation parameter. Furthermore, we investigate the effect of adaptation on
the recruitment level, as well as on network topology. The theoretical
predictions are compared with direct simulations of the full system. We
identify two parameter regimes with qualitatively different bifurcation
diagrams depending on whether nodes become susceptible frequently (multiple
times in their lifetime) or rarely (much less than once per lifetime).
| [
{
"created": "Thu, 3 Nov 2011 19:57:03 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Jul 2012 15:21:23 GMT",
"version": "v2"
}
] | 2012-07-20 | [
[
"Shkarayev",
"Maxim S.",
""
],
[
"Schwartz",
"Ira B.",
""
],
[
"Shaw",
"Leah B.",
""
]
] | We model recruitment in adaptive social networks in the presence of birth and death processes. Recruitment is characterized by nodes changing their status to that of the recruiting class as a result of contact with recruiting nodes. Only a susceptible subset of nodes can be recruited. The recruiting individuals may adapt their connections in order to improve recruitment capabilities, thus changing the network structure adaptively. We derive a mean field theory to predict the dependence of the growth threshold of the recruiting class on the adaptation parameter. Furthermore, we investigate the effect of adaptation on the recruitment level, as well as on network topology. The theoretical predictions are compared with direct simulations of the full system. We identify two parameter regimes with qualitatively different bifurcation diagrams depending on whether nodes become susceptible frequently (multiple times in their lifetime) or rarely (much less than once per lifetime). |
0801.1931 | Anca Radulescu | Anca Radulescu, Kingsley Cox, Paul Adams | Hebbian Inspecificity in the Oja Model | 42 pages (including appendices and references); 13 figures | null | null | null | q-bio.NC q-bio.QM | null | Recent work on Long Term Potentiation in brain slices shows that Hebb's rule
is not completely synapse-specific, probably due to intersynapse diffusion of
calcium or other factors. We extend the classical Oja unsupervised model of
learning by a single linear neuron to include Hebbian inspecificity, by
introducing an error matrix E, which expresses possible crosstalk between
updating at different connections. We show the modified algorithm converges to
the leading eigenvector of the matrix EC, where C is the input covariance
matrix. When there is no inspecificity, this gives the classical result of
convergence to the first principal component of the input distribution (PC1).
We then study the outcome of learning using different versions of E. In the
most biologically plausible case, arising when there are no intrinsically
privileged connections, E has diagonal elements Q and off-diagonal elements
(1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of
inputs n. We analyze this error-onto-all case in detail, for both uncorrelated
and correlated inputs. We study the dependence of the angle theta between PC1
and the leading eigenvector of EC on b, n and the amount of input activity or
correlation. (We do this analytically and using Matlab calculations.) We find
that theta increases (learning becomes gradually less useful) with increases in
b, particularly for intermediate (i.e. biologically-realistic) correlation
strength, although some useful learning always occurs up to the trivial limit Q
= 1/n. We discuss the relation of our results to Hebbian unsupervised learning
in the brain.
| [
{
"created": "Sun, 13 Jan 2008 03:16:12 GMT",
"version": "v1"
}
] | 2008-01-15 | [
[
"Radulescu",
"Anca",
""
],
[
"Cox",
"Kingsley",
""
],
[
"Adams",
"Paul",
""
]
] | Recent work on Long Term Potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We extend the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity, by introducing an error matrix E, which expresses possible crosstalk between updating at different connections. We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We then study the outcome of learning using different versions of E. In the most biologically plausible case, arising when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n. We analyze this error-onto-all case in detail, for both uncorrelated and correlated inputs. We study the dependence of the angle theta between PC1 and the leading eigenvector of EC on b, n and the amount of input activity or correlation. (We do this analytically and using Matlab calculations.) We find that theta increases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e. biologically-realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q = 1/n. We discuss the relation of our results to Hebbian unsupervised learning in the brain. |
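The crosstalk-mixed Oja rule described in the abstract above can be illustrated numerically. The snippet below is a hypothetical sketch, not the authors' code: it iterates the averaged (expected) update of the modified Oja rule and checks that the weight vector aligns with the leading eigenvector of EC. The dimension n, quality Q, learning rate eta and the random covariance C are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the Oja rule with Hebbian crosstalk: updates are
# mixed through an error matrix E with diagonal Q and off-diagonal
# (1-Q)/(n-1). n, Q, eta and C are illustrative, not values from the paper.
rng = np.random.default_rng(0)
n, Q, eta = 5, 0.8, 0.01

E = np.full((n, n), (1.0 - Q) / (n - 1))
np.fill_diagonal(E, Q)

A = rng.normal(size=(n, n))
C = A @ A.T / n          # a symmetric positive-definite input covariance

w = rng.normal(size=n)
w /= np.linalg.norm(w)
for _ in range(50_000):
    # Averaged Oja update mixed through E, using E[y x] = C w and
    # E[y^2] w = (w'Cw) w for the linear neuron y = w'x.
    w += eta * E @ (C @ w - (w @ C @ w) * w)

# The stated result: w converges to the leading eigenvector of EC
# (eigenvalues of EC are real here since E and C are both symmetric PD).
vals, vecs = np.linalg.eig(E @ C)
v = np.real(vecs[:, np.argmax(np.real(vals))])
cos = abs(w @ v) / (np.linalg.norm(w) * np.linalg.norm(v))
print(round(float(cos), 3))
```

With Q = 1 (no crosstalk), E is the identity and the same loop recovers the classical convergence to PC1.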
q-bio/0701048 | Brigitte Gaillard | Philippe Gaspar, Jean-Yves Georges (DEPE-IPHC), Arnaud Lenoble, Sandra
Ferraroli (DEPE-IPHC), Sabrina Fossette (DEPE-IPHC), Yvon Le Maho (DEPE-IPHC) | Marine animal behaviour: neglecting ocean currents can lead us up the
wrong track | null | Proc. R. Soc. B. 273 (07/11/2006) 2697-2702 | 10.1098/rspb.2006.3623 | null | q-bio.PE | null | Tracks of marine animals in the wild, now increasingly acquired by electronic
tagging of individuals, are of prime interest not only to identify habitats and
high-risk areas, but also to gain detailed information about the behaviour of
these animals. Using recent satellite-derived current estimates and leatherback
turtle (Dermochelys coriacea) tracking data, we demonstrate that oceanic
currents, usually neglected when analysing tracking data, can substantially
distort the observed trajectories. Consequently, this will affect several
important results deduced from the analysis of tracking data, such as the
evaluation of the orientation skills and the energy budget of animals or the
identification of foraging areas. We conclude that currents should be
systematically taken into account to ensure the unbiased interpretation of
tracking data, which now play a major role in marine conservation biology.
| [
{
"created": "Mon, 29 Jan 2007 15:11:26 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Gaspar",
"Philippe",
"",
"DEPE-IPHC"
],
[
"Georges",
"Jean-Yves",
"",
"DEPE-IPHC"
],
[
"Lenoble",
"Arnaud",
"",
"DEPE-IPHC"
],
[
"Ferraroli",
"Sandra",
"",
"DEPE-IPHC"
],
[
"Fossette",
"Sabrina",
"",
"DEPE-IPHC"
],
[
"Maho",
"Yvon Le",
"",
"DEPE-IPHC"
]
] | Tracks of marine animals in the wild, now increasingly acquired by electronic tagging of individuals, are of prime interest not only to identify habitats and high-risk areas, but also to gain detailed information about the behaviour of these animals. Using recent satellite-derived current estimates and leatherback turtle (Dermochelys coriacea) tracking data, we demonstrate that oceanic currents, usually neglected when analysing tracking data, can substantially distort the observed trajectories. Consequently, this will affect several important results deduced from the analysis of tracking data, such as the evaluation of the orientation skills and the energy budget of animals or the identification of foraging areas. We conclude that currents should be systematically taken into account to ensure the unbiased interpretation of tracking data, which now play a major role in marine conservation biology. |
2002.02936 | Vincent Huin | Vincent Huin (JPArc - U1172 Inserm), Claire-Marie Dhaenens (JPArc -
U837 Inserm), M\'egane Homa (JPArc - U1172 Inserm), K\'evin Carvalho, Luc
Bu\'ee, Bernard Sablonni\`ere | Neurogenetics of the Human Adenosine Receptor Genes: Genetic Structures
and Involvement in Brain Diseases | null | Journal of Caffeine and Adenosine Research, New Rochelle, N.Y. :
Mary Ann Liebert, Inc., [2018]-, 2019, 9 (3), pp.73-88 | 10.1089/caff.2019.0011 | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adenosine receptors are G-protein-coupled receptors involved in a wide range
of physiological and pathological phenomena in most mammalian systems. All four
receptors are widely expressed in the central nervous system, where they
modulate neurotransmitter release and neuronal plasticity. A large number of
gene association studies have shown that common genetic variants of the
adenosine receptors (encoded by the ADORA1, ADORA2A, ADORA2B and ADORA3 genes)
have a neuroprotective or neurodegenerative role in neurologic/psychiatric
diseases. New genetic studies of rare variants and few novel associations with
depression or epilepsy subtypes have recently been reported. Here, we review
the literature on the genetics of adenosine receptors in neurologic and/or
psychiatric diseases in humans, and discuss perspectives for further genetic
research. We also provide an update on the genetic structures of the four human
adenosine receptor genes and their regulation - a topic that has not been
extensively addressed. Our review emphasizes the importance of (i) better
characterizing the genetics of adenosine receptor genes and (ii) understanding
how these genes are regulated.
| [
{
"created": "Fri, 31 Jan 2020 14:04:41 GMT",
"version": "v1"
}
] | 2020-02-10 | [
[
"Huin",
"Vincent",
"",
"JPArc - U1172 Inserm"
],
[
"Dhaenens",
"Claire-Marie",
"",
"JPArc -\n U837 Inserm"
],
[
"Homa",
"Mégane",
"",
"JPArc - U1172 Inserm"
],
[
"Carvalho",
"Kévin",
""
],
[
"Buée",
"Luc",
""
],
[
"Sablonnière",
"Bernard",
""
]
] | Adenosine receptors are G-protein-coupled receptors involved in a wide range of physiological and pathological phenomena in most mammalian systems. All four receptors are widely expressed in the central nervous system, where they modulate neurotransmitter release and neuronal plasticity. A large number of gene association studies have shown that common genetic variants of the adenosine receptors (encoded by the ADORA1, ADORA2A, ADORA2B and ADORA3 genes) have a neuroprotective or neurodegenerative role in neurologic/psychiatric diseases. New genetic studies of rare variants and few novel associations with depression or epilepsy subtypes have recently been reported. Here, we review the literature on the genetics of adenosine receptors in neurologic and/or psychiatric diseases in humans, and discuss perspectives for further genetic research. We also provide an update on the genetic structures of the four human adenosine receptor genes and their regulation - a topic that has not been extensively addressed. Our review emphasizes the importance of (i) better characterizing the genetics of adenosine receptor genes and (ii) understanding how these genes are regulated. |
1009.3150 | Cornelia Borck | Cornelia Borck | The Effect of Recurrent Mutation on the Linkage Disequilibrium under a
Selective Sweep | 24 pages, 9 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A selective sweep describes the reduction of diversity due to strong positive
selection. If the mutation rate to a selectively beneficial allele is
sufficiently high, Pennings and Hermisson (2006a) have shown that it becomes
likely that a selective sweep is caused by several individuals. Such an event
is called a soft sweep and the complementary event of a single origin of the
beneficial allele, the classical case, a hard sweep. We give analytical
expressions for the linkage disequilibrium (LD) between two neutral loci linked
to the selected locus, depending on the recurrent mutation to the beneficial
allele, measured by $D$ and $\hat{\sigma_D^2}$, a quantity introduced by Ohta
and Kimura (1969), and conclude that the LD-pattern of a soft sweep differs
substantially from that of a hard sweep due to haplotype structure. We compare
our results with simulations.
| [
{
"created": "Thu, 16 Sep 2010 11:29:24 GMT",
"version": "v1"
}
] | 2010-09-17 | [
[
"Borck",
"Cornelia",
""
]
] | A selective sweep describes the reduction of diversity due to strong positive selection. If the mutation rate to a selectively beneficial allele is sufficiently high, Pennings and Hermisson (2006a) have shown that it becomes likely that a selective sweep is caused by several individuals. Such an event is called a soft sweep and the complementary event of a single origin of the beneficial allele, the classical case, a hard sweep. We give analytical expressions for the linkage disequilibrium (LD) between two neutral loci linked to the selected locus, depending on the recurrent mutation to the beneficial allele, measured by $D$ and $\hat{\sigma_D^2}$, a quantity introduced by Ohta and Kimura (1969), and conclude that the LD-pattern of a soft sweep differs substantially from that of a hard sweep due to haplotype structure. We compare our results with simulations. |
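As a point of reference for the quantities in the abstract above, the basic two-locus linkage disequilibrium coefficient is the standard D = p_AB - p_A p_B (the Ohta-Kimura $\hat{\sigma_D^2}$ is a ratio of expectations of D-related moments and is not reproduced here). A minimal sketch; the frequencies are made-up illustrative values, not results from the paper:

```python
# Standard two-locus linkage disequilibrium: D = p_AB - p_A * p_B,
# where p_AB is the frequency of the haplotype carrying both alleles
# and p_A, p_B are the marginal allele frequencies.
def linkage_D(p_AB: float, p_A: float, p_B: float) -> float:
    return p_AB - p_A * p_B

# Illustrative made-up frequencies (not data from the paper).
D = linkage_D(0.5, 0.6, 0.7)
print(round(D, 2))   # D = 0 would mean the two loci are independent
```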
2406.03141 | Ke Liu | Ke Liu, Weian Mao, Shuaike Shen, Xiaoran Jiao, Zheng Sun, Hao Chen,
Chunhua Shen | Floating Anchor Diffusion Model for Multi-motif Scaffolding | ICML 2024 | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motif scaffolding seeks to design scaffold structures for constructing
proteins with functions derived from the desired motif, which is crucial for
the design of vaccines and enzymes. Previous works approach the problem by
inpainting or conditional generation. Both of them can only scaffold motifs
with fixed positions, and the conditional generation cannot guarantee the
presence of motifs. However, prior knowledge of the relative motif positions in
a protein is not readily available, and constructing a protein with multiple
functions in one protein is more general and significant because of the
synergies between functions. We propose a Floating Anchor Diffusion (FADiff)
model. FADiff allows motifs to float rigidly and independently in the process
of diffusion, which guarantees the presence of motifs and automates the motif
position design. Our experiments demonstrate the efficacy of FADiff with high
success rates and designable novel scaffolds. To the best of our knowledge,
FADiff is the first work to tackle the challenge of scaffolding multiple motifs
without relying on the expertise of relative motif positions in the protein.
Code is available at https://github.com/aim-uofa/FADiff.
| [
{
"created": "Wed, 5 Jun 2024 10:54:18 GMT",
"version": "v1"
}
] | 2024-06-06 | [
[
"Liu",
"Ke",
""
],
[
"Mao",
"Weian",
""
],
[
"Shen",
"Shuaike",
""
],
[
"Jiao",
"Xiaoran",
""
],
[
"Sun",
"Zheng",
""
],
[
"Chen",
"Hao",
""
],
[
"Shen",
"Chunhua",
""
]
] | Motif scaffolding seeks to design scaffold structures for constructing proteins with functions derived from the desired motif, which is crucial for the design of vaccines and enzymes. Previous works approach the problem by inpainting or conditional generation. Both of them can only scaffold motifs with fixed positions, and the conditional generation cannot guarantee the presence of motifs. However, prior knowledge of the relative motif positions in a protein is not readily available, and constructing a protein with multiple functions in one protein is more general and significant because of the synergies between functions. We propose a Floating Anchor Diffusion (FADiff) model. FADiff allows motifs to float rigidly and independently in the process of diffusion, which guarantees the presence of motifs and automates the motif position design. Our experiments demonstrate the efficacy of FADiff with high success rates and designable novel scaffolds. To the best of our knowledge, FADiff is the first work to tackle the challenge of scaffolding multiple motifs without relying on the expertise of relative motif positions in the protein. Code is available at https://github.com/aim-uofa/FADiff. |
1311.4919 | Dmitrii Rachinskii | Gary Friedman, Stephen McCarthy, Dmitrii Rachinskii | Hysteresis Can Grant Fitness in Stochastically Varying Environment | null | null | 10.1371/journal.pone.0103241 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hysteresis and bet-hedging (random choice of phenotypes) are two different
observations typically linked with multiplicity of phenotypes in biological
systems. Hysteresis can be viewed as a form of the system's persistent memory
of past environmental conditions, while bet-hedging is a diversification
strategy not necessarily associated with any memory. It has been shown that
bet-hedging can increase population growth when the phenotype adjusts its
switching probability in response to environmental inputs. Although memory and
hysteresis have been used to generate environment-dependent phenotype
switching probabilities, their exact connection to bet-hedging has remained
unclear. In this work, using a
simple model that takes into account phenotype switching as well as lag phase
in the population growth occurring after the phenotype switching, it is shown
that memory and hysteresis emerge naturally and are firmly linked to
bet-hedging as organisms attempt to optimize their population growth rate. The
optimal ``magnitude'' of hysteresis is explained to be associated with
stochastic resonance where the characteristic time between subsequent phenotype
switching events is linked to the lag phase delay. Furthermore, hysteretic
switching strategy is shown not to confer any additional population growth
advantage if the environment varies periodically in a deterministic fashion.
This suggests that, while bet-hedging may evolve under some conditions even in
deterministic environments, memory and hysteresis are probably the result of
environmental uncertainty in the presence of lag phase in the switching of
phenotypes.
| [
{
"created": "Tue, 19 Nov 2013 23:38:25 GMT",
"version": "v1"
}
] | 2015-06-17 | [
[
"Friedman",
"Gary",
""
],
[
"McCarthy",
"Stephen",
""
],
[
"Rachinskii",
"Dmitrii",
""
]
] | Hysteresis and bet-hedging (random choice of phenotypes) are two different observations typically linked with multiplicity of phenotypes in biological systems. Hysteresis can be viewed as a form of the system's persistent memory of past environmental conditions, while bet-hedging is a diversification strategy not necessarily associated with any memory. It has been shown that bet-hedging can increase population growth when the phenotype adjusts its switching probability in response to environmental inputs. Although memory and hysteresis have been used to generate environment-dependent phenotype switching probabilities, their exact connection to bet-hedging has remained unclear. In this work, using a simple model that takes into account phenotype switching as well as lag phase in the population growth occurring after the phenotype switching, it is shown that memory and hysteresis emerge naturally and are firmly linked to bet-hedging as organisms attempt to optimize their population growth rate. The optimal ``magnitude'' of hysteresis is explained to be associated with stochastic resonance where the characteristic time between subsequent phenotype switching events is linked to the lag phase delay. Furthermore, hysteretic switching strategy is shown not to confer any additional population growth advantage if the environment varies periodically in a deterministic fashion. This suggests that, while bet-hedging may evolve under some conditions even in deterministic environments, memory and hysteresis are probably the result of environmental uncertainty in the presence of lag phase in the switching of phenotypes. |
2112.06048 | Jennifer Williams | Jennifer Williams, Leila Wehbe | Behavior measures are predicted by how information is encoded in an
individual's brain | null | null | null | null | q-bio.NC cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Similar to how differences in the proficiency of the cardiovascular and
musculoskeletal system predict an individual's athletic ability, differences in
how the same brain region encodes information across individuals may explain
their behavior. However, when studying how the brain encodes information,
researchers choose different neuroimaging tasks (e.g., language or motor
tasks), which can rely on processing different types of information and can
modulate different brain regions. We hypothesize that individual differences in
how information is encoded in the brain are task-specific and predict different
behavior measures. We propose a framework using encoding-models to identify
individual differences in brain encoding and test if these differences can
predict behavior. We evaluate our framework using task functional magnetic
resonance imaging data. Our results indicate that individual differences
revealed by encoding-models are a powerful tool for predicting behavior, and
that researchers should optimize their choice of task and encoding-model for
their behavior of interest.
| [
{
"created": "Sat, 11 Dec 2021 18:40:30 GMT",
"version": "v1"
}
] | 2021-12-14 | [
[
"Williams",
"Jennifer",
""
],
[
"Wehbe",
"Leila",
""
]
] | Similar to how differences in the proficiency of the cardiovascular and musculoskeletal system predict an individual's athletic ability, differences in how the same brain region encodes information across individuals may explain their behavior. However, when studying how the brain encodes information, researchers choose different neuroimaging tasks (e.g., language or motor tasks), which can rely on processing different types of information and can modulate different brain regions. We hypothesize that individual differences in how information is encoded in the brain are task-specific and predict different behavior measures. We propose a framework using encoding-models to identify individual differences in brain encoding and test if these differences can predict behavior. We evaluate our framework using task functional magnetic resonance imaging data. Our results indicate that individual differences revealed by encoding-models are a powerful tool for predicting behavior, and that researchers should optimize their choice of task and encoding-model for their behavior of interest. |
2406.10146 | Shouju Wang | Jiajia Tang, Jie Zhang, Jiulou Zhang, Yuxia Tang, Hao Ni, Shouju Wang | Multimodal Radiomics Model for Predicting Gold Nanoparticles
Accumulation in Mouse Tumors | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Background: Nanoparticles can accumulate in solid tumors, serving as
diagnostic or therapeutic agents for cancer. Clinical translation is
challenging due to low accumulation in tumors and heterogeneity between tumor
types and individuals. Tools to identify this heterogeneity and predict
nanoparticle accumulation are needed. Advanced imaging techniques combined with
radiomics and AI may offer a solution.
Methods: 183 mice were used to create seven subcutaneous tumor models, with
three sizes (15nm, 40nm, 70nm) of gold nanoparticles injected via the tail
vein. Accumulation was measured using ICP-OES. Data were divided into training
and test sets (7:3). Tumors were categorized into high and low uptake groups
based on the median value of the training set. Before injection, multimodal
imaging data (CT, B-mode ultrasound, SWE, CEUS) were acquired, and radiomics
features extracted. LASSO and RFE algorithms built a radiomics signature. This,
along with tumor type and mean values from CT and SWE, constructed the best
model using SVM. For each tumor in the test set, the radiomics signature
predicted gold nanoparticle uptake. Model performance was evaluated by AUC.
Results: Significant variability in gold nanoparticle accumulation was
observed among tumors (P < 0.001). The median accumulation in the training set
was 3.37% ID/g. Nanoparticle size was not a main determinant of uptake (P >
0.05). The composite model based on radiomics signature outperformed the basic
model in both training (AUC 0.93 vs. 0.68) and testing (0.78 vs. 0.61)
datasets.
Conclusion: The composite model identifies tumor heterogeneity and predicts
high uptake of gold nanoparticles, improving patient stratification and
supporting nanomedicine's clinical application.
| [
{
"created": "Fri, 14 Jun 2024 15:55:44 GMT",
"version": "v1"
}
] | 2024-06-17 | [
[
"Tang",
"Jiajia",
""
],
[
"Zhang",
"Jie",
""
],
[
"Zhang",
"Jiulou",
""
],
[
"Tang",
"Yuxia",
""
],
[
"Ni",
"Hao",
""
],
[
"Wang",
"Shouju",
""
]
] | Background: Nanoparticles can accumulate in solid tumors, serving as diagnostic or therapeutic agents for cancer. Clinical translation is challenging due to low accumulation in tumors and heterogeneity between tumor types and individuals. Tools to identify this heterogeneity and predict nanoparticle accumulation are needed. Advanced imaging techniques combined with radiomics and AI may offer a solution. Methods: 183 mice were used to create seven subcutaneous tumor models, with three sizes (15nm, 40nm, 70nm) of gold nanoparticles injected via the tail vein. Accumulation was measured using ICP-OES. Data were divided into training and test sets (7:3). Tumors were categorized into high and low uptake groups based on the median value of the training set. Before injection, multimodal imaging data (CT, B-mode ultrasound, SWE, CEUS) were acquired, and radiomics features extracted. LASSO and RFE algorithms built a radiomics signature. This, along with tumor type and mean values from CT and SWE, constructed the best model using SVM. For each tumor in the test set, the radiomics signature predicted gold nanoparticle uptake. Model performance was evaluated by AUC. Results: Significant variability in gold nanoparticle accumulation was observed among tumors (P < 0.001). The median accumulation in the training set was 3.37% ID/g. Nanoparticle size was not a main determinant of uptake (P > 0.05). The composite model based on radiomics signature outperformed the basic model in both training (AUC 0.93 vs. 0.68) and testing (0.78 vs. 0.61) datasets. Conclusion: The composite model identifies tumor heterogeneity and predicts high uptake of gold nanoparticles, improving patient stratification and supporting nanomedicine's clinical application. |
1504.00932 | Paulo Matias | Lirio Onofre Baptista de Almeida, Paulo Matias, Rafael Tuma Guariento | An embedded system for real-time feedback neuroscience experiments | 17 pages, 11 figures, IV Brazilian Symposium on Computing Systems
Engineering | null | 10.13140/RG.2.1.4077.7769 | null | q-bio.QM cs.OH | http://creativecommons.org/licenses/by/3.0/ | A complete data acquisition and signal output control system for synchronous
stimuli generation, geared towards in vivo neuroscience experiments, was
developed using the Terasic DE2i-150 board. All emotions and thoughts are an
emergent property of the chemical and electrical activity of neurons. Most of
these cells are regarded as excitable cells (spiking neurons), which produce
temporally localized electric patterns (spikes). Researchers usually consider
that only the instant of occurrence (timestamp) of these spikes encodes
information. Registering neural activity evoked by stimuli demands timing
determinism and data storage capabilities that cannot be met without dedicated
hardware and a hard real-time operating system (RTOS). Indeed, research in
neuroscience usually requires dedicated electronic instrumentation for studies
in neural coding, brain machine interfaces and closed loop in vivo or in vitro
experiments. We developed a complete embedded system solution consisting of a
hardware/software co-design with the Intel Atom processor running a free RTOS
and a FPGA communicating via a PCIe-to-Avalon bridge. Our system is capable of
registering input event timestamps with 1{\mu}s precision and digitally
generating stimuli output in hard real-time. The whole system is controlled by
a Linux-based Graphical User Interface (GUI). Collected results are
simultaneously saved to a local file and broadcast wirelessly to mobile
device web browsers in a user-friendly graphic format, enhanced by HTML5
technology. The developed system is low-cost and highly configurable, enabling
various neuroscience experimental setups, whereas commercial off-the-shelf
systems have low availability and are less flexible to adapt to specific
experimental configurations.
| [
{
"created": "Fri, 3 Apr 2015 20:05:59 GMT",
"version": "v1"
}
] | 2015-04-21 | [
[
"de Almeida",
"Lirio Onofre Baptista",
""
],
[
"Matias",
"Paulo",
""
],
[
"Guariento",
"Rafael Tuma",
""
]
] | A complete data acquisition and signal output control system for synchronous stimuli generation, geared towards in vivo neuroscience experiments, was developed using the Terasic DE2i-150 board. All emotions and thoughts are an emergent property of the chemical and electrical activity of neurons. Most of these cells are regarded as excitable cells (spiking neurons), which produce temporally localized electric patterns (spikes). Researchers usually consider that only the instant of occurrence (timestamp) of these spikes encodes information. Registering neural activity evoked by stimuli demands timing determinism and data storage capabilities that cannot be met without dedicated hardware and a hard real-time operating system (RTOS). Indeed, research in neuroscience usually requires dedicated electronic instrumentation for studies in neural coding, brain machine interfaces and closed loop in vivo or in vitro experiments. We developed a complete embedded system solution consisting of a hardware/software co-design with the Intel Atom processor running a free RTOS and a FPGA communicating via a PCIe-to-Avalon bridge. Our system is capable of registering input event timestamps with 1{\mu}s precision and digitally generating stimuli output in hard real-time. The whole system is controlled by a Linux-based Graphical User Interface (GUI). Collected results are simultaneously saved to a local file and broadcast wirelessly to mobile device web browsers in a user-friendly graphic format, enhanced by HTML5 technology. The developed system is low-cost and highly configurable, enabling various neuroscience experimental setups, whereas commercial off-the-shelf systems have low availability and are less flexible to adapt to specific experimental configurations. |
2309.09955 | Chase Armer | Chase Armer, Hassan Kane, Dana Cortade, Dave Estell, Adil Yusuf,
Radhakrishna Sanka, Henning Redestig, TJ Brunette, Pete Kelly, Erika
DeBenedictis | The Protein Engineering Tournament: An Open Science Benchmark for
Protein Modeling and Design | 8 pages, 5 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The grand challenge of protein engineering is the development of
computational models that can characterize and generate protein sequences for
any arbitrary function. However, progress today is limited by lack of 1)
benchmarks with which to compare computational techniques, 2) large datasets of
protein function, and 3) democratized access to experimental protein
characterization. Here, we introduce the Protein Engineering Tournament, a
fully-remote, biennial competition for the development and benchmarking of
computational methods in protein engineering. The tournament consists of two
rounds: a first in silico round, where participants use computational models to
predict biophysical properties for a set of protein sequences, and a second in
vitro round, where participants are challenged to design new protein sequences,
which are experimentally measured with open-source, automated methods to
determine a winner. At the Tournament's conclusion, the experimental protocols
and all collected data will be open-sourced for continued benchmarking and
advancement of computational models. We hope the Protein Engineering Tournament
will provide a transparent platform with which to evaluate progress in this
field and mobilize the scientific community to conquer the grand challenge of
computational protein engineering.
| [
{
"created": "Mon, 18 Sep 2023 17:26:25 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Sep 2023 16:47:55 GMT",
"version": "v2"
}
] | 2023-09-20 | [
[
"Armer",
"Chase",
""
],
[
"Kane",
"Hassan",
""
],
[
"Cortade",
"Dana",
""
],
[
"Estell",
"Dave",
""
],
[
"Yusuf",
"Adil",
""
],
[
"Sanka",
"Radhakrishna",
""
],
[
"Redestig",
"Henning",
""
],
[
"Brunette",
"TJ",
""
],
[
"Kelly",
"Pete",
""
],
[
"DeBenedictis",
"Erika",
""
]
] | The grand challenge of protein engineering is the development of computational models that can characterize and generate protein sequences for any arbitrary function. However, progress today is limited by lack of 1) benchmarks with which to compare computational techniques, 2) large datasets of protein function, and 3) democratized access to experimental protein characterization. Here, we introduce the Protein Engineering Tournament, a fully-remote, biennial competition for the development and benchmarking of computational methods in protein engineering. The tournament consists of two rounds: a first in silico round, where participants use computational models to predict biophysical properties for a set of protein sequences, and a second in vitro round, where participants are challenged to design new protein sequences, which are experimentally measured with open-source, automated methods to determine a winner. At the Tournament's conclusion, the experimental protocols and all collected data will be open-sourced for continued benchmarking and advancement of computational models. We hope the Protein Engineering Tournament will provide a transparent platform with which to evaluate progress in this field and mobilize the scientific community to conquer the grand challenge of computational protein engineering. |
1810.03435 | Ruibo Tu | Charles Hamesse, Ruibo Tu, Paul Ackermann, Hedvig Kjellstr\"om, Cheng
Zhang | Simultaneous Measurement Imputation and Outcome Prediction for Achilles
Tendon Rupture Rehabilitation | null | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Achilles Tendon Rupture (ATR) is one of the typical soft tissue injuries.
Rehabilitation after such a musculoskeletal injury remains a prolonged process
with a very variable outcome. Accurately predicting rehabilitation outcome is
crucial for treatment decision support. However, it is challenging to train an
automatic method for predicting the ATR rehabilitation outcome from treatment
data, due to a massive amount of missing entries in the data recorded from ATR
patients, as well as complex nonlinear relations between measurements and
outcomes. In this work, we design an end-to-end probabilistic framework to
impute missing data entries and predict rehabilitation outcomes simultaneously.
We evaluate our model on a real-life ATR clinical cohort, comparing with
various baselines. The proposed method demonstrates its clear superiority over
traditional methods which typically perform imputation and prediction in two
separate stages.
| [
{
"created": "Sat, 8 Sep 2018 07:25:12 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2019 09:10:16 GMT",
"version": "v2"
}
] | 2019-08-14 | [
[
"Hamesse",
"Charles",
""
],
[
"Tu",
"Ruibo",
""
],
[
"Ackermann",
"Paul",
""
],
[
"Kjellström",
"Hedvig",
""
],
[
"Zhang",
"Cheng",
""
]
] | Achilles Tendon Rupture (ATR) is one of the typical soft tissue injuries. Rehabilitation after such a musculoskeletal injury remains a prolonged process with a very variable outcome. Accurately predicting rehabilitation outcome is crucial for treatment decision support. However, it is challenging to train an automatic method for predicting the ATR rehabilitation outcome from treatment data, due to a massive amount of missing entries in the data recorded from ATR patients, as well as complex nonlinear relations between measurements and outcomes. In this work, we design an end-to-end probabilistic framework to impute missing data entries and predict rehabilitation outcomes simultaneously. We evaluate our model on a real-life ATR clinical cohort, comparing with various baselines. The proposed method demonstrates its clear superiority over traditional methods which typically perform imputation and prediction in two separate stages. |
2202.01989 | Bhaskar Sen | Bhaskar Sen | A Comparison of Representation Learning Methods for Dimensionality
Reduction of fMRI Scans for Classification of ADHD | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | This paper compares three feature representation techniques used to represent
resting state functional magnetic resonance imaging (fMRI) scans. The proposed models
of feature representation consider the time averaged fMRI scans as raw
representation of image data. The effectiveness of the representation is
evaluated by using these features for classification of Attention Deficit
Hyperactivity Disorder (ADHD) patients from healthy controls. The
dimensionality reduction methods used for feature representation are
maximum-variance unfolding, locally linear embedding and auto-encoders. The
classifiers tested for classification purposes were a neural network and a support
vector machine. Using auto-encoders with four hidden layers along with a
support vector machine classifier yielded a classification accuracy of 61.25%
along with 65.69% sensitivity and 52.20% specificity.
| [
{
"created": "Fri, 4 Feb 2022 06:05:12 GMT",
"version": "v1"
}
] | 2022-02-07 | [
[
"Sen",
"Bhaskar",
""
]
] | This paper compares three feature representation techniques used to represent resting state functional magnetic resonance imaging (fMRI) scans. The proposed models of feature representation consider the time averaged fMRI scans as raw representation of image data. The effectiveness of the representation is evaluated by using these features for classification of Attention Deficit Hyperactivity Disorder (ADHD) patients from healthy controls. The dimensionality reduction methods used for feature representation are maximum-variance unfolding, locally linear embedding and auto-encoders. The classifiers tested for classification purposes were a neural network and a support vector machine. Using auto-encoders with four hidden layers along with a support vector machine classifier yielded a classification accuracy of 61.25% along with 65.69% sensitivity and 52.20% specificity. |
1408.3236 | Han Chen | Han Chen, Fangqin Lin and Xionglei He | The degenerative evolution from multicellularity to unicellularity
during cancer | null | null | 10.1038/ncomms9812 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Theoretical reasoning suggests that human cancer may result from knocking
down the genetic constraints evolved for maintenance of the metazoan
multicellularity, which, however, requires a critical test. Using
xenograft-based experimental evolution we characterized for the first time the
full life history from initiation to metastasis of a tumor at the genomic and
transcriptomic levels, and observed metastasis-driving positive selection for
generally loss-of-function mutations on a set of multicellularity-related
genes, which is further supported by large-scale exome data of clinical tumor
samples. Subsequent expression analysis revealed mainly expression
down-regulation of multicellularity-related genes, which form an evolving
expression profile approaching that of embryonic stem cells, the cell type with
the most characteristics of unicellular life. The theoretical conjecture
predicts that genes born at the emergence of metazoan multicellularity tend to
be cancer drivers, which we validated using a rigorous phylostratigraphy
analysis on the birth rate of genes annotated by Cancer Gene Census. Also, the
number of loss-of-function tumor suppressors often predominates over activated
oncogenes in a typical tumor of human patients. These data collectively suggest
that, different from typical organismal evolution in which gain of new genes is
the mainstream, cancer represents a loss-of-function-driven degenerative
evolution back to the unicellular ground state. This cancer evolution model may
explain the enormous tumoral genetic heterogeneity in the clinic, underlie how
distant-organ metastases originate in primary tumors despite distinct
environmental requirements, and hold implications for designing effective
cancer therapy.
| [
{
"created": "Thu, 14 Aug 2014 09:58:51 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Chen",
"Han",
""
],
[
"Lin",
"Fangqin",
""
],
[
"He",
"Xionglei",
""
]
] | Theoretical reasoning suggests that human cancer may result from knocking down the genetic constraints evolved for maintenance of the metazoan multicellularity, which, however, requires a critical test. Using xenograft-based experimental evolution we characterized for the first time the full life history from initiation to metastasis of a tumor at the genomic and transcriptomic levels, and observed metastasis-driving positive selection for generally loss-of-function mutations on a set of multicellularity-related genes, which is further supported by large-scale exome data of clinical tumor samples. Subsequent expression analysis revealed mainly expression down-regulation of multicellularity-related genes, which form an evolving expression profile approaching that of embryonic stem cells, the cell type with the most characteristics of unicellular life. The theoretical conjecture predicts that genes born at the emergence of metazoan multicellularity tend to be cancer drivers, which we validated using a rigorous phylostratigraphy analysis on the birth rate of genes annotated by Cancer Gene Census. Also, the number of loss-of-function tumor suppressors often predominates over activated oncogenes in a typical tumor of human patients. These data collectively suggest that, different from typical organismal evolution in which gain of new genes is the mainstream, cancer represents a loss-of-function-driven degenerative evolution back to the unicellular ground state. This cancer evolution model may explain the enormous tumoral genetic heterogeneity in the clinic, underlie how distant-organ metastases originate in primary tumors despite distinct environmental requirements, and hold implications for designing effective cancer therapy. |
1004.4388 | David Young | David A. Young | Growth-Algorithm Model of Leaf Shape | 21 pages, 9 figures | null | null | UCRL-JRNL-226257 | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The innumerable shapes of plant leaves present a challenge to the explanatory
power of biophysical theory. A model is needed that can produce these shapes
with a small set of parameters. This paper presents a simple model of leaf
shape based on a growth algorithm, which governs the growth rate of leaf tissue
in two dimensions and hence the outline of the leaf. The growth of leaf lobes
is governed by the position of leaf veins. This model gives an approximation to
a wide variety of higher plant leaf shapes. The variation of leaf shapes found
in closely related plants is discussed in terms of variability in the growth
algorithms. The model can be extended to more complex leaf types.
| [
{
"created": "Sun, 25 Apr 2010 23:09:00 GMT",
"version": "v1"
}
] | 2010-04-27 | [
[
"Young",
"David A.",
""
]
] | The innumerable shapes of plant leaves present a challenge to the explanatory power of biophysical theory. A model is needed that can produce these shapes with a small set of parameters. This paper presents a simple model of leaf shape based on a growth algorithm, which governs the growth rate of leaf tissue in two dimensions and hence the outline of the leaf. The growth of leaf lobes is governed by the position of leaf veins. This model gives an approximation to a wide variety of higher plant leaf shapes. The variation of leaf shapes found in closely related plants is discussed in terms of variability in the growth algorithms. The model can be extended to more complex leaf types. |
1904.10337 | Mile Sikic | Neven Miculini\'c, Marko Ratkovi\'c, Mile \v{S}iki\'c | MinCall - MinION end2end convolutional deep learning basecaller | 2nd international workshop on deep learning for precision medicine,
ECML-PKDD 2017 | null | null | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Oxford Nanopore Technologies MinION is the first portable DNA
sequencing device. It is capable of producing long reads; reads of over 100
kBp have been reported. However, it has a significantly higher error rate than
other methods. In this study, we present MinCall, an end2end basecaller model
for the MinION. The model is based on deep learning and uses convolutional
neural networks (CNN) in its implementation. For extra performance, it uses
cutting-edge deep learning techniques and architectures: batch normalization
and Connectionist Temporal Classification (CTC) loss. The best-performing deep
learning model achieves a 91.4% median match rate on an E. coli dataset using
R9 pore chemistry and 1D reads.
| [
{
"created": "Mon, 22 Apr 2019 16:37:00 GMT",
"version": "v1"
}
] | 2019-04-24 | [
[
"Miculinić",
"Neven",
""
],
[
"Ratković",
"Marko",
""
],
[
"Šikić",
"Mile",
""
]
] | The Oxford Nanopore Technologies MinION is the first portable DNA sequencing device. It is capable of producing long reads; reads of over 100 kBp have been reported. However, it has a significantly higher error rate than other methods. In this study, we present MinCall, an end2end basecaller model for the MinION. The model is based on deep learning and uses convolutional neural networks (CNN) in its implementation. For extra performance, it uses cutting-edge deep learning techniques and architectures: batch normalization and Connectionist Temporal Classification (CTC) loss. The best-performing deep learning model achieves a 91.4% median match rate on an E. coli dataset using R9 pore chemistry and 1D reads. |
2301.10002 | Ilias Rentzeperis | Ilias Rentzeperis, Luca Calatroni, Laurent Perrinet, Dario Prandi | Beyond $\ell_1$ sparse coding in V1 | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Growing evidence indicates that only a sparse subset from a pool of sensory
neurons is active for the encoding of visual stimuli at any instant in time.
Traditionally, to replicate such biological sparsity, generative models have
been using the $\ell_1$ norm as a penalty due to its convexity, which makes it
amenable to fast and simple algorithmic solvers. In this work, we use
biological vision as a test-bed and show that the soft thresholding operation
associated with the use of the $\ell_1$ norm is highly suboptimal compared to
other functions suited to approximating $\ell_q$ with $0 \leq q < 1 $
(including recently proposed Continuous Exact relaxations), both in terms of
performance and in the production of features that are akin to signatures of
the primary visual cortex. We show that $\ell_1$ sparsity produces a denser
code or employs a pool with more neurons, i.e. has a higher degree of
overcompleteness, in order to maintain the same reconstruction error as the
other methods considered. For all the penalty functions tested, a subset of the
neurons develop orientation selectivity similarly to V1 neurons. When their
code is sparse enough, the methods also develop receptive fields with varying
functionalities, another signature of V1. Compared to other methods, soft
thresholding achieves this level of sparsity at the expense of much degraded
reconstruction performance, which more likely than not is unacceptable in
biological vision. Our results indicate that V1 uses a sparsity inducing
regularization that is closer to the $\ell_0$ pseudo-norm rather than to the
$\ell_1$ norm.
| [
{
"created": "Tue, 24 Jan 2023 13:53:07 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Jan 2023 13:04:38 GMT",
"version": "v2"
}
] | 2023-01-26 | [
[
"Rentzeperis",
"Ilias",
""
],
[
"Calatroni",
"Luca",
""
],
[
"Perrinet",
"Laurent",
""
],
[
"Prandi",
"Dario",
""
]
] | Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have been using the $\ell_1$ norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft thresholding operation associated with the use of the $\ell_1$ norm is highly suboptimal compared to other functions suited to approximating $\ell_q$ with $0 \leq q < 1 $ (including recently proposed Continuous Exact relaxations), both in terms of performance and in the production of features that are akin to signatures of the primary visual cortex. We show that $\ell_1$ sparsity produces a denser code or employs a pool with more neurons, i.e. has a higher degree of overcompleteness, in order to maintain the same reconstruction error as the other methods considered. For all the penalty functions tested, a subset of the neurons develop orientation selectivity similarly to V1 neurons. When their code is sparse enough, the methods also develop receptive fields with varying functionalities, another signature of V1. Compared to other methods, soft thresholding achieves this level of sparsity at the expense of much degraded reconstruction performance, which more likely than not is unacceptable in biological vision. Our results indicate that V1 uses a sparsity inducing regularization that is closer to the $\ell_0$ pseudo-norm rather than to the $\ell_1$ norm. |
1806.06343 | Giovanni Sena | Anne-Mieke Reijne and Gunnar Pruessner and Giovanni Sena | Linear stability analysis of morphodynamics during tissue regeneration
in plants | 18 pages, 11 figures, 1 table. Typos fixed, added one ref, improved
some figure labels | null | 10.1088/1361-6463/aaf68e | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the key characteristics of multicellular organisms is the ability to
establish and maintain shapes, or morphologies, under a variety of physical and
chemical perturbations. A quantitative description of the underlying
morphological dynamics is a critical step to fully understand the
self-organising properties of multicellular systems. Although many powerful
mathematical tools have been developed to analyse stochastic dynamics, these
are rarely applied to experimental developmental biology. Here, we take root tip
regeneration in the plant model system Arabidopsis thaliana as an example of
robust morphogenesis in living tissue and present a novel approach to quantify
and model the relaxation of the system to its unperturbed morphology. By
generating and analysing time-lapse series of regenerating root tips captured
with confocal microscopy, we are able to extract and model the dynamics of key
morphological traits at cellular resolution. We present a linear stability
analysis of its Markovian dynamics, with the stationary state representing the
intact root in the space of morphological traits. We find that the resulting
eigenvalues can be classified into two groups, suggesting the co-existence of
two distinct temporal scales during the process of regeneration. We discuss the
possible biological implications of our specific results and suggest future
experiments to further probe the self-organising properties of living tissue.
| [
{
"created": "Sun, 17 Jun 2018 07:57:41 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Jun 2018 17:30:56 GMT",
"version": "v2"
}
] | 2019-05-22 | [
[
"Reijne",
"Anne-Mieke",
""
],
[
"Pruessner",
"Gunnar",
""
],
[
"Sena",
"Giovanni",
""
]
] | One of the key characteristics of multicellular organisms is the ability to establish and maintain shapes, or morphologies, under a variety of physical and chemical perturbations. A quantitative description of the underlying morphological dynamics is a critical step to fully understand the self-organising properties of multicellular systems. Although many powerful mathematical tools have been developed to analyse stochastic dynamics, these are rarely applied to experimental developmental biology. Here, we take root tip regeneration in the plant model system Arabidopsis thaliana as an example of robust morphogenesis in living tissue and present a novel approach to quantify and model the relaxation of the system to its unperturbed morphology. By generating and analysing time-lapse series of regenerating root tips captured with confocal microscopy, we are able to extract and model the dynamics of key morphological traits at cellular resolution. We present a linear stability analysis of its Markovian dynamics, with the stationary state representing the intact root in the space of morphological traits. We find that the resulting eigenvalues can be classified into two groups, suggesting the co-existence of two distinct temporal scales during the process of regeneration. We discuss the possible biological implications of our specific results and suggest future experiments to further probe the self-organising properties of living tissue. |
2402.11289 | Hossein Moghimianavval | Hossein Moghimianavval, Baharan Meghdadi, Tasmine Clement, Man I Wu | Predicting Breast Cancer Phenotypes from Single-cell RNA-seq Data Using
CloudPred | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Numerous tools have been recently developed to predict disease phenotypes
using single-cell RNA sequencing (RNA-seq) data. CloudPred is an end-to-end
differentiable learning algorithm coupled with a biologically informed mixture
model, originally tested on lupus data. This study extends CloudPred's
applications to breast cancer disease phenotype prediction to test its
robustness and applicability on untested and unrelated biological data. When
applied to a breast cancer single-cell RNA-seq dataset, CloudPred achieved an
area under the ROC curve (AUC) of 1 in predicting cancer status and performed
better than linear and Deepset models.
| [
{
"created": "Sat, 17 Feb 2024 14:17:06 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Moghimianavval",
"Hossein",
""
],
[
"Meghdadi",
"Baharan",
""
],
[
"Clement",
"Tasmine",
""
],
[
"Wu",
"Man I",
""
]
] | Numerous tools have been recently developed to predict disease phenotypes using single-cell RNA sequencing (RNA-seq) data. CloudPred is an end-to-end differentiable learning algorithm coupled with a biologically informed mixture model, originally tested on lupus data. This study extends CloudPred's applications to breast cancer disease phenotype prediction to test its robustness and applicability on untested and unrelated biological data. When applied to a breast cancer single-cell RNA-seq dataset, CloudPred achieved an area under the ROC curve (AUC) of 1 in predicting cancer status and performed better than linear and Deepset models. |
2101.10953 | Wei Zhong Goh | Wei Zhong Goh, Varun Ursekar, Marc W. Howard | Predicting the future with a scale-invariant temporal memory for the
past | 41 pages, 9 figures; authors' final version, accepted for publication
in Neural Computation | null | null | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years it has become clear that the brain maintains a temporal
memory of recent events stretching far into the past. This paper presents a
neurally-inspired algorithm to use a scale-invariant temporal representation of
the past to predict a scale-invariant future. The result is a scale-invariant
estimate of future events as a function of the time at which they are expected
to occur. The algorithm is time-local, with credit assigned to the present
event by observing how it affects the prediction of the future. To illustrate
the potential utility of this approach, we test the model on simultaneous
renewal processes with different time scales. The algorithm scales well on
these problems despite the fact that the number of states needed to describe
them as a Markov process grows exponentially.
| [
{
"created": "Tue, 26 Jan 2021 17:22:17 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Sep 2021 18:16:12 GMT",
"version": "v2"
},
{
"created": "Sat, 23 Oct 2021 04:03:20 GMT",
"version": "v3"
}
] | 2021-10-26 | [
[
"Goh",
"Wei Zhong",
""
],
[
"Ursekar",
"Varun",
""
],
[
"Howard",
"Marc W.",
""
]
] | In recent years it has become clear that the brain maintains a temporal memory of recent events stretching far into the past. This paper presents a neurally-inspired algorithm to use a scale-invariant temporal representation of the past to predict a scale-invariant future. The result is a scale-invariant estimate of future events as a function of the time at which they are expected to occur. The algorithm is time-local, with credit assigned to the present event by observing how it affects the prediction of the future. To illustrate the potential utility of this approach, we test the model on simultaneous renewal processes with different time scales. The algorithm scales well on these problems despite the fact that the number of states needed to describe them as a Markov process grows exponentially. |
1706.08119 | Christophe Guyeux | Huda Al-Nayyef, Christophe Guyeux, Marie Petitjean, Didier Hocquet,
Jacques M. Bahi | Relation between Insertion Sequences and Genome Rearrangements in
Pseudomonas aeruginosa | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During evolution, the genomes of microorganisms undergo various changes
in their lengths, gene orders, and gene contents. Investigating these
structural rearrangements helps to understand how genomes have been modified
over time. Among the elements that play an important role in genome
rearrangements are insertion sequences (ISs), the simplest type of transposable
elements (TEs), which are widespread in prokaryotic genomes. ISs can be defined
as DNA segments that are able to move (cut and paste) themselves to another
location, within the same chromosome or not. Because of this mobility, they are
often presented as responsible for some of these genomic recombinations. In
this work, we examine this claim by checking whether a relation between
insertion sequences (ISs) and genome rearrangements can be found. To achieve
this goal, a new pipeline that combines various tools was first designed to
detect the distribution of ORFs belonging to each IS category. Second, links
between these predicted ISs and the observed rearrangements of two closely
related genomes were investigated, both by visual inspection and with
computational approaches. The proposal was tested on 18 complete bacterial
genomes of Pseudomonas aeruginosa, leading to the conclusion that the IS3
family of insertion sequences is related to genomic inversions.
| [
{
"created": "Sun, 25 Jun 2017 15:14:41 GMT",
"version": "v1"
}
] | 2017-06-27 | [
[
"Al-Nayyef",
"Huda",
""
],
[
"Guyeux",
"Christophe",
""
],
[
"Petitjean",
"Marie",
""
],
[
"Hocquet",
"Didier",
""
],
[
"Bahi",
"Jacques M.",
""
]
] | During evolution, microorganism genomes undergo various changes in their lengths, gene orders, and gene contents. Investigating these structural rearrangements helps to understand how genomes have been modified over time. Some elements that play an important role in genome rearrangements are called insertion sequences (ISs); they are the simplest type of transposable elements (TEs) and are widely spread within prokaryotic genomes. ISs can be defined as DNA segments that have the ability to move (cut and paste) themselves to another location, within the same chromosome or not. Due to their ability to move around, they are often presented as responsible for some of these genomic recombinations. The authors of this research work have examined this claim by checking whether a relation between insertion sequences (ISs) and genome rearrangements can be found. To achieve this goal, a new pipeline that combines various tools has first been designed to detect the distribution of ORFs that belong to each IS category. Secondly, links between these predicted ISs and the observed rearrangements of two close genomes have been investigated, both by visual inspection and with computational approaches. The proposal has been tested on 18 complete bacterial genomes of Pseudomonas aeruginosa, leading to the conclusion that the IS3 family of insertion sequences is related to genomic inversions. |
1109.4965 | Fernao Vistulo de Abreu | Andre M. Lindo, Bruno F. Faria and Fernao V. de Abreu | Tunable kinetic proofreading in a model with molecular frustration | null | Theory in Biosciences, 2011 | 10.1007/s12064-011-0134-z | null | q-bio.MN nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In complex systems, feedback loops can build intricate emergent phenomena, so
that a description of the whole system cannot be easily derived from the
properties of the individual parts. Here we propose that inter-molecular
frustration mechanisms can provide nontrivial feedback loops that can develop
nontrivial specificity amplification. We show that this mechanism can be seen
as a more general form of a kinetic proofreading mechanism, with an interesting
new property, namely the ability to tune the specificity amplification by
changing the reactant concentrations. This contrasts with the classical
kinetic proofreading mechanism in which specificity is a function of only the
reaction rate constants involved in a chemical pathway. These results are also
interesting because they show that a wide class of frustration models exists
that share the same underlying kinetic proofreading mechanisms, with even
richer properties. These models can find applications in different areas such
as evolutionary biology, immunology and biochemistry.
| [
{
"created": "Thu, 22 Sep 2011 22:57:32 GMT",
"version": "v1"
}
] | 2011-09-26 | [
[
"Lindo",
"Andre M.",
""
],
[
"Faria",
"Bruno F.",
""
],
[
"de Abreu",
"Fernao V.",
""
]
] | In complex systems, feedback loops can build intricate emergent phenomena, so that a description of the whole system cannot be easily derived from the properties of the individual parts. Here we propose that inter-molecular frustration mechanisms can provide nontrivial feedback loops that can develop nontrivial specificity amplification. We show that this mechanism can be seen as a more general form of a kinetic proofreading mechanism, with an interesting new property, namely the ability to tune the specificity amplification by changing the reactant concentrations. This contrasts with the classical kinetic proofreading mechanism in which specificity is a function of only the reaction rate constants involved in a chemical pathway. These results are also interesting because they show that a wide class of frustration models exists that share the same underlying kinetic proofreading mechanisms, with even richer properties. These models can find applications in different areas such as evolutionary biology, immunology and biochemistry. |
1806.00415 | Boran Zhou | Boran Zhou, Landon W. Trost, Xiaoming Zhang | A Numerical Study of the Relationship Between Erectile Pressure and
Shear Wave Speed of Corpus Cavernosa in Ultrasound Vibro-elastography | 18 pages, 5 figures. 1 table | null | null | null | q-bio.TO eess.SP physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of this study was to investigate the relationship between
erectile pressure (EP) and shear wave speed of the corpus cavernosa obtained
via a specific ultrasound vibro-elastography (UVE) technique. This study builds
upon our prior investigation, in which UVE was used to evaluate the
viscoelastic properties of the corpus cavernosa in the flaccid and erect
states. A two-dimensional poroviscoelastic finite element model (FEM) was
developed to simulate wave propagation in the penile tissue according to our
experimental setup. Various levels of EP were applied to the corpus cavernosa,
and the relationship between shear wave speed in the corpus cavernosa and EP
was investigated. Results demonstrated non-linear, positive correlations
between shear wave speeds in the corpus cavernosa and increasing EP at
different vibration frequencies (100-200 Hz). These findings represent the
first report of the impact of EP on shear wave speed and validate the use of
UVE in the evaluation of men with erectile dysfunction. Further evaluations are
warranted to determine the clinical utility of this instrument in the diagnosis
and treatment of men with erectile dysfunction.
| [
{
"created": "Fri, 1 Jun 2018 16:07:27 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jun 2018 17:51:49 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Jun 2018 13:59:28 GMT",
"version": "v3"
}
] | 2018-06-22 | [
[
"Zhou",
"Boran",
""
],
[
"Trost",
"Landon W.",
""
],
[
"Zhang",
"Xiaoming",
""
]
] | The objective of this study was to investigate the relationship between erectile pressure (EP) and shear wave speed of the corpus cavernosa obtained via a specific ultrasound vibro-elastography (UVE) technique. This study builds upon our prior investigation, in which UVE was used to evaluate the viscoelastic properties of the corpus cavernosa in the flaccid and erect states. A two-dimensional poroviscoelastic finite element model (FEM) was developed to simulate wave propagation in the penile tissue according to our experimental setup. Various levels of EP were applied to the corpus cavernosa, and the relationship between shear wave speed in the corpus cavernosa and EP was investigated. Results demonstrated non-linear, positive correlations between shear wave speeds in the corpus cavernosa and increasing EP at different vibration frequencies (100-200 Hz). These findings represent the first report of the impact of EP on shear wave speed and validate the use of UVE in the evaluation of men with erectile dysfunction. Further evaluations are warranted to determine the clinical utility of this instrument in the diagnosis and treatment of men with erectile dysfunction. |
1709.02342 | Giovanni Bussi | Vojt\v{e}ch Ml\'ynsk\'y and Giovanni Bussi | Exploring RNA structure and dynamics through enhanced sampling
simulations | Accepted for publication on Current Opinion in Structural Biology | Curr. Opin. Struct. Biol. 49, 63 (2018) | 10.1016/j.sbi.2018.01.004 | null | q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA function is intimately related to its structural dynamics. Molecular
dynamics simulations are useful for exploring biomolecular flexibility but are
severely limited by the accessible timescale. Enhanced sampling methods allow
this timescale to be effectively extended in order to probe
biologically-relevant conformational changes and chemical reactions. Here, we
review the role of enhanced sampling techniques in the study of RNA systems. We
discuss the challenges and promises associated with the application of these
methods to force-field validation, exploration of conformational landscapes and
ion/ligand-RNA interactions, as well as catalytic pathways. Important technical
aspects of these methods, such as the choice of the biased collective variables
and the analysis of multi-replica simulations, are examined in detail. Finally,
a perspective on the role of these methods in the characterization of RNA
dynamics is provided.
| [
{
"created": "Thu, 7 Sep 2017 16:39:34 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jan 2018 14:49:43 GMT",
"version": "v2"
}
] | 2018-02-06 | [
[
"Mlýnský",
"Vojtěch",
""
],
[
"Bussi",
"Giovanni",
""
]
] | RNA function is intimately related to its structural dynamics. Molecular dynamics simulations are useful for exploring biomolecular flexibility but are severely limited by the accessible timescale. Enhanced sampling methods allow this timescale to be effectively extended in order to probe biologically-relevant conformational changes and chemical reactions. Here, we review the role of enhanced sampling techniques in the study of RNA systems. We discuss the challenges and promises associated with the application of these methods to force-field validation, exploration of conformational landscapes and ion/ligand-RNA interactions, as well as catalytic pathways. Important technical aspects of these methods, such as the choice of the biased collective variables and the analysis of multi-replica simulations, are examined in detail. Finally, a perspective on the role of these methods in the characterization of RNA dynamics is provided. |
2006.06360 | Breno de Oliveira Ferraz | P.P. Avelino, B.F. de Oliveira and J.V.O. Silva | Rock-paper-scissors models with a preferred mobility direction | 5 pages, 7 figures | null | 10.1209/0295-5075/132/48003 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate a modified spatial stochastic Lotka-Volterra formulation of
the rock-paper-scissors model using off-lattice stochastic simulations. In this
model one of the species moves preferentially in a specific direction -- the
level of preference being controlled by a noise strength parameter $\eta \in
[0, 1]$ ($\eta = 0$ and $\eta = 1$ corresponding to total preference and no
preference, respectively) -- while the other two species have no preferred
direction of motion. We study the behaviour of the system starting from random
initial conditions, showing that the species with asymmetric mobility always
has an advantage over its predator. We also determine the optimal value of
the noise strength parameter which gives the maximum advantage to that species.
Finally, we find that the critical number of individuals, below which the
probability of extinction becomes significant, decreases as the noise level
increases, thus showing that the addition of a preferred mobility direction
studied in the present paper does not favour coexistence.
| [
{
"created": "Thu, 11 Jun 2020 12:22:29 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Avelino",
"P. P.",
""
],
[
"de Oliveira",
"B. F.",
""
],
[
"Silva",
"J. V. O.",
""
]
] | We investigate a modified spatial stochastic Lotka-Volterra formulation of the rock-paper-scissors model using off-lattice stochastic simulations. In this model one of the species moves preferentially in a specific direction -- the level of preference being controlled by a noise strength parameter $\eta \in [0, 1]$ ($\eta = 0$ and $\eta = 1$ corresponding to total preference and no preference, respectively) -- while the other two species have no preferred direction of motion. We study the behaviour of the system starting from random initial conditions, showing that the species with asymmetric mobility always has an advantage over its predator. We also determine the optimal value of the noise strength parameter which gives the maximum advantage to that species. Finally, we find that the critical number of individuals, below which the probability of extinction becomes significant, decreases as the noise level increases, thus showing that the addition of a preferred mobility direction studied in the present paper does not favour coexistence. |
1308.3363 | Sven Jahnke | Sven Jahnke, Raoul-Martin Memmesheimer, Marc Timme | How Chaotic is the Balanced State? | null | Front. Comput. Neurosci. (2009) 3:13 | 10.3389/neuro.10.013.2009 | null | q-bio.NC cond-mat.dis-nn physics.bio-ph | http://creativecommons.org/licenses/by/3.0/ | Large sparse circuits of spiking neurons exhibit a balanced state of highly
irregular activity under a wide range of conditions. It occurs likewise in
sparsely connected random networks that receive excitatory external inputs and
recurrent inhibition as well as in networks with mixed recurrent inhibition and
excitation. Here we analytically investigate this irregular dynamics in finite
networks keeping track of all individual spike times and the identities of
individual neurons. For delayed, purely inhibitory interactions we show that
the irregular dynamics is not chaotic but in fact stable. Moreover, we
demonstrate that after long transients the dynamics converges towards periodic
orbits and that every generic periodic orbit of these dynamical systems is
stable. We investigate the collective irregular dynamics upon increasing the
time scale of synaptic responses and upon iteratively replacing inhibitory by
excitatory interactions. Whereas for small and moderate time scales as well as
for few excitatory interactions, the dynamics stays stable, there is a smooth
transition to chaos if the synaptic response becomes sufficiently slow (even in
purely inhibitory networks) or the number of excitatory interactions becomes
too large. These results indicate that chaotic and stable dynamics are equally
capable of generating the irregular neuronal activity. More generally, chaos
apparently is not essential for generating high irregularity of balanced
activity, and we suggest that a mechanism different from chaos and
stochasticity significantly contributes to irregular activity in cortical
circuits.
| [
{
"created": "Thu, 15 Aug 2013 11:30:56 GMT",
"version": "v1"
}
] | 2013-08-16 | [
[
"Jahnke",
"Sven",
""
],
[
"Memmesheimer",
"Raoul-Martin",
""
],
[
"Timme",
"Marc",
""
]
] | Large sparse circuits of spiking neurons exhibit a balanced state of highly irregular activity under a wide range of conditions. It occurs likewise in sparsely connected random networks that receive excitatory external inputs and recurrent inhibition as well as in networks with mixed recurrent inhibition and excitation. Here we analytically investigate this irregular dynamics in finite networks keeping track of all individual spike times and the identities of individual neurons. For delayed, purely inhibitory interactions we show that the irregular dynamics is not chaotic but in fact stable. Moreover, we demonstrate that after long transients the dynamics converges towards periodic orbits and that every generic periodic orbit of these dynamical systems is stable. We investigate the collective irregular dynamics upon increasing the time scale of synaptic responses and upon iteratively replacing inhibitory by excitatory interactions. Whereas for small and moderate time scales as well as for few excitatory interactions, the dynamics stays stable, there is a smooth transition to chaos if the synaptic response becomes sufficiently slow (even in purely inhibitory networks) or the number of excitatory interactions becomes too large. These results indicate that chaotic and stable dynamics are equally capable of generating the irregular neuronal activity. More generally, chaos apparently is not essential for generating high irregularity of balanced activity, and we suggest that a mechanism different from chaos and stochasticity significantly contributes to irregular activity in cortical circuits. |
1801.06168 | Youngmin Park | Youngmin Park, G. Bard Ermentrout | Scalar Reduction of a Neural Field Model with Spike Frequency Adaptation | 60 pages, 22 figures | null | null | null | q-bio.NC nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a deterministic version of a one- and two-dimensional attractor
neural network model of hippocampal activity first studied by Itskov et al.
(2011). We analyze the dynamics of the system on the ring and torus domains with
an even periodized weight matrix, assuming weak and slow spike frequency
adaptation and a weak stationary input current. On these domains, we find
transitions from spatially localized stationary solutions ("bumps") to
(periodically modulated) solutions ("sloshers"), as well as constant and
non-constant velocity traveling bumps depending on the relative strength of
external input current and adaptation. The weak and slow adaptation allows for
a reduction of the system from a distributed partial integro-differential
equation to a system of scalar Volterra integro-differential equations
describing the movement of the centroid of the bump solution. Using this
reduction, we show that on both domains, sloshing solutions arise through an
Andronov-Hopf bifurcation and derive a normal form for the Hopf bifurcation on
the ring. We also show existence and stability of constant velocity solutions
on both domains using Evans functions. In contrast to existing studies, we
assume a general weight matrix of Mexican-hat type in addition to a smooth
firing rate function.
| [
{
"created": "Thu, 18 Jan 2018 18:46:21 GMT",
"version": "v1"
}
] | 2018-01-19 | [
[
"Park",
"Youngmin",
""
],
[
"Ermentrout",
"G. Bard",
""
]
] | We study a deterministic version of a one- and two-dimensional attractor neural network model of hippocampal activity first studied by Itskov et al. (2011). We analyze the dynamics of the system on the ring and torus domains with an even periodized weight matrix, assuming weak and slow spike frequency adaptation and a weak stationary input current. On these domains, we find transitions from spatially localized stationary solutions ("bumps") to (periodically modulated) solutions ("sloshers"), as well as constant and non-constant velocity traveling bumps depending on the relative strength of external input current and adaptation. The weak and slow adaptation allows for a reduction of the system from a distributed partial integro-differential equation to a system of scalar Volterra integro-differential equations describing the movement of the centroid of the bump solution. Using this reduction, we show that on both domains, sloshing solutions arise through an Andronov-Hopf bifurcation and derive a normal form for the Hopf bifurcation on the ring. We also show existence and stability of constant velocity solutions on both domains using Evans functions. In contrast to existing studies, we assume a general weight matrix of Mexican-hat type in addition to a smooth firing rate function. |
2011.11099 | Christopher Kempes | Christopher P. Kempes, Geoffrey B. West, John W. Pepper | Paradox resolved: The allometric scaling of cancer risk across species | 10 pages, 3 figures | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the cross-species behavior of cancer is important for
uncovering fundamental mechanisms of carcinogenesis, and for translating
results of model systems between species. One of the most famous interspecific
considerations of cancer is Peto's paradox, which asserts that organisms with
vastly different body mass are expected to have a vastly different degree of
cancer risk, a pattern that is not observed empirically. Here we show that this
observation is not a paradox at all but follows naturally from the
interspecific scaling of metabolic rates across mammalian body size. We connect
metabolic allometry to evolutionary models of cancer development in tissues to
show that waiting time to cancer scales with body mass in a similar way as
normal organism lifespan does. Thus, the expectation across mammals is that
lifetime cancer risk is invariant with body mass. Peto's observation is
therefore not a paradox, but the natural expectation from interspecific scaling
of metabolism and physiology. These allometric patterns have theoretical
implications for understanding life span evolution, and practical implications
for using smaller animals as model systems for human cancer.
| [
{
"created": "Sun, 22 Nov 2020 20:33:12 GMT",
"version": "v1"
}
] | 2020-11-24 | [
[
"Kempes",
"Christopher P.",
""
],
[
"West",
"Geoffrey B.",
""
],
[
"Pepper",
"John W.",
""
]
] | Understanding the cross-species behavior of cancer is important for uncovering fundamental mechanisms of carcinogenesis, and for translating results of model systems between species. One of the most famous interspecific considerations of cancer is Peto's paradox, which asserts that organisms with vastly different body mass are expected to have a vastly different degree of cancer risk, a pattern that is not observed empirically. Here we show that this observation is not a paradox at all but follows naturally from the interspecific scaling of metabolic rates across mammalian body size. We connect metabolic allometry to evolutionary models of cancer development in tissues to show that waiting time to cancer scales with body mass in a similar way as normal organism lifespan does. Thus, the expectation across mammals is that lifetime cancer risk is invariant with body mass. Peto's observation is therefore not a paradox, but the natural expectation from interspecific scaling of metabolism and physiology. These allometric patterns have theoretical implications for understanding life span evolution, and practical implications for using smaller animals as model systems for human cancer. |
0903.0137 | Sergei Kozyrev | A.Yu. Khrennikov, S.V. Kozyrev | p-Adic numbers in bioinformatics: from genetic code to PAM-matrix | 11 pages, discussion and references added | Journal of Theoretical Biology. 2009. V.261. P.396--406 | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we demonstrate that the use of the system of 2-adic numbers
provides new insight into some problems of genetics, in particular the degeneracy of
the genetic code and the structure of the PAM matrix in bioinformatics. The
2-adic distance is an ultrametric, and applications of ultrametrics in
bioinformatics are not surprising. However, by using the 2-adic numbers we
match the ultrametric with a number-theoretic structure. In this way we find new
applications of an ultrametric which differ from those known up to now in
bioinformatics.
We obtain the following results. We show that the PAM matrix A allows the
expansion into the sum of the two matrices A=A^{(2)}+A^{(\infty)}, where the
matrix A^{(2)} is 2-adically regular (i.e. the matrix elements of this matrix are
close to locally constant with respect to the 2-adic parametrization of the
genetic code discussed earlier by the authors), and the matrix A^{(\infty)} is
sparse. We discuss the structure of the matrix A^{(\infty)} in relation to the
side chain properties of the corresponding amino acids.
| [
{
"created": "Sun, 1 Mar 2009 13:02:54 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Mar 2009 12:08:36 GMT",
"version": "v2"
},
{
"created": "Sat, 4 Apr 2009 16:42:16 GMT",
"version": "v3"
}
] | 2011-05-10 | [
[
"Khrennikov",
"A. Yu.",
""
],
[
"Kozyrev",
"S. V.",
""
]
] | In this paper we demonstrate that the use of the system of 2-adic numbers provides new insight into some problems of genetics, in particular the degeneracy of the genetic code and the structure of the PAM matrix in bioinformatics. The 2-adic distance is an ultrametric, and applications of ultrametrics in bioinformatics are not surprising. However, by using the 2-adic numbers we match the ultrametric with a number-theoretic structure. In this way we find new applications of an ultrametric which differ from those known up to now in bioinformatics. We obtain the following results. We show that the PAM matrix A allows the expansion into the sum of the two matrices A=A^{(2)}+A^{(\infty)}, where the matrix A^{(2)} is 2-adically regular (i.e. the matrix elements of this matrix are close to locally constant with respect to the 2-adic parametrization of the genetic code discussed earlier by the authors), and the matrix A^{(\infty)} is sparse. We discuss the structure of the matrix A^{(\infty)} in relation to the side chain properties of the corresponding amino acids. |
q-bio/0503007 | Helmut Schiessel | Frank Muehlbacher, Christian Holm, Helmut Schiessel | Controlled DNA compaction within chromatin: the tail-bridging effect | 4 pages, 5 figures, submitted | null | 10.1209/epl/i2005-10351-4 | null | q-bio.BM | null | We study the mechanism underlying the attraction between nucleosomes, the
fundamental packaging units of DNA inside the chromatin complex. We introduce a
simple model of the nucleosome, the eight-tail colloid, consisting of a charged
sphere with eight oppositely charged, flexible, grafted chains that represent
the terminal histone tails. We demonstrate that our complexes are attracted via
the formation of chain bridges and that this attraction can be tuned by
changing the fraction of charged monomers on the tails. This suggests a
physical mechanism of chromatin compaction where the degree of DNA condensation
can be controlled via biochemical means, namely the acetylation and
deacetylation of lysines in the histone tails.
| [
{
"created": "Wed, 2 Mar 2005 09:57:47 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Muehlbacher",
"Frank",
""
],
[
"Holm",
"Christian",
""
],
[
"Schiessel",
"Helmut",
""
]
] | We study the mechanism underlying the attraction between nucleosomes, the fundamental packaging units of DNA inside the chromatin complex. We introduce a simple model of the nucleosome, the eight-tail colloid, consisting of a charged sphere with eight oppositely charged, flexible, grafted chains that represent the terminal histone tails. We demonstrate that our complexes are attracted via the formation of chain bridges and that this attraction can be tuned by changing the fraction of charged monomers on the tails. This suggests a physical mechanism of chromatin compaction where the degree of DNA condensation can be controlled via biochemical means, namely the acetylation and deacetylation of lysines in the histone tails. |
2004.09404 | Andrea Pugliese | Andrea Pugliese and Sara Sottile | Inferring the COVID-19 infection curve in Italy | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this manuscript is to show a simple method to infer the time-course of
new COVID-19 infections (the most important information in order to establish
the effect of containment strategies) from available aggregated data, such as
number of deaths and hospitalizations. The method, which was used for HIV-AIDS
and was named `back-calculation', relies on good estimates of the distribution
of the delays between infection and the observed events; assuming that the
epidemic follows a simple SIR model with a known generation interval, we can
then estimate the parameters that define the time-varying contact rate through
maximum likelihood. We show the application of the method to data from Italy
and several of its regions; it is found that $R_0$ had decreased consistently
below 1 around March 20, and in the beginning of April it was between 0.5 and
0.8 in the whole Italy and in most regions.
| [
{
"created": "Mon, 20 Apr 2020 15:58:48 GMT",
"version": "v1"
}
] | 2020-04-21 | [
[
"Pugliese",
"Andrea",
""
],
[
"Sottile",
"Sara",
""
]
] | The aim of this manuscript is to show a simple method to infer the time-course of new COVID-19 infections (the most important information in order to establish the effect of containment strategies) from available aggregated data, such as the number of deaths and hospitalizations. The method, which was used for HIV-AIDS and was named `back-calculation', relies on good estimates of the distribution of the delays between infection and the observed events; assuming that the epidemic follows a simple SIR model with a known generation interval, we can then estimate the parameters that define the time-varying contact rate through maximum likelihood. We show the application of the method to data from Italy and several of its regions; it is found that $R_0$ had decreased consistently below 1 around March 20, and in the beginning of April it was between 0.5 and 0.8 in the whole Italy and in most regions. |
1911.02245 | Khaled Khleifat Dr | Ibtihal Nayel ALrawashdeh, Haitham Qaralleh, Muhamad O. Al-limoun,
Khaled M. Khleifat | Antibacterial Activity of Asteriscus graveolens Methanolic Extract:
Synergistic Effect with Fungal Mediated Nanoparticles against Some Enteric
Bacterial Human Pathogens | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The antibacterial activity of Asteriscus graveolens methanolic extract and its
synergistic effect with fungus-mediated silver nanoparticles (AgNPs) against some
enteric bacterial human pathogens was investigated. Silver nanoparticles were
synthesized by the fungal strain Tritirachium oryzae W5H, as reported
earlier. In this study, the MICs of AgNPs against E. aerogenes, Salmonella sp., E.
coli and C. albicans were 2.13, 19.15, 0.08 and 6.38 micrograms per
mL, respectively, while the MICs of A. graveolens ethanolic extract against the
same organisms were 4, 366, 3300 and 40 micrograms per mL, respectively. MIC
values at concentrations below 19.15 and 40 micrograms per mL indicate the
potent bacteriostatic effect of AgNPs and A. graveolens ethanolic extract. An
increase in IFA was reported when Nitrofurantoin and Trimethoprim were combined
with the EtOH extract, with maximum increases in IFA of 6 and 12 folds,
respectively. A 10-fold increase in IFA was also reported when trimethoprim was
combined with the AgNPs:EtOH extract. However, there was no synergistic effect
between the antifungal agents (Caspofungin and Micafungin) combined with AgNPs
and/or A. graveolens ethanolic extract against C. albicans. The potent
synergistic effect of A. graveolens ethanolic extract and/or NPs with
conventional antibiotics is novel in inhibiting antibiotic-resistant bacteria.
In this study, a remarkable increase in antibacterial activity was reported
when the antibiotics to which the bacteria were most resistant were combined
with A. graveolens ethanolic extract and/or NPs.
| [
{
"created": "Wed, 6 Nov 2019 08:09:07 GMT",
"version": "v1"
}
] | 2019-11-07 | [
[
"ALrawashdeh",
"Ibtihal Nayel",
""
],
[
"Qaralleh",
"Haitham",
""
],
[
"Al-limoun",
"Muhamad O.",
""
],
[
"Khleifat",
"Khaled M.",
""
]
] | The antibacterial activity of Asteriscus graveolens methanolic extract and its synergistic effect with fungus-mediated silver nanoparticles (AgNPs) against some enteric bacterial human pathogens was investigated. Silver nanoparticles were synthesized by the fungal strain Tritirachium oryzae W5H, as reported earlier. In this study, the MICs of AgNPs against E. aerogenes, Salmonella sp., E. coli and C. albicans were 2.13, 19.15, 0.08 and 6.38 micrograms per mL, respectively, while the MICs of A. graveolens ethanolic extract against the same organisms were 4, 366, 3300 and 40 micrograms per mL, respectively. MIC values at concentrations below 19.15 and 40 micrograms per mL indicate the potent bacteriostatic effect of AgNPs and A. graveolens ethanolic extract. An increase in IFA was reported when Nitrofurantoin and Trimethoprim were combined with the EtOH extract, with maximum increases in IFA of 6 and 12 folds, respectively. A 10-fold increase in IFA was also reported when trimethoprim was combined with the AgNPs:EtOH extract. However, there was no synergistic effect between the antifungal agents (Caspofungin and Micafungin) combined with AgNPs and/or A. graveolens ethanolic extract against C. albicans. The potent synergistic effect of A. graveolens ethanolic extract and/or NPs with conventional antibiotics is novel in inhibiting antibiotic-resistant bacteria. In this study, a remarkable increase in antibacterial activity was reported when the antibiotics to which the bacteria were most resistant were combined with A. graveolens ethanolic extract and/or NPs. |
1510.07346 | Christos Skiadas H | Christos H Skiadas | Verifying the HALE measures of the Global Burden of Disease Study:
Quantitative Methods Proposed | 29 pages, 9 figures, 6 Tables (3 Tables with full estimated figures) | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To verify the Global Burden of Disease Study and the provided healthy life
expectancy (HALE) estimates from the World Health Organization (WHO) we propose
a very simple model based on the mortality {\mu}x of a population provided in a
classical life table and a mortality diagram. We use the abridged life tables
provided by WHO. Our estimates are compared with the HALE estimates for the
World territories and the WHO countries. Moreover, we have developed a
related simple program in Excel which immediately provides the Life Expectancy,
the Loss of Healthy Life Years and the Healthy Life Expectancy estimates. We
also apply the health state function theory to obtain more estimates and
comparisons. The results suggest improved WHO estimates in recent years for the
majority of the cases. Keywords: Health state function, Healthy life
expectancy, Mortality Diagram, Loss of healthy years, LHLY, HALE, DALE, World
Health Organization, WHO, Global burden of Disease, Health status.
| [
{
"created": "Mon, 26 Oct 2015 01:14:31 GMT",
"version": "v1"
}
] | 2015-10-27 | [
[
"Skiadas",
"Christos H",
""
]
] | To verify the Global Burden of Disease Study and the provided healthy life expectancy (HALE) estimates from the World Health Organization (WHO) we propose a very simple model based on the mortality {\mu}x of a population provided in a classical life table and a mortality diagram. We use the abridged life tables provided by WHO. Our estimates are compared with the HALE estimates for the World territories and the WHO countries. Moreover, we have developed a related simple program in Excel which immediately provides the Life Expectancy, the Loss of Healthy Life Years and the Healthy Life Expectancy estimates. We also apply the health state function theory to obtain more estimates and comparisons. The results suggest improved WHO estimates in recent years for the majority of the cases. Keywords: Health state function, Healthy life expectancy, Mortality Diagram, Loss of healthy years, LHLY, HALE, DALE, World Health Organization, WHO, Global burden of Disease, Health status.
2205.03912 | Ruiyang Zhou | Ruiyang Zhou, Fengying Wei | An age-structured epidemic model with vaccination | null | PLOS ONE 19(7): e0306554 (2024) | 10.1371/journal.pone.0306554 | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | In this article, we construct an age-structured model for COVID-19 with
vaccination and analyze it from multiple perspectives. We derive the unique
disease-free equilibrium point and the basic reproduction number $
\mathscr{R}_0 $, then we show that the disease-free equilibrium is locally
asymptotically stable when $ \mathscr{R}_0 < 1 $, while it is unstable when $
\mathscr{R}_0 > 1 $. We also work out endemic equilibrium points and reveal the
stability. We use sensitivity analysis to explore how parameters influence $
\mathscr{R}_0 $. Sensitivity analysis helps us develop more targeted strategies
to control epidemics. Finally, this model is used to discuss the cases in
Shijiazhuang, Hebei Province at the beginning of 2021. We compare reported
cases with the simulation to evaluate the measures taken by the Shijiazhuang
government. Our study shows how age structure, vaccination and drastic
containment measures can affect the epidemic.
| [
{
"created": "Sun, 8 May 2022 16:43:31 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Aug 2022 07:18:32 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Aug 2022 06:38:36 GMT",
"version": "v3"
}
] | 2024-07-09 | [
[
"Zhou",
"Ruiyang",
""
],
[
"Wei",
"Fengying",
""
]
] | In this article, we construct an age-structured model for COVID-19 with vaccination and analyze it from multiple perspectives. We derive the unique disease-free equilibrium point and the basic reproduction number $ \mathscr{R}_0 $, then we show that the disease-free equilibrium is locally asymptotically stable when $ \mathscr{R}_0 < 1 $, while it is unstable when $ \mathscr{R}_0 > 1 $. We also work out endemic equilibrium points and reveal the stability. We use sensitivity analysis to explore how parameters influence $ \mathscr{R}_0 $. Sensitivity analysis helps us develop more targeted strategies to control epidemics. Finally, this model is used to discuss the cases in Shijiazhuang, Hebei Province at the beginning of 2021. We compare reported cases with the simulation to evaluate the measures taken by the Shijiazhuang government. Our study shows how age structure, vaccination and drastic containment measures can affect the epidemic.
2011.08444 | Shusen Pu | Shusen Pu and Peter J. Thomas | Resolving Molecular Contributions of Ion Channel Noise to Interspike
Interval Variability through Stochastic Shielding | 51 pages, 14 Figures | null | null | null | q-bio.NC cs.NA math.DS math.NA | http://creativecommons.org/licenses/by/4.0/ | The contributions of independent noise sources to the variability of action
potential timing have not previously been studied at the level of individual
directed molecular transitions within a conductance-based model ion-state
graph. The underlying connection provides an important example of how
mathematics can be applied to study the effects of unobservable microscopic
fluctuations on macroscopically observable quantities. We study a stochastic
Langevin model and show how to resolve the individual contributions that each
transition in the ion channel graph makes to the variance of the interspike
interval (ISI). We extend the mean--return-time (MRT) phase reduction developed
in (Cao et al. 2020, SIAM J. Appl. Math) to the second moment of the return
time from an MRT isochron to itself. Because fixed-voltage spike-detection
triggers do not correspond to MRT isochrons, the inter-phase interval (IPI)
variance only approximates the ISI variance. We find the IPI variance and ISI
variance agree to within a few percent when both can be computed. Moreover, we
prove rigorously, and show numerically, that our expression for the IPI
variance is accurate in the small noise (large system size) regime; our theory
is exact in the limit of small noise. By selectively including the noise
associated with only those few transitions responsible for most of the ISI
variance, our analysis extends the stochastic shielding (SS) paradigm (Schmandt
et al. 2012, Phys. Rev. Lett.) from the stationary voltage-clamp case to the
current-clamp case. We show numerically that the SS approximation has a high
degree of accuracy even for larger, physiologically relevant noise levels. We
show that the ISI variance is not an unambiguously defined quantity, but
depends on the choice of voltage level set as the spike-detection threshold,
both in vitro and in silico.
| [
{
"created": "Tue, 17 Nov 2020 05:54:50 GMT",
"version": "v1"
}
] | 2020-11-18 | [
[
"Pu",
"Shusen",
""
],
[
"Thomas",
"Peter J.",
""
]
] | The contributions of independent noise sources to the variability of action potential timing have not previously been studied at the level of individual directed molecular transitions within a conductance-based model ion-state graph. The underlying connection provides an important example of how mathematics can be applied to study the effects of unobservable microscopic fluctuations on macroscopically observable quantities. We study a stochastic Langevin model and show how to resolve the individual contributions that each transition in the ion channel graph makes to the variance of the interspike interval (ISI). We extend the mean--return-time (MRT) phase reduction developed in (Cao et al. 2020, SIAM J. Appl. Math) to the second moment of the return time from an MRT isochron to itself. Because fixed-voltage spike-detection triggers do not correspond to MRT isochrons, the inter-phase interval (IPI) variance only approximates the ISI variance. We find the IPI variance and ISI variance agree to within a few percent when both can be computed. Moreover, we prove rigorously, and show numerically, that our expression for the IPI variance is accurate in the small noise (large system size) regime; our theory is exact in the limit of small noise. By selectively including the noise associated with only those few transitions responsible for most of the ISI variance, our analysis extends the stochastic shielding (SS) paradigm (Schmandt et al. 2012, Phys. Rev. Lett.) from the stationary voltage-clamp case to the current-clamp case. We show numerically that the SS approximation has a high degree of accuracy even for larger, physiologically relevant noise levels. We show that the ISI variance is not an unambiguously defined quantity, but depends on the choice of voltage level set as the spike-detection threshold, both in vitro and in silico.
1403.0631 | Vishnu Chaturvedi | Tao Zhang, Tanya R. Victor, Sunanda S. Rajkumar, Xiaojiang Li, Joseph
C. Okoniewski, Alan C. Hicks, April D. Davis, Kelly Broussard, Shannon L.
LaDeau, Sudha Chaturvedi, and Vishnu Chaturvedi | Mycobiome of the Bat White Nose Syndrome (WNS) Affected Caves and Mines
reveals High Diversity of Fungi and Local Adaptation by the Fungal Pathogen
Pseudogymnoascus (Geomyces) destructans | 59 pages, 7figures | PloS one 9.9 (2014): e108714 | 10.1371/journal.pone.0108714 | null | q-bio.PE | http://creativecommons.org/licenses/publicdomain/ | The investigations of the bat White Nose Syndrome (WNS) have yet to provide
answers as to how the causative fungus Pseudogymnoascus (Geomyces) destructans
(Pd) first appeared in the Northeast and how a single clone has spread rapidly
in the US and Canada. We aimed to catalogue Pd and all other fungi (mycobiome)
by the culture-dependent (CD) and culture-independent (CI) methods in four
mines and two caves at the epicenter of the WNS zoonosis. Six hundred
sixty-five fungal isolates were obtained by the CD method, including the live
recovery of Pd.
Seven hundred three nucleotide sequences that met the definition of operational
taxonomic units (OTUs) were recovered by CI methods. Most OTUs belonged to
unidentified clones deposited in the databases as environmental nucleic acid
sequences (ENAS). The core mycobiome of WNS affected sites comprised of 46
species of fungi from 31 genera recovered in culture, and 17 fungal genera and
31 ENAS identified from clone libraries. Fungi such as Arthroderma spp.,
Geomyces spp., Kernia spp., Mortierella spp., Penicillium spp., and
Verticillium spp. were predominant in culture while Ganoderma spp., Geomyces
spp., Mortierella spp., Penicillium spp. and Trichosporon spp. were abundant is
clone libraries. Alpha diversity analyses from CI data revealed that fungal
community structure was highly diverse. However, the true species diversity
remains undetermined due to under sampling. The frequent recovery of Pd
indicated that the pathogen has adapted to WNS-afflicted habitats. Further,
this study supports the hypothesis that Pd is an introduced species. These
findings underscore the need for integrated WNS control measures that target
both bats and the fungal pathogen.
| [
{
"created": "Mon, 3 Mar 2014 23:20:08 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Oct 2014 12:21:59 GMT",
"version": "v2"
}
] | 2014-10-07 | [
[
"Zhang",
"Tao",
""
],
[
"Victor",
"Tanya R.",
""
],
[
"Rajkumar",
"Sunanda S.",
""
],
[
"Li",
"Xiaojiang",
""
],
[
"Okoniewski",
"Joseph C.",
""
],
[
"Hicks",
"Alan C.",
""
],
[
"Davis",
"April D.",
""
],
[
"Broussard",
"Kelly",
""
],
[
"LaDeau",
"Shannon L.",
""
],
[
"Chaturvedi",
"Sudha",
""
],
[
"Chaturvedi",
"Vishnu",
""
]
] | The investigations of the bat White Nose Syndrome (WNS) have yet to provide answers as to how the causative fungus Pseudogymnoascus (Geomyces) destructans (Pd) first appeared in the Northeast and how a single clone has spread rapidly in the US and Canada. We aimed to catalogue Pd and all other fungi (mycobiome) by the culture-dependent (CD) and culture-independent (CI) methods in four mines and two caves at the epicenter of the WNS zoonosis. Six hundred sixty-five fungal isolates were obtained by the CD method, including the live recovery of Pd. Seven hundred three nucleotide sequences that met the definition of operational taxonomic units (OTUs) were recovered by CI methods. Most OTUs belonged to unidentified clones deposited in the databases as environmental nucleic acid sequences (ENAS). The core mycobiome of WNS-affected sites comprised 46 species of fungi from 31 genera recovered in culture, and 17 fungal genera and 31 ENAS identified from clone libraries. Fungi such as Arthroderma spp., Geomyces spp., Kernia spp., Mortierella spp., Penicillium spp., and Verticillium spp. were predominant in culture while Ganoderma spp., Geomyces spp., Mortierella spp., Penicillium spp. and Trichosporon spp. were abundant in clone libraries. Alpha diversity analyses from CI data revealed that fungal community structure was highly diverse. However, the true species diversity remains undetermined due to undersampling. The frequent recovery of Pd indicated that the pathogen has adapted to WNS-afflicted habitats. Further, this study supports the hypothesis that Pd is an introduced species. These findings underscore the need for integrated WNS control measures that target both bats and the fungal pathogen.
1501.07795 | Subhadip Raychaudhuri | Subhadip Raychaudhuri and Somkanya C. Raychaudhuri | Death ligand concentration and the membrane proximal signaling module
regulate the type 1/ type 2 choice in apoptotic death signaling | 33 pages, 8 figures in Syst Synth Biol 2014 | null | 10.1007/s11693-013-9124-4 | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Apoptotic death pathways are frequently activated by death ligand induction
and subsequent activation of the membrane proximal signaling module. Death
receptors cluster upon binding to death ligands, leading to formation of a
membrane proximal death-inducing-signaling-complex (DISC). In this membrane
proximal signalosome, initiator caspases (caspase 8) are processed resulting in
activation of both type 1 and type 2 pathways of apoptosis signaling. How the
type 1/type 2 choice is made is an important question in the systems biology of
apoptosis signaling. In this study, we utilize a Monte Carlo based in silico
approach to elucidate the role of the membrane proximal signaling module in the
type 1/type 2 choice of apoptosis signaling. Our results provide crucial
mechanistic insights into the formation of DISC signalosome and caspase 8
activation. Increased concentration of death ligands was shown to correlate
with increased type 1 activation. We also study the caspase 6 mediated system
level feedback activation of apoptosis signaling and its role in the type
1/type 2 choice. Our results clarify the basis of cell-to-cell stochastic
variability in apoptosis activation and ramifications of this issue is further
discussed in the context of therapies for cancer and neurodegenerative
disorders.
| [
{
"created": "Fri, 30 Jan 2015 15:03:12 GMT",
"version": "v1"
}
] | 2015-02-02 | [
[
"Raychaudhuri",
"Subhadip",
""
],
[
"Raychaudhuri",
"Somkanya C.",
""
]
] | Apoptotic death pathways are frequently activated by death ligand induction and subsequent activation of the membrane proximal signaling module. Death receptors cluster upon binding to death ligands, leading to formation of a membrane proximal death-inducing-signaling-complex (DISC). In this membrane proximal signalosome, initiator caspases (caspase 8) are processed resulting in activation of both type 1 and type 2 pathways of apoptosis signaling. How the type 1/type 2 choice is made is an important question in the systems biology of apoptosis signaling. In this study, we utilize a Monte Carlo based in silico approach to elucidate the role of the membrane proximal signaling module in the type 1/type 2 choice of apoptosis signaling. Our results provide crucial mechanistic insights into the formation of DISC signalosome and caspase 8 activation. Increased concentration of death ligands was shown to correlate with increased type 1 activation. We also study the caspase 6 mediated system level feedback activation of apoptosis signaling and its role in the type 1/type 2 choice. Our results clarify the basis of cell-to-cell stochastic variability in apoptosis activation, and the ramifications of this issue are further discussed in the context of therapies for cancer and neurodegenerative disorders.
2106.01746 | Alix Marie d'Avigneau | A. Marie d'Avigneau, S. S. Singh, R. J. Ober | Limits of accuracy for parameter estimation and localisation in
Single-Molecule Microscopy via sequential Monte Carlo methods | 38 pages (inc. 7 pages appendix), 11 figures | null | null | null | q-bio.QM stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Assessing the quality of parameter estimates for models describing the motion
of single molecules in cellular environments is an important problem in
fluorescence microscopy. We consider the fundamental data model, where
molecules emit photons at random times and the photons arrive at random
locations on the detector according to complex point spread functions (PSFs).
The random, non-Gaussian PSF of the detection process and random trajectory of
the molecule make inference challenging. Moreover, the presence of other nearby
molecules causes further uncertainty in the origin of the measurements, which
impacts the statistical precision of estimates. We quantify the limits of
accuracy of model parameter estimates and separation distance between closely
spaced molecules (known as the resolution problem) by computing the Cramer-Rao
lower bound (CRLB), or equivalently the inverse of the Fisher information
matrix (FIM), for the variance of estimates. This fundamental CRLB is crucial,
as it provides a lower bound for more practical scenarios. While analytic
expressions for the FIM can be derived for static molecules, the analytical
tools to evaluate it for molecules whose trajectories follow SDEs are still
mostly missing. We address this by presenting a general SMC based methodology
for both parameter inference and computing the desired accuracy limits for
non-static molecules and a non-Gaussian fundamental detection model. For the
first time, we are able to estimate the FIM for stochastically moving molecules
observed through the Airy and Born & Wolf PSF. This is achieved by estimating
the score and observed information matrix via SMC. We sum up the outcome of our
numerical work by summarising the qualitative behaviours for the accuracy
limits as functions of e.g. collected photon count, molecule diffusion, etc. We
also verify that we can recover known results from the static molecule case.
| [
{
"created": "Thu, 3 Jun 2021 10:54:40 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Sep 2021 12:00:04 GMT",
"version": "v2"
}
] | 2021-09-15 | [
[
"d'Avigneau",
"A. Marie",
""
],
[
"Singh",
"S. S.",
""
],
[
"Ober",
"R. J.",
""
]
] | Assessing the quality of parameter estimates for models describing the motion of single molecules in cellular environments is an important problem in fluorescence microscopy. We consider the fundamental data model, where molecules emit photons at random times and the photons arrive at random locations on the detector according to complex point spread functions (PSFs). The random, non-Gaussian PSF of the detection process and random trajectory of the molecule make inference challenging. Moreover, the presence of other nearby molecules causes further uncertainty in the origin of the measurements, which impacts the statistical precision of estimates. We quantify the limits of accuracy of model parameter estimates and separation distance between closely spaced molecules (known as the resolution problem) by computing the Cramer-Rao lower bound (CRLB), or equivalently the inverse of the Fisher information matrix (FIM), for the variance of estimates. This fundamental CRLB is crucial, as it provides a lower bound for more practical scenarios. While analytic expressions for the FIM can be derived for static molecules, the analytical tools to evaluate it for molecules whose trajectories follow SDEs are still mostly missing. We address this by presenting a general SMC based methodology for both parameter inference and computing the desired accuracy limits for non-static molecules and a non-Gaussian fundamental detection model. For the first time, we are able to estimate the FIM for stochastically moving molecules observed through the Airy and Born & Wolf PSF. This is achieved by estimating the score and observed information matrix via SMC. We sum up the outcome of our numerical work by summarising the qualitative behaviours for the accuracy limits as functions of e.g. collected photon count, molecule diffusion, etc. We also verify that we can recover known results from the static molecule case. |
1509.01913 | Philippe Terrier PhD | Philippe Terrier | Fractal Fluctuations in Human Walking: Comparison between Auditory and
Visually Guided Stepping | Article accepted for publication in the Annals of Biomedical
Engineering. Revised in February 2016: final author's version | null | 10.1007/s10439-016-1573-y | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In human locomotion, sensorimotor synchronization of gait consists of the
coordination of stepping with rhythmic auditory cues (auditory cueing, AC). AC
changes the long-range correlations among consecutive strides (fractal
dynamics) into anti-correlations. Visual cueing (VC) is the alignment of step
lengths with marks on the floor. The effects of VC on the fluctuation structure
of walking have not been investigated. Therefore, the objective was to compare
the effects of AC and VC on the fluctuation pattern of basic spatiotemporal
gait parameters. Thirty-six healthy individuals walked 3 x 500 strides on an
instrumented treadmill with augmented reality capabilities. The conditions were
no cueing (NC), AC, and VC. AC included an isochronous metronome. In VC,
projected stepping stones were synchronized with the treadmill speed. Detrended
fluctuation analysis assessed the correlation structure. The coefficient of
variation (CV) was also assessed. The results showed that AC and VC similarly
induced a strong anti-correlated pattern in the gait parameters. The CVs were
similar between the NC and AC conditions but substantially higher in the VC
condition. AC and VC probably mobilize similar motor control pathways and can
be used alternatively in gait rehabilitation. However, the increased gait
variability induced by VC should be considered.
| [
{
"created": "Mon, 7 Sep 2015 06:20:48 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Dec 2015 10:48:38 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Feb 2016 08:20:46 GMT",
"version": "v3"
}
] | 2016-02-19 | [
[
"Terrier",
"Philippe",
""
]
] | In human locomotion, sensorimotor synchronization of gait consists of the coordination of stepping with rhythmic auditory cues (auditory cueing, AC). AC changes the long-range correlations among consecutive strides (fractal dynamics) into anti-correlations. Visual cueing (VC) is the alignment of step lengths with marks on the floor. The effects of VC on the fluctuation structure of walking have not been investigated. Therefore, the objective was to compare the effects of AC and VC on the fluctuation pattern of basic spatiotemporal gait parameters. Thirty-six healthy individuals walked 3 x 500 strides on an instrumented treadmill with augmented reality capabilities. The conditions were no cueing (NC), AC, and VC. AC included an isochronous metronome. In VC, projected stepping stones were synchronized with the treadmill speed. Detrended fluctuation analysis assessed the correlation structure. The coefficient of variation (CV) was also assessed. The results showed that AC and VC similarly induced a strong anti-correlated pattern in the gait parameters. The CVs were similar between the NC and AC conditions but substantially higher in the VC condition. AC and VC probably mobilize similar motor control pathways and can be used alternatively in gait rehabilitation. However, the increased gait variability induced by VC should be considered. |
1306.6163 | Dave Lunt | David H Lunt, Sujai Kumar, Georgios Koutsovoulos, Mark L Blaxter | The complex hybrid origins of the root knot nematodes revealed through
comparative genomics | null | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Meloidogyne root knot nematodes (RKN) can infect most of the world's
agricultural crop species and are among the most important of all plant
pathogens. As yet, however, we have little understanding of their origins or the
genomic basis of their extreme polyphagy. The most damaging pathogens reproduce
by mitotic parthenogenesis and are suggested to originate by interspecific
hybridizations between unknown parental taxa. We sequenced the genome of the
diploid meiotic parthenogen Meloidogyne floridensis, and use a comparative
genomic approach to test the hypothesis that it was involved in the hybrid
origin of the tropical mitotic parthenogen M. incognita. Phylogenomic analysis
of gene families from M. floridensis, M. incognita and an outgroup species M.
hapla was used to trace the evolutionary history of these species' genomes,
demonstrating that M. floridensis was one of the parental species in the hybrid
origins of M. incognita. Analysis of the M. floridensis genome revealed many
gene loci present in divergent copies, as they are in M. incognita, indicating
that it too had a hybrid origin. The triploid M. incognita is shown to be a
complex double-hybrid between M. floridensis and a third, unidentified parent.
The agriculturally important RKN have very complex origins involving the mixing
of several parental genomes by hybridization and their extreme polyphagy and
agricultural success may be related to this hybridization, producing
transgressive variation on which natural selection acts. Studying RKN variation
via individual marker loci may fail due to the species' convoluted origins, and
multi-species population genomics is essential to understand the hybrid
diversity and adaptive variation of this important species complex. This
comparative genomic analysis provides a compelling example of the importance
and complexity of hybridization in generating animal species diversity more
generally.
| [
{
"created": "Wed, 26 Jun 2013 08:42:02 GMT",
"version": "v1"
}
] | 2013-06-27 | [
[
"Lunt",
"David H",
""
],
[
"Kumar",
"Sujai",
""
],
[
"Koutsovoulos",
"Georgios",
""
],
[
"Blaxter",
"Mark L",
""
]
] | Meloidogyne root knot nematodes (RKN) can infect most of the world's agricultural crop species and are among the most important of all plant pathogens. As yet however we have little understanding of their origins or the genomic basis of their extreme polyphagy. The most damaging pathogens reproduce by mitotic parthenogenesis and are suggested to originate by interspecific hybridizations between unknown parental taxa. We sequenced the genome of the diploid meiotic parthenogen Meloidogyne floridensis, and use a comparative genomic approach to test the hypothesis that it was involved in the hybrid origin of the tropical mitotic parthenogen M. incognita. Phylogenomic analysis of gene families from M. floridensis, M. incognita and an outgroup species M. hapla was used to trace the evolutionary history of these species' genomes, demonstrating that M. floridensis was one of the parental species in the hybrid origins of M. incognita. Analysis of the M. floridensis genome revealed many gene loci present in divergent copies, as they are in M. incognita, indicating that it too had a hybrid origin. The triploid M. incognita is shown to be a complex double-hybrid between M. floridensis and a third, unidentified parent. The agriculturally important RKN have very complex origins involving the mixing of several parental genomes by hybridization and their extreme polyphagy and agricultural success may be related to this hybridization, producing transgressive variation on which natural selection acts. Studying RKN variation via individual marker loci may fail due to the species' convoluted origins, and multi-species population genomics is essential to understand the hybrid diversity and adaptive variation of this important species complex. This comparative genomic analysis provides a compelling example of the importance and complexity of hybridization in generating animal species diversity more generally. |
1505.08014 | Giandomenico Sassi | Giandomenico Sassi, Nicola Sassi | Exploitation of the genomic double-strand breaks to reduce the
reproductive power of microorganisms | 13 pages, 1 figure | null | null | null | q-bio.GN physics.bio-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is shown how to take advantage of the frequent occurrence of double-strand
breaks in the genome of prokaryotic cells, in order to reduce their highly
efficient reproductive capability. The analysis examines the physical status of
the free ends of each break and considers how this status can interfere with an
external physical apparatus, with the aim of undermining the repair processes.
We indicate the biological consequences of this interaction and we give an
approximate evaluation of the topological and dynamical effects that arise on
the genomic material involved. The overall result suggests a significant
reduction of the dynamics of the repair.
| [
{
"created": "Fri, 29 May 2015 12:27:26 GMT",
"version": "v1"
}
] | 2015-06-01 | [
[
"Sassi",
"Giandomenico",
""
],
[
"Sassi",
"Nicola",
""
]
] | It is shown how to take advantage of the frequent occurrence of double-strand breaks in the genome of prokaryotic cells, in order to reduce their highly efficient reproductive capability. The analysis examines the physical status of the free ends of each break and considers how this status can interfere with an external physical apparatus, with the aim of undermining the repair processes. We indicate the biological consequences of this interaction and we give an approximate evaluation of the topological and dynamical effects that arise on the genomic material involved. The overall result suggests a significant reduction of the dynamics of the repair.
1204.6060 | Michael Deem | Keyao Pan and Michael W. Deem | Predicting Fixation Tendencies of the H3N2 Influenza Virus by Free
Energy Calculation | 5 tables and 4 figures | J. Chem. Theory Comput. 7 (2011) 1259-1272 | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influenza virus evolves to escape from immune system antibodies that bind to
it. We used free energy calculations with Einstein crystals as reference states
to calculate the difference of antibody binding free energy ($\Delta\Delta G$)
induced by amino acid substitution at each position in epitope B of the H3N2
influenza hemagglutinin, the key target for antibody. A substitution with
positive $\Delta\Delta G$ value decreases the antibody binding constant. On
average an uncharged to charged amino acid substitution generates the highest
$\Delta\Delta G$ values. Also on average, substitutions between small amino
acids generate $\Delta\Delta G$ values near to zero. The 21 sites in epitope B
have varying expected free energy differences for a random substitution.
Historical amino acid substitutions in epitope B for the A/Aichi/2/1968 strain
of influenza A show that most fixed and temporarily circulating substitutions
generate positive $\Delta\Delta G$ values. We propose that the observed pattern
of H3N2 virus evolution is affected by the free energy landscape, the mapping
from the free energy landscape to virus fitness landscape, and random genetic
drift of the virus. Monte Carlo simulations of virus evolution are presented to
support this view.
| [
{
"created": "Thu, 26 Apr 2012 20:57:58 GMT",
"version": "v1"
}
] | 2012-04-30 | [
[
"Pan",
"Keyao",
""
],
[
"Deem",
"Michael W.",
""
]
] | Influenza virus evolves to escape from immune system antibodies that bind to it. We used free energy calculations with Einstein crystals as reference states to calculate the difference of antibody binding free energy ($\Delta\Delta G$) induced by amino acid substitution at each position in epitope B of the H3N2 influenza hemagglutinin, the key target for antibody. A substitution with positive $\Delta\Delta G$ value decreases the antibody binding constant. On average, an uncharged-to-charged amino acid substitution generates the highest $\Delta\Delta G$ values. Also on average, substitutions between small amino acids generate $\Delta\Delta G$ values near zero. The 21 sites in epitope B have varying expected free energy differences for a random substitution. Historical amino acid substitutions in epitope B for the A/Aichi/2/1968 strain of influenza A show that most fixed and temporarily circulating substitutions generate positive $\Delta\Delta G$ values. We propose that the observed pattern of H3N2 virus evolution is affected by the free energy landscape, the mapping from the free energy landscape to virus fitness landscape, and random genetic drift of the virus. Monte Carlo simulations of virus evolution are presented to support this view. |
1004.4045 | Yukimichi Tamaki | Yukimichi Tamaki, Yu Kataoka, and Takashi Miyazaki | Bone regenerative potential of mesenchymal stem cells on a
micro-structured titanium processed by wire-type electric discharge machining | 6 pages, 4 figures | null | null | null | q-bio.CB physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new strategy of bone tissue engineering by mesenchymal stem cell
transplantation on titanium implants has drawn attention. The scaffold
properties of the titanium surface play an important role in the bone
regenerative potential of cells. The surface topography and chemistry are
postulated to be two major factors increasing the scaffold properties of
titanium implants. This study aimed to evaluate the osteogenic gene expression
of mesenchymal stem cells on titanium processed by wire-type electric discharge
machining. Some amount of roughness and distinctive irregular features were
observed on titanium processed by wire-type electric discharge machining. The
thickness of the suboxide layer grew concomitantly during the processing.
Since the thickness of the oxide film and the micro-topography allowed an improvement
of mRNA expression of cells, titanium processed by wire-type electric discharge
machining is a promising candidate for mesenchymal stem cell based functional
restoration of implants.
| [
{
"created": "Fri, 23 Apr 2010 03:07:57 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Apr 2010 00:58:24 GMT",
"version": "v2"
}
] | 2010-05-03 | [
[
"Tamaki",
"Yukimichi",
""
],
[
"Kataoka",
"Yu",
""
],
[
"Miyazaki",
"Takashi",
""
]
] | A new strategy of bone tissue engineering by mesenchymal stem cell transplantation on titanium implants has drawn attention. The scaffold properties of the titanium surface play an important role in the bone regenerative potential of cells. The surface topography and chemistry are postulated to be two major factors increasing the scaffold properties of titanium implants. This study aimed to evaluate the osteogenic gene expression of mesenchymal stem cells on titanium processed by wire-type electric discharge machining. Some amount of roughness and distinctive irregular features were observed on titanium processed by wire-type electric discharge machining. The thickness of the suboxide layer grew concomitantly during the processing. Since the thickness of the oxide film and the micro-topography allowed an improvement of mRNA expression of cells, titanium processed by wire-type electric discharge machining is a promising candidate for mesenchymal stem cell based functional restoration of implants. |
1410.7716 | Kyle Hickmann | Kyle S. Hickmann, Geoffrey Fairchild, Reid Priedhorsky, Nicholas
Generous, James M. Hyman, Alina Deshpande, Sara Y. Del Valle | Forecasting the 2013--2014 Influenza Season using Wikipedia | Second version. In previous version 2 figure references were
compiling wrong due to error in latex source | PLOS Comput. Biol., vol. 11, no. 5, p. e1004239, May 2015 | 10.1371/journal.pcbi.1004239 | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Infectious diseases are one of the leading causes of morbidity and mortality
around the world; thus, forecasting their impact is crucial for planning an
effective response strategy. According to the Centers for Disease Control and
Prevention (CDC), seasonal influenza affects between 5% and 20% of the U.S.
population and causes major economic impacts resulting from hospitalization and
absenteeism. Understanding influenza dynamics and forecasting its impact is
fundamental for developing prevention and mitigation strategies.
We combine modern data assimilation methods with Wikipedia access logs and
CDC influenza-like illness (ILI) reports to create a weekly forecast for
seasonal influenza. The methods are applied to the 2013--2014 influenza season
but are sufficiently general to forecast any disease outbreak, given incidence
or case count data. We adjust the initialization and parametrization of a
disease model and show that this allows us to determine systematic model bias.
In addition, we provide a way to determine where the model diverges from
observation and evaluate forecast accuracy.
Wikipedia article access logs are shown to be highly correlated with
historical ILI records and allow for accurate prediction of ILI data several
weeks before it becomes available. The results show that prior to the peak of
the flu season, our forecasting method projected the actual outcome with a high
probability. However, since our model does not account for re-infection or
multiple strains of influenza, the tail of the epidemic is not predicted well
after the peak of the flu season has passed.
| [
{
"created": "Wed, 22 Oct 2014 20:26:39 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Nov 2014 22:41:02 GMT",
"version": "v2"
}
] | 2015-05-18 | [
[
"Hickmann",
"Kyle S.",
""
],
[
"Fairchild",
"Geoffrey",
""
],
[
"Priedhorsky",
"Reid",
""
],
[
"Generous",
"Nicholas",
""
],
[
"Hyman",
"James M.",
""
],
[
"Deshpande",
"Alina",
""
],
[
"Del Valle",
"Sara Y.",
""
]
] | Infectious diseases are one of the leading causes of morbidity and mortality around the world; thus, forecasting their impact is crucial for planning an effective response strategy. According to the Centers for Disease Control and Prevention (CDC), seasonal influenza affects between 5% and 20% of the U.S. population and causes major economic impacts resulting from hospitalization and absenteeism. Understanding influenza dynamics and forecasting its impact is fundamental for developing prevention and mitigation strategies. We combine modern data assimilation methods with Wikipedia access logs and CDC influenza-like illness (ILI) reports to create a weekly forecast for seasonal influenza. The methods are applied to the 2013--2014 influenza season but are sufficiently general to forecast any disease outbreak, given incidence or case count data. We adjust the initialization and parametrization of a disease model and show that this allows us to determine systematic model bias. In addition, we provide a way to determine where the model diverges from observation and evaluate forecast accuracy. Wikipedia article access logs are shown to be highly correlated with historical ILI records and allow for accurate prediction of ILI data several weeks before it becomes available. The results show that prior to the peak of the flu season, our forecasting method projected the actual outcome with a high probability. However, since our model does not account for re-infection or multiple strains of influenza, the tail of the epidemic is not predicted well after the peak of the flu season has passed. |
1506.03008 | Brian Ginn | Brian R. Ginn | The thermodynamics of protein aggregation reactions may underpin the
enhanced metabolic efficiency associated with heterosis, some balancing
selection, and the evolution of ploidy levels | 40 pages; 5 figures. arXiv admin note: text overlap with
arXiv:1111.0360. Appendix added to discuss syngamy, polygenic adaptation, and
the evolution of recombination. Corrected a major word omission in the last
paragraph of Section 3.1. Fixed other typos. Added references. Added a
paragraph to Appendix A2 | Progress in Biophysics and Molecular Biology Volume 126, July
2017, Pages 1-21 | 10.1016/j.pbiomolbio.2017.01.005 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying the physical basis of heterosis (or hybrid vigor) has remained
elusive despite over a hundred years of research on the subject. The three main
theories of heterosis are dominance theory, overdominance theory, and epistasis
theory. Kacser and Burns (1981) identified the molecular basis of dominance,
which has greatly enhanced our understanding of its importance to heterosis.
This paper aims to explain how overdominance, and some features of epistasis,
can similarly emerge from the molecular dynamics of proteins. Possessing
multiple alleles at a gene locus results in the synthesis of different
allozymes at reduced concentrations. This in turn reduces the rate at which
each allozyme forms soluble oligomers, which are toxic and must be degraded,
because allozymes co-aggregate at low efficiencies. The model developed in this
paper will be used to explain how heterozygosity can impact the metabolic
efficiency of an organism. It can also explain why the viabilities of some
inbred lines seem to decline rapidly at high inbreeding coefficients (F > 0.5),
which may provide a physical basis for truncation selection for heterozygosity.
Finally, the model has implications for the ploidy level of organisms. It can
explain why polyploids are frequently found in environments where severe
physical stresses promote the formation of soluble oligomers. The model can
also explain why complex organisms, which need to synthesize aggregation-prone
proteins that contain intrinsically unstructured regions (IURs) and multiple
domains because they facilitate complex protein interaction networks (PINs),
tend to be diploid while haploidy tends to be restricted to relatively simple
organisms.
| [
{
"created": "Tue, 9 Jun 2015 17:17:36 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2019 20:23:32 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Jul 2019 02:43:04 GMT",
"version": "v3"
}
] | 2019-07-17 | [
[
"Ginn",
"Brian R.",
""
]
] | Identifying the physical basis of heterosis (or hybrid vigor) has remained elusive despite over a hundred years of research on the subject. The three main theories of heterosis are dominance theory, overdominance theory, and epistasis theory. Kacser and Burns (1981) identified the molecular basis of dominance, which has greatly enhanced our understanding of its importance to heterosis. This paper aims to explain how overdominance, and some features of epistasis, can similarly emerge from the molecular dynamics of proteins. Possessing multiple alleles at a gene locus results in the synthesis of different allozymes at reduced concentrations. This in turn reduces the rate at which each allozyme forms soluble oligomers, which are toxic and must be degraded, because allozymes co-aggregate at low efficiencies. The model developed in this paper will be used to explain how heterozygosity can impact the metabolic efficiency of an organism. It can also explain why the viabilities of some inbred lines seem to decline rapidly at high inbreeding coefficients (F > 0.5), which may provide a physical basis for truncation selection for heterozygosity. Finally, the model has implications for the ploidy level of organisms. It can explain why polyploids are frequently found in environments where severe physical stresses promote the formation of soluble oligomers. The model can also explain why complex organisms, which need to synthesize aggregation-prone proteins that contain intrinsically unstructured regions (IURs) and multiple domains because they facilitate complex protein interaction networks (PINs), tend to be diploid while haploidy tends to be restricted to relatively simple organisms. |
2407.21412 | Kevin Godin-Dubois | Kevin Godin-Dubois and Sylvain Cussat-Blanc and Yves Duthen | APOGeT: Automated Phylogeny over Geological Time-scales | Presented at ALife 2019 as part of the MethAL workshop | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | To tackle the challenge of producing tractable phylogenetic trees in contexts
where complete information is available, we introduce APOGeT: an online,
pluggable, clustering algorithm for a stream of genomes. It is designed to run
alongside a given experimental protocol with minimal interactions and
integration effort. From the genomic flow, it extracts and displays species'
boundaries and dynamics. Starting with a light introduction to the core idea of
this classification we discuss the requirements on the genomes and the
underlying processes of building species' identities and managing hybridism.
Though stemming from an ALife experimental setting, APOGeT ought not be limited
to this field but could be used by (and benefit from) a broader audience.
| [
{
"created": "Wed, 31 Jul 2024 07:59:23 GMT",
"version": "v1"
}
] | 2024-08-01 | [
[
"Godin-Dubois",
"Kevin",
""
],
[
"Cussat-Blanc",
"Sylvain",
""
],
[
"Duthen",
"Yves",
""
]
] | To tackle the challenge of producing tractable phylogenetic trees in contexts where complete information is available, we introduce APOGeT: an online, pluggable, clustering algorithm for a stream of genomes. It is designed to run alongside a given experimental protocol with minimal interactions and integration effort. From the genomic flow, it extracts and displays species' boundaries and dynamics. Starting with a light introduction to the core idea of this classification we discuss the requirements on the genomes and the underlying processes of building species' identities and managing hybridism. Though stemming from an ALife experimental setting, APOGeT ought not be limited to this field but could be used by (and benefit from) a broader audience. |
2201.09924 | Chi H. Mak | Chi H. Mak | Ab Initio Nucleic Acid Folding Simulations Using a Physics-Based
Atomistic Free Energy Model | Submitted to J. Chem. Phys | null | 10.1063/5.0086304 | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Performing full-resolution atomistic simulations of nucleic acid folding has
remained a challenge for biomolecular modeling. Understanding how nucleic acids
fold and how they transition between different folded structures as they unfold
and refold has important implications for biology. This paper reports a
theoretical model and computer simulation of the ab initio folding of DNA
inverted repeat sequences. The formulation is based on an all-atom
conformational model of the sugar-phosphate backbone via chain closure, and it
incorporates three major molecular-level driving forces - base stacking,
counterion-induced backbone self-interactions and base pairing - via separate
analytical theories designed to capture and reproduce the effects of the
solvent without requiring explicit water and ions in the simulation. To
accelerate computational throughput, a mixed numerical/analytical algorithm for
the calculation of the backbone conformational volume is incorporated into the
Monte Carlo simulation, and special stochastic sampling techniques were
employed to achieve the computational efficiency needed to fold nucleic acids
from scratch. This paper describes implementation details, benchmark results
and the advantages and technical challenges with this approach.
| [
{
"created": "Mon, 24 Jan 2022 19:31:26 GMT",
"version": "v1"
}
] | 2022-05-18 | [
[
"Mak",
"Chi H.",
""
]
] | Performing full-resolution atomistic simulations of nucleic acid folding has remained a challenge for biomolecular modeling. Understanding how nucleic acids fold and how they transition between different folded structures as they unfold and refold has important implications for biology. This paper reports a theoretical model and computer simulation of the ab initio folding of DNA inverted repeat sequences. The formulation is based on an all-atom conformational model of the sugar-phosphate backbone via chain closure, and it incorporates three major molecular-level driving forces - base stacking, counterion-induced backbone self-interactions and base pairing - via separate analytical theories designed to capture and reproduce the effects of the solvent without requiring explicit water and ions in the simulation. To accelerate computational throughput, a mixed numerical/analytical algorithm for the calculation of the backbone conformational volume is incorporated into the Monte Carlo simulation, and special stochastic sampling techniques were employed to achieve the computational efficiency needed to fold nucleic acids from scratch. This paper describes implementation details, benchmark results and the advantages and technical challenges with this approach. |
1601.00701 | Carlos Stein Naves De Brito | Carlos S. N. Brito, Wulfram Gerstner | Nonlinear Hebbian learning as a unifying principle in receptive field
formation | null | null | 10.1371/journal.pcbi.1005070 | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of sensory receptive fields has been modeled in the past by a
variety of models including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing
dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic
plasticity. Here we show that the above variety of approaches can all be
unified into a single common principle, namely Nonlinear Hebbian Learning. When
Nonlinear Hebbian Learning is applied to natural images, receptive field shapes
were strongly constrained by the input statistics and preprocessing, but
exhibited only modest variation across different choices of nonlinearities in
neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity is necessary for the development of localized receptive
fields. The analysis of alternative sensory modalities such as auditory models
or V2 development leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
statistics can account for many aspects of receptive field formation across
models and sensory modalities.
| [
{
"created": "Mon, 4 Jan 2016 23:35:41 GMT",
"version": "v1"
}
] | 2017-02-08 | [
[
"Brito",
"Carlos S. N.",
""
],
[
"Gerstner",
"Wulfram",
""
]
] | The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities. |
2104.10944 | Mar\'ia Vallet-Regi | M Gisbert-Garzaran, M Manzano, M Vallet-Regi | Self-immolative chemistry in nanomedicine | 31 pages, 10 figures | Chem. Eng. J. 340, 24-31 (2018) | 10.1016/j.cej.2017.12.098 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Self-Immolative Chemistry is based on the cascade of disassembling reactions
triggered by adequate stimulation and leading to the sequential release of
the smaller constituent elements. This review will focus on the possibilities
that this type of chemistry offers to nanomedicine research, which is an area
where the stimuli responsive behavior is always targeted. There are some
examples of the use of self-immolative chemistry for prodrugs or nanoparticles
for drug delivery, but there is still an exciting land of opportunities waiting
to be explored. This review aims to survey what has been done so far but,
most importantly, to inspire new research on self-immolative
chemistry in nanomedicine.
| [
{
"created": "Thu, 22 Apr 2021 09:15:17 GMT",
"version": "v1"
}
] | 2021-04-23 | [
[
"Gisbert-Garzaran",
"M",
""
],
[
"Manzano",
"M",
""
],
[
"Vallet-Regi",
"M",
""
]
] | Self-Immolative Chemistry is based on the cascade of disassembling reactions triggered by adequate stimulation and leading to the sequential release of the smaller constituent elements. This review will focus on the possibilities that this type of chemistry offers to nanomedicine research, which is an area where the stimuli responsive behavior is always targeted. There are some examples of the use of self-immolative chemistry for prodrugs or nanoparticles for drug delivery, but there is still an exciting land of opportunities waiting to be explored. This review aims to survey what has been done so far but, most importantly, to inspire new research on self-immolative chemistry in nanomedicine. |
2210.15029 | Qijun He | Christopher Barrett, Andrei Bura, Qijun He, Fenix Huang, Christian
Reidys | The arithmetic topology of genetic alignments | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel mathematical paradigm for the study of genetic variation
in sequence alignments. This framework originates from extending the notion of
pairwise relations, upon which current analysis is based, to k-ary
dissimilarity. This dissimilarity naturally leads to a generalization of
simplicial complexes by endowing simplices with weights, compatible with the
boundary operator. We introduce the notion of k-stances and dissimilarity
complex, the former encapsulating arithmetic as well as topological structure
expressing these k-ary relations. We study basic mathematical properties of
dissimilarity complexes and show how this approach captures an entirely new
layer of biologically relevant viral dynamics in the context of SARS-CoV-2 and
H1N1 flu genomic data.
| [
{
"created": "Wed, 26 Oct 2022 21:01:33 GMT",
"version": "v1"
}
] | 2022-10-28 | [
[
"Barrett",
"Christopher",
""
],
[
"Bura",
"Andrei",
""
],
[
"He",
"Qijun",
""
],
[
"Huang",
"Fenix",
""
],
[
"Reidys",
"Christian",
""
]
] | We propose a novel mathematical paradigm for the study of genetic variation in sequence alignments. This framework originates from extending the notion of pairwise relations, upon which current analysis is based, to k-ary dissimilarity. This dissimilarity naturally leads to a generalization of simplicial complexes by endowing simplices with weights, compatible with the boundary operator. We introduce the notion of k-stances and dissimilarity complex, the former encapsulating arithmetic as well as topological structure expressing these k-ary relations. We study basic mathematical properties of dissimilarity complexes and show how this approach captures an entirely new layer of biologically relevant viral dynamics in the context of SARS-CoV-2 and H1N1 flu genomic data. |
1210.7091 | Suk Keun Lee | Yeon Sook Kim, Dae Gwan Lee, Suk Keun Lee | Development of Hydrogen Bonding Magnetic Reaction-based Gene Regulation
through Cyclic Electromagnetic DNA Simulation in Double-Stranded DNA | Please, find two manuscripts "Development of Hydrogen Bonding
Magnetic Reaction-based Gene Regulation through Cyclic Electromagnetic DNA
Simulation in Double-Stranded DNA" and "Application of Cyclic Electromagnetic
DNA Simulation to Target Oncogenesis-related miRNAs and DNA Motifs: Changes
of Protein Signaling Pathway System in RAW 264.7 Cells" | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proton-magnetic reaction is commonly used in MRI machines with a strong
magnetic field of over 1 T, while this study hypothesized that the electron
magnetic reaction of hydrogen could affect the hydrogen bonds of
double-stranded DNA (dsDNA) at a low magnetic field below 0.01 T. The goal is
to develop a hydrogen bonding magnetic reaction-based gene regulation (HBMR-GR)
system. The polarities of DNA base pairs are derived from the relative
electrostatic charge between purines and pyrimidines, which become positively
and negatively charged, respectively. The Pyu dsDNAs with
pyrimidine(s)-purine(s) sequences, ds3T3A, ds3C3G, and ds3C3A, showed stronger
DNA hybridization potential, increased infrared absorption at 3400-3200 cm-1,
and a unique DNA conformation in HPLC analysis compared to the corresponding
Puy dsDNAs. To target the three-dimensional structure of dsDNA based on the DNA
base pair polarities, one can use cyclic electromagnetic DNA simulation (CEDS)
with approximately 25% efficiency for randomly oriented dsDNAs. CEDS was found
to induce sequence-specific hybridization of target oligo-dsDNAs in 0.005M NaCl
solution and sequence-specific conformation of oligo-dsDNAs in 0.1M NaCl
solution. It was found that the Pyu oligo-dsDNAs were more responsible for the
hybridization and conformational changes by CEDS than the Puy oligo-dsDNAs.
CEDS decreased ethidium bromide (EtBr) DNA intercalation and spermidine DNA
condensation depending on CEDS time in the binding assay. The results also
included that the Pyu oligo-dsDNAs were more responsible for CEDS by forming
stable and unique conformation of oligo-dsDNA than the Puy oligo-dsDNAs. Therefore, it is
postulated that the low-level HBMR-based CEDS can enhance the hybridization
potential of oligo-dsDNAs and subsequently lead to the unique DNA conformation
required for the initiation of various DNA functions.
| [
{
"created": "Fri, 26 Oct 2012 10:25:03 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Jun 2024 07:11:00 GMT",
"version": "v2"
}
] | 2024-07-02 | [
[
"Kim",
"Yeon Sook",
""
],
[
"Lee",
"Dae Gwan",
""
],
[
"Lee",
"Suk Keun",
""
]
] | The proton-magnetic reaction is commonly used in MRI machines with a strong magnetic field of over 1 T, while this study hypothesized that the electron magnetic reaction of hydrogen could affect the hydrogen bonds of double-stranded DNA (dsDNA) at a low magnetic field below 0.01 T. The goal is to develop a hydrogen bonding magnetic reaction-based gene regulation (HBMR-GR) system. The polarities of DNA base pairs are derived from the relative electrostatic charge between purines and pyrimidines, which become positively and negatively charged, respectively. The Pyu dsDNAs with pyrimidine(s)-purine(s) sequences, ds3T3A, ds3C3G, and ds3C3A, showed stronger DNA hybridization potential, increased infrared absorption at 3400-3200 cm-1, and a unique DNA conformation in HPLC analysis compared to the corresponding Puy dsDNAs. To target the three-dimensional structure of dsDNA based on the DNA base pair polarities, one can use cyclic electromagnetic DNA simulation (CEDS) with approximately 25% efficiency for randomly oriented dsDNAs. CEDS was found to induce sequence-specific hybridization of target oligo-dsDNAs in 0.005M NaCl solution and sequence-specific conformation of oligo-dsDNAs in 0.1M NaCl solution. It was found that the Pyu oligo-dsDNAs were more responsible for the hybridization and conformational changes by CEDS than the Puy oligo-dsDNAs. CEDS decreased ethidium bromide (EtBr) DNA intercalation and spermidine DNA condensation depending on CEDS time in the binding assay. The results also included that the Pyu oligo-dsDNAs were more responsible for CEDS by forming stable and unique conformation of oligo-dsDNA than the Puy oligo-dsDNAs. Therefore, it is postulated that the low-level HBMR-based CEDS can enhance the hybridization potential of oligo-dsDNAs and subsequently lead to the unique DNA conformation required for the initiation of various DNA functions. |
2010.01718 | Ivan Junier | Nelle Varoquaux, Virginia S. Lioy, Fr\'ed\'eric Boccard and Ivan
Junier | Computational tools for the multiscale analysis of Hi-C data in
bacterial chromosomes | In press in Methods in Molecular Biology (Hi-C Data Analysis: Methods
and Protocols) | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Just as in eukaryotes, high-throughput chromosome conformation capture (Hi-C)
data have revealed nested organizations of bacterial chromosomes into
overlapping interaction domains. In this chapter, we present a multiscale
analysis framework aiming at capturing and quantifying these properties. These
include both standard tools (e.g. contact laws) and novel ones such as an index
that allows identifying loci involved in domain formation independently of the
structuring scale at play. Our objective is two-fold. On the one hand, we aim
at providing a full, understandable Python/Jupyter-based code which can be used
by both computer scientists as well as biologists with no advanced
computational background. On the other hand, we discuss statistical issues
inherent to Hi-C data analysis, focusing more particularly on how to properly
assess the statistical significance of results. As a pedagogical example, we
analyze data produced in {\it Pseudomonas aeruginosa}, a model pathogenic
bacterium. All files (code and input data) can be found in a github
repository. We have also embedded the files into a Binder package so that the
full analysis can be run on any machine through the internet.
| [
{
"created": "Sun, 4 Oct 2020 23:34:21 GMT",
"version": "v1"
}
] | 2020-10-06 | [
[
"Varoquaux",
"Nelle",
""
],
[
"Lioy",
"Virginia S.",
""
],
[
"Boccard",
"Frédéric",
""
],
[
"Junier",
"Ivan",
""
]
] | Just as in eukaryotes, high-throughput chromosome conformation capture (Hi-C) data have revealed nested organizations of bacterial chromosomes into overlapping interaction domains. In this chapter, we present a multiscale analysis framework aiming at capturing and quantifying these properties. These include both standard tools (e.g. contact laws) and novel ones such as an index that allows identifying loci involved in domain formation independently of the structuring scale at play. Our objective is two-fold. On the one hand, we aim at providing a full, understandable Python/Jupyter-based code which can be used by both computer scientists as well as biologists with no advanced computational background. On the other hand, we discuss statistical issues inherent to Hi-C data analysis, focusing more particularly on how to properly assess the statistical significance of results. As a pedagogical example, we analyze data produced in {\it Pseudomonas aeruginosa}, a model pathogenic bacterium. All files (code and input data) can be found in a github repository. We have also embedded the files into a Binder package so that the full analysis can be run on any machine through the internet. |
1306.6124 | Surya Saha | Surya Saha, Magdalen Lindeberg | Bound to succeed: Transcription factor binding site prediction and its
contribution to understanding virulence and environmental adaptation in
bacterial plant pathogens | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacterial plant pathogens rely on a battalion of transcription factors to
fine-tune their response to changing environmental conditions and marshal the
genetic resources required for successful pathogenesis. Prediction of
transcription factor binding sites represents an important tool for elucidating
regulatory networks, and has been conducted in multiple genera of plant
pathogenic bacteria for the purpose of better understanding mechanisms of
survival and pathogenesis. The major categories of transcription factor binding
sites that have been characterized are reviewed here with emphasis on in silico
methods used for site identification and challenges therein, their
applicability to different types of sequence datasets, and insights into
mechanisms of virulence and survival that have been gained through binding site
mapping. An improved strategy for establishing E value cutoffs when using
existing models to screen uncharacterized genomes is also discussed.
| [
{
"created": "Wed, 26 Jun 2013 03:48:38 GMT",
"version": "v1"
}
] | 2013-06-27 | [
[
"Saha",
"Surya",
""
],
[
"Lindeberg",
"Magdalen",
""
]
] | Bacterial plant pathogens rely on a battalion of transcription factors to fine-tune their response to changing environmental conditions and marshal the genetic resources required for successful pathogenesis. Prediction of transcription factor binding sites represents an important tool for elucidating regulatory networks, and has been conducted in multiple genera of plant pathogenic bacteria for the purpose of better understanding mechanisms of survival and pathogenesis. The major categories of transcription factor binding sites that have been characterized are reviewed here with emphasis on in silico methods used for site identification and challenges therein, their applicability to different types of sequence datasets, and insights into mechanisms of virulence and survival that have been gained through binding site mapping. An improved strategy for establishing E value cutoffs when using existing models to screen uncharacterized genomes is also discussed. |
1103.0451 | Roland Koberle | Ingrid M. Esteves, Nelson M. Fernandes, Roland K\"oberle | How to take turns: the fly's way to encode and decode rotational
information | 16 pages including 5 figures | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensory systems take continuously varying stimuli as their input and encode
features relevant for the organism's survival into a sequence of action
potentials - spike trains. The full dynamic range of complex dynamical inputs
has to be compressed into a set of discrete spike times and the question,
facing any sensory system, arises: which features of the stimulus are thereby
encoded and how does the animal decode them to recover its external sensory
world?
Here we study this issue for the two motion-sensitive H1 neurons of the fly's
optical system, which are sensitive to horizontal velocity stimuli, each neuron
responding to oppositely pointing preferred directions. They constitute an
efficient detector for rotations of the fly's body about a vertical axis.
Surprisingly the spike trains $\rho_B(t)$ generated by an impoverished stimulus
$S_B(t)$, containing just the instants when the velocity $S(t)$ reverses its
direction, convey the same amount of global (Shannon) information as spike
trains $\rho(t)$ generated by the complete stimulus $S(t)$. This amount of
information is just enough to encode the instants of velocity reversal. Yet
this suffices to give the motor system just one, yet vital order: go left or
right, turning the H1 neurons into efficient analog-to-digital converters.
Furthermore, the probability distributions computed from $\rho(t)$ and
$\rho_B(t)$ are identical. Still there are regions in the spike trains
following velocity reversals, 80 msec long and containing about 3-6 msec long
spike intervals, where detailed stimulus properties are encoded. We suggest a
decoding scheme - how to reconstruct the stimulus from the spike train, which
is fast and works in real time.
| [
{
"created": "Wed, 2 Mar 2011 15:11:59 GMT",
"version": "v1"
}
] | 2011-03-03 | [
[
"Esteves",
"Ingrid M.",
""
],
[
"Fernandes",
"Nelson M.",
""
],
[
"Köberle",
"Roland",
""
]
] | Sensory systems take continuously varying stimuli as their input and encode features relevant for the organism's survival into a sequence of action potentials - spike trains. The full dynamic range of complex dynamical inputs has to be compressed into a set of discrete spike times and the question, facing any sensory system, arises: which features of the stimulus are thereby encoded and how does the animal decode them to recover its external sensory world? Here we study this issue for the two motion-sensitive H1 neurons of the fly's optical system, which are sensitive to horizontal velocity stimuli, each neuron responding to oppositely pointing preferred directions. They constitute an efficient detector for rotations of the fly's body about a vertical axis. Surprisingly the spike trains $\rho_B(t)$ generated by an impoverished stimulus $S_B(t)$, containing just the instants when the velocity $S(t)$ reverses its direction, convey the same amount of global (Shannon) information as spike trains $\rho(t)$ generated by the complete stimulus $S(t)$. This amount of information is just enough to encode the instants of velocity reversal. Yet this suffices to give the motor system just one, yet vital order: go left or right, turning the H1 neurons into efficient analog-to-digital converters. Furthermore, the probability distributions computed from $\rho(t)$ and $\rho_B(t)$ are identical. Still there are regions in the spike trains following velocity reversals, 80 msec long and containing about 3-6 msec long spike intervals, where detailed stimulus properties are encoded. We suggest a decoding scheme - how to reconstruct the stimulus from the spike train, which is fast and works in real time. |
1812.11850 | Diego Ulisse Pizzagalli | Diego Ulisse Pizzagalli, Santiago Fernandez Gonzalez, Rolf Krause | A shortest-path based clustering algorithm for joint human-machine
analysis of complex datasets | null | null | 10.1126/sciadv.aax377 | null | q-bio.QM cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is a technique for the analysis of datasets obtained by empirical
studies in several disciplines with a major application for biomedical
research. Essentially, clustering algorithms are executed by machines aiming at
finding groups of related points in a dataset. However, the result of grouping
depends on both metrics for point-to-point similarity and rules for
point-to-group association. Indeed, non-appropriate metrics and rules can lead
to undesirable clustering artifacts. This is especially relevant for datasets,
where groups with heterogeneous structures co-exist. In this work, we propose
an algorithm that achieves clustering by exploring the paths between points.
This allows both evaluating the properties of the path (such as gaps, density
variations, etc.) and expressing the preference for certain paths. Moreover,
our algorithm supports the integration of existing knowledge about admissible
and non-admissible clusters by training a path classifier. We demonstrate the
accuracy of the proposed method on challenging datasets including points from
synthetic shapes in publicly available benchmarks and microscopy data.
| [
{
"created": "Mon, 31 Dec 2018 15:50:53 GMT",
"version": "v1"
}
] | 2022-10-27 | [
[
"Pizzagalli",
"Diego Ulisse",
""
],
[
"Gonzalez",
"Santiago Fernandez",
""
],
[
"Krause",
"Rolf",
""
]
] | Clustering is a technique for the analysis of datasets obtained by empirical studies in several disciplines with a major application for biomedical research. Essentially, clustering algorithms are executed by machines aiming at finding groups of related points in a dataset. However, the result of grouping depends on both metrics for point-to-point similarity and rules for point-to-group association. Indeed, non-appropriate metrics and rules can lead to undesirable clustering artifacts. This is especially relevant for datasets, where groups with heterogeneous structures co-exist. In this work, we propose an algorithm that achieves clustering by exploring the paths between points. This allows both evaluating the properties of the path (such as gaps, density variations, etc.) and expressing the preference for certain paths. Moreover, our algorithm supports the integration of existing knowledge about admissible and non-admissible clusters by training a path classifier. We demonstrate the accuracy of the proposed method on challenging datasets including points from synthetic shapes in publicly available benchmarks and microscopy data. |
2403.00767 | Evgeniy Bolbasov | Semen Goreninskii, Ulyana Chernova, Elisaveta Prosetskaya, Alina
Laushkina, Alexander Mishanin, Alexey Golovkin, Evgeny Bolbasov | Single-channel and multi-channel electrospinning for the fabrication of
PLA/PCL tissue engineering scaffolds: comparative study of the materials
physicochemical and biological properties | null | null | null | null | q-bio.TO cond-mat.mtrl-sci physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Fabrication of tissue engineering scaffolds with tailored physicochemical and
biological characteristics is a relevant task in biomedical engineering. The
present work was focused on evaluating the effect of the fabrication
approach (single-channel or multi-channel electrospinning) on the properties of
the fabricated poly(lactic acid)(PLA)/poly(epsilon-caprolactone)(PCL) scaffolds
with various polymer mass ratios (1/0, 2/1, 1/1, 1/2, and 0/1). The scaffolds
with the same morphology (regardless of electrospinning variant) were
fabricated and characterized using SEM, water contact angle measurement, FTIR,
XRD, tensile testing and an in vitro experiment with multipotent mesenchymal
stem cells. It was demonstrated that multi-channel electrospinning prevents
intermolecular interactions between the polymer components of the scaffold,
preserving their crystal structure, which affects the mechanical
characteristics of the scaffold (in particular, leading to a 2-fold difference
in elongation).
Better adhesion of multipotent mesenchymal stem cells on the surface of the
scaffolds fabricated using multichannel electrospinning was demonstrated.
| [
{
"created": "Fri, 9 Feb 2024 07:28:12 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Goreninskii",
"Semen",
""
],
[
"Chernova",
"Ulyana",
""
],
[
"Prosetskaya",
"Elisaveta",
""
],
[
"Laushkina",
"Alina",
""
],
[
"Mishanin",
"Alexander",
""
],
[
"Golovkin",
"Alexey",
""
],
[
"Bolbasov",
"Evgeny",
""
]
] | Fabrication of tissue engineering scaffolds with tailored physicochemical and biological characteristics is a relevant task in biomedical engineering. The present work was focused on evaluating the effect of the fabrication approach (single-channel or multi-channel electrospinning) on the properties of the fabricated poly(lactic acid)(PLA)/poly(epsilon-caprolactone)(PCL) scaffolds with various polymer mass ratios (1/0, 2/1, 1/1, 1/2, and 0/1). The scaffolds with the same morphology (regardless of electrospinning variant) were fabricated and characterized using SEM, water contact angle measurement, FTIR, XRD, tensile testing and an in vitro experiment with multipotent mesenchymal stem cells. It was demonstrated that multi-channel electrospinning prevents intermolecular interactions between the polymer components of the scaffold, preserving their crystal structure, which affects the mechanical characteristics of the scaffold (in particular, leading to a 2-fold difference in elongation). Better adhesion of multipotent mesenchymal stem cells on the surface of the scaffolds fabricated using multichannel electrospinning was demonstrated. |
2004.04614 | Markus M\"uller | Markus M\"uller, Peter M. Derlet, Christopher Mudry, and Gabriel
Aeppli | Using random testing to manage a safe exit from the COVID-19 lockdown | 18 pages, 6 figures, 2 appendices. Phys. Biol. (2020) | null | 10.1088/1478-3975/aba6d0 | null | q-bio.PE cond-mat.other cond-mat.stat-mech physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We argue that frequent sampling of the fraction of infected people (either by
random testing or by analysis of sewage water), is central to managing the
COVID-19 pandemic because it both measures in real time the key variable
controlled by restrictive measures, and anticipates the load on the healthcare
system due to progression of the disease. Knowledge of random testing outcomes
will (i) significantly improve the predictability of the pandemic, (ii) allow
informed and optimized decisions on how to modify restrictive measures, with
much shorter delay times than the present ones, and (iii) enable the real-time
assessment of the efficiency of new means to reduce transmission rates.
Here we suggest, irrespective of the size of a suitably homogeneous
population, a conservative estimate of 15000 for the number of randomly tested
people per day which will suffice to obtain reliable data about the current
fraction of infections and its evolution in time, thus enabling close to
real-time assessment of the quantitative effect of restrictive measures. Still
higher testing capacity permits detection of geographical differences in
spreading rates. Furthermore and most importantly, with daily sampling in
place, a reboot could be attempted while the fraction of infected people is
still an order of magnitude higher than the level required for a relaxation of
restrictions with testing focused on symptomatic individuals. This is
demonstrated by considering a feedback and control model of mitigation where
the feed-back is derived from noisy sampling data.
| [
{
"created": "Thu, 9 Apr 2020 15:57:41 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Apr 2020 12:45:49 GMT",
"version": "v2"
}
] | 2020-07-24 | [
[
"Müller",
"Markus",
""
],
[
"Derlet",
"Peter M.",
""
],
[
"Mudry",
"Christopher",
""
],
[
"Aeppli",
"Gabriel",
""
]
] | We argue that frequent sampling of the fraction of infected people (either by random testing or by analysis of sewage water), is central to managing the COVID-19 pandemic because it both measures in real time the key variable controlled by restrictive measures, and anticipates the load on the healthcare system due to progression of the disease. Knowledge of random testing outcomes will (i) significantly improve the predictability of the pandemic, (ii) allow informed and optimized decisions on how to modify restrictive measures, with much shorter delay times than the present ones, and (iii) enable the real-time assessment of the efficiency of new means to reduce transmission rates. Here we suggest, irrespective of the size of a suitably homogeneous population, a conservative estimate of 15000 for the number of randomly tested people per day which will suffice to obtain reliable data about the current fraction of infections and its evolution in time, thus enabling close to real-time assessment of the quantitative effect of restrictive measures. Still higher testing capacity permits detection of geographical differences in spreading rates. Furthermore and most importantly, with daily sampling in place, a reboot could be attempted while the fraction of infected people is still an order of magnitude higher than the level required for a relaxation of restrictions with testing focused on symptomatic individuals. This is demonstrated by considering a feedback and control model of mitigation where the feed-back is derived from noisy sampling data. |
2107.04318 | Gyorgy Abrusan | Gyorgy Abrusan, David B. Ascher, Michael Inouye | Known allosteric proteins have central roles in genetic disease | null | null | 10.1371/journal.pcbi.1009806 | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Allostery is a form of protein regulation, where ligands that bind sites
located apart from the active site can modify the activity of the protein. The
molecular mechanisms of allostery have been extensively studied, because
allosteric sites are less conserved than active sites, and drugs targeting them
are more specific than drugs binding the active sites. Here we quantify the
importance of allostery in genetic disease. We show that 1) known allosteric
proteins are central in disease networks, and contribute to genetic disease and
comorbidities much more than non-allosteric proteins, in many major disease
types like hematopoietic diseases, cardiovascular diseases, cancers, diabetes,
or diseases of the central nervous system. 2) variants from cancer genome-wide
association studies are enriched near allosteric proteins, indicating their
importance to polygenic traits; and 3) the importance of allosteric proteins in
disease is due, at least partly, to their central positions in protein-protein
interaction networks, and probably not due to their dynamical properties.
| [
{
"created": "Fri, 9 Jul 2021 09:13:19 GMT",
"version": "v1"
}
] | 2022-04-06 | [
[
"Abrusan",
"Gyorgy",
""
],
[
"Ascher",
"David B.",
""
],
[
"Inouye",
"Michael",
""
]
] | Allostery is a form of protein regulation, where ligands that bind sites located apart from the active site can modify the activity of the protein. The molecular mechanisms of allostery have been extensively studied, because allosteric sites are less conserved than active sites, and drugs targeting them are more specific than drugs binding the active sites. Here we quantify the importance of allostery in genetic disease. We show that 1) known allosteric proteins are central in disease networks, and contribute to genetic disease and comorbidities much more than non-allosteric proteins, in many major disease types like hematopoietic diseases, cardiovascular diseases, cancers, diabetes, or diseases of the central nervous system. 2) variants from cancer genome-wide association studies are enriched near allosteric proteins, indicating their importance to polygenic traits; and 3) the importance of allosteric proteins in disease is due, at least partly, to their central positions in protein-protein interaction networks, and probably not due to their dynamical properties. |
2003.04805 | Razvan Marinescu | Razvan V. Marinescu | Modelling the Neuroanatomical Progression of Alzheimer's Disease and
Posterior Cortical Atrophy | PhD thesis; Defended in Jan 2019 at University College London | null | null | null | q-bio.QM q-bio.NC stat.AP | http://creativecommons.org/licenses/by/4.0/ | In order to find effective treatments for Alzheimer's disease (AD), we need
to identify subjects at risk of AD as early as possible. To this end, recently
developed disease progression models can be used to perform early diagnosis, as
well as predict the subjects' disease stages and future evolution. However,
these models have not yet been applied to rare neurodegenerative diseases, are
not suitable to understand the complex dynamics of biomarkers, work only on
large multimodal datasets, and their predictive performance has not been
objectively validated. In this work I developed novel models of disease
progression and applied them to estimate the progression of Alzheimer's disease
and Posterior Cortical atrophy, a rare neurodegenerative syndrome causing
visual deficits. My first contribution is a study on the progression of
Posterior Cortical Atrophy, using models already developed: the Event-based
Model (EBM) and the Differential Equation Model (DEM). My second contribution
is the development of DIVE, a novel spatio-temporal model of disease
progression that estimates fine-grained spatial patterns of pathology,
potentially enabling us to understand complex disease mechanisms relating to
pathology propagation along brain networks. My third contribution is the
development of Disease Knowledge Transfer (DKT), a novel disease progression
model that estimates the multimodal progression of rare neurodegenerative
diseases from limited, unimodal datasets, by transferring information from
larger, multimodal datasets of typical neurodegenerative diseases. My fourth
contribution is the development of novel extensions for the EBM and the DEM,
and the development of novel measures for performance evaluation of such
models. My last contribution is the organization of the TADPOLE challenge, a
competition which aims to identify algorithms and features that best predict
the evolution of AD.
| [
{
"created": "Sat, 29 Feb 2020 21:59:52 GMT",
"version": "v1"
}
] | 2020-03-11 | [
[
"Marinescu",
"Razvan V.",
""
]
] | In order to find effective treatments for Alzheimer's disease (AD), we need to identify subjects at risk of AD as early as possible. To this end, recently developed disease progression models can be used to perform early diagnosis, as well as predict the subjects' disease stages and future evolution. However, these models have not yet been applied to rare neurodegenerative diseases, are not suitable to understand the complex dynamics of biomarkers, work only on large multimodal datasets, and their predictive performance has not been objectively validated. In this work I developed novel models of disease progression and applied them to estimate the progression of Alzheimer's disease and Posterior Cortical atrophy, a rare neurodegenerative syndrome causing visual deficits. My first contribution is a study on the progression of Posterior Cortical Atrophy, using models already developed: the Event-based Model (EBM) and the Differential Equation Model (DEM). My second contribution is the development of DIVE, a novel spatio-temporal model of disease progression that estimates fine-grained spatial patterns of pathology, potentially enabling us to understand complex disease mechanisms relating to pathology propagation along brain networks. My third contribution is the development of Disease Knowledge Transfer (DKT), a novel disease progression model that estimates the multimodal progression of rare neurodegenerative diseases from limited, unimodal datasets, by transferring information from larger, multimodal datasets of typical neurodegenerative diseases. My fourth contribution is the development of novel extensions for the EBM and the DEM, and the development of novel measures for performance evaluation of such models. My last contribution is the organization of the TADPOLE challenge, a competition which aims to identify algorithms and features that best predict the evolution of AD. |
1805.06487 | Gustavo Caetano-Anolles | Derek Caetano-Anoll\'es, Kelsey Caetano-Anoll\'es and Gustavo
Caetano-Anoll\'es | Evolution of macromolecular structure: a 'double tale' of biological
accretion | null | Science Progress 101(4):360-383, 2018 | 10.3184/003685018X15379391431599 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of structure in biology is driven by accretion and change.
Accretion brings together disparate parts to form bigger wholes. Change
provides opportunities for growth and innovation. Here we review patterns and
processes that are responsible for a 'double tale' of evolutionary accretion at
various levels of complexity, from proteins and nucleic acids to high-rise
building structures in cities. Parts are at first weakly linked and associate
variously. As they diversify, they compete with each other and are selected for
performance. The emerging interactions constrain their structure and
associations. This causes parts to self-organize into modules with tight
linkage. In a second phase, variants of the modules evolve and become new parts
for a new generative cycle of higher-level organization. Evolutionary genomics
and network biology support the 'double tale' of structural module creation and
validate an evolutionary principle of maximum abundance that drives the gain
and loss of modules.
| [
{
"created": "Mon, 30 Apr 2018 02:01:58 GMT",
"version": "v1"
}
] | 2019-01-16 | [
[
"Caetano-Anollés",
"Derek",
""
],
[
"Caetano-Anollés",
"Kelsey",
""
],
[
"Caetano-Anollés",
"Gustavo",
""
]
] | The evolution of structure in biology is driven by accretion and change. Accretion brings together disparate parts to form bigger wholes. Change provides opportunities for growth and innovation. Here we review patterns and processes that are responsible for a 'double tale' of evolutionary accretion at various levels of complexity, from proteins and nucleic acids to high-rise building structures in cities. Parts are at first weakly linked and associate variously. As they diversify, they compete with each other and are selected for performance. The emerging interactions constrain their structure and associations. This causes parts to self-organize into modules with tight linkage. In a second phase, variants of the modules evolve and become new parts for a new generative cycle of higher-level organization. Evolutionary genomics and network biology support the 'double tale' of structural module creation and validate an evolutionary principle of maximum abundance that drives the gain and loss of modules. |
1812.06182 | Sherry Towers | Sherry Towers, Linda J.S. Allen, Fred Brauer, Baltazar Espinoza | Assessing the impact of non-vaccinators: quantifying the average length
of infection chains in outbreaks of vaccine-preventable disease | 19 pages, 4 figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analytical expressions for the basic reproduction number, R0, have been
obtained in the past for a wide variety of mathematical models for infectious
disease spread, along with expressions for the expected final size of an
outbreak. However, what has so far not been studied is the average number of
infections that descend down the chains of infection begun by each of the
individuals infected in an outbreak (we refer to this quantity as the "average
number of descendant infections" per infectious individual, or ANDI). ANDI
includes not only the number of people that an individual directly contacts and
infects, but also the number of people that those go on to infect, and so on
until that particular chain of infection dies out. Quantification of ANDI has
relevance to the vaccination debate, since with ANDI one can calculate the
probability that one or more people are hospitalised (or die) from a disease
down an average chain of infection descending from an infected un-vaccinated
individual. Here we obtain estimates of ANDI using both deterministic and
stochastic modelling formalisms. With both formalisms we find that even for
relatively small community sizes and under most scenarios for R0 and initial
fraction vaccinated, ANDI can be surprisingly large when the effective
reproduction number is >1, leading to high probabilities of adverse outcomes
for one or more people down an average chain of infection in outbreaks of
diseases like measles.
| [
{
"created": "Mon, 17 Dec 2018 16:27:42 GMT",
"version": "v1"
}
] | 2018-12-18 | [
[
"Towers",
"Sherry",
""
],
[
"Allen",
"Linda J. S.",
""
],
[
"Brauer",
"Fred",
""
],
[
"Espinoza",
"Baltazar",
""
]
] | Analytical expressions for the basic reproduction number, R0, have been obtained in the past for a wide variety of mathematical models for infectious disease spread, along with expressions for the expected final size of an outbreak. However, what has so far not been studied is the average number of infections that descend down the chains of infection begun by each of the individuals infected in an outbreak (we refer to this quantity as the "average number of descendant infections" per infectious individual, or ANDI). ANDI includes not only the number of people that an individual directly contacts and infects, but also the number of people that those go on to infect, and so on until that particular chain of infection dies out. Quantification of ANDI has relevance to the vaccination debate, since with ANDI one can calculate the probability that one or more people are hospitalised (or die) from a disease down an average chain of infection descending from an infected un-vaccinated individual. Here we obtain estimates of ANDI using both deterministic and stochastic modelling formalisms. With both formalisms we find that even for relatively small community sizes and under most scenarios for R0 and initial fraction vaccinated, ANDI can be surprisingly large when the effective reproduction number is >1, leading to high probabilities of adverse outcomes for one or more people down an average chain of infection in outbreaks of diseases like measles. |
0807.3300 | Jan Biro Dr | Jan C. Biro | Correlation between nucleotide composition and folding energy of coding
sequences with special attention to wobble bases | 14 pages including 6 figures and 1 table | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The secondary structure and complexity of mRNA influences its
accessibility to regulatory molecules (proteins, micro-RNAs), its stability and
its level of expression. The mobile elements of the RNA sequence, the wobble
bases, are expected to regulate the formation of structures encompassing coding
sequences.
Results: The sequence/folding energy (FE) relationship was studied by
statistical, bioinformatic methods in 90 CDS containing 26,370 codons. I found
that the FE (dG) associated with coding sequences is significant and negative
(407 kcal/1000 bases, mean +/- S.E.M.) indicating that these sequences are able
to form structures. However, the FE has only a small free component, less than
10% of the total. The contribution of the 1st and 3rd codon bases to the FE is
larger than the contribution of the 2nd (central) bases. It is possible to
achieve a ~ 4-fold change in FE by altering the wobble bases in synonymous
codons. The sequence/FE relationship can be described with a simple algorithm,
and the total FE can be predicted solely from the sequence composition of the
nucleic acid. The contributions of different synonymous codons to the FE are
additive and one codon cannot replace another. The accumulated contributions of
synonymous codons of an amino acid to the total folding energy of an mRNA are
strongly correlated to the relative amount of that amino acid in the translated
protein.
Conclusion: Synonymous codons are not interchangeable with regard to their
role in determining the mRNA FE and the relative amounts of amino acids in the
translated protein, even if they are indistinguishable in respect of amino acid
coding.
| [
{
"created": "Mon, 21 Jul 2008 16:27:46 GMT",
"version": "v1"
}
] | 2008-07-22 | [
[
"Biro",
"Jan C.",
""
]
] | Background: The secondary structure and complexity of mRNA influences its accessibility to regulatory molecules (proteins, micro-RNAs), its stability and its level of expression. The mobile elements of the RNA sequence, the wobble bases, are expected to regulate the formation of structures encompassing coding sequences. Results: The sequence/folding energy (FE) relationship was studied by statistical, bioinformatic methods in 90 CDS containing 26,370 codons. I found that the FE (dG) associated with coding sequences is significant and negative (407 kcal/1000 bases, mean +/- S.E.M.) indicating that these sequences are able to form structures. However, the FE has only a small free component, less than 10% of the total. The contribution of the 1st and 3rd codon bases to the FE is larger than the contribution of the 2nd (central) bases. It is possible to achieve a ~ 4-fold change in FE by altering the wobble bases in synonymous codons. The sequence/FE relationship can be described with a simple algorithm, and the total FE can be predicted solely from the sequence composition of the nucleic acid. The contributions of different synonymous codons to the FE are additive and one codon cannot replace another. The accumulated contributions of synonymous codons of an amino acid to the total folding energy of an mRNA are strongly correlated to the relative amount of that amino acid in the translated protein. Conclusion: Synonymous codons are not interchangeable with regard to their role in determining the mRNA FE and the relative amounts of amino acids in the translated protein, even if they are indistinguishable in respect of amino acid coding. |
0808.1233 | Henri Orland | Jean-Louis Sikorav, Henri Orland, Alan Braslau | Mechanism of thermal renaturation and hybridization of nucleic acids:
Kramers process and universality in Watson-Crick base pairing | To be published March 26, 2009 in J. Chem. Phys. B | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Renaturation and hybridization reactions lead to the pairing of complementary
single-stranded nucleic acids. We present here a theoretical investigation of
the mechanism of these reactions in vitro under thermal conditions (dilute
solutions of single-stranded chains, in the presence of molar concentrations of
monovalent salts and at elevated temperatures). The mechanism follows a
Kramers' process, whereby the complementary chains overcome a potential barrier
through Brownian motion. The barrier originates from a single rate-limiting
nucleation event in which the first complementary base pairs are formed. The
reaction then proceeds through a fast growth of the double helix. For the DNA
of bacteriophages T7, T4 and $\phi$X174 as well as for Escherichia coli DNA,
the bimolecular rate $k_2$ of the reaction increases as a power law of the
average degree of polymerization $<N>$ of the reacting single-strands: $k_2
\propto <N>^\alpha$. This relationship holds for $100 \leq <N> \leq 50 000$ with
an experimentally determined exponent $\alpha = 0.51 \pm 0.01$. The length
dependence results from a thermodynamic excluded-volume effect. The reacting
single-stranded chains are predicted to be in universal good solvent
conditions, and the scaling law is determined by the relevant equilibrium
monomer contact probability. The value theoretically predicted for the exponent
is $\alpha = 1-\nu \theta_2$, where $\nu$ is Flory's swelling exponent ($\nu
\approx 0.588$) and $\theta_2$ is a critical exponent introduced by des
Cloizeaux ($\theta_2 \approx 0.82$), yielding $\alpha = 0.52 \pm 0.01$, in
agreement with the experimental results.
| [
{
"created": "Fri, 8 Aug 2008 15:34:51 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Jan 2009 10:42:25 GMT",
"version": "v2"
}
] | 2009-01-05 | [
[
"Sikorav",
"Jean-Louis",
""
],
[
"Orland",
"Henri",
""
],
[
"Braslau",
"Alan",
""
]
] | Renaturation and hybridization reactions lead to the pairing of complementary single-stranded nucleic acids. We present here a theoretical investigation of the mechanism of these reactions in vitro under thermal conditions (dilute solutions of single-stranded chains, in the presence of molar concentrations of monovalent salts and at elevated temperatures). The mechanism follows a Kramers' process, whereby the complementary chains overcome a potential barrier through Brownian motion. The barrier originates from a single rate-limiting nucleation event in which the first complementary base pairs are formed. The reaction then proceeds through a fast growth of the double helix. For the DNA of bacteriophages T7, T4 and $\phi$X174 as well as for Escherichia coli DNA, the bimolecular rate $k_2$ of the reaction increases as a power law of the average degree of polymerization $<N>$ of the reacting single-strands: $k_2 \propto <N>^\alpha$. This relationship holds for $100 \leq <N> \leq 50 000$ with an experimentally determined exponent $\alpha = 0.51 \pm 0.01$. The length dependence results from a thermodynamic excluded-volume effect. The reacting single-stranded chains are predicted to be in universal good solvent conditions, and the scaling law is determined by the relevant equilibrium monomer contact probability. The value theoretically predicted for the exponent is $\alpha = 1-\nu \theta_2$, where $\nu$ is Flory's swelling exponent ($\nu \approx 0.588$) and $\theta_2$ is a critical exponent introduced by des Cloizeaux ($\theta_2 \approx 0.82$), yielding $\alpha = 0.52 \pm 0.01$, in agreement with the experimental results. |
1508.00150 | Carina Curto | Carina Curto, Elizabeth Gross, Jack Jeffries, Katherine Morrison,
Mohamed Omar, Zvi Rosen, Anne Shiu, and Nora Youngs | What makes a neural code convex? | 25 pages, 9 figures, and 2 tables. Supplementary Text begins on page
17. Accepted to SIAM Journal on Applied Algebra and Geometry (SIAGA) | null | null | null | q-bio.NC math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural codes allow the brain to represent, process, and store information
about the world. Combinatorial codes, comprised of binary patterns of neural
activity, encode information via the collective behavior of populations of
neurons. A code is called convex if its codewords correspond to regions defined
by an arrangement of convex open sets in Euclidean space. Convex codes have
been observed experimentally in many brain areas, including sensory cortices
and the hippocampus, where neurons exhibit convex receptive fields. What makes
a neural code convex? That is, how can we tell from the intrinsic structure of
a code if there exists a corresponding arrangement of convex open sets? In this
work, we provide a complete characterization of local obstructions to
convexity. This motivates us to define max intersection-complete codes, a
family guaranteed to have no local obstructions. We then show how our
characterization enables one to use free resolutions of Stanley-Reisner ideals
in order to detect violations of convexity. Taken together, these results
provide a significant advance in understanding the intrinsic combinatorial
properties of convex codes.
| [
{
"created": "Sat, 1 Aug 2015 17:20:44 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Nov 2015 17:31:09 GMT",
"version": "v2"
},
{
"created": "Tue, 3 May 2016 18:53:18 GMT",
"version": "v3"
},
{
"created": "Wed, 21 Dec 2016 18:53:42 GMT",
"version": "v4"
}
] | 2016-12-22 | [
[
"Curto",
"Carina",
""
],
[
"Gross",
"Elizabeth",
""
],
[
"Jeffries",
"Jack",
""
],
[
"Morrison",
"Katherine",
""
],
[
"Omar",
"Mohamed",
""
],
[
"Rosen",
"Zvi",
""
],
[
"Shiu",
"Anne",
""
],
[
"Youngs",
"Nora",
""
]
] | Neural codes allow the brain to represent, process, and store information about the world. Combinatorial codes, comprised of binary patterns of neural activity, encode information via the collective behavior of populations of neurons. A code is called convex if its codewords correspond to regions defined by an arrangement of convex open sets in Euclidean space. Convex codes have been observed experimentally in many brain areas, including sensory cortices and the hippocampus, where neurons exhibit convex receptive fields. What makes a neural code convex? That is, how can we tell from the intrinsic structure of a code if there exists a corresponding arrangement of convex open sets? In this work, we provide a complete characterization of local obstructions to convexity. This motivates us to define max intersection-complete codes, a family guaranteed to have no local obstructions. We then show how our characterization enables one to use free resolutions of Stanley-Reisner ideals in order to detect violations of convexity. Taken together, these results provide a significant advance in understanding the intrinsic combinatorial properties of convex codes. |
1612.09486 | Adam Mahdi | Adam Mahdi, Erica Rutter, Stephen J. Payne | Effects of non-physiological blood pressure artefacts on measures of
cerebral autoregulation | 9 pages | null | null | null | q-bio.TO physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cerebral autoregulation refers to regulation mechanisms that aim to maintain
cerebral blood flow approximately constant. It is often assessed by
autoregulation index (ARI), which uses arterial blood pressure and cerebral
blood flow velocity time series to produce a ten-scale index of autoregulation
performance (0 denoting the absence of and 9 the strongest autoregulation).
Unfortunately, data are rarely free from various artefacts. Here, we consider
four of the most common non-physiological blood pressure artefacts (saturation,
square wave, reduced pulse pressure and impulse) and study their effects on ARI
for a range of different artefact sizes. We show that a sufficiently large
saturation and square wave always result in ARI reaching the maximum value of
9. The pulse pressure reduction and impulse artefact lead to a more diverse
behaviour. Finally, we characterised the critical size of artefacts, defined as
the minimum artefact size that, on average, leads to a 10\% deviation of ARI.
| [
{
"created": "Thu, 29 Dec 2016 21:46:02 GMT",
"version": "v1"
}
] | 2017-01-02 | [
[
"Mahdi",
"Adam",
""
],
[
"Rutter",
"Erica",
""
],
[
"Payne",
"Stephen J.",
""
]
] | Cerebral autoregulation refers to regulation mechanisms that aim to maintain cerebral blood flow approximately constant. It is often assessed by autoregulation index (ARI), which uses arterial blood pressure and cerebral blood flow velocity time series to produce a ten-scale index of autoregulation performance (0 denoting the absence of and 9 the strongest autoregulation). Unfortunately, data are rarely free from various artefacts. Here, we consider four of the most common non-physiological blood pressure artefacts (saturation, square wave, reduced pulse pressure and impulse) and study their effects on ARI for a range of different artefact sizes. We show that a sufficiently large saturation and square wave always result in ARI reaching the maximum value of 9. The pulse pressure reduction and impulse artefact lead to a more diverse behaviour. Finally, we characterised the critical size of artefacts, defined as the minimum artefact size that, on average, leads to a 10\% deviation of ARI |
1106.2311 | Jes\'us M. Mir\'o-Bueno | Jes\'us M. Mir\'o-Bueno and Alfonso Rodr\'iguez-Pat\'on | A simple negative interaction in the positive transcriptional feedback
of a single gene is sufficient to produce reliable oscillations | 25 pages, 12 figures, 3 tables | PLoS ONE 6(11): e27414. (2011) | 10.1371/journal.pone.0027414 | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Negative and positive transcriptional feedback loops are present in natural
and synthetic genetic oscillators. A single gene with negative transcriptional
feedback needs a time delay and sufficiently strong nonlinearity in the
transmission of the feedback signal in order to produce biochemical rhythms. A
single gene with only positive transcriptional feedback does not produce
oscillations. Here, we demonstrate that this single-gene network in conjunction
with a simple negative interaction can also easily produce rhythms. We examine
a model comprised of two well-differentiated parts. The first is a positive
feedback created by a protein that binds to the promoter of its own gene and
activates the transcription. The second is a negative interaction in which a
repressor molecule prevents this protein from binding to its promoter. A
stochastic study shows that the system is robust to noise. A deterministic
study identifies that the dynamics of the oscillator are mainly driven by two
types of biomolecules: the protein, and the complex formed by the repressor and
this protein. The main conclusion of this paper is that a simple and usual
negative interaction, such as degradation, sequestration or inhibition, acting
on the positive transcriptional feedback of a single gene is a sufficient
condition to produce reliable oscillations. One gene is enough and the positive
transcriptional feedback signal does not need to activate a second repressor
gene. This means that at the genetic level an explicit negative feedback loop
is not necessary. The model needs neither cooperative binding reactions nor the
formation of protein multimers. Therefore, our findings could help to clarify
the design principles of cellular clocks and constitute a new efficient tool
for engineering synthetic genetic oscillators.
| [
{
"created": "Sun, 12 Jun 2011 14:10:56 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jun 2011 18:48:18 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Nov 2011 19:49:50 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Dec 2011 15:51:43 GMT",
"version": "v4"
}
] | 2011-12-23 | [
[
"Miró-Bueno",
"Jesús M.",
""
],
[
"Rodríguez-Patón",
"Alfonso",
""
]
] | Negative and positive transcriptional feedback loops are present in natural and synthetic genetic oscillators. A single gene with negative transcriptional feedback needs a time delay and sufficiently strong nonlinearity in the transmission of the feedback signal in order to produce biochemical rhythms. A single gene with only positive transcriptional feedback does not produce oscillations. Here, we demonstrate that this single-gene network in conjunction with a simple negative interaction can also easily produce rhythms. We examine a model comprised of two well-differentiated parts. The first is a positive feedback created by a protein that binds to the promoter of its own gene and activates the transcription. The second is a negative interaction in which a repressor molecule prevents this protein from binding to its promoter. A stochastic study shows that the system is robust to noise. A deterministic study identifies that the dynamics of the oscillator are mainly driven by two types of biomolecules: the protein, and the complex formed by the repressor and this protein. The main conclusion of this paper is that a simple and usual negative interaction, such as degradation, sequestration or inhibition, acting on the positive transcriptional feedback of a single gene is a sufficient condition to produce reliable oscillations. One gene is enough and the positive transcriptional feedback signal does not need to activate a second repressor gene. This means that at the genetic level an explicit negative feedback loop is not necessary. The model needs neither cooperative binding reactions nor the formation of protein multimers. Therefore, our findings could help to clarify the design principles of cellular clocks and constitute a new efficient tool for engineering synthetic genetic oscillators. |
2406.02381 | Marc Harary | Marc Harary and Chengxin Zhang | Kirigami: large convolutional kernels improve deep learning-based RNA
secondary structure prediction | -Updated authorship and acknowledgements | null | null | null | q-bio.BM cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce a novel fully convolutional neural network (FCN) architecture
for predicting the secondary structure of ribonucleic acid (RNA) molecules.
Interpreting RNA structures as weighted graphs, we employ deep learning to
estimate the probability of base pairing between nucleotide residues. Unique to
our model are its massive 11-pixel kernels, which we argue provide a distinct
advantage for FCNs on the specialized domain of RNA secondary structures. On a
widely adopted, standardized test set comprised of 1,305 molecules, the
accuracy of our method exceeds that of current state-of-the-art (SOTA)
secondary structure prediction software, achieving a Matthews Correlation
Coefficient (MCC) over 11-40% higher than that of other leading methods on
overall structures and 58-400% higher on pseudoknots specifically.
| [
{
"created": "Tue, 4 Jun 2024 14:58:10 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 14:04:32 GMT",
"version": "v2"
}
] | 2024-06-07 | [
[
"Harary",
"Marc",
""
],
[
"Zhang",
"Chengxin",
""
]
] | We introduce a novel fully convolutional neural network (FCN) architecture for predicting the secondary structure of ribonucleic acid (RNA) molecules. Interpreting RNA structures as weighted graphs, we employ deep learning to estimate the probability of base pairing between nucleotide residues. Unique to our model are its massive 11-pixel kernels, which we argue provide a distinct advantage for FCNs on the specialized domain of RNA secondary structures. On a widely adopted, standardized test set comprised of 1,305 molecules, the accuracy of our method exceeds that of current state-of-the-art (SOTA) secondary structure prediction software, achieving a Matthews Correlation Coefficient (MCC) over 11-40% higher than that of other leading methods on overall structures and 58-400% higher on pseudoknots specifically. |
1512.09160 | Mark Catherall | Mark Catherall | Modelling the Role of Nitric Oxide in Cerebral Autoregulation | Thesis submitted for Doctor of Philosophy in Engineering at the
University of Oxford, October 2014 | null | null | null | q-bio.TO q-bio.CB q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Malfunction of the system which regulates the bloodflow in the brain is a
major cause of stroke and dementia, costing many lives and many billions of
pounds each year in the UK alone. This regulatory system, known as cerebral
autoregulation, has been the subject of much experimental and mathematical
investigation yet our understanding of it is still quite limited. One area in
which our understanding is particularly lacking is that of the role of nitric
oxide, understood to be a potent vasodilator. The interactions of nitric oxide
with the better understood myogenic response remain un-modelled and poorly
understood. In this thesis we present a novel model of the arteriolar control
mechanism, comprising a mixture of well-established and new models of
individual processes, brought together for the first time. We show that this
model is capable of reproducing experimentally observed behaviour very closely
and go on to investigate its stability in the context of the vasculature of the
whole brain. In conclusion we find that nitric oxide, although it plays a
central role in determining equilibrium vessel radius, is unimportant to the
dynamics of the system and its responses to variation in arterial blood
pressure. We also find that the stability of the system is very sensitive to
the dynamics of Ca$^{2+}$ within the muscle cell, and that self-sustaining
Ca$^{2+}$ waves are not necessary to cause whole-vessel radius oscillations
consistent with vasomotion.
| [
{
"created": "Wed, 30 Dec 2015 21:54:35 GMT",
"version": "v1"
}
] | 2016-01-01 | [
[
"Catherall",
"Mark",
""
]
] | Malfunction of the system which regulates the bloodflow in the brain is a major cause of stroke and dementia, costing many lives and many billions of pounds each year in the UK alone. This regulatory system, known as cerebral autoregulation, has been the subject of much experimental and mathematical investigation yet our understanding of it is still quite limited. One area in which our understanding is particularly lacking is that of the role of nitric oxide, understood to be a potent vasodilator. The interactions of nitric oxide with the better understood myogenic response remain un-modelled and poorly understood. In this thesis we present a novel model of the arteriolar control mechanism, comprising a mixture of well-established and new models of individual processes, brought together for the first time. We show that this model is capable of reproducing experimentally observed behaviour very closely and go on to investigate its stability in the context of the vasculature of the whole brain. In conclusion we find that nitric oxide, although it plays a central role in determining equilibrium vessel radius, is unimportant to the dynamics of the system and its responses to variation in arterial blood pressure. We also find that the stability of the system is very sensitive to the dynamics of Ca$^{2+}$ within the muscle cell, and that self-sustaining Ca$^{2+}$ waves are not necessary to cause whole-vessel radius oscillations consistent with vasomotion. |
2308.06967 | Yuqing Wu | Yingchao Li, Yuqing Wu, Suolin Li, Lin Liu, Xiaoyi Zhang, Jiaxun Lv,
Qinqin Li | Intestinal Microecology in Pediatric Surgery-Related Gastrointestinal
Diseases: Current Insights and Future Perspectives | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intestinal microecology is established from birth and is constantly changing
until homeostasis is reached. Intestinal microecology is involved in the immune
inflammatory response of the intestine and regulates the intestinal barrier
function. The imbalance of intestinal microecology is closely related to the
occurrence and development of digestive system diseases. In some
gastrointestinal diseases related to pediatric surgery, intestinal microecology
and its metabolites undergo a series of changes, which can provide a certain
basis for the diagnosis of diseases. The continuous development of
microecological agents and fecal microbiota transplantation technology has
provided a new means for its clinical treatment. We review the relationship
between pathogenesis, diagnosis and treatment of pediatric surgery-related
gastrointestinal diseases and intestinal microecology, in order to provide new
ideas and methods for clinical diagnosis, treatment and research.
| [
{
"created": "Mon, 14 Aug 2023 06:55:39 GMT",
"version": "v1"
}
] | 2023-08-15 | [
[
"Li",
"Yingchao",
""
],
[
"Wu",
"Yuqing",
""
],
[
"Li",
"Suolin",
""
],
[
"Liu",
"Lin",
""
],
[
"Zhang",
"Xiaoyi",
""
],
[
"Lv",
"Jiaxun",
""
],
[
"Li",
"Qinqin",
""
]
] | Intestinal microecology is established from birth and is constantly changing until homeostasis is reached. Intestinal microecology is involved in the immune inflammatory response of the intestine and regulates the intestinal barrier function. The imbalance of intestinal microecology is closely related to the occurrence and development of digestive system diseases. In some gastrointestinal diseases related to pediatric surgery, intestinal microecology and its metabolites undergo a series of changes, which can provide a certain basis for the diagnosis of diseases. The continuous development of microecological agents and fecal microbiota transplantation technology has provided a new means for its clinical treatment. We review the relationship between pathogenesis, diagnosis and treatment of pediatric surgery-related gastrointestinal diseases and intestinal microecology, in order to provide new ideas and methods for clinical diagnosis, treatment and research. |
0711.4476 | Marco Morelli | M. J. Morelli, S. Tanase-Nicola, R.J. Allen, P.R. ten Wolde | Reaction coordinates for the flipping of genetic switches | 24 pages, 7 figures | null | 10.1529/biophysj.107.116699 | null | q-bio.MN q-bio.QM | null | We present a detailed analysis, based on the Forward Flux Sampling (FFS)
simulation method, of the switching dynamics and stability of two models of
genetic toggle switches, consisting of two mutually-repressing genes encoding
transcription factors (TFs); in one model (the exclusive switch), they mutually
exclude each other's binding, while in the other model (general switch) the two
transcription factors can bind simultaneously to the shared operator region. We
assess the role of two pairs of reactions that influence the stability of these
switches: TF-TF homodimerisation and TF-DNA association/dissociation. We
factorise the flipping rate k into the product of the probability rho(q*) of
finding the system at the dividing surface (separatrix) between the two stable
states, and a kinetic prefactor R. In the case of the exclusive switch, the
rate of TF-operator binding affects both rho(q*) and R, while the rate of TF
dimerisation affects only R. In the case of the general switch both TF-operator
binding and TF dimerisation affect k, R and rho(q*). To elucidate this, we
analyse the transition state ensemble (TSE). For the exclusive switch, varying
the rate of TF-operator binding can drastically change the pathway of
switching, while changing the rate of dimerisation changes the switching rate
without altering the mechanism. The switching pathways of the general switch
are highly robust to changes in the rate constants of both TF-operator and
TF-TF binding, even though these rate constants do affect the flipping rate;
this feature is unique for non-equilibrium systems.
| [
{
"created": "Wed, 28 Nov 2007 12:55:21 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Morelli",
"M. J.",
""
],
[
"Tanase-Nicola",
"S.",
""
],
[
"Allen",
"R. J.",
""
],
[
"Wolde",
"P. R. ten",
""
]
] | We present a detailed analysis, based on the Forward Flux Sampling (FFS) simulation method, of the switching dynamics and stability of two models of genetic toggle switches, consisting of two mutually-repressing genes encoding transcription factors (TFs); in one model (the exclusive switch), they mutually exclude each other's binding, while in the other model (general switch) the two transcription factors can bind simultaneously to the shared operator region. We assess the role of two pairs of reactions that influence the stability of these switches: TF-TF homodimerisation and TF-DNA association/dissociation. We factorise the flipping rate k into the product of the probability rho(q*) of finding the system at the dividing surface (separatrix) between the two stable states, and a kinetic prefactor R. In the case of the exclusive switch, the rate of TF-operator binding affects both rho(q*) and R, while the rate of TF dimerisation affects only R. In the case of the general switch both TF-operator binding and TF dimerisation affect k, R and rho(q*). To elucidate this, we analyse the transition state ensemble (TSE). For the exclusive switch, varying the rate of TF-operator binding can drastically change the pathway of switching, while changing the rate of dimerisation changes the switching rate without altering the mechanism. The switching pathways of the general switch are highly robust to changes in the rate constants of both TF-operator and TF-TF binding, even though these rate constants do affect the flipping rate; this feature is unique for non-equilibrium systems. |
1707.07189 | Shahzad Ahmed Mr. | M. Usman Ali, Shahzad Ahmed, Javed Ferzund, Atif Mehmood, Abbas Rehman | Using PCA and Factor Analysis for Dimensionality Reduction of
Bio-informatics Data | 12 pages, 11 figures, 2 tables | International Journal of Advanced Computer Science and
Applications (IJACSA), Volume 8 Issue 5, 2017 | 10.14569/IJACSA.2017.080551 | null | q-bio.OT cs.CE | http://creativecommons.org/licenses/by/4.0/ | A large volume of Genomics data is produced on a daily basis due to the
advancement in sequencing technology. This data is of no value if it is not
properly analysed. Different kinds of analytics are required to extract useful
information from this raw data. Classification, Prediction, Clustering and
Pattern Extraction are useful techniques of data mining. These techniques
require appropriate selection of attributes of data for getting accurate
results. However, Bioinformatics data is high dimensional, usually having
hundreds of attributes. Such a large number of attributes affects the
performance of machine learning algorithms used for classification/prediction.
So, dimensionality reduction techniques are required to reduce the number of
attributes that can be further used for analysis. In this paper, Principal
Component Analysis and Factor Analysis are used for dimensionality reduction of
Bioinformatics data. These techniques were applied to the Leukaemia data set and
the number of attributes was reduced from to.
| [
{
"created": "Sat, 22 Jul 2017 16:21:12 GMT",
"version": "v1"
}
] | 2017-07-25 | [
[
"Ali",
"M. Usman",
""
],
[
"Ahmed",
"Shahzad",
""
],
[
"Ferzund",
"Javed",
""
],
[
"Mehmood",
"Atif",
""
],
[
"Rehman",
"Abbas",
""
]
] | A large volume of Genomics data is produced on a daily basis due to the advancement in sequencing technology. This data is of no value if it is not properly analysed. Different kinds of analytics are required to extract useful information from this raw data. Classification, Prediction, Clustering and Pattern Extraction are useful techniques of data mining. These techniques require appropriate selection of attributes of data for getting accurate results. However, Bioinformatics data is high dimensional, usually having hundreds of attributes. Such a large number of attributes affects the performance of machine learning algorithms used for classification/prediction. So, dimensionality reduction techniques are required to reduce the number of attributes that can be further used for analysis. In this paper, Principal Component Analysis and Factor Analysis are used for dimensionality reduction of Bioinformatics data. These techniques were applied to the Leukaemia data set and the number of attributes was reduced from to. |
2007.05180 | Ajitesh Srivastava | Ajitesh Srivastava, Tianjian Xu, Viktor K. Prasanna | Fast and Accurate Forecasting of COVID-19 Deaths Using the SIkJ$\alpha$
Model | Fixed a typo | null | null | null | q-bio.PE cs.LG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Forecasting the effect of COVID-19 is essential to design policies that may
prepare us to handle the pandemic. Many methods have already been proposed,
particularly, to forecast reported cases and deaths at country-level and
state-level. Many of these methods are based on traditional epidemiological
models which rely on simulations or Bayesian inference to simultaneously learn
many parameters at a time. This makes them prone to over-fitting and slow
execution. We propose an extension to our model SIkJ$\alpha$ to forecast deaths
and show that it can consider the effect of many complexities of the epidemic
process and yet be simplified to a few parameters that are learned using fast
linear regressions. We also present an evaluation of our method against seven
approaches currently being used by the CDC, based on their two weeks forecast
at various times during the pandemic. We demonstrate that our method achieves
better root mean squared error compared to these seven approaches during
the majority of the evaluation period. Further, on a 2-core desktop machine, our
approach takes only 3.18s to tune hyper-parameters, learn parameters and
generate 100 days of forecasts of reported cases and deaths for all the states
in the US. The total execution time for 184 countries is 11.83s and for all the
US counties ($>$ 3000) is 101.03s.
| [
{
"created": "Fri, 10 Jul 2020 06:01:03 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jul 2020 03:34:48 GMT",
"version": "v2"
}
] | 2020-07-14 | [
[
"Srivastava",
"Ajitesh",
""
],
[
"Xu",
"Tianjian",
""
],
[
"Prasanna",
"Viktor K.",
""
]
] | Forecasting the effect of COVID-19 is essential to design policies that may prepare us to handle the pandemic. Many methods have already been proposed, particularly, to forecast reported cases and deaths at country-level and state-level. Many of these methods are based on traditional epidemiological model which rely on simulations or Bayesian inference to simultaneously learn many parameters at a time. This makes them prone to over-fitting and slow execution. We propose an extension to our model SIkJ$\alpha$ to forecast deaths and show that it can consider the effect of many complexities of the epidemic process and yet be simplified to a few parameters that are learned using fast linear regressions. We also present an evaluation of our method against seven approaches currently being used by the CDC, based on their two weeks forecast at various times during the pandemic. We demonstrate that our method achieves better root mean squared error compared to these seven approaches during majority of the evaluation period. Further, on a 2 core desktop machine, our approach takes only 3.18s to tune hyper-parameters, learn parameters and generate 100 days of forecasts of reported cases and deaths for all the states in the US. The total execution time for 184 countries is 11.83s and for all the US counties ($>$ 3000) is 101.03s. |
1006.0507 | Wouter-Jan Rappel | Bo Hu, Wen Chen, Wouter-Jan Rappel and Herbert Levine | Determining the accuracy of spatial gradient sensing using statistical
mechanics | 4 pages, 2 figures | null | null | null | q-bio.CB physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many eukaryotic cells are able to sense chemical gradients by directly
measuring spatial concentration differences. The precision of such gradient
sensing is limited by fluctuations in the binding of diffusing particles to
specific receptors on the cell surface. Here, we explore the physical limits of
the spatial sensing mechanism by modeling the chemotactic cell as an Ising spin
chain subject to a spatially varying field. This allows us to derive the
maximum likelihood estimators of the gradient parameters as well as explicit
expressions for their asymptotic uncertainties. The accuracy increases with the
cell's size and our results demonstrate that this accuracy can be further increased
by introducing a non-zero cooperativity between neighboring receptors. Thus,
consistent with recent experimental data, it is possible for small bacteria to
perform spatial measurements of gradients.
| [
{
"created": "Wed, 2 Jun 2010 21:54:15 GMT",
"version": "v1"
}
] | 2010-06-04 | [
[
"Hu",
"Bo",
""
],
[
"Chen",
"Wen",
""
],
[
"Rappel",
"Wouter-Jan",
""
],
[
"Levine",
"Herbert",
""
]
] | Many eukaryotic cells are able to sense chemical gradients by directly measuring spatial concentration differences. The precision of such gradient sensing is limited by fluctuations in the binding of diffusing particles to specific receptors on the cell surface. Here, we explore the physical limits of the spatial sensing mechanism by modeling the chemotactic cell as an Ising spin chain subject to a spatially varying field. This allows us to derive the maximum likelihood estimators of the gradient parameters as well as explicit expressions for their asymptotic uncertainties. The accuracy increases with the cell's size and our results demonstrate that this accuracy can be further increased by introducing a non-zero cooperativity between neighboring receptors. Thus, consistent with recent experimental data, it is possible for small bacteria to perform spatial measurements of gradients. |
2003.09204 | Michael Baake | Gernot Akemann, Michael Baake, Nayden Chakarov, Oliver Kr\"uger, Adam
Mielke, Meinolf Ottensmann, Rebecca Werdehausen | Territorial behaviour of buzzards versus random matrix spacing
distributions | 13 pages, with many figures; revised and slightly enlarged version | J. Theor. Biol. 509 (2021), 110475:1-7 | 10.1016/j.jtbi.2020.110475 | null | q-bio.PE math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A deeper understanding of the processes underlying the distribution of
animals in space is crucial for both basic and applied ecology. The Common
buzzard (Buteo buteo) is a highly aggressive, territorial bird of prey that
interacts strongly with its intra- and interspecific competitors. We propose
and use random matrix theory to quantify the strength and range of repulsion as
a function of the buzzard population density, thus providing a novel approach
to model density dependence. As an indicator of territorial behaviour, we
perform a large-scale analysis of the distribution of buzzard nests in an area
of $300$ square kilometres around the Teutoburger Wald, Germany, as gathered
over a period of $20$ years. The nearest and next-to-nearest neighbour spacing
distribution between nests is compared to the two-dimensional Poisson
distribution, originating from uncorrelated random variables, to the complex
eigenvalues of random matrices, which are strongly correlated, and to a
two-dimensional Coulomb gas interpolating between these two. A one-parameter
fit to a time-moving average reveals a significant increase of repulsion
between neighbouring nests, as a function of the observed increase in absolute
population density over the monitored period of time, thereby providing an
unexpected yet simple model for density-dependent spacing of predator
territories. A similar effect is obtained for next-to-nearest neighbours,
albeit with weaker repulsion, indicating a short-range interaction. Our results
show that random matrix theory might be useful in the context of population
ecology.
| [
{
"created": "Fri, 20 Mar 2020 11:32:59 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Nov 2020 15:51:35 GMT",
"version": "v2"
}
] | 2020-11-04 | [
[
"Akemann",
"Gernot",
""
],
[
"Baake",
"Michael",
""
],
[
"Chakarov",
"Nayden",
""
],
[
"Krüger",
"Oliver",
""
],
[
"Mielke",
"Adam",
""
],
[
"Ottensmann",
"Meinolf",
""
],
[
"Werdehausen",
"Rebecca",
""
]
] | A deeper understanding of the processes underlying the distribution of animals in space is crucial for both basic and applied ecology. The Common buzzard (Buteo buteo) is a highly aggressive, territorial bird of prey that interacts strongly with its intra- and interspecific competitors. We propose and use random matrix theory to quantify the strength and range of repulsion as a function of the buzzard population density, thus providing a novel approach to model density dependence. As an indicator of territorial behaviour, we perform a large-scale analysis of the distribution of buzzard nests in an area of $300$ square kilometres around the Teutoburger Wald, Germany, as gathered over a period of $20$ years. The nearest and next-to-nearest neighbour spacing distribution between nests is compared to the two-dimensional Poisson distribution, originating from uncorrelated random variables, to the complex eigenvalues of random matrices, which are strongly correlated, and to a two-dimensional Coulomb gas interpolating between these two. A one-parameter fit to a time-moving average reveals a significant increase of repulsion between neighbouring nests, as a function of the observed increase in absolute population density over the monitored period of time, thereby providing an unexpected yet simple model for density-dependent spacing of predator territories. A similar effect is obtained for next-to-nearest neighbours, albeit with weaker repulsion, indicating a short-range interaction. Our results show that random matrix theory might be useful in the context of population ecology. |
1106.6107 | Naoki Masuda Dr. | Shoma Tanabe and Naoki Masuda | Evolution of cooperation facilitated by reinforcement learning with
adaptive aspiration levels | 8 figures and 1 table | Journal of Theoretical Biology, 293, 151-160 (2012) | 10.1016/j.jtbi.2011.10.020 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Repeated interaction between individuals is the main mechanism for
maintaining cooperation in social dilemma situations. Variants of tit-for-tat
(repeating the previous action of the opponent) and the win-stay lose-shift
strategy are known as strong competitors in iterated social dilemma games. On
the other hand, real repeated interaction generally allows plasticity (i.e.,
learning) of individuals based on the experience of the past. Although
plasticity is relevant to various biological phenomena, its role in repeated
social dilemma games is relatively unexplored. In particular, if
experience-based learning plays a key role in promotion and maintenance of
cooperation, learners should evolve in the contest with nonlearners under
selection pressure. By modeling players using a simple reinforcement learning
model, we numerically show that learning enables the evolution of cooperation.
We also show that numerically estimated adaptive dynamics appositely predict
the outcome of evolutionary simulations. The analysis of the adaptive dynamics
enables us to capture the obtained results as an affirmative example of the
Baldwin effect, where learning accelerates the evolution to optimality.
| [
{
"created": "Thu, 30 Jun 2011 02:26:04 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Nov 2011 08:45:39 GMT",
"version": "v2"
}
] | 2011-11-08 | [
[
"Tanabe",
"Shoma",
""
],
[
"Masuda",
"Naoki",
""
]
] | Repeated interaction between individuals is the main mechanism for maintaining cooperation in social dilemma situations. Variants of tit-for-tat (repeating the previous action of the opponent) and the win-stay lose-shift strategy are known as strong competitors in iterated social dilemma games. On the other hand, real repeated interaction generally allows plasticity (i.e., learning) of individuals based on the experience of the past. Although plasticity is relevant to various biological phenomena, its role in repeated social dilemma games is relatively unexplored. In particular, if experience-based learning plays a key role in promotion and maintenance of cooperation, learners should evolve in the contest with nonlearners under selection pressure. By modeling players using a simple reinforcement learning model, we numerically show that learning enables the evolution of cooperation. We also show that numerically estimated adaptive dynamics appositely predict the outcome of evolutionary simulations. The analysis of the adaptive dynamics enables us to capture the obtained results as an affirmative example of the Baldwin effect, where learning accelerates the evolution to optimality. |
2112.10670 | Anna Madra | Anna Madra, Alex YS. Lin, Daniel J. Vanselow, Keith C. Cheng | An adaptively optimized algorithm for counting nuclei in X-ray micro-CT
scans of whole organisms | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living organisms are primarily made of cells. Identifying them and
characterizing their geometry and spatial distribution is a first step towards
building multi-scale models of these biomaterials. We propose a method to count
cells using nuclei in an X-ray microtomographic scan of a zebrafish. To account
for scanning artifacts and partial volume effect, the method is adaptively
calibrated using parameters approximated from the manifold of manually selected
and optimized special cases. The methodology is tested on nuclei in the eyes of
zebrafish larvae of different ages.
| [
{
"created": "Mon, 20 Dec 2021 16:53:30 GMT",
"version": "v1"
}
] | 2021-12-21 | [
[
"Madra",
"Anna",
""
],
[
"Lin",
"Alex YS.",
""
],
[
"Vanselow",
"Daniel J.",
""
],
[
"Cheng",
"Keith C.",
""
]
] | Living organisms are primarily made of cells. Identifying them and characterizing their geometry and spatial distribution is a first step towards building multi-scale models of these biomaterials. We propose a method to count cells using nuclei in an X-ray microtomographic scan of a zebrafish. To account for scanning artifacts and partial volume effect, the method is adaptively calibrated using parameters approximated from the manifold of manually selected and optimized special cases. The methodology is tested on nuclei in the eyes of zebrafish larvae of different ages. |
2009.02816 | Jae Moon | Jae Moon, Silvia Orlandi, Tom Chau | A comparison of oscillatory characteristics in covert speech and speech
perception | 22 pages, 10 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Covert speech, the silent production of words in the mind, has been studied
increasingly to understand and decode thoughts. This task has often been
compared to speech perception as it brings about similar topographical
activation patterns in common brain areas. In studies of speech comprehension,
neural oscillations are thought to play a key role in the sampling of speech at
varying temporal scales. However, very little is known about the role of
oscillations in covert speech. In this study, we aimed to determine to what
extent each oscillatory frequency band is used to process words in covert
speech and speech perception tasks. Secondly, we asked whether the {\theta} and
{\gamma} activity in the two tasks are related through phase-amplitude coupling
(PAC). First, continuous wavelet transform was performed on epoched signals and
subsequently two-tailed t-tests between two classes were conducted to determine
statistical distinctions in frequency and time. While the perception task
dynamically uses all frequencies with more prominent {\theta} and {\gamma}
activity, the covert task favoured higher frequencies with significantly higher
{\gamma} activity than perception. Moreover, the perception condition produced
significant {\theta}-{\gamma} PAC suggesting a linkage of syllabic and
phonological sampling. Although this was found to be suppressed in the covert
condition, we found significant pseudo-coupling between perception {\theta} and
covert speech {\gamma}. We report that covert speech processing is largely
conducted by higher frequencies, and that the {\gamma}- and {\theta}-bands may
function similarly and differently across tasks, respectively. This study is
the first to characterize covert speech in terms of neural oscillatory
engagement. Future studies are directed to explore oscillatory characteristics
and inter-task relationships with a more diverse vocabulary.
| [
{
"created": "Sun, 6 Sep 2020 21:00:47 GMT",
"version": "v1"
}
] | 2020-09-08 | [
[
"Moon",
"Jae",
""
],
[
"Orlandi",
"Silvia",
""
],
[
"Chau",
"Tom",
""
]
] | Covert speech, the silent production of words in the mind, has been studied increasingly to understand and decode thoughts. This task has often been compared to speech perception as it brings about similar topographical activation patterns in common brain areas. In studies of speech comprehension, neural oscillations are thought to play a key role in the sampling of speech at varying temporal scales. However, very little is known about the role of oscillations in covert speech. In this study, we aimed to determine to what extent each oscillatory frequency band is used to process words in covert speech and speech perception tasks. Secondly, we asked whether the {\theta} and {\gamma} activity in the two tasks are related through phase-amplitude coupling (PAC). First, continuous wavelet transform was performed on epoched signals and subsequently two-tailed t-tests between two classes were conducted to determine statistical distinctions in frequency and time. While the perception task dynamically uses all frequencies with more prominent {\theta} and {\gamma} activity, the covert task favoured higher frequencies with significantly higher {\gamma} activity than perception. Moreover, the perception condition produced significant {\theta}-{\gamma} PAC suggesting a linkage of syllabic and phonological sampling. Although this was found to be suppressed in the covert condition, we found significant pseudo-coupling between perception {\theta} and covert speech {\gamma}. We report that covert speech processing is largely conducted by higher frequencies, and that the {\gamma}- and {\theta}-bands may function similarly and differently across tasks, respectively. This study is the first to characterize covert speech in terms of neural oscillatory engagement. Future studies are directed to explore oscillatory characteristics and inter-task relationships with a more diverse vocabulary. |
1912.08312 | Hayriye Gulbudak | Hayriye Gulbudak | An Immuno-Epidemiological Vector-Host Model with Within-Vector Viral
Kinetics | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A current challenge for disease modeling and public health is understanding
pathogen dynamics across scales since their ecology and evolution ultimately
operate on several coupled scales. This is particularly true for vector-borne
diseases, where within-vector, within-host, and between vector-host populations
all play crucial roles in diversity and distribution of the pathogen. Despite
recent modeling efforts to determine the effect of within-host virus-immune
response dynamics on between-host transmission, the role of within-vector viral
dynamics on disease spread is overlooked. Here we formulate an
age-since-infection structured epidemic model coupled to nonlinear ordinary
differential equations describing within-host immune-virus dynamics and
within-vector viral kinetics, with feedbacks across these scales. We first
define the \emph{within-host viral-immune response and within-vector viral
kinetics dependent} basic reproduction number $\mathcal R_0.$ Then we prove
that whenever $\mathcal R_0<1,$ the disease free equilibrium is locally
asymptotically stable, and under certain biologically interpretable conditions,
globally asymptotically stable. Otherwise if $\mathcal R_0>1,$ it is unstable
and the system has a unique positive endemic equilibrium. In the special case
of constant vector to host inoculum size, we show the positive equilibrium is
locally asymptotically stable and the disease is weakly uniformly persistent.
Furthermore numerical results suggest that within-vector-viral kinetics and
dynamic inoculum size may play a substantial role in epidemics. Finally, we
address how the model can be utilized to better predict the success of control
strategies such as vaccination and drug treatment.
| [
{
"created": "Tue, 17 Dec 2019 23:23:32 GMT",
"version": "v1"
}
] | 2019-12-19 | [
[
"Gulbudak",
"Hayriye",
""
]
] | A current challenge for disease modeling and public health is understanding pathogen dynamics across scales since their ecology and evolution ultimately operate on several coupled scales. This is particularly true for vector-borne diseases, where within-vector, within-host, and between vector-host populations all play crucial roles in diversity and distribution of the pathogen. Despite recent modeling efforts to determine the effect of within-host virus-immune response dynamics on between-host transmission, the role of within-vector viral dynamics on disease spread is overlooked. Here we formulate an age-since-infection structured epidemic model coupled to nonlinear ordinary differential equations describing within-host immune-virus dynamics and within-vector viral kinetics, with feedbacks across these scales. We first define the \emph{within-host viral-immune response and within-vector viral kinetics dependent} basic reproduction number $\mathcal R_0.$ Then we prove that whenever $\mathcal R_0<1,$ the disease free equilibrium is locally asymptotically stable, and under certain biologically interpretable conditions, globally asymptotically stable. Otherwise if $\mathcal R_0>1,$ it is unstable and the system has a unique positive endemic equilibrium. In the special case of constant vector to host inoculum size, we show the positive equilibrium is locally asymptotically stable and the disease is weakly uniformly persistent. Furthermore numerical results suggest that within-vector-viral kinetics and dynamic inoculum size may play a substantial role in epidemics. Finally, we address how the model can be utilized to better predict the success of control strategies such as vaccination and drug treatment. |
1809.06232 | Samad Noeiaghdam | Samad Noeiaghdam, Emran Khoshrouye Ghiasi | Solving a non-linear model of HIV infection for CD4+T cells by combining
Laplace transformation and Homotopy analysis method | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to find the approximate solution of HIV infection
model of CD4+T cells. For this reason, the homotopy analysis transform method
(HATM) is applied. The presented method is a combination of the traditional
homotopy analysis method (HAM) and the Laplace transformation. The convergence of
the presented method is discussed by preparing a theorem which shows the
capabilities of the method. The numerical results are shown for different values
of iterations. Also, the regions of convergence are demonstrated by plotting
several h-curves. Furthermore, in order to show the efficiency and accuracy of
the method, the residual errors for different iterations are presented.
| [
{
"created": "Thu, 13 Sep 2018 20:21:37 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jul 2019 20:08:14 GMT",
"version": "v2"
}
] | 2019-07-18 | [
[
"Noeiaghdam",
"Samad",
""
],
[
"Ghiasi",
"Emran Khoshrouye",
""
]
] | The aim of this paper is to find the approximate solution of HIV infection model of CD4+T cells. For this reason, the homotopy analysis transform method (HATM) is applied. The presented method is a combination of the traditional homotopy analysis method (HAM) and the Laplace transformation. The convergence of the presented method is discussed by preparing a theorem which shows the capabilities of the method. The numerical results are shown for different values of iterations. Also, the regions of convergence are demonstrated by plotting several h-curves. Furthermore, in order to show the efficiency and accuracy of the method, the residual errors for different iterations are presented. |
1809.05880 | Markus D Schirmer | Markus D. Schirmer and Ai Wern Chung | Structural subnetwork evolution across the life-span: rich-club, feeder,
seeder | null | null | 10.1007/978-3-030-00755-3_15 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The impact of developmental and aging processes on brain connectivity and the
connectome has been widely studied. Network theoretical measures and certain
topological principles are computed from the entire brain; however, there is a
need to separate and understand the underlying subnetworks which contribute
towards these observed holistic connectomic alterations. One organizational
principle is the rich-club - a core subnetwork of brain regions that are
strongly connected, forming a high-cost, high-capacity backbone that is
critical for effective communication in the network. Investigations primarily
focus on its alterations with disease and age. Here, we present a systematic
analysis of not only the rich-club, but also other subnetworks derived from
this backbone - namely feeder and seeder subnetworks. Our analysis is applied
to structural connectomes in a normal cohort from a large, publicly available
lifespan study. We demonstrate changes in rich-club membership with age
alongside a shift in importance from 'peripheral' seeder to feeder subnetworks.
Our results show a refinement within the rich-club structure (increase in
transitivity and betweenness centrality), as well as increased efficiency in
the feeder subnetwork and decreased measures of network integration and
segregation in the seeder subnetwork. These results demonstrate the different
developmental patterns when analyzing the connectome stratified according to
its rich-club and the potential of utilizing this subnetwork analysis to reveal
the evolution of brain architectural alterations across the life-span.
| [
{
"created": "Sun, 16 Sep 2018 14:19:59 GMT",
"version": "v1"
}
] | 2018-09-18 | [
[
"Schirmer",
"Markus D.",
""
],
[
"Chung",
"Ai Wern",
""
]
] | The impact of developmental and aging processes on brain connectivity and the connectome has been widely studied. Network theoretical measures and certain topological principles are computed from the entire brain; however, there is a need to separate and understand the underlying subnetworks which contribute towards these observed holistic connectomic alterations. One organizational principle is the rich-club - a core subnetwork of brain regions that are strongly connected, forming a high-cost, high-capacity backbone that is critical for effective communication in the network. Investigations primarily focus on its alterations with disease and age. Here, we present a systematic analysis of not only the rich-club, but also other subnetworks derived from this backbone - namely feeder and seeder subnetworks. Our analysis is applied to structural connectomes in a normal cohort from a large, publicly available lifespan study. We demonstrate changes in rich-club membership with age alongside a shift in importance from 'peripheral' seeder to feeder subnetworks. Our results show a refinement within the rich-club structure (increase in transitivity and betweenness centrality), as well as increased efficiency in the feeder subnetwork and decreased measures of network integration and segregation in the seeder subnetwork. These results demonstrate the different developmental patterns when analyzing the connectome stratified according to its rich-club and the potential of utilizing this subnetwork analysis to reveal the evolution of brain architectural alterations across the life-span. |
2005.05053 | Melikasadat Emami | Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Alyson K.
Fletcher, Sundeep Rangan, Michael Trumpis, Brinnae Bent, Chia-Han Chiang,
Jonathan Viventi | Low-Rank Nonlinear Decoding of $\mu$-ECoG from the Primary Auditory
Cortex | 4 pages, 3 figures | null | null | null | q-bio.NC cs.LG cs.NE eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of neural decoding from parallel neural
measurement systems such as micro-electrocorticography ($\mu$-ECoG). In
systems with large numbers of array elements at very high sampling rates, the
dimension of the raw measurement data may be large. Learning neural decoders
for this high-dimensional data can be challenging, particularly when the number
of training samples is limited. To address this challenge, this work presents a
novel neural network decoder with a low-rank structure in the first hidden
layer. The low-rank constraints dramatically reduce the number of parameters in
the decoder while still enabling a rich class of nonlinear decoder maps. The
low-rank decoder is illustrated on $\mu$-ECoG data from the primary auditory
cortex (A1) of awake rats. This decoding problem is particularly challenging
due to the complexity of neural responses in the auditory cortex and the
presence of confounding signals in awake animals. It is shown that the proposed
low-rank decoder significantly outperforms models using standard dimensionality
reduction techniques such as principal component analysis (PCA).
| [
{
"created": "Wed, 6 May 2020 05:51:08 GMT",
"version": "v1"
}
] | 2020-05-12 | [
[
"Emami",
"Melikasadat",
""
],
[
"Sahraee-Ardakan",
"Mojtaba",
""
],
[
"Pandit",
"Parthe",
""
],
[
"Fletcher",
"Alyson K.",
""
],
[
"Rangan",
"Sundeep",
""
],
[
"Trumpis",
"Michael",
""
],
[
"Bent",
"Brinnae",
""
],
[
"Chiang",
"Chia-Han",
""
],
[
"Viventi",
"Jonathan",
""
]
] | This paper considers the problem of neural decoding from parallel neural measurement systems such as micro-electrocorticography ($\mu$-ECoG). In systems with large numbers of array elements at very high sampling rates, the dimension of the raw measurement data may be large. Learning neural decoders for this high-dimensional data can be challenging, particularly when the number of training samples is limited. To address this challenge, this work presents a novel neural network decoder with a low-rank structure in the first hidden layer. The low-rank constraints dramatically reduce the number of parameters in the decoder while still enabling a rich class of nonlinear decoder maps. The low-rank decoder is illustrated on $\mu$-ECoG data from the primary auditory cortex (A1) of awake rats. This decoding problem is particularly challenging due to the complexity of neural responses in the auditory cortex and the presence of confounding signals in awake animals. It is shown that the proposed low-rank decoder significantly outperforms models using standard dimensionality reduction techniques such as principal component analysis (PCA). |
1607.00969 | Eleonora Russo | Eleonora Russo and Daniel Durstewitz | Cell assemblies at multiple time scales with arbitrary lag
constellations | null | eLife 2017;6:e19428 | 10.7554/eLife.19428 | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hebb's idea of a cell assembly as the fundamental unit of neural information
processing has dominated neuroscience like no other theoretical concept within
the past 60 years. A range of different physiological phenomena, from precisely
synchronized spiking to broadly simultaneous rate increases, has been subsumed
under this term. Yet progress in this area is hampered by the lack of
statistical tools that would enable the extraction of assemblies with arbitrary
constellations of time lags, and at multiple temporal scales, partly due to the
severe computational burden. Here we present such a unifying methodological and
conceptual framework which detects assembly structure at many different time
scales, levels of precision, and with arbitrary internal organization. Applying
this methodology to multiple single unit recordings from various cortical
areas, we find that there is no universal cortical coding scheme, but that
assembly structure and precision significantly depend on the brain area recorded
and ongoing task demands.
| [
{
"created": "Mon, 4 Jul 2016 17:35:35 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Feb 2017 14:24:54 GMT",
"version": "v2"
}
] | 2017-02-16 | [
[
"Russo",
"Eleonora",
""
],
[
"Durstewitz",
"Daniel",
""
]
] | Hebb's idea of a cell assembly as the fundamental unit of neural information processing has dominated neuroscience like no other theoretical concept within the past 60 years. A range of different physiological phenomena, from precisely synchronized spiking to broadly simultaneous rate increases, has been subsumed under this term. Yet progress in this area is hampered by the lack of statistical tools that would enable the extraction of assemblies with arbitrary constellations of time lags, and at multiple temporal scales, partly due to the severe computational burden. Here we present such a unifying methodological and conceptual framework which detects assembly structure at many different time scales, levels of precision, and with arbitrary internal organization. Applying this methodology to multiple single unit recordings from various cortical areas, we find that there is no universal cortical coding scheme, but that assembly structure and precision significantly depend on the brain area recorded and ongoing task demands. |
2008.09637 | Mario Berberan-Santos | Mario Berberan-Santos | Exact and approximate analytic solutions in the SIR epidemic model | 30 pages, 14 figures, 1 table | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, some new exact and approximate analytical solutions are
obtained for the SIR epidemic model, which is formulated in terms of
dimensionless variables and parameters. The susceptibles population (S) is in
this way explicitly related to the infectives population (I) using the Lambert
W function (both the principal and the secondary branches). A simple and
accurate relation for the fraction of the population that does not catch the
disease is also obtained. The explicit time dependences of the susceptibles,
infectives and removed populations, as well as that of the epidemic curve are
also modelled with good accuracy for any value of R0 (basic multiplication
number) using simple functions that are modified solutions of the R0 ->
infinity limiting case (logistic curve). It is also shown that for I0 << S0 the
effect of a change in the ratio I0/S0 on the population evolution curves
amounts to a time shift, their shape and relative position being unaffected.
| [
{
"created": "Fri, 21 Aug 2020 18:30:06 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Sep 2020 16:04:16 GMT",
"version": "v2"
}
] | 2020-09-04 | [
[
"Berberan-Santos",
"Mario",
""
]
] | In this work, some new exact and approximate analytical solutions are obtained for the SIR epidemic model, which is formulated in terms of dimensionless variables and parameters. The susceptibles population (S) is in this way explicitly related to the infectives population (I) using the Lambert W function (both the principal and the secondary branches). A simple and accurate relation for the fraction of the population that does not catch the disease is also obtained. The explicit time dependences of the susceptibles, infectives and removed populations, as well as that of the epidemic curve are also modelled with good accuracy for any value of R0 (basic multiplication number) using simple functions that are modified solutions of the R0 -> infinity limiting case (logistic curve). It is also shown that for I0 << S0 the effect of a change in the ratio I0/S0 on the population evolution curves amounts to a time shift, their shape and relative position being unaffected. |
1109.5410 | Sergei Nechaev | O. V. Valba, M. V. Tamm, S. K. Nechaev | New alphabet-dependent morphological transition in a random RNA
alignment | 4 pages, 3 figures (title is changed, text is essentially reworked),
accepted in PRL | null | 10.1103/PhysRevLett.109.018102 | null | q-bio.GN cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the fraction $f$ of nucleotides involved in the formation of a
cactus--like secondary structure of random heteropolymer RNA--like molecules.
In the low--temperature limit we study this fraction as a function of the
number $c$ of different nucleotide species. We show that, with changing $c$,
the secondary structures of random RNAs undergo a morphological transition:
$f(c)\to 1$ for $c \le c_{\rm cr}$ as the chain length $n$ goes to infinity,
signaling the formation of a virtually "perfect" gapless secondary structure;
while $f(c)<1$ for $c>c_{\rm cr}$, which means that a non-perfect structure with
gaps is formed. The strict upper and lower bounds $2 \le c_{\rm cr} \le 4$ are
proven, and the numerical evidence for $c_{\rm cr}$ is presented. The relevance
of the transition from the evolutionary point of view is discussed.
| [
{
"created": "Sun, 25 Sep 2011 22:37:29 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Nov 2011 19:36:45 GMT",
"version": "v2"
},
{
"created": "Tue, 22 May 2012 19:02:07 GMT",
"version": "v3"
}
] | 2013-05-30 | [
[
"Valba",
"O. V.",
""
],
[
"Tamm",
"M. V.",
""
],
[
"Nechaev",
"S. K.",
""
]
] | We study the fraction $f$ of nucleotides involved in the formation of a cactus--like secondary structure of random heteropolymer RNA--like molecules. In the low--temperature limit we study this fraction as a function of the number $c$ of different nucleotide species. We show, that with changing $c$, the secondary structures of random RNAs undergo a morphological transition: $f(c)\to 1$ for $c \le c_{\rm cr}$ as the chain length $n$ goes to infinity, signaling the formation of a virtually "perfect" gapless secondary structure; while $f(c)<1$ for $c>c_{\rm cr}$, what means that a non-perfect structure with gaps is formed. The strict upper and lower bounds $2 \le c_{\rm cr} \le 4$ are proven, and the numerical evidence for $c_{\rm cr}$ is presented. The relevance of the transition from the evolutional point of view is discussed. |
2207.14470 | Yannick Roy | Yannick Roy, Jocelyn Faubert | Significant changes in EEG neural oscillations during different phases
of three-dimensional multiple object tracking task (3D-MOT) imply different
roles for attention and working memory | null | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Our ability to track multiple objects in a dynamic environment enables us to
perform everyday tasks such as driving, playing team sports, and walking in a
crowded mall. Despite more than three decades of literature on multiple object
tracking (MOT) tasks, the underlying and intertwined neural mechanisms remain
poorly understood. Here we looked at the electroencephalography (EEG) neural
correlates and their changes across the three phases of a 3D-MOT task, namely
identification, tracking and recall. We recorded the EEG activity of 24
participants while they were performing a 3D-MOT task with either 1, 2 or 3
targets where some trials were lateralized and some were not. We observed what
seems to be a handoff between focused attention and working memory processes
when going from tracking to recall. Our findings revealed a strong inhibition
in delta and theta frequencies from the frontal region during tracking,
followed by a strong (re)activation of these same frequencies during recall.
Our results also showed contralateral delay activity (CDA) for the lateralized
trials, in both the identification and recall phases but not during tracking.
| [
{
"created": "Fri, 29 Jul 2022 04:16:46 GMT",
"version": "v1"
}
] | 2022-08-01 | [
[
"Roy",
"Yannick",
""
],
[
"Faubert",
"Jocelyn",
""
]
] | Our ability to track multiple objects in a dynamic environment enables us to perform everyday tasks such as driving, playing team sports, and walking in a crowded mall. Despite more than three decades of literature on multiple object tracking (MOT) tasks, the underlying and intertwined neural mechanisms remain poorly understood. Here we looked at the electroencephalography (EEG) neural correlates and their changes across the three phases of a 3D-MOT task, namely identification, tracking and recall. We recorded the EEG activity of 24 participants while they were performing a 3D-MOT task with either 1, 2 or 3 targets where some trials were lateralized and some were not. We observed what seems to be a handoff between focused attention and working memory processes when going from tracking to recall. Our findings revealed a strong inhibition in delta and theta frequencies from the frontal region during tracking, followed by a strong (re)activation of these same frequencies during recall. Our results also showed contralateral delay activity (CDA) for the lateralized trials, in both the identification and recall phases but not during tracking. |
0807.3064 | Kunihiko Kaneko | Kunihiko Kaneko | Relationship among Phenotypic Plasticity, Genetic and Epigenetic
Fluctuations, Robustness, and Evolvability; Waddington's Legacy revisited
under the Spirit of Einstein | submitted to J BioScience | null | null | null | q-bio.PE cond-mat.stat-mech nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Questions on the possible relationship between phenotypic plasticity and
evolvability, as well as that between robustness and evolution, have been
addressed over decades in the field of evolution-development. By introducing an
evolutionary stability assumption on the distribution of phenotype and
genotype, we establish quantitative relationships on plasticity, phenotypic
fluctuations, and evolvability. Derived are proportionality among plasticity as
a responsiveness of phenotype against environmental change, variances of
phenotype fluctuations of genetic and developmental origins, and evolution
speed. Confirmation of the relationships is given by numerical experiments of a
gene expression dynamics model with an evolving transcription network, whereas
verifications by laboratory evolution experiments are also discussed. These
results provide quantitative formulation on canalization and genetic
assimilation, in terms of fluctuations of gene expression levels.
| [
{
"created": "Sat, 19 Jul 2008 03:46:45 GMT",
"version": "v1"
}
] | 2008-07-22 | [
[
"Kaneko",
"Kunihiko",
""
]
] | Questions on possible relationship between phenotypic plasticity and evolvability, as well as that between robustness and evolution have been addressed over decades in the field of evolution-development. By introducing an evolutionary stability assumption on the distribution of phenotype and genotype, we establish quantitative relationships on plasticity, phenotypic fluctuations, and evolvability. Derived are proportionality among plasticity as a responsiveness of phenotype against environmental change, variances of phenotype fluctuations of genetic and developmental origins, and evolution speed. Confirmation of the relationships is given by numerical experiments of a gene expression dynamics model with an evolving transcription network, whereas verifications by laboratory evolution experiments are also discussed. These results provide quantitative formulation on canalization and genetic assimilation, in terms of fluctuations of gene expression levels. |
1301.2845 | Alex Volinsky | Alex A. Volinsky, Nikolai V. Gubarev, Galina M. Orlovskaya, Elena V.
Marchenko | Development stages of the "rope" human intestinal parasite | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the five development stages of the rope worm, which
could be a human parasite. Rope worms have been discovered as a result of
cleansing enemas. Thousands of people from all over the world have passed
rope worms. Adult stages live in the human gastro-intestinal tract and are
anaerobic. They move inside the body by releasing gas bubbles utilizing jet
propulsion. These worms look like a rope, and can be over a meter long. The
development stages were identified based on their morphology. The fifth stage
looks like a tough string of mucus about a meter long. The fourth stage looks
similar, but the rope worm is shorter and has a softer, slimier body. The third
stage looks like branched jellyfish. The second stage is viscous snot, or mucus
with visible gas bubbles that act as suction cups. The first stage is slimier
mucus with fewer bubbles, which can reside almost anywhere in the body. Rope
worms have a cellular structure, based on optical microscopy, DAPI staining and
DNA analysis; however, the data collected are not sufficient to identify the
species. Removal methods are also mentioned in the paper.
| [
{
"created": "Mon, 14 Jan 2013 02:23:31 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Oct 2014 19:47:17 GMT",
"version": "v2"
}
] | 2014-10-07 | [
[
"Volinsky",
"Alex A.",
""
],
[
"Gubarev",
"Nikolai V.",
""
],
[
"Orlovskaya",
"Galina M.",
""
],
[
"Marchenko",
"Elena V.",
""
]
] | This paper describes the five development stages of the rope worm, which could be a human parasite. Rope worms have been discovered as a result of cleansing enemas. Thousands of people from all over the world have passed rope worms. Adult stages live in the human gastro-intestinal tract and are anaerobic. They move inside the body by releasing gas bubbles utilizing jet propulsion. These worms look like a rope, and can be over a meter long. The development stages were identified based on their morphology. The fifth stage looks like a tough string of mucus about a meter long. The fourth stage looks similar, but the rope worm is shorter and has a softer, slimier body. The third stage looks like branched jellyfish. The second stage is viscous snot, or mucus with visible gas bubbles that act as suction cups. The first stage is slimier mucus with fewer bubbles, which can reside almost anywhere in the body. Rope worms have a cellular structure, based on optical microscopy, DAPI staining and DNA analysis; however, the data collected are not sufficient to identify the species. Removal methods are also mentioned in the paper. |
1208.2720 | Donald Cooper Ph.D. | Shinya Nakamura, Michael V. Baratta, Matthew B. Pomrenze, Samuel D.
Dolzani, Donald C. Cooper | High fidelity optogenetic control of individual prefrontal cortical
pyramidal neurons in vivo | 4 pages, 4 figures F1000Research article | F1000 Research 2012, 1:7 | 10.3410/f1000research.1-7.v1 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Precise spatial and temporal manipulation of neural activity in specific
genetically defined cell populations is now possible with the advent of
optogenetics. The emerging field of optogenetics consists of a set of
naturally-occurring and engineered light-sensitive membrane proteins that are
able to activate (e.g., channelrhodopsin-2, ChR2) or silence (e.g.,
halorhodopsin, NpHR) neural activity. Here we demonstrate the technique and the
feasibility of using novel adeno-associated viral (AAV) tools to activate
(AAV-CaMKll{\alpha}-ChR2-eYFP) or silence (AAV-CaMKll{\alpha}-eNpHR3.0-eYFP)
neural activity of rat prefrontal cortical prelimbic (PL) pyramidal neurons in
vivo. In vivo single unit extracellular recording of ChR2-transduced pyramidal
neurons showed that delivery of brief (10 ms) blue (473 nm) light-pulse trains
up to 20 Hz via a custom fiber optic-coupled recording electrode (optrode)
induced spiking with high fidelity at 20 Hz for the duration of recording (up
to two hours in some cases). To silence spontaneously active neurons we
transduced them with the NpHR construct and administered continuous green (532
nm) light to completely inhibit action potential activity for up to 10 seconds
with 100% fidelity in most cases. These versatile photosensitive tools combined
with optrode recording methods provide experimental control over activity of
genetically defined neurons and can be used to investigate the functional
relationship between neural activity and complex cognitive behavior.
| [
{
"created": "Mon, 13 Aug 2012 22:20:05 GMT",
"version": "v1"
}
] | 2012-08-15 | [
[
"Nakamura",
"Shinya",
""
],
[
"Baratta",
"Michael V.",
""
],
[
"Pomrenze",
"Matthew B.",
""
],
[
"Dolzani",
"Samuel D.",
""
],
[
"Cooper",
"Donald C.",
""
]
] | Precise spatial and temporal manipulation of neural activity in specific genetically defined cell populations is now possible with the advent of optogenetics. The emerging field of optogenetics consists of a set of naturally-occurring and engineered light-sensitive membrane proteins that are able to activate (e.g., channelrhodopsin-2, ChR2) or silence (e.g., halorhodopsin, NpHR) neural activity. Here we demonstrate the technique and the feasibility of using novel adeno-associated viral (AAV) tools to activate (AAV-CaMKll{\alpha}-ChR2-eYFP) or silence (AAV-CaMKll{\alpha}-eNpHR3.0-eYFP) neural activity of rat prefrontal cortical prelimbic (PL) pyramidal neurons in vivo. In vivo single unit extracellular recording of ChR2-transduced pyramidal neurons showed that delivery of brief (10 ms) blue (473 nm) light-pulse trains up to 20 Hz via a custom fiber optic-coupled recording electrode (optrode) induced spiking with high fidelity at 20 Hz for the duration of recording (up to two hours in some cases). To silence spontaneously active neurons we transduced them with the NpHR construct and administered continuous green (532 nm) light to completely inhibit action potential activity for up to 10 seconds with 100% fidelity in most cases. These versatile photosensitive tools combined with optrode recording methods provide experimental control over activity of genetically defined neurons and can be used to investigate the functional relationship between neural activity and complex cognitive behavior. |
0708.2594 | Michel Salzet | M. Salzet (NA) | Molecular Aspect of Annelid Neuroendocrine system | null | Invertebrate Neuropeptides and Hormones: Basic Knowledge and
Recent Advances, Transworld Research Network (Ed.) (2007) 19 | null | null | q-bio.NC | null | Hormonal processes, along with enzymatic processing similar to that
found in vertebrates, occur in annelids. Amino acid sequence determination of
annelid precursor gene products reveals the presence of the respective
peptides, which exhibit high sequence identity to their mammalian counterparts.
Furthermore, these neuropeptides exert physiological functions in annelids
similar to the ones found in vertebrates. In this respect, the high
conservation of these molecule families in the course of evolution reflects
their importance. Nevertheless, some neuropeptides specific to annelids or
invertebrates have also been found in these animals.
| [
{
"created": "Mon, 20 Aug 2007 07:20:36 GMT",
"version": "v1"
}
] | 2007-08-21 | [
[
"Salzet",
"M.",
"",
"NA"
]
] | Hormonal processes, along with enzymatic processing similar to that found in vertebrates, occur in annelids. Amino acid sequence determination of annelid precursor gene products reveals the presence of the respective peptides, which exhibit high sequence identity to their mammalian counterparts. Furthermore, these neuropeptides exert physiological functions in annelids similar to the ones found in vertebrates. In this respect, the high conservation of these molecule families in the course of evolution reflects their importance. Nevertheless, some neuropeptides specific to annelids or invertebrates have also been found in these animals. |
2107.03611 | Thomas P Quinn | Thomas P. Quinn | Stool Studies Don't Pass the Sniff Test: A Systematic Review of Human
Gut Microbiome Research Suggests Widespread Misuse of Machine Learning | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the machine learning culture, an independent test set is required for
proper model verification. Failures in model verification, including test set
omission and test set leakage, make it impossible to know whether or not a
trained model is fit for purpose. In this article, we present a systematic
review and quantitative analysis of human gut microbiome classification
studies, conducted to measure the frequency and impact of test set omission and
test set leakage on area under the receiver operating curve (AUC) reporting.
Among 102 articles included for analysis, we find that only 12% of studies
report a bona fide test set AUC, meaning that the published AUCs for 88% of
studies cannot be trusted at face value. Our findings cast serious doubt on the
general validity of research claiming that the gut microbiome has high
diagnostic or prognostic potential in human disease.
| [
{
"created": "Thu, 8 Jul 2021 05:23:47 GMT",
"version": "v1"
}
] | 2021-07-09 | [
[
"Quinn",
"Thomas P.",
""
]
] | In the machine learning culture, an independent test set is required for proper model verification. Failures in model verification, including test set omission and test set leakage, make it impossible to know whether or not a trained model is fit for purpose. In this article, we present a systematic review and quantitative analysis of human gut microbiome classification studies, conducted to measure the frequency and impact of test set omission and test set leakage on area under the receiver operating curve (AUC) reporting. Among 102 articles included for analysis, we find that only 12% of studies report a bona fide test set AUC, meaning that the published AUCs for 88% of studies cannot be trusted at face value. Our findings cast serious doubt on the general validity of research claiming that the gut microbiome has high diagnostic or prognostic potential in human disease. |
1909.13751 | Hritik Bansal | Shubham Kundal, Raunak Lohiya, Hritik Bansal, Shreya Johri, Varuni
Sarwal, Kushal Shah | Computational prediction of replication sites in DNA sequences using
complex number representation | 4 Figures, 1 Table. arXiv admin note: substantial text overlap with
arXiv:1701.00707 | null | null | null | q-bio.GN eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational prediction of origin of replication (ORI) has been of great
interest in bioinformatics and several methods including GC-skew,
auto-correlation etc. have been explored in the past. In this paper, we have
extended the auto-correlation method to predict ORI location with much higher
resolution for prokaryotes and eukaryotes, which can be very helpful in
experimental validation of the computational predictions. The proposed complex
correlation method (iCorr) converts the genome sequence into a sequence of
complex numbers by mapping the nucleotides to {+1,-1,+i,-i} instead of {+1,-1}
used in the auto-correlation method (here, i is square root of -1). Thus, the
iCorr method exploits the complete spatial information about the positions of
all the four nucleotides unlike the earlier auto-correlation method which uses
the positional information of only one nucleotide. Also, the earlier
auto-correlation method required visual inspection of the obtained graphs to
identify the location of origin of replication. The proposed iCorr method does
away with this need and is able to identify the origin location simply by
picking the peak in the iCorr graph.
| [
{
"created": "Fri, 27 Sep 2019 17:20:21 GMT",
"version": "v1"
}
] | 2019-10-01 | [
[
"Kundal",
"Shubham",
""
],
[
"Lohiya",
"Raunak",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Johri",
"Shreya",
""
],
[
"Sarwal",
"Varuni",
""
],
[
"Shah",
"Kushal",
""
]
] | Computational prediction of origin of replication (ORI) has been of great interest in bioinformatics and several methods including GC-skew, auto-correlation etc. have been explored in the past. In this paper, we have extended the auto-correlation method to predict ORI location with much higher resolution for prokaryotes and eukaryotes, which can be very helpful in experimental validation of the computational predictions. The proposed complex correlation method (iCorr) converts the genome sequence into a sequence of complex numbers by mapping the nucleotides to {+1,-1,+i,-i} instead of {+1,-1} used in the auto-correlation method (here, i is square root of -1). Thus, the iCorr method exploits the complete spatial information about the positions of all the four nucleotides unlike the earlier auto-correlation method which uses the positional information of only one nucleotide. Also, the earlier auto-correlation method required visual inspection of the obtained graphs to identify the location of origin of replication. The proposed iCorr method does away with this need and is able to identify the origin location simply by picking the peak in the iCorr graph. |
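The complex-number encoding described in this abstract can be sketched as follows. The specific nucleotide-to-{+1,-1,+i,-i} assignment and the plain autocorrelation used here are assumptions for illustration; the paper's exact iCorr statistic may differ.

```python
# Hypothetical nucleotide-to-complex assignment; the abstract only states
# that the four bases map onto {+1, -1, +i, -i}.
MAPPING = {"A": 1 + 0j, "T": -1 + 0j, "G": 1j, "C": -1j}

def to_complex(seq):
    """Encode a DNA string as complex numbers, keeping positional
    information about all four nucleotides at once."""
    return [MAPPING[b] for b in seq.upper()]

def autocorr(x, lag):
    """Complex correlation at a given lag: sum of x[n] * conj(x[n + lag])."""
    return sum(a * b.conjugate() for a, b in zip(x, x[lag:]))

def peak_lag(seq, max_lag):
    """Lag with the largest correlation magnitude -- the analogue of
    picking the peak in the iCorr graph."""
    x = to_complex(seq)
    return max(range(1, max_lag + 1), key=lambda k: abs(autocorr(x, k)))

# a periodic toy sequence produces its strongest correlation at its period
period = peak_lag("ATGC" * 50, 10)
```

Because each base has its own complex value, correlations distinguish positional structure of all four nucleotides, unlike a {+1,-1} encoding of a single nucleotide.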
q-bio/0512013 | William Bialek | Elad Schneidman, Michael J. Berry II, Ronen Segev and William Bialek | Weak pairwise correlations imply strongly correlated network states in a
neural population | Full account of work presented at the conference on Computational and
Systems Neuroscience (COSYNE), 17-20 March 2005, in Salt Lake City, Utah
(http://cosyne.org) | null | 10.1038/nature04701 | null | q-bio.NC q-bio.QM | null | Biological networks have so many possible states that exhaustive sampling is
impossible. Successful analysis thus depends on simplifying hypotheses, but
experiments on many systems hint that complicated, higher order interactions
among large groups of elements play an important role. In the vertebrate
retina, we show that weak correlations between pairs of neurons coexist with
strongly collective behavior in the responses of ten or more neurons.
Surprisingly, we find that this collective behavior is described quantitatively
by models that capture the observed pairwise correlations but assume no higher
order interactions. These maximum entropy models are equivalent to Ising
models, and predict that larger networks are completely dominated by
correlation effects. This suggests that the neural code has associative or
error-correcting properties, and we provide preliminary evidence for such
behavior. As a first test for the generality of these ideas, we show that
similar results are obtained from networks of cultured cortical neurons.
| [
{
"created": "Tue, 6 Dec 2005 19:09:22 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Schneidman",
"Elad",
""
],
[
"Berry",
"Michael J.",
"II"
],
[
"Segev",
"Ronen",
""
],
[
"Bialek",
"William",
""
]
] | Biological networks have so many possible states that exhaustive sampling is impossible. Successful analysis thus depends on simplifying hypotheses, but experiments on many systems hint that complicated, higher order interactions among large groups of elements play an important role. In the vertebrate retina, we show that weak correlations between pairs of neurons coexist with strongly collective behavior in the responses of ten or more neurons. Surprisingly, we find that this collective behavior is described quantitatively by models that capture the observed pairwise correlations but assume no higher order interactions. These maximum entropy models are equivalent to Ising models, and predict that larger networks are completely dominated by correlation effects. This suggests that the neural code has associative or error-correcting properties, and we provide preliminary evidence for such behavior. As a first test for the generality of these ideas, we show that similar results are obtained from networks of cultured cortical neurons. |
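The pairwise maximum entropy (Ising) modeling this abstract refers to can be illustrated at toy scale. The sketch below fits fields and couplings so that the model reproduces given first and second moments, by exact enumeration over three binary units; it is a minimal illustration under my own naming, not the authors' analysis pipeline.

```python
import itertools, math

def ising_moments(h, J, n):
    """Exact means <s_i> and pairwise moments <s_i s_j> of the Ising
    distribution P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)
    over s in {-1, +1}^n, by brute-force enumeration."""
    states = list(itertools.product([-1, 1], repeat=n))
    w = [math.exp(sum(h[i] * s[i] for i in range(n)) +
                  sum(J[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(i + 1, n)))
         for s in states]
    z = sum(w)
    m = [sum(w[k] * s[i] for k, s in enumerate(states)) / z for i in range(n)]
    c = [[sum(w[k] * s[i] * s[j] for k, s in enumerate(states)) / z
          for j in range(n)] for i in range(n)]
    return m, c

def fit_maxent(target_m, target_c, n, steps=5000, lr=0.1):
    """Gradient ascent on the log-likelihood: nudge fields and couplings
    until the model's first and second moments match the targets."""
    h = [0.0] * n
    J = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        m, c = ising_moments(h, J, n)
        for i in range(n):
            h[i] += lr * (target_m[i] - m[i])
            for j in range(i + 1, n):
                J[i][j] += lr * (target_c[i][j] - c[i][j])
    return h, J
```

A model fit this way captures the measured pairwise correlations while assuming no higher-order interactions, which is exactly the comparison the abstract describes.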
1510.09073 | Alex McAvoy | Alex McAvoy, Christoph Hauert | Autocratic strategies for iterated games with arbitrary action spaces | 22 pages; final version | Proceedings of the National Academy of Sciences vol. 113 no. 13,
3573-3578 (2016) | 10.1073/pnas.1520163113 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent discovery of zero-determinant strategies for the iterated
Prisoner's Dilemma sparked a surge of interest in the surprising fact that a
player can exert unilateral control over iterated interactions. These
remarkable strategies, however, are known to exist only in games in which
players choose between two alternative actions such as "cooperate" and
"defect." Here we introduce a broader class of autocratic strategies by
extending zero-determinant strategies to iterated games with more general
action spaces. We use the continuous Donation Game as an example, which
represents an instance of the Prisoner's Dilemma that intuitively extends to a
continuous range of cooperation levels. Surprisingly, despite the fact that the
opponent has infinitely many donation levels from which to choose, a player can
devise an autocratic strategy to enforce a linear relationship between his or
her payoff and that of the opponent even when restricting his or her actions to
merely two discrete levels of cooperation. In particular, a player can use such
a strategy to extort an unfair share of the payoffs from the opponent.
Therefore, although the action space of the continuous Donation Game dwarfs
that of the classical Prisoner's Dilemma, players can still devise relatively
simple autocratic and, in particular, extortionate strategies.
| [
{
"created": "Thu, 29 Oct 2015 01:12:45 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Nov 2015 13:14:04 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Apr 2016 17:17:18 GMT",
"version": "v3"
}
] | 2016-04-12 | [
[
"McAvoy",
"Alex",
""
],
[
"Hauert",
"Christoph",
""
]
] | The recent discovery of zero-determinant strategies for the iterated Prisoner's Dilemma sparked a surge of interest in the surprising fact that a player can exert unilateral control over iterated interactions. These remarkable strategies, however, are known to exist only in games in which players choose between two alternative actions such as "cooperate" and "defect." Here we introduce a broader class of autocratic strategies by extending zero-determinant strategies to iterated games with more general action spaces. We use the continuous Donation Game as an example, which represents an instance of the Prisoner's Dilemma that intuitively extends to a continuous range of cooperation levels. Surprisingly, despite the fact that the opponent has infinitely many donation levels from which to choose, a player can devise an autocratic strategy to enforce a linear relationship between his or her payoff and that of the opponent even when restricting his or her actions to merely two discrete levels of cooperation. In particular, a player can use such a strategy to extort an unfair share of the payoffs from the opponent. Therefore, although the action space of the continuous Donation Game dwarfs that of the classical Prisoner's Dilemma, players can still devise relatively simple autocratic and, in particular, extortionate strategies. |
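The zero-determinant strategies this abstract generalizes can be illustrated in the classical two-action setting. The sketch below numerically checks that Press and Dyson's chi = 3 extortion strategy enforces a linear payoff relation against a memory-one opponent; the payoff matrix is the standard (3, 0, 5, 1) Prisoner's Dilemma and the opponent strategy is an arbitrary illustrative choice.

```python
# Press & Dyson's chi = 3 extortion strategy for the iterated Prisoner's
# Dilemma with payoffs (R, S, T, P) = (3, 0, 5, 1); cooperation
# probabilities are conditioned on the last outcome (cc, cd, dc, dd).
P_EXTORT = [11 / 13, 1 / 2, 7 / 26, 0.0]

def stationary(p, q, iters=5000):
    """Stationary distribution over outcomes (cc, cd, dc, dd) for two
    memory-one strategies, by power iteration of the Markov chain."""
    swap = [0, 2, 1, 3]  # player Y sees outcome cd as dc and vice versa
    v = [0.25] * 4
    for _ in range(iters):
        nv = [0.0] * 4
        for s in range(4):
            px, py = p[s], q[swap[s]]
            nv[0] += v[s] * px * py
            nv[1] += v[s] * px * (1 - py)
            nv[2] += v[s] * (1 - px) * py
            nv[3] += v[s] * (1 - px) * (1 - py)
        v = nv
    return v

def payoffs(v):
    """Long-run payoffs (s_X, s_Y) under the stationary distribution."""
    sx = 3 * v[0] + 0 * v[1] + 5 * v[2] + 1 * v[3]
    sy = 3 * v[0] + 5 * v[1] + 0 * v[2] + 1 * v[3]
    return sx, sy

# against any memory-one opponent q, the extortioner pins the payoffs to
# (s_X - P) = 3 * (s_Y - P)
sx, sy = payoffs(stationary(P_EXTORT, [0.7, 0.2, 0.9, 0.4]))
```

The paper's contribution is to show that such enforceable linear relations survive when the two discrete actions are replaced by a continuum of cooperation levels.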
2407.17601 | Anindita Bhadra | Tuhin Subhra Pal, Srijaya Nandi, Rohan Sarkar, Anindita Bhadra | When Life Gives You Lemons, Squeeze Your Way Through: Understanding
Citrus Avoidance Behaviour by Free-Ranging Dogs in India | Includes supplementary information | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Palatability of food is driven by multiple factors like taste, smell,
texture, freshness, etc. and can be very variable across species. There are
classic examples of local adaptations leading to speciation, driven by food
availability. Urbanization across the world is causing rapid decline of
biodiversity, while also driving local adaptations in some species.
Free-ranging dogs are an interesting example of adaptation to a human-dominated
environment across varied habitats. They have co-existed with humans for
centuries and are a perfect model system for studying local adaptations. We
attempted to understand a specific aspect of their scavenging behaviour in
India: citrus aversion. Pet dogs are known to avoid citrus fruits and food
contaminated by them. In India, lemons are used widely in the cuisine, and
discarded in the garbage. Hence, free-ranging dogs, which typically are
scavengers of human leftovers, are likely to encounter lemons and
lemon-contaminated food on a regular basis. We carried out a population-level
experiment to test the response of free-ranging dogs to chicken contaminated with
various parts of lemon. The dogs avoided chicken contaminated with lemon juice
the most. Further, when provided with chicken dipped in three different
concentrations of lemon juice, the lowest concentration was most preferred. A
survey confirmed that the local people use lemon in their diet extensively and
also discard these with the leftovers. People avoided giving citrus
contaminated food to their pets but did not follow the same caution for
free-ranging dogs. This study revealed that free-ranging dogs in West Bengal,
India, are well adapted to scavenging among citrus-contaminated garbage and
have their own strategies to avoid the contamination as far as possible, while
maximizing their preferred food intake.
| [
{
"created": "Wed, 24 Jul 2024 19:16:07 GMT",
"version": "v1"
}
] | 2024-07-26 | [
[
"Pal",
"Tuhin Subhra",
""
],
[
"Nandi",
"Srijaya",
""
],
[
"Sarkar",
"Rohan",
""
],
[
"Bhadra",
"Anindita",
""
]
] | Palatability of food is driven by multiple factors like taste, smell, texture, freshness, etc. and can be very variable across species. There are classic examples of local adaptations leading to speciation, driven by food availability. Urbanization across the world is causing rapid decline of biodiversity, while also driving local adaptations in some species. Free-ranging dogs are an interesting example of adaptation to a human-dominated environment across varied habitats. They have co-existed with humans for centuries and are a perfect model system for studying local adaptations. We attempted to understand a specific aspect of their scavenging behaviour in India: citrus aversion. Pet dogs are known to avoid citrus fruits and food contaminated by them. In India, lemons are used widely in the cuisine, and discarded in the garbage. Hence, free-ranging dogs, that typically are scavengers of human leftovers, are likely to encounter lemons and lemon-contaminated food on a regular basis. We carried out a population level experiment to test response of free-ranging dogs to chicken contaminated with various parts of lemon. The dogs avoided chicken contaminated with lemon juice the most. Further, when provided with chicken dipped in three different concentrations of lemon juice, the lowest concentration was most preferred. A survey confirmed that the local people use lemon in their diet extensively and also discard these with the leftovers. People avoided giving citrus contaminated food to their pets but did not follow the same caution for free-ranging dogs. This study revealed that free-ranging dogs in West Bengal, India, are well adapted to scavenging among citrus-contaminated garbage and have their own strategies to avoid the contamination as far as possible, while maximizing their preferred food intake. |
2402.10212 | Lucie Pellissier | Caroline Gora (PRC), Ana Dudas (PRC), Lucas Court (PRC), Anil
Annamneedi (PRC), Ga\"elle Lefort (PRC), Thiago-Seike Picoreti-Nakahara
(PRC), Nicolas Azzopardi (PRC), Adrien Acquistapace (PRC), Anne-Lyse Lain\'e
(PRC, PRC), Anne-Charlotte Trouillet (PRC), Lucile Drobecq (PRC), Emmanuel
Pecnard (PRC), Benoit Piegu (PRC, PRC), Pascale Cr\'epieux (PRC, PRC), Pablo
Chamero (PRC), Lucie P. Pellissier (PRC) | Effect of the social environment on olfaction and social skills in WT
and mouse model of autism | null | null | null | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autism spectrum disorders are complex, polygenic and heterogeneous
neurodevelopmental conditions, imposing a substantial economic burden. Genetics
are influenced by the environment, specifically the social experience during
the critical neurodevelopmental period. Despite the efficacy of early
behavioral interventions targeting specific behaviors in some autistic
children, there is no sustainable treatment for the two core symptoms:
deficits in social interaction and communication, and stereotyped or
restricted behaviors or
interests. In this study, we investigated the impact of the social environment
on both wild-type (WT) and Shank3 knockout (KO) mice, a mouse model that
reproduces core autism-like symptoms. Our findings revealed that WT mice raised
in an enriched social environment maintained social interest towards new
conspecifics across multiple trials. Additionally, we observed that 2 hours or
chronic social isolation induced social deficits or enhanced social interaction
and olfactory neuron responses in WT animals, respectively. Notably, chronic
social isolation restored both social novelty and olfactory deficits, and
normalized self-grooming behavior in Shank3 KO mice. These results novel
insights for the implementation of behavioral intervention and inclusive
classrooms programs for children with ASD.
| [
{
"created": "Wed, 29 Nov 2023 16:46:15 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Gora",
"Caroline",
"",
"PRC"
],
[
"Dudas",
"Ana",
"",
"PRC"
],
[
"Court",
"Lucas",
"",
"PRC"
],
[
"Annamneedi",
"Anil",
"",
"PRC"
],
[
"Lefort",
"Gaëlle",
"",
"PRC"
],
[
"Picoreti-Nakahara",
"Thiago-Seike",
"",
"PRC"
],
[
"Azzopardi",
"Nicolas",
"",
"PRC"
],
[
"Acquistapace",
"Adrien",
"",
"PRC"
],
[
"Lainé",
"Anne-Lyse",
"",
"PRC, PRC"
],
[
"Trouillet",
"Anne-Charlotte",
"",
"PRC"
],
[
"Drobecq",
"Lucile",
"",
"PRC"
],
[
"Pecnard",
"Emmanuel",
"",
"PRC"
],
[
"Piegu",
"Benoit",
"",
"PRC, PRC"
],
[
"Crépieux",
"Pascale",
"",
"PRC, PRC"
],
[
"Chamero",
"Pablo",
"",
"PRC"
],
[
"Pellissier",
"Lucie P.",
"",
"PRC"
]
] | Autism spectrum disorders are complex, polygenic and heterogeneous neurodevelopmental conditions, imposing a substantial economic burden. Genetics are influenced by the environment, specifically the social experience during the critical neurodevelopmental period. Despite the efficacy of early behavioral interventions targeting specific behaviors in some autistic children, there is no sustainable treatment for the two core symptoms: deficits in social interaction and communication, and stereotyped or restricted behaviors or interests. In this study, we investigated the impact of the social environment on both wild-type (WT) and Shank3 knockout (KO) mice, a mouse model that reproduces core autism-like symptoms. Our findings revealed that WT mice raised in an enriched social environment maintained social interest towards new conspecifics across multiple trials. Additionally, we observed that 2 hours of social isolation induced social deficits in WT animals, whereas chronic social isolation enhanced their social interaction and olfactory neuron responses. Notably, chronic social isolation restored both social novelty and olfactory deficits, and normalized self-grooming behavior in Shank3 KO mice. These results provide novel insights for the implementation of behavioral intervention and inclusive classroom programs for children with ASD. |
1712.06150 | M. Reza Shaebani | M. Reza Shaebani, Aravind Pasula, Albrecht Ott, Ludger Santen | Tracking of plus-ends reveals microtubule functional diversity in
different cell types | 10 pages, 4 figures | Sci. Rep. 6, 30285, 2016 | 10.1038/srep30285 | null | q-bio.SC cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cellular processes are tightly connected to the dynamics of microtubules
(MTs). While in neuronal axons MTs mainly regulate intracellular trafficking,
they participate in cytoskeleton reorganization in many other eukaryotic cells,
enabling the cell to efficiently adapt to changes in the environment. We show
that the functional differences of MTs in different cell types and regions are
reflected in the dynamic properties of MT tips. Using the plus-end tracking
protein EB1 to monitor growing MT plus-ends, we show that MT dynamics and life
cycle in axons of human neurons significantly differ from those of fibroblast
cells. The density of plus-ends, as well as the rescue and catastrophe
frequencies increase while the growth rate decreases toward the fibroblast cell
margin. This results in a rather stable filamentous network structure and
maintains the connection between nucleus and membrane. In contrast, plus-ends
are uniformly distributed along the axons and exhibit diverse polymerization
run times and spatially homogeneous rescue and catastrophe frequencies, leading
to MT segments of various lengths. The probability distributions of the
excursion length of polymerization and the MT length both follow nearly
exponential tails, in agreement with the analytical predictions of a two-state
model of MT dynamics.
| [
{
"created": "Sun, 17 Dec 2017 17:42:39 GMT",
"version": "v1"
}
] | 2017-12-19 | [
[
"Shaebani",
"M. Reza",
""
],
[
"Pasula",
"Aravind",
""
],
[
"Ott",
"Albrecht",
""
],
[
"Santen",
"Ludger",
""
]
] | Many cellular processes are tightly connected to the dynamics of microtubules (MTs). While in neuronal axons MTs mainly regulate intracellular trafficking, they participate in cytoskeleton reorganization in many other eukaryotic cells, enabling the cell to efficiently adapt to changes in the environment. We show that the functional differences of MTs in different cell types and regions are reflected in the dynamic properties of MT tips. Using the plus-end tracking protein EB1 to monitor growing MT plus-ends, we show that MT dynamics and life cycle in axons of human neurons significantly differ from those of fibroblast cells. The density of plus-ends, as well as the rescue and catastrophe frequencies increase while the growth rate decreases toward the fibroblast cell margin. This results in a rather stable filamentous network structure and maintains the connection between nucleus and membrane. In contrast, plus-ends are uniformly distributed along the axons and exhibit diverse polymerization run times and spatially homogeneous rescue and catastrophe frequencies, leading to MT segments of various lengths. The probability distributions of the excursion length of polymerization and the MT length both follow nearly exponential tails, in agreement with the analytical predictions of a two-state model of MT dynamics. |
2301.06194 | Md Masud Rana | Md Masud Rana and Duc Duy Nguyen | Geometric Graph Learning with Extended Atom-Types Features for
Protein-Ligand Binding Affinity Prediction | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Understanding and accurately predicting protein-ligand binding affinity are
essential in the drug design and discovery process. At present, machine
learning-based methodologies are gaining popularity as a means of predicting
binding affinity due to their efficiency and accuracy, as well as the
increasing availability of structural and binding affinity data for
protein-ligand complexes. In biomolecular studies, graph theory has been widely
applied since graphs can be used to model molecules or molecular complexes in a
natural manner. In the present work, we upgrade the graph-based learners for
the study of protein-ligand interactions by integrating extensive atom types
such as SYBYL and extended connectivity interactive features (ECIF) into
multiscale weighted colored graphs (MWCG). By pairing with the gradient
boosting decision tree (GBDT) machine learning algorithm, our approach results
in two different methods, namely $^\text{sybyl}\text{GGL}$-Score and
$^\text{ecif}\text{GGL}$-Score. Both of our models are extensively validated in
their scoring power using three commonly used benchmark datasets in the drug
design area, namely CASF-2007, CASF-2013, and CASF-2016. The performance of our
best model $^\text{sybyl}\text{GGL}$-Score is compared with other
state-of-the-art models in the binding affinity prediction for each benchmark.
While both of our models achieve state-of-the-art results, the SYBYL atom-type
model $^\text{sybyl}\text{GGL}$-Score outperforms other methods by a wide
margin in all benchmarks.
| [
{
"created": "Sun, 15 Jan 2023 21:30:21 GMT",
"version": "v1"
}
] | 2023-01-18 | [
[
"Rana",
"Md Masud",
""
],
[
"Nguyen",
"Duc Duy",
""
]
] | Understanding and accurately predicting protein-ligand binding affinity are essential in the drug design and discovery process. At present, machine learning-based methodologies are gaining popularity as a means of predicting binding affinity due to their efficiency and accuracy, as well as the increasing availability of structural and binding affinity data for protein-ligand complexes. In biomolecular studies, graph theory has been widely applied since graphs can be used to model molecules or molecular complexes in a natural manner. In the present work, we upgrade the graph-based learners for the study of protein-ligand interactions by integrating extensive atom types such as SYBYL and extended connectivity interactive features (ECIF) into multiscale weighted colored graphs (MWCG). By pairing with the gradient boosting decision tree (GBDT) machine learning algorithm, our approach results in two different methods, namely $^\text{sybyl}\text{GGL}$-Score and $^\text{ecif}\text{GGL}$-Score. Both of our models are extensively validated in their scoring power using three commonly used benchmark datasets in the drug design area, namely CASF-2007, CASF-2013, and CASF-2016. The performance of our best model $^\text{sybyl}\text{GGL}$-Score is compared with other state-of-the-art models in the binding affinity prediction for each benchmark. While both of our models achieve state-of-the-art results, the SYBYL atom-type model $^\text{sybyl}\text{GGL}$-Score outperforms other methods by a wide margin in all benchmarks. |
1507.07774 | Karl Wienand | Karl Wienand, Matthias Lechner, Felix Becker, Heinrich Jung, Erwin
Frey | Non-selective evolution of growing populations | 9 pages, 4 figures, and 8 pages supplementary information (with 3
figures and 2 tables) | PLOS ONE 2015 10(8): e0134300 | 10.1371/journal.pone.0134300 | LMU-ASC 48/15 | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Non-selective effects, like genetic drift, are an important factor in modern
conceptions of evolution, and have been extensively studied for constant
population sizes. Here, we consider non-selective evolution in the case of
growing populations that are of small size and have varying trait compositions
(e.g. after a population bottleneck). We find that, in these conditions,
populations never fixate to a trait, but tend to a random limit composition,
and that the distribution of compositions 'freezes' to a steady state. This
final state is crucially influenced by the initial conditions. We obtain these
findings from a combined theoretical and experimental approach, using multiple
mixed subpopulations of two Pseudomonas putida strains in non-selective growth
conditions as a model system. The experimental results for the population
dynamics match the theoretical predictions based on the P\'olya urn model for
all analyzed parameter regimes. In summary, we show that exponential growth
stops genetic drift. This result contrasts with previous theoretical analyses
of non-selective evolution (e.g. genetic drift), which investigated how traits
spread and eventually take over populations (fixate). Moreover, our work
highlights how deeply growth influences non-selective evolution, and how it
plays a key role in maintaining genetic variability. Consequently, it is of
particular importance in life-cycle models of periodically shrinking and
expanding populations.
| [
{
"created": "Tue, 28 Jul 2015 14:05:33 GMT",
"version": "v1"
}
] | 2017-09-04 | [
[
"Wienand",
"Karl",
""
],
[
"Lechner",
"Matthias",
""
],
[
"Becker",
"Felix",
""
],
[
"Jung",
"Heinrich",
""
],
[
"Frey",
"Erwin",
""
]
] | Non-selective effects, like genetic drift, are an important factor in modern conceptions of evolution, and have been extensively studied for constant population sizes. Here, we consider non-selective evolution in the case of growing populations that are of small size and have varying trait compositions (e.g. after a population bottleneck). We find that, in these conditions, populations never fixate to a trait, but tend to a random limit composition, and that the distribution of compositions 'freezes' to a steady state. This final state is crucially influenced by the initial conditions. We obtain these findings from a combined theoretical and experimental approach, using multiple mixed subpopulations of two Pseudomonas putida strains in non-selective growth conditions as a model system. The experimental results for the population dynamics match the theoretical predictions based on the P\'olya urn model for all analyzed parameter regimes. In summary, we show that exponential growth stops genetic drift. This result contrasts with previous theoretical analyses of non-selective evolution (e.g. genetic drift), which investigated how traits spread and eventually take over populations (fixate). Moreover, our work highlights how deeply growth influences non-selective evolution, and how it plays a key role in maintaining genetic variability. Consequently, it is of particular importance in life-cycle models of periodically shrinking and expanding populations. |
2012.09844 | Kyle Crocker | Kyle Crocker, James London, Andr\'es Medina, Richard Fishel, and Ralf
Bundschuh | Potential evolutionary advantage of a dissociative search mechanism in
DNA mismatch repair | null | Phys. Rev. E 103, 052404 (2021) | 10.1103/PhysRevE.103.052404 | null | q-bio.SC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein complexes involved in DNA mismatch repair appear to diffuse along
dsDNA in order to locate a hemimethylated incision site via a dissociative
mechanism. Here, we study the probability that these complexes locate a given
target site via a semi-analytic, Monte Carlo calculation that tracks the
association and dissociation of the complexes. We compare such probabilities to
those obtained using a non-dissociative diffusive scan, and determine that for
experimentally observed diffusion constants, search distances, and search
durations $\textit{in vitro}$, there is neither a significant advantage nor
disadvantage associated with the dissociative mechanism in terms of probability
of successful search, and that both search mechanisms are highly efficient for
a majority of hemimethylated site distances. Furthermore, we examine the space
of physically realistic diffusion constants, hemimethylated site distances, and
association lifetimes and determine the regions in which dissociative searching
is more or less efficient than non-dissociative searching. We conclude that the
dissociative search mechanism is advantageous in the majority of the physically
realistic parameter space.
| [
{
"created": "Thu, 17 Dec 2020 18:58:05 GMT",
"version": "v1"
}
] | 2021-05-12 | [
[
"Crocker",
"Kyle",
""
],
[
"London",
"James",
""
],
[
"Medina",
"Andrés",
""
],
[
"Fishel",
"Richard",
""
],
[
"Bundschuh",
"Ralf",
""
]
] | Protein complexes involved in DNA mismatch repair appear to diffuse along dsDNA in order to locate a hemimethylated incision site via a dissociative mechanism. Here, we study the probability that these complexes locate a given target site via a semi-analytic, Monte Carlo calculation that tracks the association and dissociation of the complexes. We compare such probabilities to those obtained using a non-dissociative diffusive scan, and determine that for experimentally observed diffusion constants, search distances, and search durations $\textit{in vitro}$, there is neither a significant advantage nor disadvantage associated with the dissociative mechanism in terms of probability of successful search, and that both search mechanisms are highly efficient for a majority of hemimethylated site distances. Furthermore, we examine the space of physically realistic diffusion constants, hemimethylated site distances, and association lifetimes and determine the regions in which dissociative searching is more or less efficient than non-dissociative searching. We conclude that the dissociative search mechanism is advantageous in the majority of the physically realistic parameter space. |